• 0 Posts
  • 49 Comments
Joined 1 month ago
Cake day: February 19th, 2026

  • Did they try to anonymise its origin or something? O.o

    My theory is the background sucked so they blacked it out, but it may also have gone through an overzealous AI tool on a phone that redrew the existing words to make them “clearer”
    (TL;DR: “AI upscale” is the term I was looking for)

    Edit: It may have been an AI upscale of a blurry photo. Here are the results of a bad screenshot run through my phone’s upscaler:



  • Can confirm what another user said, that Intel iGPU would be better in your case.

    I’ll let you know now: if it runs Windows, kill it. My server originally ran Windows with Docker Desktop, hosting three services: a Minecraft server that lagged like a bitch, a Samba folder share, and Emby. Whenever Emby playback froze I knew Windows had fucked the i3-6100 up to 100%, helped along by the antivirus keeping the HDD under constant load; that happened at least twice a day.

    Moving on: now I run Proxmox, hosting 25 services with the CPU idling around 35% and 24GB of RAM at 75%. Nothing lags.

    Before I plugged in the GPU my server drew a consistent 25W, rising to 35W under load. With the GPU, a used RTX 3060 12GB, it draws 85W idle, so make sure it’s worth it. In my case it not only transcodes for Emby (resuming streams within a second), but also handles voice inference for Home Assistant in under a second, plus mid-sized Ollama LLM responses. I’d recommend a high-VRAM Nvidia card (for CUDA) in that scenario; my model, Gemma3 7B, uses 6GB VRAM and 2GB RAM. But a top model, say Dolphin-Mixtral 22B, needs 80GB storage, 17GB RAM and… Well, I don’t have the RAM, but you get it. LLMs are intensive.
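
    Those memory numbers can be sanity-checked with napkin math: parameter count × bytes per weight, plus runtime overhead for the KV cache and buffers. A rough sketch; the function and the 4-bit / 20%-overhead figures are my own illustrative assumptions, not Ollama’s actual accounting:

    ```python
    def approx_model_gb(params_billion, bits_per_weight=4, overhead=1.2):
        """Back-of-the-envelope memory footprint (GB) for a quantised model.

        Assumes weights dominate memory use; the overhead factor is a loose
        stand-in for KV cache and runtime buffers, not a measured value.
        """
        bytes_per_weight = bits_per_weight / 8
        return params_billion * bytes_per_weight * overhead

    # A 7B model at 4-bit lands in the right ballpark for the ~6GB VRAM
    # + 2GB RAM split above; a 22B model clearly needs a lot more.
    print(f"7B  @ 4-bit:  ~{approx_model_gb(7):.1f} GB")
    print(f"22B @ 4-bit:  ~{approx_model_gb(22):.1f} GB")
    print(f"22B @ 16-bit: ~{approx_model_gb(22, bits_per_weight=16):.1f} GB")
    ```

    The point being: quantisation helps a lot, but parameter count wins in the end, which is why a mid-sized model fits a 12GB card while the big ones spill into system RAM or don’t fit at all.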