A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.
Rules:
Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.
No spam posting.
Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it’s not obvious why your post topic revolves around self-hosting, please include details to make it clear.
Don’t duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).
No trolling.
Resources:
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
YouTube is usually the first thing I open on the first boot of a new machine. That way I know the sound, network, and video drivers are all working at once.
What do you check two hours later?
YouTube
And after that?
Redtube
Honestly it feels like that’s necessary, with how much YouTube jitters on my gaming rig. At least until I remember that YouTube runs like shit on every machine.
YouTube is by far the slowest website I visit, it’s so bloated.
Are you maybe using Windows? I’ve heard it can be slow sometimes, even on modern hardware.
Nope, but yes, it was even worse while I still had Windows.
Probably Firefox. Firefox handles video like shit.
YouTube on Firefox is a complete no-go as far as I’ve tried. And I’m not sure how much Firefox is to blame, since other video players work just fine on it.
But no, I’m using Opera or other Chromium browsers for YouTube nowadays. Still jitters from time to time.
Firefox doesn’t use hardware acceleration on Linux I think?
I have a smart TV but it’s kinda old, so when I want to watch a bit of YouTube without ads I connect my Steam Deck and open it in Firefox. The ad-free experience is well worth the time it takes.
Check out SmartTube Beta if you can hook up a Chromecast or similar.
I think I will start doing that. Just dock it and use KDE on my phone to wirelessly control it, I’m thinking.
Me with my 3090 pc playing only Stardew Valley: 🗿
Started playing this for the first time today on an RG35xxSP.
That 3090 is living a stress free life
This is such a bad setup for self-hosting.
I just upgraded from a Core 2 Duo with 2GB DDR2 to a 7th gen i3 with 8GB DDR4, and for the first time in my life an actual GPU (Nvidia K620).
That’s crazy! If you don’t mind me asking, what do you typically do on your machine and how much time do you spend on it?
I’m just curious because I spend a lot of time on my PC and can only imagine how horrible all the stuff I do would be on that hardware lol
I have only used it for doomscrolling and watching videos so far. I spend at least 12 hours on it; I’m disabled and live in the third world, so I don’t have a lot going on in my life lol.
I gotcha, thanks for sharing. It’s cool hearing from people who are so far away!
Heh, I did something similar for my dad. He went from 2x Core 2 Quad with 24GB DDR2 to a 12th gen i5 with 32GB DDR5. Something like triple the compute power, at under $500, when he paid ~$5k for the original.
Got a new system and I’m playing Diablo 2 and Balatro on it…
That 4x animation speed is worth it.
Latest $5000 MacBook Pro with the hottest, absolute state-of-the-art Apple Silicon chip. 48GB of the most expensive memory on the market, 2TB SSD.
Run everything in a terminal.
I only watch free tube in 12k.
Did not buy the 9800X3D
Me with a 7900 XTX playing Brighter Shores 🥲
I really wanted to like that game… It tries so hard, but I just couldn’t.
Yeah? What wasn’t clicking for you? I love it
Feels like a mobile game, and it felt rather unpolished and buggy (I played on launch day).
Hmm… There’s been a lot of quality of life patches (key binds, esc to close interfaces, clicking outside of interfaces closes them, smarter quantities on the withdraw screen, the option to have left click do a “default action” rather than opening the window, middle click drag, etc). He was pushing out changes every day for like two weeks, then weekly patches.
I haven’t really seen anything I’d call a bug (it’s actually one of the most stable games I’ve ever played).
It’s definitely a true early access game (and they’ve said as much; they’re open to a lot of potential changes and have been quite receptive to feedback with strong consensus), so I’d definitely check back from time to time if you like it in concept. They’re talking about adding action queuing and reworking the combat to feel “better” in the near term. Player trading and PvP duels should come soon after as well along with a bunch of other stuff.
The game is designed to be friendly to touch screens and they do plan to have a mobile client eventually (similar to RuneScape). However, they have said they will not add any microtransactions or other predatory stuff … and I believe them; the Gowers have been quite principled about that over the years.
On the opposite end, what is the cheapest device that you could watch YT on? I’m thinking one of those retro game consoles, which are like $60, run Linux, and have WiFi.
Runs flawlessly on my Raspberry Pi 4 (2GB RAM, bought new for 28€)
*requires a keyboard and screen
I used to be able to watch YT on a 30-buck Android TV device on which I installed CoreELEC.
Sadly, the YouTube apps on there stopped working for me a while ago due to the war on ad blockers. But the device was perfectly capable of playing YouTube.
I suppose that with TubeArchivist and Jellyfin you could still somehow watch YouTube.
I’ll sell you my old phone for $10.
my 2013 ThinkPad plays YouTube just fine : )
I feel like building a top of the line PC cost less than half this just one generation ago. What on earth happened?
Edit: maybe the specs in the comic are just crazy.
The SSD is what’s jacking up the price so much.
I built a similar PC in 2022: Ryzen 7700X, 4090, 32GB of DDR5-6000, 4TB NVMe and 6TB HDD; it was $4400 including tax. If you spec the same PC today, it’s under $3K now.
TBF I misread the storage originally and went with all SSD, so the price should be about a thousand less. Here’s the part list for anyone who wants to check my work: https://newegg.io/6d2b327
You could easily save $138.99 by using Linux.
Prices for all parts soared during corona
AI heavily increased GPU prices
“AI heavily increased GPU prices”: Nvidia, one of the greediest companies around, increased the prices of GPUs. It’s not necessarily AI’s fault, because Nvidia is in direct control of their card prices.
Take a look at Intel’s new GPUs, they are actually priced in a way that isn’t fist fucking the consumer.
The specs in the comic are just crazy. The top of the line option has expanded a lot too. In the past Nvidia wouldn’t have bothered making a 4090 because the common belief was nobody would pay that much for a GPU… But seemingly enough people are willing to do it that it’s worth doing now.
AMD also revived CPUs in desktop PCs from extreme stagnation and raised the bar for the high end on that side as well by a lot.
So it’s a mix of inflation and the ceiling just being raised as to what the average consumer is offered.
Is Newegg back to being a good site to buy from? Felt like they got pretty crappy for a while.
I still order from them, although it’s definitely gotten worse and the website feels like it was designed exclusively by the marketing team and high-up executives instead of web engineers.
Honestly, I typically gravitate towards Amazon over Newegg these days, but Newegg does have a lot more options for computer hardware, so I still order from them. I just ordered an ITX motherboard and SFX power supply from Newegg yesterday actually, because Amazon didn’t have the products I wanted.
Just built a server and a PC and they were fine. I had a couple of delivery issues and they sent replacements no questions asked. Even ended up with an extra 4 TB HDD because UPS fucked up, they sent me another, then UPS came by with the original.
Only issue now is a backordered SSD that they keep pushing the release date back on, but $180 for a 2TB NVMe PCIe Gen5 x4 SSD is worth the wait, I think.
I’m self hosting LLMs for family use (cause screw OpenAI and corporate, closed AI), and I am dying for more VRAM and RAM now. Even if I had a 4090, it wouldn’t be nearly enough.
My 3090 is sitting at 23.9GB/24GB because I keep Qwen 32B QwQ loaded and use it all the time. I even have my display hooked up to my IGP to save VRAM.
Seriously looking at replacing my 7800X3D with Strix Halo when it comes out, maybe a 128GB board if they sell one. Or a 48GB Intel Arc if Intel is smart enough to sell that. And I would use every last megabyte, even if I had a 512GB board (which is the bare minimum to host Deepseek V3).
I don’t know how the pricing is, but maybe it’s worth building a separate server with a second-hand TPU. Used server CPUs and RAM are apparently quite affordable in the US (assuming you live there), so maybe that’s the case for TPUs as well. And commercial GPUs/TPUs have more VRAM.
From where? I keep a look out for used Gaudi/TPU setups, but they’re like impossible to find, and usually in huge full-server configs. I can’t find Xeon Max GPUs or CPUs either.
Also, Google’s software stack isn’t really accessible. TPUs are made for internal use at Google, not for resale.
You can find used AMD MI100s or MI210s, sometimes, but the go-to used server card is still the venerable Tesla P40.
Aren’t LLMs external algorithms at this point? As in, all the data will not fit in RAM.
No, all the weights, all the “data”, essentially have to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet, but making a pass through all the weights every time a word is generated.
There are systems to augment the prompt with external data (RAG is one term for this), but fundamentally the system is closed.
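For illustration, here’s a minimal sketch of what “generating a word” looks like, assuming the Hugging Face transformers library and a small stand-in model (the model ID below is just a placeholder; any causal LM behaves the same way): every new token requires a full forward pass through all of the loaded weights, which is why the whole model has to fit in RAM/VRAM.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; the point is that generation never leaves the loaded weights.
model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tok("Why do the weights have to stay loaded?", return_tensors="pt").to("cuda")
# Each of the 50 new tokens below triggers one pass through every weight tensor.
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```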
No, I’m talking about https://en.m.wikipedia.org/wiki/External_memory_algorithm
Unrelated to RAG.
Unfortunately that’s not really relevant to LLMs beyond inserting things into the text you feed them. For every single word they predict, they make a pass through the multi-gigabyte weights. It’s largely memory-bound, and not integrated with any kind of sane external memory algorithm.
There are some techniques that muddy this a bit, like MoE and dynamic LoRA loading, but the principle is the same.
Yeah, I’ve had decent results running the 7B/8B models, particularly the ones fine-tuned for specific use cases. But as ya mentioned, they’re only really good in their scope for a single prompt or maybe a few follow-ups. I’ve seen little improvement with the 13B/14B models and find them mostly not worth the performance hit.
Depends which 14B. Arcee’s 14B SuperNova Medius model (which is a Qwen 2.5 with some training distilled from larger models) is really incredible, but old Llama 2-based 13B models are awful.
I’ll try it out! It’s been a hot minute, and it seems like there are new options all the time.
Try a new quantization as well! Like an IQ4-M depending on the size of your GPU, or even better, a 4.5bpw exl2 with Q6 cache if you can manage to set up TabbyAPI.
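If you want to grab a ready-made GGUF quant to try, a quick sketch with huggingface_hub (the repo ID and filename below are hypothetical examples; substitute whichever quant repo you actually use):

```python
from huggingface_hub import hf_hub_download

# Hypothetical repo/filename; pick an IQ4_M (or similar) file sized for your GPU.
path = hf_hub_download(
    repo_id="bartowski/SuperNova-Medius-GGUF",
    filename="SuperNova-Medius-IQ4_M.gguf",
)
print(path)  # local path you can point llama.cpp (or similar) at
```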
I know it’s a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is you can think of the VRAM like an aperture, and less VRAM closes off how much you can do in a single pass.
Not running any LLMs, but I do a lot of mathematical modelling, and my 32 GB RAM, M1 Pro MacBook is compiling code and crunching numbers like an absolute champ! After about a year, most of my colleagues ditched their old laptops for a MacBook themselves after just noticing that my machine out-performed theirs every day, and that it saved me a bunch of time day-to-day.
Of course, be a bit careful when buying one: Apple cranks up the price like hell if you start speccing out the machine a lot. Especially for RAM.
Yeah, I got mine refurbished also, so someone else took the first hit on driving it off the lot (and waiting for it to be built). I guess they didn’t use it to its full extent. That didn’t make it “cheap” though.
You can always use system memory too. Not exactly UMA, but close enough.
Or just use iGPU.
It fails whenever it exceeds the VRAM capacity; I’ve not been able to get it to spill over to system memory.
You don’t want it to anyway, as “automatic” spillover with an LLM is painfully slow.
The RAM/VRAM split is manually configurable in llama.cpp, but if you have at least 10GB VRAM, generally you want to keep the whole model within that.
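For example, with the llama-cpp-python bindings (the model path and numbers below are placeholders, not a recommendation), n_gpu_layers is the knob that sets the split: -1 offloads every layer to VRAM, and a smaller number leaves the remaining layers in system RAM.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-instruct-IQ4_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # -1 = offload all layers; lower this if you run out of VRAM
    n_ctx=8192,
)

out = llm("Q: Why keep the whole model in VRAM?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```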
Oh, I meant for image generation on a 4080; for LLM work I have the 64GB of the Mac available.
Oh, 16GB should be plenty for SDXL.
For Flux, I actually use a script that quantizes it down to 8-bit (not FP8, but true quantization with Hugging Face quanto), but I would also highly recommend checking this project out. It should fit everything in VRAM and be dramatically faster: https://github.com/mit-han-lab/nunchaku
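A minimal sketch of that kind of quanto quantization with diffusers (the model ID, prompt, and step count are placeholder assumptions, not the exact script referenced above):

```python
import torch
from diffusers import FluxPipeline
from optimum.quanto import quantize, freeze, qint8

# Load Flux in bf16, then quantize just the transformer weights to int8.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
quantize(pipe.transformer, weights=qint8)  # true int8 weight quantization, not FP8
freeze(pipe.transformer)
pipe.to("cuda")

image = pipe("a tidy homelab rack, cinematic lighting", num_inference_steps=28).images[0]
image.save("flux_int8.png")
```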
I just run SD1.5 models; my process involves a lot of upscaling since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves and halves again the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL’s general capabilities, but I really rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me; don’t know what you kids are doing with your fancy pony diffusions 😃)
The issue with Macs is that Apple does price gouge for memory, your software stack is effectively limited to llama.cpp or MLX, and 70B-class LLMs do start to chug, especially at high context.
Diffusion is kinda a different duck. It’s more compute heavy, yes, but the “generally accessible” software stack is also much less optimized for Macs than it is for transformer LLMs.
I view AMD Strix Halo as a solution to this, as it’s a big iGPU with a wide memory bus like a Mac, but it can use the same GPU software stacks (via ROCm rather than CUDA) that discrete cards use for that speed/feature advantage… albeit with some quirks. But I’m willing to put up with that if AMD doesn’t price gouge it.
Apple price gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities, then I think it’s one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), as I use the Mac for personal use also.
I’ll look into the AMD Strix Halo though.
GDDR is actually super cheap! I think it would only be like another $75 on paper to double the 4090’s VRAM to 48GB (like they do for pro cards already).
Nvidia just doesn’t do it for market segmentation reasons. AMD doesn’t do it for… honestly I have no idea why? They basically have no pro market to lose; the only explanation I can come up with is that their CEOs are colluding because they are cousins. And Intel doesn’t do it because they didn’t make a (consumer) GPU that was really worth it until the B580.
Oh, I didn’t mean “should cost $4000”, just “would cost $4000”. I wish the VRAM on video cards was modular; there’s so much e-waste generated by these bottlenecks.
Ah, yeah. Absolutely. The situation sucks though.
Not possible, the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too, unfortunately. SO-DIMMs have already topped out at 5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.
But look into CAMM modules and LPCAMMs. My hope is that we will get modular LPDDR5X-8533 on AMD Strix Halo boards.
I’ve got a 3090, and I feel ya. Even 24 gigs is hitting the cap pretty often and slowing to a crawl once system ram starts being used.
You can’t let it overflow if you’re using LLMs on Windows. There’s a toggle for it in the Nvidia settings, and get llama.cpp to offload through its settings (or better yet, use exllama instead).
But… yeah. Qwen 32B fits in 24GB perfectly, and it’s great, but 72B really feels like the intelligence tipping point where I can dump so many API models, and that won’t fit in 24GB.