removed by mod

@Skunk@jlai.lu

@tacosanonymous@lemm.ee

I only watch FreeTube in 12K.

@JoeKis@lemmy.world

Me with my 3090 pc playing only Stardew Valley: 🗿

Bakkoda

That 3090 is living a stress free life

@Elkenders@feddit.uk

Started playing this for the first time today on an RG35xxSP.

My Password Is 1234

256 GigaBITs (= 32 GB) of RAM is pretty low

It’s not as much as a lot of people have, but it definitely isn’t low.
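The unit math behind the quip, for anyone double-checking: the comic's spec reads in bits, and 8 bits make a byte, so:

```python
# 256 gigabits expressed in gigabytes: divide by 8 bits per byte
gigabits = 256
gigabytes = gigabits / 8
print(gigabytes)  # 32.0
```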

Parculis Marcilus

Me: Only have 8GB Ram at hand

@Echolynx@lemmy.zip

A 4090 is pretty great for playing Skyrim at 60 FPS, I’ll have you know.

Did not buy the 9800X3D

Dark Arc

deleted by creator

@brucethemoose@lemmy.world

I’m self hosting LLMs for family use (cause screw OpenAI and corporate, closed AI), and I am dying for more VRAM and RAM now. Even if I had a 4090, it wouldn’t be nearly enough.

My 3090 is sitting at 23.9GB/24GB because I keep Qwen 32B QwQ loaded and use it all the time. I even have my display hooked up to my IGP to save VRAM.

Seriously looking at replacing my 7800X3D with Strix Halo when it comes out, maybe a 128GB board if they sell one. Or a 48GB Intel Arc if Intel is smart enough to sell that. And I would use every last megabyte, even if I had a 512GB board (which is the bare minimum to host Deepseek V3).

I know it’s a downvote earner on Lemmy but my 64gb M1 Max with its unified memory runs these large scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is you can think of the VRAM like an aperture, and the lesser VRAM closes off how much you can do in a single pass.

@thebestaquaman@lemmy.world

Not running any LLMs, but I do a lot of mathematical modelling, and my 32 GB RAM, M1 Pro MacBook is compiling code and crunching numbers like an absolute champ! After about a year, most of my colleagues ditched their old laptops for a MacBook themselves after just noticing that my machine out-performed theirs every day, and that it saved me a bunch of time day-to-day.

Of course, be a bit careful when buying one: Apple cranks up the price like hell if you start speccing out the machine a lot. Especially for RAM.

Yeah, I got mine refurbished too, so someone else took the first hit of driving it off the lot (and the wait for it to be built). I guess they didn’t use it to its full extent, though. Not that it was “cheap”, though.

@uis@lemm.ee

You can always use system memory too. Not exactly UMA, but close enough.

Or just use iGPU.

It fails whenever it exceeds the VRAM capacity; I’ve not been able to get it to spill over to system memory.

@brucethemoose@lemmy.world

You don’t want it to anyway, as “automatic” spillover with an LLM is painfully slow.

The RAM/VRAM split is manually configurable in llama.cpp, but if you have at least 10GB VRAM, generally you want to keep the whole model within that.
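For reference, that split is set with llama.cpp's GPU layer-offload flag; a hypothetical invocation (the model filename and layer counts here are made up for illustration):

```shell
# Offload 40 of the model's layers to VRAM; the rest stay in system RAM.
./llama-server -m qwen-32b-q4_k_m.gguf --n-gpu-layers 40

# If the whole model fits in VRAM (e.g. an ~18GB quant on a 24GB card),
# just offload everything:
./llama-server -m qwen-32b-q4_k_m.gguf --n-gpu-layers 99
```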

Oh I meant for image generation on a 4080, with LLM work I have the 64gb of the Mac available.

@brucethemoose@lemmy.world

Oh, 16GB should be plenty for SDXL.

For flux, I actually use a script that quantizes it down to 8 bit (not FP8, but true quantization with huggingface quanto), but I would also highly recommend checking this project out. It should fit everything in vram and be dramatically faster: https://github.com/mit-han-lab/nunchaku
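On the “true quantization, not FP8” distinction: true int8 quantization picks a scale from the weight statistics and stores integers plus that scale, rather than just casting to an 8-bit float format. A toy pure-Python sketch of the idea (not quanto's actual implementation):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= q * scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.8, -1.27, 0.02, 0.5]
q, scale = quantize_int8(w)
print(q)  # [80, -127, 2, 50]
err = max(abs(a - b) for a, b in zip(dequantize(q, scale), w))
print(f"max reconstruction error: {err:.6f}")
```

Real libraries do this per-channel (or per-group) with calibration, but the storage principle is the same.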

I just run SD1.5 models, my process involves a lot of upscaling since things come out around 512 base size; I don’t really fuck with SDXL because generating at 1024 halves and halves again the number of images I can generate in any pass (and I have a lot of 1.5-based LORA models). I do really like SDXL’s general capabilities but I really rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me, don’t know what you kids are doing with your fancy pony diffusions 😃)

@brucethemoose@lemmy.world

The issue with Macs is that Apple does price gouge for memory, your software stack is effectively limited to llama.cpp or MLX, and 70B class LLMs do start to chug, especially at high context.

Diffusion is kinda a different duck. It’s more compute heavy, yes, but the “generally accessible” software stack is also much less optimized for Macs than it is for transformers LLMs.

I view AMD Strix Halo as a solution to this, as it’s a big IGP with a wide memory bus like a Mac, but it can use the same software stacks that discrete GPUs use (ROCm in place of CUDA) for that speed/feature advantage… albeit with some quirks. But I’m willing to put up with that if AMD doesn’t price gouge it.

Apple price gouges for memory, yes, but a 64gb theoretical 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities then I think it’s one of the best values on the market. I just run the 20b models because they meet my needs (and in open webui I can combine a couple at that size), as I use the Mac for personal use also.

I’ll look into the AMD Strix though.

@brucethemoose@lemmy.world

GDDR is actually super cheap! I think it would only be like another $75 on paper to double the 4090’s VRAM to 48GB (like they do for pro cards already).

Nvidia just doesn’t do it for market segmentation reasons. AMD doesn’t do it for… honestly, I have no idea why? They have basically no pro market to lose; the only explanation I can come up with is that their CEOs are colluding because they are cousins. And Intel doesn’t do it because they didn’t make a (consumer) GPU that was really worth it until the B580.

Oh I didn’t mean “should cost $4000” just “would cost $4000”. I wish that the vram on video cards was modular, there’s so much ewaste generated by these bottlenecks.

@brucethemoose@lemmy.world

Oh I didn’t mean “should cost $4000” just “would cost $4000”

Ah, yeah. Absolutely. The situation sucks though.

I wish that the vram on video cards was modular, there’s so much ewaste generated by these bottlenecks.

Not possible, the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too, unfortunately. SO-DIMMs have already topped out at 5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.

But look into CAMM modules and LPCAMMS. My hope is that we will get modular LPDDR5X-8533 on AMD Strix Halo boards.

I don’t know how the pricing is, but maybe it’s worth building a separate server with a second-hand TPU. Used server CPUs and RAM are apparently quite affordable in the US (assuming you live there), so maybe that’s the case for TPUs as well. And commercial GPUs/TPUs have more VRAM.

@brucethemoose@lemmy.world

second-hand TPU

From where? I keep a look out for used Gaudi/TPU setups, but they’re like impossible to find, and usually in huge full-server configs. I can’t find Xeon Max GPUs or CPUs either.

Also, Google’s software stack isn’t really accessible. TPUs are made for internal use at Google, not for resale.

You can find used AMD MI100s or MI210s, sometimes, but the go-to used server card is still the venerable Tesla P40.

@uis@lemm.ee

Aren’t LLMs external-memory algorithms at this point? As in, all the data will not fit in RAM.

@brucethemoose@lemmy.world

No, all the weights (essentially all the “data”) have to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet; it’s making a pass through all the weights every time a word is generated.

There are systems to augment the prompt with external data (RAG is one name for this), but fundamentally the system is closed.

@uis@lemm.ee

If you “talk to” a LLM on your GPU, it is not making any calls to the internet,

No, I’m talking about https://en.m.wikipedia.org/wiki/External_memory_algorithm

Unrelated to RAGs

@brucethemoose@lemmy.world

https://en.m.wikipedia.org/wiki/External_memory_algorithm

Unfortunately that’s not really relevant to LLMs beyond inserting things into the text you feed them. For every single word they predict, they make a pass through the multi-gigabyte weights. It’s largely memory-bound, and not integrated with any kind of sane external-memory algorithm.
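A back-of-envelope consequence of that memory-bound weight pass (the bandwidth and model-size numbers below are illustrative, not from the thread):

```python
def max_tokens_per_second(bandwidth_gb_s, model_size_gb):
    """Rough decode-speed ceiling for a dense model:
    every generated token requires reading all the weights once."""
    return bandwidth_gb_s / model_size_gb

# A 3090's ~936 GB/s VRAM vs. a 32B model quantized to ~18 GB of weights:
print(round(max_tokens_per_second(936, 18), 1))  # 52.0 tokens/s ceiling

# The same model spilled to dual-channel DDR5 (~80 GB/s):
print(round(max_tokens_per_second(80, 18), 1))   # 4.4 tokens/s
```

That order-of-magnitude gap is why spillover to system RAM feels painfully slow.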

There are some techniques that muddy this a bit, like MoE and dynamic lora loading, but the principle is the same.

@Hackworth@lemmy.world

Yeah, I’ve had decent results running the 7B/8B models, particularly the fine-tuned ones for specific use cases. But as ya mentioned, they’re only really good in their scope for a single prompt or maybe a few follow-ups. I’ve seen little improvement with the 13B/14B models and find them mostly not worth the performance hit.

Depends which 14B. Arcee’s 14B SuperNova Medius model (which is a Qwen 2.5 with some training distilled from larger models) is really incredible, but old Llama 2-based 13B models are awful.

@Hackworth@lemmy.world

I’ll try it out! It’s been a hot minute, and it seems like there are new options all the time.

@brucethemoose@lemmy.world

Try a new quantization as well! Like an IQ4-M depending on the size of your GPU, or even better, an 4.5bpw exl2 with Q6 cache if you can manage to set up TabbyAPI.

Altima NEO

I’ve got a 3090, and I feel ya. Even 24 gigs is hitting the cap pretty often and slowing to a crawl once system ram starts being used.

@brucethemoose@lemmy.world

You can’t let it overflow if you’re using LLMs on Windows. There’s a toggle for it in the Nvidia settings, and you can get llama.cpp to offload through its own settings (or better yet, use exllama instead).

But…. Yeah. Qwen 32B fits in 24GB perfectly, and it’s great, but 72B really feels like the intelligence tipping point where I can dump so many API models, and that won’t fit in 24GB.
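The 24GB arithmetic behind that tipping point (the bits-per-weight value is a typical quantization level, not from the comment):

```python
def weight_footprint_gb(params_billion, bits_per_weight):
    """Approximate model weight size: parameters * bits / 8, ignoring KV cache."""
    return params_billion * bits_per_weight / 8

print(weight_footprint_gb(32, 4.5))  # 18.0 GB: fits a 24GB card with cache headroom
print(weight_footprint_gb(72, 4.5))  # 40.5 GB: far past 24GB even before the cache
```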

MrGerrit

Got a new system and I’m playing diablo 2 and balatro on it…

Nicht BurningTurtle

That 4x animation speed is worth it.

@chetradley@lemm.ee

@Abnorc@lemm.ee

I feel like building a top of the line PC cost less than half this just one generation ago. What on earth happened?

Edit: maybe the specs in the comic are just crazy.

@Psythik@lemmy.world

The SSD is what’s jacking up the price so much.

I built a similar PC in 2022: Ryzen 7700X, 4090, 32GB of DDR5 6000, 4TB NVME and 6TB HDD; it was $4400 including tax. If you spec the same PC today, it’s under $3K now.

  • Prices for all parts soared during corona

  • AI heavily increased GPU prices

@Zetta@mander.xyz

“AI heavily increased GPU prices”: Nvidia, one of the greediest companies around, increased the prices of GPUs. It’s not necessarily AI’s fault, because Nvidia is in direct control of their card prices.

Take a look at Intel’s new GPUs, they are actually priced in a way that isn’t fist fucking the consumer.

Dark Arc

The specs in the comic are just crazy. The top of the line option has expanded a lot too. In the past Nvidia wouldn’t have bothered making a 4090 because the common belief was nobody would pay that much for a GPU… But seemingly enough people are willing to do it that it’s worth doing now.

AMD also revived CPUs in desktop PCs from extreme stagnation and raised the bar for the high end on that side as well by a lot.

So it’s a mix of inflation and the ceiling just being raised as to what the average consumer is offered.

@chetradley@lemm.ee

TBF I misread the storage originally and went with all SSD, so the price should be about a thousand less. Here’s the part list for anyone who wants to check my work: https://newegg.io/6d2b327

You could easily save $138.99 by using Linux.

Is Newegg back to being a good site to buy from? Felt like they got pretty crappy for a while.

@Zetta@mander.xyz

I still order from them, although it’s definitely gotten worse, and the website feels like it was designed exclusively by the marketing team and high-up executives instead of web engineers.

Honestly, I typically gravitate towards Amazon over Newegg these days, but Newegg does have a lot more options for computer hardware, so I still order from them. I just ordered an ITX motherboard and SFX power supply from Newegg yesterday, actually, because Amazon didn’t have the products I wanted.

y0kai

Just built a server and a PC and they were fine. I had a couple of delivery issues and they sent replacements no questions asked. Even ended up with an extra 4 TB HDD because UPS fucked up, they sent me another, then UPS came by with the original.

Only issue now is a backordered SSD that they keep pushing the release date back on, but $180 for a 2TB nvme pcie Gen5 x4 ssd is worth the wait, I think.

@Donkter@lemmy.world

41 tb?? What are you doing? Recording a lossless video of a 24/7 live stream?

I have about 60TB in mine. Media, man. Collecting is addictive.

Just another flight simmer, sounds like

I upgraded to a new GPU a few weeks ago but all I’ve been doing is playing Factorio which would run just fine on 15 year old hardware.

You can upgrade to satisfactory now! Its like factorio all growed up!

That’s the spirit!

After 3 upgrades since the game came out, I can get a pretty consistent 40fps in Arma 3. 😤

Core i9 - Well there’s your problem.

No NVMe M.2s? What a noob! HDDs in this day and age!?!? Would you like a floppy disk with that?

4 slots of RAM? What is this, children’s playtime hour? You are only supposed to have 2 slots of RAM installed for optimum overclocking.

Does the dude even 8K 300fps ray trace antialias his YouTube videos!?!? I bet he caps out his Chrome tabs below a thousand.

@IceFoxX@lemm.ee

NVMe drives are SSDs too; both use flash memory. NVMe is just the protocol.

Joking aside, I do tend to assume that people who say SSD mean traditional SATA SSDs rather than M.2 NVMe drives.

I think they were saying that the read/write speeds of an NVMe drive would be faster than (an unspecified) SATA drive. But that was my assumption while reading.

SATA SSDs are still more than fast enough to saturate a 2.5G ethernet connection. Some HDDs can even saturate 2.5G on large sequential reads and writes. The higher speed from M.2 NVMe drives isn’t very useful when they overheat and thermal throttle quickly. You need U.2 or EDSFF drives for sustained high speed transfers.
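The numbers behind that claim (line rates before protocol overhead):

```python
def gbit_to_mb_s(gbit):
    """Convert a Gb/s line rate to MB/s: 1 gigabit = 125 megabytes."""
    return gbit * 1000 / 8

print(gbit_to_mb_s(2.5))  # 312.5 MB/s: the 2.5GbE ceiling
print(gbit_to_mb_s(6.0))  # 750.0 MB/s raw SATA III; ~600 after 8b/10b, ~550 in practice
```

So even a mid-tier SATA SSD at ~500 MB/s has headroom over a saturated 2.5G link.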

Exactly. NVMe for my gaming desktop, HDD and SATA SSD for my NAS.

I didn’t get a thing, but sounds cool

@Mr_Dr_Oink@lemmy.world

HDD for long term storage. More reliable, with a higher number of read/writes before failing (essentially infinite, assuming the drive never fails mechanically). Cheaper and higher capacity than any SSD or M.2. Also, if you don’t keep applying a small electrical charge to an M.2, it eventually loses the data; an HDD doesn’t really lose data as easily. Data recovery is also easier with an HDD. Finally, you know when an HDD is on its way out, as it will show slower write speeds, become noisier, etc.

I used to work in a service desk looking after maybe… 4000 desktops and 2000 laptops for a hospital and the amount of ssd and m.2 failures we had was very costly.

@TheObviousSolution@lemm.ee

I actually only installed M.2 a few years back when I went serious on my PC. I’m aware of the issues, although it’s still running good. I wonder how long it will last. I still have a few IDE drives, and some no longer can be read. Not because they’ve lost the data, but it just doesn’t spin up correctly. It will be interesting to see how it works out, at the moment I’m keeping an eye out on the health using CrystalDiskInfo. There’s certainly been cases of M.2 sticks with shitty firmware, but so far I seem to have avoided them. I’m also trying out a RAIDed M.2 mini NAS, it will be fun to see how that works out compared to the traditional NAS.

@Juvyn00b@lemmy.world

Are you doing this nas on custom hardware? Curious about the build if so.

@magikmw@lemm.ee

The heck is HHD+? Is this some new fangled storage tech I’m too SSD to understand?

As in “and” at least that’s how I interpret it.

@magikmw@lemm.ee

What’s just HHD then?

Why would you buy a 25tb HDD. Have they never heard of RAID?

Oh good lord I’m blind, Resolidified spinning rust?

cally [he/they]

Harder Hard Drive

@TrickDacy@lemmy.world

A raspberry pi 5 can play YouTube in HD just fine, so if you wanna save 4000 bucks maybe do that instead

You can also just buy a used laptop or business computer which is infinitely better and cheaper.

@Prunebutt@slrpnk.net

Cheaper than a raspberry? O.o

Pi 5 desktop kit is like $150 isn’t it?

Yeah you can beat that performance and price with some used hardware. Will cost more in power though.

@Prunebutt@slrpnk.net

You could get away with nothing but the Pi, depending on what you’ve got lying around.

@curbstickle@lemmy.dbzer0.com

Sure, depends on needs of course. Just saying I can see how someone could arrive at a better price point than a pi with more performance.

Just not more per watt (except in more burst demanding scenarios).

The pi foundation lost a lot of goodwill with me though, so I stick to the alternatives (orangepi for example) if I need one.

Edit: I a whole word.

@TrickDacy@lemmy.world

Oh man, I tried an Orange Pi and I cannot express how sketchy that thing was, top to bottom. It had a lot of power, but that was the one good side it had (it was a lot more expensive than an RPi too). That shitty flashing utility alone makes it worth picking something different.

I had so much trouble trying different OSes on it. I think none of them felt stable, and I tried like 5 (multiple versions of each), I think.

I’ve got very specific needs when it comes to pi-alikes, so I can only speak to how I’ve used it.

I still won’t support the pi foundation though.

@peregus@lemmy.world

Well, actually, with $150 you could buy a used business SFF/tiny PC with an 8th/9th-gen i5 CPU, and I don’t think it will consume that much more than an RPi.

@curbstickle@lemmy.dbzer0.com

Only at idle.

At peak the sff PCs are going to be at least triple the ~30W of the pi 5.

Edit: You’ll get way more out of the sff though, which is what I was saying. Tiny/mini/micro is my entire self hosted environment (as well as lab and work setup for the most part).

@peregus@lemmy.world

At peak the sff PCs are going to be at least triple the ~30W of the pi 5.

Are you sure? I think that for the same tasks, the i5 (at least 9th gen) is more power efficient than the rpi 5. I was a pi guy, I had them all over the places, but like you, I’m now using SFF/tiny used PCs (when I don’t need GPIO).

@curbstickle@lemmy.dbzer0.com

At least as far as my setup goes, yeah. I’ve got 5th-10th gens; under high loads I’ll see a spike to 80+ watts, and the highest is 170W, but those have Nvidia Quadros in them.

Edit: For gpio now I’ll just use an esp32 or something instead.

My only pi usage these days is work stuff, and orangepi is supported there. In terms of arm, also Jetson, but that’s kind of outside the discussion here.

Used stuff is generally cheaper than new stuff.

@Prunebutt@slrpnk.net

Yeah, but I wouldn’t be sure used stuff below 100€/$/whatever could handle the internet too well, nowadays.

Anything made in the past 10-15 years still works great. I have a couple of really old thin clients that I bought for around $20, and I dumped my Pis when the prices were way up. One runs OctoPrint and the other runs Lubuntu out in the garage so I can look up vehicle specs and other things while I’m out there. I have a 5th-gen Intel laptop that still works great, and a desktop with a Ryzen 3000 series that works just fine, both bought used for under $100. A Raspberry Pi is good for certain tasks, but using it for a desktop makes little sense. Even now I’m writing this message on an Android phone that was around $100 with no issues.

CPU power hasn’t changed much, they’ve added more features over the years, but power hasn’t changed a lot, only Windows has gotten more bloated so you need more ram to run it.

Yes. Here’s a random listing on CL.

@TrickDacy@lemmy.world

HP

Kinda says it all.

@Prunebutt@slrpnk.net

Phenom X4? That thing probably can’t really handle YT HD streams.

@Rai@lemmy.dbzer0.com

Surprisingly, it can with an SSD! I just replaced mine with a Pi 5 because the single tiny fan was loud hahaha

@teejay@lemmy.world

Yep. First of all, the person who said an RPI5 can show YouTube HD just fine is lying. It’s still stuttery and drops frames (better than the RPI4b, but still not great). Second, you’ll end up dropping well north of $100 for the RPI5, active cooler, case, memory card (not even mentioning an m2 hat), power supply, and cable / adapter to feed standard HDMI.

You can find some really solid used laptops and towers in that price range, not to mention the n100 NUC. And they’ll all stream YouTube HD much better, as well as provide a much smoother desktop experience overall.

Don’t get me wrong, I love me a RPI, I run a couple myself. They’re just not great daily drivers, especially if you want to stream HD content.

@TrickDacy@lemmy.world

I’m the person you’re accusing of lying. To your point, there are some dropped frames but that’s not a problem for me, and I figure most people wouldn’t notice 10 dropped frames out of every 1000, or whatever similar ratio it is. I have a rpi for a media PC and I’m happy with it. I play HD video in several web apps and only the shittiest of them (prime and paramount+) ever have a noticeable issue with playback.

People who complain about rpi’s being expensive kinda make me scratch my head. Like, do you not count the accessories you buy for other hardware? It seems the comparison is between the RPI and every single thing you buy for it, vs a laptop/PC itself with no accessories (which you will almost certainly be buying some amount of). I get that it sucks that these devices have gone up in price, but yeah, the accessories aren’t all that much more than any other device. You could have a very solid RPI setup for $120 all-in. And it would be more durable than some sketchy Acer laptop.

@teejay@lemmy.world

Youtube on the RPI5 drops frames and is stuttery. If that’s fine for you, great. But I’d argue it’s not what people consider a good viewing experience. See https://www.youtube.com/watch?v=UBQosbjl9Jw&t=278s and https://youtu.be/nBtOEmUqASQ?si=VXFGVBid5wCrhu-u&t=797 if you’d like more info.

The accessories I mentioned for the RPI5 are the bare essentials just to get the thing to power up, boot to a web browser, and connect to a monitor to try to play YouTube, which is the foundation of your original comment. Please show me where a $120 used laptop or desktop tower needs additional hardware purchases to boot and plug in an HDMI cable.

You’re picking the wrong fight with the wrong guy, friend. I’m a huge RPI advocate and I think they are great tools for specific use cases. I simply want to point out that if folks are considering it in the hopes that it’s a small and cheap way to watch YouTube, they’re gonna have a bad time.

@Rai@lemmy.dbzer0.com

My Pi5 setup with a very fast SD card plays YouTube without dropping frames or stuttering at 1080p, that other guy is wrong. The UI is slow and a bit janky, but once YT is loaded and fullscreen, it plays perfectly. It plays 24/7 on a TV in our living room for my partner’s WFH and for our cats when they’re done.

@TrickDacy@lemmy.world

Thank you for that validation! I actually just tested mine and saw the same results as you describe. I would drop about 30-50 frames going full screen and then only one here or there every few minutes. It is damned close to perfect.

@Rai@lemmy.dbzer0.com

They run 1080p without any frame drops or stuttering. I have one playing YT 24/7 in my living room. I have a fast SD card for it, though.

@AtariDump@lemmy.world

By the time you factor in a case, cooling, SD card, and power supply a Pi can cost about as much (~$100) as an off lease i5.

@Prunebutt@slrpnk.net

Guess it depends on how much hardware you already got lying around. I could scrounge up an SD card, an old phone charger, a fan, keyboard and mouse from my supply and I’ve got a 3D printer.

I’m all for recycling though. I’m just not sure if a $100 i5 system could handle all that bloat on the web.

Also: I really like AV1. 😅

@AtariDump@lemmy.world

SD card, sure.

Old phone charger? Might not be strong enough to power a Pi 4/5; this isn’t the Pi 3 days with a microUSB.

“… and I’ve got a 3D printer.“

Yeah, you’re “ahead” of most people with that one.

This i5 could probably handle it, and you’ve got $$$ left over for an SSD.

Go with the 8500 i5 instead. I have that with a 1650 for the kids.

@AtariDump@lemmy.world

Agreed.

Was proving that you can find a halfway decent machine that’s more capable than a Pi 5 at the same price point (~$100).

If you have that lying around, you can still beat a pi 5 by only buying an i5 motherboard instead of an entire pc. As to handling the bloat, it’s faster.

Have you had success with phone chargers? My RPi4 is very picky about power and is unstable on anything but the official power supply.

@Prunebutt@slrpnk.net

I’m using the fast-charging adapters that are bundled with Samsung phones and the like. But my Raspberry Pi 4 is currently not in use.

@NeatoBuilds@mander.xyz

Yeah, what I did is I got one of those Dell thin client laptops. It runs great. I just open up Parsec and can remote into my server that has an i9 and 256GB RAM with a 4090, and like 100TB HDD and 4TB NVMe.

Amber Rose🌹

You’re right

@daniskarma@lemmy.dbzer0.com

If it weren’t for YouTube’s shitty war on ad blockers, I was able to watch YouTube 1080p on a 30-buck Android TV thingy.

I would have to check if someone built an alternative app to keep watching it, because the power of the device was no issue. When running on a minimal Kodi installation it worked just fine.

@uis@lemm.ee

Just buy used PC. Same perf, lower price.

That was the joke.

@TrickDacy@lemmy.world

I mean, yeah, I realize it was the joke. I think I was just adding context some people may not know about. I didn’t know a rpi could do that task until I started researching media PC options.

@Zetta@mander.xyz

I wanted to go with a pi for my HTPC but I have a Plex server and all my movies are full bitrate 4k files straight from the UHD blurays and the pi couldn’t handle that bitrate. Ended up building a small ITX PC with my old PC hardware and a new Intel A380 gpu.

I’m so thankful Intel is doing their best to enter the discrete GPU market. Such banger cards for so little.

مهما طال الليل

Change that last to “Time to play some 8bit games upscaled to 4K”

SkaveRat

“finally upgraded my steam deck to 1TB” plays Vampire Survivor
