
This is more complicated than some corporate infrastructures I’ve worked on, lol.


Yeah, the image bytes look random because they’re already compressed (unless they’re bitmaps, which is unlikely); compression squeezes out redundancy, so the output is statistically close to random.
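
A quick way to see this (a minimal sketch; the file names are placeholders): compute the Shannon entropy per byte. Compressed formats like JPEG or PNG land near 8 bits/byte, i.e. near-random, while uncompressed bitmap data usually sits much lower.

```python
# Rough entropy check (illustrative): compressed image data should score
# close to 8 bits/byte, uncompressed bitmap data noticeably lower.
import math
from collections import Counter

def entropy_bits_per_byte(path):
    data = open(path, "rb").read()
    counts = Counter(data)
    n = len(data)
    # Shannon entropy: -sum(p * log2(p)) over byte frequencies
    return -sum(c / n * math.log2(c / n) for c in counts.values())

for f in ["photo.jpg", "photo.bmp"]:  # placeholder file names
    print(f, round(entropy_bits_per_byte(f), 2), "bits/byte")
```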


OSMC’s Vero V looks interesting. A Pi 4 with OSMC or LibreELEC could also work. I’m probably going to do something like this pretty soon. I just set up an *arr stack last week, and I’m just using my smart TV with the Jellyfin app installed ATM.

My PC running the Jellyfin server can’t transcode some videos though; I’m probably going to put an Arc A310 in it.


I’ve been using last.fm for decades now, I guess. Looking at what my “neighbors” are listening to is the most helpful feature.


Yeah, torrents usually run 100-300 KiB/s. I guess that’s not too bad for smaller files: about an hour to three per GB (1 GiB ≈ 1,048,576 KiB, so roughly an hour at 300 KiB/s and about three hours at 100 KiB/s).


I mean, you can be sued for anything, but a case like that would get thrown out. Like, I guess the MPAA could offer a movie for download and then try to sue the first hop they upload a chunk to, but that really doesn’t make any sense (they offered it for download in the first place). Furthermore, the first hop(s) aren’t the people using the file, and they can’t even read it. If people could successfully sue relay nodes, then ISPs and postal services could be sued for anything that passes through their networks.


Onion-like routing: a packet takes multiple hops to reach its destination, and each hop can only decrypt the next destination to forward it to (i.e. peeling off one layer of the onion). No single hop sees both the sender and the final destination.
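
Here’s a toy sketch of the layering idea in Python (illustrative only, not I2P’s actual wire format; the relay names and the use of sender-known Fernet keys are assumptions for the demo):

```python
# Toy onion routing: the sender wraps the message in one encryption layer
# per relay; each relay's key peels exactly one layer, revealing only the
# next hop and an opaque blob it cannot read any further into.
import json
from cryptography.fernet import Fernet

route = ["relay-A", "relay-B", "relay-C"]          # hypothetical relays
keys = {r: Fernet.generate_key() for r in route}   # per-hop keys (demo only)
hops = route + ["destination"]

# Build the onion inside-out: start with the plaintext, wrap outward so
# the first relay holds the outermost layer.
inner = "hello"
for i in reversed(range(len(route))):
    layer = json.dumps({"next": hops[i + 1], "inner": inner})
    inner = Fernet(keys[route[i]]).encrypt(layer.encode()).decode()

# Forwarding: each relay peels its one layer and passes the rest along.
packet, holder = inner, route[0]
while holder in keys:
    layer = json.loads(Fernet(keys[holder]).decrypt(packet.encode()))
    print(f"{holder} forwards to {layer['next']}")
    packet, holder = layer["inner"], layer["next"]
print("destination receives:", packet)  # -> hello
```

In real onion/garlic routing the per-hop keys are negotiated with each relay (e.g. via Diffie-Hellman) rather than handed out like this, but the peel-one-layer structure is the same.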


Hmm, so it looks like around 100 kB/s. That’s about what I remember (100-300 kB/s).

I’ve recently been trying out Tribler, and it’s much faster than the last time I tried it (I’ve seen 2 MB/s on popular torrents, but around 500 kB/s on less popular ones). Not sure if there are simply more exit nodes with more bandwidth now, or if there are more people on the Tribler network seeding.


According to the docs, there’s some kind of search functionality built into it: https://github.com/BiglySoftware/BiglyBT/wiki/MetaSearch

Off-topic: I haven’t tried I2P in years and have never used BiglyBT. Out of curiosity, what download speeds are you seeing?


I like the Turris Omnia and (highly configurable) Turris Mox. They come with OpenWrt installed.


IDK, it looks like 48GB cloud pricing would be $0.35/hr, or about $250/month. Used 3090s go for $700. Two 3090s would give you 48GB of VRAM and cost $1400 (I’m assuming you can do “model-parallel” with Llama; I’ve never tried running an LLM, but it should be possible and work well). So, the break-even point would be under 6 months. Hmm, but if serverless works well, that could be pretty cheap. It would probably take a few minutes to load a ~48GB model on every cold start though?
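
The back-of-the-envelope math, as a quick sanity check (same figures as above):

```python
# Break-even: renting a 48 GB cloud GPU vs. buying two used RTX 3090s.
cloud_rate = 0.35                       # $/hr for 48 GB, figure quoted above
cloud_per_month = cloud_rate * 24 * 30  # ~$252/month
gpu_cost = 2 * 700                      # two used 3090s = 48 GB VRAM total
print(f"cloud: ${cloud_per_month:.0f}/month")
print(f"break-even: {gpu_cost / cloud_per_month:.1f} months")
# -> cloud: $252/month, break-even: 5.6 months
```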


Where I live, I would still need to pay for a VPN to use torrents. I’ve been banned from an ISP before for torrenting (thankfully, I had multiple ISPs available to me).

At the moment, I just “pay” legally, because I get a few “free” streaming plans from my mobile provider and ISP. Occasionally, I use a free streaming site if I really want to watch something that’s not available to me. Every once in a while, I try anonymous P2P such as Tribler or torrenting over I2P, but it’s still extremely slow, unfortunately. I’ve never used Usenet, but I think it costs about the same as a VPN or seedbox would?


Server-side rendering looks like it could be useful. I imagine SSR could be used for graceful degradation, so what would normally be a single-page application could still work without JavaScript. Though I’ve never tried SSR, and nobody seems to care about graceful degradation anymore.
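
Here’s a minimal sketch of the idea in plain Python (no SPA framework; the data and markup are made up for illustration): the server ships complete HTML, so the page is usable even if client-side JavaScript never runs, and any script just enhances it afterwards.

```python
# Minimal server-side rendering sketch: the server returns fully-formed
# HTML, so the content works without client-side JavaScript; a script tag
# can progressively enhance it afterwards.
from http.server import BaseHTTPRequestHandler, HTTPServer

ITEMS = ["first post", "second post"]  # stand-in for app state

def render(items):
    # The same markup a client-side app would build, pre-rendered here.
    lis = "".join(f"<li>{item}</li>" for item in items)
    return (f"<!doctype html><html><body><ul>{lis}</ul>"
            f"<script src='/app.js' defer></script></body></html>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render(ITEMS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Handler).serve_forever()
```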



It’s good at refactoring smaller bits of code. The longer the input, the more likely it is to make errors (and for the same reason, you should prefer starting a new chat over continuing a long one). It’s also pretty good at translating code to other languages (e.g. MySQL->PG, Python->C#), reading OpenAPI JSON definitions and creating model classes to match, and stuff like that.

Basically, it’s pretty good when it doesn’t have to generate stuff that requires complex logic. If you ask it about tasks, languages, and libraries that it has likely trained a lot on (i.e. the most popular stuff in FOSS software and example repos), it doesn’t hallucinate libraries too much. And GPT-4 is a lot better than GPT-3.5 at coding tasks; GPT-3.5 is pretty bad. GPT-4 is a bit better than Copilot as well.


I haven’t run into a good use case to try serverless yet. Either cold starts would be a problem (for example, I have an endpoint that needs to load a 5GB model into RAM, which takes about 45 seconds), or it’s just much more expensive than a VPS when the service is projected to serve many requests constantly all day. Containerized services on a VPS don’t require much server maintenance (unless you have a dozen or so microservices; then yeah, Kubernetes maintenance adds a lot of overhead).