DaGeek247 of https://dageek247.com

  • 1 Post
  • 43 Comments
Joined 9M ago
Cake day: Feb 16, 2024


Yup. If the SD card doesn't have enough space for everything, you could attach an M.2 HAT to it as well. https://www.raspberrypi.com/news/using-m-2-hat-with-raspberry-pi-5/

Basically, Jellyfin on the Pi, with the Wi-Fi set up as an access point, and whatever amount of storage you need. The Pi requires 5V/5A, so you’ll probably run into issues running off the car’s USB power, but a cheap 30 amp-hour battery should run it for 6-10 hours if my napkin math is right.
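That napkin math can be sketched out in shell. All the numbers here are assumptions: a 30 Ah power bank at a 3.7 V nominal cell voltage, and roughly 11-18 W average draw (the 5V/5A rating is a peak, not what playback actually pulls):

```shell
#!/bin/sh
# Napkin math for Pi 5 runtime on a power bank (every number is an assumption).
mah=30000           # 30 Ah power bank capacity
cell_mv=3700        # nominal Li-ion cell voltage, in millivolts
watt_hours=$((mah * cell_mv / 1000000))   # ~111 Wh of stored energy
low_draw=11         # watts, light playback
high_draw=18        # watts, heavier use
echo "Best case:  $((watt_hours / low_draw)) hours"
echo "Worst case: $((watt_hours / high_draw)) hours"
```

With those assumed draws it lands right in the 6-10 hour range.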


After minor setup, my experience has been incredibly plug and play.


You understand that, for everyone except a complete network pro, that is worse for security and privacy, right?

Don’t get me wrong, it’s great that you can.

But the reason piracy websites struggle so much with long term stability isn’t because they’re hosting the wrong software.


Or just use a password manager like keepass where the problem of storing passwords has been solved already…



That’s not how hard drives work, and doesn’t take into account that OP might want to download more than one thing at a time.

Hard drives are fastest when they are moving large single files. SSDs are way better than hard drives at lots of small random reads/writes. Setting qBittorrent up so that all the random writes inherent to downloading a torrent go to a small SSD, and then moving the finished file over to the big hard drive with a single long write operation, is how you make both devices perform at their best.
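In the qBittorrent GUI this lives under Options → Downloads ("Keep incomplete torrents in"). In qBittorrent.conf it looks roughly like this; the paths are examples and the exact key names can vary by version:

```ini
# qBittorrent.conf (sketch; key names may differ slightly by version)
[BitTorrent]
Session\TempPathEnabled=true
Session\TempPath=/mnt/ssd/incomplete
Session\DefaultSavePath=/mnt/hdd/media
```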


qBittorrent moves the completed files to the assigned save location literally as soon as the download is done.


Yeah, I use the incomplete folder location as a cache drive for my downloads as well. Works quite nicely. It also keeps the incomplete ISOs out of Jellyfin until they’re actually ready to watch, so, bonus.

If it’s not going faster for you, there’s probably something else that’s broken.



TorrentGalaxy has what you’re after (as a boxset), and supports searching by IMDb ID.


Not access, knowledge. Giving a specifically unique device identifier every time you visit a page is different from the website guessing if you visited recently based on your screen size and cookies.

You have to set up ipv6 to change regularly to avoid that.


You have to take extra steps to ensure that the benefits of NAT aren’t lost when you switch to IPv6. By default, each device gets its own IPv6 address, so everyone knows exactly which device you’re using.

IPv6 is nice, but you also need to know what you’re doing to get all the benefits without any of the downsides.
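On Linux, for example, the usual knob is RFC 4941 privacy extensions, which rotate a temporary outgoing address instead of always exposing the stable per-device one. A minimal sketch (the file name is arbitrary; `2` means "enable temporary addresses and prefer them for outgoing connections"):

```ini
# /etc/sysctl.d/90-ipv6-privacy.conf (sketch)
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
```

Apply with `sysctl --system` or a reboot.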


I have the AT&T BGW-320 as well. Very excited for when the hardware for the bypass comes around.

I tried using the IP passthrough setup on it, but it ended up causing all sorts of slowdowns that I had trouble diagnosing. I was using the NanoPi R4S with a Wi-Fi AP when I had this issue. Make sure to look into compatibility: AT&T’s IP passthrough is not total passthrough, so you might have to dig into the details to make sure it all works together.


Is this a bug, or is it actually just limited to the transcode speed? I would love to read the incident/bug report about this.


Nope. SD cards can do terabytes now. Walking away with it is probably the easiest part of the whole heist plan.

Getting around the obscure hardware and software DRM schemes, moving that much data quickly enough that you don’t have to make two trips, getting the knowledge required to do all that… I figure those would probably be harder.


My robots.txt has been respected by every bot that visited it in the past three months. I know this because I wrote a page that IP bans anything that visits it, and I also listed it as a disallowed path in the robots.txt file.

I’ve only gotten like, 20 visits in the past three months though, so, very small sample size.
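The honeypot setup is roughly: disallow a path in robots.txt that no human would ever navigate to, then ban any IP that fetches it anyway. The path name here is made up for illustration:

```text
# robots.txt
User-agent: *
Disallow: /honeypot/
```

A well-behaved crawler reads this and never touches /honeypot/; anything that requests it anyway self-identifies as ignoring robots.txt and gets banned by whatever mechanism sits behind that URL.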


I ended up using a cam of a movie from 2009 to check whether the movie had differences between the theater version and the DVD release. It didn’t, but it was neat that I could check, 15 years later.

So I respect them, but also, good god will I never actually watch one for the movie itself.


They make you compile it because it’s non-free software and you’re beta testing it in order to use it for free.

It has nothing to do with Linux. They do the same for the Windows beta.


I run Debian with ZFS. Really simple to set up, and it has been rock solid too. As far as I can tell, all the issues I’ve had have been my fault.

ZFS looks like it uses a lot of RAM, but you can get away without it if you need to. It’s basically extra caching. I was thrilled to use it as an excuse to upgrade my RAM instead.
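Most of that RAM appetite is the ARC read cache, and on ZFS-on-Linux it can be capped with a module parameter. A sketch, with 4 GiB as an arbitrary example value:

```ini
# /etc/modprobe.d/zfs.conf (sketch) - cap the ARC at 4 GiB
options zfs zfs_arc_max=4294967296
```

After editing, rebuild the initramfs (on Debian, `update-initramfs -u`) and reboot for it to take effect at import time.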

Mdadm takes a little more setup than ZFS, as far as I’m concerned. You need to set up your own scrubbing, whereas ZFS schedules its own for you. You need to add monitoring for both, though.
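For mdadm, a scrub is just kicking off a check on the array. A minimal cron sketch (the device name `md0` and the schedule are assumptions; Debian's mdadm package also ships its own monthly checkarray job):

```text
# /etc/cron.d/mdraid-scrub (sketch): scrub md0 at 3 AM on the 1st of each month
0 3 1 * * root echo check > /sys/block/md0/md/sync_action
```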

I’ve considered looking into the various operating systems designed for this, but they just don’t seem to be worth the effort of switching to me.




Nah. Metric should have just been base twelve.


Craft Computing has been chasing this for several years now, and his most recent attempt is the most successful one. https://m.youtube.com/watch?v=RvpAF77G8_8



I can’t speak to your last requirement, but Nunti promises your own custom adaptive-learning RSS feed.


Both. How quickly a server can send a webpage with images (even if they’re small) depends directly on the storage medium’s seek times. The worse the seek times, the less ‘responsive’ a website feels. Hard drives are a terrible place to keep your metadata.

The server scan will search for the files, look them up and grab metadata, and then store that metadata in the metadata location. If your metadata location is the same spot as your movies, it will cause some major thrashing and will significantly increase the scan time for Jellyfin. Essentially, it gets bogged down trying to read and write lots of tiny files on the same drive, the absolute worst-case scenario for a hard drive.

If the movies are on a hard drive, and the metadata on an ssd (or even just a different hard drive) the pipeline will be a lot less problematic.
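One hedged way to arrange this, if you happen to run Jellyfin in Docker: keep the config volume (which holds the metadata, database, and cache) on the SSD, and mount the media from the hard drive. All the host paths here are examples:

```yaml
# docker-compose.yml (sketch; host paths are examples)
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /mnt/ssd/jellyfin-config:/config   # metadata, db, and cache live on the SSD
      - /mnt/hdd/movies:/media/movies:ro   # big video files stay on the hard drive
```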


You can significantly speed this process up by putting the cache folder on an SSD, instead of on the same hard drive the videos are on.


Skipping the audio re-encode from a Blu-ray will cost OP a surprisingly large amount of space, especially with 110 source discs. I checked one of my two-hour Blu-ray backups: it carries about nine audio tracks (English, French, etc.). A single 5.1 448 kb/s audio track takes about 380MB per movie. Multiply that by nine (the number of tracks in my sample) and you get 3420MB per disc, which means about 376GB, a third of a terabyte, is used on audio alone across OP’s collection. You can save a lot of space by cutting out the languages you don’t need, and also by compressing the source audio to Ogg or similar.
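Sanity-checking that math in shell, using the same assumed numbers (448 kb/s track, two-hour runtime, nine tracks per disc, 110 discs):

```shell
#!/bin/sh
# Size of one 448 kb/s AC-3 track over a 2-hour movie, then scaled up.
bytes_per_track=$((448 * 1000 / 8 * 2 * 3600))    # kb/s -> bytes over 7200 s
mib_per_track=$((bytes_per_track / 1024 / 1024))  # ~384 MiB, close to the ~380 MB measured
gib_total=$((bytes_per_track * 9 * 110 / 1024 / 1024 / 1024))
echo "${mib_per_track} MiB per track, ~${gib_total} GiB for 9 tracks x 110 discs"
```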

By running the following ffmpeg command: ffmpeg -i out-audio.ac3 -codec:a libvorbis -qscale:a 3 small-audio.ogg I got my 382MB source audio track down to 200MB. Combine that with only keeping the language you need, and you end up dropping from 376GB down to about 22GB total.
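Both steps can be done in one pass: keep only the first audio track, compress it to Vorbis, and copy the video and subtitles untouched. A sketch only; stream indices vary per disc, so check yours with ffprobe first, and the filenames are placeholders:

```shell
# Keep video, the first audio track only, and any subtitles;
# re-encode just the audio (stream numbering varies per disc).
ffmpeg -i input.mkv \
  -map 0:v -map 0:a:0 -map 0:s? \
  -c:v copy -c:s copy \
  -c:a libvorbis -qscale:a 3 \
  output.mkv
```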

You can likely save even more space by skimping on subtitles. They’re stored as images, so they take up a chunk of space too.


I did a comparison a little while ago, trying to find where my personal “good enough” and technically indistinguishable CRF levels were. It may be worth looking into as a start. I’ve never really touched HDR before, though.

Comparing compression levels
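A sketch of how that kind of comparison can be run: encode the same short sample at several CRF values, then compare file sizes (and your eyes) afterward. The clip offset, length, and filenames are arbitrary:

```shell
# Encode the same 60-second sample at several x265 CRF values.
for crf in 18 20 22 24 26 28; do
  ffmpeg -ss 600 -t 60 -i source.mkv \
    -c:v libx265 -preset slow -crf "$crf" \
    -an "sample-crf${crf}.mkv"
done
ls -lh sample-crf*.mkv
```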


God no. x264 is way worse than x265, which is way worse than AV1, in quality per file size.

Yes, everything made in the past 15 years can decode x264, but that does not mean it is a good idea. Only use x264 if you have a specific device that needs it. Otherwise, x265 is a better choice for long-term storage.



If you’re putting it in a box, it is going to cook itself to death regardless of whether it needs fans. If you’re putting it on a dirty floor, convection is going to move the dust into it anyway.

If you’re putting it in a shop, consider hardware purpose-built for that.


My old ISP just let me use their device, which did this with no routing, when I asked for it. I didn’t have to buy a MoCA device; I just had to ask to use my own router.

This of course is not true for my new ISP, but it’s worth the effort to avoid the hassle of accidentally getting the wrong device to put between your router and the wall.


I quoted the article. I read it, and it’s stupid. Also, religious ≠ believes in gods. 28% of Americans are “Nones” and growing, and that number includes religious people.

The number you quoted is practically the same as the one I quoted. I’m not sure why you bothered.

I completely missed your quoting the article. My bad. Even the article is saying the premise in the title is silly / unknowable. I was wondering why you were saying the same things the article was; that arguing for piracy using religion is a bit of a mixed bag.

But whether someone cares about the status of gods’ existence matters insomuch as it’s the core precondition of the article. If gods don’t exist, wondering what they think is like wondering what Harry Potter thinks about piracy: interesting as a shower thought, but hardly relevant to making real moral decisions.

The core question is not moot, because more than half the population agrees with the article’s core premise. It doesn’t matter whether god exists; it matters that most everybody thinks one does. Using that belief to discuss piracy is not a flawed discussion, and it is not dependent on the actual existence of a god, just on the existence of people’s belief in them.


Prove gods exist, else the core question is moot.

75% of Americans consider themselves religious. Your statement is wrong. Ignoring three quarters of the population because you can’t get over the existence of religion is a personal failure.

Nobody here actually cares about the status of god’s existence. All of us care about piracy. Find somewhere else to stand on your religion soapbox.

Hell, while we’re here, did you even consider reading the article? It makes for some pretty great reading.


It’s Linux/OSMC running on it, basically a whole PC. So, yes, it can run external stuff.


The Vero V supports AV1. As far as I’m aware, it is the most modern player, with support for most every codec out there (except Dolby Vision).


It’s not just limited to Canada, it’s limited to a single ISP in Canada posting their traffic stats.



In case you haven’t seen them, I just wanna make sure you’ve heard of these guys: https://notawheelchair.com/


I have really enjoyed my Vero 4K+. They came out with a new version (the Vero V), which I haven’t used, but it now has AV1 support. My older 4K+ model has done everything great, with the exception of handling IR, which it did a mediocre job of managing. Linux irrecord is ass, but the Vero software mitigates it by having a premade library of common remotes.

Their user forums are actually really great, and their software support is also pretty good. I had help from both when setting my box up to match my TV’s feature set.

The Vero V looks to be over budget for you, but if you end up deciding you need to spend money to get a solid product, I definitely recommend this one.