• 0 Posts
  • 27 Comments
Joined 1Y ago
Cake day: Dec 14, 2023


Lmao the first thing that came to mind was the “is there anyone else you forgot to ask” meme with Apple in between the user and the app developer.


At least with Radicle all the forks will still exist even if the authoritative copy is taken down. And even then, because Radicle works like BitTorrent, anybody who pinned the main repo would still be seeding it, so it would be very hard to scrub it completely. The main challenge with Radicle is getting an active contributor with some reputation to maintain their copy on there; otherwise there’s no momentum and nobody will pin the countless mirrors published by randos.
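
For context, pinning a repo so your node keeps seeding it is roughly this with the rad CLI (a sketch from memory - the repo ID is a placeholder and the exact commands are worth checking against rad --help):

  # fetch a repo by its Radicle ID and keep a working copy
  rad clone rad:zExampleRepoId
  # or just tell your node to replicate/seed it without checking it out
  rad seed rad:zExampleRepoId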


Hah I wish we could ignore them. It seems to vary from ISP to ISP in the US, but our small-town ISP turns off your connection and puts you behind a captive portal, forcing you to click through and acknowledge what you did wrong before your connection is turned back on.



I’ve done a backup swap with friends a couple of times. Security wasn’t much of a worry since we connected to each other’s boxes over SSH, WireGuard, or similar and used tools that support encryption. The biggest challenge was that in my selfhosting friend group we all prefer different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems, and set that up. The second challenge was keeping the remote access we set up for each other alive - and that’s what killed the project, since we all eventually stopped maintaining it and nobody seemed to care. If I were to do it again, I would make sure every participant has alerts monitoring their shared endpoint.
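
If anyone wants a concrete starting point for the “tools that support encryption” part, restic over SSH is roughly this (hostnames and paths are made up for illustration):

  # initialize an encrypted repo on your friend's box over sftp
  restic -r sftp:me@friends-nas.example.com:/backups/mybox init
  # nightly backup - only ciphertext ever lands on their disk
  restic -r sftp:me@friends-nas.example.com:/backups/mybox backup /home /etc
  # prune old snapshots so you don't eat all of their space
  restic -r sftp:me@friends-nas.example.com:/backups/mybox forget --keep-daily 7 --keep-weekly 4 --prune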



You can achieve a similar thing using VLANs. By default they’re isolated, but you can add specific rules that allow traffic between VLANs if it meets certain criteria (specific ports, specific types of traffic, traffic to or from specific hosts, or any combination of those). So you can think of client isolation as putting each client on its own VLAN, except without needing a separate subnet for each client.
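
As a rough illustration of the “specific rules” part, on a Linux router it could look something like this with nftables (interface names and addresses are made up, and it assumes you already have an inet filter table with a forward chain):

  # allow replies for connections that are already established
  nft add rule inet filter forward ct state established,related accept
  # let the IoT VLAN reach only the MQTT broker on the trusted VLAN
  nft add rule inet filter forward iifname "vlan20" oifname "vlan10" ip daddr 192.168.10.5 tcp dport 1883 accept
  # everything else between the two VLANs gets dropped
  nft add rule inet filter forward iifname "vlan20" oifname "vlan10" drop
  nft add rule inet filter forward iifname "vlan10" oifname "vlan20" drop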


To add to the other reply, client isolation is about controlling whether an AP, switch, or router willingly forwards traffic between clients. Because of that, it doesn’t help against someone sniffing packets over the air before they’ve even reached the AP. For that you need a wifi-specific measure, and I think “Enhanced Open” (OWE) is what you’d be interested in. It lets you run an open, passwordless wifi but generates temporary encryption keys for each connected client; after that the rest works as if it were WPA, so you don’t need to enter a password but your traffic is still encrypted and protected from anyone else listening in on the WiFi.

If you combine both then you should have a network where each device is isolated both over the air and from a routing perspective so that each device only sees an Internet connection and no other devices.
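
If you run your own AP with hostapd, the combination looks roughly like this (an untested sketch - double check the option names against the hostapd.conf documentation):

  # open-looking SSID with OWE encryption plus client isolation
  interface=wlan0
  ssid=MyOpenNetwork
  hw_mode=g
  channel=6
  # Enhanced Open (OWE): per-client encryption without a passphrase
  wpa=2
  wpa_key_mgmt=OWE
  ieee80211w=2
  rsn_pairwise=CCMP
  # don't forward traffic between wireless clients on this AP
  ap_isolate=1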


The same way FileBot and every other tool does - the file needs to have some label, either an absolute episode number or a season + episode number. I’m not aware of any tool that can look at the video content itself and figure out which episode it is visually, without any information from the filename - but I’d be happy to be proven wrong, because I’d be impressed.

Sonarr/Radarr do analyze the content somewhat, but that’s just for gathering resolution, codec, HDR, audio language, and subtitle information, which can all be added to the filename format for inclusion during renaming.
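
To make the “some label in the filename” point concrete, this is roughly the kind of pattern matching these tools do (a toy Python sketch, not Sonarr’s or FileBot’s actual parser):

  import re

  # toy season/episode detection from a filename;
  # real parsers handle far more naming schemes than this
  PATTERNS = [
      re.compile(r"[Ss](?P<season>\d{1,2})[Ee](?P<episode>\d{1,3})"),  # S01E02
      re.compile(r"(?P<season>\d{1,2})x(?P<episode>\d{1,3})"),         # 1x02
      re.compile(r"- (?P<episode>\d{1,4})\b"),                         # absolute: "Show - 125"
  ]

  def parse(filename):
      for pattern in PATTERNS:
          match = pattern.search(filename)
          if match:
              groups = match.groupdict()
              season = int(groups["season"]) if groups.get("season") else None
              return season, int(groups["episode"])
      return None  # no label in the name, so nothing to match against metadata

  print(parse("Show.Name.S02E05.1080p.mkv"))    # (2, 5)
  print(parse("[Group] Show - 125 [720p].mkv")) # (None, 125)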


I second using Sonarr/Radarr: once media is imported it detects episodes and lets you one-click rename to a specific format and folder organization.

If you don’t want any of the other features of Sonarr/Radarr (like having a way to filter and manage your collection to see what’s in which quality or from which release group, searching multiple indexers with a single search, being able to send a specific search result to a downloader and have it automatically imported and organized when complete, or auto-downloading based on requests using scoring rules you set), then there’s also FileBot, which a lot of people seem to like and which seems to focus on just matching against online metadata and renaming.

But I haven’t tried FileBot since I like the extra features and capabilities of Sonarr/Radarr. They make it easy to manage several library folders: an archive for anything that’s been reviewed, is complete, and is in a quality/codec I’m satisfied with, plus an active folder for currently airing shows, which is also where auto-downloaded stuff I haven’t reviewed lands.


I use a NUC10i7FNKN, and since transcoding is almost entirely done by the dedicated Quick Sync hardware in the CPU, you don’t end up actually using the CPU much. So I’m sure it would work on an older generation or the i5 version. I don’t know much about the N100, but it looks like it would be very capable: supposedly it boosts to 3+ GHz and it’s on a 10 nm node compared to my NUC’s 14 nm. The GPU has the same number of execution units, though, so I’m not sure the Quick Sync transcoding performance is much different. I saw someone mention 3 simultaneous 4K transcodes and I think I got about that much on mine. Generally for Quick Sync performance you just compare the Intel HD or UHD Graphics model (630, 730, etc.) and the number of execution units, and that should correlate with performance. Also check the Wikipedia page for Quick Sync for codec compatibility (under the “Hardware decoding and encoding” section), but anything recent will handle most stuff you’d need: https://en.m.wikipedia.org/wiki/Intel_Quick_Sync_Video
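
If you want to sanity check a given box’s Quick Sync throughput yourself, a quick ffmpeg run is an easy benchmark (a generic QSV invocation for illustration, not what Plex runs internally):

  # hardware decode a 4K HEVC sample and hardware encode it to 1080p H.264
  ffmpeg -hwaccel qsv -hwaccel_output_format qsv -c:v hevc_qsv -i sample_4k.mkv \
         -vf scale_qsv=w=1920:h=1080 -c:v h264_qsv -b:v 8M -c:a copy out.mkv
  # the reported speed (e.g. "speed=3.2x") roughly says how many such streams it can handle at once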


I actually run my arr stack on a Synology; it has official support for Docker and docker-compose. Granted, I do have a higher-powered model (the DS1621xs+), but most of the arr stack is fairly friendly to low-power hardware.

You can also get away with running Plex on a NAS, but I would only do it if 1. your NAS has a Quick Sync-capable CPU and you get that enabled properly, or 2. you go with a direct-streaming-only / no-transcoding setup - which means checking codec support on all client devices and either only downloading exactly the supported codecs or pre-transcoding everything.

What I do instead is run Plex/JF on a separate NUC and point it at the NAS using a network mount. Just don’t put the Plex app database on a network mount (the same probably applies to JF too); only mount the media files themselves. Running Plex with its database accessed over a network mount is a big no-no for various reasons.
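
A rough compose sketch of that split, for anyone who wants it (the paths are just examples, adjust for your own layout):

  services:
    plex:
      image: plexinc/pms-docker
      network_mode: host
      devices:
        - /dev/dri:/dev/dri          # Quick Sync passthrough
      volumes:
        - /opt/plex/config:/config   # app database on the NUC's local disk
        - /mnt/nas/media:/media:ro   # media over the network mount from the NAS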


I use a Synology NAS, which has official support for Docker / docker-compose, to run my arr stack, with n+2 btrfs redundancy. For Plex and Jellyfin I use an Intel NUC10i7 with Quick Sync, with the NAS media folder mounted over the network via a direct gigabit link between the two so that the traffic stays off my switch.

I could have gotten away with doing it all on the NAS if I’d forgone ECC in favor of Quick Sync, but my first priority for the NAS is keeping personal artifacts safe, so I went with ECC.


Unless it’s a very weird special-order display it’s probably still 60 Hz, so that transitions between menu screens and animations look smooth.


It should be safe; using fstab is how I do a network mount to a specific folder too, so it doesn’t change or anything.
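
For anyone curious, that kind of mount is a single fstab line, something like this (host and paths are made up):

  # /etc/fstab - NFS share from the NAS mounted at a fixed path
  192.168.1.10:/volume1/media  /mnt/nas/media  nfs  defaults,_netdev,noatime  0  0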


Somebody should make an API shim that proxies OpenAI-compatible requests to this. And since Microsoft is forcing Copilot on Windows 10, they’re on the shit list too. Load-balance all the AI workloads onto both of them through API adapters.
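
The shim itself wouldn’t be much code - something in the spirit of this Python sketch, where the upstream URL and response mapping are entirely made-up placeholders:

  # toy OpenAI-compatible shim: accept /v1/chat/completions and forward the
  # prompt to some other backend (run with: uvicorn shim:app)
  from fastapi import FastAPI, Request
  import httpx

  app = FastAPI()
  UPSTREAM = "https://example.invalid/other-assistant-api"  # placeholder backend

  @app.post("/v1/chat/completions")
  async def chat_completions(request: Request):
      body = await request.json()
      # flatten the OpenAI-style messages into whatever the upstream expects
      prompt = "\n".join(m["content"] for m in body.get("messages", []))
      async with httpx.AsyncClient() as client:
          upstream = await client.post(UPSTREAM, json={"prompt": prompt})
      text = upstream.json().get("reply", "")
      # wrap the reply back into an OpenAI-style response object
      return {
          "object": "chat.completion",
          "choices": [{"index": 0, "message": {"role": "assistant", "content": text}}],
      }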


Brb uploading a 5GiB file from /dev/urandom to make sure there isn’t a byte of space left in OneDrive for them to do this to me.
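
(For reference, that’s a one-liner:)

  # write 5 GiB of random bytes to eat the remaining space
  dd if=/dev/urandom of=filler.bin bs=1M count=5120 status=progress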



Has anybody made a Matrix app that looks like a Discord clone? That sounds easier since the federated rich-text chat part is already built; the current clients just don’t really appeal to the Discord crowd.


I’m gonna RCM my Switch now and copy my Animal Crossing island to my PC to be emulated. Shoulda done that a long time ago to have a backup that I actually own, instead of having to pay Nintendo to keep my save data backed up. I see no reason to play any more Nintendo games on my Switch, much less purchase any, now that I have my Steam Deck.


Prowlarr has torrentlite as one of the domain options under the rarbg indexer. I guess they use a common profile for all the rarbg clones since they share a similar HTML structure. You just add rarbg and then switch the base URL to torrentlite in its options.


Isn’t Miracast for sending video data? The thing I like about Chromecast is that the phone or remote app just tells the Chromecast where to load the media directly from, and then only sends playback control commands. That makes it a lot lighter resource wise because you don’t need to proxy the stream through a device like a phone that wants to go to sleep to save battery.


Note that the 2x10G is SFP+, not SFP. I was briefly confused; I have tons of SFP+ stuff but no SFP gear whatsoever.


If it’s just videos you want, you can try using the network inspector to see if you can catch the URL of the file - assuming that giving youtube-dl the URL of the video’s webpage along with a snapshot of your browser’s logged-in cookies doesn’t work. You might also see an m3u8 in the network inspector; you can give its URL to youtube-dl as well and it’ll download all the segments and merge them into a video file (you might also need auth cookies or headers, unless it’s a temporary URL that works anywhere - just check the network request to see what’s sent). Some sites use separate m3u8s for video and audio, or multiple ones for different video qualities, so you might need to switch the quality to maximum for the browser to request the high-quality stream URL. You might also see a file that just lists the URLs of the m3u8s for each quality. If you see a vtt file you can grab that too, convert it to an srt, and remux with mkvtoolnix to embed it in the file as an optional subtitle.

This should all work as long as they don’t use DRM / Widevine-type stuff and don’t have some supremely annoying security measures (like one-time-use authenticated URLs, where by the time your browser shows it in the network inspector the URL is already expired). If they do use Widevine, you’ll need some kind of screen / HDMI capture setup instead.
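
To make the workflow concrete, the commands look roughly like this (URLs are placeholders; yt-dlp is the maintained youtube-dl fork and these options exist there):

  # try the page URL first, reusing your browser's logged-in session
  yt-dlp --cookies-from-browser firefox "https://example.com/watch/12345"
  # or feed it an m3u8 you spotted in the network inspector
  yt-dlp "https://cdn.example.com/stream/master.m3u8"
  # ffmpeg can also pull and merge the HLS segments directly
  ffmpeg -i "https://cdn.example.com/stream/master.m3u8" -c copy video.mkv
  # convert a vtt subtitle and remux it in as an optional track
  ffmpeg -i subs.vtt subs.srt
  mkvmerge -o video_with_subs.mkv video.mkv --language 0:eng subs.srt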


I think that text is from melroy, so it’s according to him. From seeing his interactions in the kbin issue tracker I got a somewhat egotistical impression of him: he would often take an issue that had just been opened, before it was triaged or the best fix discussed, and open a PR with how he thought it should be fixed. It sounds like his frustration is that his hasty PRs weren’t getting merged quickly because people wanted to come to a consensus first.

Maybe I’m just reading into it but it felt like he just wanted his name on something and it wasn’t happening with kbin.

Edit: I want to add that I don’t mean to shit on him as a dev or as a person. It’s possible I’ve only seen a one-sided view of his interactions, as a busy contributor who just wants to whittle down the issue list as fast as possible, and that he has good intentions; regardless, he seems like a very capable dev. It’s just that, based on the issues and discussions I’ve come across, it doesn’t seem fun to contribute alongside him, and if I treated the contributors list as a scoreboard with the goal of getting my name on as many commits as possible, I think it would be hard to tell us apart. I was going to keep these thoughts to myself, but I’ve seen other people comment similar things in other threads about mbin, so maybe it’s worth sharing my skepticism. Take from it what you will.


I went with the DS1621xs+, the main driving factors being:

  • I already had a 6-drive raidz2 array in TrueNAS and wanted to keep the same configuration
  • I also wanted ECC, which, while maybe not strictly necessary, matters because the most valuable thing I store is family photos, and I want to do everything within my budget to protect them.

If I remember correctly only the 1621xs+ met those requirements, though if I had been willing to go without ECC (which is what requires the Xeon), the DS620slim would have given me 6 bays and integrated graphics with Quick Sync, allowing power-efficient transcoding and running Plex/JF right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.

A good way to narrow down off-the-shelf NASes is to first decide what level of redundancy you want and how many drives you want to run, factoring in how much the drives will cost, whether you want an extra level of redundancy while a rebuild is happening after one failure, and how much space is sacrificed to parity. Newegg’s NAS builder comes in handy: just select “All” capacities, use the NAS filters for number of drive bays, and compare what’s left.

And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports Docker and docker-compose out of the box (once the container app is installed), so I just ssh into the box and keep my compose folders somewhere on the btrfs volume. Docker nicely lets anything run without worrying about dependencies being available on the host OS; the only gotcha is kernel stuff, since Docker containers share the host kernel. For example, WireGuard relies on kernel support, and I could only get it to work using a userspace WireGuard container (boringtun), and only after the VPN/Tailscale app is installed (presumably because that adds the tun/tap interfaces that VPN containers need).
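
For anyone hitting the same WireGuard gotcha, the general shape of the workaround is below - the image name is just a placeholder for whichever userspace (boringtun / wireguard-go) image you end up using; the important parts are the tun device and the NET_ADMIN capability:

  services:
    wireguard:
      image: some/userspace-wireguard   # placeholder, not a real image name
      cap_add:
        - NET_ADMIN
      devices:
        - /dev/net/tun:/dev/net/tun     # tun device the userspace tunnel attaches to
      volumes:
        - ./wg0.conf:/etc/wireguard/wg0.conf:ro
      restart: unless-stopped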

Only Jellyfin/Plex is on my NUC. On the NAS I run:

  • AdGuard
  • Sonarr/Radarr/Lidarr/Prowlarr/Transmission/Overseerr
  • CastBlock
  • Grocy
  • Nextcloud
  • A few nginx instances for websites
  • Uptime Kuma
  • Vaultwarden
  • Traefik and WireGuard, which connect to a VPS acting as a reverse proxy for anything that needs to be reachable from the public internet


Just want to second this. I use an Intel NUC10i7 with Quick Sync for Plex/Jellyfin; it can transcode at least 8 streams simultaneously without breaking a sweat, probably more if you don’t have 4K. A separate Synology NAS mainly handles storage. I run Docker containers on both, and the NUC has my media mounted via a network share over a dedicated direct gigabit Ethernet link connecting the two, so all the filesystem access traffic stays off my switch/LAN.

This strategy was about being able to pick the best NAS for my redundancy needs (raidz2 / btrfs with double redundancy for my irreplaceable personal family memories) while getting a cost-effective, low-power Quick Sync device for transcoding my media collection. I chose on-the-fly transcoding over pre-transcoding or keeping multiple qualities, to save HDD space and stay flexible for the low-bandwidth needs of whoever I share with who has a slow connection.