
“If you get sued for the lies our AI pumped onto your website that we paid you for, it’s on you and nothing to do with us gl hf.”


In their defense, they also clearly label Immich as being under active development with frequent changes and bugs.

Edit: nvm I saw it was already discussed in another reply.


Would you accept a certificate issued by AWS (Amazon)? Or GCP (Google)? Or Azure (Microsoft)? Do you visit websites behind Cloudflare with CF-issued certs? Because all four of those certificates are free. There is no real identity validation for signing up with any of them beyond having access to some payment method (and I don’t think all of them even require that). And you could argue those four companies handle about 80-90% of the traffic on the internet these days.

Paid vs free is not a reliable proxy for trust. If anything, a non-automated process where a random engineer just gets the new cert by hand and then hopefully remembers to delete it afterwards carries a number of risk factors that don’t exist with LE (or other ACME-supporting providers).
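
To make the manual-renewal risk concrete, here’s a minimal sketch (TypeScript on Node, with a placeholder hostname) of the kind of expiry check an automated ACME setup makes unnecessary - with a manual process, someone has to remember to run something like this:

```typescript
// Minimal sketch: report how long until a host's TLS certificate expires.
// The hostname is a placeholder; node:tls is built into Node.js.
import * as tls from "node:tls";

const host = "example.com"; // placeholder host

const socket = tls.connect({ host, port: 443, servername: host }, () => {
  const cert = socket.getPeerCertificate();
  const msLeft = new Date(cert.valid_to).getTime() - Date.now();
  console.log(`${host} cert expires in ${(msLeft / 86_400_000).toFixed(0)} days`);
  socket.end();
});
socket.on("error", (err) => console.error(`TLS check failed: ${err.message}`));
```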


It used to be an open source project, then at some point the developers moved it to closed source. In reaction, a couple of people forked the last open source version of Emby and relaunched it as an open source project named Jellyfin.

It is still open source and under active development, and it has a significant userbase. On Lemmy especially, I think people much prefer it to Emby (or at least support it more vocally).


I get the convenience part, so the staff doesn’t have to go around doing it by hand, but it just seems infeasible for the other examples mentioned.

E.g. you go in, pick up an item listed for $10, finish shopping in 20 mins, and the item now costs $15 at the till… you’d probably leave it (so now the staff has to re-shelve it) and start shopping at a place that is not trying to scam you.

For the other example, if there are a few packs of something expiring and they reduce the price for all the items on the shelf, everyone will just take the ones with a reasonable shelf life left, leaving the expiring ones behind.

Both of these just seem stupid.



I have never seen contributors get anything for open source contributions.

In larger, more established projects, they explicitly make you sign an agreement (a CLA) saying your contributions are theirs for free, usually in the form of a GitHub bot that prompts you when you open a PR. Sometimes you get as much as a mention in a readme or changelog, but that’s pretty much it.

I’m sure there may be some examples of the opposite, I just… wouldn’t hold my breath for it in general.


Haven’t had any experience with Eweka, but this is why people tend to have multiple providers on different backbones and multiple indexers - to increase the chance of completion. Weirdly, Eweka does not follow DMCA but NTD, which I’ve seen described as slower to take down content, so in theory the experience should be better, especially on fresh content.

Your mileage will vary greatly depending on which indexers/providers you pick, and unfortunately it’s very difficult to say whether it will meet your expectations until you try the different options.

If you’re willing to spend some more on it, you could try just looking for a small and cheap block account from a different backbone to see if it helps with the missing articles, but there are no guarantees.


No worries!

I can empathise somewhat; I have burned myself out with work before. I have given myself anxiety by procrastinating and then spending the time thinking about everything I need to do and how I won’t have the time, instead of just doing it… to the point that I struggled to sleep, which just made me even less productive. It’s all a downward spiral, unfortunately.

I hope you get your life on the track you want it to be on!


I obviously don’t know your situation, but just remember you can’t take care of others unless you take care of yourself first - you should not be overworked either.

Great point about being aware of the strengths and weaknesses in the team!


Personally, I had an experienced manager and took great inspiration from him.

A few things I fell into:

  • it was a lot faster for me (i.e. an experienced senior dev with context knowledge) to finish a task than to assign it to someone less experienced who has to learn the context and takes 5x as long, with lots of help still needed from me. This meant I wasn’t building up my team in either experience or knowledge.
  • I assumed the deadlines I was given were set in stone and my job was to meet them. This made business-y people happy. It made everyone else (including me) miserable. I had to learn to say no and push back. It very much varies between companies, but most of the time I found it to be a negotiation: either the deadline could move, or I had to argue for excluding things from the scope to make the deadline reasonable.
  • related to the above, everything takes at least 3-5x as long as I think it will. If things finish early, it’s a great time to give my team some slack, add in additional QA work like extending tests, or repay some tech debt. Delivering early earns us a pat on the back but brings no discernible benefit to the team.
  • every time someone said “you’ll have time to write tests/repay tech debt/upskill later once X is shipped” it never came true. Those things have to be built into delivery scopes, and it’s a constant battle - if you don’t do this, nobody else will.

I’m sure there were other things too, but these are the ones I mainly recall. Talk to your team and ask for feedback. Every team, project and company is different - you’ll have to adapt.


It’s very difficult to predict the future, but my bet would be no (to the “in 20 years” question).

I doubt the hardware will last 20 years, and eventually it’ll become hard to source parts as the popularity falls off, even if you could repair it yourself. I’m sure anything with an online dependency will not work either, but offline games have a chance.

But the real question is: would you even want to use the Switch in 20 years (or honestly, even today)? There is already a better alternative (the Steam Deck) with a much more open platform and way more capabilities, and I believe it can already emulate Nintendo games (although I have no first-hand experience with that).

I have a Switch myself and would never recommend it to anyone personally.


Unless you configure Pi-hole to connect to CF via DoH, the above is still entirely true. Pi-hole is not a privacy tool; it’s a filtering tool.

I used to have this setup too, until I realised that spending even a single hour per year on Pi-hole “costs” me more than paying for a good DNS resolver that can also do the blocking, and that I can easily use on my phone when I’m away. I’m very happy to have switched, personally.


The above is still true for the upstream regardless: Pi-hole provides filtering, it doesn’t replace the privacy of a trusted upstream server, and you should still configure Pi-hole to reach the upstream via DoH.


Your ISP can most likely tell which VPN you’re using (unless you also use Tor, and even then there are theories that a lot of it is run by law enforcement… depends on how paranoid you are). They will still see the quantity of traffic going from your home to the VPN and vice versa. All they need to do is check the IP, and they’ll likely find it’s in use by … VPN service.

As long as using a VPN is not illegal in your country, you can really pay for it however you want (in some places paying with crypto may make it more suspicious than just paying through PayPal). If law enforcement really wanted to find out which VPN service you use, they probably could; the payment trail would only make it a tiny bit easier.

The key point, as mentioned multiple times, is to use one you trust. There’s no objectively best one, but you’ll find a lot of objectively bad ones (for privacy) if you research them. As a start, just never use any that sponsor YouTube videos or blog articles; pretty much all of those are crap.


VPNs usually route your DNS through the tunnel as well - sometimes to other DNS servers, sometimes just to your original DNS server but through the VPN; it’s largely up to your VPN config. All of the VPN services I’ve used to date did this, although they were all reputable ones. I wouldn’t recommend using a questionable VPN, though.
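
If you want to check which resolver your machine has actually been handed, here’s a quick sketch (TypeScript on Node; the address in the comment is illustrative):

```typescript
// Print which resolver(s) the OS has configured for this process - a simple
// way to see whether DNS points at the VPN's server or the ISP's.
import { getServers } from "node:dns";

console.log(getServers()); // e.g. [ "10.8.0.1" ] if the VPN pushed its own DNS
```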

DNSSEC only verifies the authenticity of the server and the integrity of the data, so it helps prevent man-in-the-middle attacks on DNS; it doesn’t provide privacy.

Look into DNS over HTTPS (DoH) instead. It provides end-to-end encryption for your DNS traffic, which achieves what DNSSEC does but also gives you privacy. DNS over TLS (DoT) does this too, but it runs on a dedicated port, so it’s easier to block (e.g. if your ISP decides they don’t like private DNS), while with DoH your DNS traffic looks the same as other web traffic - and afaik it can’t be blocked.

As above, it’s likely not needed in combination with a VPN, but I’d recommend looking into it for general use even when not on the VPN. Things like ControlD or NextDNS can give you even more peace of mind (although read up on their policies for yourself).
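
For a feel of what DoH looks like on the wire, here’s a minimal sketch of a lookup via Cloudflare’s JSON endpoint (any DoH provider with a JSON API works similarly; the queried name is a placeholder, and it needs a runtime with global fetch, e.g. Node 18+ or a browser):

```typescript
// Minimal DoH lookup: an ordinary HTTPS request that carries a DNS query.
// Uses Cloudflare's JSON endpoint; the domain being resolved is a placeholder.
const url = new URL("https://cloudflare-dns.com/dns-query");
url.searchParams.set("name", "example.com");
url.searchParams.set("type", "A");

const res = await fetch(url, {
  headers: { accept: "application/dns-json" }, // request the JSON wire format
});
const data = await res.json();
console.log(data.Answer); // resolved records, indistinguishable from web traffic in transit
```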




I use Unraid (currently without parity, since it’s all just stuff I’ve been okay to lose) with drives I’ve collected over the years: 2x 3TB WD Red (one of which is almost 10 years old, the other ~7 since it died once under warranty and got replaced) and 1x 12TB WD Red (~3 years old).

I was going to add another 16-20TB drive, depending on the price/TB, whenever the next expansion came up. I’ve mentioned it in another comment, but I’ve never used non-new drives and have been fairly shy about them, hence the larger-than-expected price tag for expansion.

Even if I cut down on my usenet providers/indexers, since I’ve gone a bit overboard with coverage, the yearly cost of RealDebrid/AllDebrid is still very similar to just the cost of those, entirely excluding the cost of disks - hence my interest in the feasibility.


This is likely very true, good point. I imagine there is some resilience in there being multiple debrid providers available, so worst comes to worst you’d have to pay for another membership and swap (and, I guess, hope the other one still fulfils the purpose).


Findroid is not available for Android TV as far as I know, so I couldn’t try it; my other option there would be Kodi with a plugin, but I’ve never really been a Kodi user so it’s less appealing to me.


It’s a bit nitpicky to be fair but:

  • the chapter API (skip intro/credits). I know it’s in the works and there is a plugin, but I’ve found it works much better in Plex (Emby actually has this, and it’s alright).
  • the Android apps, particularly on TV. I find the Jellyfin one somewhat meh for UX. No huge gripes, just things like how, in a list, you have to press a button at the top of the screen to display the alphabet shortcut (i.e. jump to all movies starting with a letter). On a TV this is pretty awkward IMO. I know there are a bunch of different screens around this, e.g. the one you get with smart screen to go “by letter”, or setting the list direction to horizontal to make the top button easier to reach, but it feels clunky to me - so many screens that could be replaced with one better-designed one.

I do think I’ll eventually end up on Jellyfin, though - probably once the chapter API arrives and skipping credits and intros has first-party support.


Thank you, those are pretty good prices! Have you used recertified drives? I’ve been fairly scared of used drives, so I’m curious whether you have and what their failure rate is compared to new ones.


Debrid vs usenet and the *arrs?
I currently have a very comfortable lil home server with the *arrs and Plex (I’d like Jellyfin but it’s not there yet for me; currently fielding Emby given how Plex is going), and basically all sources are usenet. I’m nearing a point where I either have to delete some stuff or expand storage, which is not cheap, and some of my older drives are likely due for failures too. So after seeing the popularity of debrid I’ve been wondering if it’d be worth spending the money on that instead, but I’d like to ask some questions first.

I spend maybe around $70/year on the various bits for usenet, and I expect I’d have to spend an average of around $80/year on drives just for expanding storage (obviously assuming I don’t just delete stuff). And that’s while avoiding 4K purely for storage reasons (my internet could take the streaming tho).

Even just the price of usenet seems to be more than the price of a debrid subscription, and from what I understand I’d not need new disks with it either.

From what I understand, debrid is a shared download space for torrents/direct downloads where if someone adds something it’s available for everyone (presumably it gets deleted if no one accessed it for some time and would have to be re-downloaded?). It’s possible to mount the content via WebDAV to make it accessible to clients/media servers to stream directly from debrid.

My questions are:

1. Is there still a point to Sonarr/Radarr with debrid?
2. How is the quality? (Both in terms of media quality and in terms of file organisation, so things are discoverable and accurate - e.g. the chances of things being explicitly named wrong, so you think you’re about to watch Brooklyn 99 and instead get porn.)
3. I would likely go the path of using zurg and keeping Plex/Emby - any experience with how well this works (any recommendations for or against)? What’s the mechanism for picking what is available in the mounts to the media server… or is it just… everything on debrid?
4. I don’t really use any torrents at the moment, and from what I understand that’s primarily how you get things onto debrid. Would I have to start looking for good trackers to get content, or is there no need because chances are someone will have downloaded/shared most things?
5. I guess: am I assuming this works very differently to how it actually does? Any experience from people who did the swap from usenet/arrs to some debrid + media server?

Many questions in a wall of text - I’d be grateful for any answers to any of them! Thanks!

Another option you could consider: rather than running the VPN on the Pi and downloading through that, run the VPN on your media server as you planned and instead use the Pi (or your router, if it can run OpenWrt) as the remote access point. Then you only need to worry about the performance needed for remote access.

I’m not sure how Tailscale works, but this is what I do with ZeroTier (i.e. run it on my router).


I think I understood what you were suggesting: try disabling the script tags one by one on a website until either we tried them all or we got through the paywall.

My point is that it’s very unlikely to be feasible on most modern websites.

I mention files because very few bits of functionality are inline scripts these days; 90-95% of the JavaScript will be loaded from the separate .js files the script tags reference.

In modern web apps the JavaScript usually goes through some sort of build system, like webpack, which does a number of things; the important one for this case is that it restructures how the code is distributed into the .js files referenced from script tags in the HTML. This makes it very difficult to target a specific bit of functionality to disable, since the paywall code is likely loaded from the same file as a hundred other bits of code that make other features work. Hence my point: sites would actively have to go out of their way to make their build process separate the paywall code from the rest of their codebase, which is probably not something they would do.

On top of this, the same build system may output differently named files after each build, since they’re often named after a hash of their content: if any code changes in any of the sources, the output file name changes as well, in an unpredictable way. This is likely a much smaller issue, since I can’t imagine them actively working on all parts of their codebase all the time.
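
As an illustration, a typical webpack output config (a made-up fragment, not any specific site’s build) names bundles after a hash of their content, so any source change renames the file:

```typescript
// Illustrative webpack config fragment: [contenthash] bakes a hash of the
// bundle's content into its file name, so e.g. main.3f9a1c.js becomes
// main.8be2d0.js after any source change. The entry path is a placeholder.
export default {
  entry: "./src/index.ts",
  output: {
    filename: "[name].[contenthash].js",
  },
};
```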

Lastly, if the way a website works is that it loads the content and then some JavaScript hides it behind a paywall then it’s much simpler to either hide the elements in front of it or make the content visible again just by using CSS and HTML - i.e. the way adblockers remove the entire ad element from the pages.
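
A browser-console sketch of that approach (the selectors are made up for illustration; real sites name these elements differently):

```typescript
// Remove a hypothetical overlay and restore scrolling - the same class of
// DOM edit an adblocker's cosmetic filter performs. Selectors are made up.
document
  .querySelectorAll(".paywall-overlay, .subscribe-modal")
  .forEach((el) => el.remove());
document.body.style.overflow = "auto"; // sites often lock scrolling behind the overlay
```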


It would only work if they specifically bundled the functions that trigger the paywall into a separate file (which is very unlikely), and it also relies on the assumption that the paywall is entirely front-end, as well as on the “default” content being served without the paywall (as opposed to the default content being paywalled and requiring JavaScript to load the actual content).