• 0 Posts
  • 12 Comments
Joined 1Y ago
Cake day: Jul 08, 2023


Not really, but I do notice that sometimes my ISP will throttle me and it stops when I use a VPN, so I usually just use a VPN (and I never torrent locally anymore, it’s like waiting for a snail to deliver your Amazon package).


Sure, there’s a huge variety among private trackers. Googling “buy private tracker invite” turns up dozens of different sellers. Some tracker invites can run $150 because they’re gigantic communities full of content that don’t send out invites often. Others are cheap enough to throw in as freebies when you buy something else.

What’s really nice about buying an invite is splurging the extra $10 and getting a built-in 500-800 GB of quota with it (really you’re buying the account itself). Then you don’t have to “work your way up” as long as you keep seeding whatever is popular.


Make an image of your SD card if you haven’t already. Better yet, run the OS from USB. SD cards do die.
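If you’d rather script the backup than do it by hand, here’s a minimal Python sketch (the device path is a placeholder for your actual SD card; run it as root with the card unmounted):

```python
# Copy a block device to an image file in fixed-size chunks.
# "/dev/mmcblk0" is a placeholder; point it at your real SD card device.
CHUNK = 4 * 1024 * 1024  # 4 MiB per read keeps memory use flat

with open("/dev/mmcblk0", "rb") as src, open("sdcard-backup.img", "wb") as dst:
    while True:
        block = src.read(CHUNK)
        if not block:  # read() returns b"" at the end of the device
            break
        dst.write(block)
```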


Appreciate the advice, that would make it less aggravating. Which one do you recommend? I’m on Newshosting and have no problems that aren’t just general Usenet problems.

I’m just gonna invite you to google this and see where it takes you. Might not be up your alley, might be a complete gamechanger: InviteHawk


They’re running in a datacenter in the Netherlands with a ridiculous amount of bandwidth. I did find out they’re classified as an “ISP and web hosting company”.

All our Dedicated Servers have 1Gbit connections with a dedicated 1GigE uplink.

I’d also guess that many of the seeds on any torrent (on a private tracker) are coming from seedboxes too. That would explain why it’s so fast: there’s tons of bandwidth between the datacenters themselves. I’m definitely throttled at 100 MB/s regardless of how many torrents I’ve got running (1 or 100), but if they’re running 50-100+ instances along with dedicated servers, they must have terabits of bandwidth (100 MB/s is 0.8 Gbit/s, so 100 capped instances alone adds up to 80 Gbit/s).

So long story medium: unless you can install your home server in a datacenter with a multi-terabit link to the backbone, it will be tough to replicate.


I’ve used it for 15+ years, and yeah, that’s a huge downside. Older content used to be widely available, but more often than not anything popular is removed within a few months of posting. It is actually pretty great for obscure content that won’t get taken down. It’s cheap, but a whole new thing to learn. It is faster than torrenting directly to your own computer, but a seedbox blows Usenet out of the water as far as speed goes: 50-100 MB/s easily (at least using private trackers).


Usenet was great 10-15 years ago, but nowadays it’s flooded with fake/private downloads, and retention is shit simply because the few remaining backbone providers comply with takedown requests. Absolutely useless for older content by any major studio. It’s all new stuff, which is mostly garbage anyway. We were able to get a ton of “This Old House” recently, though.


No, but this isn’t really limiting sales of the book in any way. I buy real used books, and I buy new books sometimes. I go through a few Audible credits a month. I also pirate books if I feel like it. I’ve had books I bought and got rid of, then years later decided to pirate and read again. Anyway, used books are so ridiculously cheap that it’s very rare for me to buy a book new, and when I do it’s often a gift for a friend.

I also use ChatGPT almost every day, and while I have asked it for the summary of a book I didn’t feel like reading, it has never once replaced “reading a book” in my life. You can also get the summary of most books on Wikipedia if that’s all you want.


My gf got several letters and I started using a VPN. Easy peasy. No problems.

Now I’ve moved to seedboxes (seedhost.eu) and private trackers. First I bought an invite to a private tracker (if you spend like $20 you can get an invite to one of the less prestigious ones, plus like 500 GB of quota). This is kind of a process, since private trackers are 1000% against selling invites, so it’s a “marketplace” forum type deal, not a one-minute PayPal transaction. Took me a couple days to get my first invite.

Then I use that tracker on the seedbox, which has a few TB of disk. Then I SFTP in (I’ve used the app Forklift for many years and highly recommend it if you’re on a Mac, it’s amazing) and transfer down.
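If you’d rather script the transfer than use a GUI, here’s a rough Python sketch using the third-party paramiko library (the host, login, and paths are placeholders for your own seedbox details):

```python
# Pull a finished download off the seedbox over SFTP.
# Requires: pip install paramiko. Host, credentials, and paths are made up.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a sketch; verify host keys for real use
client.connect("myslot.seedhost.example", username="me", password="secret")

sftp = client.open_sftp()
sftp.get("downloads/Some.Release.mkv", "Some.Release.mkv")  # remote path, local path

sftp.close()
client.close()
```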

I get like 7 MB/s through the VPN, which is alright for me, and even without a VPN you’re fine: it’s just ordinary traffic coming from a server. You aren’t torrenting from your machine, so there’s no issue.

To get quota on the trackers, you can either buy an invite that includes some quota or build it up yourself. The seedboxes I use have like 100 MB/s upload speed, so you’d just download some super popular (freeleech if possible) torrents and then seed for a while. If your invite comes with some quota, you’ll likely end up with more quota than you know what to do with. I bought an invite with a 100 GB quota and now I have like 4 TB of quota.

The downside is cost, which might defeat the point of piracy for some. I pay like $6 a month for my instance. But if you’re willing to pay for a more powerful instance, you can run Plex directly on it and stream everything if you want. I download locally and put it on my local Plex server.


I don’t remember the presentation, but luckily I did remember the concept and here’s an article: https://netflixtechblog.com/reactive-programming-in-the-netflix-api-with-rxjava-7811c3a1496a

It’s called “reactive” programming, and that article goes over some of the basic premises. The context of the presentation was front-end (web) code, where it’s a god-awful mess if you try to handle everything in an imperative style. React = reactive programming. If you’ve ever wondered why React took off like it did, it’s because these concepts transformed the hellish nightmare landscape of jQuery and cobbled-together websites into something resembling manageable complexity (I’m ignoring a lot of stuff in between; the best parts of Angular were reactive too).

Reactive programming is really a pipeline for your data, so the concepts are applicable to all sorts of development: low-level packet processing, web application development on both the front and back end, data processing, anything else. You can use these patterns in any software, but unless your data is async it’s just “functional programming”.
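To make that concrete, here’s a toy sketch in plain Python (the event data is invented for illustration): the “events” are just a stream you declare transformations over, rather than a pile of stateful handlers.

```python
# A stream of UI "events" (made up) processed as a declarative pipeline:
# filter -> map, with no mutable state and no handler spaghetti.
clicks = [
    {"x": 10, "y": 20, "double": False},
    {"x": 300, "y": 40, "double": True},
    {"x": 55, "y": 90, "double": True},
]

double_click_positions = [
    (c["x"], c["y"])   # map: event -> the part we care about
    for c in clicks
    if c["double"]     # filter: only double-clicks
]
print(double_click_positions)  # [(300, 40), (55, 90)]
```

Swap the list for an async source of real events and the shape of the code stays the same; that’s the “reactive” part.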


Yep, that’s how I write my code too. I took a class in college, comparative programming languages, that really changed how I thought about programming. The first section of the class was Ruby, and the code most of us wrote was pretty standard imperative-style code: if statements, loops, etc. Then we spent a month or so in Haskell, basically rewriting parts of the standard library using only more basic functions. I found it insanely difficult to wrap my head around, but eventually I did.

Then we went back and wrote some more Ruby. A program that might have been 20-30 lines of imperative Ruby could often be expressed in 3 or 4 lines of functional-style code. For me that was a huge eye-opener, and I’ve continued to apply functional-style patterns regardless of the language I’m using (as long as it’s not out of style for the project and doesn’t make anything less maintainable/reliable).
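For a flavor of what that looked like, here’s the same toy computation both ways in Python (the task is invented, just to show the shape):

```python
# Made-up task: total price of in-stock items, with tax.
items = [
    {"price": 10.0, "in_stock": True},
    {"price": 25.0, "in_stock": False},
    {"price": 4.0, "in_stock": True},
]
TAX = 1.1

# Imperative style: loop, branch, accumulate.
total = 0.0
for item in items:
    if item["in_stock"]:
        total += item["price"] * TAX

# Functional style: the whole thing collapses into one expression.
total_fn = sum(item["price"] * TAX for item in items if item["in_stock"])

assert abs(total - total_fn) < 1e-9
```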

Then one day a coworker showed us a presentation from Netflix (done by Netflix software engineers, not related to the streaming service) about how to think about event handlers differently. Instead of thinking of them as “events”, think of them as async streams of data: basically just a list you’re iterating over (except asynchronously). That blew my mind at the time, because it lets you unify the synchronous and asynchronous programming paradigms and reuse the same primitives (map/filter/reduce) and patterns in both.

This goes far beyond just eliminating if statements: it turns out that if you can reduce your code to a series of map/filter/reduce steps, you’re in an insanely good spot for refactoring, reusing functionality, easily supporting new use cases, flexibility, etc. The downside is that more junior devs almost never think this way (so it’s tough for them to work on), and it can get really messy and too abstract on large projects. You can’t take these things too far and need to stay practical, but those concepts really changed how I looked at programming in a major way.

It went from “a program is a step-by-step machine for performing many types of actions” to “a program is a pipeline for processing lists of data”. A step-by-step machine is complex and can easily break down, especially when you start changing things. Pipelines are simple and reliable, and as long as you connect them up properly the data will flow where it needs to flow. It’s easy to add new parts without impacting existing code. And any data is a list, even if it’s a list of a single element.
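Here’s a minimal Python sketch of that unification (the event source is faked with a timer, purely for illustration):

```python
# An "event stream" is just an async list: iterate it with the same
# filter/map shape you'd use on a plain list.
import asyncio

async def key_presses():
    # Stand-in for a real event source; yields events as they "arrive".
    for key in ["a", "B", "c", "D"]:
        await asyncio.sleep(0.01)
        yield key

async def main():
    # Same primitives as the synchronous world, just with `async for`.
    uppercase = [k async for k in key_presses() if k.isupper()]
    print(uppercase)  # ['B', 'D']

asyncio.run(main())
```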


Personally I try to keep my code as free of branches as possible, for simplicity’s sake. Branch-free code is often easier for a human to understand and to predict. If your program is a giant block of if statements, it’s going to be harder to make changes easily and reliably. And you’re likely leaving useful, reusable functionality gunked up and spread throughout your application.

Every piece of software really is a data processing pipeline: you take some input, do some processing, and produce some output, usually along with some side effects (network requests, writing files, etc). Thinking about your software this way can help you design it better. I rarely write code that needs to process large amounts of data, but pretty much any code can benefit from intentional simplicity and design.
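As a tiny Python illustration of trading branches for structure (the names are invented): an if/elif ladder becomes a dispatch table, so adding a case means adding data rather than another branch.

```python
# Each handler is a small reusable function instead of a branch arm.
def handle_create(payload):
    return f"created {payload}"

def handle_delete(payload):
    return f"deleted {payload}"

# The "branching" is now just data.
HANDLERS = {
    "create": handle_create,
    "delete": handle_delete,
}

def handle(event_type, payload):
    # One dict lookup replaces a growing if/elif ladder.
    return HANDLERS[event_type](payload)

print(handle("create", "doc1"))  # created doc1
```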