I’m excited to announce the first alpha preview of this project that I’ve been working on for the past 4 months. I’m initially posting about this in a few small communities, and hoping to get some input from early adopters and beta testers.

What is a DHT crawler?

The DHT crawler is Bitmagnet’s killer feature that (currently) makes it unique. Well, almost unique, read on…

So what is it? You might be aware that you can enable DHT in your BitTorrent client, and that this allows you to find peers who are announcing a torrent’s hash to a Distributed Hash Table (DHT), rather than to a centralized tracker. A lesser-known feature of the DHT is that it allows you to crawl the info hashes it knows about. This is how Bitmagnet’s DHT crawler works - it crawls the DHT network, requesting metadata about each info hash it discovers. It then further enriches this metadata by attempting to classify it and associate it with known pieces of content, such as movies and TV shows. Finally, it allows you to search everything it has indexed.
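For the technically curious, here is a rough Go sketch of that crawl-then-enrich loop. It is purely illustrative - the interfaces and names below are hypothetical placeholders rather than Bitmagnet’s actual internals - but the two DHT mechanisms it leans on (BEP 51 info-hash sampling and BEP 9 metadata exchange) are the standard ways a crawler like this typically discovers hashes and fetches their metadata:

```go
// Purely illustrative sketch of a DHT crawl-then-enrich loop; the interfaces
// and names here are hypothetical, not Bitmagnet's actual internals.
package crawler

import "context"

// InfoHash is a torrent's 20-byte identifier on the DHT.
type InfoHash [20]byte

// TorrentMeta is the metadata fetched from peers for one info hash.
type TorrentMeta struct {
	InfoHash InfoHash
	Name     string
	Files    []string
}

// DHT abstracts the two operations a crawler needs: discovering info hashes
// (BEP 51 sample_infohashes) and fetching metadata from peers (BEP 9).
type DHT interface {
	SampleInfoHashes(ctx context.Context) ([]InfoHash, error)
	FetchMetadata(ctx context.Context, h InfoHash) (TorrentMeta, error)
}

// Indexer classifies metadata (e.g. movie/TV matching against TMDB) and
// persists it for searching.
type Indexer interface {
	Classify(meta TorrentMeta) (contentType string)
	Save(ctx context.Context, meta TorrentMeta, contentType string) error
}

// Crawl loops forever: discover hashes, fetch their metadata, classify, store.
func Crawl(ctx context.Context, dht DHT, idx Indexer) error {
	for {
		if err := ctx.Err(); err != nil {
			return err
		}
		hashes, err := dht.SampleInfoHashes(ctx)
		if err != nil {
			continue // transient DHT errors are expected; keep crawling
		}
		for _, h := range hashes {
			meta, err := dht.FetchMetadata(ctx, h)
			if err != nil {
				continue // not every hash has reachable peers
			}
			contentType := idx.Classify(meta)
			if err := idx.Save(ctx, meta, contentType); err != nil {
				return err
			}
		}
	}
}
```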

This means that Bitmagnet is not reliant on any external trackers or torrent indexers. It’s a self-contained, self-hosted torrent indexer, connected via the DHT to a global network of peers and constantly discovering new content.

The DHT crawler is not quite unique to Bitmagnet; another open-source project, magnetico, was (as far as I know) the first to implement a usable DHT crawler, and was a crucial reference point for implementing this feature. However, that project is no longer maintained, and it lacks other features, such as content classification and integration with other software in the ecosystem, that greatly improve usability.

Currently implemented features of Bitmagnet:

  • A DHT crawler
  • A generic BitTorrent indexer: Bitmagnet can index torrents from any source, not only the DHT network - currently this is only possible via the /import endpoint; more user-friendly methods are in the pipeline
  • A content classifier that can currently identify movie and television content, along with key related attributes such as language, resolution and source (BluRay, webrip etc.), and enrich this with data from The Movie Database
  • An import facility for ingesting torrents from any source, for example the RARBG backup
  • A torrent search engine
  • A GraphQL API: currently this provides a single search query; there is also an embedded GraphQL playground at /graphql (a rough example request is sketched just below this list)
  • A web user interface implemented in Angular: currently this is a simple single-page application providing a user interface for search queries via the GraphQL API
  • A Torznab-compatible endpoint for integration with the Servarr stack
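Since the API surface is currently a single GraphQL search query, here is a hedged sketch of what calling it from Go might look like. The query shape is a made-up placeholder - the real schema is browsable in the playground at /graphql - and port 3333 is assumed from the example Docker setup:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// NOTE: this query shape is a made-up placeholder; browse the playground
	// served at /graphql for the actual schema.
	query := `query { torrentContent { search(query: "example") { items { title } } } }`

	body, err := json.Marshal(map[string]string{"query": query})
	if err != nil {
		log.Fatal(err)
	}
	// Port 3333 is the default exposed in the example Docker setup.
	resp, err := http.Post("http://localhost:3333/graphql", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```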

Interested?

If this project interests you then I’d really appreciate your input:

  • How did you get along with following the documentation and installation instructions? Were there any pain points?
  • There’s a roadmap of high-priority features on the website - what do you see as the highest priority for near-term development?
  • If you’re a developer, are you interested in contributing to the project?

Thanks for your attention. If you’re interested in this project and would like to help it gain momentum then please give it a star on GitHub, and expect further updates soon!

@Dasnap@lemmy.world

It’s only once you install something like this that you realize just how many torrents are porno.

I’ve always been curious about ‘Anal Police Stories 2’ but I’ve never found the time.

@PipedLinkBot@feddit.rocks
bot account

Here is an alternative Piped link(s):

scrubs YouTube clip

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

@pedroapero@lemmy.ml

Great project !

Naming conventions are missing some important information like bitrate, color depth, and most importantly language and subtitles.

Do you plan to scrape additional info from known torrent sites (searching for the torrent hashes of well-named torrents)?

@mgdigital@lemmy.world
creator

Scraping torrent sites will be avoided, as it’d be prohibitively slow and break the self-sufficiency concept - we’ll infer as much as possible from the torrent meta info alone. You could have a guess at the bitrate from the file sizes. Sonarr/Radarr will already do this for you with quality profiles, I think.
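To illustrate the back-of-the-envelope maths (note the runtime isn’t in the torrent meta info, so it would have to come from enrichment data such as TMDB):

```go
package main

import (
	"fmt"
	"time"
)

// BitrateMbps estimates an average bitrate from file size and runtime:
// bits / seconds / 1e6. The runtime isn't in the torrent meta info, so it
// would have to come from enrichment data such as TMDB.
func BitrateMbps(sizeBytes int64, runtime time.Duration) float64 {
	return float64(sizeBytes) * 8 / runtime.Seconds() / 1e6
}

func main() {
	// e.g. a 4 GiB file for a two-hour movie works out to roughly 4.8 Mbps.
	fmt.Printf("%.1f Mbps\n", BitrateMbps(4<<30, 2*time.Hour))
}
```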

@prim3r@lemmy.ca

This looks really cool! How resource intensive is this? What sort of storage requirements are there for this to be a reasonably reliable method of acquiring media? I’m probably just gonna find out myself. I’ve recently fully switched over to usenet, but this could make torrents pretty compelling again.

@deafboy@lemmy.world

Running for 6 days, save_pieces: false

My database is currently 184 GB

@mgdigital@lemmy.world
creator

Hi, and thanks!

As a priority I’d like to gather some more rigorous performance benchmarks, but I can give you some hand-wavey stats now: Bitmagnet is currently fluctuating between 2-10% CPU usage on my M2 Mac Mini, and is using ~120MB of memory having currently been running for around 48 hours. Overall, the GoLang implementation seems pretty efficient to me considering how much I know is going on in the background.

Disk space usage of the database will be highly dependent on two configuration options, the first of which I’ve only just added in the just-released version. Copied from the configuration page of the website:

  • dht_crawler.save_files (default: true): If true, file metadata from the DHT crawler will be saved to the database. This provides richer information about a torrent, but will use a lot more disk space. If disk space is at a premium you may want to consider disabling this.
  • dht_crawler.save_pieces (default: false): If true, the DHT crawler will save the pieces bytes from the torrent metadata. The pieces take up quite a lot of space, and aren’t currently very useful, but they may be used by future features.

For me, 24 hours of crawling uses ~2.5GB of database disk space for metadata on the ~120k torrents it has discovered. Yep, that sounds like a lot; however, 90% of that is taken up by the file metadata, and could have been saved by setting dht_crawler.save_files to false. In fact I may set this to false by default and allow users to opt in to the full-fat torrent info.
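(For scale: 2.5 GB across ~120k torrents is roughly 20 KB of database space per torrent; with save_files disabled it would be on the order of 2 KB per torrent.)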

I’ve also imported the entire RARBG backup (the SQLite one, see tutorial on the Bitmagnet website). This, along with all the associated metadata from TMDB, took around 4GB of database space, which seems quite acceptable considering it’s basically every movie and TV show. Note that this does NOT include the metadata on individual files as I described above.

A priority feature for me (detailed on the website) is smart deletion - this would allow you to automatically discard a lot of data that can be determined to be of no interest, greatly reducing disk space demands.

@kautau@lemmy.world

As someone interested in Usenet, what’s the best provider and client to start with in your opinion?

@prim3r@lemmy.ca

I’ve been using easynews/nzbgeek/nzbget with an arr stack on debian and it’s worked well for me. I’m fairly new to usenet, so take this with a giant grain of salt.

@kautau@lemmy.world

Cool, thanks for the reply!

Kushan

Sabnzbd is probably the best choice of download client, fyi.

CosmicApe

Linux program names are fucking wild

@palitu@aussie.zone

Very cool!

@Shdwdrgn@mander.xyz

Looks like a fun project, but will you be providing any info on setting it up from scratch? I just don’t have an interest in docker containers.

Out of curiosity, why not? I’ve come around.

@Shdwdrgn@mander.xyz

I’ve just always used VMs for everything and set up each service to match my existing system. For example, my postfix servers all have to tie in to LDAP, mailman, and the host of services for authenticating email. It seems like the point of docker is to just have a completely preconfigured and self-contained setup. I guess I just don’t see how that would work in my environment, where I already have some services like databases or LDAP running elsewhere, and I run multiple instances for redundancy. And if I have to reconfigure all that stuff in docker anyway, how is that any better than simply using my existing VMs?

I used to be like you, then I moved from TrueNAS Core to Scale, where it’s now Linux and Docker instead of FreeBSD and iocage jails.

So docker has this concept of persistent volumes. You configure all your settings in the initial setup command (docker compose) and define persistent volumes. This way you don’t lose your data.

Here’s an example, Plex. I run Plex in docker now. So my config directory is defined as a persistent volume. If I need to update Plex, or rebuild it or whatever, the container just updates and has all the data I need via the persistent volume. If the install is messed up or whatever I just get a newer image and run the docker compose and it fires up and mounts the persistent volume and off I go.

Basically it takes away the burden of having to figure out the OS configuration. Makes backups easier - and smaller. And the things are spun up, installed, and usable in seconds.

@Shdwdrgn@mander.xyz

Not sure the OS configuration is really a burden :-) I have several servers I have to keep up to date anyway. And backups aren’t really an issue, I just run rdiff-backup on everything to provide a year’s worth of incremental backups, which doesn’t really take much extra space. Maybe one of these days when I catch up on other projects I’ll look into it though.

On truenas scale though it’s just tiles in a web browser, it’s super easy. And since it runs on ZFS backups are easier too. Just click your way through periodic volume snapshot tasks.

Definitely a bit of a learning curve but it’s a sleek setup once you understand.

@Shdwdrgn@mander.xyz

I’m not quite sure what “truenas” is? All of my stuff is individually installed; I decided a long time ago to split it up onto VMs that each perform a specific task. I have a main file server that runs zfs, then two servers to run the redundant VMs. There’s not really anything difficult about backups, I just add a cron job to run a script once a day and never touch it again, so I have backups of each VM, but then the backups of the main servers include the VM image files, so each VM gets backed up twice. There’s a lot of info there but the backups of all the critical stuff only use about 6TB (I could actually cut that in half if I got rid of the backups from older machines).

So let’s say I put in the time to learn how docker works, and then put in a lot more time converting all of my existing systems over to docker images… What exactly would I get out of all that effort? The thing that nobody’s been able to sell me on so far is that I don’t see how docker is going to make anything any easier; it just seems like it’s a “different” way to do things but nothing more.

Your data footprint would be smaller. Maintenance is a breeze. If you update your image and it breaks, just roll it back. Less consumption of resources. No need to divide your storage and RAM for VMs. There are millions of Docker images, so you can start something new in seconds. And the learning curve isn’t too bad if you’re on TrueNAS Scale. TrueNAS Core is a NAS operating system built on FreeBSD (Unix), and TrueNAS Scale is built on Linux. Both use ZFS for the underlying storage.

@Dasnap@lemmy.world

I personally love containers (probably because I use them for work) but I can understand someone not wanting another layer of abstraction if they’ve worked bare-metal for a long time.

@mgdigital@lemmy.world
creator

Hi, yes this is mentioned on the installation page of the website, below the Docker instructions. The app can be installed Dockerless using go install; if you choose this option you’ll have to provide and configure Postgres and Redis instances for the app to connect to. That said, Docker is the recommended and easiest option.

@Shdwdrgn@mander.xyz

I saw that, but didn’t recognize the ‘go’ command as anything available on Debian. Just did some quick digging though and now I see it’s a new language and I believe I have an idea how to get it installed for compiling so I will give that a shot.

Golang v1.0 was released in March of 2012. Not sure I would consider it a new language.

@Shdwdrgn@mander.xyz

Oh interesting… I thought I read something that said 2017. No worries, I’ll get it figured out now that I understand what it is.

@droopy4096@lemmy.ca

@mgdigital, first thing I’ve noticed: reliance on a “heavier” database stack (pg + redis), at least at first glance at the docker-compose. My suggestion would be to have an option for a minimalist setup with SQLite and without Redis if possible. That would work better for those of us flying with minimal hardware (RPi, old PC and such).

@mgdigital@lemmy.world
creator

Hi, this is a great point and one that I’ve already given consideration to. I’ll address separately the issue of the primary datastore, i.e. Postgres, and the Redis dependency:

Postgres as the only option for the data store

There are 2 reasons for this:

  • Performance: while SQLite could offer a simpler/embedded data store, it simply doesn’t have the performance and features of Postgres. Bitmagnet has a faceted search engine and is write-intensive (it will be discovering ~5k torrents per hour and writing these to the database along with associated metadata). As such, its database may not be suitable for running on older hardware. A SQLite adapter, if it were developed, may simply not be up to the job (although as I haven’t attempted this I can’t say what the performance would be like). That said, Bitmagnet itself is not especially resource-intensive; you could probably run it on a Raspberry Pi but point it to a Postgres instance on some more powerful hardware. At this stage I’ve only been running it on an M2 Mac Mini with Postgres located on its SSD, and so would be interested to know people’s mileage on other hardware.
  • Development, support and maintenance overhead: I’m a lone developer and this project is already too big for one person. I think a SQLite adapter, if feasible performance-wise, could only happen if other contributors joined the project, as my to-do list is already pretty long. It would have to achieve feature parity with the Postgres implementation, which makes use of several Postgres-specific features and extensions. It would also mean a longer testing cycle and therefore probably a slower release cadence. That said, if there were enough demand and assistance then I’d be open to looking into the feasibility of this once the rest of the application is a little more mature and the current database schema more finalised.

Redis dependency

Redis is currently used only for the asynchronous task queue. I would like to have put this in Postgres, but there simply is not a good out-of-the-box solution that works well with Postgres and GoLang, and is actively maintained. I looked at quite a few queuing libraries and eventually settled on asynq (https://github.com/hibiken/asynq), which is a great library and does the job well - but could really do with support for non-Redis backends.

Using Redis here was a pragmatic decision that allowed me to make progress, rather than an optimal one. I guess I could have built a simple Postgres-based queue myself but that would have been a distraction and probably sub-optimal compared with a mature/separately developed library. It remains an option. Since I looked into this a new project has sprung up which I’m keeping an eye on - https://www.tork.run/ - it has a Postgres backend and looks like it might be up to the job, but is very new.
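For anyone curious what the queue usage looks like, here’s a minimal asynq producer/consumer sketch - the task type name is made up for illustration and isn’t one of Bitmagnet’s actual tasks:

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

const (
	redisAddr = "localhost:6379"
	// Made-up task type for illustration; not one of Bitmagnet's actual tasks.
	typeProcessTorrent = "torrent:process"
)

func main() {
	// Producer side: enqueue a task onto the Redis-backed queue.
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: redisAddr})
	defer client.Close()
	if _, err := client.Enqueue(asynq.NewTask(typeProcessTorrent, []byte(`{"infoHash":"abc"}`))); err != nil {
		log.Fatal(err)
	}

	// Consumer side: a worker server pulls tasks from Redis and dispatches
	// them to handlers registered on the mux.
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: redisAddr}, asynq.Config{Concurrency: 4})
	mux := asynq.NewServeMux()
	mux.HandleFunc(typeProcessTorrent, func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing %s: %s", t.Type(), t.Payload())
		return nil
	})
	if err := srv.Run(mux); err != nil {
		log.Fatal(err)
	}
}
```

Swapping the RedisClientOpt for a Postgres-backed equivalent is exactly the piece that doesn’t exist off the shelf today, which is why Redis is still in the stack.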

So yes, I’m very aware that the additional Redis dependency is not ideal and it may well disappear at some point.

@droopy4096@lemmy.ca

Thank you for such a detailed response. I would love to contribute; however, at the moment my capacity is rather limited, otherwise I’d be willing to add a SQLite adapter. From your description it sounds like the current architecture is narrowly locked onto PostgreSQL features. In my daily job I love PostgreSQL for big apps and stacks, but I’m also aware of how “hungry” PG can be, which is why I’m wondering whether it’s “too big of a hammer” for this particular problem. Also, setting up a single service is easier for novices vs maintaining several. Docker Compose is nice but it has its limitations.

mlunar

Hi, those points are certainly valid and I have nothing against these picks!

I just wanted to chime in that perf might not be as big of a problem as you might expect. 5k/hour is 1.4/sec, which sqlite should for sure be able to handle.

In fact, you can do hundreds to thousands of writes/sec, as long as you batch them in transactions (as by default each query is executed in its own transaction).
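For example, a minimal sketch of batched inserts using Go’s database/sql with the mattn/go-sqlite3 driver (the table and schema here are invented for illustration):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "torrents.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS torrents (info_hash TEXT PRIMARY KEY, name TEXT)`); err != nil {
		log.Fatal(err)
	}

	// Batch many inserts into one transaction: the database syncs once per
	// batch instead of once per row, which is where the throughput comes from.
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	stmt, err := tx.Prepare(`INSERT OR IGNORE INTO torrents (info_hash, name) VALUES (?, ?)`)
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()
	for i := 0; i < 1000; i++ {
		if _, err := stmt.Exec(fmt.Sprintf("hash-%04d", i), fmt.Sprintf("Torrent %d", i)); err != nil {
			log.Fatal(err)
		}
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```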

@Stephen304@lemmy.ml

A DHT crawler is inherently an intensive service to run; magnetico used SQLite and would take 10 minutes just to load the splash page that includes the total count of discovered torrents.

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

  • NAS: Network-Attached Storage
  • Plex: Brand of media server package
  • SSD: Solid State Drive mass storage
  • VPN: Virtual Private Network

4 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

[Thread #191 for this sub, first seen 5th Oct 2023, 14:25] [FAQ] [Full list] [Contact] [Source code]

@Willdrick@lemmy.world

This looks kinda neat. I even tore down my whole Servarr stack to give it a go; alas, I can’t get Bitmagnet to “talk” with Prowlarr. I’m probably doing something really stupid, but I can’t figure out how to add the whole thing under a single Docker network - I get errors like: network somename was found but has incorrect label com.docker.compose.network set to ""

BlueÆther

Seems to work well.

Just one question: is it expected to have 10,000 out of 12,000 as unknown?

@mgdigital@lemmy.world
creator

Hi, yep that’s expected. Torrents will only move out of “Unknown” once the classifier is able to categorise them. The classifier currently only supports movie and TV show content, and can recognise these with quite high accuracy assuming a well-named torrent (and a badly named torrent is unlikely to be a high quality release). The other content types (music, games etc) can currently only be populated via an import (see the tutorial on the website). A priority feature is classifiers for other content types - however we will likely always have a lot of torrents ending up in “Unknown” given the poor naming of many crawled items. Another roadmap feature, smart deletion, could help in future with getting rid of all the rubbish whose contents cannot be inferred from the torrent name.
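For anyone wondering what a “well-named” torrent buys the classifier, here’s a purely illustrative name-parsing heuristic in Go - not Bitmagnet’s actual classifier, just the general idea of pulling resolution, source and episode tokens out of a release name:

```go
// Illustrative only: a name-based heuristic in the spirit of the classifier
// described above, not Bitmagnet's actual implementation.
package classifier

import "regexp"

var (
	resolutionRe = regexp.MustCompile(`(?i)\b(2160p|1080p|720p|480p)\b`)
	sourceRe     = regexp.MustCompile(`(?i)\b(blu-?ray|bdrip|web-?rip|web-?dl|hdtv|dvdrip)\b`)
	episodeRe    = regexp.MustCompile(`(?i)\bS\d{1,2}E\d{1,2}\b`)
)

// Hint holds whatever attributes could be recognised from the torrent name.
type Hint struct {
	IsTVEpisode bool
	Resolution  string
	Source      string
}

// ClassifyName extracts resolution, source and episode markers from a
// release name; a badly named torrent simply yields an empty Hint.
func ClassifyName(name string) Hint {
	return Hint{
		// An SxxExx token is a strong signal for a TV episode.
		IsTVEpisode: episodeRe.MatchString(name),
		Resolution:  resolutionRe.FindString(name),
		Source:      sourceRe.FindString(name),
	}
}
```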

Maybe I’m misunderstanding but wouldn’t it just be easier to use a good private tracker, assuming you can get an invite?

@lud@lemm.ee

Yes, of course.

Is it safe to run this without a VPN if I am just using it to index?

Dude this is amazing! Exactly the sort of thing I’ve been hoping would pop up to further “decentralize” the torrent search experience.

So I’m trying to run it on my machine through the docker-compose option, and I’m seeing something weird. It shows as successfully running, but when I go to the port it should be running on, I get “unable to connect” on my browser.

When I check my containers running, it shows the 3 bitmagnet containers, but the port doesn’t show.

https://i.imgur.com/D4R1Le5.png

@mgdigital@lemmy.world
creator

Hi, the default port is 3333, which should be exposed if you’re using the example configuration here: https://bitmagnet.io/setup/installation.html - I’m not sure what the app is in your screenshot but the provided config definitely exposes that port and is tested on Docker for Mac.

Just pulled the latest and tried again, and it works now! Thanks

@LienNoir@lemmy.world

Hi, am I missing something? The bitmagnet image keeps restarting when I check with “docker ps”; the other 2 containers are working as intended. And port 3333 doesn’t show anything.

@mgdigital@lemmy.world
creator

There’s a PR currently open for multi-platform builds, so this should be sorted soon.

@emhl@feddit.de

What are your logs showing? docker logs -f bitmagnet

@LienNoir@lemmy.world

log: exec /bitmagnet: exec format error

I am on ARM (Pi 4), maybe that’s the issue.

@emhl@feddit.de

The parent image should support that ARM version, so you could just build the Docker image locally on your Pi and use that.

Btw, there’s already an open pull request to add ARM support.

@LienNoir@lemmy.world

Thanks for pointing that out, it works great now.

@drugo@sh.itjust.works

Yeah, that’s the error you get when trying to run an x86 program on ARM or vice versa

Sounds interesting 😀 I’ll keep an eye on it, though I won’t be a primary user; I switched to Usenet about a decade ago and only use torrents as a last resort.

@726a67@lemmy.sdf.org

Looks super interesting; starred!

Will report back once I’ve run through the installation.
