Mateys! We have plundered the shores of TV shows and movies as these corporations flounder in stopping us from seeding and spreading their files without regard for the flag of copyright. We have long plundered the shores of gaming, breaking the DRM that has plagued modern games and opening them up to countries where a game would cost a week or even a month of wages (I was once in this situation, so I am grateful to the pirating community for letting me enjoy the golden era of games back in 2012-2015).

But there, upon the horizon, lies a larger plunder. A kraken who guards a lair of untouched gold and emeralds, ready for the taking.

Closed-source AI models.

These corporations have stolen what was once ours, our own data, and put it into their AI models so that only they can profit off of it. These corporations raze the internet with their spiders and their bots to gather every morsel of data from us to feed to their shiny new toy. We might not be able to stop them from stealing our data, but we have proven ourselves adept at copying things and leaking software, and that is what we need to do. AI is already too dangerous and too powerful for a select few corporations to control.

As long as AI is in the hands of corporations, not people, it will serve their goals, not ours. This needs to change, so this is what I propose for our next voyage.

TheOneCurly • 8 points • 1Y

Unless they start offering on-prem or there are some very high profile server hacks I don’t see that being possible. Unlike media and client software they don’t need to provide the core functionality to end users, just the output.

@aldalire@lemmy.dbzer0.com (creator) • 3 points • 1Y

I agree. As for the how, it’s gonna be tricky to say the least

Hot Saucerman • 3 points • 1Y

You can start by using the same data sources they do. Several have admitted to using Books3.

https://huggingface.co/datasets/the_pile_books3

metaStatic • 4 points • 1Y

let me just check how much supercompute I have and … oh, zero.

Hot Saucerman • 1 point • 1Y

Well, let’s just assume we have a can opener.

Closed-source AI models, huh? The only thing that comes to mind is NovelAI’s model leak, and the leaker allegedly burnt a 0-day exploit to do it.

Hot Saucerman • 22 points • edited • 1Y

Closed-source AI models.

The Books3 corpus would like you to know that all the data in it is from copyrighted books. It has reportedly been widely used in closed-source LLMs. “Rules for thee, not for me” shit. They’ll break copyright and then copyright what they made from it.

https://huggingface.co/datasets/the_pile_books3

Books3 is literally everything from the Bibliotik private tracker for books.

So yeah, fuckin roll out the cannons, mateys, let’s sink these hypocritical fuckers.

@aldalire@lemmy.dbzer0.com (creator) • 11 points • 1Y

This has the same vibe as GitHub (owned by Microsoft) training its AI Copilot on repositories under the GPL, a license which specifically forbids works based on them from being made proprietary. Literally a blatant disregard for the license, but it’s OK because it’s a mega-corporation doing it.

@Even_Adder@lemmy.dbzer0.com • 6 points • edited • 1Y

You’re allowed to train on copyrighted works, it isn’t illegal for anybody. This article by Kit Walsh does a good job of breaking it down. She’s a senior staff attorney at the EFF.

Hot Saucerman • 12 points • 1Y

I didn’t say it was illegal, I said it was hypocritical.

Oh, my bad.

The£0b°t°m¡§t • 11 points • 1Y

You are going straight for the One Piece

Okay, I’m with you but…

how are we using these closed source models?

As of right now I can go to civitai and get hundreds of models created by users to be used with Stable Diffusion. Are we assuming that these closed source models can even be run on local hardware? In my experience, once you reach a certain size there’s nothing lay users can do on our hardware, and the corpos aren’t running AI on a 3080, or even a set of 4090s or whatever. They’re using stacks of A100s with more VRAM than every GPU in this thread combined.
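
As a rough sanity check on why consumer cards fall short, the memory for the weights alone scales linearly with parameter count. This is a sketch using the common fp16 estimate (2 bytes per parameter), not any specific model’s published requirements:

```python
# Rough VRAM needed just to hold model weights: parameters x bytes each.
# fp16/bf16 weights take 2 bytes per parameter; activations and the
# KV cache add more on top of this, so these are lower bounds.
def weights_gb(params_billions, bytes_per_param=2):
    return params_billions * bytes_per_param  # billions x bytes = GB

print(weights_gb(7))    # 14 GB  -- borderline for a single consumer card
print(weights_gb(70))   # 140 GB -- needs multiple 80 GB A100s
```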

If we’re talking the whole of LLMs, including visual and textual AI… Frankly, while I entirely support and agree with your premise, I can’t quite see how anyone could feasibly utilize these models. For the moment, anything that’s too heavy to run locally is pushed off to something like Colab or Jupyter, and it’d need to be built with the model in mind (from my limited Colab understanding - I only run locally, so I am likely wrong here).

Whether we’ll even want these models is a whole different story too. We know that more data = more results, but we also know that too much data fuzzes specifics. A model trained on, say, the entirety of the Internet may sound good in theory, but in practice getting usable results out of it will be hell. You want a model with specifics: all dogs and everything dogs, all cats, all kitchen and cookware, etc.

It’s easier to split the data this way for the end user, since we can then direct the AI to put together an image of a German Shepherd wearing a chef’s hat cooking in the kitchen, with the subject using the dog model and the background using the kitchen model.

So even if we manage to grab these models from the corpos, without the hardware and without any parsing, it’s entirely possible this data will be useless to us.

@aldalire@lemmy.dbzer0.com (creator) • 2 points • 1Y

I was thinking the same thing. Would you think there’d be a way to take an existing model and pool our computational resources to produce a result?

All the AI models right now assume there is one beefy computer doing the inference, instead of multiple computers working in parallel. I wonder if there’s a way to “hack” existing models so they can be used for inference across multiple computers working in parallel.

Or maybe a new type of AI should be developed specifically to achieve this. But yes, getting the models is half the battle. The other half will be figuring out how to pool our computation to run the thing.
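
To make the “multiple computers working in parallel” idea concrete, here is a toy sketch (pure Python, no real network or model) of pipeline-style inference: each machine holds a slice of the layers and forwards its activations to the next. Real distributed-inference systems do the same thing with transformer blocks over the network.

```python
# Toy pipeline parallelism: the "model" is a stack of layers, split
# across two simulated machines. Each machine runs only its local
# layers, then hands the intermediate activations to the next machine.
def make_layer(weight):
    # A stand-in for a neural net layer: scale every value by `weight`.
    return lambda xs: [weight * v for v in xs]

layers = [make_layer(w) for w in (2, 3, 5)]  # the whole "model"

machine_a = layers[:2]  # first two layers live on machine A
machine_b = layers[2:]  # last layer lives on machine B

def run(machines, activations):
    for machine in machines:
        for layer in machine:          # run the machine's local slice
            activations = layer(activations)
        # ...here the activations would be "sent" over the network
    return activations

print(run([machine_a, machine_b], [1.0]))  # [30.0] == 1 * 2 * 3 * 5
```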

I’m not sure about expanded models, but pooling GPUs is effectively what the Stable Diffusion servers have set up for the AI bots. A bunch of volunteers/mods run a public SD server that is used as needed - for a 400,000+ member Discord server I helped moderate, this was quite necessary to keep the bots handling requests with reasonable upkeep.

I think the best we’ll be able to hope for is whatever hardware MythicAI was working on with their analog chip.

Analog computing went out of fashion due to its ~97% accuracy and the need to build each computer for a specific purpose - for example, a computer built to calculate the trajectory of a hurricane or tornado. The results, when repeated, are all chaos, but that’s effectively what a tornado is anyway.

MythicAI went out on a limb: the shortcomings of analog computing are actually strengths for running models. If you’re 97% sure something is a dog, it’s probably a dog, and the computer’s 3% error rate is far lower than a human’s. They developed these chips to be used in cameras for tracking, but the premise is promising for any model - it just has to be adapted. Because of the nature of how they were used and the nature of analog computers in general, they use far less energy and are far more efficient at the task.
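
A toy illustration of that error-tolerance argument (my own sketch, not MythicAI’s actual design): a simulated “analog” multiply that picks up bounded relative error is still close enough when all you need is a confident classification.

```python
# Simulate an analog multiply-accumulate with up to 3% relative error,
# mirroring the "~97% accuracy" trade-off described above.
import random

random.seed(42)  # deterministic for the example

def analog_mul(a, b, max_rel_error=0.03):
    # Each operation is perturbed by a bounded multiplicative noise term.
    noise = 1 + random.uniform(-max_rel_error, max_rel_error)
    return a * b * noise

exact = 6.0 * 7.0
approx = analog_mul(6.0, 7.0)
rel_error = abs(approx - exact) / exact
assert rel_error <= 0.03  # "97% accurate" is fine if 0.97 still reads as "dog"
```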

Which means that theoretically one day we could see hardware-accelerated AI via analog computers. No need for VRAM and 400+ watts: MythicAI’s chips can take the model request, sift through it, send the analog result through a digital converter, and our computer has the data.

Veritasium has a decent video on the subject, and while I think it’s a pipe dream that these analog chips will one day be integrated as PC parts, it’s a pretty cool one and the best thing we can hope for as consumers. Pretty much regardless of cost, it would be a better alternative to what we’re currently doing, as AI takes a boatload of energy that it doesn’t need to. Rather than thinking about how we can all pool thousands of watts and hundreds of gigs of VRAM, we should be investigating alternate routes to utilizing this technology.

oats • 2 points • 1Y

The point about GPUs is pretty dumb: you can rent a stack of A100s pretty cheaply for a few hours. I have done it a few times now; on RunPod it’s $0.79 per hour per A100.

On the other hand the freely available models are really great and there hasn’t been a need for the closed source ones for me personally.

@aldalire@lemmy.dbzer0.com (creator) • 3 points • 1Y

0.79 dollars per hour is still $568 a month if you’re running it 24/7 as a service.
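
For reference, the arithmetic behind that figure:

```python
# One A100 at $0.79/hr, running around the clock for a 30-day month.
hourly = 0.79
monthly = hourly * 24 * 30
print(f"${monthly:.2f}/month")  # $568.80/month
```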

Which open source models have you used? I’ve heard that open source image generation with stable diffusion is on par with closed source models, but it’s different with large language models because of the sheer size and type of data they need to train it.

oats • 3 points • 1Y

I have used it mainly for DreamBooth, textual inversion and hypernetworks, just for Stable Diffusion. For models I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything V3 and a few others.

The $0.79 is charged only for the time you use it; if you turn off the container you are charged for storage only. So it is not running 24/7, only when you use it. Also, have you seen the price of those GPUs? $568/month is a bargain if the GPU won’t be in continuous use for a period of years.

Another important distinction is that LLMs are a whole different beast: running them, even when renting, isn’t justifiable unless you have a large number of paying users. For the really good LLMs with large parameter counts you need a lot more than just a good GPU - you need at least 10 NVIDIA A100 80GBs (Meta’s needs 16: https://blog.apnic.net/2023/08/10/large-language-models-the-hardware-connection/) running for the model to work. This is where the price to pirate and run it yourself cannot be justified. It would be cheaper to pay for a closed LLM than to run a pirated instance.
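
Scaling the per-GPU rental math up to a cluster of that size makes the point (same $0.79/hr assumption):

```python
# Ten A100 80GB GPUs at $0.79/hr each, running 24/7 for a 30-day month.
per_gpu_hourly = 0.79
gpus = 10
monthly = per_gpu_hourly * gpus * 24 * 30
print(f"${monthly:,.0f}/month")  # $5,688/month
```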

@rufus@discuss.tchncs.de • 4 points • edited • 1Y

Fair point. But we’re talking about piracy here. Just steal it first, and then let’s see if we can use it.

@MalReynolds@slrpnk.net • 3 points • 1Y

Akshually, while training models requires (at the moment) massive parallelization and consequently stacks of A100s, inference can be distributed pretty well (see Petals for example). A pirate ‘ChatGPT’ network of people sharing consumer graphics cards could probably indeed work if the data was sourced. It bears thinking about. It really does.

You definitely can train models locally - I am doing so myself on a 3080, and we wouldn’t see as many public ones online if that weren’t possible! But in terms of speed you’re definitely right, it’s a slow process for us.

@MalReynolds@slrpnk.net • 2 points • 1Y

I was thinking more of training the base models, LLAMA(2), and more topically GPT4 etc. You’re doing LoRA or augmenting with a local corpus of documents, no?

Ah yeah, my mistake, I’m always mixing up language- and image-based AI models. Training text-based models is much less feasible locally lol.

There’s no model for my art, so I’m creating a checkpoint model, using xformers to reduce the VRAM requirement, and from there I’ll be able to speed up variants of my process using LoRAs. But that won’t be for some time - I want a good model first.

@MalReynolds@slrpnk.net • 2 points • 1Y

Fair cop, Godspeed!

@ramjambamalam@lemmy.ca • 4 points • 1Y

aldalire for president!

@aldalire@lemmy.dbzer0.com (creator) • 3 points • 1Y

I would be a poor choice of candidate but there have been worse

db0 (mod) • 14 points • 1Y
@aldalire@lemmy.dbzer0.com (creator) • 11 points • 1Y

Woah, this is awesome work. I’m amazed as usual with the open source community and with people willing to share their computation for this.

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ
!piracy@lemmy.dbzer0.com
⚓ Dedicated to the discussion of digital piracy, including ethical problems and legal advancements.
