The rise and fall of robots.txt
www.theverge.com
As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.
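For context, robots.txt is the convention at issue: a plain-text file a site serves at its root, which well-behaved crawlers are expected to check before fetching pages. A minimal sketch that allows general crawling but opts out of AI-training crawlers might look like this (the user-agent names shown are real examples of AI crawlers, but the list is illustrative, not exhaustive):

```
# Served at https://example.com/robots.txt

# Allow general crawling...
User-agent: *
Allow: /

# ...but ask AI-training crawlers to stay out.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Nothing enforces this file; it only works when crawlers choose to honor it, which is exactly the social contract the article argues is breaking down.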
@zaphod@lemmy.ca

You know what?

I’m fine with that hypothetical risk.

“The bad guys will do it anyway so we need to do it, too” is the worst kind of fatalism. That kind of logic can be used to justify any number of heinous acts, and I refuse to live in a world where the worst of us are allowed to drag down the rest of us.

But if we make training AI on copyrighted material illegal, it will hamper open-source models while leaving closed-source ones largely unaffected, because the big players can simply buy the data from large social media conglomerates.

Chahk

Alrighty then. If corps want to train their AI on all the content they can scrape without worrying about copyright, then they can’t complain when I torrent their shit without worrying about copyright too! Deal? Somehow I don’t see them taking that deal.

@zaphod@lemmy.ca

Training new models is already the domain of large actors only, simply due to the GPU requirements, which serve as a massive moat. That ship has sailed. There isn’t a single open-source model today that wasn’t trained by a corporate entity first and then only fine-tuned by the community later.

bedrooms

The consequences of falling behind are gravely different from those of most heinous acts. They can affect the military, elections, espionage, and more.

@zaphod@lemmy.ca

Really? I’m supposed to believe AI is somehow more existentially risky than, say, chemical or biological weapons, or human cloning and genetic engineering (all of which are banned or heavily regulated in developed nations)? Please.

I understand the AI hype artists have done a masterful job convincing everyone that their tech is so insanely powerful (and thus incredibly valuable to prospective investors) that it’ll wipe out humanity, but let’s try to be realistic.

But you know, let’s take your premise as a given. Even despite that risk, I refuse to let an unknowable hypothetical be used to hold our better natures hostage. The examples are countless of governments and corporations using vague threats as a way to get us to accept bad deals at the barrel of a virtual gun. Sorry, I will not play along.

samwise

If you don’t see how even the most basic AI images, videos, deepfakes, etc. can manipulate the public, the electorate, and popular opinion, or even sow just enough doubt to cause a problem, then I don’t know what to tell you.

People are already dying because of deepfakes and fake AI porn. We know that most people who see a headline on Facebook will never click through to read the article, and will just accept the headline and/or the synopsis as fact. They will accept whatever a 1000x-reshared image says, without sources or verification. The fact that a picture or video might have a person with 8 fingers on one hand in the background isn’t going to prevent them from taking in the message. And we’ve all literally seen people around the web say, explicitly, something to the effect of “I don’t care if the story is true or not, it’s a real issue we need to consider” when we know for a fact that it is not.

Yes, mis- and disinformation are far more of an existential threat than chemical or biological weapons, and we know this because we are already seeing the consequences. If you refuse to see that, then you are lost.

@zaphod@lemmy.ca

You don’t need AI for any of that. Determined state actors have been fabricating information and propagandizing the public, Mechanical Turk-style, for a long, long time now. When you can recruit thousands of people as cheap labour to make shit up online, you don’t need an LLM.

So no, I don’t believe AI represents a new or unique risk at the hands of state actors, and therefore no, I’m not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case. We’ve had enough of that already, thank you very much.

And that’s ignoring the fact that an adversarial state actor having access to advanced LLMs isn’t somehow negated or offset by us having them, too. There’s no MAD for generative AI.

samwise

I’m not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case

What beliefs and values would we be abandoning by fighting back against tech that is literally costing people their lives?

@zaphod@lemmy.ca

Hah I… think we’re on the same side?

The original comment was justifying unregulated and unmitigated research into AI on the premise that it’s so dangerous that we can’t allow adversaries to have the tech unless we have it too.

My claim is AI is not so existentially risky that holding back its development in our part of the world will somehow put us at risk if an adversarial nation charges ahead.

So no, it’s not harmless, but it’s also not “shit this is basically like nukes” harmful either. It’s just the usual, shitty SV kind of harmful: it will eliminate jobs, increase wealth inequality, destroy the livelihoods of artists, and make the internet a generally worse place to be. And it’s more important for us to mitigate those harms, now, than to worry about some future nation state threat that I don’t believe actually exists.

(It’ll also have lots of positive impact as well, but that’s not what we’re talking about here)

samwise

Ah, gotcha. I must have misunderstood the flow there. Yeah, it definitely seems like we’re mostly on the same side.

frog 🐸

Yeah, I mean bad guys are going to commit murder too, doesn’t mean it shouldn’t be illegal.
