Yeah, and what is the first thing they teach you in art school? History. From day one you’re studying the works of other artists and their implications: how they managed to make an impact on viewers and how that inspires you. Then we produce output that’s judged by our teachers on a scale, and we use that as weighted training data.
Yeah, I love the idea of the fediverse because it creates a democratized community where anybody can choose to listen to whoever they want. Unfortunately this attracts very cliquey users who feel like they own the fediverse and want to push others out. I’ve seen it a couple times already with people clamoring to defederate instances they don’t like. Thankfully we can just choose not to listen to them, lol.
I don’t like Facebook and I understand the concerns that Facebook will sort of take over the fediverse from the inside like a parasite. But at the end of the day you can just spin up a vanilla instance and connect with anyone willing to do the same. That’s what’s great about the fediverse.
I think if we sit here and debate the nuances of what is or is not intelligence, we will look back on this conversation and laugh at how pedantic it was. Movies have taught us that A.I. is hyper-intelligent, conscious, has its own objectives, is self-aware, etc… But corporations don’t care about that. In fact, to a corporation, I’m sure the most annoying thing about intelligence right now is that it comes packaged with its own free will.
People laugh at what is being called A.I. because it’s confidently wrong and “just complicated auto-complete”. But ask your coworkers some questions. I bet it won’t be long before they’re confidently wrong about something, and when they’re right, it’ll probably be them parroting something they learned. Most people’s jobs are things like: organize these items on those shelves, mix these ingredients and put it in a cup, get all these numbers from this website and put them in a spreadsheet, write a press release summarizing these sources.
Corporations already have the A.I. they need. You gatekeeping intelligence is just your ego protecting you from the truth: you, or someone dear to you, is already replaceable.
I think we both know that A.I. is possible; I’m saying it’s inevitable, and likely already at version 1. I’m sure any version of it would require access to training data, so the ruling here would translate. The only chance the general population has of keeping up with corporations in the ability to generate economic value is to keep the production of A.I. in the public space.
A.I. exists. It will continue to get better. If letting people use it becomes illegal, corporations will just use it themselves and cut us out. A world where the general population has access to A.I. is the only one where we’re not totally fucked. I’m not simping for Google or Facebook; I’d much prefer an open source, self-hostable version. The only way we can stay competitive is if these companies continue to develop it in the open for the consumer market.
General purpose artificial intelligence will exist. Full stop. Intelligence is the most valuable resource in the universe. You’re not going to stop it from existing, you’re just going to stop them from sharing it with you.
Thanks for this comment. I totally get how it can feel like ‘free speech defenders’ have a blanket defense that ends up protecting evil people. And you’re right.
The world has become so loud with instant global communication. So many different ideas, cultures, personalities, perspectives… I think we all wish we could turn down the volume, and never more so than on people who spew hate.
No group deserves to receive threats of violence, harassment, or belittling of their existence. While I think we’re in agreement that it should be an obvious choice to ban people like Nazis and stop there, you could easily apply my previous sentence to many groups of people. There are some left-leaning communities where you’ll see people wishing for Trump to be strung up, or saying MAGA supporters should use their Second Amendment rights and kill themselves. Many members of those communities would never condone violence against the former president or his supporters, but whoever we give the power to make the ‘free speech’ decision may see it differently.
The whole concept of free speech is not that everybody has a good idea. It’s that nobody can be trusted to decide what is a good idea. If you believe in free speech, you believe in hearing a lot of bad ideas that can make people very uncomfortable. While I do agree that there can be very minimal exceptions to this in extreme circumstances (death threats, stalking, harassment), we need to be very careful about who makes the call.
This is asking that we put the responsibility of that arbitration in the hands of Spectrum and AT&T. Take a minute and think about that.
I keep rereading this comment and as someone in R&D… I’m so astonished that people think companies just spontaneously come up with everything they produce without looking around. Companies start off almost every venture by analyzing any work in the field that’s been done and reverse engineering it. It’s how basically anyone you’ve heard of works. It goes double for art. Inspiration is key for art. Composers will break down the sheet music of great compositions, graphic designers will have walls full of competitors’ designs, cinematographers will study movies frame by frame.
I think it’s a pretty important question whether we’re reaching the end of the distinction between human and machine. People will begin to use machine minds more and more as part of their work. Tying strings now to the works of machines is screwing the creators of tomorrow. The line between what a person creates and what a machine creates WILL evaporate. It’s not a matter of if, but when.
Imagine we put a ton of regulations on people who use power tools to do carpentry. I’m sure the carpenters around the time power tools were created figured “That’s not true craftsmanship. They shouldn’t be able to make a living off that!” But the carpenters of today would be screwed by these regulations because of course they have to use the latest technology to stay competitive.
As for the argument that we’re taking the food out of creatives’ mouths: I don’t think anyone is not buying Stephen King novels now because they can just ask ChatGPT for a Stephen King-style novel. You can pirate Stephen King already. People aren’t fascinated by LLMs because of how well they plagiarize. They’re fascinated by them because they’re capable of transformative works, not unlike humans. Nobody is typing “Write a Stephen King novel”; they’re typing “Harold and Kumar go to White Castle, but it’s Snoop Dogg and Betty White, in the style of Stephen King.” As much as I’m sure King would love to suck up all the royalties for these stories, there’s no universe where it makes sense that he should. You don’t own what you inspire.
Might not fit exactly with what you were thinking, but I use Obsidian for notes and its built-in ‘wiki-style’ links make relating knowledge really easy. I use git to sync across devices.
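If you’re curious, those links are just `[[Note name]]` text sitting in plain markdown files, so they’re easy to work with outside Obsidian too. Here’s a rough Python sketch (the `~/notes` vault path is a made-up example) that scans a vault for wiki-links and prints which notes point at which:

```python
import re
from pathlib import Path

# Hypothetical vault location; an Obsidian vault is just a folder of .md files.
VAULT = Path("~/notes").expanduser()

# Wiki-style links look like [[Note name]], [[Note name|alias]], or [[Note name#heading]].
LINK_RE = re.compile(r"\[\[([^\]|#]+)")

def link_map(vault: Path) -> dict[str, set[str]]:
    """Map each note to the set of notes it links to."""
    links: dict[str, set[str]] = {}
    for md in vault.rglob("*.md"):
        targets = {m.strip() for m in LINK_RE.findall(md.read_text(encoding="utf-8"))}
        links[md.stem] = targets
    return links

if __name__ == "__main__":
    for note, targets in sorted(link_map(VAULT).items()):
        if targets:
            print(f"{note} -> {', '.join(sorted(targets))}")
```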
Exactly, the reason LLMs are so fascinating to us is how close they get to sounding human. Thing is, it’s not a trick. When people dismiss LLMs with “oh, they mostly just echo their training data set”, well, that’s just culture in humans. And then it’s the emergent behavior that makes us feel unique. I’m not saying LLMs are human equivalent. But they’re fairly close in design to how a huge part of our psyche works.
If you engineer for it, you can send up a machine to fabricate the miners from raw resources. Then you just have to send up a couple of starter miners and you never have to send another rocket up. Infinite resources down (limited only by time). Solar power to drive the machines. Hell, the fabricator can double as a basic initial processing plant and drop purified metals.
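Just to put rough numbers on why that snowballs, here’s a toy Python sketch. Every figure in it is a made-up assumption; it’s only meant to show the shape of the growth once the fabricator starts turning mined material into more miners:

```python
# Back-of-the-envelope sketch of a self-expanding mining operation.
# All numbers are made-up assumptions, just to show the shape of the curve.

SEED_MINERS = 2          # miners sent up on the initial launch
DOUBLING_MONTHS = 6      # assumed time for the fabricator to double the fleet
ORE_PER_MINER = 10.0     # assumed tonnes of refined metal per miner per month

def simulate(months: int) -> None:
    miners = SEED_MINERS
    total_ore = 0.0
    for month in range(1, months + 1):
        total_ore += miners * ORE_PER_MINER
        # Crude assumption: every DOUBLING_MONTHS, the fabricator has built
        # one new miner for each existing one out of the mined material.
        if month % DOUBLING_MONTHS == 0:
            miners *= 2
        if month % 12 == 0:
            print(f"year {month // 12}: {miners} miners, {total_ore:,.0f} t refined")

simulate(60)  # five years
```

Even with tiny starting numbers, the fleet and the refined output compound, which is the whole point of sending up a fabricator instead of more rockets.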