I really want to use AI like Llama, ChatGPT, Midjourney, etc. for something productive. But over the last year, the only use I found for it was proposing places to visit as a family on our trip to Hokkaido, Japan. It came up with great suggestions for places to go.
But perhaps you guys have some great use cases for AI in your life?
Not much. I totally agree with Linus Torvalds that AI is just overhyped autocorrect on steroids, and I despise that the artwork generators are all based on theft.
Pretty much all I use them for is to make my life easier at work, like turning a quick draft into a formal email.
Ownership of anything is difficult to define, and the internet has accelerated this loosening of the definition. If I pay a subscription to use my coffee pot, do I really own it? If I take a picture of the coffee pot, do I own the picture? If I pay a photographer to take a picture of the pot, do I own the picture? Do I own their time?
I don’t intend to try changing your opinion on theft, but it’s interesting to think about how ownership feels very different as time goes by.
If ownership doesn’t exist, then piracy doesn’t exist. Can’t steal that which is not owned. Of course companies don’t like that and consider it “not theft” if they’re doing the stealing.
Anti Commercial-AI license
Did he say that? I hope he didn’t mean all kinds of AI. While “overhyped autocorrect on steroids” might be a funny way to describe sequence predictors/generators like transformer models, recurrent neural networks, or some reinforcement-learning-type AIs, it’s not so true for classifiers, like the classic feed-forward network (which is among the building blocks of transformers, btw), or convolutional neural networks, or unsupervised learning methods like clustering algorithms and principal component analysis. Then there are evolutionary algorithms, reasoning AIs like Bayesian networks, and many more kinds of ML/AI models and algorithms.
It would just show a vast lack of understanding to judge an entire discipline that simplistically.
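To make that distinction concrete, here’s a minimal sketch (my own toy example, assuming scikit-learn is installed, not anything from the thread) of a plain feed-forward classifier: it maps inputs to labels in a single pass and never predicts “the next word” of anything.

```python
# Minimal sketch: a feed-forward classifier, i.e. an "AI" that assigns labels,
# not a sequence predictor. Toy example on the iris dataset; assumes scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One small hidden layer: the classic feed-forward building block.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```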
Copying isn’t theft. There is no “theft”.
It’s just a problem with the whole copyright laws not being fit for purpose.
After all, all art is theft.
There is literally no “artificial intelligence” in any of this. Promoting data, statistics, and other computations as “intelligence” takes a vast degree of BS, hype, and obfuscation.
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
Well, of course, if you redefine words all of the time, then nothing is anything, right?
You are literally wrong. Nice article, but I don’t see how it’s relevant here.
Could it be that you don’t know what “intelligence” is, and what falls under the “artificial” part of “artificial intelligence”? Maybe you do know but have a different stance on this. It would be good to make those definitions clear before arguing about it further.
From my point of view, the aforementioned branches are all important parts of the field of artificial intelligence.
The LLMs for text are also based on “theft”. They’re just much better at hiding it because they have vastly more source material. Still, it does sometimes happen that they quote a source article verbatim.
But yeah, basically they’re just really good copy/paste engines that use statistical analysis to determine the most likely answer based on what’s written across basically the whole internet :P It’s sometimes hard to explain this to people who think the AI really “thinks”. I always ask: if that were the case, why is the response to a really complicated question just as fast as the response to a simple one? The wait depends only on the length of the output.
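If it helps, here’s a toy sketch (purely my own illustration, nothing like a real LLM in scale) of what “pick the most likely next word” means, and why the time taken grows with the length of the output rather than the difficulty of the question:

```python
# Toy autoregressive text generator: counts which word tends to follow which,
# then repeatedly appends the most likely next word. One step per output token,
# so longer outputs take longer -- regardless of how "hard" the prompt is.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": gather the statistics (a stand-in for what a real model learns).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt: str, max_tokens: int = 5) -> str:
    out = prompt.split()
    for _ in range(max_tokens):                  # one loop iteration per token
        counts = following.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])  # most likely continuation
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the cat"
```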
In terms of the “theft”, I think it’s ethically similar to Google’s cache, though.
I’m hoping it’ll quote the license I put in my comments (should my text ever be included in a training set) and get somebody in trouble. But yeah, once anything has been transformed, it’s difficult to trace it back to the source material, so commercial LLMs can mostly just get away with it.
Anti Commercial-AI license
If I had the patience, I’d try to explain the Chinese Room thought experiment to the people who misunderstand AI. But I don’t, so I usually just shut up 🙂