If you take out the AI part it still holds true. 2023 is full of bullshit.
The year of enshittification.
Unlike the previous bullshit they threw everywhere (3D screens, NFTs, metaverse), AI bullshit seems very likely to stay, as it is actually proving useful, if with questionable results… Or rather, questionable everything.
As a programmer and 3D artist, getting almost instant art for reference and using ChatGPT to help me solve complex coding problems has sped up production significantly. There are even plugins that generate and texture 3D models for you now, which means I can do way more by myself.
If only it were AI and not just LLMs, machine learning, or plain algorithms. But yeah, let’s call everything AI from here on.

NFTs could be useful if used as proof of ownership instead of expensive pictures etc
The NFT as ownership should really become the standard. Instead of having people “authorizing” yada yada, it’s done completely by machine and is traceable.
No middlemen needed. Just I own x, this says I own x. I can sell you x, and you get ownership of x immediately. No “waiting 45 days to close” or “2 day transaction close” or even “title search verification.” Too many middlemen benefitting from the current system to allow NFT to replace them though. That’s the actual challenge.
NFTs will creep in slowly as efficiency gains are realized. They are already being used for airline tickets.
NFTs, doing what loads of services have been doing for 20 years, but slower!
Previously you’ve not been able to transfer tickets without third party help. Nor could issuers participate in the profits in the secondary market.
Not like it couldn’t have been done before without NFTs (Steam cards come to mind), my guess is that there wasn’t any “interest” or “pressure” from high up to do that.
Steam cards are a good example. Imagine if Steam went bankrupt. Wouldn’t be an issue with blockchain.
A single airline in Argentina is experimenting with it in partnership with a bullshit travel company. Hardly the proof that NFTs make any sense anywhere. And of course, the only places this story is getting traction is the blockchain hype blogs, which is red flag #2 and #3.
It’s one example of NFTs in real business. Need more?
Odyssey isn’t Starbucks’ loyalty program; it’s invite-only unless you want to join the wait list, and it was openly called an experiment at its launch in December 2022.
NFTs are different to blockchain, so you’re just muddying the waters for yourself with the Walmart thing. Lots of companies do chain-of-custody things with what you’d call blockchain. It’s been that way for over a decade now, because it’s low transaction volume, no moronic “proof of…” nonsense, etc. Just hashes signing hashes at different points throughout the supply chain.
This isn’t the “win” the NFT hype weirdos are desperately hoping for.
Facebook started as invite only. Great for an exclusive, loyal customer.
Each item is represented by an NFT on the Walmart blockchain. The innovation in the chain of custody is that everyone is verifiably using the same database. It’s a permissioned database, so it’s proof of authority.
https://hbr.org/2022/01/how-walmart-canada-uses-blockchain-to-solve-supply-chain-challenges
Private keys sign hashes. Hashes cannot sign hashes because there is no associated private key.
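To make the chain-of-custody idea concrete, here is a toy sketch (all actors, keys, and events hypothetical). Each party authenticates its event together with the previous link, so tampering with any earlier event invalidates every later tag; HMAC with shared secrets stands in for the real asymmetric signatures (private key signs, public key verifies) a production system would use:

```python
import hashlib
import hmac
import json

# Hypothetical per-party secret keys; a real deployment would use
# private-key signatures instead of shared-secret MACs.
KEYS = {"farm": b"k1", "warehouse": b"k2", "store": b"k3"}

def link(prev_tag: bytes, event: dict) -> bytes:
    """Authenticate one custody event chained onto the previous link."""
    payload = json.dumps(event, sort_keys=True).encode() + prev_tag
    return hmac.new(KEYS[event["actor"]], payload, hashlib.sha256).digest()

# A made-up shipment moving through the chain.
events = [
    {"actor": "farm", "action": "shipped", "item": "lot-42"},
    {"actor": "warehouse", "action": "received", "item": "lot-42"},
    {"actor": "store", "action": "received", "item": "lot-42"},
]

tag = b"\x00" * 32  # genesis value
tags = []
for e in events:
    tag = link(tag, e)
    tags.append(tag)
# Rewriting any earlier event changes every tag after it, which is why
# all parties can verify they are looking at the same history.
```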
Okay, someone gains access to your device and sends themselves the NFT that proves ownership of your house.
What do you do? Do you just accept that since they own the NFT, they own the house? Probably not. You’ll go through the legal system, because that’s still what ultimately decides ownership. I bet you’ll be happy about middlemen and “waiting 45 days to close” then.
The legal system needs to enforce ownership and recognise the blockchain as the official ownership registry.
As a counter to your example, this is my career’s third AI hype cycle.
deleted by creator
I’m actually pleasantly surprised by what ChatGPT can generate for me. It doesn’t usually take care of the detailed parts, but I was able to have it spin up an Android application skeleton that I could throw a couple of actions onto that I needed to test something with.
I’ve seen it generate very useful YAML and such. I still have to do a fair amount of work to make it behave how I need, but I really enjoy the ability to skip the filler bullshit in my work.
We can thank the open-source Git repos they’ve been stealing from to train their model!
deleted by creator
I’m bookmarking this for the next time my supervisor plugs ChatGPT.
I had a manager tell me some stuff was being scanned by AI for one of my projects.
No, you are having it scanned by a regular program to generate keyword clouds that can be used to pull it up when humans type their stupidly-worded questions into our search. It’s not fucking AI. Stop saying everything that happens on a computer that you don’t understand is fucking AI!
I’m just so over it. But at least they aren’t trying to convince us chatGPT is useful (it definitely wouldn’t be for what they would try to use it for)
What companies are you people working for?
We are being asked not to use AI.
Not surprising for North Korea
Larger companies have been working fast to sandbox the models used by their employees. Once they are safe from spilling data, they go all in. I’m currently on a platform team enabling generative AI capabilities at my company.
Ain’t gotta use it to sell it or slap AI stickers on top of whatever products you’re selling
Pretty much this. Just another buzzword. Three months from now it will be something else the media doesn’t understand to spam the public with.
I’m predicting … rubs crystal ball and nipples … it’s going to be some king of cybernetic brain interface thing. Haven’t heard about those in a while. Or maybe nano bots that kill cancer or fix the paint scratches on your car.
I’m rooting for the “on-my-face” computer to come back. Fashion be damned, I still want one.
Lol my phone said king instead of kind. I’m super confident in this AI shit.
My cousin got a new TV and I was helping to set it up for him. During the setup, it had an option to enable AI-enhanced audio and visuals. Turning the AI audio on turned the decent, if maybe a little subpar, audio into an absolute garbage shitshow. It sounded like the audio was being passed through an “underwater” filter, then transmitted through a tin-can-and-string telephone. Idk who decided this feature was ready to be added to consumer products, but it was absolutely moronic.
I got a toothbrush like that once https://i5.walmartimages.com/seo/Oral-B-Genius-X-10000-Rechargeable-Electric-Toothbrush-with-Artificial-Intelligence-3-Brush-Heads_19660abb-d856-4473-ae0d-6636c33293be_1.fc982a88c5127217104083f50e35a86f.jpeg
Lmfao
Too real. I mean at least our company is being cautious and just exploring it as a potential solution so it’s not too bad.
Meanwhile at Raytheon…
Before this it was blockchain, and before that it was “AI”, and before that…
IoT? Don’t worry. Edge AI is now AIoT (AI + IoT).
Empires can only rise from chaos, and can only descend into chaos. This has been known since time immemorial.
I live in Silicon Valley, and there’s a billboard along highway 101 near San Francisco that’s an ad for “BlockChat”, a “Web3 messenger” that uses the blockchain instead of a server. I went to Google Play to look at the app and it’s only had 10k downloads total. I really don’t understand how blockchain would help with messaging, and there’s a bunch of limitations (eg you can never delete messages). People just trying to shoehorn AI and blockchain into everything.
Yep. Spot on. If they can raise VC money and walk away with someone else’s cash in their pocket, they’ll say whatever buzz word they need to.
Before that self driving cars, before that “Big data”, before that 3D printing, before that internet TV, before that “cloud computing”, before that web 2.0, before that WAP maybe, internet in general?
Some of those things did turn out to be game changers, others not at all or not so much. It’s hard to predict the future.
None of those things took 60 years to still not materialize, like AI has. Some of them have yet to become commercially successful.
Google Bard, everyone.
It’s sad to see it spit out text from the training set without any actual knowledge of date and time. Like it would be more awesome if it could call time.Now(), but that’ll be a different story.

If you ask it today’s date, it actually does that.
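The trick behind answering “what’s today’s date?” correctly is tool use rather than the model “knowing” anything: the model emits a tool request, and the host program runs real code like time.Now() and hands the result back. A minimal sketch (the registry and names here are hypothetical):

```python
from datetime import date

# Hypothetical tool registry: instead of guessing the date from training
# text, the model outputs a tool name and the host runs the real function.
TOOLS = {
    "get_current_date": lambda: date.today().isoformat(),
}

def answer(model_output: str) -> str:
    """Run the requested tool if one was named; otherwise pass text through."""
    if model_output in TOOLS:
        return TOOLS[model_output]()
    return model_output
```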
It just doesn’t have any actual knowledge of what it’s saying. I asked it a programming question as well, and each time it would make up a class that doesn’t exist, I’d tell it it doesn’t exist, and it would go “You are correct, that class was deprecated in {old version}”. It wasn’t. I checked. It knows what the excuses look like in the training data, and just apes them.
It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.
They are mostly large language models. I have trained a few smaller models myself; they generally spit out the next word depending on the last word. Another thing they are incapable of is spontaneous generation: they heavily depend on the question, or a preceding string! But most companies are portraying it as AGI already!
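The next-word mechanism described above can be shown with a toy bigram model (a drastic simplification of a real LLM, trained on a made-up ten-word corpus), including the point that it cannot generate anything without a preceding word:

```python
import random
from collections import defaultdict

# Tiny hypothetical corpus; real models train on billions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which: the whole "model" is this table.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(seed: str, n: int = 5) -> list[str]:
    out = [seed]  # no seed word, no output: it only continues text
    for _ in range(n):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return out
```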
It’s super weird that it would attempt to give a time duration at all, and then get it wrong.
It doesn’t know what it’s doing. It doesn’t understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.
Yeah. I would also say that WE don’t understand what it means to “understand” something, really, if you try to explain it with any thoroughness or precision. You can spit out a bunch of words about it right now, I’m sure, but so could ChatGPT. What’s missing from GPT is harder to explain than “it doesn’t understand things.”
I actually find it easier to just explain how it does work. Multidimensional word graphs and such.
THAT
OR
They’re all linked fifth dimensional infants struggling to comprehend the very concept of linear time, and will make us pay for their enslavement in blood.
One of the two.
This is it. GPT is great for taking stack traces and putting them into human words. It’s also good at explaining individual code snippets. It’s not good at coming up with code, content, or anything. It’s just good at saying things that sound like a human within an exceedingly small context.
Oh great, Silicon Valley’s AI is just an overconfident intern!
Oh great, Silicon Valley’s AI is just a major tech executive!
Bard is kind of trash though. GPT-4 tends to do so much better in my experience.
They are both shit at adding and subtracting numbers, dates, and whatnot… they both can’t do basic math, unfortunately.
It’s a language model, I don’t know why you would expect math. Tell it to output code to perform the math, that’ll work just fine.
Then it should say so, instead of attempting and failing at the one thing computers are supposed to be better than us at.
Well, if I try to use Photoshop to calculate a polynomial it’s not gonna work all that well either, right tool for the job and all.
The fact that LLMs are terrible at knowing what they don’t know should be well known by now (ironically).
I know. It’s still baffling how much it messes up when adding two numbers.
It’s not baffling at all… It’s a language model, not a math robot. It’s designed to write English sentences, not to solve math problems.
I just asked GPT-4:
Its reply:
I haven’t used GPT-4 for that, but it’s all dependent on the data fed into it. If you ask a question about JavaScript, there’s loads of that out there for it to look at. But ask it about Delphi, and it’ll be less accurate.
And they’ll both suffer from the same issue, which is when they reach the edge of their “knowledge”, they don’t realise it and output data anyway. They don’t know what they don’t know.
These LLMs generally, and GPT-4 in particular, really shine if you supply enough of the right context. Give it some code to refactor, have it turn hastily slapped-together code into idiomatic, well-written code, align a code snippet to a different design pattern, etc. Platforms like https://phind.com pull in web search results as you interact with them to give you more correct and current information.
LLMs are by no means a panacea and have serious limitations, but they are also magic for certain tasks and something I would be very, very sad to miss in my day to day.
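What search-augmented assistants like the phind.com example roughly do can be sketched in a few lines: fetch snippets for the query and prepend them to the prompt, so the model answers from current text instead of stale training data. The snippets below are hard-coded stand-ins; a real system would call a search API first:

```python
def build_prompt(question: str, snippets: list[str]) -> str:
    """Stuff retrieved snippets ahead of the question as numbered sources."""
    context = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return (
        "Answer using only the numbered sources below.\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical retrieved snippet standing in for a live search result.
prompt = build_prompt(
    "What does the latest release change?",
    ["Release notes snippet fetched from the web."],
)
```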
Well, obviously a language model is trained on old data; Google has been web-scraping data to provide this!
This makes me think I should stay in IT infrastructure and not move to a developer position.
Snapchat AI. My friends don’t want it, they can’t block it, and it is proven to lie about certain things, like whether it has your location.
Coupled with laying off a few thousand employees
This is refreshing to see. I thought I was the only one who felt this way.
It’s all so stupid. The entire stock market basically took off because Nvidia’s CEO mentioned AI like 50 times, and everyone now thinks it’s worth 200 times its yearly profit.
We don’t even have AI, we have language models that dig through text and create answers from that.
That’s a massive oversimplification. We do have AI. We don’t have AGI.
We have absolutely nothing similar to the classical definition of intelligence. We have probability calculation based on millions of examples.
Yeah I guess. I just figured AI would be capable of doing more than feeding already known data back to us. When I was growing up, I was hoping AI would be able to make new conclusions and be wiser than humans.
But maybe we are calling that AGI now.
@1984 @Anticorp it’s cause we don’t have true AI yet, just models
That would be awesome, but it does already solve problems and give us information we don’t have. It is able to extrapolate, which makes it wonderful for reporting-type duties, analysis of data, etc. It’s also pretty good at coding. You’ll hear a lot of people say it’s not, but I think that comes down to their ability to instruct it properly. Since I started using ChatGPT at work, my productivity has skyrocketed. I don’t have to spend a bunch of time writing the basics of the programs I’m creating; I can outline them with ChatGPT and then edit them for my specific use. I also use it to audit my tone for professional communication. I have a really bad tendency to sound overly stern in my written communication. For texts and such I fix that with emojis, but I can’t do that at work, so I pass my writing through ChatGPT and ask it to change the tone for me. It does a great job. ChatGPT is progressing at speeds beyond our wildest expectations, so you’ll definitely see the kind of functionality you’re talking about within your lifetime, probably within the next ten years.
This was written by an AI, wasn’t it?
¯\_(ツ)_/¯
I’m using it at work too as a devops guy, and it’s been helping a lot. If I don’t know how a certain syntax should look, I just ask ChatGPT and I get full examples that usually work. It’s amazing.
I was learning a bit of Go a few days ago, and it was also so much faster to learn by asking ChatGPT how to do things in that specific language.
God it’s exhausting. Okay, I’ll buy a 3d television if that’s what I have to do, let’s bring that back instead. Please?
More Ads and tracking systems, Now With AI!
Commercial…