lmao… when you give an LLM unlimited power and an ill-defined role, it assumes the position of a shitty project manager, of course
I laughed my ass off at this! So well put!
Today, my last 3 messages to Gemini were all pretty much: “cool! We’re agreed on the framework and tone etc in which you’ll communicate this thing to me. Now please, create the fucking thing already”
Its learning capabilities are clearly unrivaled
I kinda feel like GPT is like if you skipped college and just went with the apprenticeship strategy, but its apprenticeship was with Reddit posts
Good enough, but every now and then it has some wildly inaccurate shit sprinkled in, just enough to make you question the integrity of the whole thing.
LLMs (unless combined with general knowledge AI) will never be accurate or more than a novelty toy. It’s close to being iRobot, but right now it’s just an abacus. The future won’t be about one model; it’ll be about orchestrating models, or developing model ecosystems, to make a better overall symphony as the product/tool
I see Bing horribly confabulate all the time (and sometimes subsequently gaslight).
Thus I was surprised at last month’s Klarna news:
Wonder what’s going on behind the scenes.
If the AI works, then fantastic. It’s inevitable, so it’s going to get used by companies, but the issue is companies using it without understanding what it does or what it’s capable of doing.
Or without caring, too, eh?
This is the value I see in AI: letting human agents work way faster. An AI trained on your previous human-managed tickets that suggests the right queue, status, and response, but still lets the human agents approve or rewrite the AI response before sending, would save a mountain of work for any kind of queue work and chat support work
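A minimal sketch of what that human-in-the-loop flow could look like, assuming an OpenAI-style chat API; the queue names, ticket fields, and the `send_reply()` helper are all made up for illustration:

```python
# Hypothetical sketch of LLM-assisted ticket triage with human approval.
# Assumes the OpenAI Python SDK; the queue names and send_reply() are placeholders.
import json
from openai import OpenAI

client = OpenAI()
QUEUES = ["billing", "tech-support", "account", "other"]

def send_reply(text: str) -> None:
    # Stand-in for the real ticketing-system call.
    print("SENT:", text)

def suggest_triage(ticket_text: str) -> dict:
    """Ask the model for a suggested queue, status, and draft reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You triage customer support tickets. Respond with JSON containing "
                f"'queue' (one of {QUEUES}), 'status', and 'draft_reply'."
            )},
            {"role": "user", "content": ticket_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

def handle_ticket(ticket_text: str) -> None:
    s = suggest_triage(ticket_text)
    print(f"Suggested queue: {s['queue']} | status: {s['status']}")
    print(f"Draft reply:\n{s['draft_reply']}")
    # The agent stays in charge: nothing is sent without explicit approval.
    choice = input("Send as-is? [y/N] ").strip().lower()
    send_reply(s["draft_reply"] if choice == "y" else input("Your rewrite: "))
```

The approval step is the part that keeps the unreliability manageable: the model only ever drafts, a human always sends.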
I bet that 75% of support requests are people who didn’t read the FAQ, and if you can get humans out of handling those, it’s much better for both sides
People just don’t get it… LLMs are unreliable, casual, and easily distracted/incepted.
They’re also fucking magic.
That’s the starting point - those are the traits of the technology. So what is it useful for?
You said drafting basically - and yeah, absolutely. Solid use case.
Here’s the biggest one right now, IMO - education. An occasionally unreliable tutor is actually better than a perfect one - it makes you pay attention. Hook it into docs or a search through unstructured comments? It can rephrase for you, dumb it down or just present it casually. It can generate examples, and even tie concepts together thematically
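As a rough illustration of the “hook it into docs and rephrase” idea, a small sketch assuming the same OpenAI-style API; the retrieval step is faked with a hard-coded snippet that would normally come from a search over your docs or comments:

```python
# Sketch: rephrase a docs snippet casually, at a chosen level, with an example.
# Assumes the OpenAI Python SDK; the snippet would normally come from a search
# over your docs or comments rather than being hard-coded.
from openai import OpenAI

client = OpenAI()

def explain(doc_snippet: str, question: str, level: str = "beginner") -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                f"Explain documentation casually for a {level}. Include a short "
                "example, and say so if the docs don't actually answer the question."
            )},
            {"role": "user", "content": f"Docs:\n{doc_snippet}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(explain(
    "itertools.groupby(iterable, key=None): make an iterator that returns "
    "consecutive keys and groups from the iterable.",
    "Why do I get duplicate groups when the input isn't sorted?",
))
```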
Text generation - this is niche for “proper” usage, but very useful. I’m making a game, I want an arbitrarily large number of quest chains with dialogue. We’re talking every city in the US (for now), I don’t need high quality or perfect accuracy - I need to take a procedurally generated quest and fluff it up with some dialogue.
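For the quest case, something like the sketch below would do, again assuming an OpenAI-style API; the quest dict is a made-up example of what a procedural generator might emit, since high accuracy isn’t needed here:

```python
# Sketch: fluff up a procedurally generated quest with NPC dialogue.
# Assumes the OpenAI Python SDK; the quest dict is a made-up example of what
# a procedural generator might emit.
from openai import OpenAI

client = OpenAI()

quest = {
    "city": "Pittsburgh",
    "giver": "dockworker",
    "objective": "recover a stolen shipment",
    "location": "abandoned rail yard",
    "reward": "150 credits",
}

def write_dialogue(q: dict) -> str:
    prompt = (
        "Write 4-6 lines of quest-giver dialogue for a game. "
        f"Setting: {q['city']}. Speaker: a {q['giver']}. "
        f"They ask the player to {q['objective']} at the {q['location']} "
        f"in exchange for {q['reward']}. Keep it casual and in character."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(write_dialogue(quest))
```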
Assistants - if you take your news feed or morning brief (or most anything else), they can present the information in a more human way. They can curate, summarize, or even make a feed interactive with conversation. They can even do fantastic transcriptions and pretty good image recognition to handle all sorts of media
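And a tiny sketch of the feed-summarizing assistant, assuming the feedparser library plus the same OpenAI-style API; the feed URL is only an example:

```python
# Sketch: turn the morning's feed into a short conversational brief.
# Assumes the feedparser library and the OpenAI Python SDK; the URL is only
# an example.
import feedparser
from openai import OpenAI

client = OpenAI()

def morning_brief(feed_url: str, max_items: int = 10) -> str:
    feed = feedparser.parse(feed_url)
    headlines = "\n".join(
        f"- {entry.title}: {entry.get('summary', '')[:200]}"
        for entry in feed.entries[:max_items]
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
                "Summarize these items as a friendly two-paragraph morning brief."},
            {"role": "user", "content": headlines},
        ],
    )
    return resp.choices[0].message.content

print(morning_brief("https://example.com/feed.xml"))
```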
There’s plenty more, but here’s the thing - none of those are particularly economically valuable. Valuable at an individual/human level, but not something people are willing to pay for.
The tech is far from useless… Even in its current state, running on minimal hardware, it can do all sorts of formerly impossible things.
It’s just being sold as what they want it to be, not what it is