OpenAI, a non-profit AI company that will lose anywhere from $4 billion to $5 billion this year, will at some point in the next six or so months convert into a for-profit AI company, at which point it will continue to lose money in exactly the same way.

If it doesn’t know how to answer a shitty question, it shouldn’t try to BS the answer.

No answer is better than a wrong answer delivered confidently.

Scary le Poo
11M

GIGO.

No, this is a problem of bad error handling for queries it cannot answer.

A search engine would give empty results instead of hallucinating.

What error? It gave you a string of tokens that seemed likely according to its training data. That’s all it does.

If you ask it what color the sky is, it will tell you it’s blue, not because it knows that’s true, but because those words “fit together”. Pretty much the only way to avoid this issue is to put some kind of filter in front of the LLM that tries to catch prompts known to produce unwanted results, and silently replaces your prompt with something like “say: sorry, I don’t know”.

I’m being very reductive here, but that’s the principle of how these things work - the LLMs are not capable of determining the truthfulness of their responses.
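
To sketch that filter idea as a toy (every name here is made up; this isn’t any vendor’s real API):

```python
# Hypothetical pre-filter in front of an LLM. BLOCKED_PATTERNS and
# call_llm are stand-ins, not a real model or real API.

BLOCKED_PATTERNS = [
    "who said the quote",      # example of a prompt class known to go wrong
    "how many people live on",
]

def call_llm(prompt: str) -> str:
    return "some plausible-sounding tokens"  # stand-in for the actual model

def guarded_query(prompt: str) -> str:
    # Catch prompts known to produce unwanted results and short-circuit
    # with a canned refusal instead of letting the model guess.
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        return "Sorry, I don't know."
    return call_llm(prompt)

print(guarded_query("Who said the quote about fish and bicycles?"))
# -> "Sorry, I don't know."
```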

Onihikage
21M

You’re entirely correct, but in theory they can give it a pretty good go; it just requires a lot more computation, developer time, and non-LLM data structures than these companies are willing to spend money on. For any single query, they’d have to get dozens if not hundreds of separate responses from additional LLM instances spun up on the side, many of which would be customized for specific subjects, as well as specialty engines such as Wolfram Alpha for anything directly requiring math.
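
Roughly like this, as a toy sketch; every function here is a made-up stand-in, not any real service’s API:

```python
# Hypothetical fan-out: send one query to several specialist
# responders and only answer when a majority agrees. All stand-ins.

from collections import Counter

def ask_general_llm(query: str) -> str:
    return "blue"  # stand-in for a general-purpose model

def ask_science_llm(query: str) -> str:
    return "blue"  # stand-in for a subject-tuned model

def ask_math_engine(query: str) -> str:
    return "n/a"   # stand-in for e.g. a Wolfram Alpha call

SPECIALISTS = [ask_general_llm, ask_science_llm, ask_math_engine]

def answer(query: str) -> str:
    # Collect independent answers; return one only if a clear
    # majority agrees, otherwise admit uncertainty.
    votes = Counter(fn(query) for fn in SPECIALISTS)
    best, count = votes.most_common(1)[0]
    return best if count > len(SPECIALISTS) // 2 else "Sorry, I don't know."

print(answer("What color is the sky?"))  # -> "blue" (2 of 3 agree)
```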

LLMs in such a system would be used only as modules in a handcrafted algorithm, modules that do exactly what they’re good at in a way that is useful. To give an example, if you pass a specific context to an LLM with the right format of instructions, and then ask it a yes-or-no question, even very small and lightweight models often give the same answer a human would. In this way, human-readable text can be converted into binary switches for an algorithmic state machine with thousands of branches of pre-written logic.
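
A minimal sketch of one such switch, assuming a hard-coded stub in place of the model call (all names are hypothetical):

```python
# Hypothetical LLM used as a binary switch inside hand-written logic.
# llm_yes_no is a stub standing in for a small model, not a real API.

def llm_yes_no(context: str, question: str) -> bool:
    # In a real system: send the context plus a strict
    # "answer only YES or NO" instruction to a small model and
    # parse the reply. Hard-coded here so the sketch runs.
    reply = "YES" if "refund" in context.lower() else "NO"
    return reply == "YES"

def route_ticket(ticket_text: str) -> str:
    # Pre-written branches; the LLM only flips the switches.
    if llm_yes_no(ticket_text, "Is the customer asking for a refund?"):
        return "billing_queue"
    if llm_yes_no(ticket_text, "Is this a bug report?"):
        return "engineering_queue"
    return "general_queue"

print(route_ticket("I want a refund for my last order."))  # billing_queue
```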

Not only would this probably use an even more insane amount of electricity than the current approach of “build a huge LLM and let it handle everything directly”, it would take much longer to generate responses to novel queries.
