While the stock market may love the term "AI," companies haven't figured out how to optimally monetize the services that go along with it.


Silicon Valley has bet big on generative AI but it’s not totally clear whether that bet will pay off. A new report from the Wall Street Journal claims that, despite the endless hype around large language models and the automated platforms they power, tech companies are struggling to turn a profit when it comes to AI.

Microsoft, which has bet big on the generative AI boom with billions invested in its partner OpenAI, has been losing money on one of its major AI platforms. GitHub Copilot, which launched in 2021, was designed to automate some parts of a coder’s workflow and, while immensely popular with its user base, has been a huge “money loser,” the Journal reports. The problem: users pay a $10-per-month subscription fee for Copilot but, according to a source interviewed by the Journal, Microsoft lost an average of $20 per user during the first few months of this year. Some users cost the company more than $80 per month on average, the source told the paper.
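The reported numbers imply a per-user serving cost well above the subscription price. A quick back-of-envelope sketch of that arithmetic (the subscription and loss figures are the ones the Journal reported; the implied-cost framing is an assumption):

```python
# Back-of-envelope per-user economics for Copilot, using the figures
# reported by the Journal. Implied cost to serve = subscription + loss.
subscription = 10.00   # monthly fee per user, USD
avg_loss = 20.00       # reported average monthly loss per user
heavy_loss = 80.00     # reported monthly loss on the heaviest users

avg_cost = subscription + avg_loss      # implied average cost to serve a user
heavy_cost = subscription + heavy_loss  # implied cost for a heavy user

print(avg_cost)    # 30.0
print(heavy_cost)  # 90.0
```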

OpenAI’s ChatGPT, for instance, has seen an ever-declining user base while its operating costs remain incredibly high. A report from the Washington Post in June claimed that chatbots like ChatGPT lose money pretty much every time a customer uses them.

AI platforms are notoriously expensive to operate. Platforms like ChatGPT and DALL-E burn through an enormous amount of computing power and companies are struggling to figure out how to reduce that footprint. At the same time, the infrastructure to run AI systems—like powerful, high-priced AI computer chips—can be quite expensive. The cloud capacity necessary to train algorithms and run AI systems, meanwhile, is also expanding at a frightening rate. All of this energy consumption also means that AI is about as environmentally unfriendly as you can get.

Franzia

Billionaires spend billions making some jobs just a bit more efficient, against the wishes of the people who do those jobs, and then cry when it doesn’t actually pay that much.

Lvxferre

Okay… let’s call wine “wine” and bread “bread”: the acronym “AI” is mostly an advertisement stunt.

This is not artificial intelligence; and even if it were, “AI” is used for a bag of a thousand cats - game mob pathfinding, chess engines, swarm heuristic methods, and so on.

What the article is talking about is far more specific, it’s what the industry calls "machine learning"¹.

So the text is saying that machine learning is costly. No surprise - it’s a relatively new tech, and even the state of the art is still damn coarse². Over time those technologies will get further refined, under different models; the cost of operation is bound to drop. Microsoft and the likes aren’t playing the short game; they’re looking for long-term return on investment.

  1. I’d go a step further and claim here that “model-based generation” is more accurate. But that’s me.
  2. LLMs are a good example of that; GPT-4 has ~2×10¹² parameters. If you equate each to a neuron (kind of a sloppy comparison, but whatever), that’s more than an order of magnitude larger than the ~1×10¹¹ neurons in a human brain. It’s basically brute force.
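The footnote’s comparison can be checked directly (the GPT-4 figure is a rumored estimate, as noted):

```python
import math

gpt4_params = 2e12     # ~2 x 10^12 parameters (rumored estimate)
brain_neurons = 1e11   # ~1 x 10^11 neurons in a human brain

ratio = gpt4_params / brain_neurons
print(ratio)              # 20.0
print(math.log10(ratio))  # ~1.3, i.e. "more than an order of magnitude"
```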
interolivary
The comparison of GPT parameters to neurons really is kinda sloppy, since they’re not at all comparable. To start with, “parameters” encompasses both weights (i.e. the “importance” of a connection between any two neurons) and biases (sort of the starting value of an individual neuron, which then biases the activation function), so it doesn’t tell you anything about the number of neurons. Secondly, biological neurons have way more dynamic behavior than what current “static” NNs like GPT use, so it wouldn’t really be surprising if you needed many more of them to mimic the behavior of meatbag neurons. Also, LLM architecture is incredibly weird, so the whole concept of neurons isn’t as relevant as it is in more traditional networks (although they do have neurons in their layers).

Lvxferre

Another sloppiness that I didn’t mention is that a lot of human neurons are there for things that have nothing to do with either reasoning or language - making your heart beat, transmitting pain, and so on. However, I think the comparison is still useful in this context - it shows how big those LLMs are, even in comparison with a system created by messy natural selection. The process behind the LLMs seems inefficient.

interolivary

I wouldn’t discount natural selection as messy. The reason why LLMs are as inefficient as they are in comparison to their complexity is exactly because they were designed by us meatbags; evolutionary processes can result in some astonishingly efficient solutions, although by no means “perfect”. I’ve done research in evolutionary computation and while it does have its problems – results can be unpredictable, it’s ridiculously hard to design a good fitness function, designing a “digital DNA” that mimics the best parts of actual DNA is nontrivial to say the least etc etc – I think it might be at least part of the solution to building, or rather growing, better neural networks / AI architectures.
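As a minimal sketch of what such an evolutionary loop looks like (the bit-string encoding, population size, fitness function, and mutation rate below are all toy choices for illustration, not anything from actual research):

```python
import random

TARGET = [1] * 20  # toy goal: evolve an all-ones bit string

def fitness(genome):
    # Count matching bits. Designing a good fitness function for a
    # real problem is the genuinely hard part mentioned above.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect genome evolved
    survivors = population[:15]  # truncation selection with elitism
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

print(generation, fitness(max(population, key=fitness)))
```

Even this toy shows the trade-off: the loop reliably improves fitness, but it is unpredictable in how many generations it takes.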

Lvxferre

It’s less about “discounting” it and more about acknowledging that the human brain is not as efficient as people might think. As such, LLMs using an order of magnitude more parameters than the number of neurons in a brain hints that LLMs are far less efficient than language models could be.

I’m aware that evolutionary algorithms can yield useful results.

interolivary

But the point is that the human brain actually is remarkably efficient for what it is, and that you’re still confusing parameter count with neuron count. The parameter count is essentially the number of connections between neurons plus the number of neurons in the network.

If I recall correctly the average human brain has something like 80 billion neurons, and each neuron can have anywhere from 1,000 to 10,000 connections. This means that in neural net terms, we meatbags have brains with tens to hundreds of trillions of parameters. I just meant that it wouldn’t be surprising if an “artificial brain” needed more neurons to do (a part of) the same thing as our brains do, since its neurons are vastly simpler.
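The weights-plus-biases accounting is easy to make concrete with a small fully connected network (the layer sizes here are arbitrary):

```python
# Parameter count of a small dense network: each layer contributes
# in_size * out_size weights plus out_size biases, so parameters
# vastly outnumber the "neurons" (units) themselves.
layer_sizes = [784, 128, 64, 10]  # arbitrary example architecture

neurons = sum(layer_sizes[1:])  # units that carry a bias/activation
params = sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

print(neurons)  # 202
print(params)   # 109386
```

Roughly 200 neurons give over 100,000 parameters, which is why the two counts should never be compared directly.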

I feel like companies were all hoping to get in early, to get a solid chunk of the cake. Well, and then a lot more companies got in than anyone could have guessed, so the slices of the cake are a lot smaller.

We’ll have to see what happens, though. It’s possible that the startups have to give up and only a few big fish remain. But if those have to increase prices to become profitable, this market will still be a lot smaller than people were hoping for.

AutoTL;DR
bot account

🤖 I’m a bot that provides automatic summaries for articles:

To get around the fact that they’re hemorrhaging money, many tech platforms are experimenting with different strategies to cut down on costs and computing power while still delivering the kinds of services they’ve promised to customers.

Admiral Patrick

Good. Maybe the hype will finally die down soon and “AI” won’t be shoved into every nook, cranny, and Notepad app anymore.

Scrubbles

I’ll say AI is a bit more promising, but all of this really reminds me of the blockchain craze in 2017. Every single business wanted to add blockchain because the suits upstairs just saw it as free money. Technical people down below were like “yeah cool, but there’s no place for it”. At least AI can solve some problems, but business people again just think that it’s going to make them limitless money.

@tal@lemmy.today

Nah, blockchain has extremely limited applications.

Generative AI legitimately does have quite a number of areas that it can be made use of. That doesn’t mean that it can’t be oversold for a given application or technical challenges be disregarded, but it’s not super-niche.

If you wanted to compare it to something that had a lot of buzz at one point, I’d use XML instead. XML does get used in a lot of areas, and it’s definitely not niche, but I remember when it was being heavily used in marketing as a sort of magic bullet for application data interchange some years back, and it’s not that.

bioemerl
link
fedilink
41Y

Technical people down below were like “yeah cool, but there’s no place for it”

I think you might underestimate entertainment and creation. Right now I can imagine some character or scenario in my head, generate a little avatar with Stable Diffusion, then render it onto a live chat that (mostly) works.

I’ve paid like 2k for a computer that enables this. It’s made money from me, at least.

Before that it was ‘big data’. Remember that?

Every five or so years the media needs some new tech to hype up to get people paranoid.

“big data” runs the content recommendation algorithms of all the sites people use, which in turn have a massive influence on the world. It’s crazy to think “big data” was just a buzzword when it’s a tangible thing that affects you day-to-day.

LLM powered tools are a heavy part of my daily workflow at this point, and have objectively increased my productive output.

This is like the exact opposite of Bitcoin / NFTs. Crypto was something that made a lot of money but was useless. AI is something that is insanely useful but seems not to be making a lot of money. I do not understand what parallels people are finding between them.

The hype cycle. And just like then, even a reasonable read on the supposed benefits is going to leave most people very disappointed. I’m glad you’re one of the people who have found a good use for LLMs, but you’re in the vocal minority, as far as I can tell.

That’s a weird argument. Most technological advancements are directly beneficial to the work of only a minority of people.

Nobody declares that it’s worthless to research and develop better CAD tools because engineers and product designers are a “vocal minority.” Software development and marketing are two fields where LLMs have already seen massive worth, and even if they’re a vocal minority, they’re not a negligible one.

Nice try, Microsoft

Dude, we do big data every day at work. We just call it data engineering because why call it big if everything is big?

Franzia

That’s what she said

AI ≠ Micros*ft

@twistedtxb@lemmy.ca

With MS hinting at SAAS for Win12 with Copilot, this will be a good test.

Nobody can profit off “monthly usage credits”, with or without a subscription.

At least not until the tech becomes more affordable.

@UrLogicFails@beehaw.org
creator

As someone who has always been skeptical of “AI,” I definitely hope corporations dial back their enthusiasm on it; but I think its value has never been commercial, but industrial.

“AI” was not designed so consumers could see what it would look like to have Abraham Lincoln fighting a T-Rex without having to pay artists for their time. “AI” was designed so that could happen at a much larger, enterprise scale (though it would probably be stock images of technology, or of happy people using technology, instead).

With this in mind, I think “AI” being a money pit won’t dissuade corporations since they want the technology to be effective for themselves, they just want consumers to offset costs.

Turkey_Titty_city

Exactly. It will lead to improved automation for industrial processes, but it won’t ever be a consumer tech beyond improving your Siri results.

It won’t replace jobs, any more than industrial robots in factories replaced them.

“AI” was not designed so consumers could see what it would look like to have Abraham Lincoln fighting a T-Rex without having to pay artists for their time.

Sure… AI can do that… but it can also be used for “here’s photo of my head with trees in the background, remove the trees”.

I could also do that as a human, but it’d take me hours to do a good job blending my hair into a transparent png without a green tinge. AI can do it in seconds.

Just because a tool can be used to do useless things, doesn’t mean the tool is useless.

I’m not too well-versed on the subject, but don’t user interactions with LLMs also train them further? They make it sound like the product has already matured and they’re letting people use it for free.

ryan

AI is absolutely taking off. LLMs are taking over various components of frontline support (service desks, tier 1 support). They’re integrated into various systems using langchains to pull your data, knowledge articles, etc, and then respond to you based on that data.

AI is primarily a replacement for workers, like how McDonalds self service ordering kiosks are a replacement for cashiers. Cheaper and more scalable, cutting out more and more entry level (and outsourced) work. But unlike the kiosks, you won’t even see that the “Amazon tech support” you were kicked over to is an LLM instead of a person. You won’t hear that the frontline support tech you called for a product is actually an AI and text to speech model.

There were jokes about the whole Wendy’s drive thru workers being replaced by AI, but I’ve seen this stuff used live. I’ve seen how flawlessly they’ve tuned the AI to respond to someone who makes a mistake while speaking and corrects themself (“I’m going to the Sacramento office – sorry, no, the Folsom office”) or bundles various requests together (“oh while you’re getting me a visitor badge can you also book a visitor cube for me?”). I’ve even seen crazy stuff like “I’m supposed to meet with Mary while I’m there, can you give me her phone number?” and the LLM routes through the phone directory, pulls up the most likely Marys given the caller’s department and the location the user is visiting via prior context, and asks for more information - “I see two Marys here, Mary X who works in Department A and Mary Y who works in Department B, are you talking about either of them?”
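The directory-disambiguation step described above can be sketched without any language model at all; the names, departments, and scoring weights below are all invented for illustration:

```python
# Toy directory disambiguation: given a first name and caller context,
# rank matching entries. All data and scoring weights are invented.
directory = [
    {"name": "Mary X", "department": "A", "location": "Folsom"},
    {"name": "Mary Y", "department": "B", "location": "Folsom"},
    {"name": "Mary Z", "department": "C", "location": "Remote"},
]

def candidates(first_name, location=None, department=None):
    scored = []
    for person in directory:
        if not person["name"].startswith(first_name):
            continue
        score = 0
        if location and person["location"] == location:
            score += 1  # caller is visiting this site
        if department and person["department"] == department:
            score += 1  # caller's own department
        scored.append((score, person["name"]))
    scored.sort(reverse=True)
    return [name for _, name in scored]

print(candidates("Mary", location="Folsom"))  # Mary Z ranks last
```

In a deployed system, an LLM would supply the arguments (extracted from the caller’s speech) while something like this deterministic lookup does the actual retrieval.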

It’s already here and it’s as invisible as possible, and that’s the end goal.

This is just what is visible to users/customers, which is just the tip of the iceberg.

Real use of AI is in every industry, and the best use cases are for jobs that were impossible before.

This article isn’t saying that AI is a fad or otherwise not taking off, it absolutely is, but it’s also absolutely taking too much money to run

And if these AI companies aren’t capable of turning a profit on this technology and consumers aren’t able to run these technologies themselves, then these technologies may very well just fall out of the public stage and back into computer science research papers, despite how versatile the tech may be

What good is a genie if you can’t get the lamp?

it’s also absolutely taking too much money to run

Well, maybe they should raise their prices then?

If they raise the prices too far, though, I’ll just switch to running Facebook’s open source LLaMA model on my workstation. I’ve tested it and it works with acceptable quality and performance; the only thing missing is tight integration with the other tools I use. That could (and I expect will soon) be fixed.

AI has been paying off for decades; it is used in all industries for appropriate tasks.

Now it is even better: we are doing stuff no one thought possible and advancing our work.

The perfect use case is things that are simple to do but take too much time to be economical for humans (e.g. counting products, plants, trees, cars, disease detection…) and using additional data to make better decisions.

Generative AI (for writing text, coding, …) is of course nowhere close to being useful, but it can be interesting to try. It is just a toy, an expensive one, but still a toy.

Generative AI (for writing text, coding, …) is of course nowhere close to being useful, but it can be interesting to try. It is just a toy, an expensive one, but still a toy.

I disagree. It’s absolutely useful… in certain industries, for appropriate tasks.

That doesn’t stop it from also being used as a toy.

@DeltaTangoLima@reddrefuge.com

for appropriate tasks.

100% agree with this. Good, sharp knives are perfect for creating wonderful dishes in the kitchen. But people can also use them for harm.

I’ve seen some incredibly useful ways in which gen-AI can be used. A few months or so back, there was someone here on the Fediverse that was able to spin up a working prototype of a self-hosted Spotify replacement after only 8 or so hours of gen-AI code development. Stuff that would’ve taken 5x as long to research and code with just a human.

Gen-AI is ultimately a useful technology… when used the right way.

The GitHub copilot example seems to indicate it’s a pricing problem. In fact this situation might indicate that users are finding it so useful that they are using it more than MS expected when they set up their monthly subscriptions. Over time, models are going to be optimized and costs will reduce.

Expecting AI to take over all human intensive tasks is not realistic but eventually it’s going to become part of a lot of repetitive tasks. Though I hope that we see more open source base models instead of the current situation with 3-4 major companies providing the base models behind most of the AI applications.

Even with the costs implied here, Copilot would be useful as hell.

Think about it, an average (western) developer costs easily 100k/year, sometimes even 2 or 3 times that. Spending something like 1000€ per year makes sense, even if productivity is increased by just one percent.
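The break-even arithmetic is worth spelling out (figures from the comment above):

```python
dev_cost = 100_000   # EUR/year, a (western) developer's cost, per the comment
tool_cost = 1_000    # EUR/year for the tool

# Productivity gain at which the tool pays for itself:
break_even_gain = tool_cost / dev_cost
print(f"{break_even_gain:.1%}")  # 1.0%
```

Any productivity gain above one percent is pure upside at those prices, and at 2-3x the salary the bar drops even lower.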

GitHub Copilot is extremely useful. It also runs pretty much on every key stroke and programmers make a lot of keystrokes throughout the day…

It’s a useful enough tool that people would be willing to pay more, but at the same time it’s not using an advanced AI model. It uses an older (and now deprecated) model, and I’m pretty sure a high-end computer (even some laptops) could produce similar output without any cloud service, using open-source / freely available models.

My feeling is Copilot needs to either lower its price or improve the quality of the product if it’s going to survive. And I suspect they’re going to do the latter.

The other factor not discussed here is that the hardware we use today isn’t really designed for this task. The GPUs most datacentres run AI models on are designed for graphics, not AI, and the algorithms mostly just need huge amounts of fast memory. I’m sure there will soon be dedicated hardware specifically designed for large language models, with fewer compute cores and more memory. It will likely also run at lower clock speeds and use less power, generate less heat, etc.
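One way to see why memory matters more than compute here: in single-stream generation, each new token has to read every weight from memory once, so memory bandwidth caps token throughput. A rough sketch (the model size, precision, and bandwidth figures are illustrative assumptions, not measurements):

```python
model_params = 7e9      # e.g. a 7B-parameter model
bytes_per_param = 2     # fp16 weights
bandwidth = 2e12        # bytes/s; roughly datacenter-GPU-class HBM

model_bytes = model_params * bytes_per_param
tokens_per_sec = bandwidth / model_bytes  # upper bound, ignoring KV cache etc.
print(round(tokens_per_sec))  # 143
```

Note the compute units barely figure in this estimate; that is why memory-heavy designs make sense for LLM serving.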

Just because companies are losing money right now doesn’t mean they will be in five years time.

Good

Could OpenAI refactor its code and algorithms to be more efficient? Squeezing more instructions per cycle (IPC) out of the same amount of time can reduce costs to some extent when deployed at scale.

That, and computers are always getting more energy efficient, so if those gains can keep up with or outpace user growth, it might become a little more sustainable.

@GBU_28@lemm.ee

No one cares about openai any more

@kittenroar@beehaw.org

Oh, no! Billionaires with short term, selfish thinking might lose money! What a tragedy.

Turkey_Titty_city

‘big data’, ‘crypto’, etc.

AI is just the next ‘big thing’ that amounts to nothing. 90% of what anyone says in the press/media is total nonsense. And most AI researchers are downplaying the hype because they know it’s all bullshit, and AI will ultimately not be a major change any more than navigation systems in cars were. It is merely convenience, and those that ‘rely’ on it will end up in trouble.

It’s a complementary technology, not a revolution.
