danielbln

Bard is kind of trash though. GPT-4 tends to do so much better in my experience.

@focus@lemmy.film

They are both shit at adding and subtracting numbers, dates and whatnot… they both can't do basic math, unfortunately.

danielbln

It’s a language model, I don’t know why you would expect math. Tell it to output code to perform the math, that’ll work just fine.
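To illustrate that approach, here is a minimal sketch of the kind of snippet you might get back if you ask the model to emit code instead of an answer (the function name and parameters are made up for the example): the model writes deterministic code, and the interpreter, not the model, does the arithmetic.

```python
# Instead of trusting the model's arithmetic, ask it to emit code like
# this and run it yourself -- the interpreter never hallucinates a sum.
def multiply_divide_power(a: float, b: float, divisor: float, exponent: float) -> float:
    """Compute ((a * b) / divisor) ** exponent deterministically."""
    return (a * b / divisor) ** exponent

print(multiply_divide_power(7, 8, 10, 3))
```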

Then it should say so, instead of attempting and failing at the one thing computers are supposed to be better than us at.

danielbln

Well, if I try to use Photoshop to calculate a polynomial it’s not gonna work all that well either, right tool for the job and all.

The fact that LLMs are terrible at knowing what they don’t know should be well known by now (ironically).

And if Photoshop had a way to ask it to do that, it'd be a mistake.

GPT thinking it knows something and hallucinating is ultimately a bug, not a feature, no matter what the apologists say.

@focus@lemmy.film

I know. It’s still baffling how much it messes up when adding two numbers.

@dan@upvote.au

It’s not baffling at all… It’s a language model, not a math robot. It’s designed to write English sentences, not to solve math problems.

danielbln

I just asked GPT-4:

What’s 7 * 8 divided by 10, to the power of 3?

Its reply:

Let’s break this down step by step:

First, multiply 7 and 8 to get 56.

Then, divide 56 by 10 to get 5.6.

Finally, raise 5.6 to the power of 3 (5.6 * 5.6 * 5.6) to get 175.616.

So, 7 * 8 divided by 10, to the power of 3, equals 175.616.
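For what it's worth, the steps GPT-4 walked through above are easy to verify with a one-liner of real code:

```python
# Check GPT-4's step-by-step arithmetic: (7 * 8 / 10) ** 3
result = (7 * 8 / 10) ** 3
print(result)  # close to 175.616, up to float rounding
```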

@focus@lemmy.film

It’s pretty hit or miss though… I’ve had lots of good calculations with the odd wrong one sprinkled in, making it unreliable for doing maths, mostly because it presents every result with absolute certainty.

@Blackmist@feddit.uk

I haven’t used GPT-4 for that, but it’s all dependent on the data fed into it. If you ask a question about JavaScript, there’s loads of that out there for it to look at. But ask it about Delphi, and it’ll be less accurate.

And they’ll both suffer from the same issue, which is when they reach the edge of their “knowledge”, they don’t realise it and output data anyway. They don’t know what they don’t know.

danielbln

These LLMs generally, and GPT-4 in particular, really shine if you supply enough of the right context. Give it some code to refactor, have it turn hastily slapped-together code into idiomatic, well-written code, align a code snippet to a different design pattern, and so on. Platforms like https://phind.com pull in web search results as you interact with them to give you more correct and current information.

LLMs are by no means a panacea and have serious limitations, but they are also magic for certain tasks and something I would be very, very sad to miss in my day to day.
