No you’re wrong. For example you can have a full conversation with GPT-4 about pointers or garbage collection, and come away from that with a really good understanding of the fundamentals.

You won’t get that from just reading a book or article, because you can’t ask any questions. The conversational nature of ChatGPT is amazing. I’ve learned things in days that have taken months in the past.

You can also paste in a snippet of code that doesn’t work, and it will usually explain why and how to fix it.

I very rarely encounter hallucination.

Traister101

Can’t say that I struggled to understand pointers but if GPT helped you conceptualize em that’s good. I really don’t see much utility in even the current iterations of these LLMs. Take copilot for example, ultimately all it actually helps with is boilerplate which if you are writing enough for it to be meaningfully helpful you can have a fancy IDE live template or just a plain old snippet.

There’s a lot of interesting things it could be doing like checking if my documentation is correct or the like but all it does is shit I could do myself with less hassle.

There’s also the whole issue of LLMs having no concept of anything. You aren’t having a conversation, it just spits out the words it thinks are most likely to occur in the given context. That can be helpful for extremely generic questions it’s been trained on thanks to Stack Overflow but GPT doesn’t actually know the right answer. It’s like really fancy autocorrect based on the current context. What this means is you absolutely cannot trust anything it says unless you know enough about the topic to determine what it outputs is accurate.
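The “really fancy autocorrect” point can be made concrete with a toy sketch. This is a deliberately simplified bigram model, not how GPT actually works internally (real LLMs use neural networks over tokens, not word frequency tables), but it captures the core idea being argued: the model picks the continuation most likely to follow the previous context, with no concept of what any word means.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: it only knows how often each word followed
# the previous one in its "training data". No meaning, just statistics.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it followed "the" most often
print(predict_next("sat"))  # "on"
```

The prediction for “the” is “cat” simply because that pairing occurred most often, and the model is confidently silent (`None`) on anything it never saw. Scale that idea up by billions of parameters and you get fluent output that is still, at bottom, picking likely continuations rather than knowing answers.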

To draw a comparison to written language (hopefully you don’t know Japanese) is 私 or 僕 “I”? Can you confidently rely on autocorrect to pick the right one? Probably not, cause the first one わたし (watashi) is “I” and the second ぼく (boku) is also “I” (more boyish). Trusting an LLM’s output without being able to ensure its accuracy is like trusting autocorrect to use the right word in a language you don’t know. Sure it’ll work out fine generally but when it fails you don’t have the knowledge to even notice.

Because of these failings I don’t see much utility in LLMs especially seeing as the current obsession is chat apps geared at the general public to fool around with.

Fucking love your example dude.

I’ve found ChatGPT3 OK for low level stuff, but I stopped using it pretty quickly once I went to trying to get it to help build intermediate stuff.

If it’s making errors in simple script design, it can’t handle more.

It is fab for the basics, but I wouldn’t trust it for learning anything else more complex for exactly the reasons you said.

Be liable to write my own backdoors that way hahah

Traister101

Thanks! Lot of people don’t seem to realize that GPT doesn’t actually have any idea what words mean. It just outputs stuff based on how likely it is to show up after the previous stuff. This results in very interesting behavior but there’s nothing conceptually “there”, there’s no thinking and so, no conversation to be had.
