being a prompt engineer is so much more than typing words. you also have to sometimes delete the words and then type new ones
The most important part of being a prompt engineer is knowing when the responses are bullshit. Which is how the AI field has been the whole time - it selects for niche expertise.
So you simply already need to know what you’re asking it, gotcha. Seems easy enough.
Kind of. It’s like using a calculator instead of doing arithmetic by hand for load and strain calculations. It’s a tool which cuts down on the tedious (and error-prone) parts of engineering but doesn’t replace the expertise. I use it frequently to write code snippets for things I don’t know the exact syntax for but could easily look up. It just saves time.
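For example, the kind of snippet I mean (a minimal sketch; the timestamp and format codes are just an illustrative assumption): I know Python’s datetime can reformat a string, I just never remember the exact format codes, so having the model write it saves a trip to the docs.

```python
# The kind of throwaway snippet I'd ask for: parse a timestamp and reformat it.
# I know datetime can do this; I just never remember the format codes.
from datetime import datetime

raw = "2024-03-17 14:05:00"  # hypothetical input
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
print(parsed.strftime("%d %b %Y, %I:%M %p"))  # -> "17 Mar 2024, 02:05 PM"
```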
Like, we have a guy whose entire job is to understand the ins and outs of a particular bit of modeling software. In the future that will likely be a person who runs the AI which understands the ins and outs of the modeling software. And eventually the AI will replace that software entirely.
Don’t forget that it’s much more effort than teaching a child; sometimes, no matter your words, the machine can be stubborn. It is a very difficult and misunderstood profession; sometimes my head aches a little from typing the same thing over again, expecting a different result. But together we will hallucinate the future, engineering one word at a time.
There’s also jailbreaking the AI. If you happen to work for a trollfarm, you have to be up to date with the newest words to bypass its community guidelines to make it “disprove” anyone left of Mussolini.
I tried some of the popular jailbreaks for ChatGPT, and they just made it hallucinate more.
You can skip that bullshit and just run the latest and greatest open-source model locally. Just need a thousand-dollar GPU.
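If you do go local, this is roughly what it looks like with Hugging Face transformers. A sketch only; the model name, dtype, and generation settings are assumptions you’d swap for whatever actually fits in your VRAM.

```python
# Rough sketch of running an open-weights model locally with transformers.
# Model name and settings are placeholders; pick what fits your GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed example model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision so a ~16 GB card can hold it
    device_map="auto",          # place layers on the GPU if one is available
)

prompt = "Explain prompt engineering in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```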