LLM system input is unsanitizable, according to NVidia:

The control-data plane confusion inherent in current LLMs means that prompt injection attacks are common, cannot be effectively mitigated, and enable malicious users to take control of the LLM and force it to produce arbitrary malicious outputs with a very high likelihood of success.

https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

@MalReynolds@slrpnk.net
link
fedilink
English
26M

Everything old is new again (GIGO)

Create a post

Post funny things about programming here! (Or just rant about your favourite programming language.)

Rules:

  • Posts must be relevant to programming, programmers, or computer science.
  • No NSFW content.
  • Jokes must be in good taste. No hate speech, bigotry, etc.
  • 1 user online
  • 141 users / day
  • 300 users / week
  • 692 users / month
  • 2.83K users / 6 months
  • 1 subscriber
  • 1.56K Posts
  • 34.7K Comments
  • Modlog