LLM system input is unsanitizable, according to NVidia:

The control-data plane confusion inherent in current LLMs means that prompt injection attacks are common, cannot be effectively mitigated, and enable malicious users to take control of the LLM and force it to produce arbitrary malicious outputs with a very high likelihood of success.

https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

@MalReynolds@slrpnk.net
link
fedilink
English
27M

Everything old is new again (GIGO)

Create a post

Post funny things about programming here! (Or just rant about your favourite programming language.)

Rules:

  • Posts must be relevant to programming, programmers, or computer science.
  • No NSFW content.
  • Jokes must be in good taste. No hate speech, bigotry, etc.
  • 1 user online
  • 123 users / day
  • 146 users / week
  • 522 users / month
  • 2.5K users / 6 months
  • 1 subscriber
  • 1.6K Posts
  • 35.6K Comments
  • Modlog