This page shows the source for this entry, with WebCore formatting language tags and attributes highlighted.
Title
Prompt-injection is not a solved problem
Description
The upshot of the video linked below is that prompt injection has not really been addressed in any significant way because the LLM, by its nature, doesn't give us a good way of doing so without neutering the main advantage of it.
<media href="https://www.youtube.com/watch?v=vTKHB4rkDZM" src="https://www.youtube.com/v/vTKHB4rkDZM" source="YouTube" width="560px" author="Computerphile / Mike Pound" caption="Generative AI's Greatest Flaw">
The problem boils down to the inability to distinguish between query and parameters. The prompt is the prompt. It's all just arranged in a way that will hopefully influence the result of pouring it all into the same funnel. There is no analogue in LLM prompts to the ability to separate query text from parameters (program from data), as there is, for example, in SQL.
Since prompt injection is relatively easy and currently nearly completely unpreventable, then it seems that giving LLMs so-called agentic powers is a recipe for disaster.
<img src="{att_link}prompt_injections.jpg" href="{att_link}prompt_injections.jpg" align="none" scale="35%">