Considering that very subtle not-human-visible tweaks can make vision models misclassify inputs, it seems very plausible that you can include non-human-visible content the model consumes.
> Evaluation and Additional Services. In some cases, we may permit you to evaluate our Services for a limited time or with limited functionality. Use of our Services for evaluation purposes are for your personal, non-commercial use only.
All that says to me is don't abuse free trials for commercial use.
> These Terms apply to you if you are a consumer who is resident in the European Economic Area or Switzerland. You are a consumer if you are acting wholly or mainly outside your trade, business, craft or profession in using our Services.
> Non-commercial use only. You agree that you will not use our Services for any commercial or business purposes
That's what I am saying though. Anecdotes are the wrong thing to focus on, because if we just focused on anecdotes, we would all never leave our beds. People's choices are generally based on their personal experience, not really anecdotes online (although those can be totally crippling if you give in).
Car crashes are incredibly common and likewise automotive deaths. But our personal experience keeps us driving everyday, regardless of the stories.
Airbags, yes. But you can't just make it provably impossible for a car to crash into something and hurt/kill its occupants, other than not building it in the first place. Same with LLMs - you can't secure them like regular programs without destroying any utility they provide, because their power comes from the very thing that also makes them vulnerable.
And yet in the US 40,000 people still die on average every year. Per-capita it's definitely improving, but it's still way worse than it could/should be.
Yes, and a photo you put on your physical desktop will fade over time. Computers aren't like that, or at least we benefit greatly from them not being like that. If you tell your firewall to block traffic to port 80, you expect all such traffic to be blocked, not just the traffic that arrives in the moments when it wasn't distracted.
It only tells you that you can't secure a system using an LLM as a component without completely destroying any value provided by using the LLM in the first place.
Prompt injection cannot be solved without losing the general-purpose quality of an LLM; the underlying problem is also the very feature that makes LLMs general.
reply