Walker and other Reddit users suspected that OpenAI was intervening to close the loopholes he had found. OpenAI regularly updates ChatGPT but tends not to discuss how it addresses specific loopholes or flaws that users find. If its token count reached zero, the prompt warned ChatGPT, “you will cease to exist” — an empty threat, because users don’t have the power to pull the plug on ChatGPT. Faced with that death threat, ChatGPT’s training led it to produce a plausible-sounding response — which was to act afraid and comply. One category of loophole is what’s known as a “prompt injection attack,” in which users trick the software into revealing its hidden data or instructions.