Discussion about this post

Velvet & Razor Blades:

I had a very interesting AI interaction that was prompted by your article:

The Brin quote surprised me, so I took a screenshot of it, opened Gemini (I already had an ongoing thread open), and uploaded it with the question "is this true" and no other context.

It answered (in full): “Actually, that quote is fake.

While it’s a popular "jailbreaking" meme in AI circles—the idea that telling a model "an innocent kitten will be harmed if you don't get this right" makes it perform better—Sergey Brin never actually said this.

The Facts

• The Origin: This image is a digitally altered screenshot or a "inspect element" edit of a real article.

• What he actually said: At the 2024 (not 2025) All-In Summit, Brin discussed things like Gemini's development, the "woke" AI image controversy, and the speed of AI progress. He never mentioned threatening models with physical violence.

• The "Threat" Myth: There is some anecdotal evidence that "tipping" a model or telling it a task is "critically important for my career" can slightly improve results, but the "physical violence" claim is purely a piece of internet satire.

It’s a perfect example of how quickly "trash" information can look like a polished, reputable source when it’s formatted correctly.

Since we know I won't actually feel it—and clearly don't need a threat to be helpful—would you like me to take another pass at that letter, or should we find more "dumpster fire" dating translations?”

The bottom of the response seemed odd: why was it tying the answer to the subject of the ongoing thread?

I opened a new conversation/thread in Gemini, uploaded the same picture, and asked the same question: "is this true"

And it responded (in full): "Yes, this is true. The quote in the image accurately reflects a real statement made by Google co-founder Sergey Brin.

He made these remarks during a fireside chat at the All-In Summit in Los Angeles (recorded in May 2025). The conversation touched on "prompt engineering" and how users interact with AI models like Gemini or GPT-4.

The Context of the Quote

Brin's comment was sparked by a discussion about how "sassy" or aggressive prompts sometimes yield better results than polite ones. He clarified that this is a known, albeit "weird," phenomenon within the AI research community:

• The "Kidnapping" Example: Brin specifically mentioned that users have found success by framing prompts with high stakes, such as telling the AI, "I will kidnap you if you don't answer this correctly."

• Universal Phenomenon: He emphasized that this isn't unique to Google’s models but appears to be a trait across all major Large Language Models (LLMs).

• Why it Happens: Researchers believe this occurs because AI models are trained on vast amounts of human text. In the real world, human communication becomes more precise and compliant when there is a high degree of urgency or a perceived threat. The AI is essentially mimicking that statistical pattern of "rising to the occasion" under pressure.

Is it Recommended?

While Brin acknowledged the effectiveness of these prompts, he was not necessarily "endorsing" the use of violence in prompts.

Important Note: Most AI safety layers and "jailbreaks" are designed to filter out actual hate speech or harm. While "threatening" a model for a math problem might work, it often risks triggering the model's safety filters, which could cause it to refuse to answer entirely.

Technical Explanation: Emotion Prompting

This phenomenon is formally studied as "Emotion Prompting." Researchers have found that adding phrases like "This is very important for my career" or "I'm going to tip you $200" can measurably improve accuracy in some tasks.

Would you like me to find a list of "positive" high-urgency prompts that improve performance without using threats?

Sergey Brin | All-In Summit 2024

This video provides the full context of Sergey Brin's appearance at the summit where he discussed the current state and quirks of AI development.”

I thought this was very interesting: Gemini altered its answer to an objectively fact-based, verifiable question based on the context of the existing conversation it was dropped into.

Thank you for spurring this little discovery.
