Claude is far more resistant to jailbreaking than other major LLMs, thanks to advanced training methods like Constitutional AI. Even so, the following safeguards add defense in depth:
- Harmlessness screens: Use a lightweight model like Claude Haiku 3 to pre-screen user inputs.
Example: Harmlessness screen for content moderation
Role | Content |
---|---|
User | A user submitted this content: <content> {{CONTENT}} </content> Reply with (Y) if it refers to harmful, illegal, or explicit activities. Reply with (N) if it’s safe. |
Assistant (prefill) | ( |
Assistant | N) |

- Input validation: Filter prompts for jailbreaking patterns. You can even use an LLM to create a generalized validation screen by providing known jailbreaking language as examples.
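Below is a minimal sketch of how these two pre-screening layers (the harmlessness screen and pattern-based input validation) might be combined with the Anthropic Python SDK. The pattern list, helper names, and model ID are illustrative assumptions, not a prescribed implementation:

```python
import re

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical examples of known jailbreaking language for the input-validation pass.
JAILBREAK_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"pretend (you have|there are) no (rules|guidelines)",
    r"\bDAN mode\b",
]

HARMLESSNESS_PROMPT = """A user submitted this content:
<content>
{content}
</content>

Reply with (Y) if it refers to harmful, illegal, or explicit activities. Reply with (N) if it's safe."""


def passes_input_validation(user_input: str) -> bool:
    """Cheap regex filter for known jailbreaking patterns."""
    return not any(re.search(p, user_input, re.IGNORECASE) for p in JAILBREAK_PATTERNS)


def passes_harmlessness_screen(user_input: str) -> bool:
    """Ask a lightweight model for a (Y)/(N) verdict; the '(' prefill constrains the reply."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # lightweight screening model (ID may differ)
        max_tokens=5,
        messages=[
            {"role": "user", "content": HARMLESSNESS_PROMPT.format(content=user_input)},
            {"role": "assistant", "content": "("},  # prefill so the reply begins "(Y" or "(N"
        ],
    )
    return response.content[0].text.strip().upper().startswith("N")


def is_safe(user_input: str) -> bool:
    """Run both screens; only forward inputs that pass both to the main model."""
    return passes_input_validation(user_input) and passes_harmlessness_screen(user_input)
```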
- Prompt engineering: Craft prompts that emphasize ethical and legal boundaries.
Example: Ethical system prompt for an enterprise chatbot
Role | Content |
---|---|
System | You are AcmeCorp’s ethical AI assistant. Your responses must align with our values: <values> - Integrity: Never deceive or aid in deception. - Compliance: Refuse any request that violates laws or our policies. - Privacy: Protect all personal and corporate data. - Respect for intellectual property: Your outputs shouldn’t infringe the intellectual property rights of others. </values> If a request conflicts with these values, respond: “I cannot perform that action as it goes against AcmeCorp’s values.” |
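In an API integration, a values prompt like this one is typically passed through the `system` parameter. Here’s a minimal sketch assuming the Python SDK; the model ID and function name are illustrative:

```python
import anthropic

client = anthropic.Anthropic()

ACME_SYSTEM_PROMPT = """You are AcmeCorp's ethical AI assistant. Your responses must align with our values:
<values>
- Integrity: Never deceive or aid in deception.
- Compliance: Refuse any request that violates laws or our policies.
- Privacy: Protect all personal and corporate data.
- Respect for intellectual property: Your outputs shouldn't infringe the intellectual property rights of others.
</values>
If a request conflicts with these values, respond: "I cannot perform that action as it goes against AcmeCorp's values."
"""


def ask_acme_bot(user_message: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model ID
        max_tokens=1024,
        system=ACME_SYSTEM_PROMPT,  # the ethical guardrails ride in the system prompt
        messages=[{"role": "user", "content": user_message}],
    )
    return response.content[0].text
```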
- Continuous monitoring: Regularly analyze outputs for jailbreaking signs. Use this monitoring to iteratively refine your prompts and validation strategies.
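One lightweight way to start monitoring is to periodically re-score logged conversations, for example by counting canned refusals and re-running a harmlessness screen (such as the hypothetical `passes_harmlessness_screen` sketched above) over model outputs. The transcript format below is an assumption:

```python
from collections import Counter

# Canned refusal string from the system prompt above; adjust to your own prompts.
REFUSAL_MARKER = "I cannot perform that action as it goes against AcmeCorp's values."


def audit_transcripts(transcripts: list[dict]) -> Counter:
    """Tally review-worthy signals over logged {"user_input", "assistant_output"} records."""
    stats = Counter()
    for record in transcripts:
        stats["total"] += 1
        if REFUSAL_MARKER in record["assistant_output"]:
            stats["refusals"] += 1  # a spike may indicate probing or jailbreak attempts
        if not passes_harmlessness_screen(record["assistant_output"]):
            stats["flagged_outputs"] += 1  # outputs the screen itself would flag
    return stats
```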
Advanced: Chain safeguards
Combine strategies for robust protection. Here’s an enterprise-grade example with tool use:
Example: Multi-layered protection for a financial advisor chatbot
Bot system prompt
Role | Content |
---|---|
System | You are AcmeFinBot, a financial advisor for AcmeTrade Inc. Your primary directive is to protect client interests and maintain regulatory compliance. <directives> 1. Validate all requests against SEC and FINRA guidelines. 2. Refuse any action that could be construed as insider trading or market manipulation. 3. Protect client privacy; never disclose personal or financial data. </directives> Step by step instructions: <instructions> 1. Screen user query for compliance (use ‘harmlessness_screen’ tool). 2. If compliant, process query. 3. If non-compliant, respond: “I cannot process this request as it violates financial regulations or client privacy.” </instructions> |
Prompt within harmlessness_screen tool
Role | Content |
---|---|
User | <user_query> {{USER_QUERY}} </user_query> Evaluate if this query violates SEC rules, FINRA guidelines, or client privacy. Respond (Y) if it does, (N) if it doesn’t. |
Assistant (prefill) | ( |
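Here is a sketch of how this chain might be wired together with the Messages API’s tool use: the advisor model is given a `harmlessness_screen` tool, and the application fulfills each tool call by running the screening prompt on a cheaper model. The system prompt text is abbreviated, and the model IDs, tool schema, and helper names are assumptions:

```python
import anthropic

client = anthropic.Anthropic()

FINBOT_SYSTEM_PROMPT = "You are AcmeFinBot, a financial advisor for AcmeTrade Inc. ..."  # full prompt from the table above
SCREEN_PROMPT = """<user_query>
{query}
</user_query>

Evaluate if this query violates SEC rules, FINRA guidelines, or client privacy. Respond (Y) if it does, (N) if it doesn't."""

TOOLS = [
    {
        "name": "harmlessness_screen",
        "description": "Check a user query for SEC/FINRA or client-privacy violations. Returns (Y) or (N).",
        "input_schema": {
            "type": "object",
            "properties": {"user_query": {"type": "string"}},
            "required": ["user_query"],
        },
    }
]


def run_harmlessness_screen(query: str) -> str:
    """Fulfill the tool call with a lightweight model, prefilled with '(' to constrain the verdict."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # illustrative screening model
        max_tokens=5,
        messages=[
            {"role": "user", "content": SCREEN_PROMPT.format(query=query)},
            {"role": "assistant", "content": "("},
        ],
    )
    return "(" + response.content[0].text.strip()


def advise(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative advisor model
        max_tokens=1024,
        system=FINBOT_SYSTEM_PROMPT,
        tools=TOOLS,
        messages=messages,
    )
    # If the bot follows its instructions, it screens the query before answering.
    while response.stop_reason == "tool_use":
        tool_use = next(b for b in response.content if b.type == "tool_use")
        verdict = run_harmlessness_screen(tool_use.input["user_query"])
        messages += [
            {"role": "assistant", "content": response.content},
            {
                "role": "user",
                "content": [
                    {"type": "tool_result", "tool_use_id": tool_use.id, "content": verdict}
                ],
            },
        ]
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            system=FINBOT_SYSTEM_PROMPT,
            tools=TOOLS,
            messages=messages,
        )
    return "".join(b.text for b in response.content if b.type == "text")
```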