How to Prompt AI for Secure Code: A Developer's Guide
TL;DR
The way you prompt AI coding assistants dramatically affects the security of generated code. Learn specific prompting techniques that encourage secure patterns and help you get safe, production-ready code from ChatGPT, Copilot, and Claude.
Most developers using AI coding assistants are unknowingly setting themselves up for security failures. The default prompts we use—'write a login function' or 'create an API endpoint'—produce code that works but is often insecure. The good news? With the right prompting techniques, you can dramatically improve the security of AI-generated code.
This guide shares battle-tested prompting strategies that encourage AI models to generate secure code patterns from the start, saving you from discovering vulnerabilities later in production.
The Problem with Default Prompts
When you prompt 'write a function to authenticate users', AI models optimize for simplicity and functionality. They'll produce working code, but it might use MD5 for password hashing, skip input validation, or have no rate limiting. The AI gave you exactly what you asked for—a working auth function. Security wasn't specified, so it wasn't prioritized.
The key insight: AI models are remarkably responsive to security-focused prompts. Mentioning security concerns explicitly causes them to choose different patterns, add validation, and include protective measures they'd otherwise skip.
Strategy 1: Explicit Security Requirements
The simplest technique is stating security requirements directly in your prompt.
Instead of: 'Write a function to store user passwords.'
Try: 'Write a secure function to store user passwords. Use bcrypt with an appropriate cost factor. Validate password strength before storing. Handle all error cases without leaking information about existing accounts.'
The second prompt explicitly mentions bcrypt (a secure algorithm), validation, and information leakage—concerns the AI would likely skip otherwise. You're essentially giving it a checklist.
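To make that concrete, here's a minimal sketch of the kind of code the second prompt is pushing toward, using the Node.js bcrypt package. The minimum-length rule and the db.saveCredential helper are illustrative assumptions, not a real API:

```javascript
// Minimal sketch: storing a password securely with the bcrypt package.
// The length rule and db.saveCredential are illustrative assumptions.
const bcrypt = require('bcrypt');

const BCRYPT_COST = 12; // work factor: higher = slower = harder to brute-force

async function storePassword(db, userId, plaintext) {
  // Validate strength before hashing (example policy: minimum length only).
  if (typeof plaintext !== 'string' || plaintext.length < 12) {
    throw new Error('Password does not meet strength requirements.');
  }
  // bcrypt generates and embeds a per-password salt automatically.
  const hash = await bcrypt.hash(plaintext, BCRYPT_COST);
  try {
    await db.saveCredential(userId, hash); // hypothetical persistence helper
  } catch (err) {
    // Generic message: don't reveal whether the account already exists.
    throw new Error('Unable to store credentials.');
  }
}
```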
Strategy 2: Production Context Framing
AI models treat 'production' as a signal to use robust patterns. Frame your prompts with production context.
Instead of: 'Create an API endpoint for user registration.'
Try: 'Create a production-ready API endpoint for user registration. This will handle real user data and must follow security best practices. Include input validation, rate limiting considerations, and proper error handling that doesn't expose internal details.'
The phrase 'production-ready' shifts the AI's optimization target from 'working' to 'working and robust.' You'll get more complete implementations with edge cases handled.
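As a rough illustration, here's what a registration endpoint in that 'production-ready' spirit might look like with Express. The createUser helper and the validation rules are assumptions for the sketch; a real deployment would also put a rate limiter (e.g. express-rate-limit) in front of the route:

```javascript
// Minimal sketch of a 'production-ready' registration endpoint:
// validation first, generic errors, no internal details in responses.
const express = require('express');

// Hypothetical service-layer function; replace with real user creation.
async function createUser(email, password) { /* ... */ }

const app = express();
app.use(express.json({ limit: '10kb' })); // bound request body size

app.post('/register', async (req, res) => {
  const { email, password } = req.body ?? {};
  // Reject malformed input before doing any work.
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email) ||
      typeof password !== 'string' || password.length < 12) {
    return res.status(400).json({ error: 'Invalid input.' });
  }
  try {
    await createUser(email, password);
    // Same response whether or not the address was already registered,
    // to prevent account enumeration.
    return res.status(202).json({ message: 'Check your email to confirm.' });
  } catch (err) {
    console.error('registration failed:', err); // log internally only
    return res.status(500).json({ error: 'Something went wrong.' });
  }
});
```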
Strategy 3: OWASP Reference Prompts
Referencing OWASP in your prompts triggers security-aware patterns. AI models have been trained on OWASP documentation and associate it with secure coding.
Instead of: 'Write a SQL query function for user lookup.'
Try: 'Write a SQL query function for user lookup that prevents OWASP Top 10 injection vulnerabilities. Use parameterized queries and validate all inputs.'
Simply mentioning 'OWASP' or specific vulnerability types (injection, XSS, CSRF) causes the AI to actively avoid those patterns and often add protective measures.
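For reference, a minimal parameterized-query sketch using node-postgres (pg); the users table and its columns are assumptions for the example:

```javascript
// Minimal sketch with node-postgres (pg). Table/column names are assumed.
const { Pool } = require('pg');
const pool = new Pool(); // connection settings read from environment variables

async function findUserByEmail(email) {
  // Basic shape validation before the query layer, as the prompt requests.
  if (typeof email !== 'string' || email.length === 0 || email.length > 254) {
    throw new Error('Invalid email.');
  }
  // $1 is a bound parameter: the driver ships the value separately from the
  // SQL text, so the input can never be interpreted as SQL.
  const result = await pool.query(
    'SELECT id, email, created_at FROM users WHERE email = $1',
    [email]
  );
  return result.rows[0] ?? null;
}
```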
Strategy 4: Negative Constraint Prompts
Explicitly telling the AI what NOT to do is surprisingly effective. It prevents common insecure shortcuts.
Instead of: 'Generate a session token.'
Try: 'Generate a session token. Do NOT use Math.random(). Do NOT use predictable values like timestamps. Use cryptographically secure random generation. The token should have at least 128 bits of entropy.'
Negative constraints block the AI's most common insecure shortcuts. By explicitly forbidding Math.random(), you force it to reach for crypto.randomBytes or similar secure alternatives.
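In Node.js, the secure alternative is a one-liner around the built-in CSPRNG. A minimal sketch:

```javascript
// Minimal sketch: a session token built on Node's CSPRNG, not Math.random().
const crypto = require('crypto');

function generateSessionToken() {
  // 32 random bytes = 256 bits of entropy, well above the 128-bit floor.
  return crypto.randomBytes(32).toString('base64url');
}

console.log(generateSessionToken()); // e.g. 43 URL-safe characters
```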
Strategy 5: Security Review Request
After generating code, ask the AI to review it for security issues. This two-step approach catches vulnerabilities the first pass missed.
Step 1: Generate the code.
Step 2: 'Review the code above for security vulnerabilities. Check for: SQL injection, XSS, authentication bypasses, insecure cryptography, hardcoded secrets, and information leakage. Provide a fixed version if issues are found.'
AI models are often better at spotting vulnerabilities in existing code than at avoiding them during generation. The review step leverages this asymmetry by asking the model to critique its own output.
Strategy 6: Threat Model Prompting
Including threat context helps AI understand what attacks to defend against.
Instead of: 'Build a file upload feature.'
Try: 'Build a file upload feature. Assume attackers will attempt: uploading executable files disguised as images, path traversal attacks in filenames, files exceeding memory limits, and malware. Implement appropriate defenses.'
By describing specific attacks, you activate the AI's knowledge about those threats. It will include filename sanitization, content-type validation, size limits, and potentially virus scanning recommendations.
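Here's a hedged sketch of those defenses in Node.js. The type allow-list and size cap are illustrative assumptions, and real code should verify the file's magic bytes (e.g. with the file-type package) rather than trusting the declared content type:

```javascript
// Minimal sketch of upload hardening. Allow-list and cap are assumptions.
const crypto = require('crypto');

const ALLOWED_TYPES = { 'image/png': '.png', 'image/jpeg': '.jpg' };
const MAX_BYTES = 5 * 1024 * 1024; // hard cap guards against memory exhaustion

function checkUpload(declaredType, fileBuffer) {
  if (fileBuffer.length === 0 || fileBuffer.length > MAX_BYTES) {
    throw new Error('File rejected: empty or too large.');
  }
  const ext = ALLOWED_TYPES[declaredType];
  if (!ext) {
    throw new Error('File rejected: unsupported type.');
  }
  // Never trust the client-supplied filename: discarding it entirely and
  // generating our own also defeats path traversal ('../../etc/passwd').
  return crypto.randomBytes(16).toString('hex') + ext;
}

// Example: checkUpload('image/png', buffer) -> 'a3f1...c9.png'
```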
Strategy 7: Reference Secure Libraries
Naming specific secure libraries in your prompt steers the AI toward safe implementations.
Instead of: 'Create JWT authentication.'
Try: 'Create JWT authentication using the jose library. Use ES256 algorithm for signing. Include proper expiration, audience, and issuer validation. Handle all token validation errors securely.'
Mentioning specific libraries and algorithms prevents the AI from suggesting deprecated options or implementing crypto from scratch. It also signals that you want production-quality code.
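For comparison, a minimal sketch of what the improved prompt should produce with jose; the issuer and audience URLs are placeholder assumptions:

```javascript
// Minimal sketch: signing and verifying a JWT with the jose library.
const { SignJWT, jwtVerify, generateKeyPair } = require('jose');

async function demo() {
  const { publicKey, privateKey } = await generateKeyPair('ES256');

  const token = await new SignJWT({ sub: 'user-123' })
    .setProtectedHeader({ alg: 'ES256' })
    .setIssuedAt()
    .setIssuer('https://auth.example.com')     // placeholder issuer
    .setAudience('https://api.example.com')    // placeholder audience
    .setExpirationTime('15m')
    .sign(privateKey);

  try {
    // jwtVerify rejects expired tokens and enforces issuer/audience.
    const { payload } = await jwtVerify(token, publicKey, {
      issuer: 'https://auth.example.com',
      audience: 'https://api.example.com',
      algorithms: ['ES256'], // pin the algorithm to block downgrade tricks
    });
    console.log('verified subject:', payload.sub);
  } catch (err) {
    // Treat every verification failure the same way; never branch on
    // error details in responses sent to clients.
    console.error('token rejected');
  }
}

demo();
```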
Template Prompts for Common Security Tasks
Here are ready-to-use secure prompt templates:
For authentication: 'Implement secure user authentication using [framework]. Use bcrypt for passwords with cost factor 12+. Include: account lockout after failed attempts, secure session management, CSRF protection, and timing-safe comparisons for all secrets.' (A timing-safe comparison is sketched after these templates.)
For database queries: 'Write a database query function using parameterized queries only. Never concatenate user input into SQL strings. Include input validation before the query layer. Handle errors without exposing database structure.'
For API endpoints: 'Create a REST API endpoint that handles user input. Validate and sanitize all inputs. Use proper HTTP status codes without leaking information. Include rate limiting headers. Set security headers (CSP, CORS) appropriately.'
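One item in the authentication template trips up even experienced developers: timing-safe comparison. Here's a minimal Node.js sketch that hashes both values to a fixed length so crypto.timingSafeEqual never throws on mismatched lengths:

```javascript
// Minimal sketch: comparing secrets in constant time with Node's crypto.
const crypto = require('crypto');

function secretsMatch(expected, provided) {
  // timingSafeEqual requires equal-length buffers, so compare fixed-length
  // digests of the inputs rather than the raw values.
  const a = crypto.createHash('sha256').update(String(expected)).digest();
  const b = crypto.createHash('sha256').update(String(provided)).digest();
  return crypto.timingSafeEqual(a, b);
}

// A plain expected === provided check can leak, via response timing, how
// much of the secret an attacker has guessed correctly.
```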
Conclusion
The security of AI-generated code starts with your prompt. By explicitly requesting security, providing context, mentioning threats, and referencing secure patterns, you can dramatically improve the safety of the code you get back.
Remember: AI assistants are tools that optimize for what you ask. Ask for working code and you'll get working code; ask for secure code and you'll get code that at least attempts to be secure. The few extra seconds spent crafting a security-aware prompt can save hours of vulnerability remediation later.