AI Security

Prompt Engineering for Secure Code: A Developer's Guide

AliceSec Team
4 min read

Studies show that simply adding "follow OWASP secure coding best practices" to your prompts significantly improves the security of AI-generated code. Yet most developers still prompt AI assistants the same way they'd search Google—and get vulnerable code as a result.

In 2025, with 84% of developers reportedly using AI coding tools daily, prompt engineering isn't just a nice-to-have skill; it's a security imperative. This guide covers the specific techniques that produce safer code from Copilot, Claude, and Cursor.

Why Prompts Matter for Security

AI code assistants generate code by pattern matching against their training data. When you give a vague prompt like "write a login function," the model pulls from millions of examples—many of which contain SQL injection, hardcoded credentials, and other vulnerabilities.

The OpenSSF Best Practices Working Group puts it clearly:

"AI code assistants need guidance to produce secure and robust code."

Your prompt is now your most important design document. According to Auth0's security research:

"To build secure software in an AI-driven development environment, teams need to treat prompt-writing like they would architecture reviews or threat modeling."

The PCTF Framework

The most effective security-focused prompts follow the PCTF Framework: Persona, Context, Task, and Format.

1. Persona

Tell the AI what role to assume. This primes the model to adopt the appropriate security mindset.

```text
// Basic persona
"You are a senior security engineer..."

// Comprehensive persona
"You are a senior full-stack engineer with expertise in application
security. You follow OWASP Top 10 guidelines and never generate code
that contains known vulnerability patterns."
```

2. Context

Provide the environment, constraints, and security requirements. This is critical for architectural decisions.

```text
"I'm building a user authentication system for a Next.js 15 application.
The backend uses PostgreSQL with Prisma ORM. This is a healthcare
application that must be HIPAA compliant. Users include both patients
and medical staff with different permission levels."
```

3. Task

Specify exactly what you need, including explicit security requirements.

```text
"Create a password reset flow that:
- Generates a cryptographically secure reset token
- Stores the token with a 1-hour expiration
- Rate limits reset requests to 3 per hour per email
- Logs all reset attempts for audit purposes
- Never reveals whether an email exists in the system"
```
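To make the expectations concrete, here is a hedged Python sketch of what the token-generation step of a good answer looks like. The function name and the store-a-hash choice are illustrative, not part of the prompt; rate limiting and audit logging would live in the surrounding route handler.

```python
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

def create_reset_token() -> tuple[str, str, datetime]:
    """Generate a reset token; return (token, hash to store, expiry)."""
    token = secrets.token_urlsafe(32)  # cryptographically secure, not random.random()
    # Store only a hash so a leaked database doesn't expose live tokens
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    expires_at = datetime.now(timezone.utc) + timedelta(hours=1)  # 1-hour expiration
    return token, token_hash, expires_at
```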

4. Format

Define the output structure, language features, and security patterns to use.

```text
"Output requirements:
- TypeScript with full type safety (no 'any' types)
- Use bcrypt for password hashing (cost factor 12)
- Include comprehensive error handling that doesn't leak system details
- Add JSDoc comments explaining security decisions"
```

Secure Prompt Templates

Here are ready-to-use templates for common scenarios.

Database Query Function

Insecure prompt:

```text
Write a function to get users by their email
```

Secure prompt:

```text
Write a secure function to retrieve users by email address.

Requirements:
- Use parameterized queries to prevent SQL injection
- Validate email format before querying
- Return only non-sensitive user fields (no passwords, tokens)
- Include appropriate error handling
- Follow OWASP secure coding guidelines

Tech stack: Node.js with pg library for PostgreSQL
```
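The prompt targets Node.js with pg, but the underlying pattern is portable. A hedged Python analogue using the stdlib sqlite3 driver (table and column names are hypothetical):

```python
import re
import sqlite3

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def get_user_by_email(conn: sqlite3.Connection, email: str):
    # Validate format first; reject obvious garbage before touching the database
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email format")
    # Parameterized query: the driver binds the value, preventing SQL injection.
    # Select only non-sensitive columns -- no password hashes or tokens.
    return conn.execute(
        "SELECT id, email, display_name FROM users WHERE email = ?",
        (email,),
    ).fetchone()
```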

API Authentication

Insecure prompt:

```text
Write a function to authenticate with the GitHub API using this key: ghp_xxx
```

Secure prompt:

```text
Write a Node.js function that authenticates against the GitHub API.

Security requirements:
- Use an environment variable called GITHUB_TOKEN
- Never log or expose the token in error messages
- Implement request timeout (30 seconds)
- Handle rate limiting gracefully
- Follow best practices for secure API key management
```
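What that prompt should produce looks roughly like the following Python sketch (stdlib only; the helper name is illustrative, and real code would also back off on 403/429 rate-limit responses):

```python
import os
import urllib.error
import urllib.request

def github_get(path: str) -> bytes:
    token = os.environ.get("GITHUB_TOKEN")  # never hardcode the token
    if not token:
        raise RuntimeError("GITHUB_TOKEN environment variable is not set")
    req = urllib.request.Request(
        f"https://api.github.com{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:  # 30-second timeout
            return resp.read()
    except urllib.error.HTTPError as err:
        # Report the status code only; echoing request details could leak the token
        raise RuntimeError(f"GitHub API error: HTTP {err.code}") from None
```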

File Upload Handler

Insecure prompt:

```text
Write a file upload function for my server
```

Secure prompt:

```text
Write a secure Python Flask route for file uploads.

Security requirements:
- Validate file types (allow only PNG, JPG, PDF)
- Limit file size to 5MB
- Prevent path traversal attacks
- Generate random filenames to prevent overwrites
- Store files outside the web root
- Scan filenames for malicious patterns
- Follow OWASP file upload best practices
```
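The filename handling those requirements call for can be sketched independently of Flask; a stdlib-only helper with hypothetical names (size limits and storing outside the web root belong in the route and deployment config):

```python
import os
import secrets

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # an allowlist, never a denylist

def safe_storage_name(original_name: str) -> str:
    # basename() drops directory components, defusing "../../etc/passwd"
    ext = os.path.splitext(os.path.basename(original_name))[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type {ext!r} not allowed")
    # A random name prevents overwrites and neutralizes hostile filenames
    return secrets.token_hex(16) + ext
```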

User Input Processing

Insecure prompt:

```text
Create a search function for products
```

Secure prompt:

```text
Create a secure product search function for a React application.

Security requirements:
- Sanitize all user input before display (prevent XSS)
- Use parameterized queries for database search
- Implement search result pagination (max 50 per page)
- Add rate limiting metadata for the frontend
- Escape special characters in search terms
- Never reflect raw user input in error messages
```
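The XSS requirement, sketched in Python rather than React for brevity; `html.escape` plays the role that React's default JSX escaping (or a sanitizer such as DOMPurify) plays in the browser:

```python
import html

def render_search_header(raw_query: str) -> str:
    # Escape before embedding in HTML so <script> payloads render as inert text
    safe = html.escape(raw_query, quote=True)
    return f'<h2>Results for "{safe}"</h2>'
```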

System Instructions for AI Tools

Most AI coding assistants support persistent system instructions that apply to all conversations.

Claude Code (CLAUDE.md)

Create a CLAUDE.md file in your project root:

```markdown
# Security Guidelines

## Code Generation Rules
- Always use parameterized queries for database operations
- Never hardcode secrets, API keys, or credentials
- Follow OWASP Top 10 guidelines for all code
- Use cryptographically secure random number generation
- Implement proper input validation on all user inputs

## When Generating Code
- Include comments explaining security-relevant decisions
- If a request is ambiguous from a security perspective, ask for clarification
- Prefer established security libraries over custom implementations
- Flag any potential security concerns in your response

## Dependency Guidelines
- Prefer dependencies with active maintenance
- Check for known vulnerabilities before suggesting packages
- Never suggest deprecated or abandoned libraries
```

GitHub Copilot (.github/copilot-instructions.md)

```markdown
# Copilot Security Instructions

When generating code:
1. Use parameterized queries for all database operations
2. Validate and sanitize all user inputs
3. Never include hardcoded credentials or secrets
4. Follow OWASP secure coding practices
5. Use HTTPS for all external API calls
6. Implement proper error handling that doesn't leak system info

For authentication:
- Use bcrypt for password hashing (minimum cost factor 10)
- Implement rate limiting on auth endpoints
- Use secure session management

For file operations:
- Validate file paths to prevent traversal attacks
- Limit allowed file types and sizes
- Store uploads outside web root
```

Cursor Rules (.cursorrules)

```text
Security-first code generation rules:

ALWAYS:
- Use parameterized queries, never string concatenation for SQL
- Validate inputs at system boundaries
- Use HTTPS for external requests
- Hash passwords with bcrypt (cost >= 12)
- Generate secure random tokens with crypto.randomBytes
- Sanitize data before HTML rendering

NEVER:
- Hardcode API keys, passwords, or secrets
- Use eval() or similar dynamic code execution
- Disable SSL certificate verification
- Log sensitive data (passwords, tokens, PII)
- Use MD5 or SHA1 for password hashing
- Trust client-side validation alone
```
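Rules like these translate directly into code. As one example, a stdlib Python stand-in for the password-hashing rule (bcrypt itself is a third-party package; PBKDF2-HMAC-SHA256 with a high iteration count is the standard-library alternative):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # high work factor, same intent as bcrypt cost >= 12

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique, cryptographically random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```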

The Security Review Prompt

After generating code, use a follow-up prompt to catch issues:

```text
Review the code you just generated for security vulnerabilities.

Check specifically for:
1. SQL injection (CWE-89)
2. Cross-site scripting (CWE-79)
3. Hardcoded credentials (CWE-798)
4. Path traversal (CWE-22)
5. Insecure cryptography (CWE-327)
6. Missing authentication (CWE-306)
7. OWASP Top 10 2025 violations

For each finding, explain:
- The vulnerability type and CWE ID
- Why it's dangerous
- How to fix it

If no issues found, confirm which security patterns are correctly implemented.
```

Self-Documenting Security Prompts

Ask the AI to explain its security decisions as it generates code:

```text
When you generate code, include comments explaining the security-relevant
decisions you made (e.g., "Using parameterized query to prevent SQL
injection", "Bcrypt with cost 12 for NIST-compliant password hashing").

If my request is ambiguous from a security perspective, ask clarifying
questions before generating the code.
```

This transforms the AI from a silent code generator into a security mentor.

Industry Standard Keywords

Mentioning specific standards triggers better security awareness in AI models:

| Keyword | Effect |
| --- | --- |
| OWASP Top 10 | General web security coverage |
| OWASP ASVS | Application security verification |
| CWE/SANS Top 25 | Common weakness enumeration |
| NIST guidelines | Cryptographic standards |
| PCI DSS | Payment security requirements |
| HIPAA | Healthcare data protection |
| SOC 2 | Enterprise security controls |

Example usage:

```text
Implement user session management following OWASP ASVS Level 2
requirements and PCI DSS session timeout guidelines.
```

Organizational Prompt Libraries

Individual prompting isn't scalable. Endor Labs recommends that organizations:

  1. Create centralized prompt libraries maintained by security champions
  2. Version control prompts alongside code
  3. Review and update prompts when new vulnerabilities emerge
  4. Share blessed templates for common security-critical operations

Example team structure:

```text
/prompts
  /auth
    login.md
    password-reset.md
    session-management.md
  /database
    query-templates.md
    migration-patterns.md
  /api
    endpoint-security.md
    rate-limiting.md
  /files
    upload-handlers.md
    download-security.md
```

Verification Is Non-Negotiable

No prompt, no matter how detailed, guarantees secure output. 71% of developers don't merge AI-generated code without manual review—and you should be in that group.

After prompting for secure code:

  1. Run static analysis (ESLint security plugins, Semgrep)
  2. Scan dependencies (npm audit, Snyk)
  3. Test manually for OWASP Top 10 vulnerabilities
  4. Review with security-focused eyes

```bash
# Post-AI code security checklist
npm audit
npx eslint --plugin security src/
npx semgrep --config=p/security-audit src/
```

Quick Reference Card

For any code generation prompt, include:

  1. Role: "You are a security-conscious developer..."
  2. Standards: "Follow OWASP Top 10 guidelines..."
  3. Specific requirements: "Use parameterized queries, validate input..."
  4. Anti-patterns: "Never hardcode credentials..."
  5. Output format: "Include security comments..."

Red flags in AI output:

  • String concatenation in SQL queries
  • eval(), exec(), or dynamic code execution
  • Hardcoded tokens or credentials
  • Missing input validation
  • dangerouslySetInnerHTML without sanitization
  • Disabled SSL verification
  • MD5/SHA1 for passwords
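Several of these red flags are mechanically greppable. A toy Python scanner to illustrate the idea (the patterns are deliberately crude; use Semgrep or ESLint security rules for real review):

```python
import re

# Each entry: (regex for a red-flag pattern, human-readable description)
RED_FLAGS = [
    (r"execute\(\s*[\"'].*[\"']\s*\+", "string concatenation in SQL query"),
    (r"\beval\(", "dynamic code execution"),
    (r"(?i)(api[_-]?key|password|secret)\s*=\s*[\"'][^\"']+[\"']", "hardcoded credential"),
    (r"verify\s*=\s*False", "disabled SSL verification"),
]

def scan(source: str) -> list[str]:
    """Return a description for each red-flag pattern found in source."""
    return [msg for pattern, msg in RED_FLAGS if re.search(pattern, source)]
```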

Practice Your Skills

Understanding prompt engineering is one thing—recognizing vulnerable code patterns is another. Test your ability to spot AI-generated security flaws with our interactive challenges.

---

Prompt engineering for security is an evolving discipline. As AI models improve, so should your prompts. Bookmark this guide and check back for updates.
