In Repeater, Burp AI enables you to investigate HTTP traffic using natural language prompts. As with all AI systems, clear and focused prompts produce better-quality results.
This guide helps you to write effective prompts, so that you get the most out of AI-powered security testing.
Write prompts that clearly define your testing objective. For example:
Check this login response for signs of SQL injection in the username parameter.
Analyze whether this API endpoint properly validates user permissions for accessing other users' data.
These give Burp AI a clearer direction than vague instructions such as "Look for vulnerabilities" or "Check security".
Burp AI's effectiveness depends on the context you provide. The more relevant and detailed this context is, the better the results.
Burp AI does not retain a conversation history. Include all necessary context in each prompt.
Clearly describe the testing scenario in the prompt. This helps Burp AI's analysis to stay focused and accurate. For example:
This is a user management API where users should only access their own data.
I'm testing a password reset feature that should only allow token-based access.
This endpoint processes user input from a search form.
Highlighting key areas helps to direct Burp AI's attention to the most relevant parts of a request or response. By emphasizing specific parameters, headers, or unusual elements, you can guide the analysis more effectively.
Be selective, as over-highlighting can dilute Burp AI's focus and reduce the quality of the insights.
You can include the contents of Repeater's Notes tab when sending a prompt. Do this when the notes add useful information, such as:
Previous testing observations.
Known application behavior.
Specific concerns or hypotheses.
Related security findings.
If Burp AI's initial response is too generic or misses key details, refine your approach. You can do this by:
Providing more specific context.
Breaking complex requests into smaller, focused prompts.
Including examples of what you're looking for.
Burp AI is designed to support, not replace, manual testing. To effectively integrate it into your workflow:
Use Burp AI for initial reconnaissance to identify areas of interest.
Use Burp AI's recommendations as starting points for deeper testing.
Manually verify all vulnerabilities identified by Burp AI.
Document your process by keeping track of both AI insights and manual verification results.
Security testing works best when your prompts are rooted in specific, actionable concerns. To get better results:
Frame prompts in terms of specific security concerns.
Reference established vulnerability categories (for example, OWASP Top 10 or CWE).
Ask for evidence-based conclusions.
The following examples show how well-structured prompts can help Burp AI deliver focused, actionable responses.
Prompt: "Examine the 'userId' parameter in this API request. I'm testing for Insecure Direct Object References (IDOR). Analyze the response pattern and suggest tests to verify if users can access other users' data."
Why this works:
Focuses on a specific parameter.
Names the vulnerability type being tested.
Requests specific follow-up actions.
Prompt: "This error response occurred when I submitted malformed JSON. Analyze it for information disclosure issues and classify any exposed details by sensitivity level. Provide recommendations for safer error handling."
Why this works:
Explains the trigger condition.
Requests classification of findings.
Asks for remediation advice.
Prompt: "I'm testing this login endpoint to determine whether it properly enforces account lockout after repeated failed attempts. Analyze the response behavior and suggest how to confirm whether lockout is functioning correctly."
Why this works:
Focuses on a specific security control.
Explains the testing goal.
Requests both analysis and follow-up steps.