Edge Case Testing Best Practices
Edge case testing (fuzzing) automatically generates test variations that expose bugs in boundary conditions, error handling, and security controls. This guide covers strategies for effective fuzzing.
Designing Flows for Effective Fuzzing
Not all flows fuzz equally well. Design baseline flows with fuzzing in mind.
Flow Characteristics That Fuzz Well
| Characteristic | Why It Matters | Example |
|---|---|---|
| Clear input fields | AI can identify what to modify | Form with labeled email, password fields |
| Well-defined assertions | Failures are meaningful | "Error message appears for invalid email" |
| Atomic actions | Each step is isolated | One input per step, not multiple in one |
| Validation points | Tests can verify responses | Form shows inline validation errors |
Good fuzzing candidates:
- Login and registration forms
- Search functionality
- Checkout and payment flows
- Profile and settings forms
- API-driven data entry
Flow Characteristics That Fuzz Poorly
| Characteristic | Problem | Alternative |
|---|---|---|
| No inputs | Nothing to modify | Use for navigation-only tests; don't fuzz |
| Ambiguous targeting | AI can't identify elements | Improve element descriptions first |
| Missing assertions | Can't detect failures | Add assertions before fuzzing |
| Chained dependencies | Later steps fail if earlier modified | Break into smaller flows |
Fuzzing requires input fields to modify. Navigation-only flows (click, scroll, navigate) won't produce meaningful edge case variants.
Scenario Type Selection Strategy
Rock Smith offers 15 scenario types across three categories. Select strategically based on your testing goals.
Core Scenarios
Use for functional edge case coverage:
| Scenario | Best For | Example Test |
|---|---|---|
| Boundary Values | Numeric inputs, text limits | Max length username, zero quantity |
| Invalid Format | Structured data fields | Malformed email, invalid date |
| Special Characters | Text inputs | Unicode, emojis, control characters |
| Field Length | All text inputs | Empty string, very long input |
| Type Mismatch | Mixed data types | Letters in phone number field |
| Required Field | Form validation | Submit with empty required fields |
| State Transition | Multi-step workflows | Skip steps, go back, repeat |
Recommended combinations:
- Forms: Boundary Values + Invalid Format + Required Field
- Search: Special Characters + Field Length
- Workflows: State Transition + Boundary Values
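To make the Boundary Values scenario concrete, here is a minimal sketch of the kind of value set it targets for a numeric input. The `boundary_values` helper is hypothetical, not part of Rock Smith; it simply enumerates the classic just-below/at/just-above values around each limit, plus zero.

```python
def boundary_values(min_val, max_val):
    """Return the classic boundary-value set for a numeric input:
    just below, at, and just above each limit, plus zero."""
    values = {min_val - 1, min_val, min_val + 1,
              max_val - 1, max_val, max_val + 1, 0}
    return sorted(values)

# A quantity field that accepts 1..99:
print(boundary_values(1, 99))  # [0, 1, 2, 98, 99, 100]
```

Feeding values like these into a quantity field exercises both the "zero quantity" and "maximum value" cases from the table above in a single pass.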
Security Scenarios
Use for penetration testing and vulnerability detection:
| Scenario | Risk Level | What It Tests |
|---|---|---|
| Authentication Bypass | Critical | Session handling, direct URL access |
| Authorization Escalation | Critical | Permission boundaries, IDOR vulnerabilities |
| Injection Attack | Critical | SQL, NoSQL, command injection |
| XSS Variants | High | Script injection, encoded payloads |
Prioritize security testing for:
- Authentication flows (login, password reset)
- Payment processing
- User data access and modification
- Administrative functions
- File uploads
Always run security fuzzing in staging or development environments, never in production.
Data Scenarios
Use for internationalization and data handling:
| Scenario | Best For | Example |
|---|---|---|
| Encoding Issues | International applications | UTF-8/ASCII edge cases |
| Localization | Global user bases | Date formats, currency, number separators |
| Null Handling | API-driven forms | Null values, undefined properties |
| Whitespace | Text processing | Leading/trailing spaces, tabs, newlines |
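The four data scenarios above map to concrete test values. The following dictionary is an illustrative sample set (the names and values are assumptions, not Rock Smith output) showing the kind of inputs each scenario generates:

```python
# Hypothetical sample inputs for each data scenario above.
DATA_EDGE_CASES = {
    "encoding": ["café", "中文", "\u00e9 vs e\u0301"],  # composed vs decomposed é
    "localization": ["31/12/2025", "12/31/2025", "1.234,56", "1,234.56"],
    "null_handling": ["", "null", "None", "undefined"],
    "whitespace": ["  padded  ", "\ttab", "line\nbreak", "\u00a0nbsp"],
}
```

Note that the null-handling values are the *strings* "null" and "undefined", which catch apps that stringify missing API values into form fields.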
Fuzzing Configuration Best Practices
Scenario Count Selection
Choose variant count based on coverage needs and credit budget:
| Count | Use Case | Credit Cost |
|---|---|---|
| 1 variant | Quick validation, low-risk areas | 1 credit |
| 2 variants | Balanced coverage | 2 credits |
| 3 variants | Maximum coverage per request | 3 credits |
Strategy:
- Start with 1-2 variants to validate the approach
- Increase to 3 for critical flows after initial success
- Run additional generation requests for more coverage
Custom Instructions
Guide the AI to focus on specific edge cases:
Effective instruction patterns:
- "Focus on email validation edge cases. Test with unicode characters and very long strings."
- "Test password field with SQL injection patterns. Include encoded payloads."
- "Target the quantity field with boundary values. Test negative numbers, zero, and maximum values."
Instruction tips:
- Name specific fields to target
- Combine scenario types explicitly
- Mention specific values or patterns to test
- Keep under 2000 characters
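The tips above can be folded into a small template. This `build_instruction` helper is a hypothetical sketch (not a Rock Smith API) that names a target field, combines scenario types explicitly, lists example values, and enforces the 2000-character limit:

```python
def build_instruction(target_field, scenarios, examples, limit=2000):
    """Compose a custom fuzzing instruction and enforce the length limit."""
    text = (f"Target the {target_field} field. "
            f"Combine scenarios: {', '.join(scenarios)}. "
            f"Test values such as: {', '.join(examples)}.")
    if len(text) > limit:
        raise ValueError(f"Instruction is {len(text)} chars; limit is {limit}")
    return text

print(build_instruction("quantity", ["Boundary Values"], ["-1", "0", "999999"]))
```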
Custom instructions help the AI focus on your highest-risk areas. Use them to direct testing toward known problem spots or compliance requirements.
Managing the Flow Tree
Fuzzing creates hierarchical flow trees. Manage depth and breadth strategically.
Tree Depth Strategy
| Depth | Type | Recommended Use |
|---|---|---|
| 0 | Baseline | Original flow, always keep |
| 1-2 | Primary variants | Most fuzzing work happens here |
| 3-4 | Deep variants | Complex scenarios, selective use |
| 5-6 | Maximum depth | Rare, highly specific investigations |
General guidance:
- Depth 1-2: Standard edge case coverage (recommended)
- Depth 3-4: Complex combination testing (use selectively)
- Depth 5-6: Deep-dive investigations (use rarely)
Nested Generation Patterns
Generate variants from variants to explore complex scenarios:
Example progression:
Login Flow (depth 0)
├── SQL Injection in Email (depth 1)
│ └── Encoded SQL Injection (depth 2)
│ └── Double-encoded Payload (depth 3)
When to nest:
- Initial variant revealed interesting behavior
- Need to test encoding/bypass variations
- Combining multiple attack vectors
When NOT to nest:
- Previous variant showed expected behavior
- Tree already at depth 4+
- Diminishing returns on coverage
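The depth rules above can be expressed as a guard in code. This `FlowNode` sketch is illustrative (the class and its names are assumptions): it refuses to nest past the documented maximum of 6 and warns once a variant goes beyond the recommended depth of 2.

```python
from dataclasses import dataclass, field

MAX_DEPTH = 6          # documented maximum tree depth
RECOMMENDED_DEPTH = 2  # most fuzzing work happens at depth 1-2

@dataclass
class FlowNode:
    name: str
    depth: int = 0
    children: list = field(default_factory=list)

    def generate_variant(self, name):
        """Attach a child variant, refusing to exceed the maximum depth."""
        if self.depth >= MAX_DEPTH:
            raise ValueError(f"Cannot nest beyond depth {MAX_DEPTH}")
        child = FlowNode(name, self.depth + 1)
        self.children.append(child)
        if child.depth > RECOMMENDED_DEPTH:
            print(f"Warning: {name!r} is at depth {child.depth}; "
                  "use deep variants selectively")
        return child

baseline = FlowNode("Login Flow")
v1 = baseline.generate_variant("SQL Injection in Email")
v2 = v1.generate_variant("Encoded SQL Injection")
```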
Tree Maintenance
Keep your flow tree manageable:
Regular cleanup tasks:
- Delete variants that duplicate findings
- Archive branches that have been fully explored
- Document which variants revealed issues
- Prune unsuccessful paths
Naming conventions:
- Generated variants have descriptive names
- Add tags or notes for important findings
- Use prefixes for easy filtering (e.g., "SEC-" for security)
Security Testing Best Practices
Prioritizing Security Scenarios
Focus security testing on high-impact areas:
| Priority | Area | Scenario Types |
|---|---|---|
| Critical | Authentication | Injection, Auth Bypass |
| Critical | Payment | Injection, Authorization |
| High | User data | Authorization, XSS |
| High | File upload | Injection, Special Characters |
| Medium | Search/filters | XSS, Injection |
| Medium | Public forms | XSS, Injection |
Injection Testing Approach
Progressive testing strategy:
- Basic payloads: Simple SQL injection patterns
- Encoded payloads: URL-encoded, double-encoded
- Context-specific: NoSQL, LDAP, command injection
- Chained attacks: Combine with other scenario types
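The first two tiers of that progression can be sketched with standard URL encoding. This hypothetical `injection_tiers` helper shows how a basic payload, its URL-encoded form, and its double-encoded form relate (run this only against staging targets, as noted above):

```python
from urllib.parse import quote

def injection_tiers(payload):
    """Return the progressive payload tiers: basic,
    URL-encoded, and double-URL-encoded."""
    encoded = quote(payload)
    return {
        "basic": payload,
        "encoded": encoded,
        "double_encoded": quote(encoded),
    }

tiers = injection_tiers("' OR 1=1--")
print(tiers["encoded"])         # %27%20OR%201%3D1--
print(tiers["double_encoded"])  # %2527%2520OR%25201%253D1--
```

Double encoding matters because some stacks decode input twice: a filter sees harmless `%25` sequences, but the payload reappears after the second decode.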
What to look for:
- Unexpected error messages revealing system info
- Database errors in responses
- Successful data retrieval or modification
- Server-side command execution
XSS Testing Approach
Progressive testing strategy:
- Script tags: Basic `<script>` injection
- Event handlers: `onerror`, `onload`, `onclick`
- Encoded patterns: HTML entities, URL encoding
- DOM-based: JavaScript URL patterns
What to look for:
- Script execution (alerts, console output)
- Injected content rendered without escaping
- Data exfiltration possibilities
- Session or cookie access
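The four XSS pattern families above can be generated from one seed payload. This `xss_variants` helper is a hypothetical sketch (again, for staging environments only): it produces the raw script-tag form, an HTML-entity-encoded form, an event-handler form, and a JavaScript-URL form.

```python
def xss_variants(payload="<script>alert(1)</script>"):
    """Produce one example of each XSS pattern family."""
    return {
        "raw": payload,
        "html_entities": "".join(f"&#{ord(c)};" for c in payload),
        "event_handler": "<img src=x onerror=alert(1)>",
        "js_url": "javascript:alert(1)",
    }

print(xss_variants()["html_entities"][:12])  # starts with &#60;&#115;...
```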
Analyzing Edge Case Results
Interpreting Failures
Not all failures indicate bugs. Categorize results:
| Result | Meaning | Action |
|---|---|---|
| Pass | App handled edge case correctly | Document as expected behavior |
| Fail - Validation | Input rejected with error | Usually expected; verify UX |
| Fail - Crash | Application error or exception | Bug - investigate and fix |
| Fail - Data Accepted | Invalid input processed | Potential security issue |
| Fail - Unexpected Behavior | App behaved strangely | Investigate root cause |
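The table above is effectively a lookup from result category to action, which triage tooling can automate. This mapping and `triage` helper are a sketch with assumed category keys:

```python
# Hypothetical mapping from fuzz-run outcome to the recommended action.
ACTIONS = {
    "pass": "Document as expected behavior",
    "fail_validation": "Usually expected; verify UX",
    "fail_crash": "Bug - investigate and fix",
    "fail_data_accepted": "Potential security issue",
    "fail_unexpected": "Investigate root cause",
}

def triage(result):
    """Map a result category to its follow-up action (unknown -> investigate)."""
    return ACTIONS.get(result, "Investigate root cause")
```

Defaulting unknown categories to "investigate" keeps surprising outcomes from being silently dropped.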
Categorizing Findings
Expected failures (usually OK):
- Form validation rejects invalid input
- Error message displayed to user
- Input sanitized before processing
Unexpected failures (investigate):
- Server errors (500, 502, etc.)
- Crashes or exceptions
- No validation when expected
- Data persisted without sanitization
Security findings (immediate attention):
- Injection payloads executed
- XSS content rendered
- Authorization bypassed
- Sensitive data exposed
Documenting Results
For each significant finding:
- Flow name and variant
- Scenario type that triggered it
- Expected vs. actual behavior
- Steps to reproduce
- Severity assessment
- Remediation recommendation
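A finding record mirroring that checklist might look like the following. The `Finding` dataclass and its field names are illustrative, not a Rock Smith export format:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One documented edge-case finding, mirroring the checklist above."""
    flow: str
    variant: str
    scenario_type: str
    expected: str
    actual: str
    reproduce_steps: list
    severity: str        # e.g. "critical", "high", "medium", "low"
    remediation: str

    def summary(self):
        return f"[{self.severity.upper()}] {self.flow} / {self.variant}: {self.scenario_type}"

f = Finding("Login Flow", "SQL Injection in Email", "Injection Attack",
            "input rejected with validation error",
            "database error message leaked",
            ["open login page", "submit payload in email field"],
            "critical", "use parameterized queries; sanitize error output")
print(f.summary())
```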
Credit-Efficient Fuzzing
Generation Costs
| Operation | Credit Cost |
|---|---|
| Generate 1 variant | 1 credit |
| Generate 2 variants | 2 credits |
| Generate 3 variants | 3 credits |
Execution Costs
Running variants costs credits per agent step:
Execution cost = (agent steps per flow) × (number of flows run)
Example:
- Baseline flow: 15 steps
- Generate 3 variants: 3 credits
- Run all 4 flows (baseline + 3 variants): 60 credits
- Total: 63 credits
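The worked example above follows directly from the two cost rules (1 credit per generated variant, 1 credit per agent step executed). A minimal sketch:

```python
def fuzzing_cost(steps_per_flow, variants_generated, flows_run):
    """Total credits: 1 credit per generated variant plus
    steps_per_flow credits for every flow executed."""
    generation = variants_generated          # 1 credit per variant
    execution = steps_per_flow * flows_run   # 1 credit per agent step
    return generation + execution

# The example above: 15-step flow, 3 variants, run baseline + 3 variants.
print(fuzzing_cost(15, 3, 4))  # 3 + 60 = 63
```

This assumes every variant has the same step count as the baseline; in practice variants that add or remove steps shift the execution term.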
Cost-Saving Strategies
Generation optimization:
- Start with 1-2 variants, expand if valuable
- Use targeted scenario types (max 3 per request)
- Generate only for critical flows
Execution optimization:
- Review variants before running all
- Run security variants selectively based on risk
- Skip variants that test already-covered scenarios
- Execute in batches by priority
Review generated variants before executing. Some may test scenarios already covered by other variants, allowing you to skip redundant runs.
Common Edge Case Testing Pitfalls
Pitfall 1: Fuzzing Every Flow
Problem: Generating variants for all flows wastes credits on low-value tests.
Solution: Focus on critical flows—forms, authentication, payments, data entry. Skip navigation-only and read-only flows.
Pitfall 2: Running All Variants Blindly
Problem: Executing every generated variant without review wastes credits on redundant tests.
Solution: Review variants first. Skip those testing already-covered scenarios. Prioritize by risk.
Pitfall 3: Ignoring Expected Failures
Problem: Treating all failures as bugs, including legitimate validation rejections.
Solution: Categorize failures. Validation errors for invalid input are usually expected behavior, not bugs.
Pitfall 4: Excessive Tree Depth
Problem: Generating variants at depth 5-6 without clear purpose.
Solution: Stay at depth 2-3 for most testing. Use deeper levels only for specific investigations.
Pitfall 5: Security Testing in Production
Problem: Running injection and XSS tests against live production systems.
Solution: Always use staging or development environments for security fuzzing. Production testing risks data corruption and service disruption.
Next Steps
- Edge Case Testing User Guide - Full feature reference
- Flow Design Best Practices - Design flows that fuzz well
- Credit Optimization - Minimize fuzzing costs
- Working with Flows - Create baseline flows for fuzzing