Flow Design Best Practices
Well-designed test flows are easier to maintain, more reliable, and provide better coverage. This guide covers proven patterns for creating effective flows in Rock Smith.
Flow Architecture Principles
Single Responsibility Flows
Each flow should test one user journey or scenario. This makes flows easier to understand, debug, and maintain.
Recommended flow length: 5-15 steps
| Flow Size | Use Case | Maintainability |
|---|---|---|
| 1-4 steps | Quick smoke tests | High |
| 5-15 steps | Standard user journeys | Optimal |
| 16-25 steps | Complex multi-page workflows | Moderate |
| 25+ steps | Consider splitting | Low |
If a flow exceeds 15 steps, consider whether it tests multiple user journeys that should be separate flows.
Good examples:
- "Login with valid credentials"
- "Add item to shopping cart"
- "Submit contact form"
Avoid:
- "Login and checkout and update profile" (combines three journeys)
- "Full user workflow" (too vague and likely too long)
Flow Naming Conventions
Use descriptive, action-oriented names that explain what the flow tests:
| Pattern | Example | Why It Works |
|---|---|---|
| Action + Context | "Login with valid credentials" | Clear what's being tested |
| Feature + Scenario | "Checkout as guest user" | Specifies the variation |
| User Journey | "Complete password reset" | Describes the full path |
Avoid:
- "Test 1", "Flow A" (meaningless)
- "Login" (too generic—which login scenario?)
- "New flow 2024-01-15" (dates don't describe behavior)
Flow Organization
Organize flows using Rock Smith's project structure:
Project-based grouping:
- Group related flows by feature area or product
- Use projects like "Authentication", "Checkout", "User Profile"
- Keep discovery sessions and flows together in the same project
Flow hierarchy:
- Baseline flows (depth 0): Core happy-path scenarios
- Edge case flows (depth 1+): Variations generated through fuzzing
- Use the Flow Map navigator to visualize and navigate your flow tree
Flow status lifecycle:
- Draft: Flow under development, not ready for regular execution
- Review: Flow complete, awaiting team review
- Approved: Production-ready, included in test suites
Semantic Element Targeting Best Practices
Rock Smith uses semantic targeting—describing elements as users see them—instead of brittle CSS selectors. This makes tests resilient to UI changes.
Writing Effective Element Descriptions
Describe elements using visual characteristics that won't change when implementation details shift:
| Visual Cue | Example Description |
|---|---|
| Color | "The blue Submit button" |
| Position | "The search icon in the top-right corner" |
| Icon | "The gear icon for settings" |
| Text | "The button labeled 'Sign In'" |
| Context | "The email input below the welcome message" |
Good targeting descriptions:
✓ "The blue Submit button at the bottom of the form"
✓ "The email input field with placeholder 'Enter your email'"
✓ "The red Delete icon next to the file name"
✓ "The dropdown menu labeled 'Country'"
Avoid:
✗ "The button with class btn-primary" (implementation detail)
✗ "The third input on the page" (fragile to layout changes)
✗ "Submit" (too vague if multiple submit buttons exist)
✗ "#email-field" (CSS selector, will break)
Using Visual Context Fields
Rock Smith provides four targeting fields. Use them together for precise element identification:
| Field | When to Use | Example |
|---|---|---|
| Label | Primary identification | "Submit button", "Email input" |
| Position | Disambiguate similar elements | "top-right", "below the form" |
| Text | When visible text is distinctive | "Sign In", "Learn More" |
| Type | Clarify element category | "button", "input", "link", "dropdown" |
Combining fields for precision:
Label: "Submit button"
Position: "bottom of the login form"
Text: "Sign In"
Type: "button"
This combination uniquely identifies the element even if multiple buttons exist on the page.
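As a sketch, the four targeting fields might come together in a single step definition. The field names mirror the table above, but the structure is illustrative rather than Rock Smith's actual schema:

```python
# Hypothetical step definition combining all four targeting fields.
step = {
    "action": "click",
    "target": {
        "label": "Submit button",
        "position": "bottom of the login form",
        "text": "Sign In",
        "type": "button",
    },
}

def cue_count(target: dict) -> int:
    """Count the visual cues provided; more cues means more precise targeting."""
    return sum(1 for f in ("label", "position", "text", "type") if target.get(f))

assert cue_count(step["target"]) == 4  # all four fields set
```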
Self-Healing Test Patterns
Semantic targeting enables self-healing—tests automatically adapt when UI changes. Design for this:
Do:
- Describe what users see, not implementation
- Use multiple visual cues for critical elements
- Reference stable text content (labels, headings)
Don't:
- Reference dynamic IDs or generated class names
- Depend on exact pixel positions
- Assume specific DOM structure
When a UI redesign moves a button but keeps its label and function, semantic targeting still finds it. Selector-based tests would fail.
Action Configuration Best Practices
Navigation Actions
| Action | Best Practice |
|---|---|
| `navigate` | Use for direct URL access; set appropriate timeout for slow pages |
| `go_back` | Use sparingly; prefer explicit navigation for clarity |
| `scroll` | Specify direction and distance; use to reveal lazy-loaded content |
Navigate configuration tips:
- Set `wait_for_navigation: true` for pages with redirects
- Increase timeout (30-60s) for slow-loading pages
- Use `open_in_new_tab` when testing multi-tab workflows
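The tips above might come together in a navigate step like this. Parameter names follow the ones described here (`wait_for_navigation`, `open_in_new_tab`), but the structure is a hypothetical sketch:

```python
# Illustrative navigate step for a slow page that redirects after load.
navigate_step = {
    "action": "navigate",
    "url": "https://example.com/checkout",  # placeholder URL
    "timeout_seconds": 60,                  # raised from the 30s default
    "wait_for_navigation": True,            # page redirects before settling
    "open_in_new_tab": False,               # set True for multi-tab workflows
}

assert navigate_step["wait_for_navigation"] is True
```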
Interaction Actions
| Action | When to Use |
|---|---|
| `click` | Standard button/link interactions |
| `double_click` | File selection, text selection, specialized UI |
| `right_click` | Context menus |
| `hover` | Reveal tooltips, dropdown menus, hidden elements |
Tips:
- Add a `wait` step after clicking if the next element needs time to appear
- Use `hover` before `click` for menu items that require hover to reveal
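A hover-then-click sequence, sketched in the same hypothetical step format; the short `wait` in the middle gives the revealed menu time to appear:

```python
# Hypothetical sequence: hover to reveal a dropdown menu, then click an item.
steps = [
    {"action": "hover",
     "target": {"label": "Products menu", "type": "link"}},
    {"action": "wait",
     "condition": "dropdown menu is visible", "timeout_seconds": 5},
    {"action": "click",
     "target": {"label": "Pricing item in the dropdown", "type": "link"}},
]

# The hover must come before the click it enables.
assert [s["action"] for s in steps] == ["hover", "wait", "click"]
```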
Input Actions
| Action | When to Use | Key Parameter |
|---|---|---|
| `type` | Fill text fields | `clear_first` for editing existing values |
| `select_option` | Dropdowns | `option_text` matches visible label |
| `send_keys` | Keyboard shortcuts | `keys` like "Enter", "Tab", "Escape" |
| `submit_form` | Form submission | Alternative to clicking submit button |
Type vs. Fill:
- Use `clear_first: true` when editing pre-filled fields
- Without `clear_first`, text appends to existing content
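For example, replacing the value in a pre-filled email field might look like this (hypothetical schema; `clear_first` is the parameter described above):

```python
# Without clear_first, the new text would append to the pre-filled value.
edit_email = {
    "action": "type",
    "target": {"label": "Email input", "type": "input"},
    "value": "new.address@example.com",
    "clear_first": True,
}

assert edit_email["clear_first"]  # required when replacing existing content
```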
Control Actions
| Action | When to Use |
|---|---|
| `wait` | Page transitions, loading states, animations |
| `done` | Mark flow completion (optional) |
| `custom` | Complex actions not covered by standard types |
Wait strategy:
- Prefer `wait_for_element` over fixed `duration` when possible
- Use `condition` for complex waits: "Wait until loading spinner disappears"
- Keep timeouts reasonable (5-30 seconds) to catch real issues
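The wait strategies above, from most to least preferred, sketched in the same hypothetical step format (`wait_for_element`, `condition`, and `duration` are the parameters named above):

```python
# Preferred: element-based wait, bounded by a reasonable timeout.
wait_for_dashboard = {
    "action": "wait",
    "wait_for_element": {"label": "Dashboard heading"},
    "timeout_seconds": 15,
}

# For complex states: a condition described in plain language.
wait_for_spinner = {
    "action": "wait",
    "condition": "loading spinner disappears",
    "timeout_seconds": 30,
}

# Last resort: a short fixed duration.
fixed_wait = {"action": "wait", "duration_seconds": 3}

assert wait_for_dashboard["timeout_seconds"] <= 30  # keep timeouts reasonable
```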
Avoid excessive wait steps with long durations. They slow tests and hide real performance issues. If you need many long waits, the application may have performance problems worth investigating.
Assertion Best Practices
Choosing Assertion Types
| Assertion | Best For | Example |
|---|---|---|
| `visual_verification` | Complex UI states | "Verify the shopping cart shows 3 items" |
| `element_visible` | Confirming element appeared | "The success message is visible" |
| `element_hidden` | Confirming element disappeared | "The loading spinner is hidden" |
| `text_visible` | Specific content validation | "Verify 'Order confirmed' text appears" |
| `url_matches` | Navigation verification | "URL contains '/dashboard'" |
| `navigation_occurred` | Page transition happened | After form submission |
When to use each:
- `visual_verification`: When you need AI judgment about complex visual states
- `element_visible` / `element_hidden`: For binary presence checks
- `text_visible`: When specific text content matters
- `url_matches`: After navigation actions
- `custom`: For business-logic validations
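A post-submission checkpoint might combine several of these assertion types. Again a hypothetical sketch, using the assertion names from the table:

```python
# Hypothetical assertions placed right after a form submission.
post_submit_checks = [
    {"assert": "url_matches", "pattern": "/orders/confirmation"},
    {"assert": "element_hidden", "target": {"label": "Loading spinner"}},
    {"assert": "text_visible", "text": "Order confirmed"},
]

assert len(post_submit_checks) == 3
```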
Assertion Placement Strategy
Add assertions after actions that change application state:
Critical assertion points:
- After form submissions
- After navigation/redirects
- After data modifications (create, update, delete)
- After authentication state changes
- At flow completion
Balance coverage vs. speed:
- Critical flows: Assert after every significant action
- Smoke tests: Assert only final outcome
- Regression tests: Assert key checkpoints
Avoid over-asserting. Too many assertions make tests brittle and slow. Focus on outcomes that matter to users.
Flow Configuration Settings
Timeout and Retry Settings
| Setting | Recommendation | Use Case |
|---|---|---|
| Timeout | 30s default, 60s for slow pages | Adjust based on actual page performance |
| Retry Count | 1-2 for flaky elements, 0 for stable | Use sparingly; fix root cause instead |
When to increase timeouts:
- Pages with heavy JavaScript
- Third-party integrations (payments, maps)
- File uploads or downloads
When to increase retries:
- Elements with animations
- Dynamically loaded content
- Known intermittent issues (temporary workaround)
Priority and Complexity
Use these fields for team communication and test suite organization:
- Priority: Order flows in execution queues (P1 runs first)
- Complexity: Help teammates understand maintenance burden
Common Pitfalls to Avoid
Pitfall 1: Overly Long Flows
Problem: Flows with 20+ steps are hard to debug and maintain.
Solution: Split into focused flows. Chain them if needed for end-to-end scenarios.
Pitfall 2: Vague Element Descriptions
Problem: "The button" or "input field" matches multiple elements.
Solution: Add visual context—color, position, label text, surrounding elements.
Pitfall 3: Missing Assertions
Problem: Flow runs but doesn't verify anything meaningful.
Solution: Add assertions after state-changing actions. Every flow should verify at least one outcome.
Pitfall 4: Ignoring Timing Issues
Problem: Flows fail intermittently due to race conditions.
Solution: Use `wait_for_element` instead of fixed waits. Increase timeouts for genuinely slow pages.
Pitfall 5: Not Using Browser Profiles
Problem: Flows requiring authentication re-login every run, wasting steps.
Solution: Create browser profiles with saved sessions. Select them during execution.
Flow Generation from Discovery
When to Use Discovery-Generated Flows
Discovery-generated flows provide quick baseline coverage:
Advantages:
- Fast creation (1 credit per flow)
- Based on actual application structure
- Includes visual context for assertions
Considerations:
- May need refinement for specific scenarios
- Generic naming (customize after generation)
- Best as starting points, not final flows
Refining Generated Flows
After generating flows from discovery:
- Review step accuracy: Ensure actions match intended user journey
- Improve targeting descriptions: Add visual cues for precision
- Add missing assertions: Discovery captures state, not all validations
- Rename flows: Replace generic names with descriptive ones
- Set priority and status: Organize for your test suite
Refinement costs 1 credit per adjustment—much cheaper than regenerating.
Next Steps
- Working with Flows - Detailed flow creation guide
- Discovery Session Best Practices - Optimize visual context capture
- Edge Case Testing Best Practices - Generate flow variations
- Credit Optimization - Minimize costs while maximizing coverage