Page Selection Approaches
Surfer — Step 2 Design Exploration
Step 2 asks users to select which pages to add to tracking. The quality of that selection directly affects what ChatGPT learns about the brand. Four approaches — four different bets on where to place trust.
Positioning
The four approaches sit on two axes:
- Hidden — AI picks, no reasoning shown ↔ Explained — AI shows why each page was chosen
- Fast — low effort, quick to confirm ↔ Deliberate — user thinks and decides
Side-by-side comparison
| | Ranked list | Topics | Auto-select | Manual |
|---|---|---|---|---|
| Time to complete | ~60s | ~30s | ~10s | ~90s+ |
| User control | High | Medium | Low | Full |
| Trust in AI required | Low | High | Very high | None |
| Works without GSC | Partial | Yes | Yes | Yes |
| 100-article problem | Cutoff line | Topic level | Invisible | Overwhelming |
| AI quality dependency | Medium | High | Very high | None |
| Completion rate (est) | Medium | High | Very high | Low |
Approach details
Auto-select (hidden)
+ Lowest friction — ~10s to complete
+ Works with or without GSC
+ Very high expected completion rate
+ Right trust level for early onboarding
− User can't see what was selected or why
− Bad pre-selection is invisible until after onboarding
− Entirely dependent on AI accuracy
Ranked list (explained)
+ Transparent — user sees why each page ranked
+ Pre-selection backed by real GSC data
+ Builds trust in AI quality over time
− Requires GSC (partial fallback without it)
− "Impact score" is an abstraction that needs explaining
− Ranked list can feel overwhelming on large sites
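A minimal sketch of how the ranked-list pre-selection could work. The scoring formula, field names, and weights here are assumptions for illustration, not Surfer's actual impact score:

```python
# Hypothetical sketch: rank pages by a simple "impact score" built from
# GSC-style metrics, then pre-select everything above a cutoff.
# The formula and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PageStats:
    url: str
    clicks: int       # GSC clicks over the lookback window
    impressions: int  # GSC impressions over the same window

def impact_score(p: PageStats) -> float:
    # Weight demonstrated traffic (clicks) above mere visibility
    # (impressions); a real score would be tuned and explained in-UI.
    return p.clicks * 1.0 + p.impressions * 0.1

def rank_pages(pages: list[PageStats], limit: int = 20) -> list[PageStats]:
    """Return the top pages by impact score — the pre-selected set
    shown above the cutoff line."""
    return sorted(pages, key=impact_score, reverse=True)[:limit]
```

The cutoff (`limit`) is what the user adjusts in this approach; everything else is visible reasoning rather than a black box.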
Topic selection
+ Intuitive — users think in topics, not URLs
+ Surfaces gaps in coverage the user may not notice
+ Feels collaborative, not fully automated
− Requires reliable topic clustering from brand data
− Topic labels may not match the user's mental model
− More steps increase drop-off risk
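To make the clustering dependency concrete, here is a deliberately naive stand-in that buckets pages by URL structure. Real topic clustering would work from page content or embeddings; this only illustrates the shape of the output the UI would render:

```python
# Naive stand-in for topic clustering: bucket pages by their first
# path segment. Real clustering would use page content/embeddings,
# which is exactly the reliability dependency noted above.
from collections import defaultdict
from urllib.parse import urlparse

def group_by_topic(urls: list[str]) -> dict[str, list[str]]:
    topics: dict[str, list[str]] = defaultdict(list)
    for url in urls:
        path = urlparse(url).path.strip("/")
        topic = path.split("/")[0] if path else "home"
        topics[topic].append(url)
    return dict(topics)
```

The risk called out above lives in the keys of this dict: if "blog" and "docs" don't match how the user thinks about their content, the whole step feels wrong.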
Manual selection
+ User knows their content better than any model
+ No data dependency — works from day one
+ Maximum user confidence in the result
− Overwhelming without an anchor — where do you start with 100 articles?
− Slowest approach by far
− Highest expected drop-off rate