A 30-Day Pilot Built Around One Core Question
Did our last decision improve player experience — and for whom?
Most teams already have feedback. What they lack is:
- Pattern detection across thousands of reviews
- Consistent thematic grouping
- Segment-aware analysis
- Time-window comparison around patches
- A way to verify impact
That was the loop we ran with the pilot cohort.
Example Signal
Recurring friction around reload speed and combat pacing, concentrated in competitive play.
Before patch
- High recurrence of "slow reload" friction
- Concentrated within competitive players
After patch
- Friction cluster drops 38%
- Positive pacing mentions increase
- Casual segment unaffected
Confidence: High (recurrence threshold met)
This isn't sentiment-only. It's pattern movement.
What Actually Happens Under the Hood
No black-box summarisation.
- Ingests full review datasets
- Breaks reviews into atomic experience snippets
- Classifies snippets against a fixed taxonomy
- Measures recurrence and intensity
- Links themes to player profile attributes
- Compares signals across defined time windows
The output is structured, reproducible, and auditable.
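To make that shape concrete, here is a minimal Python sketch of the measurement steps. The snippet fields, taxonomy labels, and function names are illustrative assumptions, not the production schema:

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative fixed taxonomy; the real product taxonomy is its own asset.
TAXONOMY = {"reload_speed", "combat_pacing", "inventory_ux", "pvp_latency"}

@dataclass
class Snippet:
    review_id: str
    theme: str        # one label from the fixed taxonomy
    intensity: float  # 0..1, how strongly the experience is expressed
    segment: str      # e.g. "competitive", "casual"

def recurrence(snippets: list[Snippet]) -> Counter:
    """Count how often each taxonomy theme recurs across the dataset."""
    return Counter(s.theme for s in snippets if s.theme in TAXONOMY)

def intensity_by_theme(snippets: list[Snippet]) -> dict[str, float]:
    """Mean expressed intensity per theme."""
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    for s in snippets:
        totals[s.theme] = totals.get(s.theme, 0.0) + s.intensity
        counts[s.theme] = counts.get(s.theme, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}
```

Because every pass is a deterministic function of the data, re-running it on the same dataset yields the same numbers, which is what makes the output auditable.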
Six output layers: clusters, archetype splits, time tracking, review-level signals, evidence quality, and emotional journey. Together they show what's recurring, who it affects, and how it moves.
1. Recurring Experience Clusters
What appears repeatedly across the dataset — not just what's loud.
- Inventory UX friction
- PvP latency complaints
- Crafting loop fatigue
- Narrative coherence praise
Each cluster includes:
- Volume
- Intensity score
- Segment distribution
- Confidence score
So you see what's really repeated, who it affects, and how confident you can be.
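As a rough sketch, a cluster record carries exactly those four fields. The field names and example values below are made up for illustration, not the export format:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    volume: int                             # snippets in the cluster
    intensity: float                        # mean expressed intensity, 0..1
    segment_distribution: dict[str, float]  # share per segment, sums to 1.0
    confidence: float                       # recurrence-threshold based, 0..1

# Made-up example values, purely for illustration:
pvp_latency = Cluster(
    name="PvP latency complaints",
    volume=412,
    intensity=0.71,
    segment_distribution={"competitive": 0.83, "casual": 0.17},
    confidence=0.90,
)
```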
2. Player Archetype Splits
The same complaint can come from different player types. CoreFeedback splits signals by who raised them (story-focused players vs min-maxers), so you know who you're fixing for.
- Immersion-focused players reacting negatively to lore inconsistencies
- Optimisation-focused players reporting drop-rate imbalance
Targeted decisions instead of one-size-fits-all fixes.
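In code, a split is little more than a group-by on an archetype label, assuming each snippet already carries one (the `archetype` field below is an assumption):

```python
from collections import defaultdict

def split_by_archetype(snippets):
    """Group one theme's snippets by the archetype of the player who raised them."""
    by_archetype = defaultdict(list)
    for s in snippets:
        by_archetype[s.archetype].append(s)  # `archetype` is an assumed field
    return by_archetype
```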
3. Time-Based Signal Tracking
See how signals change before vs after a patch, campaign, or season — not just a single snapshot.
Define a window:
- Pre-patch
- Post-patch
- Marketing beat
- Seasonal update
Measure:
- Volume change
- Intensity shift
- Segment divergence
- Net directional movement
So you can validate whether a change or campaign actually moved the needle.
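A hedged sketch of that comparison, assuming each snippet carries `intensity` and `segment` fields:

```python
from statistics import mean

def compare_windows(pre, post):
    """Compare one theme's snippets across two defined time windows,
    e.g. pre-patch vs post-patch."""
    volume_change = (len(post) - len(pre)) / max(len(pre), 1)
    intensity_shift = (
        mean(s.intensity for s in post) - mean(s.intensity for s in pre)
        if pre and post else 0.0
    )
    segments = {s.segment for s in pre} | {s.segment for s in post}
    segment_divergence = {
        seg: sum(s.segment == seg for s in post) - sum(s.segment == seg for s in pre)
        for seg in segments
    }
    return {
        "volume_change": volume_change,        # e.g. -0.38 for a 38% drop
        "intensity_shift": intensity_shift,
        "segment_divergence": segment_divergence,
    }
```

The pilot example above, a 38% drop in the reload-speed cluster with the casual segment unaffected, would surface here as `volume_change == -0.38` and near-zero divergence for the casual segment.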
4. Structured Review Signals
Every review and snippet gets deterministic signals: emotion/taxonomy, polarity, quality, engagement.
So every insight and weight rests on the same per-review signals — filter and compare consistently.
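One way to picture the per-review record; these field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReviewSignals:
    emotion: str       # label from the fixed emotion taxonomy, e.g. "frustrated"
    polarity: float    # -1.0 (negative) .. +1.0 (positive)
    quality: float     # evidence quality, 0..1
    engagement: float  # reviewer engagement band, 0..1

# Because the signals are deterministic, the same filter always
# selects the same reviews, e.g.:
# negatives = [r for r in reviews if r.signals.polarity < 0 and r.signals.quality > 0.5]
```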
5. Evidence Quality & Weighting
Experience and engagement bands, reliability scores, and skew warnings keep decisions from being distorted by noise.
So you weight feedback by who said it and how reliable it is — not one vote per review.
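A minimal sketch of reliability-weighted aggregation, assuming per-snippet `polarity` and `reliability` fields:

```python
def weighted_polarity(snippets):
    """Aggregate polarity weighting each snippet by its reliability score,
    rather than counting one vote per review."""
    total = sum(s.reliability for s in snippets)  # `reliability` is assumed
    if total == 0:
        return 0.0
    return sum(s.polarity * s.reliability for s in snippets) / total
```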
6. Emotional Journey
Within a single review, emotional tone can move from frustrated to satisfied — you see the pattern, volatility, and resolution.
So you spot recovery or escalation patterns, not just an average score.
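A sketch of arc classification over an ordered list of per-snippet polarity values; the 0.2 and 1.0 thresholds are arbitrary placeholders, not product values:

```python
def journey_shape(polarities):
    """Classify a within-review emotional arc from an ordered list of
    per-snippet polarity values (-1..+1, in reading order)."""
    if len(polarities) < 2:
        return "flat"
    start, end = polarities[0], polarities[-1]
    volatility = sum(abs(b - a) for a, b in zip(polarities, polarities[1:]))
    if end - start > 0.2:
        return "recovery"     # e.g. frustrated -> satisfied
    if start - end > 0.2:
        return "escalation"
    return "volatile" if volatility > 1.0 else "flat"

journey_shape([-0.8, -0.4, 0.6])  # -> "recovery"
```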
The pilot covered the full core decision loop. Clustering and the decision layer were simplified, and not every signal from the full product was surfaced, but there was enough to run the loop and verify impact.
The CoreFeedback Loop
Ingest → Snippetise → Classify → Signal Materialisation → Cluster → Compare → Decide → Verify → Learn
This is not a one-off report. It's a repeatable decision system.
Who This Is For
If you're responsible for prioritisation, roadmap calls, or performance interpretation, this is built for you.
1. Devs & Live Ops
Verify tuning changes and systemic adjustments.
2. Studio Leadership
Align around what is structurally recurring, not anecdotal.
3. Marketing
Identify mismatched expectations vs actual player experience.
4. Publishers & Investors
Assess product risk and trend trajectory.
What You Leave With
- A structured map of recurring experience patterns
- Segment-specific friction breakdown
- A ranked list of high-recurrence opportunity clusters
- A time-window comparison around one defined change (for impact validation)
- Decision records linked to evidence — with predicted vs actual outcomes and learnings
- A reusable workflow for ongoing feedback analysis
No lock-in. Exportable outputs.
The pilot has closed — what’s next
Deterministic Feedback Analysis. Not Black-Box LLM Summaries.
The pilot cohort has ended. Join the waitlist for wider access — or book a call to talk through the platform.