Pilots

Evidence and case studies (only when verified)

This page presents evidence responsibly: what was deployed, how it was used, and what metrics were tracked—without asserting outcome improvements unless validated.

What we track

Pilot-friendly metrics

Choose a small, publishable set. Add deeper outcome evaluation only when you can measure it credibly.

Adoption

Weekly active users (WAU) per cohort and repeat usage per learner.

Usage quality

Nudge completion rates, drop-off points, and time-to-first-help proxies.

Blockers

Top question clusters and where learners consistently get stuck.
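The adoption and usage-quality metrics above can be computed from a simple event log. The sketch below is illustrative only: the event schema, field names, and event types ("session", "nudge_sent", "nudge_completed") are assumptions, not a real pilot's data model.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, cohort, iso_week, event_type).
# Schema and event names are illustrative, not a real pilot's data model.
events = [
    ("u1", "cohort-A", "2024-W10", "session"),
    ("u1", "cohort-A", "2024-W10", "nudge_sent"),
    ("u1", "cohort-A", "2024-W10", "nudge_completed"),
    ("u2", "cohort-A", "2024-W10", "session"),
    ("u2", "cohort-A", "2024-W10", "nudge_sent"),
    ("u3", "cohort-B", "2024-W10", "session"),
]

def wau_per_cohort(events, week):
    """Count distinct active users per cohort for one ISO week."""
    active = defaultdict(set)
    for user, cohort, w, _ in events:
        if w == week:
            active[cohort].add(user)
    return {cohort: len(users) for cohort, users in active.items()}

def nudge_completion_rate(events):
    """Completed nudges divided by sent nudges, across all cohorts."""
    sent = sum(1 for e in events if e[3] == "nudge_sent")
    done = sum(1 for e in events if e[3] == "nudge_completed")
    return done / sent if sent else 0.0

print(wau_per_cohort(events, "2024-W10"))  # {'cohort-A': 2, 'cohort-B': 1}
print(nudge_completion_rate(events))       # 0.5
```

Keeping the metric definitions this explicit makes the published numbers reproducible and easy to audit before a write-up goes public.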

Outcome claims (require data)

Improvements in grades, retention, or learning outcomes should only be presented when measured and attributed carefully (ideally with comparison groups and explicit limitations).

Case studies

Pilot write-ups

Replace these placeholders with approved narratives, metrics, and quotes.

Pilot #1 (example)

Status: placeholder

Example: London Business School pilot (details available upon request). Replace with: cohort size, deployment scope (which course/module), and what you can publicly share.

WAU: {{value}}
Completion: {{value}}
Top blockers: {{topic}}

Pilot #2

Status: placeholder

Add another pilot narrative here once approved (institution or corporate academy). Include verified metrics and quotes with permission.

Cohort: {{size}}
Scope: {{course/module}}

Permission checklist

  • Institution approval for naming/logo usage
  • Quote/testimonial permissions
  • Clear statement of what was measured vs. what was inferred
  • Explicit limitations (pilot scope, timeframe)

Design a measurable pilot

We can scope a pilot that produces publishable metrics and useful instructor feedback without over-claiming outcomes.