From 20% to 60%: How London Business School used AI to improve engagement in executive education
How LBS's Generative AI executive programme used SLAN's Nudge to raise exercise completion from 20% to over 60% across 102 learners.

London Business School offers a Generative AI executive programme for senior leaders, led by Keyvan Vakili, Associate Professor of Strategy and Entrepreneurship, and Nicos Savva, Professor of Management Science and Operations, who are also Academic Directors of the Data Science and AI Initiative.
A central exercise asks participants to identify a high-value generative AI use case within their organisation. They use concepts from the programme to assess value, feasibility, and organisational fit.
In an earlier iteration, only about 20% of participants completed the exercise as intended. In recent SLAN-supported cohorts, that figure has risen to over 60%.
The Programme and the Gap
Keyvan and Nicos designed the programme to help senior leaders turn generative AI into business impact. The in-class exercise is where that shift happens. It moves participants from a blank page to a clear, organisation-specific use case.
In practice, two patterns kept undermining the exercise. Some participants got pulled back into emails and day-to-day demands, making only limited progress during the session. Others turned to ChatGPT for a quick answer rather than working through the problem themselves. As a result, only about 20% of the group worked through the frameworks as intended.
Faculty felt it in the room. Debriefs often ran 45 minutes, because Keyvan and his team had to re-explain the underlying reasoning to participants.
How SLAN Changed the Exercise
SLAN replaced the single-prompt ChatGPT approach with Nudge, a proactive AI coach built from the course materials. Instead of asking participants to generate a use case in a single leap, Nudge guides them through a nine-step ThinkingPath drawn directly from the frameworks. At each stage, learners weigh options, make trade-offs, define clear goals, and reflect on how each decision fits their role and organisation.
By the end of the session, participants leave with a use case grounded in their own context, developed step by step in their own words.
Same frameworks. Better execution.
Results
The first three pilot cohorts included 102 learners, 92 of whom engaged with SLAN. Completion rose from 20% to over 60%.
Faculty also saw a clear shift in the room. Debrief time fell from 45 minutes to 20, as participants arrived having already worked through the key decisions and trade-offs. This gave faculty more time to provide deeper feedback, rather than using the session to answer surface-level questions or reconstruct the reasoning behind each use case.
Why It Worked
SLAN did not improve the exercise by giving participants a faster answer. It improved the exercise by changing how they worked through the problem.
Instead of relying on a single prompt, Nudge guided learners through a structured sequence built from the course frameworks. To move forward, participants had to weigh trade-offs, define goals, test assumptions, and relate each decision to their own organisational context. The tool reinforced the reasoning that the exercise was designed to teach.
The engagement data reflects that shift. Learners averaged 12.9 messages with Nudge, compared with 1.7 in generic AI use on the same exercise. Completion also rose sharply once learners moved beyond superficial interaction: in the 11–15 message range, completion reached around 80%, and beyond 16 messages, it approached 95%.
That is what made the difference. In a generic AI tool, additional messages often indicate that a learner is refining the output. In Nudge, each message represented another step in the reasoning process. More interaction meant more of the actual thinking was happening.
Where It Goes Next
SLAN is now embedded in the programme, with more than 360 active learners in 2025. Keyvan and Nicos are extending the approach across the Data Science and AI Initiative and into more courses, such as the Generative Artificial Intelligence elective available to all graduate students at London Business School.
What This Means for Educators
Access to AI alone doesn't improve learning outcomes. Alignment with the faculty's framework does. When the tool reinforces the same reasoning a professor would expect in a tutorial, learners are more likely to work through the problem properly.
The broader lesson is simple: AI improves learning outcomes when it supports the faculty's method rather than shortcutting it.
SLAN is built on that principle. If you run an executive education programme, an MBA course, or a corporate learning initiative and want learners to go deeper and complete the work, get in touch at hello@slan.co.