I’ve been on a Marmite journey with AI (catalogued here), swinging between fan-girling hard and mourning the practices of the past.
It’s a hydra in the best and worst ways: new and fascinating monsters to get to know every few minutes - but some of them might eat what you love.
So, how did I feel rocking up (that’s how I see it - giant iced coffee in hand) to the LPI’s first Learning Live: AI Edition event? I felt prepared to be overwhelmed, perhaps oversaturated, while witnessing a few cool hydra heads to mull over.
Instead, I was energised, fuelled by ceaseless curiosity and wonder. I heard from a diverse range of individuals, all clearly excited by what they were discovering and keen to share it. We weren’t treading over the same old AI ground - we were exploring the deep sea together, pondering the art of the possible, giddy to uncover.
From ‘Automation’ to ‘Transformation’
'Don’t just ask what AI can help you do better. Ask what you can completely change.' Rosemary Hoskins
Rosemary shared this as part of the end-of-day wrap-up, where appointed individuals had loyally chronicled a specific stream and fed back their observations.
This reminder landed beautifully for me. I’ve recently been tinkering with a bit of data cross-pollination in GPT - working out how to combine our employee engagement survey results with Gallup’s four levels of employee need, and answers to a question we ask in our Quarterly Reviews about what individuals want to change. Going beyond basic analysis. Asking: do these things relate? Do they tell a fuller story? What other data has meaning here?
Of course, the more complicated the input, the more time you need to invest in sense-checking the output. As tempting as a good ol’ copy and paste is, I’ve had many maddening conversations with GPT; endless circles of it saying, 'Ah yes, my mistake...' then brazenly repeating the error.
But it’s worth thinking about. Not just what can be faster? Or what do I hate doing that I can delegate? Now’s the time to consider: what if there were no constraints or limits? What sounds so ambitious it’s ridiculous? Is what once felt impossible now possible?
Human-Agent Teams Are Coming
'Human-agent teams will upend the org chart.' Simon Lambert
The Microsoft keynote (from Simon, Eileen Mackey Downing and Lurlene Duggan) started with their forecast for a new type of organisation - the Frontier Firm. I was delighted (because I’m a right square) to have read up on this already, thanks to Bruce Daisley’s newsletter - highly recommended for some truly engaging AI and world-of-work musings.
Basically, it’s a journey towards an army of AI colleagues who do our bidding, with us as AI-savvy overlords.
A key future skill? Building and managing agents. Leaders should be asking: 'How many agents do we need, and how many humans do we need to manage them?'
Microsoft urged leaders: 'Set your human-agent ratio. What do you need humans for?'
Yes, we could hide under our beds and fear the possibility that this is the direction of travel. Or we could spend time with our teams interrogating the way we work and how work gets done, and launching mini-experiments.
I truly believe that fear without momentum - fear that doesn’t spark action - will hold us back in our careers and organisations.
Microsoft is seeing particularly heavy AI adoption at early-career levels: 'AI won’t take your job, but someone who knows about AI will.' Now is the time to leap in and get involved. A good place to start? Attending events like this, to understand where people are trying, failing, and potentially catching the bug. Start small, with what captures you.
The team were also keen to call out the need for human skills in this evolution: 'Hire for good learners. In interviews, ask: how do you learn? Creative thinking and adaptability will be core differentiators.'
I made a mental note to ask that question. We need to hire coachable superstars: people with a growth mindset who welcome feedback, who enjoy the quest.
The Future of Feedback
NIIT covered a lot of ground in their presentation, but I want to zoom in on their last slide first (not literally, I took a terrible photo of it). Brandon Dickens let us in on his 2.5-year experiment, where he’s given AI full access to himself at work: meetings, emails, Teams - his entire digital footprint. The result? A live skills map spanning mastery levels, distribution, strengths and focus areas, constantly iterating in response to its study of him.
'A coach that knows you better than you know yourself.'
Like most of us, I work in an organisation that uses feedback (360, to be specific), so this triggered a lot of questions: Do they see this as replacing colleague feedback? Is human feedback another data source feeding this? Is it more objective? How has it evolved over time? What else might it be capable of?
I didn’t get a chance to ask them; there wasn’t time in the day to ask all my bouncy, sugar-high questions.
And that’s (I think) what makes AI so scary for some people: stepping into it breeds an abundance of questions. AI is uncertainty. It’s a pace we can’t run alongside. And that’s uncomfortable. We need to get comfortable with being uncomfortable to truly thrive in the new era of work.
A couple of other gems from their talk:
- AI progression and usage have outstripped their predictions - they’ve had to shift them forward.
- They predict the decline of traditional, static content, in favour of adaptive, intelligent systems that learn with and from us.
- In-the-flow access to real ‘war stories’ (the been-there, done-that richness of stories within your organisation) is priceless. AI can help surface these and embed them into simulated learning experiences.
Upskilling Is Easier Than We Think
Jeff Fissel had a post-lunch slot and still killed it, invigorating the audience with GP Strategies' AI experiments and use cases.
The best bit was the Q&A, when a woman asked, 'But how did you know how to do all of this?' and Jeff replied, 'We asked AI.'
So, this week, when I decided it was time to create an L&D Mentor - a GPT that could coach and challenge me within my specific context - I skipped what, weeks ago, felt revolutionary (amending a programming template) and simply asked GPT to ask me questions until it believed it could build L&D Mentor beautifully, on my behalf. And it did.
The best AI training course available? Ask GPT, or Gemini, or Copilot (and many more beyond) to take the time to get to know you, then make recommendations for your learning journey.
Returning to Microsoft’s Frontier Firm prediction, Jeff showed us their work using 'Master Agents' - AI agents that monitor and evaluate other agents, once they’ve learned from us what good looks like. The human manages the master (for now!).
Jeff also echoed an observation made by NIIT, that we’re moving from volume to refinement: 'Use AI to create better, not more.' We’ve indulged, we’ve iterated, we’re at V2. And if you’re not? Into the sandbox with you for a mess-about.
The Pursuit of Joy
Nick Holmes reflected on his key takeaways at the end-of-day wrap-up: 'We can't lose our sense of play. Let’s use AI to elevate joy.'
This is an important one. Because one of the biggest dangers of AI, as I see it, is isolation - us slowly retreating from the complexity of interaction with one another, in favour of the validating safety of our ‘always-on’ AI friends. It’s something to be mindful of, something to check ourselves for.
I’ve personally found a lot of joy in sharing what I’m learning with colleagues and friends - suggesting new solutions to their challenges and ideas to explore.
AI is helping me with my new jungle of a garden, teaching me how to care for each plant and tree with a manageable schedule. It’s helping me with the structure of my novel, with crafting considered feedback, with thinking differently. I’m taking tentative steps - regularly exasperated, regularly blown away.
All this without guardrails, without consciousness of the cost, without retrospection, is reckless. So, in this sea of questions, let’s also ask some of ourselves. Like: How do I feel about my use of AI? What am I losing when I rely on it for x? What’s important for me to retain? What sparks joy? (Classic Marie Kondo here.)
Wrapping Up the Hydra
'Show people the relationship they can have with AI, not just the technical capability, if you want real buy-in.'
The food was stupendous. The company and networking were exceptional. The conversation was essential. I bet the LPI are mighty chuffed with themselves, and so they should be.
The questions continue to unfurl: are we slowly but surely eradicating ourselves while convincing ourselves we’re bringing main character energy? Is AI laughing at us? What does all of this mean for our sense of purpose, our value, our relationships? Is my brain shrinking?
If you have an answer to any of these and would like to discuss it - let’s get chatty here.