TL;DR: CIO AI Summit: The Uncomfortable Truths Enterprise AI Leaders Won’t Say Out Loud

I spent two days at the CIO AI Summit in Munich this September, attending every single session. What I witnessed wasn’t the typical vendor-driven optimism you’d expect at a tech conference. Instead, 500 CIOs from some of Europe’s largest companies engaged in a rare, frank conversation about why their AI initiatives aren’t working.

The most memorable moment? A DAX30 CIO admitting: “We weren’t aware of just how bad our data quality was.”

That confession captures everything. The gap between AI promise and AI reality has become impossible to ignore. Here’s what nobody wants to admit publicly.

Culture is killing your AI initiatives (and you know it)

Every session circled back to the same conclusion: culture, not technology, is the primary blocker. The irony is brutal: organizations are solving the wrong problem. They’re obsessing over algorithms and models when 70% of AI implementation challenges stem from people and processes.

The resistance runs deeper than anyone expected. Nearly half of CEOs report that most employees are openly resistant or even hostile to AI. This isn’t irrational; it’s a predictable response to poorly managed change. When only 14% of companies have aligned their workforce around AI strategy, of course people resist.

The companies actually succeeding with AI share one crucial differentiator: they’ve implemented comprehensive change management strategies, and they’re three times more likely to deliver results. Morgan Stanley’s AI assistant achieved 98% adoption by wealth management teams, but only after proving quality standards through transparency rather than forcing adoption through mandate.

Culture isn’t a soft factor you address after the technology works. It’s the primary determinant of whether your multi-million-dollar AI investment becomes transformative or expensive shelfware.

The data quality reckoning

That DAX30 CIO’s confession wasn’t unique; it was representative. Every organization discovered their data problems only when attempting AI implementation. Research confirms the uncomfortable truth: only 3% of companies’ data meets basic quality standards.

The pattern is consistent across industries: executives greenlight AI initiatives believing their data is ready, only to discover during implementation that it’s fragmented, inconsistent, incomplete, and ungoverned. Organizations report spending 20% of IT budgets on data infrastructure versus only 5% on AI itself, with 6-18 months required just for foundation building before meaningful deployment can begin.

The financial impact is staggering: poor data quality costs organizations an average of $12.9 million annually. More critically, 70-85% of AI projects fail due to data-related issues, twice the failure rate of non-AI IT projects.

The lesson? AI doesn’t just expose data problems; it makes them existential. You cannot automate your way around bad data.

Vendor confusion and the value realization crisis

One of my key observations: everyone seems curious about how to get value, yet every vendor has a completely different answer: Forward Engineering, Process Intelligence, Data Management. The market is in chaos, and customers are drowning in options.

This confusion has consequences. MIT research found that 95% of generative AI pilots fail to deliver measurable business impact. The core issue isn’t model quality; it’s what researchers call the “learning gap.” Organizations fundamentally misunderstand how to capture AI benefits.

Meanwhile, the consulting-industrial complex has amplified confusion rather than resolving it. Global AI consulting spending exploded from $1.34 billion to $3.75 billion in a single year. Every major firm has branded frameworks: McKinsey’s “Rewired,” Accenture’s “AI Refinery,” the proliferation of “data spaces” and “digital cores.” Each promises transformation while creating dependency on consultants to decode complexity they’ve generated.

Client skepticism is mounting. Million-dollar engagements often end in “long reports rather than functioning applications,” with executives discovering consultants lack the technical depth to move beyond proofs-of-concept.

The superficial use case trap

My observation about superficial use cases (“asking which cafeteria is the best”) captures a widespread pattern researchers call “AI theater”: organizations confuse activity for impact.

Currently, over half of GenAI budgets flow to sales and marketing tools like email summarization and meeting documentation. These deliver “10 minutes saved here, 30 there” without measurable P&L impact. Meanwhile, back-office automation, which consistently delivers millions in annual savings, receives a fraction of investment.

This misallocation stems from fundamental confusion between AI tools and AI solutions. Organizations treat ChatGPT access as AI strategy when real ROI comes from purpose-built applications integrated with organizational data and systems.

The typical pattern: companies build dozens of AI prototypes, yet only four reach production, an 88% failure rate for scaling. Each high-profile stall makes the next budget request harder. We’re pursuing technology excitement rather than business impact.

Stop paving cow paths with AI

Here’s the critical insight from the summit: Don’t put AI on top of processes, focus on embedding it through redesign. Yet only 21% of organizations have fundamentally redesigned workflows as a result of GenAI deployment.

The failure pattern is classic: automating existing processes without questioning whether those processes should exist at all. One national retailer automated its convoluted returns process with AI, achieving 60% faster ticket closure. Customer satisfaction scores plummeted. They had efficiently guided frustrated customers to incorrect conclusions at machine speed, automating chaos rather than eliminating it.

Wharton professor Ethan Mollick captured the imperative: “The real benefits will come when companies abandon trying to get AI to follow existing processes, many of which reflect bureaucracy and office politics more than anything else, and simply let the models find their own way to produce desired business outcomes.”

The distinction matters: AI implementation focuses on deploying tools into existing workflows, typically delivering 5-10% efficiency gains with no EBIT impact. AI transformation reimagines how work gets done, delivering 60-90% productivity improvements with measurable business outcomes.

This isn’t a technology challenge. It’s organizational reinvention enabled by technology.

The generalist resurgence

My final observation: job roles are converging into generalists and cross-functional hybrids. No more classic PM/Eng/UX split. PMs who code. Engineers who write user stories.

The emergence of “vibe coding”, where product managers use generative AI to produce actual applications through prompts, has led Google, Stripe, and Netflix to introduce AI-prototyping rounds into PM interview loops. The boundaries defining product management, engineering, and design are dissolving as AI democratizes technical capabilities.

This convergence is driven by multiple forces: economic pressure to “do more with less,” AI tools enabling non-technical people to perform technical work, and simplified development toolsets. By 2024, low-code/no-code accounted for over 65% of all application development activity.

The World Economic Forum predicts 44% of current skills will be disrupted in the next five years. Strict role boundaries are dying. The question isn’t whether this trend continues; it’s whether organizations can manage it without burning out their people or losing critical depth.

What this means for you

The organizations actually creating value with AI, the small minority succeeding, share common patterns that contradict conventional wisdom:

They treat AI as organizational transformation, allocating most resources to people and processes, not as a technology implementation. Companies with fully implemented change management are three times more likely to succeed.

They confront data quality as strategic priority before AI investment, recognizing that months of foundation building prevents years of failed pilots.

They reject the automation trap, following the principle obliterate-integrate-automate rather than paving cow paths with expensive algorithms.

They involve a significant share of the workforce rather than delegating to isolated AI teams: at least 7% for meaningful impact, 21-30% for highest performance.

They measure transformation by business outcomes (EBIT impact, revenue growth, customer satisfaction), not activity metrics like the number of pilots or models deployed.

They focus resources on process redesign in core business functions rather than scattered experimentation in support functions.

The competitive implications are sobering. AI leaders already achieve 1.5x higher revenue growth and 1.6x greater shareholder returns compared to laggards. This gap will widen dramatically as leaders scale solutions while others remain trapped in pilot purgatory.

The reckoning

The CIO AI Summit’s frank conversations revealed what most won’t say publicly: we’re in the midst of a collective reckoning. The gap between AI promise and AI reality has become too large to ignore.

The technology is ready. The business case is proven. The only question is whether leadership will embrace the magnitude of organizational change required.

The time for experimentation is ending. The time for transformation is now. But transformation means confronting uncomfortable truths about culture, data quality, process design, and the fundamental ways we organize work.

The 18-Month Myth: Why AI Isn’t Destroying Critical Thinking, It’s Redefining It

The conversation about artificial intelligence usually gravitates toward spectacular disasters. We worry about superintelligence seizing control or mass unemployment destabilizing society. Derek Thompson’s widely discussed essay, “You Have 18 Months,” introduced a quieter, more intimate anxiety. He focused not on the machines taking our jobs, but on the potential softening of the human mind.

His thesis is simple and unsettling. We have a short window, perhaps eighteen months, before the sheer convenience of artificial intelligence leads us to voluntarily stop thinking for ourselves.

The heart of Thompson’s argument is the belief that the struggle of writing is synonymous with thinking. Organizing a chaotic mind into coherent prose is hard work. It requires structuring arguments, refining logic, and discovering what we truly believe through the act of articulation. It is a form of resistance training for the brain.

When we ask Gemini or its equivalents to produce a first draft, we skip the workout. We gain efficiency but lose the mental strength that comes from the struggle. It is a persuasive fear. Who hasn’t felt the slightly guilty relief of letting the machine handle the heavy lifting?

Yet this perspective relies on a narrow definition of intellectual effort. It assumes the cognitive value lies solely in the generation of sentences. If we accept that premise, then yes, we are headed for trouble. But history suggests that technology does not eliminate the need for thought. It relocates it.

The Relocation of Rigor

We have been here before. The arrival of the calculator did not end mathematics. It freed mathematicians from the drudgery of long division, allowing them to tackle more complex problems. Google Search did not destroy memory; it changed the value of information recall, prioritizing synthesis instead.

In every significant technological shift, tools absorb a mechanical task. Generative AI is absorbing the mechanics of articulation. The ability to write competent, fluent prose is rapidly becoming a utility, like electricity or running water.

This transformation is forcing a profound shift in the knowledge worker’s role. We are moving from the role of the artisan wordsmith to something akin to a conductor or a film director. The primary intellectual effort no longer lies in crafting the individual sentence. It lies in the strategy that precedes the writing and the judgment that follows it.

The Director’s Chair

Before the AI begins to type, the human must define the intent. Guiding the machine effectively, what some call prompting, is not about finding magic words. It is about rigorous conceptualization.

This requires clarity of purpose and deep subject matter expertise. You cannot effectively direct a machine on a topic you do not understand. You must know the destination before you ask the AI for the route. In the past, we often discovered what we wanted to say only as we wrote it. Now, the intellectual heavy lifting moves upstream, demanding strategic clarity before the first word is generated.

The Age of Discernment

Once the draft is produced, the critical cognitive work begins. Modern AI is dangerously fluent. It produces text that is plausible, coherent, and often entirely wrong. This phenomenon of confident nonsense, or hallucination, demands a new level of critical engagement.

The human role shifts fundamentally from creation to verification. This is not proofreading. It is a rigorous analytical task, requiring the ability to spot subtle logical flaws, absent nuance, or factual inaccuracies. The editor’s role, demanding skepticism and expertise, is now the essential safeguard of meaning. It is often harder to identify subtle flaws in fluent prose than obvious errors in a messy human draft. The machine provides the fluency; the human must provide the truth.

The Adaptation Imperative

Thompson’s eighteen months are better viewed not as a deadline for our decline, but as an urgent adaptation period. The risk is not the technology. The risk is complacency.

If we use these tools passively, accepting their output without interrogation, the atrophy Thompson fears will become reality. We will become passive consumers of our own summarized reality.

However, if we approach them as partners, demanding rigor and applying expert judgment, we find ourselves not diminished, but significantly enhanced. The future of thought is not less demanding. It is just demanding in a different way.

The Amplifier Effect: Why Your AI Strategy is Failing (and How to Fix It)

For the past two years, the world of software development has been defined by a single, seismic question: What is the true impact of AI? The initial data was puzzling. The 2024 DORA report famously found that higher AI adoption was linked to worse software delivery stability and throughput, an anomaly that baffled many leaders who had just signed seven-figure deals for AI coding assistants.

This year’s 2025 State of AI-assisted Software Development report brings clarity, and the findings are more nuanced and profound than we could have imagined. The “anomaly” is partially resolved: AI adoption is now positively associated with software delivery throughput. Teams are indeed getting faster. However, the core problem remains:

AI adoption still increases software delivery instability.

This persistent friction reveals a fundamental truth that most organizations are missing. Successful AI adoption isn’t a tools problem; it’s a systems problem. The report’s most powerful conclusion can be distilled into three words:

AI is an amplifier.

The Mirror on the Wall

Like a powerful mirror, AI doesn’t just change your organization; it reflects its true nature with unflinching honesty.

If your organization has clear workflows, a high-quality internal platform, and a culture of excellence, AI will amplify those strengths, turning local productivity gains into measurable organizational velocity.

However, if your organization is plagued by downstream bottlenecks, technical debt, and misaligned teams, AI will only magnify that chaos. Giving developers tools to generate code 30% faster is useless if your code review process is a week-long bottleneck or your testing environments are brittle. You’re not accelerating value delivery; you’re just accelerating the creation of inventory that piles up at the next gate. This is the core of the productivity paradox: individual effectiveness soars, while the system groans under the added pressure.
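The inventory point is really just queueing arithmetic, and a toy model makes it concrete. The numbers below are illustrative, not from the DORA report: coding output rises 30%, review capacity stays flat, and end-to-end throughput doesn’t move while work piles up in front of review.

```python
# Toy two-stage pipeline: coding -> review. Illustrative numbers only.
# Throughput is capped by the slowest stage; speeding up the stage
# before a bottleneck only grows work-in-progress inventory.

def simulate(days, coded_per_day, reviewed_per_day):
    """Return (changes shipped, review backlog) after `days` days."""
    backlog = 0
    shipped = 0
    for _ in range(days):
        backlog += coded_per_day               # output of AI-assisted coding
        done = min(backlog, reviewed_per_day)  # review is the gate
        backlog -= done
        shipped += done
    return shipped, backlog

# Before AI assistance: 10 changes coded per day, review clears 10 per day.
print(simulate(20, 10, 10))  # (200, 0): no backlog builds up

# After AI assistance: coding is 30% faster, review is unchanged.
print(simulate(20, 13, 10))  # (200, 60): same throughput, growing backlog
```

Same shipped count, sixty changes stuck in review: exactly the inventory piling up at the next gate.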

The Blueprint for Amplification: The DORA AI Capabilities Model

So, how do you fix the system? The report doesn’t just diagnose the problem; it offers a compelling prescription in the form of the new DORA AI Capabilities Model. This model identifies seven foundational capabilities, a mix of technical, cultural, and process-oriented practices, that are proven to amplify the positive effects of AI.

These capabilities are:

  1. A Clear and Communicated AI Stance: Ambiguity creates fear and risk. Teams need psychological safety and clear guardrails to experiment effectively.
  2. Healthy Data Ecosystems: AI models are only as good as the data they’re trained on. High-quality, accessible internal data is the fuel for context-aware, effective AI.
  3. AI-Accessible Internal Data: Generic AI is helpful. AI with secure access to your internal documentation, codebases, and systems is transformative.
  4. Strong Version Control Practices: In an era of high-velocity, AI-generated code, the ability to commit frequently and roll back changes safely is not just a best practice; it’s a critical safety net that enables speed.
  5. Working in Small Batches: AI can generate vast amounts of code quickly, but large, complex changes are difficult to review and destabilize systems. Enforcing the discipline of small, manageable work items reduces friction and improves product performance.
  6. A User-Centric Focus: This was one of the report’s most startling findings. For teams without a user-centric focus, AI adoption can actually harm team performance. A clear focus on user needs ensures that AI-driven speed is pointed in the right direction.
  7. Quality Internal Platforms: The internal platform is the essential foundation for AI success. It provides the paved roads, guardrails, and shared capabilities that allow the benefits of AI to scale securely across the entire organization.
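Capabilities 4 and 5, strong version control and small batches, are concrete enough to sketch. A minimal demo in a throwaway repository, driving ordinary git commands from Python (file names and commit messages are illustrative; assumes git is on PATH):

```python
# Small-batch commits plus safe rollback, demonstrated in a scratch repo.
import os
import subprocess
import tempfile

def git(*args):
    """Run a git command in the current directory, failing loudly."""
    subprocess.run(["git", *args], check=True, capture_output=True)

os.chdir(tempfile.mkdtemp())
git("init", "-q")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "Demo")

# Commit one coherent change at a time, even if AI generated code in bulk.
with open("parser.py", "w") as f:
    f.write("validate()\n")
git("add", "parser.py")
git("commit", "-qm", "Extract input validation")

with open("parser.py", "a") as f:
    f.write("risky refactor\n")
git("add", "parser.py")
git("commit", "-qm", "AI-generated refactor")

# The refactor destabilizes the build: revert just that one commit.
# History stays intact, so the rollback is fast, safe, and auditable.
git("revert", "--no-edit", "HEAD")
print(open("parser.py").read())  # validate() -- the risky line is gone
```

Because every change is a small, self-contained commit, `git revert` can surgically undo one AI-generated change instead of freezing the whole pipeline.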

Your First Step: Look in the Mirror

The message is clear: the race to leverage AI won’t be won by the company that buys the most advanced tools. It will be won by the company that builds the most robust and healthy socio-technical system.

For leaders, the path forward isn’t to ask “Which AI tool should we buy?” but rather, “Is our organization ready for the truth AI will show us?”

The report suggests a simple, powerful starting point. Gather your team and ask them one question: “Can we, right now, draw our software delivery value stream on a whiteboard?” If the answer is no, or if the resulting diagram is a mess of tangled lines and question marks, you’ve found your bottleneck. You’ve found the place where the amplifier is currently plugged into noise.

And you’ve found where the real work begins.

Beyond the Screen: The Resurgence of Extreme Hardware and Embodied AI

For nearly a decade, Silicon Valley’s mantra has been clear: dematerialize everything. Transform atoms into bits. Move fast and break things, but preferably break them in code where the damage is reversible. This philosophy gave us streaming services, cloud computing, and mobile apps that transformed how we communicate, work, and entertain ourselves.

But 2025 marks an inflection point. The most ambitious technological ventures are no longer content to live behind screens. They’re returning to the messy, unforgiving world of physics, and the results are nothing short of revolutionary.

The Speed of Electricity

Consider what just happened on a German test track. An all-electric hypercar shattered the production car speed record, reaching 308.4 mph. This wasn’t just an incremental improvement; it represented a fundamental reimagining of what’s possible when you combine cutting-edge battery chemistry with sophisticated power management.

The achievement required solving problems that software alone could never address: managing heat dissipation at extreme speeds, creating batteries that can discharge at rates ten times higher than conventional systems, and building motors that can deliver nearly 3,000 combined horsepower without thermal failure. This is engineering at the absolute edge of material science.

What made this possible? A 1,200-volt ultra-high-voltage platform, proprietary blade battery technology, and perhaps most importantly, the unique characteristics of electric powertrains that allow for smooth, consistent power delivery impossible with traditional combustion engines. The test driver noted that the quiet, vibration-free experience allowed for a level of focus and control previously unattainable at such speeds.
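A back-of-envelope drag calculation shows why such speeds demand this much power: aerodynamic drag power grows with the cube of speed, P = ½·ρ·Cd·A·v³. The vehicle parameters below are illustrative assumptions (typical hypercar values, not this car’s published specs):

```python
# Why ~308 mph needs thousands of horsepower: drag power scales with v^3.
# All parameters are illustrative assumptions, not the record car's specs.

RHO = 1.2    # air density at sea level, kg/m^3
CD = 0.35    # assumed drag coefficient
AREA = 2.0   # assumed frontal area, m^2

def drag_power_hp(mph):
    """Power (hp) needed to overcome aerodynamic drag at a given speed."""
    v = mph * 0.44704  # mph -> m/s
    watts = 0.5 * RHO * CD * AREA * v**3
    return watts / 745.7  # watts -> mechanical horsepower

print(round(drag_power_hp(154.2)))  # half the record speed: ~184 hp
print(round(drag_power_hp(308.4)))  # the record speed: ~1476 hp, 8x more
```

Under these assumptions, drag alone eats roughly 1,500 hp at the record speed, before rolling resistance, drivetrain losses, and the reserve needed to keep accelerating, which is why nearly 3,000 combined horsepower and extreme thermal management are not marketing excess.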

The Trillion Dollar Challenge of Dexterous Manipulation

While speed records capture headlines, an even more profound transformation is happening in robotics. Major technology companies are making investments described as comparable in scale to the entire augmented reality market, potentially reaching hundreds of billions over the next decade.

The surprising truth about humanoid robotics is that walking isn’t the hard part anymore. Companies have largely solved locomotion; robots can walk, run, and even perform backflips. The grand challenge lies in something far more subtle: dexterous manipulation.

Think about the last time you picked up a glass of water. Your brain performed countless calculations instantaneously: estimating weight, adjusting grip pressure, predicting liquid dynamics, compensating for movement. This seemingly simple act represents one of the most complex challenges in robotics today.

Solving it requires more than clever algorithms. It demands unprecedented integration of advanced sensors, fine motor control, and sophisticated AI models that can simulate and predict real-world physics in real time. The goal for many isn’t to manufacture robots at scale, but to develop the foundational software backbone that could become the operating system for an entire ecosystem of physical AI systems.

The New Engineering Renaissance

This return to physical innovation signals something profound about where technology is heading. After years of relatively frictionless digital innovation, the industry is tackling problems that can’t be solved with code alone. These challenges demand a new generation of engineers who are equally comfortable with neural networks and Newtonian mechanics.

Consider what this means for the problems we can now address:

Manufacturing processes that adapt in real time to material variations. Transportation systems that push the boundaries of energy efficiency while delivering unprecedented performance. Robots that can work alongside humans in unpredictable environments, handling delicate tasks that were previously impossible to automate.

The Convergence Point

What makes this moment unique is the convergence of capabilities that were previously developing in isolation. Advanced AI provides the intelligence. Breakthrough battery technology provides the power. Sophisticated sensors provide the perception. Novel materials provide the structure. Together, they’re enabling solutions to problems we couldn’t even properly frame a decade ago.

This isn’t about choosing between digital and physical innovation; it’s about their synthesis. The electric hypercar wouldn’t be possible without sophisticated software controlling every aspect of power delivery. The humanoid robot is essentially a physical platform for artificial intelligence to interact with the world.

What This Means for the Future

The return to atoms doesn’t diminish the importance of bits; it amplifies it. Every physical innovation requires increasingly sophisticated digital twins, simulations, and control systems. The difference is that now these digital capabilities must grapple with the unforgiving constraints of the physical world.

For organizations and individuals, this shift presents both challenges and opportunities. Companies that have focused purely on software may find themselves needing to develop hardware expertise or forge new partnerships. Engineers who can bridge the digital-physical divide will become increasingly valuable. And problems that seemed intractable when approached from either domain alone may suddenly become solvable through their integration.

The revenge of the physical is really the beginning of a new synthesis. We’re not abandoning the digital revolution; we’re extending it into dimensions that actually push back. And in that resistance, in that friction with reality, we’re discovering possibilities that pure software could never achieve alone.

The future of technology isn’t just getting physical again. It’s getting real in ways we’re only beginning to imagine.

The Two-Month Sprint That Rewrote the AI Playbook: What NotebookLM Teaches Us About Building in the Age of AI

Last Tuesday, I watched a founder friend upload his entire company’s documentation into NotebookLM. Twenty seconds later, two AI podcasters were discussing his business model with the kind of insight his board of directors hadn’t managed in three meetings. “This is insane,” he said, replaying the segment where they debated his pricing strategy. “They actually understand it better than I explained it.”

That’s when it hit me: We’ve been thinking about AI products all wrong.

The 60-Day Revolution Nobody Saw Coming

While the tech world was busy debating whether Google had lost its edge to OpenAI, a team of three at Google Labs was quietly proving that the future of AI isn’t about who has the biggest model. It’s about who understands what humans actually need from their machines.

Jason Spielman, Raiza Martin, and Stephen Hughes built NotebookLM in two months. Not two years. Not two quarters. Two months.

They didn’t set out to create a viral sensation. They set out to solve a simple problem: What if AI could help you understand your own content better, rather than just regurgitating information from the internet?

The Magic of Constrained Innovation

Here’s what most people miss about NotebookLM’s success: Its genius isn’t in what it can do. It’s in what it chose not to do.

In an era where every AI product tries to be everything (your writer, your coder, your therapist, your fortune teller), NotebookLM picked one lane and owned it completely. Source-grounded AI. Your documents. Your context. Your truth.

The mental model was deceptively simple: Inputs → Chat → Outputs. Upload your sources, have a conversation about them, generate something useful. No promises of artificial general intelligence. No claims about replacing human creativity. Just a tool that helps you see your own ideas from a different angle.
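That loop is simple enough to sketch. The function below is not NotebookLM’s actual implementation, just an illustration of what “source-grounded” means in practice: the model is only ever shown the user’s own documents and told to answer from nothing else (the prompt wording and structure are my own assumptions):

```python
# Illustrative sketch of source-grounded prompting: the model sees only
# the user's sources and is instructed to answer from nothing else.
# (Not NotebookLM's real architecture; prompt wording is hypothetical.)

def build_grounded_prompt(sources, question):
    """Assemble a prompt that confines a chat model to the given sources."""
    numbered = "\n\n".join(
        f"[Source {i}]\n{text}" for i, text in enumerate(sources, start=1)
    )
    return (
        "Answer ONLY from the sources below. If the answer is not in them, "
        "say you don't know. Cite sources as [Source N].\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    ["Our pricing is usage-based, billed monthly."],  # the user's document
    "How are customers billed?",
)
# `prompt` can now go to any chat model; the grounding lives in the
# prompt contract, not in the model.
```

Inputs are the sources, Chat is the model call, Outputs are whatever you render from the answer; everything the product refuses to do is enforced at this one seam.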

When they added Audio Overview, the feature that generates those eerily realistic podcast discussions, they weren’t trying to replace podcasters. They were giving people a new lens through which to examine their own content. The viral moment wasn’t planned; it was the inevitable result of building something genuinely useful rather than merely impressive.

The Velocity Doctrine: Why Small Teams Win

The NotebookLM story reveals an uncomfortable truth about product development: Your greatest strength can become your greatest weakness.

Google has the best AI models in the world. Infinite computing resources. Thousands of brilliant engineers. And yet it took a team you could fit in a sedan to create the company’s first truly viral AI product in years.

The formula they discovered is becoming the new playbook:

1. Radical Focus Over Feature Creep
When you have 60 days, you can’t build everything. This constraint becomes your compass. Every feature request, every nice-to-have, every “what if we also…” gets filtered through one question: Does this serve our core mission of source-grounded AI?

2. Decision Velocity as a Metric
Large teams optimize for consensus. Small teams optimize for decisions. The NotebookLM team made choices in minutes that would take weeks in a traditional structure. Not because they were reckless, but because they had clarity of purpose.

3. Ship at 70% Perfect
The Audio Overview feature that went viral? It wasn’t perfect. Sometimes the AI podcasters would go off on tangents. Sometimes they’d miss key points. But users didn’t care; they were too busy being amazed that their PDFs had become engaging conversations.

The Source-Grounded Revolution

Here’s the paradigm shift everyone’s missing: The next wave of AI isn’t about building systems that know everything. It’s about building systems that deeply understand something specific: your something.

NotebookLM proved that the most powerful AI experiences come from constraining the problem space, not expanding it. While others were racing to build broader models with more parameters, this team went narrow and deep.

Think about it:

  • Google helps you access the world’s information
  • NotebookLM helps you understand YOUR information
  • One is a library. The other is a mirror.

The implications are staggering. Every company sitting on years of documentation, every researcher drowning in papers, every student trying to synthesize a semester’s worth of notes: they don’t need artificial general intelligence. They need artificial specialized intelligence, trained on their specific context.

The Two-Month Rule

If NotebookLM teaches us anything, it’s this: In the age of AI, if you can’t build and ship something meaningful in two months, you’re probably solving the wrong problem.

The tools are there. The models are accessible. The infrastructure is commoditized. What’s scarce isn’t technology; it’s taste. It’s the ability to look at all the possibilities and choose the one that matters.

The new reality:

  • Week 1-2: Define the core problem and constraint
  • Week 3-4: Build the minimal viable magic
  • Week 5-6: Test with real users, iterate rapidly
  • Week 7-8: Polish the experience just enough to ship

Anything longer and you’re probably overthinking it.

The Lessons for Building in the AI Age

Watching the NotebookLM team work has fundamentally changed how I think about product development. Here are the principles they’ve proven:

Start with the constraint, not the capability. Don’t ask “What can AI do?” Ask “What specific problem needs solving?”

Your moat isn’t your model. It’s your understanding of a specific use case and your courage to ignore everything else.

Small teams aren’t a limitation; they’re a strategic advantage. Three people with shared context will outship thirty people in meetings.

Perfection is the enemy of magic. Users will forgive bugs if you give them something they’ve never seen before.

The best interface might not be chat. NotebookLM’s Audio Overview proves that sometimes the most powerful AI experience doesn’t involve typing at all.

The Future is Already Here

As I write this, thousands of product teams are sitting in planning sessions, creating roadmaps that stretch into 2026, debating KPIs and success metrics. Meanwhile, somewhere in Google Labs, or any small team that’s paying attention, another group of three people is 60 days away from making those roadmaps obsolete.

The NotebookLM team didn’t just ship a product. They shipped a proof of concept for a new way of building: Smaller teams. Shorter cycles. Sharper focus.

The age of two-year product roadmaps is over. The age of two-month sprints has begun.

And the beautiful irony? It happened at Google, proving that even in the largest tech companies, the future is still being built by people who refuse to accept that things have to take as long as they’ve always taken.

How AI Just Solved Education’s Oldest Problem

Every teacher knows this moment: A student struggling with Newton’s laws while their classmate breezes through. The traditional response? “Read it again.” But what if the problem isn’t the student’s effort but that the textbook speaks only one language when learners need five?

Google Research just dropped something fascinating: Learn Your Way, an AI system that transforms static textbooks into dynamic, personalized learning experiences. The results? Students showed 11% better retention after just one study session.

But here’s the use case that caught my attention, one that was impossible at scale until now:

The Multi-Modal Bridge for Struggling Learners

Imagine Lars, a 16-year-old apprentice at Siemens who’s passionate about Bayern Munich but struggles with electrical engineering theory. Traditional textbooks explain Ohm’s Law the same way to everyone. Learn Your Way automatically:

  • Re-levels the content to his comprehension level
  • Replaces generic examples with football-themed ones (imagine learning resistance through the physics of Neuer’s goalkeeper gloves gripping a wet ball)
  • Generates an audio lesson in conversational German
  • Creates visual mind maps showing connections between circuit components
  • Produces practice scenarios from actual industrial applications

This isn’t just translation, it’s transformation. The same concept delivered through five different cognitive doorways, personalized to what resonates with each learner.
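
Conceptually, that's one source concept fanned out through several prompt "doorways." The sketch below is purely illustrative: Learn Your Way's actual pipeline is not public, and every name and template here is invented.

```python
# Hypothetical sketch only: Learn Your Way's API is not public. This just
# illustrates routing one concept through several personalized "doorways".
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    reading_level: str      # e.g. "B1"
    interest: str           # e.g. "football"
    language: str           # e.g. "de"

# Each doorway is a prompt template that an LLM backend would render.
DOORWAYS = {
    "releveled_text": "Explain {concept} at {reading_level} level.",
    "themed_examples": "Explain {concept} using {interest} analogies.",
    "audio_script":   "Write a conversational {language} dialogue about {concept}.",
    "mind_map":       "List the key sub-concepts of {concept} and their links.",
    "practice":       "Create a practice scenario applying {concept}.",
}

def build_prompts(concept: str, profile: LearnerProfile) -> dict:
    """One source concept, five personalized renderings."""
    slots = {"concept": concept, **profile.__dict__}
    return {name: tpl.format(**slots) for name, tpl in DOORWAYS.items()}

prompts = build_prompts("Ohm's Law", LearnerProfile("B1", "football", "de"))
```

The point of the structure: the source material is written once, and personalization is a cheap transformation layer on top, which is what makes it feasible at scale.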

What This Means for European Organizations

The implications are massive for our dual education systems and industry training:

  1. Ausbildung Revolution: Germany’s apprenticeship programs could personalize technical training to each apprentice’s background, explaining CNC programming through gaming logic for Gen Z trainees or through traditional craftsmanship analogies for career changers
  2. Multilingual Workforce Training: A Portuguese engineer at Volkswagen could learn new assembly processes with examples from Portuguese automotive traditions, while their Turkish colleague gets culturally relevant analogies, all from the same source material
  3. Compliance Training That Sticks: GDPR training that adapts to whether you’re a developer (code examples), marketer (campaign scenarios), or HR professional (employee data cases)

Real-World Application: The BMW Academy Case

Imagine BMW’s production line training:

  • The 19-year-old gamer learns quality control through “debugging” metaphors
  • The 45-year-old career changer from hospitality learns through “customer service excellence” parallels
  • The mechanical engineering graduate gets pure technical specifications
  • All learning the exact same procedures, with 11% better retention

What now

If you’re in L&D or education technology:

  • Identify your most diverse learner populations (especially in technical training)
  • Map cultural and generational learning preferences in your workforce
  • Pilot AI-powered content transformation with safety-critical training first (where retention matters most)
  • Consider how this could reduce training time in high-turnover positions
  • Explore partnerships with vocational schools testing similar approaches

The European Advantage

With our strong tradition of apprenticeships and lifelong learning, Europe is uniquely positioned to lead this transformation. We’ve always known that a master craftsman teaches differently than a university professor. Now we can scale that personalized mastery.

The fascinating part isn’t just the technology, it’s that students wanted to use it. 93% preferred AI-adapted learning versus 67% for traditional digital content.

We’re witnessing the shift from industrial education to artisanal learning—at industrial scale.

The question for European businesses isn’t whether to adopt this, but whether our competitors in Silicon Valley or Shenzhen will get there first.

Read the full Google Research article here.

The Day AI Agents Got a Credit Card: Why AP2 Changes Everything

Last week, during a conversation with a CTO friend in Berlin, he casually mentioned his AI assistant had just negotiated a 30% discount on their cloud infrastructure bill. Not him, his AI agent. It analyzed usage patterns, benchmarked pricing across providers, drafted the negotiation email, handled three rounds of back-and-forth, and executed the contract renewal. All while he was asleep.

“The crazy part,” he said, sipping his flat white at St. Oberholz, “is that I couldn’t actually let it complete the payment. That last step, transferring €47,000, still needed me to click ‘approve’ like it’s 1999.”

This is the paradox of 2025: We’ve built AI agents sophisticated enough to outperform MBA graduates in strategic negotiations, yet they can’t buy a €2 coffee without human intervention. It’s like giving someone a Ferrari but making them push it everywhere.

Google just changed that game entirely.

The Infrastructure Nobody Knew We Needed

When Google quietly launched the Agent Payments Protocol (AP2) in September 2025, most headlines focused on the sexy parts: AI agents shopping autonomously! Stablecoin integration! But they missed the real story.

AP2 isn’t about enabling purchases. It’s about solving the trust paradox that’s been strangling the autonomous economy since day one.

Think about how payments actually work today. Every transaction assumes a human, with their unique biometrics, behavioral patterns, and legal accountability, sits at the endpoint. Our entire financial infrastructure, from PCI compliance to fraud detection, is built on this assumption. It’s why your bank calls when someone uses your card in an unusual location. The system expects humans to behave like humans.

But AI agents don’t have fingerprints. They don’t have behavioral patterns that fraud systems can learn. They can’t be sued. When an agent initiates a payment, the system has no framework for answering three fundamental questions:

  1. Authorization: Did a human actually approve this?
  2. Authenticity: Is this real intent or an AI hallucination?
  3. Accountability: Who’s liable when things go wrong?

Without answers, the autonomous economy was stuck in demonstration mode. Impressive demos, no production deployments.

The Mandate System: Cryptographic Trust at Scale

AP2’s breakthrough isn’t technical complexity, it’s philosophical clarity. Instead of trying to make AI agents look like humans to existing systems (which is what everyone else attempted), Google created a new trust layer that sits above traditional payments.

The system works through cryptographically-signed “Mandates”: think of them as smart contracts for the real world, but without the blockchain overhead.

Intent Mandates are the workhorses. A user might authorize: “Buy winter jackets under €200 when available in green.” The agent gets a cryptographic proof of this authorization that merchants can verify. No more “trust me, my human said it’s okay.”

Cart Mandates maintain human oversight for larger purchases. The agent negotiates and assembles the cart, but needs explicit approval before execution. Perfect for B2B procurement where agents handle complexity but humans control budgets.

Payment Mandates are the clever bit, they don’t contain payment details at all. They’re just signals to payment processors that an AI agent is involved, triggering enhanced monitoring without requiring infrastructure changes.

What’s brilliant here is the separation of concerns. Payment credentials never touch the agent layer. PCI compliance remains intact. Existing payment rails work unchanged. It’s like adding a new protocol layer to the internet stack without modifying TCP/IP.
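
To make the idea concrete, here is a minimal sketch of what signing and verifying an Intent Mandate could look like. This is not the AP2 wire format: the real protocol defines its own schema and uses proper asymmetric signatures, while this toy stands in with stdlib HMAC.

```python
# Sketch only: AP2's actual mandate schema and signature scheme come from the
# protocol spec. HMAC-SHA256 here stands in for a real asymmetric signature.
import hashlib
import hmac
import json
import time

USER_KEY = b"demo-secret"  # in reality: the user's private signing key

def sign_intent_mandate(constraints: dict) -> dict:
    """Bind a human-approved shopping intent to a verifiable signature."""
    payload = {
        "type": "intent_mandate",
        "constraints": constraints,   # e.g. max price, color, category
        "issued_at": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_mandate(mandate: dict) -> bool:
    """A merchant checks the proof instead of trusting the agent's word."""
    claimed = mandate.pop("signature")
    body = json.dumps(mandate, sort_keys=True).encode()
    expected = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
    mandate["signature"] = claimed
    return hmac.compare_digest(claimed, expected)

m = sign_intent_mandate({"item": "winter jacket", "max_eur": 200, "color": "green"})
assert verify_mandate(m)
```

If the agent (or anyone in between) alters the constraints after signing, verification fails, which is exactly the "no more trust me, my human said it's okay" property described above.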

Why German Enterprises Should Care (Deeply)

Here’s what my LinkedIn feed keeps missing: Germany is accidentally perfect for this revolution.

Start with the numbers: 946 fintech startups, 64% fintech adoption rate, and, crucially, a SEPA instant payment system processing 1.2 billion transactions daily. While Americans debate whether Venmo or Zelle is better, Germans have had instant, interoperable, bank-grade payments for years.

But the real advantage is cultural. German businesses don’t chase shiny objects, they optimize processes. And AP2 is, fundamentally, a process optimization play.

Consider a typical German Mittelstand manufacturer. They’re already using SAP for ERP, have sophisticated supply chain management, and pride themselves on efficiency. Their procurement team spends 40% of their time on routine reordering, vendor management, and price negotiations.

With AP2, those procurement agents become actual agents—AI systems that monitor inventory, predict demand, negotiate with suppliers, and execute purchases within pre-approved parameters. The €200k procurement specialist doesn’t lose their job; they stop doing repetitive tasks and start setting strategic parameters.

One startup in Munich is already piloting this. Their AI agent manages relationships with 47 suppliers, automatically reordering based on production schedules, negotiating volume discounts, and even switching suppliers when quality metrics slip. Time from stockout detection to reorder? 3 minutes. Human involvement? Zero, until the monthly strategy review.

The Competitive Moat Nobody Talks About

Everyone’s focused on Google’s first-mover advantage, but they’re missing the real moat: network effects with a twist.

Traditional network effects are simple, more users make the platform more valuable. But AP2 has three interlocking network effects:

  1. Agent Network: More agents mean more sophisticated multi-agent transactions
  2. Merchant Network: More merchants mean more agent utility
  3. Trust Network: More successful transactions mean better risk models

This creates a fascinating dynamic. Early adopters don’t just get first-mover advantage, they help train the risk models that become the competitive moat. It’s like being paid to dig your own defensive trench.

The European players understand this. Nexi (processing €530 billion annually), Adyen, and Revolut aren’t just “partners”, they’re co-creating the risk frameworks that will define autonomous commerce. When American companies eventually adopt AP2, they’ll be using trust models trained on European transaction patterns.

There’s delicious irony here. Europe, often criticized for overregulation, has created the perfect sandbox for autonomous payments. GDPR-compliant by design? Check. PSD2 Strong Customer Authentication? Built in. EU AI Act compliance? Native to the architecture.

The Use Cases That Actually Matter

Forget the “AI buying your groceries” demos. The real money is in B2B.

Dynamic Software Licensing: Imagine your infrastructure automatically scaling licenses based on usage, negotiating volume discounts in real-time. One Frankfurt-based hedge fund is piloting this with their Bloomberg terminals—their agent monitors trader activity and automatically adjusts terminal licenses daily, saving €2 million annually.

Supply Chain Orchestration: A Stuttgart automotive supplier has agents managing relationships with 200+ vendors. The system doesn’t just reorder parts, it predicts supply chain disruptions, automatically sources alternatives, and maintains optimal inventory levels. Human procurement staff now focus on strategic vendor relationships rather than spreadsheet management.

Energy Arbitrage: With volatile energy markets, German manufacturers are using agents to automatically shift production schedules based on electricity prices, pre-purchase energy during low-demand periods, and even sell excess capacity back to the grid. All executed autonomously within pre-set parameters.

The pattern is clear: AP2 shines where complexity meets repetition. Tasks that require intelligence but follow patterns. Decisions that need context but have clear parameters.
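
That "clear parameters" pattern is easy to picture in code. The rule below is a made-up illustration of the energy-arbitrage case, with invented thresholds; a real agent would layer price forecasting and mandate checks on top of a decision core like this.

```python
# Illustrative only: a minimal decision rule of the kind an energy-arbitrage
# agent might apply. The thresholds are made-up numbers, not market advice.
def schedule_action(spot_price_eur_mwh: float,
                    buy_below: float = 60.0,
                    sell_above: float = 140.0) -> str:
    """Decide, within pre-set parameters, what to do at the current spot price."""
    if spot_price_eur_mwh <= buy_below:
        return "pre-purchase energy / run energy-intensive production"
    if spot_price_eur_mwh >= sell_above:
        return "sell excess capacity back to the grid"
    return "run normal schedule"
```

The intelligence lives in choosing good parameters and handling exceptions; the repetitive execution inside those parameters is what gets delegated to the agent.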

The Stablecoin Subplot

The x402 extension, developed with Coinbase and the Ethereum Foundation, deserves its own analysis. While traditional payments move at banking speed, stablecoin settlements happen instantly. This isn’t just faster, it’s fundamentally different.

Instant settlement enables business models that were previously impossible. Micropayments for API calls. Real-time revenue sharing. Dynamic pricing that actually adjusts by the second. Pay-per-use everything.

But the real innovation is programmable money. Smart contracts handling escrow, automatic refunds when SLAs breach, or payments that release based on IoT sensor data. The agent economy isn’t just automated, it’s programmable.
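
A toy escrow shows the flavor of programmable money: the payment itself carries its settlement rule. Everything here, names and numbers alike, is illustrative; a real deployment would live in a smart contract or a payment-processor hook.

```python
# Toy sketch of "programmable money": funds release only if the SLA held,
# otherwise they refund automatically. Numbers and fields are invented.
from dataclasses import dataclass

@dataclass
class Escrow:
    amount_eur: float
    uptime_sla: float = 99.9          # agreed service level, in percent
    released: bool = False
    refunded: bool = False

    def settle(self, measured_uptime: float) -> str:
        """Release to the vendor if the SLA held, otherwise refund the buyer."""
        if measured_uptime >= self.uptime_sla:
            self.released = True
            return "released"
        self.refunded = True
        return "refunded"

deal = Escrow(amount_eur=47_000)
outcome = deal.settle(measured_uptime=99.95)   # fed by monitoring/IoT data
```

Swap the uptime check for a sensor reading or a delivery confirmation and you get the IoT-triggered payments mentioned above: the rule changes, the mechanism doesn't.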

German businesses, despite their traditional banking preferences, are surprisingly well-positioned here. The digital euro trials, strong regulatory framework, and cultural emphasis on stability make stablecoins less “crypto” and more “digital cash.”

The Risks Everyone’s Ignoring

Of course, this isn’t without risks. The optimistic takes are flooding LinkedIn, but let’s be realistic about the challenges.

Adversarial Agents: What happens when agents start gaming each other? We’ve already seen a preview of this with algorithmic trading: flash crashes caused by algorithms reacting to algorithms. Now imagine that in every market.

Hallucination Liability: Yes, mandates provide authorization proof, but what about execution errors? If an agent misinterprets a mandate and orders 10,000 widgets instead of 10, who pays? The legal frameworks don’t exist yet.

Privacy Paradox: AP2 agents need extensive access to understand context: calendar, email, financial records. The same GDPR that makes Europe attractive for AP2 could also constrain its most powerful use cases.

Concentration Risk: If Google controls the protocol that powers autonomous commerce, they effectively become the tax collector for the AI economy. The “open” protocol still routes through Google’s infrastructure.

What This Actually Means for German Founders

If you’re running a startup in Germany, you have three options:

Option 1: Ignore it. Reasonable if you’re in deep tech or biotech. Not everything needs AI agents making purchases.

Option 2: Integrate it. If you’re in fintech, e-commerce, or B2B SaaS, start experimenting now. The protocol is open, the documentation is solid, and early integration gives you competitive advantage.

Option 3: Build on it. The real opportunity isn’t using AP2, it’s building the layer above it. Tools for mandate management. Risk assessment for agent transactions. Multi-agent orchestration platforms. The infrastructure for autonomous commerce is just beginning.

The smartest play might be the most boring: focus on compliance and trust. Every enterprise wanting to use AP2 will need help with mandate management, audit trails, and regulatory compliance. The Salesforce of agent payments doesn’t exist yet.

The 2030 Scenario

Fast forward five years. Your morning looks different.

Your personal AI assistant has already renegotiated your mortgage rate (saved 0.3%), switched your electricity provider (€50 monthly saving), and bulk-purchased household supplies with three neighbors for a volume discount. Your company’s agents have optimized cloud spending, rebalanced the investment portfolio, and identified three acquisition targets that match strategic parameters.

You haven’t eliminated human decision-making, you’ve elevated it. Instead of clicking “buy” 50 times daily, you’re setting strategies, defining parameters, and handling exceptions. The mundane became autonomous. The strategic remains human.

This isn’t science fiction. Intuit is already building “done-for-you” financial services. ServiceNow has autonomous procurement in production. The infrastructure exists today.

The question isn’t whether the autonomous economy is coming, it’s whether European businesses will help build it or simply consume it. With AP2, Google has provided the infrastructure. With SEPA, strong fintech ecosystem, and regulatory clarity, Europe has the foundation.

The rest is execution.

Read to Forget: The Art of Selective Information Consumption

We’ve all seen it.

Maybe it was a university colleague whose textbook looked more yellow than white, nearly every paragraph aggressively highlighted. Or perhaps it was the person in the quarterly briefing, furiously typing notes, attempting to transcribe an 80-slide deck word for word.

For a long time, I assumed this was the definition of diligence. It looks like work. It feels responsible. We are taught from an early age that learning means retention, and retention means being able to recall facts on command.

But somewhere along the way, perhaps while drowning in research papers or navigating the endless stream of industry reports, presentations, and webinars, I realized this approach is fundamentally flawed.

It felt productive, but it rarely led to better thinking.

The reality of our professional lives is this: compelling information is infinite, and our time is finite. Trying to retain everything is not just difficult; it’s counterproductive.

It’s time to change the objective. I read to forget.

The Storage Device Fallacy

The fundamental error we make is treating our minds like storage devices. We act as though we are hard drives that need to save every bit of information that passes before our eyes.

When we operate in “storage mode,” we focus on the wrong things. We worry about capturing the details rather than understanding the structure. We hoard information in complex note-taking systems, creating vast databases of knowledge that we rarely, if ever, consult.

This hoarding provides a false sense of security. We confuse having access to information with having integrated it. But you can’t possibly keep track of everything, nor can you build anything meaningful with knowledge you’ve only filed away.

Our minds are not storage devices; they are processing engines. And they need a different kind of fuel.

The Evolving Framework

Instead of a hard drive, I prefer to think of my mind as a constantly evolving framework, a system of beliefs and models (a Bayesian system, if you will) that are updated in small, incremental steps.

When I approach any new material, whether it’s a non-fiction book, a conference keynote, or a technical white paper, I am not trying to download it into my brain. I am exposing my current mental models to new stimuli and seeing how they react.

This requires a significant shift in mindset: You must give yourself permission to let the information go.

When I start reading or listening, I am prepared to lose 98% of what’s in front of me. This isn’t negligence; it’s focus. By releasing the pressure to remember the 98%, I can hunt effectively for the 2% that matters.

The Two Goals of Consumption

If we aren’t reading to memorize, what are we reading for? In my experience, I only look for two things from any piece of content.

1. The Incremental Update

The most valuable outcome of consuming information is a subtle alteration in my thinking. I am looking for a perspective that nudges my existing understanding of a problem in a new direction.

It rarely happens as a sudden epiphany. It’s usually a quiet realization, a slight adjustment to my judgment, or a new connection formed between two previously unrelated ideas. These small updates compound over time to create a more refined and accurate world model. This process happens almost subconsciously, provided you are engaging actively rather than just capturing passively.

2. The Actionable Tactic

The second thing I look for is a specific, highly applicable piece of information that I might use later in my own work.

This is the only time I take notes.

If I come across a brilliantly designed methodology section in a paper, I’ll save that. If a speaker presents data in a uniquely compelling visualization, I’ll screenshot the slide. If a book offers a specific framework for difficult conversations, I’ll jot it down.

These are tools, not facts. They are immediately useful and save me the time of reinventing the wheel later. Anything beyond that clutters my system.

The Test of True Value

The goal of professional learning should not be retention. It should be stimulation.

Information should stimulate thinking and produce new ideas. The ultimate test of a text or presentation is whether it forces you to stop consuming and start creating.

I’ve often found myself reading a paper, pausing midway, and immediately opening my editor to experiment with a variation of the algorithm described. This immediate experimentation often leads to new insights or even the foundation for a new project.

The highest compliment I can pay an author or a speaker is to stop paying attention to them and start applying their ideas.

If a non-fiction text, a lengthy slide deck, or a keynote speech doesn’t spark new thoughts or make you want to do something differently, it may not have been worthwhile consuming in the first place.

Let It Go

We are drowning in data. We are bombarded by presentations, reports, and books that promise to make us better. But the sheer volume paralyzes us if we try to drink from the firehose.

Stop highlighting 40% of the text. Stop transcribing webinars. Stop worrying that you can’t recall the five key points from the book you read last month.

Shift your focus from accumulating information to refining your intuition. Read to understand, consume to act, and be comfortable forgetting the rest.

Robo taxis are coming

During a family trip in San Francisco, I experienced a paradigm shift not in the cloud, but on the asphalt. It was a visceral lesson that the future often arrives not with a grand announcement, but with a quiet, electric hum as it pulls up to the curb.

The day started simply. We were sightseeing, and the familiar dance of hailing a ride began. But instead of the usual Lyft or Uber, my kids, having seen them glide silently through the city streets, were adamant: “Dad, can we try one of those Robotaxis?”

As a technologist, I was intrigued. As a parent, I was happy to indulge their curiosity. I knew the technology on paper, of course. I understood the complex symphony of LiDAR, radar, compute, and machine learning that made it possible. But understanding a system diagram and entrusting your family to one are two entirely different things.

The First Ride: From Science Fiction to Reality

Summoning the car felt familiar enough, a pin on a map, an ETA. But the arrival was anything but. A car pulled up, and the driver’s seat was empty. The steering wheel was completely still. For a moment, it was pure science fiction. We opened the doors and settled in, the cabin feeling spacious and strangely serene without a person in the front seat.

With a tap on the in-car screen, we were off. That first turn was the moment of truth. The steering wheel rotated with a calm, deliberate precision that was almost unnerving. It navigated a busy intersection, yielded to a pedestrian, and merged into traffic with a smoothness that felt both alien and deeply reassuring. My initial feeling was one of awe, the kind you get from watching a truly elegant solution to an incredibly complex problem.

The Normalization: From Awe to Utility

That sci-fi feeling lasted for exactly one ride.

By our second trip, the novelty was already giving way to something far more powerful: trust. By the fifth ride of the day (I think we took eight in total), the experience had become as natural and unremarkable as drinking a glass of water. The awe had been replaced by an appreciation for its sheer, boring competence. And I mean “boring” as the highest compliment an architect can give a system. It just worked.

This is where my worldview truly began to shift. The experience was defined by a set of characteristics that we, as engineers, strive for in the systems we build:

  • The app is seamless. The car arrives promptly. There’s no ambiguity.
  • Every ride was the same in its excellence. The acceleration was smooth, the braking was gentle, the lane changes were decisive but not aggressive. There was no variance in driver mood, no questionable route choices, no sudden braking. It was a system operating within perfectly defined, safe parameters.
  • When we got in, the car recognized my profile and my Spotify account began playing our family’s vacation playlist. It wasn’t just a taxi; it was our space. The temperature was ours to control, the music was ours to choose.
  • This was the most unexpected benefit. It was just us, the family. We could talk freely, laugh loudly, and just be ourselves. It felt like we were driving our own car, but without the stress of navigating a new city. We weren’t guests in a stranger’s vehicle, trying to subtly convince our host to turn on the AC. We were the sole occupants of a private, mobile space.

The Ultimate Litmus Test

The technology is one thing, but the human experience is the ultimate measure of success. At the end of a long day of exploring, I asked my kids a simple question.

“So, what do you prefer for our next ride? A Lyft Black in a fancy SUV, or another Waymo?”

They looked at me as if it were the most obvious question in the world. You don’t need to guess their answer. For them, the choice wasn’t between a human and a robot. It was between a predictable, private, personalized experience and one that was not. They had already accepted this as the new standard.

This feels like the shift from on-premise data centers to the cloud. At first, giving up control of the physical hardware felt like a risk. Now, the abstraction, scalability, and predictability of the cloud are the unquestioned foundation of modern technology. Waymo felt like that, a fundamental abstraction of the driving experience. It delivers the outcome you want (getting from point A to point B, safely and comfortably) while abstracting away the complex, messy, and unpredictable parts of the process.

I left San Francisco thoroughly impressed. The experience was more than just a cool piece of tech; it was a glimpse into a fundamentally better way to move through our cities. It’s a future that is safer, calmer, and more efficient.

This is the future. Everything else is the past.

Transform crisis into energy

Enjoyed listening to a recent podcast with Antje Heimsoeth (advisor to many DAX executives) talking about current events.

Had, could, would – this does not help us in the crisis. We can only live forward, not backward. Straighten up and move on.

“What helps now is to build a realistic future scenario in your head. How do I want to live after the crisis?”