I have spent close to two decades watching artificial intelligence move from a niche research curiosity — discussed in narrow conference rooms — to the single most consequential technology reshaping every industry on earth. For most of that time, I kept my observations to myself: in notebooks, in slide decks shared behind closed doors, and in conversations with colleagues over too-long lunches.
That changes today.
This is my public commitment to write regularly — and openly — about AI, research and development commercialisation, and the startup ecosystem building on top of these advances. Not as a curator of hype, but as a practitioner who believes that knowledge hoarded is knowledge wasted.
Why now? The numbers make the case
Consider where we actually stand. Global AI investment reached approximately USD 200 billion in 2025, more than double the figure from three years prior, according to data tracked by Stanford's AI Index. The number of AI-related patent filings has grown over 30% year-on-year since 2022. Meanwhile, the McKinsey Global Institute estimates that generative AI alone could add between USD 2.6 trillion and USD 4.4 trillion annually to the global economy across use cases.
The headline figures, at a glance:

- USD 200B+: global AI investment, 2025 (Stanford AI Index)
- USD 4.4T: potential annual GenAI economic impact (McKinsey Global Institute)
- 30%+: year-on-year growth in AI patent filings since 2022
- 90 days: average cycle between major model capability leaps
These are not abstractions for analysts. They are signals that the technology is moving faster than most organisations' ability to absorb and act on it. The gap between those who understand what is happening and those who do not is widening — and it is a gap with real economic, career, and societal consequences.
The Four Directions of AI in 2026
Having tracked this space across multiple inflection points — from the deep learning resurgence of 2012 to the transformer revolution and the generative AI explosion — I see four dominant vectors defining AI's trajectory this year.
Agentic AI moves from demo to deployment. 2025 was the year the industry talked about AI agents. 2026 is the year they start running payroll, triaging inboxes, and managing supply chains — autonomously. Gartner projects that by end-2026, over 15% of day-to-day business decisions in large enterprises will involve some form of AI agent, up from under 1% in 2023. The design question shifts from "what can AI do?" to "who is accountable when AI acts?"
Multimodal reasoning becomes table stakes. The frontier has moved beyond text. Today's leading models process images, audio, video, code, and structured data in unified architectures. For startups and R&D labs, this opens a commercialisation surface that was simply unavailable 24 months ago — from medical imaging pipelines to real-time manufacturing inspection systems.
The R&D-to-commercialisation gap narrows sharply. Historically, the journey from peer-reviewed breakthrough to deployed product took 7 to 10 years. AI is compressing that cycle to under 18 months in several domains. The implication for startups is profound: defensibility through patents or first-mover advantage is no longer sufficient. Execution velocity, domain expertise, and trusted data become the real moat.
Regulation shapes — not stops — the market. The EU AI Act entered its substantive enforcement phase in 2025. Similar frameworks are accelerating across ASEAN, the UK, and Canada. Contrary to the pessimistic reading, regulatory clarity historically stimulates investment — as we saw in fintech after PSD2 and in biotech after GCP harmonisation. Founders who understand the compliance landscape early will build faster, not slower.
Clayton Christensen's core insight from disruption theory bears repeating here: entrants do not beat incumbents on their own terms — they redefine the playing field entirely. AI is not just a better tool for existing processes. It is a fundamentally different playing field.
On continuous learning — but with a condition
Bill Gates famously observed that we tend to overestimate what technology will do in two years and dramatically underestimate what it will do in ten. The practical implication is that most people either panic or disengage; neither is useful.
Continuous learning is not optional in this environment. But learning for its own sake is a form of procrastination dressed in intellectual clothing. The condition I want to attach to continuous learning is this: knowledge must be applied, and application must create value for others.
Adam Grant's research on organisational learning repeatedly surfaces a counterintuitive truth — the people who learn best are those who teach. They are forced to confront the gaps in their own understanding. They develop mental models robust enough to survive an audience's questions. And they create compounding value: one person who understands something deeply and shares it well can change the decisions of dozens of others.
That is the model I intend to follow. Every piece I write will attempt to move from insight to implication — from what is happening to what you should actually do about it.