The Good Thing Podcast

How to Make Your APIs Work for AI

February 2, 2026
Hosted by Stefan Avram & Jens Neuse
Directed by Jacob Javor

Featuring:

Erik Wilde
Head of Enterprise Strategy @ Jentic

Erik Wilde discusses why many enterprise APIs are not AI ready, why so many AI projects fail prematurely, and why API design and governance matter more than models for building reliable AI-driven systems.

About Erik Wilde

Head of Enterprise Strategy @ Jentic

Erik Wilde is a long-time leader in the API space with over 15 years of experience helping organizations align API design, governance, and digital strategy with real business outcomes. He is an OAI Ambassador at the OpenAPI Initiative and the creator of Getting APIs to Work, where he provides practical guidance on building effective API programs. Today, through his work at Jentic, Erik focuses on how large, complex API landscapes can evolve to support AI-driven systems, without breaking the foundations enterprises already rely on.

Visit Erik Wilde's website →

TL;DR

Erik Wilde joined Stefan and Jens to talk about why many enterprise APIs are not ready for AI, why so many AI projects fail prematurely, and how strong API fundamentals (design, governance) play a more critical role than ever. Drawing on decades in the API space, Erik argued that AI readiness emerges mainly from strong API design and organizational alignment, rather than advances in models alone. The conversation explored why fine-grained APIs break down for agents, why intent-driven design and business capabilities matter more than individual endpoints, and why using less AI can often lead to more reliable systems.


The AI bubble will burst. The useful parts will survive

Erik sees clear parallels between the dotcom boom and AI boom, including the inevitability of a correction.

I think it’s clear that it’s a bubble. Right. Like it’s going to be ugly. There’s going to be companies that will not fare well.

For Erik, the dotcom era is a reminder that hype compresses timelines and inflates expectations. When that pressure releases, some companies disappear, but the underlying shift often remains.

That is how he frames AI: the excess will get stripped away, but the parts that actually help people get work done will stick around.


Why Jentic’s bet is to use less AI, not more

His move to Jentic was not about chasing AI for its own sake. It was about putting boundaries around it.

He argues that most systems work best when the predictable parts stay predictable. Workflows that must run the same way every time should live in deterministic code. AI should be reserved for places where ambiguity or creativity is unavoidable.

This is not an anti-AI position. It is a cost-aware and reliability-driven one. Especially in enterprise environments, using less AI can actually make systems more robust.

What we do kind of allows you to do as little AI as possible, right? For those things where AI creates value, it’s good to use it, but for all the other things, just use deterministic code.
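
To make that split concrete, here is a minimal TypeScript sketch (all names, including the callLLM helper and the refund rules, are hypothetical and not Jentic’s product): the workflow itself is plain deterministic code, and the model is only consulted for the one genuinely ambiguous step, interpreting a customer’s free-text message.

```typescript
// Hypothetical sketch: keep the workflow deterministic and use AI only
// where ambiguity is unavoidable (here, classifying free-text input).

type RefundRequest = { orderId: string; amountCents: number; customerMessage: string };
type Intent = "refund" | "exchange" | "other";

// The only AI call in the workflow: interpreting unstructured text.
// callLLM stands in for whatever model client you actually use.
async function classifyIntent(
  message: string,
  callLLM: (prompt: string) => Promise<string>
): Promise<Intent> {
  const answer = (
    await callLLM(`Classify this customer message as "refund", "exchange" or "other": ${message}`)
  ).toLowerCase();
  if (answer.includes("refund")) return "refund";
  if (answer.includes("exchange")) return "exchange";
  return "other";
}

// Everything else is deterministic code: same input, same behavior, every time.
export async function handleRequest(
  req: RefundRequest,
  deps: {
    callLLM: (prompt: string) => Promise<string>;
    issueRefund: (orderId: string, amountCents: number) => Promise<void>;
    openTicket: (orderId: string, note: string) => Promise<void>;
  }
): Promise<"refund-issued" | "escalated"> {
  const intent = await classifyIntent(req.customerMessage, deps.callLLM);

  // Fixed business rule, no AI involved: small refunds are auto-approved.
  if (intent === "refund" && req.amountCents <= 10_000) {
    await deps.issueRefund(req.orderId, req.amountCents);
    return "refund-issued";
  }

  await deps.openTicket(req.orderId, `Needs human review (intent: ${intent})`);
  return "escalated";
}
```

Swapping the model or the prompt only touches classifyIntent; the business rules and side effects stay in testable, deterministic code.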


APIs are business capabilities, not endpoints

Much of Erik’s career has been spent correcting a quiet misunderstanding inside large organizations: teams often believe that once they expose APIs, the hard part is done.

In reality, APIs only create leverage when they reflect real business capabilities. If they are misaligned with how the business works, teams still move slowly, even if the technology is structurally sound.

This is why API design, governance, and organizational structure are inseparable topics for him. Poor alignment shows up as friction everywhere else.

When we talk about APIs, we really talk more about business capabilities, right? Like some kind of thing that is made available in a digital way.


Why many enterprise APIs are not AI ready

When the discussion turns to AI readiness, Erik shifts the focus away from models and protocols and toward API shape.

He points out that many enterprise APIs are too fine-grained. They expose large numbers of low-level endpoints that humans can piece together during development, but that agents struggle to reason about at runtime.

As the surface area grows, LLMs struggle with context and focus. Discovery becomes harder, because there is a mismatch between how APIs were designed and how agents consume tools.

APIs in many organizations are kind of too low level, right? They’re very fine-grained, and that oftentimes makes it hard, in particular if you look at stuff like MCP.
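
As a rough illustration of that granularity gap (the endpoints and tool names below are invented, not from the episode), consider how much surface area an agent has to reason about for a single business question, compared with one coarse, intent-level operation:

```typescript
// Hypothetical illustration of the granularity problem; all endpoint and tool
// names are invented.

// Fine-grained surface: the agent must discover, order, and chain all of these
// at runtime just to answer "what does this customer owe us?"
export const fineGrainedTools = [
  { name: "searchCustomers", path: "GET /v1/customers?email={email}" },
  { name: "getCustomer", path: "GET /v1/customers/{id}" },
  { name: "listAccounts", path: "GET /v1/customers/{id}/accounts" },
  { name: "listInvoices", path: "GET /v1/accounts/{accountId}/invoices" },
  { name: "getInvoice", path: "GET /v1/invoices/{invoiceId}" },
];

// Intent-level surface: one well-described capability that matches how the
// question is actually asked, leaving far less for the model to reason about.
export const outstandingBalanceTool = {
  name: "getOutstandingBalance",
  description:
    "Return the total outstanding balance and the open invoices for a customer, looked up by email.",
  input: { email: "string" },
  output: { totalCents: "number", openInvoices: "Invoice[]" },
};
```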


The fix is not rewriting 50,000 APIs

What happens when an organization already has tens of thousands of APIs that were never designed for the AI world?

Erik’s answer is pragmatic. You do not rewrite everything. You do not expose the mess directly to AI. You introduce a new layer.

By placing well-described, business-level workflows on top of fine-grained APIs, teams can improve AI readiness without destabilizing existing systems. The same layer also makes life easier for human developers.

Instead of even exposing those to AI, let’s put some workflows on top of those which are well-described, right, are more at the business level, and then those will be the things that AI actually gets exposed to.
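
A minimal sketch of what such a layer could look like, assuming a hypothetical ErpClient that wraps the existing fine-grained endpoints: the low-level APIs stay exactly as they are, and only the coarse, well-described workflow is exposed to agents.

```typescript
// Hypothetical sketch of a workflow layer: the existing fine-grained APIs stay
// untouched, and only this coarse, well-described operation is exposed to agents.

interface ErpClient {
  // Existing low-level endpoints, exactly as they are today (names invented).
  findCustomerByEmail(email: string): Promise<{ id: string }>;
  listOpenOrders(customerId: string): Promise<{ orderId: string }[]>;
  getOrderStatus(orderId: string): Promise<{ orderId: string; status: string; eta?: string }>;
}

// One business-level capability composed from several fine-grained calls.
// Agents see only this operation and its description, not the endpoints beneath it.
export async function getOpenOrderStatusForCustomer(
  erp: ErpClient,
  email: string
): Promise<{ orderId: string; status: string; eta?: string }[]> {
  const customer = await erp.findCustomerByEmail(email);
  const orders = await erp.listOpenOrders(customer.id);
  return Promise.all(orders.map((o) => erp.getOrderStatus(o.orderId)));
}
```

Because the layer is additive, nothing underneath has to be rewritten, and the same operation is just as convenient for human developers as it is for agents.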


The bigger picture

As the episode wraps up, Erik makes it clear that none of this points to a dramatic break with the past. APIs are not going away. Design still matters. Organizational alignment still shapes technical outcomes.

What is changing is how systems are consumed. Agents behave differently than human developers, and that shift exposes weaknesses that already existed in many API landscapes.

AI raises the stakes, but it does not change the fundamentals.


This episode was directed by Jacob Javor. Transcript lightly edited for clarity and flow.

Frequently Asked Questions

What does “AI-ready APIs” actually mean?

It does not mean adding an LLM or exposing OpenAPI specs to agents. In this conversation, AI readiness means APIs that are aligned to business intent rather than low-level technical actions, coarse enough that agents can reason about them, and discoverable at runtime without scanning thousands of endpoints. In short, APIs designed for how agents consume tools, not how humans wire them up at build time.

Why are fine-grained APIs a problem for agents?

Fine-grained APIs explode the surface area. Humans can search, read docs, and manually compose calls once during development. Agents need to discover and combine tools repeatedly at runtime. As the number of endpoints grows, LLMs struggle with context, focus, and tool selection. This is not just theoretical. It shows up immediately in large enterprises with tens of thousands of APIs.

What actually changes because of AI?

Consumption patterns. Humans consume APIs at build time. Agents consume them at runtime. That shift exposes weaknesses that already existed in many enterprise API landscapes. AI raises the stakes, but it does not change the fundamentals.


About the Hosts

Stefan Avram

About Stefan Avram

CCO & Co-Founder at WunderGraph

Stefan Avram is the CCO and one of the co-founders of WunderGraph, helping enterprise customers adopt and scale federated architecture. A former software engineer, he translates technical value into practical outcomes and shaped WunderGraph's early customer motion, guiding platform teams from onboarding to production in demanding environments. A former college soccer player, he brings a competitive, team-driven mindset to every stage of customer growth, with a focus on helping engineering-led organizations move fast without losing control.

Jens Neuse

About Jens Neuse

CEO & Co-Founder at WunderGraph

Jens Neuse is the CEO and one of the co-founders of WunderGraph, where he builds scalable API infrastructure with a focus on federation and AI-native workflows. Formerly an engineer at Tyk Technologies, he created graphql-go-tools, now widely used in the open source community. Jens designed the original WunderGraph SDK and led its evolution into Cosmo, an open-source federation platform adopted by global enterprises. He writes about systems design, organizational structure, and how Conway's Law shapes API architecture.