Episode 131 · Apr 22, 2025 · Talk

Patrick Debois on AI & DevOps: What’s Next?

Featuring Patrick Debois, Generative AI and DevOps specialist

While AI-generated code and copilots have become commonplace, their role in DevOps and infrastructure remains less defined. The tooling is improving rapidly, but the risks—from non-determinism to lack of visibility—make AI adoption in production a complex, evolving journey.

In this episode of Semaphore Uncut, Patrick Debois, a generative AI and DevOps specialist, joins Darko to share his perspective on how AI intersects with DevOps, DevSecOps, and infrastructure as code. Patrick covers everything from generative tooling to failure handling, and what makes this era of automation both exciting and risky.

A Career Built on Curiosity

Patrick’s journey through the software world has always been guided by a drive to learn. He’s worn almost every hat imaginable—tester, sysadmin, mobile developer, VP of Engineering—and that broad experience led him to the insights that sparked the DevOps movement.

What set Patrick apart early in his career was his desire to work across domains, building empathy between teams and solving the friction points he observed first-hand. That perspective would later shape the DevOps Handbook, which he co-authored alongside Gene Kim, Jez Humble, and John Willis.

Although Patrick says most of the writing credit goes to his co-authors, many of the core ideas came from his real-world experiences and the early DevOpsDays community. The second edition, with contributions from Dr. Nicole Forsgren, introduced more data and up-to-date enterprise stories, but the foundational patterns remain relevant—even if the landscape has continued evolving.

From Virtual Production to AI in DevOps

Patrick’s entry into the AI world didn’t begin with infrastructure. During the pandemic, he started exploring virtual production—automating video and media workflows using game engines like Unreal Engine. That interest in automation and digital characters led him to experiment with generative AI, initially for rendering and voice generation.

Then ChatGPT landed, and suddenly AI had a meaningful place in software development again. Patrick was hooked. He began integrating LLMs into developer workflows, helping engineers at his company use AI tools effectively, and later started advising other companies on building more robust AI-powered systems.

What keeps him engaged? “Every day, I’m learning something new—and it hasn’t stopped for a year and a half.”

Infrastructure as Code Meets Generative AI

While AI-generated code has caught on quickly in application development, Patrick is most intrigued by its impact on infrastructure as code and DevOps workflows.

Because IaC languages like Terraform are more structured and domain-specific than general-purpose programming languages, they present an interesting target for generation. Tools can now create entire Terraform environments, visualize them, and even highlight only the parts relevant to a given change—helping reduce cognitive load.

But the bigger question is validation.

“Anyone can generate Terraform,” Patrick says, “but how do you validate it? How do you ensure it aligns with your organization’s security and compliance rules?” Patrick points to rule-based systems and emerging standards like .aiconfig files as a way to encode domain-specific requirements into the generation and review process.
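To make that concrete, here is a minimal sketch of the kind of rule-based check Patrick is describing, assuming the generated Terraform has already been rendered to plan JSON with `terraform show -json`. The specific rule (no publicly readable S3 buckets) and the file names are illustrative, not something discussed in the episode.

```python
import json
import sys

# Illustrative policy: generated Terraform must not create publicly readable buckets.
# Plan JSON comes from: terraform plan -out=plan.tfplan && terraform show -json plan.tfplan > plan.json
FORBIDDEN_ACLS = {"public-read", "public-read-write"}

def violations(plan: dict):
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        if rc.get("type") == "aws_s3_bucket" and after.get("acl") in FORBIDDEN_ACLS:
            yield f"{rc['address']}: bucket ACL must not be public"

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        plan = json.load(f)
    problems = list(violations(plan))
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the pipeline step
```

Wired into CI, a check like this turns "does the generated code match our compliance rules?" from a manual review question into an automated gate.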

Still, challenges remain. Much of the validation burden is still on the developer, and the feedback loops between code and production are often disconnected. Patrick sees a gap between developer tooling and production observability that AI could help bridge—but hasn’t yet.

From Testing to Chaos Engineering for AI

Patrick draws a compelling analogy between DevOps maturity and where AI is today.

In DevOps, teams moved from simple test automation to full CI/CD pipelines, observability, resilience engineering, and chaos testing. The AI lifecycle may follow a similar arc—from basic completions to robust feedback loops that simulate failure and enable automated recovery.

“Only about 20% of the work is automation,” Patrick says. “The other 80% is preparing for failure.”

He also emphasizes the need to teach AI systems to test themselves—or at least to validate outputs using other models, digital twins, or sandboxed environments. In incident response, for instance, Patrick envisions systems that don’t just suggest solutions, but actually try multiple fixes in parallel and report back.
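As a thought sketch of that pattern, and not anything Patrick ships: a generator proposes several candidate fixes, each is exercised in an isolated sandbox, and only the ones that pass their checks are reported back. The helper functions below are hypothetical placeholders for an LLM call and a sandboxed environment.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholders: a real system would call an LLM here and apply each
# candidate fix in a sandbox (container, ephemeral environment, digital twin).
def propose_fixes(incident: str, n: int = 3) -> list[str]:
    return [f"candidate fix #{i} for: {incident}" for i in range(1, n + 1)]

def run_in_sandbox(fix: str) -> bool:
    # Placeholder check; in practice: apply the fix, run smoke tests, inspect metrics.
    return "restart" not in fix

def triage(incident: str) -> list[str]:
    candidates = propose_fixes(incident)
    with ThreadPoolExecutor() as pool:
        passed = list(pool.map(run_in_sandbox, candidates))
    # Only candidates that survived their sandbox run reach the on-call engineer.
    return [fix for fix, ok in zip(candidates, passed) if ok]

if __name__ == "__main__":
    for fix in triage("p95 latency spike on checkout service"):
        print("viable:", fix)
```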

The Human-AI Feedback Loop

One area Patrick finds fascinating is how AI affects engineering culture and team dynamics. As AI takes on more code generation, the role of engineers shifts toward reviewing, validating, and integrating. Juniors may be able to ship code faster, but seniors are more essential than ever to evaluate quality and ensure safety.

What’s missing, Patrick says, is tooling that supports the review process itself. He points to tools like Cursor, which offer multiple ways to review code changes—including text explanations and focused visual diffs—as the future of developer experience.

“If you always see the full picture, you lose focus. AI can help us zoom in on what matters.”

The Future of Observability & AI-Native Tooling

While DevOps has long relied on tools like Grafana and Prometheus, there’s still no AI-native operator capable of monitoring logs and dashboards in real time and proactively suggesting or testing fixes. Patrick is hopeful—but realistic.

“I can’t wait for those tools to exist,” he says. “But we’re not there yet.”

He expects progress to come from more specialized models trained on domain-specific data (like Grafana dashboards or logs), rather than general-purpose screen readers. Generative BI tools and prompt tracing are promising steps, but the field is still emerging—and often fragmented.

Final Thoughts: Why It’s Still Day One

Patrick believes we’re only at the beginning of understanding how generative AI will impact DevOps, infrastructure, and developer workflows. While the technology is advancing quickly, organizational change takes time—and toolchains still feel disjointed.

His advice? Get hands-on.

“I call it method acting. I have to use the tools myself. I need to feel the pain points firsthand.”

That hands-on mindset has guided Patrick through every major shift in software—and it’s what keeps him excited about what comes next.

Follow Patrick Debois

📺 YouTube: Jedi Forever
💼 LinkedIn: Patrick Debois
🐦 Twitter (X): @patrickdebois


Meet the host: Darko Fabijan

Darko enjoys breaking new ground and exploring tools and ideas that enhance developers’ lives. As the CTO of Semaphore, an open-source CI/CD platform, he embraces new challenges and is eager to tackle them alongside his team and the broader developer community. In his spare time, he enjoys cooking, hiking, and indoor gardening.
