Episode 141 · Sep 23, 2025 · Talk

Sarah Novotny on Open Source, AI Governance, and Building Trust in Tech

Featuring Sarah Novotny, Open Source Champion
Listen on Apple Podcasts · Spotify · YouTube

In this episode of Semaphore Uncut, we chat with Sarah Novotny—open source leader, Kubernetes governance pioneer, and advisor at the intersection of infrastructure and AI. Sarah shares her journey from the early days of the internet to shaping open source communities at scale, and now her work with the Coalition for Secure AI (CoSAI) on building governance and security frameworks for the AI era.

From Physics to the Internet Revolution

Sarah didn’t start in computer science. She left graduate school in physics to join the internet boom of the late ’90s, landing at Amazon when it was just 400 people.

From there, she founded a company offering remote databases-as-a-service—before cloud was even a term—and contributed extensively to the open source ecosystem through O’Reilly Media, NGINX, and Chef.

Perhaps her most defining chapter: at Google, she helped transition Kubernetes from company-led to community-led, writing the governance structures that allowed it to grow as a true cross-industry open source project.

“I helped build leadership in the project, not just in the companies,” Sarah recalls.

The Shift Toward AI and Critical Infrastructure

Today, Sarah works with GenLab Studios, a venture studio launching companies in critical infrastructure, while serving on the board of the Coalition for Secure AI (CoSAI) under the OASIS standards body.

Her focus: defining the technical and social norms for AI adoption.

“We don’t have a good way right now to think about risk, reliability, auditability, and telemetry in AI systems,” she explains. “That’s exactly what we’re working to change.”

Why AI Feels Different

Sarah compares today’s AI wave to previous tech revolutions:

  • The Internet: information shifting from libraries to online search.
  • Mobile: computing moving from desktops to our pockets.
  • AI: a new paradigm for interacting with computers.

Like those earlier shifts, AI is rewriting norms—but at a breakneck pace. Companies are adopting tools before governance frameworks exist, leaving legal and ethical “cleanup” to follow.

“It feels like we’re staring into a magic eight ball,” Sarah says. “Unless you’re training models yourself, the process is opaque. What are we trusting—the model, the data, the team behind it?”

Open Source vs. Commercial AI

Sarah is skeptical of claims that today’s AI is “open source.” While some research-driven models are close—offering open data, weights, and tooling—the hardware moat limits true accessibility.

“What really opened up open source was when computers ended up on everyone’s desk,” she notes. “Until GPUs are widely available, we’re closer to the era of computing lounges than true democratization.”

She also warns against companies applying restrictive licenses while branding their models as open source: “That’s in direct opposition to the precepts of open source.”

Security, Trust, and Traceability

One theme Sarah returns to is trust. In open source, trust comes from transparent processes: pull requests, reviews, and multiple sign-offs. In AI, that process is missing.

“We don’t know the provenance of training data, whether copyright applies, or how to trace a model’s decision,” she says. “Some security professionals joke that we should treat every AI model as a hostile actor in the network.”

The Coalition for Secure AI: Building Guardrails

Through CoSAI, Sarah and her colleagues are creating the frameworks and best practices needed to secure AI adoption. Current workstreams include:

  • Defenders: preparing security teams with guidelines for AI systems.
  • Supply Chain: mapping implications of AI across software delivery pipelines.
  • Security & Governance: developing organizational practices to manage risk.

A potential fourth workstream is in development, alongside outreach to public sector bodies to ensure shared standards.

“It’s a long slog,” Sarah admits. “We’re looking at decades, not months. But we can move faster this time than we did with the internet or mobile.”

Looking Ahead

AI raises deep questions about privacy, ethics, and ownership—issues Sarah acknowledges but keeps separate from CoSAI's technical mandate. Still, she encourages technologists to engage intentionally with these debates rather than leaving them to lawyers or corporations.

“Let’s do this with intention and for the long term,” she says, “instead of rushing for profits and cleaning up the toxic data dump afterward.”

Follow Sarah Novotny

🔗 LinkedIn
🌐 Coalition for Secure AI
🌐 GenLab Studios

Meet the host Darko Fabijan

Darko enjoys breaking new ground and exploring tools and ideas that enhance developers’ lives. As the CTO of Semaphore, an open-source CI/CD platform, he embraces new challenges and is eager to tackle them alongside his team and the broader developer community. In his spare time, he enjoys cooking, hiking, and indoor gardening.
