
Shipping AI in regulated environments

Most organizations treat compliance and speed as opposing forces. In regulated environments, this creates a deadlock: security teams slow down AI adoption with review processes designed for traditional software, while engineering teams either route around controls or stop shipping entirely.

Neither outcome is acceptable.

The real constraint

The challenge is not that regulations prohibit AI. Most frameworks are technology-agnostic. The challenge is that AI systems have properties that existing control frameworks were not designed for:

  • Non-determinism: The same input can produce different outputs
  • Opacity: Model reasoning is not always inspectable
  • Data dependency: Model behavior changes as training data changes
  • Emergent capability: Models can develop behaviors not explicitly programmed

These properties do not make AI ungovernable. They require updated control implementations, not new regulations.

Practical approach

What works in practice:

  1. Map AI-specific risks to existing controls rather than inventing new frameworks
  2. Build observability first so you can demonstrate what the system does
  3. Use deterministic guardrails around non-deterministic models to bound behavior
  4. Treat model outputs as untrusted input and validate before acting
  5. Version everything: prompts, models, training data, evaluation results
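Points 3 and 4 can be sketched together: a deterministic validation layer that treats the model's raw text as untrusted input and bounds what actions it can trigger. The schema, action allow-list, and confidence bounds below are illustrative assumptions, not part of the original post.

```python
import json

# Hypothetical allow-list: the only actions the surrounding system will
# ever execute, regardless of what the model says.
ALLOWED_ACTIONS = {"approve", "reject", "escalate"}

def validate_model_output(raw: str) -> dict:
    """Parse and bound an untrusted model response; raise on anything unexpected."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc

    action = parsed.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} is not in the allow-list")

    confidence = parsed.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be a number in [0, 1]")

    # Return only the validated fields, dropping anything else the model emitted.
    return {"action": action, "confidence": float(confidence)}
```

The key design choice is that the guardrail is deterministic even though the model is not: the same output always validates (or fails) the same way, which is what an auditor can actually reason about.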

The speed unlock

Counterintuitively, investing in AI governance infrastructure early makes teams faster, not slower. When you have automated evaluation pipelines, pre-approved deployment patterns, and clear escalation paths, shipping new AI features becomes routine rather than exceptional.
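An automated evaluation pipeline can be as simple as a gate in CI: run the model against a golden set and block deployment below a threshold. This is a minimal sketch under assumed names; `model`, the golden cases, and the threshold are all hypothetical.

```python
def run_eval_gate(model, cases, threshold=0.95):
    """Score a model against golden (prompt, expected) pairs.

    Returns (passed, score): passed is True only if the fraction of
    exact matches meets the threshold, making deployment a routine
    pass/fail decision rather than a manual review.
    """
    hits = sum(1 for prompt, expected in cases if model(prompt) == expected)
    score = hits / len(cases)
    return score >= threshold, score
```

In practice the comparison would be richer than exact match (rubric scoring, semantic similarity), but the structure is the same: a versioned golden set, a recorded score, and a deterministic decision.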

The goal is to make compliance a paved road, not a gate.