
AI Risk Isn’t Software Risk: Manage the Underlying Stuff Like a Market

  • Writer: vinit sahni
  • Sep 27
  • 2 min read

Here’s the deal: AI isn’t held back because models don’t work. It’s held back because risk teams and buyers get stuck waiting weeks—or even months—to say “yes, safe to use.” This drag is what we call trust latency, and it’s a real dealbreaker.

Why does this happen? Because most companies try to govern AI like regular software. You know, a massive questionnaire here, a PDF attestation there. Sounds familiar, right? But AI agents don’t behave like traditional apps—they behave more like something from Wall Street: complex, fast-moving, and packed with hidden risks lurking in their guts.

What makes AI risk so different?

Think of an AI system as a financial derivative, not a simple stock. It's made up of multiple parts working together:

  • Models that get updated weekly

  • Tools and plugins that talk to each other

  • Retrieval systems pulling data from changing sources

  • Sub-processors (your supplier's suppliers)

And here’s the kicker: risk lives not just in the app you see, but deep inside these changing components. A small tweak—say, swapping a model version or adding a new plugin with more permissions—can instantly change the risk profile, often in unpredictable ways. Plus, failures are correlated, meaning one hiccup (like a clever prompt hack) can trigger multiple problems at once.

Oh, and these systems keep changing—sometimes daily—making it impossible for annual checklists and static PDFs to keep up.

So, how should we manage AI risk?

Borrow a page from finance: manage AI like derivatives, not apps. That means:

  • Look through the stack. Map out all models, tools, data flows, and sub-processors to see where risks really hide.

  • Watch the sensitivities. Focus on what moves risk the most: model swaps, new tool permissions, data changes, supply-chain shifts.

  • Context matters. The same AI system can be low-risk for internal use but very high-risk if it directly affects customers.

  • Pause when needed. If something flips a critical assumption, hit pause—sandbox it, restrict it, fix it—before resuming.

  • Keep evidence fresh. Never trust a dusty PDF from last year; governance needs to move as fast as the system does.

We call this high-frequency risk management for AI: governance at the speed of change.
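What does that look like in practice? Here's a minimal sketch, in Python, of a look-through manifest with a live pause trigger. To be clear, everything in it (Component, StackManifest, the sensitivity tiers) is invented for illustration; it isn't a real library, just the shape of the idea.

# Illustrative only: a toy "look-through" manifest with change triggers.
# All names here (Component, StackManifest, Sensitivity) are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    LOW = 1        # e.g. wording tweak in a prompt template
    HIGH = 2       # e.g. new tool permission, data-source swap
    CRITICAL = 3   # e.g. model version swap in a customer-facing flow


@dataclass
class Component:
    name: str          # "model", "retrieval-index", "plugin:crm", ...
    version: str
    sensitivity: Sensitivity


class StackManifest:
    """Maps every model, tool, data flow, and sub-processor in one place."""

    def __init__(self, components: list[Component]):
        self.components = {c.name: c for c in components}
        self.paused = False

    def record_change(self, name: str, new_version: str) -> None:
        """Record a component change; high-sensitivity changes flip the pause flag."""
        component = self.components[name]
        component.version = new_version
        if component.sensitivity in (Sensitivity.HIGH, Sensitivity.CRITICAL):
            self.paused = True   # caller should sandbox or restrict until re-verified


# Usage: swapping the model version flips the pause flag immediately,
# instead of waiting for next year's questionnaire.
stack = StackManifest([
    Component("model", "gpt-x-2024-09", Sensitivity.CRITICAL),
    Component("plugin:crm", "1.4.2", Sensitivity.HIGH),
    Component("prompt-template", "v12", Sensitivity.LOW),
])
stack.record_change("model", "gpt-x-2024-10")
print(stack.paused)  # True: hold, sandbox, verify, then resume

The point isn't the code. It's that a model swap or a new permission gets treated as a market-moving event the moment it happens, not at next year's review.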

What good looks like in action

Great teams build:

  • Clear data and privacy boundaries (no silent secrets)

  • Policies around model updates with rollbacks and canaries

  • Defenses tested against prompt injection and data leaks

  • Least-privilege permissions that you can actually audit

  • Transparency about sub-processors and real-time change alerts

  • Real-time telemetry and incident playbooks tied to user impact

And crucially: all these controls are live, with triggers that flag when something changes and automatically pause risky activity until the situation is verified safe again.
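To ground the "rollbacks and canaries" item above, here's a rough sketch of a canary gate tied to a user-impact metric. Again, the model name, metric, and thresholds are made up for the example, and the simulated requests only stand in for real traffic so the snippet runs on its own.

# Illustrative only: a toy canary gate for model updates, tied to user impact.
# The metric, thresholds, and model names are invented for this example.
import random


def run_canary(candidate_model: str, baseline_error_rate: float,
               sample_size: int = 500, max_regression: float = 0.02) -> bool:
    """Route a small traffic slice to the candidate and compare error rates."""
    errors = 0
    for _ in range(sample_size):
        # Stand-in for a real request to the candidate model; here we
        # just simulate an outcome so the sketch runs on its own.
        errors += 1 if random.random() < baseline_error_rate + 0.01 else 0

    candidate_error_rate = errors / sample_size
    regression = candidate_error_rate - baseline_error_rate
    if regression > max_regression:
        print(f"Rollback {candidate_model}: error rate regressed by {regression:.1%}")
        return False   # keep serving the current model version
    print(f"Promote {candidate_model}: regression {regression:.1%} within tolerance")
    return True        # promote the candidate and keep the evidence of this run


run_canary("model-v2024-10", baseline_error_rate=0.05)

Every run like this produces fresh evidence of how the system behaves today, which is exactly what a year-old PDF can't do.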

Why this matters

Without such real-time, look-through control, risk reviews drag out, deals stall, and innovation slows. Suppliers get frustrated. Buyers get nervous. Everyone loses. But where trust moves at the speed of AI itself? That’s when things really start to fly.

