Your Risk Management Process is a Bicycle Helmet at a Formula 1 Race

You’re shipping faster than ever.

AI coding assistants are cranking out pull requests. Your product team is testing three hypotheses a week instead of three a quarter. Features that used to take months now take days. Everyone’s celebrating velocity metrics and patting themselves on the back.

Meanwhile, your risk management process is still the same quarterly review you designed in 2019. The one where someone fills out a spreadsheet, a committee meets for an hour, and everyone nods sagely before going back to their actual work.

That’s not a mismatch. That’s a disaster waiting to happen—potentially on a societal level.

The Multiplication Problem: More Output, Same Guardrails

Here’s the claim you need to sit with: AI doesn’t just accelerate your development—it multiplies your risk surface at every single layer of your organisation.

Think about what’s actually happening. At the code level, you’re generating more lines, more complexity, more potential failure points.

At the product level, you’re shipping more features, touching more users, creating more competitive pressure.

At the business level, you’re generating more of everything: more communication, more decisions, more surface area for things to go sideways.

And societally? More powerful tools mean more potential for disruption, displacement, and unintended consequences.

This isn’t theoretical. In 2024, 78% of all AI users were bringing their own tools into the workplace, sharing data, company context, code, and potentially customer data with third-party vendors. Just this week, OpenAI disclosed a security breach at one of its vendors. Follow the dependency trail and you can quickly model a scenario in which a well-meaning employee causes a company-ending incident.

These employees won’t be acting maliciously; just fast.

Your team is probably doing something similar right now. Not because they’re reckless, but because your guardrails were built for a different speed.

What would happen if your quarterly risk review had to assess everything your team shipped this week?

The Regulation Fantasy: We’ve Seen This Movie Before

Here’s the uncomfortable truth: regulations aren’t going to save you, and waiting for them is a form of professional negligence.

Yes, we’ve managed technology risk through regulation before. Financial services got Basel accords. Aviation got the FAA. Pharmaceuticals got the FDA. Social media eventually got a firm slap on the wrist following the Cambridge Analytica incident. And these frameworks genuinely work—they’ve prevented countless disasters and created accountability structures that protect millions of people.

But let’s be honest about what regulation actually catches.

We had extensive financial regulation in 2007. The 2008 crash still happened because risk had migrated to places the rules weren’t looking—shadow banking, complex derivatives, interconnected failures that spread faster than any committee could track.

We’ve had environmental regulations for decades. Climate change is still accelerating because the rules focus on individual polluters while systemic emissions accumulate invisibly.

We had social media for fifteen years before anyone seriously started thinking about platform liability for algorithmic amplification of harmful content. By then, entire information ecosystems had been reshaped.

The pattern is consistent: regulation follows the last crisis, not the next one. It’s like driving by looking in the rearview mirror—useful for understanding where you’ve been, but dangerous as your primary navigation strategy.

I wonder how comfortable we all should be playing this same game at “AI scale”.

The Real Risk: Pretending This is Someone Else’s Problem

The most dangerous thing you can do right now is assume your current processes are “good enough until someone tells us otherwise.” Because that person will likely be an insolvency administrator.

I’ve sat in rooms where smart, well-intentioned engineering leaders convinced themselves that risk management was a compliance function. That their job was to ship, and someone else’s job was to worry about consequences. That “just doing the engineering” was a defensible professional position.

It’s not. It never was, but it’s especially not now.

Here’s what I’ve learned from getting this wrong myself: the moment you outsource risk thinking to a quarterly review, a compliance team, or a pre-build process, you’ve already lost. Not because those functions aren’t valuable—they are—but because risk moves at the speed of your decisions, not the speed of your bureaucracy.

The fix isn’t adding more review meetings. It’s embedding risk thinking into daily processes, making “what could go wrong here?” as routine a question as “did the tests pass?” We need to treat every AI-assisted output as requiring the same scrutiny you’d give to code from a talented but brand-new junior developer. But is that even possible when Claude and Codex can churn out millions of lines of code a week?

Is your team asking that question? Or are they just celebrating the velocity?

If so, how long are they going to be celebrating?

Playing a Game of Risk

Your risk process needs to match the speed and scale of your output. That’s not optional, and it’s not someone else’s job.

Three things you can do this week:

Daily risk pulse, not quarterly reviews. Add one question to every standup: “What’s the biggest thing that could go wrong with what we shipped yesterday?” Make it as normal as checking CI/CD status. If your team can’t answer it, that tells you something important.

Audit your AI-assisted outputs like you’d audit a new hire. Every piece of generated code, every AI-drafted document, every automated decision—treat it as requiring human verification until your team has built genuine intuition about where these tools fail.
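The “audit like a new hire” rule can be made concrete as a merge gate. A minimal sketch, assuming your tooling can flag AI-assisted changes (via a PR label or commit trailer, say); the `Change` type and `needs_human_verification` helper here are hypothetical names, not any real platform’s API:

```python
from dataclasses import dataclass


@dataclass
class Change:
    """A hypothetical record of one shipped change."""
    author: str
    ai_assisted: bool            # e.g. set from a PR label or commit trailer
    human_verified: bool = False  # flipped once a human has signed off


def needs_human_verification(change: Change) -> bool:
    """Treat AI-assisted output like a junior dev's first PR:
    it stays blocked until a human reviewer explicitly signs off."""
    return change.ai_assisted and not change.human_verified
```

The policy is deliberately asymmetric: human-written changes flow through your existing process, while anything AI-assisted is held until someone accountable has looked at it.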

Map your risk multiplication. Take one hour this week to draw out how AI acceleration affects each layer: code, product, business, social. Where has your output increased 5x? Where have your guardrails increased 0x? That gap is where your next crisis lives.
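That one-hour mapping exercise is just arithmetic. A back-of-the-envelope sketch, with purely illustrative multipliers (the function name and the numbers are mine, not a real methodology):

```python
def risk_gaps(layers: dict[str, tuple[float, float]]) -> dict[str, float]:
    """layers maps a layer name to (output_multiplier, guardrail_multiplier).
    Returns the gap per layer; the biggest gap is where to look first."""
    return {name: output - guardrails for name, (output, guardrails) in layers.items()}


# Illustrative numbers only: how much output scaled vs how much guardrails scaled.
example = {
    "code":     (5.0, 1.0),  # 5x more code, same review process
    "product":  (3.0, 1.0),  # 3x more features, same QA
    "business": (2.0, 1.0),
    "social":   (2.0, 0.0),  # no guardrails at this layer at all
}

gaps = risk_gaps(example)
worst = max(gaps, key=gaps.get)  # the layer with the widest output/guardrail gap
```

The point isn’t precision; it’s that writing the multipliers down forces the “5x output, 0x guardrails” gap out of people’s heads and onto the table.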

The tools aren’t going to slow down. The regulations aren’t coming fast enough. The only variable you control is whether you build risk thinking into your culture now, or explain to a board later why you didn’t see it coming.

Your call.
