How Openlane Built Its First AI Feature


Sarah Funkhouser
Feb 17, 2026
  • engineering,
  • cybersecurity,
  • openlane,
  • AI for Compliance,
  • soc2

AI is everywhere right now.

Every product announcement, roadmap update, and pitch deck seems to come with an AI-powered badge attached to it, and if you don’t have one, the question is always the same: why not?

At Openlane, we don’t believe in simply checking boxes. It’s not that AI has no place here; it’s that in trust and compliance, the cost of unhelpful or misleading AI is far higher than the benefit of novelty. This isn’t a space where being mostly right is good enough.

We don’t want to ship something flashy, open-ended, or wasteful. We want to ship something genuinely useful, and something that helps people understand compliance.

So when we released our first AI feature, it wasn’t a chatbot bolted onto the side of the product. It was a set of focused, contextual tools designed to help our customers understand, navigate, and get started without pretending AI can do compliance for you.


The Problem With Generic AI in Compliance Software

Compliance work is nuanced. It’s full of legal language, framework-specific intent, and organizational context. Generic AI tools struggle here, not just because they can hallucinate, but because compliance is fundamentally about how people actually operate, not just what’s written down. Policies, procedures, and controls encode intent, but real risk often lives in the gaps between documented process and day-to-day reality.

In this space, hallucinations aren’t just annoying, they’re risky. Even perfectly accurate answers can still create false confidence if they ignore the human systems behind them. An AI model might explain a control or generate a risk matrix, but it can’t see that a key reviewer has been out sick for weeks, that approvals are being rushed at quarter-end, or that a process only works because one person knows how to make it run. Those signals rarely appear in a system of record, yet they materially affect security and compliance posture.

That’s why we avoided the trap of adding a big, friendly chat box that says, “Ask me anything!” and hoping for the best. Instead, we asked a more uncomfortable question:

Where does AI actually reduce friction without replacing human judgment?


“I'm usually pretty skeptical of AI features, but that one was good.”
— Director, Infrastructure Engineering @ Aiven


Fine-Tuned, Domain-Aware, and Purpose-Built

Our first AI features are powered by a fine-tuned model trained on deep, hard-earned compliance knowledge. We didn’t start with generic prompts. We started by feeding the model questions and answers drawn directly from the lived experience of the Openlane co-founders who have spent their careers working in and around compliance.

Rather than asking a general-purpose model to guess its way through SOC 2 controls or policy requirements, we’ve begun teaching the model how compliance actually works, the intent behind controls, the patterns auditors look for, and the kinds of evidence that are typically expected.

Just as importantly, these tools are deliberately not open-ended. They appear only where they make sense in the product, with clear context about what they’re helping with and what they’re not.


Practical Examples

Using AI for Compliance Policy Generation

Writing policies from scratch is tedious and intimidating, especially early on. But most policies aren’t creative writing exercises; they follow clear structures, formal tone, and well-understood sections that auditors expect to see.

Policy generation is a great example of where AI works when it’s properly constrained.

When you generate a policy in Openlane, the AI isn’t guessing blindly. It’s given the policy type, the expected structure auditors look for, and optional context about your systems, scope, or background. The result is a clean, well-formatted draft that’s mapped to relevant controls.
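As a rough illustration of what "properly constrained" can mean, here is a minimal sketch of how a policy-generation prompt might be assembled from a policy type, an expected section structure, and optional organizational context. This is not Openlane's actual implementation; every name here (`build_policy_prompt`, `REQUIRED_SECTIONS`) is hypothetical.

```python
# Hypothetical sketch: constraining policy generation with type, structure,
# and optional context, rather than an open-ended "write me a policy" prompt.
# None of these names reflect Openlane's real API.

REQUIRED_SECTIONS = [
    "Purpose",
    "Scope",
    "Roles and Responsibilities",
    "Policy Statements",
    "Exceptions",
    "Review and Revision",
]


def build_policy_prompt(policy_type: str, org_context: str = "") -> str:
    """Assemble a constrained prompt for drafting a policy of a given type."""
    sections = "\n".join(f"- {s}" for s in REQUIRED_SECTIONS)
    prompt = (
        f"Draft a {policy_type} policy as a structured document.\n"
        f"Use exactly these sections, in this order:\n{sections}\n"
        "Use formal policy language. Mark anything you are unsure about "
        "with [REVIEW] so a human can resolve it.\n"
    )
    if org_context:
        # Optional context about systems, scope, or background.
        prompt += f"Organization context:\n{org_context}\n"
    prompt += "This is a draft for human review, not a finished artifact."
    return prompt
```

The point of a sketch like this is that the model never starts from a blank slate: the structure auditors expect is fixed up front, and the model only fills in the content within it.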

It’s not a finished artifact you blindly accept. It’s created explicitly as a draft. You still review it. You still tailor it. You still own it. But you’re no longer staring at a blank page, wondering how to start.


Using AI to Explain SOC 2 Controls

Controls are often written in dense, legal, or framework-heavy language. Even experienced teams regularly find themselves asking the same questions:

  • What does this actually mean?
  • What are auditors usually looking for here?
  • What kind of evidence is commonly requested?

Instead of forcing users to leave the product and search the internet for fragmented answers, we added “Ask AI” directly where controls live.

Because the AI already knows the specific control, the framework context, and your organizational environment, you can ask practical questions like:

  • “ELI5: what does this control mean?”
  • “What are common evidence requests for this control?”
  • “What does ‘implemented and operating effectively’ look like in practice?”

This isn’t AI doing the work for you. It’s AI helping you understand the work faster, with better context, and without pretending there’s a single right answer. And if it doesn’t know, it says that. 
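To make the grounding idea concrete, here is a hypothetical sketch of how an "Ask AI" question could be anchored to the specific control and framework the user is already looking at, with an explicit instruction to admit uncertainty. The `Control` shape and `build_ask_ai_prompt` function are assumptions for illustration, not Openlane's real data model.

```python
# Hypothetical sketch: grounding a user's question in the control they are
# viewing, so the model answers in context instead of guessing generically.
from dataclasses import dataclass


@dataclass
class Control:
    ref_code: str     # e.g. a framework reference like "CC6.1"
    framework: str    # e.g. "SOC 2"
    description: str  # the control's own text


def build_ask_ai_prompt(control: Control, question: str) -> str:
    """Wrap a user question with the control's context and an honesty rule."""
    return (
        f"Framework: {control.framework}\n"
        f"Control {control.ref_code}: {control.description}\n\n"
        f"Question: {question}\n"
        "Answer using only the control context above. If the context does "
        "not support an answer, say you don't know."
    )
```

Because the control reference, framework, and description travel with every question, the model is answering about *this* control rather than compliance in the abstract, and the final instruction encodes the "if it doesn't know, it says that" behavior described above.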


Helping People Learn, Not Just Check Boxes

At a deeper level, this is about what we believe compliance software should do.

Many tools optimize for checking boxes as quickly as possible, without helping teams actually understand why the boxes exist in the first place. We think that’s backwards. Real trust comes from knowledge, not from artifacts alone.

Our approach to AI reflects that belief. We’re using it to spread institutional knowledge, lower the barrier to understanding complex requirements, and help teams build confidence in what they’re doing, not to automate judgment or eliminate human responsibility.


What This Means Going Forward

Releasing our first AI feature wasn’t about keeping up with a trend. It was about setting a standard for how we’ll approach AI at Openlane.

We believe AI has a real role to play in trust and compliance, but only when it’s built with care, restraint, and deep respect for the complexity of the work. Used thoughtfully, it can help teams learn faster, make better decisions, and spend more time on what actually matters.

This is just the beginning. It reflects our approach to product development, our commitment to trust, and our understanding of the responsibility that comes with building tools in this space.

