AI Finds Zero-Days Autonomously. Who Is Accountable When AI Ships One into Production?

Gain proven strategies and best practices for platform owners, architects, developers, CIOs, release managers, and QA leaders.

Detection Is a Commodity. Accountability Is Not.

Anthropic’s Mythos Preview can autonomously identify and exploit zero-day vulnerabilities in every major operating system and web browser. Veracode’s 2025 GenAI Code Security Report confirms that 45% of AI-generated code introduces security flaws. ServiceNow expects usage of Build Agent, which runs on Claude as its default model, to quadruple within twelve months. Fewer than half of developers consistently review AI-generated code before committing it.

The code ships. The questions come later.

AI generates code at production scale. AI also finds the flaws in that code. Detection has become a commodity. The question CISOs now face is different: who is accountable when an AI agent ships a policy violation into a production ServiceNow or Salesforce instance? Not “did we catch it?” but “who approved the decision to accept the risk, against which rule, and when does that approval expire?”

That question demands a system of record. Quality Clouds provides one — today — as the AI Code Governance layer for ServiceNow and Salesforce.

A Rules File Is Not an Accountability Layer

Context files — CLAUDE.md, Cursor rules, .cursorrules — can instruct an LLM to follow coding standards. They serve a purpose. They are not governance.

A rules file does not enforce compliance. It requests it. A rules file does not prove that enforcement occurred. It does not propagate across multiple AI tools operating in the same environment. When a second AI agent rewrites or extends the same artefact, the rules file from the first tool carries no authority.

Accountability requires four things that no rules file provides:

  1. An external record of every policy violation, independent of the tool that generated the code.

  2. Segregation of duties between the person who requests an exemption and the person who approves it.

  3. A lifecycle that tracks each finding from detection through resolution.

  4. An export format that audit and compliance teams can ingest without translation.

Quality Clouds delivers all four through Debt Manager, running today on ServiceNow and Salesforce.

The Ledger: Every Violation, Every Decision, Every Artefact

Quality Clouds Debt Manager maintains a complete record of every policy violation across a ServiceNow or Salesforce instance. Each finding carries severity, impact area, configuration element type and name, team ownership, tags, and time-to-fix. Each links directly to the artefact in question.

This is not a dashboard with aggregated scores. It is a line-item ledger. An auditor can trace any finding from the rule it violated to the specific configuration element that triggered it, to the team that owns it, to the time elapsed since detection.

Whether a human or an AI wrote the code, the governance record is identical. The ledger does not depend on knowing the origin of the code. It records what the organisation decided to do about it.
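As a concrete illustration, a single line item in such a ledger could be modelled as below. This is a hedged sketch only: the field names mirror the attributes described above (severity, impact area, element type and name, team, tags, time-to-fix), not the actual Quality Clouds data model, and `Finding`, `age_days`, and the sample values are invented for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: field names follow the article's description,
# not the real Quality Clouds schema.
@dataclass
class Finding:
    rule_id: str        # the policy rule that was violated
    severity: str       # e.g. "High"
    impact_area: str    # e.g. "Security"
    element_type: str   # configuration element type, e.g. "Business Rule"
    element_name: str   # configuration element name
    team: str           # owning team
    tags: list = field(default_factory=list)
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def age_days(self) -> float:
        """Time elapsed since detection, supporting time-to-fix tracking."""
        return (datetime.now(timezone.utc) - self.detected_at).total_seconds() / 86400

# A hypothetical line item an auditor could trace end to end.
finding = Finding("SEC-001", "High", "Security", "Business Rule",
                  "br_validate_input", "Platform Team", tags=["ai-generated"])
```

The point of the sketch is traceability: every attribute the auditor needs lives on the line item itself, not in an aggregated score.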

Segregation of Duties: Two People, Two Reasons, Two Timestamps

Accepting a known violation is a governance decision. Debt Manager enforces that decision through a segregation-of-duties workflow.

One person requests a write-off. They select a reason code and provide a free-text justification. A different person approves or rejects the request with their own description. Debt Manager captures both identities and both timestamps separately. Every exemption carries an expiration date — including an explicit “Never” option when permanent acceptance is the deliberate choice.

In a world where AI agents generate and modify code faster than any human team reviews line-by-line, the governance decision becomes the accountable unit. Not the individual line of code, but the organisational response to each finding. This is the human-in-the-loop architecture that regulators expect.

DORA Article 9 requires financial entities to maintain documented risk acceptance processes. The EU AI Act mandates human oversight for high-risk AI systems. Debt Manager’s write-off workflow provides the evidence trail for both — not as an add-on, but as the core mechanism.
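The workflow described above can be sketched in a few lines. This is a hypothetical illustration of the segregation-of-duties check, not the actual Debt Manager implementation; the class name, method, and reason codes are assumptions, and `expires=None` stands in for the explicit “Never” option.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of the write-off workflow: two identities,
# two justifications, two timestamps, one expiration.
@dataclass
class WriteOff:
    requested_by: str
    reason_code: str
    justification: str
    requested_at: datetime
    expires: Optional[datetime]          # None models the explicit "Never" option
    approved_by: Optional[str] = None
    approval_note: Optional[str] = None
    approved_at: Optional[datetime] = None

    def approve(self, approver: str, note: str) -> None:
        # Segregation of duties: the requester may not approve their own request.
        if approver == self.requested_by:
            raise PermissionError("requester and approver must be different people")
        self.approved_by = approver
        self.approval_note = note
        self.approved_at = datetime.now(timezone.utc)

request = WriteOff("alice", "ACCEPTED_RISK", "Legacy integration, fix scheduled Q3",
                   datetime.now(timezone.utc), expires=None)
request.approve("bob", "Accepted until the migration completes")
```

The design choice that matters is that both identities and both timestamps are captured as separate fields, so the exemption record itself proves that two distinct people acted.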

Four States, Full Lifecycle

Every finding in Debt Manager moves through a four-state lifecycle: Open → Pending Write Off → Written Off → Closed.

Open means the violation exists and no one has made a governance decision. Pending Write Off means someone has requested acceptance and the request awaits approval. Written Off means a second person has approved the exemption with documented justification. Closed means the team has resolved the underlying issue.

No finding skips a state. No exemption exists without two distinct human decisions. The lifecycle is the audit trail.
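The lifecycle above behaves like a small state machine. The sketch below is illustrative, assuming the happy path Open → Pending Write Off → Written Off plus the obvious side transitions (a rejected request returns to Open, an expired exemption reopens, a fixed finding closes directly); those side transitions are assumptions for the example, not documented Debt Manager behaviour.

```python
# Illustrative state machine for the four-state lifecycle.
TRANSITIONS = {
    "Open": {"Pending Write Off", "Closed"},
    "Pending Write Off": {"Written Off", "Open"},   # approved, or rejected back to Open
    "Written Off": {"Closed", "Open"},              # resolved, or exemption expired
    "Closed": set(),                                # terminal
}

def advance(state: str, target: str) -> str:
    """Move a finding to `target`, refusing any transition that skips a state."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state!r} -> {target!r}")
    return target

state = advance("Open", "Pending Write Off")
state = advance(state, "Written Off")
```

Because every transition is checked, the sequence of states is itself the audit trail: no finding can appear as Written Off without having passed through a pending request.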

SARIF Export: Regulatory-Grade, Not Dashboard-Grade

The ledger exports to the OASIS SARIF (Static Analysis Results Interchange Format) standard — the format that DevSecOps pipelines, SIEM platforms, and audit tooling already consume.

This is the single most important proof point. A dashboard shows a snapshot. A SARIF export produces a machine-readable record that feeds directly into your DORA evidence pack, your SOC 2 audit bundle, or your EU AI Act documentation. Quality Clouds also provides XLS export for teams that need human-readable formats.
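For readers unfamiliar with the format, a minimal SARIF 2.1.0 document looks like the sketch below. The envelope (`version`, `runs`, `tool.driver`, `results`) follows the OASIS standard; the tool name, rule ID, and message are invented for illustration and do not reflect the exact shape of the Quality Clouds export.

```python
import json

# Minimal SARIF 2.1.0 envelope; the rule and result values are illustrative.
report = {
    "version": "2.1.0",
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "runs": [{
        "tool": {"driver": {"name": "QualityClouds", "rules": [
            {"id": "SEC-001",
             "shortDescription": {"text": "Hardcoded credential"}},
        ]}},
        "results": [{
            "ruleId": "SEC-001",
            "level": "error",
            "message": {"text": "Hardcoded credential in Business Rule br_validate_input"},
        }],
    }],
}

print(json.dumps(report, indent=2))
```

Because the structure is standardised, the same file can be ingested by a DevSecOps pipeline, a SIEM, or an audit tool without any bespoke parsing.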

The export transforms Quality Clouds from a governance tool into a governance signal — one that sits upstream of ServiceNow’s AI Control Tower and its equivalents, producing the structured data they consume.

One Ledger, Two Platforms

Debt Manager runs on both ServiceNow and Salesforce today. Same write-off workflow. Same segregation of duties. Same four-state lifecycle. Same SARIF export.

For organisations operating both platforms — and most large enterprises do — this eliminates the gap between two separate governance processes. One AI Code Governance layer covers both.

What Is Coming, and What Is Here

AI provenance — capturing which LLM generated which line of code — is an active capability on the Quality Clouds roadmap, designed to extend our governance layer as provenance metadata matures across AI coding environments.

What exists today is more fundamental: the accountability layer. Quality Clouds records every governance decision — accept, reject, exempt, close — with the identity of the decision-maker, the justification, the rule, the artefact, and the timestamp. That record exists whether the original code came from ServiceNow Build Agent, Agentforce, a human developer, or a contractor working in a text editor.

Organisations that wait until Mythos-class AI operates inside their ServiceNow or Salesforce instances will build the accountability record after the incident, not before it. The governance infrastructure needs to exist before the AI agents arrive at scale.

Detection has become a commodity. Context files are a starting point, not a finish line. The accountability layer — the auditable, segregated, exportable system of record for every governance decision — is what regulated enterprises need today.

Quality Clouds is that AI Code Governance layer. Production-Ready AI Code starts with knowing who decided what, and why.

Frequently Asked Questions

What does AI Code Governance mean in practice for ServiceNow and Salesforce?

AI Code Governance means enforcing and recording policy decisions on every configuration element and code artefact in a ServiceNow or Salesforce instance — regardless of whether a human or an AI generated it. Quality Clouds provides the external enforcement layer, the segregation-of-duties workflow, and the exportable audit trail that make governance auditable rather than aspirational.

How does Debt Manager’s SARIF export support DORA compliance?

DORA Article 9 requires financial entities to document ICT risk management decisions, including accepted risks and their justifications. Debt Manager’s SARIF export provides a machine-readable record of every policy violation, every write-off request, every approval, and every expiration date. This feeds directly into the evidence pack that your DORA compliance team assembles. Quality Clouds does not produce the complete DORA evidence pack — it provides the structured governance data that your compliance framework consumes.

How does Quality Clouds compare to ServiceNow AI Control Tower?

Quality Clouds operates upstream of ServiceNow AI Control Tower. Debt Manager scans the instance, records policy violations, manages the exemption workflow, and exports the governance signal in SARIF format. AI Control Tower consumes that signal alongside other inputs to provide unified monitoring and compliance. The relationship is sequential: Quality Clouds produces the governance data, AI Control Tower consumes it.

Can Quality Clouds identify which AI tool generated a specific piece of code?

Not today. Quality Clouds does not currently capture AI provenance — which LLM or AI tool generated which artefact. That capability is on the product roadmap. The current system of record is origin-agnostic: it records the governance decision regardless of whether the code came from a human, ServiceNow Build Agent, Agentforce, or any other source. The accountability layer exists independent of the provenance layer.

How does AI Code Governance through Quality Clouds differ from using context files like CLAUDE.md or Cursor rules?

Context files instruct a single AI tool to follow standards within a single session or repository. They do not enforce compliance, do not prove enforcement occurred, and do not propagate across other AI tools touching the same instance. AI Code Governance through Quality Clouds provides external enforcement, a segregated approval workflow, a four-state finding lifecycle, and a SARIF export. Context files are a useful input to LLM behaviour. Quality Clouds is the accountability layer that governs the output.


As Co-Founder and CSO at Quality Clouds, I lead our strategic vision and market expansion to help enterprises redefine their technical standards through AI Code Governance


Albert Franquesa

Co-Founder & CSO, Quality Clouds

Don't just follow the change. Lead it

Subscribe to our newsletter
