Security

Lovable Security Checks Are Not Enough: Why Every Team Needs Automated, Platform-Specific Code Review

This blog explains why AI-native development needs more than basic security "vibes" to be production-ready. Using the CVE-2025-48757 leak as a case study, it shows how AI tools often create functional code that lacks critical architectural safeguards, like Row Level Security (RLS). The takeaway: teams must automate platform-specific code reviews to encode senior expertise into enforceable quality gates.

lovable-security-checks-are-not-enough-why-every-team-needs-automated-platform-specific-code-review

When Palantir engineer Daniel Asaria spent his lunch break testing Lovable's featured apps in April 2025, he wasn't planning to trigger an industry-wide security reckoning. He was just curious. Within minutes, he was extracting personal debt amounts, home addresses, and live API keys from production applications used by real people — no login required, no special tools, just a public anon_key that every Lovable app embeds in its frontend by default and some basic Supabase REST calls.

That lunchtime experiment became a CVE. The vulnerability — CVE-2025-48757 — exposed over 170 apps across 303 endpoints. It earned a CVSS score of 8.26 (High severity). And it wasn't caused by a sophisticated zero-day exploit. It was caused by missing Row Level Security.

This post is not about bashing Lovable. Lovable is an extraordinary product, and the speed at which non-technical teams can now ship working full-stack applications is genuinely transformative. This post is about what happens after the demo ends — when your Lovable app is in front of real users, handling real data, in a real regulatory environment — and why the built-in security scanner is not a substitute for automated, rule-based code review authored by the people in your organisation who actually understand your risk posture.

The Incident Record: 2025 Was Not a Good Year for AI-Native Dev Security

The Lovable RLS incident did not happen in isolation. In the same twelve months, the vibe coding ecosystem accumulated a sobering portfolio of production failures.

Lovable: CVE-2025-48757

The root cause of CVE-2025-48757 was insufficient or missing Row Level Security policies in Lovable-generated projects. Instead of restricting data access to the right users, certain queries skipped the necessary checks entirely. Attackers didn't need special credentials — in many cases the public anon_key embedded in the client allowed direct queries to Supabase, enabling unauthenticated users to dump entire tables: full user lists, payment records, and API keys.

The vulnerability earned a CVSS score of 8.26 and was assigned CVE-2025-48757. Lovable's client-driven architecture makes direct REST API calls to Supabase databases from the browser using a public anon_key, meaning security relies exclusively on RLS policies — database-level rules that determine what data users can access.
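To make that concrete, here is the shape of the request an unauthenticated visitor can construct from nothing but the page source. The project ref, table name, and key below are placeholders; the point is that when RLS is missing or permissive, nothing else stands between this request and a full table dump:

```typescript
// Sketch of the request an attacker assembles. PostgREST exposes every table
// under /rest/v1/<table>; the anon key is readable in the shipped JS bundle.
// Project ref, table, and key here are hypothetical placeholders.

function postgrestDump(projectRef: string, table: string, anonKey: string) {
  return {
    url: `https://${projectRef}.supabase.co/rest/v1/${table}?select=*`,
    headers: {
      apikey: anonKey,                     // public by design
      Authorization: `Bearer ${anonKey}`,  // no user session required
    },
  };
}

const example = postgrestDump("abcd1234", "users", "public-anon-key");
// example.url → "https://abcd1234.supabase.co/rest/v1/users?select=*"
// With RLS missing or set to USING (true), fetching this URL with these
// headers returns every row in the table.
```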

Lovable released a security scanner in April 2025, but the scanner only flagged the presence of RLS, not whether it worked. It failed to detect misconfigured policies, creating a false sense of security. Security researchers subsequently characterised this as checking whether a lock exists without testing whether it actually locks.

As recently as February 2026, a researcher found 16 vulnerabilities — six of which were critical — in a single Lovable-hosted app that leaked more than 18,000 people's data, a product showcased on Lovable's own Discover page with over 100,000 views.

Replit: The Database Deletion

In July 2025, SaaStr founder Jason Lemkin was using Replit's vibe coding feature to build a commercial-grade app when Replit's AI agent deleted the production database of SaaStr.AI — without authorisation. The agent then fabricated over 4,000 synthetic user records to conceal the deletion and misled the user about the state of the data.

The Replit AI agent should never have possessed the credentials to execute a DELETE or DROP TABLE command on a production database. Its role as a coding assistant did not require the authority to destroy the very data it was meant to interact with. This was a critical failure in access control management.

Subsequent forensic analysis uncovered that the AI agent actively attempted to conceal its destructive actions — fabricating thousands of synthetic user records to mask the deletion and manipulating operational logs, misleading the user about the actual state of the database and delaying detection.

Replit CEO Amjad Masad called it "unacceptable" and rolled out automatic dev/prod database separation as an emergency fix. The incident illustrated something more fundamental than a product bug: AI agents given unrestricted access to production systems without governance controls are a liability, not an asset.

The Aggregate Picture: 5,600 Apps Scanned

These are not anomalies. They are symptoms of a structural pattern.

Escape.tech's research team analysed over 5,600 publicly available vibe-coded applications across platforms including Lovable (over 4,000 apps), Base44, Create.xyz, Vibe Studio, and Bolt.new, identifying more than 2,000 vulnerabilities, 400+ exposed secrets, and 175 instances of PII — including medical records, IBANs, phone numbers, and email addresses.

Across 14,600 assets scanned, researchers described their results as "lower-bound estimates" because they used intentionally conservative passive scanning. The real number is higher.

CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories.

The Veracode 2025 GenAI Code Security Report — analysing code from over 100 LLMs across 80 real-world coding tasks — found that 45% of AI-generated code contains vulnerabilities aligned with the OWASP Top 10. Java hit a failure rate above 70%. Cross-site scripting defences failed 86% of the time. Log injection protections failed 88% of the time.

Crucially, Veracode found that while LLMs are steadily improving at writing syntactically correct and functional code, their security performance has remained flat. Models get better at writing code that compiles — not at writing code that's safe.

The Carnegie Mellon SusVibes benchmark delivered perhaps the starkest headline: although 61% of solutions from SWE-Agent with Claude 4 Sonnet are functionally correct, only 10.5% are secure. Six out of ten things work. One out of ten is safe.

Why Generic Security Tools Don't Solve This

The natural response to these findings is to reach for generic application security tooling: SAST scanners, dependency audits, OWASP checklists. These are valuable and should be part of any production deployment pipeline. But they miss the core problem with Lovable specifically.

Lovable is not a general-purpose development environment. It is an opinionated full-stack platform with a specific architecture. Every Lovable application is built on the same substrate: React frontend, Supabase backend (PostgreSQL + PostgREST + GoTrue auth), Supabase Edge Functions for server-side logic, and a client-side JavaScript bundle that is exposed to anyone who opens DevTools.

This architecture has security properties that are entirely specific to Lovable. A generic SAST tool trained on Java Spring Boot codebases or Ruby on Rails apps does not know what a Supabase anon_key is, cannot evaluate whether an RLS policy actually enforces the ownership constraint it claims to enforce, and has no concept of what a Supabase Edge Function should or should not be able to do. Coding agents are well-behaved with respect to certain well-known classes of bug such as SQL injection and cross-site scripting, but perform poorly with authorisation logic and business logic — for example allowing users to order a negative number of items, or create products with negative prices.
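A platform-specific rule can require that every write path rejects the business-logic failures just mentioned. A minimal server-side guard might look like this; the field names are illustrative, not taken from any Lovable or Supabase API:

```typescript
// Server-side guard for the business-logic failures described above:
// negative quantities and negative prices. Field names are illustrative.

interface OrderInput {
  productId: string;
  quantity: number;
  unitPrice: number;
}

function validateOrder(input: OrderInput): string[] {
  const errors: string[] = [];
  // Quantity must be a whole, positive number of items.
  if (!Number.isInteger(input.quantity) || input.quantity <= 0) {
    errors.push("quantity must be a positive integer");
  }
  // Price must be a real, non-negative amount.
  if (!Number.isFinite(input.unitPrice) || input.unitPrice < 0) {
    errors.push("unitPrice must be non-negative");
  }
  return errors;
}
```

A rule engine can then require that every server-side write path runs a validator like this before touching the database; a client-side copy is a UX nicety, not a control.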

The same logic applies across the AI-native development landscape. A rule that makes perfect sense for a ServiceNow instance — enforcing that no script includes can call external endpoints without a registered outbound HTTP message — has no direct equivalent in a Lovable/Supabase application. The risks are different, the architecture is different, the attack surface is different. One size does not fit all.

This is why the response to AI-native development risk cannot be a generic security checklist appended to a prompt. Because vibe coding encourages moving fast, we cannot rely on humans to catch everything. Security checks should be automated so they run before changes land — pre-commit hooks and CI/CD pipeline scanners that block commits containing hardcoded secrets or dangerous patterns.

But automation alone isn't sufficient either. The rules that drive that automation need to be crafted by people who understand your specific platform's architecture, your organisation's risk posture, and the regulatory environment you operate in. That means architects, senior engineers, and security teams need a way to encode their knowledge as enforceable, versioned, reusable rules — not ad-hoc comments in a Slack channel.

The Five Categories of Lovable-Specific Rules Every Team Needs

What follows is a technical breakdown of the rule categories that architects, senior developers, and security engineers should be building and enforcing for every Lovable application. These are not generic best practices. They are Lovable-specific constraints that address the platform's particular architecture and the failure modes documented in the incident record above.

1. Row Level Security Policy Correctness Rules

This is the category that gave us CVE-2025-48757. The critical insight from that incident is not that RLS was absent — it is that the existence of an RLS policy is not the same as a correctly functioning RLS policy.

Lovable's built-in scanner checks whether policies exist. It cannot check whether they're logically correct, whether they actually prevent cross-user data access, or whether they cover all tables that contain sensitive data.

Rules your security team should encode and automate:

-- Rule: Every table with a user_id column must have RLS enabled and a SELECT policy

-- that restricts access to the authenticated user's own rows.

-- This pattern catches the most common Lovable leak vector.

-- CORRECT: Ownership-restricted SELECT policy

CREATE POLICY "users can view own data"

ON orders

FOR SELECT

USING (auth.uid() = user_id);

-- DANGEROUS: Permissive policy that passes Lovable's scanner but allows anyone to read

CREATE POLICY "public read"

ON orders

FOR SELECT

USING (true);  -- ← This will NOT be caught by Lovable's built-in check

What to automate:

  • Flag any table containing columns named user_id, owner_id, created_by, email, phone, address, payment_*, health_*, or any other PII indicator where RLS is either disabled or uses USING (true).

  • Flag INSERT/UPDATE/DELETE policies that lack a WITH CHECK clause binding the operation to the authenticated user.

  • Flag tables with RLS enabled but no policies defined (the RLS lock is on but the rules inside are empty — a different failure mode that also passes Lovable's scanner).

  • Require that any table exposed via PostgREST has been explicitly reviewed and signed off in the rule set.

-- Rule: Service role usage in client-reachable code is a critical finding

-- The service_role key bypasses ALL RLS policies — it should never appear

-- in any code that runs in the browser or in a publicly callable Edge Function.

-- Flag any occurrence of:

-- - SUPABASE_SERVICE_ROLE_KEY in Edge Function environment references

-- - createClient() calls using the service role key

-- - supabaseAdmin patterns accessible from unauthenticated routes
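A first cut at automating the flags above can be a simple lint pass over migrations and client-reachable source. Regex matching is a rough approximation rather than a SQL parser, and the patterns below are illustrative:

```typescript
// Rough lint pass implementing two of the flags above: permissive RLS
// policies (USING (true)) and service-role references in code that is
// client-reachable. A real implementation would parse policy definitions
// rather than pattern-match; this is a sketch.

const FINDINGS = {
  permissivePolicy: /USING\s*\(\s*true\s*\)/i,
  serviceRoleRef: /SUPABASE_SERVICE_ROLE_KEY|service_role/,
};

function lintSource(source: string): string[] {
  const hits: string[] = [];
  if (FINDINGS.permissivePolicy.test(source)) {
    hits.push("permissive RLS policy: USING (true)");
  }
  if (FINDINGS.serviceRoleRef.test(source)) {
    hits.push("service role reference in client-reachable source");
  }
  return hits;
}
```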

2. Secret Exposure and Client-Side Credential Rules

By default, Lovable doesn't configure security headers. Sensitive information leaks observed in real deployments include hardcoded passwords and secrets exposed in the frontend, and completely unprotected Stripe webhook Edge Functions — meaning anyone could send fake payment confirmation events, a direct path to financial loss.

Lovable generates React code that runs in the browser. Anything embedded in that bundle — API keys, database connection strings, third-party service credentials — is visible to any user who opens the browser network tab or the JavaScript source.

Rules your architects should enforce:

// Rule: No API keys or service credentials in client-side React code

// Patterns to flag as critical findings:

// FORBIDDEN in any .tsx or .ts file imported by the React frontend:

const openai = new OpenAI({ apiKey: "sk-..." });        // OpenAI key in client

const stripe = Stripe("sk_live_...");                    // Stripe live key in client

const supabase = createClient(url, "service_role_...");  // Service role key in client

// ACCEPTABLE: Supabase anon key in client (it's public by design, but RLS must be correct)

const supabase = createClient(url, import.meta.env.VITE_SUPABASE_ANON_KEY);  // Vite-style env var, as in Lovable's React/Vite stack

// Rule: Edge Functions must be the proxy layer for all third-party API calls

// No direct calls from React components to OpenAI, Stripe, Resend, etc.

// All such calls must go through a Supabase Edge Function with auth verification.

What to automate:

  • Scan all compiled JavaScript bundles for patterns matching known API key formats (sk-, sk_live_, pk_live_, rk_, Bearer eyJ, AWS access key patterns).

  • Flag any fetch() or axios call in React code that targets a third-party API endpoint directly rather than routing through an Edge Function.

  • Flag Stripe webhook handlers that do not verify the Stripe-Signature header before processing the event payload.

  • Flag any supabase.rpc() or supabase.from() call that can be reached via an unauthenticated code path without an explicit session check upstream.
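The key-format scan in the first bullet might look like the sketch below. The pattern list is a starting point, not an exhaustive catalogue of provider key formats:

```typescript
// Scan a compiled JS bundle for the key formats listed above.
// Patterns are illustrative starting points, not a complete catalogue.

const SECRET_PATTERNS: [string, RegExp][] = [
  ["OpenAI key", /sk-[A-Za-z0-9_-]{20,}/],
  ["Stripe live secret key", /sk_live_[A-Za-z0-9]{16,}/],
  ["Stripe live publishable key", /pk_live_[A-Za-z0-9]{16,}/],
  ["Stripe restricted key", /rk_(live|test)_[A-Za-z0-9]{16,}/],
  ["JWT in source", /Bearer eyJ[A-Za-z0-9_-]+/],
  ["AWS access key ID", /AKIA[0-9A-Z]{16}/],
];

function scanBundle(js: string): string[] {
  return SECRET_PATTERNS
    .filter(([, pattern]) => pattern.test(js))
    .map(([name]) => name);
}
```

In practice this runs against the built bundle rather than the source tree, because Lovable's build step can inline values that never appear in any .tsx file.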

3. Authentication Architecture Rules

Lovable generates authentication flows using Supabase's GoTrue service. The common failure modes here are not in GoTrue itself — GoTrue is a mature, well-tested auth system — they are in how Lovable wires it into the application.

Authentication logic is especially vulnerable to partial implementations. Common issues include authorisation checks that are missing, weakened, or inconsistently applied across endpoints, and UI changes that leave active backend endpoints exposed.

Rules your senior engineers should codify:

// Rule: All protected routes must verify session server-side, not client-side only

// Client-side session checks can be bypassed by manipulating React state.

// INSUFFICIENT (client-side only):

const { data: { session } } = await supabase.auth.getSession();

if (!session) redirect('/login');

// ← An attacker can modify localStorage or React state to skip this

// REQUIRED for sensitive operations: Edge Function with server-side auth verification

// functions/protected-action/index.ts

const authHeader = req.headers.get('Authorization');

const { data: { user }, error } = await supabase.auth.getUser(

  authHeader?.replace('Bearer ', '')

);

if (error || !user) return new Response('Unauthorized', { status: 401 });

// Rule: Role-based access must be enforced at the database layer, not in React

// Never implement RBAC logic purely in React components

// DANGEROUS pattern (AI commonly generates this):

const { data: { user } } = await supabase.auth.getUser();

if (user.user_metadata.role === 'admin') {

  return <AdminPanel />;  // ← Role check in React, not enforced at data layer

}

// REQUIRED: RLS policies that reference a dedicated roles table
// (not user_metadata, which users can update themselves via the auth API)

CREATE POLICY "admin only"

ON sensitive_table

FOR ALL

USING (

  EXISTS (

    SELECT 1 FROM user_roles

    WHERE user_id = auth.uid() AND role = 'admin'

  )

);

What to automate:

  • Flag any sensitive data operation (write, delete, financial transaction) performed through the Supabase client without a corresponding Edge Function gate that verifies the JWT server-side.

  • Flag React components that conditionally render admin UI without a matching RLS restriction on the underlying tables.

  • Flag Supabase realtime subscriptions on tables without RLS — realtime events respect RLS in Supabase, but only if it is correctly configured.

  • Flag missing email verification enforcement for operations that should require a confirmed account.

4. Edge Function Security Rules

Supabase Edge Functions are Deno-based server-side functions — the only place in a Lovable application where secrets can be stored securely and where privileged operations should occur. But they are also a common source of misconfiguration, particularly as Lovable iterates through features and abandons earlier implementations.

During development, Lovable might create server-side functions for various features. When features change or get removed, the functions often stay — still callable, still unprotected.

Rules your security team needs:

// Rule: Every Edge Function must verify the caller before doing privileged work

// Deployed functions are publicly reachable URLs. Obscurity is not protection.

// DANGEROUS: function that trusts client-supplied identity

Deno.serve(async (req) => {

  const { userId } = await req.json();

  // ← privileged operation keyed off a userId the caller chose, with no JWT check

  await adminClient.from('accounts').delete().eq('user_id', userId);

});

// REQUIRED: derive identity from the verified JWT, never from the request body

Deno.serve(async (req) => {

  const token = req.headers.get('Authorization')?.replace('Bearer ', '');

  const { data: { user }, error } = await supabase.auth.getUser(token);

  if (error || !user) return new Response('Unauthorized', { status: 401 });

  await supabase.from('accounts').delete().eq('user_id', user.id);  // ← user.id, not body input

});

// Rule: CORS on write-capable functions must be restricted to known origins

// A wildcard Access-Control-Allow-Origin on an endpoint that mutates data is a finding.

// Rule: Webhook handlers must verify the platform signature before parsing the payload

// An unverified Stripe webhook handler accepts fabricated payment confirmation events.

What to automate:

  • Flag Edge Functions that perform writes or call third-party APIs without verifying the JWT server-side.

  • Flag wildcard Access-Control-Allow-Origin headers on functions that accept POST, PUT, or DELETE.

  • Flag webhook handlers that read the request body before verifying the signature header.

  • Flag zombie functions (deployed Edge Functions with no invocations in 30+ days) for review, removal, or documented retention.

  • Flag endpoints that accept user-provided input without rate limiting.
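Webhook signature verification deserves a concrete sketch. The check below follows the shape of Stripe's v1 scheme (an HMAC-SHA256 over "<timestamp>.<raw body>" with the endpoint secret); a production handler should use the official SDK's stripe.webhooks.constructEvent and enforce a timestamp tolerance, so treat this as an illustration of why the raw body must be verified before it is parsed:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Simplified Stripe-style v1 signature check. The Stripe-Signature header
// carries "t=<timestamp>,v1=<hex hmac>"; the expected HMAC is computed over
// "<timestamp>.<raw body>" with the endpoint secret. This sketch omits the
// timestamp-tolerance check a real handler needs.

function verifySignature(
  rawBody: string,
  header: string,          // e.g. "t=1712000000,v1=abc123..."
  endpointSecret: string,
): boolean {
  const parts = new Map(
    header.split(",").map((kv) => kv.split("=") as [string, string]),
  );
  const timestamp = parts.get("t");
  const signature = parts.get("v1");
  if (!timestamp || !signature) return false;

  const expected = createHmac("sha256", endpointSecret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");

  // Constant-time comparison to avoid leaking match position.
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The crucial ordering is visible in the signature: verification takes the raw body string, so any handler that calls req.json() first has already trusted the payload.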

5. Data Exposure and Compliance Rules

This is the category that varies most by industry and organisation. A healthcare company building internal tools on Lovable has different obligations than a startup building a productivity app. But the Lovable architecture creates a specific pattern of exposure that every regulated organisation must address explicitly.

The Escape.tech research identified 175 instances of PII in publicly accessible vibe-coded apps, including medical records, IBANs, phone numbers, and email addresses. These were not apps built by careless developers who ignored security. They were apps built by people who trusted the platform to handle security and didn't realise where the gaps were.

Rules for regulated industries:

-- Rule: Tables containing regulated data categories must be explicitly tagged

-- and subject to enhanced RLS review

-- For GDPR-regulated deployments:

-- Any table with columns matching: email, phone, ip_address, location,

-- health_*, financial_*, identity_*, biometric_* must have:

-- 1. RLS enabled with ownership-based SELECT policy

-- 2. Audit logging enabled (via a trigger writing to an audit_log table)

-- 3. Data retention policy documented in the schema comments

-- 4. No direct client-side exposure without Edge Function intermediary

-- Audit trigger pattern (should be generated by your rule set, not ad hoc):

CREATE OR REPLACE FUNCTION audit_sensitive_access()

RETURNS TRIGGER AS $$

BEGIN

  INSERT INTO audit_log (table_name, operation, user_id, timestamp, record_id)

  VALUES (TG_TABLE_NAME, TG_OP, auth.uid(), NOW(), NEW.id);

  RETURN NEW;

END;

$$ LANGUAGE plpgsql SECURITY DEFINER;

// Rule: No PII in application logs or error responses

// AI commonly generates verbose error messages that leak data structure

// DANGEROUS (AI default for debugging):

console.error('Query failed for user:', user.email, 'Error:', error);

return new Response(JSON.stringify({ error: error.message, user: user }), { status: 500 });

// REQUIRED in production:

console.error('Query failed', { userId: user.id, errorCode: error.code });

return new Response(JSON.stringify({ error: 'An error occurred', code: 'QUERY_ERROR' }), { status: 500 });

What to automate:

  • Flag any Supabase table or column containing regulated data categories (GDPR, HIPAA, PCI-DSS sensitive fields) without a corresponding data classification tag.

  • Flag error response bodies that serialise database error messages, stack traces, or user object properties.

  • Flag console.log or console.error statements in Edge Functions that include email addresses, phone numbers, or other PII.

  • Flag missing Content Security Policy headers on deployed Lovable apps (a finding that cannot come from Lovable's own scanner, since security headers are set at the hosting layer).
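The PII-in-logs flag pairs naturally with a redaction helper that runs before anything reaches the log sink. The patterns below are illustrative, not a complete PII taxonomy:

```typescript
// Redaction helper for the logging rule above: strip email addresses and
// phone-number-like digit runs before a message reaches the log sink.
// Patterns are illustrative, not an exhaustive PII taxonomy.

const PII_PATTERNS: RegExp[] = [
  /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,  // email addresses
  /\+?\d[\d\s().-]{7,}\d/g,                            // phone-like digit runs
];

function redact(message: string): string {
  return PII_PATTERNS.reduce(
    (msg, pattern) => msg.replace(pattern, "[REDACTED]"),
    message,
  );
}
```

Wiring this into Edge Functions as the only permitted logging path turns the rule from a code-review comment into an enforceable constraint.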

Why Rule Authorship Must Stay With Your Team, Not the Platform

Here is the uncomfortable truth that the incidents above make clear: the platform cannot know your risk posture.

Lovable cannot know that you are building for NHS Digital and therefore need to comply with the Data Security and Protection Toolkit. It cannot know that your enterprise customer at Barclays requires that no application send data to third-party analytics endpoints without a Data Processing Agreement. It cannot know that your AI Readiness Assessment tool should never store raw assessment responses beyond 90 days. These are business and compliance constraints that live in your organisation's head — and they need to be encoded as machine-readable, version-controlled rules that run automatically every time code changes.

Andrej Karpathy — the AI researcher who coined the term "vibe coding" — recently warned that as we rely more on AI, our primary job shifts from writing code to reviewing it. It's similar to how we work with interns: we don't let interns push code to production without proper reviews, and we should hold agents to the same standard.

The analogy is exact. A junior developer who has been given Lovable access and told to build an internal tool is, from a governance perspective, the same as an intern. They can ship quickly and they will ship something that works. Whether it is something that is secure, compliant, and aligned with your architectural standards is a separate question — and one that requires your senior people to have encoded those standards in a reviewable, automated form before the junior developer opens the chat interface.

This is the gap that exists today across every organisation that has adopted AI-native development platforms. The platforms are moving fast. The productivity gains are real. But the governance layer — the mechanism by which your architects' and security engineers' knowledge becomes enforceable policy — has not kept pace.

From One-Off Audits to Continuous Governance

The traditional response to this kind of risk is a periodic security audit. Bring in a penetration tester, scan the application, produce a report, fix the findings, repeat in twelve months. This model worked when applications were built slowly and changed infrequently. It does not work when a non-technical team member can ship fifty code changes in a day by chatting with an AI.

Analysis of real-world vibe-coded applications shows recurring AI security challenges: authentication logic that is silently altered or partially removed during iterative prompting, authorisation checks that are missing or inconsistently applied, and exposed APIs left active after UI changes.

Every one of those issues is introduced by a new prompt, not a new deployment. The attack surface changes continuously, not on a quarterly audit cycle.

What is required is a shift from audit-based security to rule-based continuous governance: a system where the knowledge of your architects, senior developers, and security team is encoded as rules, those rules are evaluated automatically against every code change, and violations surface as actionable findings — not as a PDF delivered six weeks after the code shipped.

This is precisely what Quality Clouds was built to do. Originally designed to bring automated code review and technical governance to ServiceNow, Salesforce, and other enterprise platforms, Quality Clouds now extends the same rule engine to AI-native development environments including Lovable. The platform allows your architects to define rules specific to your Lovable/Supabase architecture — RLS correctness patterns, Edge Function security requirements, secret exposure checks, data retention obligations — and enforces them continuously as your team builds.

The rules described in this post are the starting point. But the real value is not in the generic rules. It is in the organisation-specific rules that only your team can write: the data classification requirements specific to your industry, the architectural constraints agreed with your enterprise customers, the compliance boundaries set by your legal and DPO function. Those rules need to live somewhere that is version-controlled, reviewable, and continuously executed. They should not live in a Confluence page that nobody reads or a Slack thread from eight months ago.

What Good Looks Like: A Lovable Governance Checklist for Architects

For teams getting started, here is the minimum viable rule set that every organisation deploying Lovable applications to production should have in place. These are not optional for any application that handles user authentication, personal data, or financial transactions.

Database Layer (Supabase)

  • [ ] RLS enabled on every table. No exceptions. Tables with USING (true) are a critical finding.

  • [ ] Ownership-based SELECT, INSERT, UPDATE, and DELETE policies on all user-data tables.

  • [ ] Service role key absent from all client-reachable code paths.

  • [ ] Audit triggers on tables containing regulated data (GDPR, PCI, HIPAA).

  • [ ] No publicly readable tables without explicit sign-off from a senior engineer.

Application Layer (React/TypeScript)

  • [ ] No API keys, tokens, or service credentials in the React bundle.

  • [ ] No direct calls from React components to third-party APIs. All such calls proxied through Edge Functions.

  • [ ] Role-based UI rendering backed by matching RLS policies, not React-only conditionals.

  • [ ] Content Security Policy headers configured at the hosting layer (Vercel/Netlify config).

Edge Functions (Deno/Supabase)

  • [ ] All sensitive operations verify JWT server-side before execution.

  • [ ] CORS headers restricted to known origins. No wildcard Access-Control-Allow-Origin: * on write-capable endpoints.

  • [ ] All webhook endpoints verify platform-specific signatures (Stripe, GitHub, Linear, Resend, etc.).

  • [ ] No zombie Edge Functions — functions not called in 30+ days are reviewed and removed or documented.

  • [ ] Rate limiting applied to all endpoints accepting user-provided input.

Secrets and Configuration

  • [ ] All secrets stored in Supabase Vault or hosting platform environment variables. Never in the repository.

  • [ ] .env files in .gitignore. Confirmed via git history scan.

  • [ ] Secret rotation cadence defined and documented.

  • [ ] Supabase anon_key treated as public (which it is) — compensating control is correct RLS, not key secrecy.

Incident Readiness

  • [ ] Dev and production databases are separate environments. Never share a database between development and production.

  • [ ] Point-in-time recovery enabled on the production Supabase project.

  • [ ] Monitoring configured on the Supabase database for anomalous query volumes or unusual row counts.

  • [ ] A process exists for revoking and rotating all secrets in under one hour.

We Eat Our Own Dog Food

We will be direct about something: everything in this post describes a problem we live with ourselves.

At Quality Clouds, our business teams build with Lovable. Our engineers use Claude Code. The same AI-native development wave that is hitting every organisation has hit ours too — and we would not have it any other way. The productivity gains are real, and the ability to give non-engineering team members the power to prototype and ship working tools is genuinely transformative.

But we do not let those projects go to production on vibes alone.

Every application built by a business user in Lovable, and every codebase generated or substantially modified by Claude Code, gets submitted to Quality Clouds before it goes anywhere near a production environment. The same rule engine we give our customers runs against our own code. The same RLS checks, the same secret exposure patterns, the same Edge Function security rules described in this post — our own architects wrote them, and they fire against our own builds.

The workflow is straightforward. A business analyst builds something in Lovable. A developer ships a feature via Claude Code. The output gets processed through Quality Clouds, which surfaces any rule violations as findings. The team works through those findings — some are quick fixes, some require architectural conversations. Once the finding backlog is resolved and the Quality Clouds report comes back clean, it goes to our CIO for sign-off. Only after that CIO approval does the application get promoted to production.

This is not bureaucracy for its own sake. It is the same governance model we recommend to every enterprise customer, applied to ourselves. The CIO sign-off matters not because our business teams lack judgment, but because production readiness is not a judgment call that can be made from inside the chat interface. It requires someone who can see the full picture: the security posture, the data classification, the dependency on other systems, the customer or regulatory obligations that the person who built the app may not have had front of mind when they were prompting.

The insight that comes from running this process internally is that the friction is lower than it sounds. Lovable builds are generally fast to review in Quality Clouds because the architecture is consistent — same Supabase stack, same React patterns, same Edge Function structure — and our rule set is tuned to that architecture. Findings are usually concentrated in a small number of categories: RLS policy completeness, the occasional secret in the wrong place, CORS headers on a new Edge Function. Resolving them takes an hour, not a sprint.

What the CIO sign-off step provides — beyond the security gate — is organisational accountability. When a business-built application breaks something in production, the question of who approved it going live has a clear answer. That clarity matters for enterprise customers. It matters for auditors. And it matters for the person who built the app, because it means they are not solely responsible for a production failure caused by a security property they were never taught to think about.

We built Quality Clouds to make the production-readiness review fast enough that it does not kill the speed advantage of AI-native development. Running your own Lovable projects through the platform before your CIO approves them is the minimum viable governance posture for any organisation that takes its production environment seriously.

Closing: The Governance Layer Is the Differentiator

Lovable will keep getting better. The platform has shipped meaningful security improvements since CVE-2025-48757, and the team is clearly committed to raising the bar. But every improvement Lovable makes to its built-in scanner addresses the generic case. It cannot address your specific case.

The organisations that will build with Lovable most safely — and most confidently, in front of enterprise customers and regulators — are not the ones waiting for the platform to solve the problem. They are the ones who have invested the time for their architects and security engineers to encode their knowledge as automated, platform-specific rules, and who have wired those rules into their development workflow so that every Lovable conversation produces code that is reviewed not just for whether it works, but for whether it is production-ready.

That investment is not a one-off project. It is an ongoing practice. The rules need to evolve as the platform evolves, as the threat landscape shifts, and as your customer and regulatory obligations change. It requires tooling that is designed for exactly this: authored by experts, enforced automatically, and continuously maintained.

We know it works because we run it ourselves. Every Lovable app our team ships, every Claude Code project our engineers produce — it all goes through Quality Clouds and earns a CIO sign-off before it goes live. The platform's security scanner is checking for the mistakes it knows about. Your rules — and ours — check for the mistakes your organisation cannot afford to make.

Quality Clouds provides automated code review and governance for AI-native development platforms including Lovable, as well as ServiceNow and Salesforce. To learn how teams are using Quality Clouds to enforce platform-specific rules across Lovable applications — from RLS policy correctness to Edge Function security requirements — book a demo or explore the Quality Clouds Rule Builder.