AI Code Has an Owner Problem

Linus Torvalds runs the most important codebase on Earth, and he just told the people writing it: if an AI helped you, fine, but a human name goes on the line. The Linux kernel will not accept a patch signed off by a model. Your dev shop probably will. That gap - between code an AI wrote and the human accountable for it - is the ownership problem nobody warned you about.
You paid an agency to ship something. You can read a contract; you can read a commit log if somebody points you at it. The code itself you can't. So when an investor's diligence partner emails on a Friday afternoon and asks "what percentage of your codebase was generated by AI, and who reviewed it?", you have to go ask the agency, the agency has to ask Marcus, the developer who left in February, and by Monday morning you don't have an answer - the deal slips a week, then two.
What Torvalds Did #
In April 2026 the Linux kernel maintainers added a new tag, Assisted-by:, to the patch submission rules. The position, written up in Hackaday on April 14, 2026, is short: a contributor may use an AI tool to draft a patch, but they still read every line, sign it under the existing Developer Certificate of Origin (the DCO is the kernel's standing rule that whoever signs the patch is legally responsible), and stay liable if the code turns out to be plagiarized or improperly licensed. Maintainers don't try to detect AI involvement; they enforce through review.
A model can’t be sued, so a person stays on the hook for whatever the model wrote. Your Stripe webhook handler runs on Linux too. The kernel won’t let a model sign a patch into the stack your billing depends on; your contract probably will.
Half The AI Code Out There Is Broken #
Your app doesn’t run the world’s data centers. It does run Stripe and hold customer credit-card data, and the people on the other side of the browser assume somebody who knew what they were doing wrote it. That somebody is partly Claude now, and Claude can’t be deposed.
Veracode’s 2025 State of Software Security report found that 45% of AI-generated code samples - both the code the model wrote and the packages it suggested - contained at least one vulnerability from the OWASP Top 10, the standard list of things that get startups breached. The Stanford 2025 AI Index Report tracks adoption of AI coding tools across professional developers: more than three quarters were using one at work by the end of 2025.
Nearly half of AI-written code ships with a known security defect, and most of those developers aren't running the kernel's review process. A lot of them work at the agency you hired.
The Ownership Gap In Your Contract #
Pull your dev shop contract out and look for the words “artificial intelligence” or “machine learning” or “generative.” If they’re not there, your contract is from before the question existed. That’s most contracts. We’ve read about 40 agency MSAs in the last six months on rescue engagements; the gap shows up in nearly all of them. The IP-assignment clause says the deliverable belongs to you. The warranty clause says the work will be free of third-party claims. Neither clause says whether a human typed it or a model sampled it from somebody else’s licensed source - so you own the code without owning the answer to who or what wrote it.
A founder we worked with in March 2026 - Series A B2B, Rails 7 stack, six months in - asked her agency three questions after reading about the PocketOS incident. Which functions in our codebase were AI-assisted, who signed off on each one, and what dependencies did the AI pull in that nobody on her team had personally audited? Her agency's answer came back four days later: we can't tell you, the developer who did most of it left in February, and his Cursor history is on his old laptop. She forwarded the email and asked what to do. The first thing we told her: that email is the audit finding. The second thing was harder for us to say out loud - the same gap had sat in our own template MSA until last quarter, and we'd missed it in two reviews before catching it in the third. Her contract didn't require her agency to keep that history. Neither did ours, until we added the clause.
What Investors Are Asking #
A partner at a Bay Area seed fund told us in April that “AI code ratio” is now a standard line item on his technical diligence checklist for Series A and later. He skips past “did you use AI?” because everybody used AI. What he wants to hear on the call is “we know how much, and here is who signed off on each piece.”
Founders who close the round answer on the call. The ones who don’t ship a follow-up email two weeks later and get a polite delay. Two companies lost term sheets in 2026 over follow-ups that were technically truthful and operationally embarrassing - one of them was ours to advise. The diligence partners we’ve spoken to are happy to back AI-heavy teams; they pass on founders who can’t describe what’s in their own product.
Gitar raised $9 million in April 2026 to sell agents that audit AI-generated code for the issues humans aren’t catching, and they’re one of about half a dozen companies in the same space. You don’t have to buy any of them to fix this in your own startup - the check just confirms the market thinks the problem is real.
Can You Name What’s In Your Software? #
One specific question every diligence partner asks: can your team name every dependency in the codebase, including the ones the AI added without telling anyone?
This is the Software Bill of Materials problem - the SBOM - and Security Boulevard wrote up the practitioner version in April 2026 with a sentence that should be on the wall of every founder’s office: “if you cannot name what is in your software, you do not control your software.”
A model suggests a small piece of free third-party code to fix a bug. That piece pulls in six more from other strangers on the internet - the kind of transitive dependency chain Security Boulevard’s piece describes, where an unmaintained library or a hijacked publishing account ends up in your build because the test suite went green and the ticket needed closing.
You didn’t consent to any of that, and you can’t list it. The codebase has code from people nobody on your team can name, and your name still goes on the line. We covered the longer version of this in the vibe coding crisis post - “we shipped fast” is half an answer now.
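The chain itself is mechanical enough to sketch. A minimal Python illustration, with a made-up dependency graph standing in for a real lockfile - one AI-suggested package fans out into six packages you never chose (all names here are invented):

```python
from collections import deque

def transitive_closure(graph, direct):
    """Walk a dependency graph breadth-first and return everything
    the given direct dependencies pull in transitively."""
    seen, queue = set(direct), deque(direct)
    while queue:
        pkg = queue.popleft()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - set(direct)

# Hypothetical graph: one small AI-suggested fix, six strangers behind it.
graph = {
    "tiny-fix": ["left-util", "str-pad"],
    "left-util": ["is-thing", "obj-walk"],
    "str-pad": ["is-thing", "ansi-x", "glob-lite"],
}
print(sorted(transitive_closure(graph, ["tiny-fix"])))
# six packages nobody on your team picked
```

A real audit would build the graph from your lockfile instead of a dict, but the shape of the problem is exactly this walk.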
Five Clauses That Close The Gap #
The cheapest fix is the contract you haven’t signed yet. Add these five clauses to the next master service agreement you put in front of an agency, or to the amendment you send the one you already have. A working agency will sign them without renegotiating rates; one that fights all five is telling you something useful.
Start with AI disclosure on every commit. The agency agrees to use the kernel’s Assisted-by: convention or its equivalent (a commit message tag, a separate column in their PR template) so each commit says whether AI tooling was involved and which one. Most days you’ll never read this. On the diligence call, it’s your answer.
Pair it with a human reviewer of record. Every AI-assisted commit names a person who read the diff before merge, and that person is liable for what the patch does. This is the clause that turns “the developer left in February” from a missing-person problem into a breach-of-contract one.
Third clause: SBOM delivery on every release. The agency hands over a machine-readable list of every dependency in the codebase, including the ones that came along for the ride, with each release tag. Tools like Syft emit this in standard formats such as CycloneDX for free in under a day, so asking for it isn't exotic.
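What comes back is just a structured document you can read programmatically. A minimal sketch of consuming a CycloneDX-style SBOM in Python - the fragment below is hand-written for illustration; a real one would come out of a tool like Syft:

```python
import json

# A minimal CycloneDX-style fragment (illustrative, not tool output).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "rails", "version": "7.1.3"},
    {"type": "library", "name": "str-pad", "version": "0.0.2"}
  ]
}
"""

def list_components(doc):
    """Name every dependency the SBOM records, version included."""
    return [f'{c["name"]}@{c["version"]}' for c in doc.get("components", [])]

print(list_components(json.loads(sbom_json)))
# ['rails@7.1.3', 'str-pad@0.0.2']
```

The point of the clause is that this list exists per release and somebody can diff it; the Security Boulevard line - if you cannot name what is in your software, you do not control it - is answered by exactly this output.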
Add human approval on AI-introduced dependencies. Any new package the AI suggests goes through a person before it lands in main. This slows some merges by an hour or two; that’s the cost, and it’s worth it. The Veracode finding lives mostly here - a model picks the wrong package, a developer accepts the suggestion in a hurry, and the vulnerability rides in.
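The gate itself can be as cheap as a set difference over the lockfile, run before merge. A hedged Python sketch with invented package names:

```python
def unapproved_additions(old_lock, new_lock, approved):
    """Compare the dependency set before and after a change;
    anything new that no human has signed off on gets flagged."""
    added = set(new_lock) - set(old_lock)
    return sorted(added - set(approved))

# Hypothetical lockfile contents before and after an AI-assisted PR.
before = {"rails", "pg", "puma"}
after = {"rails", "pg", "puma", "tiny-fix", "str-pad"}

print(unapproved_additions(before, after, approved={"tiny-fix"}))
# ['str-pad'] - the package that rode in without a human decision
```

Wire that into CI and the hour-or-two cost in the clause above becomes a reviewer clicking approve on a named list, not an archaeology project later.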
Last clause, the one that closes the back door: source-handover obligation on termination. You receive not only the code but the AI tooling history (Cursor sessions, saved prompts, a README listing which models were used in which periods). It turns “we can’t tell you, that developer left in February” into a breach instead of an apology.
Expect a 30-minute call about the amendment. Expect the agency to ask for a 5-10% rate bump on the SBOM and handover clauses; they require operational changes on their side, and that’s a fair ask. If they push back on all five, you’ve already learned what was missing from the original deal.
But Wait - You Trust Your Agency #
Sure. So did Jer Crane at PocketOS, a live car-rental SaaS whose AI agent dropped the production database and the backups in nine seconds. His agency wasn't malicious. They just didn't gate the model the way the kernel gates a contributor. The diligence partner asks for a signature, not for trust. The kernel maintainers trust their contributors and still demand one. Without a paper trail, trust is just hope, and hope doesn't survive due diligence.
The last three codebases we inherited had no Assisted-by: history, no named human reviewer per commit, no SBOM, and no record of which dependencies the model had introduced. Each absence maps to one of the clauses above. None of them required exotic engineering - they required somebody asking the question before the contract was signed.
How To Audit Yourself This Weekend #
You don’t need a forensic engineer to take the first pass.
Open your repository in your browser. Look at the commit history and read the last 50 messages out loud. If you can’t tell which of those changes were AI-assisted, your team isn’t tracking it. That’s finding number one.
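If your commits do carry an Assisted-by: trailer, the ratio the diligence partner asks for is one small function. A Python sketch over invented commit messages - and remember that in a real log, the absence of tags is finding number one, not a zero:

```python
def ai_commit_ratio(messages):
    """Count commit messages carrying an Assisted-by: trailer.
    Untagged commits may still be AI-assisted; a missing tag is
    itself an audit finding, not evidence of human authorship."""
    tagged = sum("Assisted-by:" in m for m in messages)
    return tagged, len(messages), tagged / len(messages)

# Hypothetical last-four commit messages.
log = [
    "fix: null check\n\nAssisted-by: Cursor",
    "chore: bump deps",
    "feat: webhook retries\n\nAssisted-by: Claude\nSigned-off-by: Dana",
    "docs: readme",
]
tagged, total, ratio = ai_commit_ratio(log)
print(f"{tagged}/{total} commits tagged ({ratio:.0%})")
# 2/4 commits tagged (50%)
```

Feed it the last 50 messages from `git log` and you have the number for the diligence call, or the proof that nobody has been keeping it.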
Then look at your agency's last invoice. Does any line item reference AI tooling, prompts, or model usage? If not, ask the agency directly, by email, in writing: which of our commits this quarter used AI assistance, who reviewed each one, and what new dependencies were added as a result? Save the email - the reply is your audit document.
While you wait, open your hosting dashboard. Find out who can deploy to production and when somebody last tested restoring from a backup. PocketOS lost the live database and the offsite backup in nine seconds because the same access path opened both, and that path was an over-scoped agent token nobody had audited. The credentials audit is the conversation you have with your hosting provider; the code-ownership audit you’ve already started.
Two of the three rescues we ran last quarter started with a 48-hour silence on exactly that email to the agency.
If you’d rather have a second pair of eyes do the audit: send us a read-only repo invite. We send back a one-page report in 48 hours - AI-vs-human commit ratio, any unscoped tokens or god-mode keys, and any AI-introduced dependencies nobody vetted. No force-push, no deploy keys, no rebuild pitch. Revoke the invite the moment the report ships. Not ready to share code yet? Send your three biggest concerns and we’ll send back a one-page checklist.
When Not To Bother #
This whole conversation is overkill for a category of work that is genuinely fine to vibe-code: prototypes you will throw away in three weeks, internal scripts that touch one CSV and run once on a founder’s laptop, marketing-page experiments, hackathon projects. The throwaway lane where Karpathy originally framed vibe coding is real and useful, and dragging it through a five-clause contract review will slow you down for no reward.
The line is whether the code touches production data or paying users. Below the line, ship fast and don’t feel guilty. Above the line, the kernel’s rule applies and so do the five clauses. The opposite failure shows up too - two founders we worked with last year applied audit-grade rigor to a Notion-replacement side project and shipped nothing for two quarters.
What To Do Monday Morning #
You don't have to fire anyone or switch agencies. The work fits on a postcard: ask one question in writing, see what comes back, decide from there. If the agency can describe their AI policy in two paragraphs and produce an SBOM in 48 hours, your contract just needs the five clauses bolted on at renewal. If they can't, our guide to firing a dev shop covers what comes next, and our guide to the questions to ask the next one covers how to avoid landing here again.
Torvalds didn’t change the world. He codified what the kernel had been enforcing informally for a year - a human signature on every patch, AI or not. Your codebase runs the same rule whether your contract says so or not. Find out on a weekend audit, or find out from your diligence partner.