AI coding agents run plugins with full system access. They can read your files, write code, execute commands. And right now, there is no mechanism to verify that the plugin files on disk are the same ones you reviewed and approved.
Think about that for a second.
You install a plugin. You review the code. Looks good. A week later, the author pushes an update. Or their account gets compromised. Or someone pushes a malicious change upstream. The agent loads the new code silently. No diff. No warning. No verification.
Your agent just became a supply chain attack vector.
## We solved this already
`package-lock.json` solved this for npm packages. `go.sum` solved it for Go modules. The pattern is proven: pin what you reviewed, verify before you load, fail if something changed.
AI agent plugins deserve the same treatment. The stakes are actually higher because these aren’t library functions running in a language sandbox. They’re extensions with full system access.
## Seal
Seal is a specification for tamper-evident supply chain verification of AI coding agent plugins.
It works like this:
- You install a plugin and review the code
- `seal.json` records the SHA-256 hash of every file
- Before the agent loads the plugin, it re-hashes the files and compares
- If anything changed (a single byte, an added file, a removed file) it’s flagged or blocked
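As a rough illustration, a lockfile pinning two plugin files might look like the fragment below. The field names and layout here are assumptions for the sake of the example; the actual schema is defined in the spec.

```json
{
  "version": 1,
  "plugins": {
    "example-plugin": {
      "files": {
        "index.js": "sha256:<64-char-hex-digest>",
        "manifest.json": "sha256:<64-char-hex-digest>"
      }
    }
  }
}
```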
No key management. No infrastructure. No runtime dependencies. Just a lockfile committed to your repo, reviewable in code review, enforceable in CI.
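The verification step itself is small enough to sketch. The snippet below is illustrative, not the spec's actual API: the `verify` function and the `{"files": {path: digest}}` lockfile layout are assumptions, and a real implementation would also apply the spec's canonicalization rules before hashing.

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(plugin_dir: Path, lockfile: dict) -> list[str]:
    """Compare a plugin's files on disk against the pinned hashes.

    Returns a list of violations; an empty list means the files
    match the lockfile. Any change (a modified byte, an added
    file, a removed file) produces at least one violation.
    """
    pinned = lockfile["files"]  # hypothetical layout: {relative_path: hex_digest}
    on_disk = {p.relative_to(plugin_dir).as_posix(): p
               for p in plugin_dir.rglob("*") if p.is_file()}
    violations = []
    for rel, expected in pinned.items():
        if rel not in on_disk:
            violations.append(f"missing file: {rel}")
        elif sha256_file(on_disk[rel]) != expected:
            violations.append(f"hash mismatch: {rel}")
    for rel in on_disk:
        if rel not in pinned:
            violations.append(f"unexpected file: {rel}")
    return violations
```

An agent (or a CI job) would call `verify` before loading the plugin and refuse to proceed if the returned list is non-empty.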
## Agent agnostic
Seal is not tied to any specific agent; Claude Code, OpenCode, Cline, Codex, or any other agent can implement it. One `seal.json` per project, shared across the team, verified regardless of which agent each person uses.
This matters because teams don't all use the same agent. As long as every agent in use implements Seal, the integrity guarantee holds across the entire team.
## What it catches
Supply chain compromise. A plugin author’s account gets compromised. Malicious code is pushed. Seal detects the hash mismatch before the agent loads anything.
Silent updates. A plugin author pushes a legitimate update that changes agent behavior. Without Seal, it loads silently. With Seal, you see the change and decide whether to accept it.
## What it doesn’t do
Seal verifies that files haven’t changed since you pinned them. It does not tell you whether a plugin is safe to install in the first place. That’s on you. First-install review is still your responsibility.
It also doesn’t monitor runtime behavior. A plugin that passes verification can still be malicious if the code you reviewed was malicious. Seal is about integrity, not intent.
## The spec
The full specification is here. It covers the lockfile format, content hashing algorithm, canonicalization rules, verification workflow, threat model, and implementation guidance.
I’m working on a standalone CLI tool so teams can adopt Seal without waiting for their agent to implement it natively.
If you’re building or maintaining an AI coding agent, I’d love to hear your thoughts. If you’re using AI agents at work with plugins you haven’t pinned, now you know why that should concern you.