We've all been in that meeting.
"How do we actually know if our source code is being sent to AI tools?"
— Someone from legal, compliance, or the executive team
The room goes quiet. Security looks at IT. IT looks at DevOps. DevOps looks at their shoes.
Everyone knows the honest answer: we don't.
Sure, we have policies. We have endpoint controls and network proxies. We block certain URLs and deploy DLP solutions.
But what happens when...
A contractor copies a repository to their personal laptop and pastes it into Claude or Copilot at home?
A former employee who "forgot" to delete their local clone decides to explore it with Cursor?
An attacker exfiltrates source code and feeds it to AI tools to hunt for vulnerabilities?
We're blind.
They know they're analyzing our code. We have no idea. That's the problem we set out to solve.
Organizations have no visibility into when their code is analyzed by AI assistants, whether internally or externally.
Security teams can't audit which developers use AI tools on which codebases. Compliance violations occur when regulated data is exposed to AI models.
When code is stolen, attackers use AI to rapidly find vulnerabilities. Organizations have zero visibility into this analysis.
AI bans on sensitive codebases are meaningless once code leaves the network. Client-side controls are trivially bypassed.
Register invisible patterns already in your code. Detect AI usage regardless of who, where, or which tool.
Register function names, variables, and code snippets already in your codebase—no marker files to remove.
Detection happens at the AI provider level. Client-side modifications don't affect it.
Detect when stolen code is analyzed—even by attackers using personal AI accounts.
Notify, require approval, or block access completely based on sensitivity level.
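The tiered response above can be sketched as a simple policy mapping. This is an illustrative sketch only; the `Sensitivity` and `Action` names and the default tier assignments are assumptions, not Husn's actual configuration schema.

```python
from enum import Enum

class Sensitivity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class Action(Enum):
    NOTIFY = "notify"                      # alert the organization, allow access
    REQUIRE_APPROVAL = "require_approval"  # hold access until an admin approves
    BLOCK = "block"                        # deny the AI tool access entirely

# Hypothetical default policy: stricter action for more sensitive code.
DEFAULT_POLICY = {
    Sensitivity.LOW: Action.NOTIFY,
    Sensitivity.MEDIUM: Action.REQUIRE_APPROVAL,
    Sensitivity.HIGH: Action.BLOCK,
}

def action_for(sensitivity: Sensitivity) -> Action:
    """Look up the configured enforcement action for a sensitivity tier."""
    return DEFAULT_POLICY[sensitivity]
```

In practice an organization would override these defaults per codebase; the point is that the enforcement decision is driven by a registered sensitivity level, not by anything on the client.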
Organizations register code patterns through the Husn admin console. No code changes required.
When any AI tool reads files, it calls the Husn API to check for registered patterns.
On match, Husn alerts the organization and enforces the configured policy.
Organizations receive real-time alerts with user identity for rapid incident response.
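The four steps above can be sketched as a provider-side check. Everything here is hypothetical: the `CanaryPattern` shape, the example pattern names, and the in-memory registry stand in for the Husn API and admin console, which the paper describes in full.

```python
import re
from dataclasses import dataclass

@dataclass
class CanaryPattern:
    pattern_id: str
    regex: str    # a registered identifier, e.g. a function or variable name
    action: str   # "notify", "require_approval", or "block"

# Hypothetical registry, populated via the admin console (step 1).
REGISTRY = [
    CanaryPattern("p1", r"\bcalc_risk_score_v7\b", "block"),
    CanaryPattern("p2", r"\binternal_billing_rate\b", "notify"),
]

def check_content(content: str) -> dict:
    """Scan text an AI tool is about to read for registered patterns (steps 2-3)."""
    matches = [p for p in REGISTRY if re.search(p.regex, content)]
    if not matches:
        return {"allowed": True, "alerts": []}
    # Enforce the strictest configured action among the matched patterns.
    order = {"notify": 0, "require_approval": 1, "block": 2}
    strictest = max(matches, key=lambda p: order[p.action])
    return {
        "allowed": strictest.action != "block",
        # Matched IDs drive the real-time alert to the organization (step 4).
        "alerts": [p.pattern_id for p in matches],
    }
```

Because this check runs where the AI provider reads the file, a client that strips hooks or routes around a proxy changes nothing; the registered identifiers travel with the code itself.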
Watch a complete demonstration of Husn Canaries detecting and blocking AI analysis of protected code.
| Capability | Client Hooks | Network Proxies | Husn Canaries |
|---|---|---|---|
| Bypass resistant | ✗ | ✗ | ✓ |
| Works across all AI clients | ✗ | ✗ | ✓ |
| Detects external threats | ✗ | ✗ | ✓ |
| Works for web UI | ✗ | ✗ | ✓ |
| No client configuration | ✗ | ✗ | ✓ |
| Detects stolen code analysis | ✗ | ✗ | ✓ |
Our paper presents the complete design, threat model, security analysis, and proof-of-concept implementation.