HUSN Canaries

Defense-in-Depth for AI Coding Assistant Governance

Detect when your code is being analyzed by AI tools—whether by internal developers, external contractors, or attackers with stolen code. Provider-side enforcement that can't be bypassed.

Ehab Hussein, Principal AI Engineer, IOActive ([email protected])
Mohamed Samy, Senior AI Security Consultant, IOActive ([email protected])

We've all been in that meeting.

"How do we actually know if our source code is being sent to AI tools?"

— Someone from legal, compliance, or the executive team

The room goes quiet. Security looks at IT. IT looks at DevOps. DevOps looks at their shoes.

Everyone knows the honest answer: we don't.

Sure, we have policies. We have endpoint controls and network proxies. We block certain URLs and deploy DLP solutions.

But what happens when...

💼 A contractor copies a repository to their personal laptop and pastes it into Claude or Copilot at home?

🚪 A former employee who "forgot" to delete their local clone decides to explore it with Cursor?

🎯 An attacker exfiltrates source code and feeds it to AI tools to hunt for vulnerabilities?

We're blind.

They know they're analyzing our code. We have no idea. That's the problem we set out to solve.

Your Code Fortress Has Blind Spots

Organizations have no visibility when their code is analyzed by AI assistants—internally or externally.

🔓 Internal Governance Gaps

Security teams can't audit which developers use AI tools on which codebases. Compliance violations occur when regulated data is exposed to AI models.

🕵️ External Threat Blindness

When code is stolen, attackers use AI to rapidly find vulnerabilities. Organizations have zero visibility into this analysis.

Unenforceable Prohibitions

AI bans on sensitive codebases are meaningless once code leaves the network. Client-side controls are trivially bypassed.

AI Provider-Side Detection

Register invisible patterns already in your code. Detect AI usage regardless of who, where, or which tool.

  • Invisible Patterns

    Register function names, variables, and code snippets already in your codebase—no marker files to remove.

  • Bypass Resistant

    Detection happens at the AI provider level. Client-side modifications don't affect it.

  • External Threat Detection

    Detect when stolen code is analyzed—even by attackers using personal AI accounts.

  • Flexible Policies

    Notify, require approval, or block access completely based on sensitivity level.

// Register patterns already in your code
{
  "patterns": [
    { "type": "honeypot", "pattern": "__ACME_CANARY_*__",      "policy": "block"  },
    { "type": "function", "pattern": "acme_internal_*",        "policy": "notify" },
    { "type": "code",     "pattern": "class AcmeCrypto {...}", "policy": "block"  }
  ]
}
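To make the matching concrete: for registrations like the one above, a provider-side check could reduce to wildcard matching over the identifiers in any content an AI tool is about to read. The Python sketch below is a minimal illustration; the schema fields, canary names, and identifier-scanning strategy are assumptions for this example, not Husn's actual implementation.

import fnmatch
import re

# Hypothetical registered patterns mirroring the config above
# (field names and values are illustrative, not Husn's real schema).
REGISTERED = [
    {"type": "honeypot", "pattern": "__ACME_CANARY_*__", "policy": "block"},
    {"type": "function", "pattern": "acme_internal_*", "policy": "notify"},
]

def check_content(text):
    """Return every registered pattern that matches an identifier in text."""
    identifiers = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text)
    return [
        entry for entry in REGISTERED
        if any(fnmatch.fnmatchcase(name, entry["pattern"]) for name in identifiers)
    ]

# Example: a file an AI tool is about to read
source = "def acme_internal_rotate_keys():\n    return '__ACME_CANARY_7F3A__'"
for hit in check_content(source):
    print(f"{hit['type']} pattern {hit['pattern']} matched -> policy: {hit['policy']}")

Running this prints one line per match: the honeypot canary triggering the block policy, and the internal function name triggering a notify.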

Four Walls of Protection

1. Register Patterns

   Organizations register code patterns through the Husn admin console. No code changes required.

2. AI Provider Checks

   When any AI tool reads files, it calls the Husn API to check for registered patterns (see the sketch after this list).

3. Instant Detection

   On match, Husn alerts the organization and enforces the configured policy.

4. Take Action

   Organizations receive real-time alerts with user identity for rapid incident response.
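As a rough sketch of step 2 from the AI client's side: before file content reaches the model, the tool would post the content (or a fingerprint of it) to the Husn API and enforce whatever policy comes back. The endpoint, payload, and response shape below are illustrative assumptions, not the documented Husn interface.

import requests

HUSN_API = "https://api.husn.example/v1/check"  # hypothetical endpoint

def on_file_read(path, content, user):
    """Hook run by the AI tool before file content reaches the model.

    Posts the content to the Husn check API and enforces the returned
    policy; the payload and response fields here are assumptions.
    """
    resp = requests.post(
        HUSN_API,
        json={"user": user, "path": path, "content": content},
        timeout=5,
    )
    resp.raise_for_status()
    decision = resp.json()  # e.g. {"policy": "block", "matches": [...]}
    if decision["policy"] == "block":
        raise PermissionError(f"{path}: blocked by organization policy")
    # "notify" alerts the organization server-side; the read proceeds.
    return content

Because the check runs against the provider-side API rather than inside the client, uninstalling or patching the local tool does not remove the detection point.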

See Husn Canaries in Action

Watch a complete demonstration of Husn Canaries detecting and blocking AI analysis of protected code.

Husn Canaries Demo (watch on YouTube)

Why Provider-Side Enforcement?

Capability                      Client Hooks   Network Proxies   Husn Canaries
Bypass resistant                     ✗               ✗                ✓
Works across all AI clients          ✗               ✗                ✓
Detects external threats             ✗               ✗                ✓
Works for web UI                     ✗               ✓                ✓
No client configuration              ✗               ✓                ✓
Detects stolen code analysis         ✗               ✗                ✓

Read the Full Paper

Our paper presents the complete design, threat model, security analysis, and proof-of-concept implementation.