Most organizations that need an AI agent governance framework do not have one. They have fragments: an acceptable use policy that mentions 'AI tools' in passing, a vendor assessment process that was built for SaaS applications and does not fit agents well, and a monitoring setup that captures some AI activity by accident.
Building a coherent framework from scratch is achievable in a matter of weeks for most organizations, but it requires tackling the problem in the right order. The four pillars—inventory, classify, approve, monitor—give you a sequential path that produces value at each stage rather than requiring full completion before anything works.
Pillar 1: Inventory — Know What Exists
You cannot govern what you cannot see. The inventory phase has two objectives: discover what agents are currently deployed (the shadow AI discovery problem) and establish a registry that all future agents must enter.
Discovery Exercise
Run a structured discovery exercise across four channels: network traffic analysis (outbound connections to AI API endpoints), OAuth/API key auditing (tokens issued to unrecognized applications), employee self-disclosure (a structured survey framed as support, not enforcement), and SaaS management tool review (CASB data, browser extension inventory).
Scope the discovery to action-taking agents specifically. A static AI assistant with no tool access represents a different risk profile than an agent with CRM write access. Your initial inventory can focus on agents with one or more of: external API write access, internal data system access, code execution, email/calendar read/write, or file system access.
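The network traffic channel can be bootstrapped with a simple log scan. The sketch below flags outbound requests to known AI API hostnames in a proxy or flow log; the hostname list and the log format (CSV with `timestamp,source,destination` columns) are assumptions to adapt to your own egress logging.

```python
import csv
from collections import Counter

# Illustrative, not exhaustive -- extend with the endpoints relevant to you.
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_destinations(log_path):
    """Count requests per (source host, AI API destination) pair."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            dest = row["destination"].strip().lower()
            if dest in AI_API_HOSTS:
                hits[(row["source"], dest)] += 1
    return hits
```

High request counts from a single source are a strong signal of an undisclosed agent; one-off hits are more likely an employee experimenting in a browser.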
The Agent Registry
Establish a centralized registry where every agent deployment must be recorded. The registry entry for each agent should capture: agent name and owner, tool access permissions, data classifications it can read, external systems it can write to, the model or framework it uses, and the approval status.
The registry does not need to be sophisticated—a shared spreadsheet works at small scale. What matters is that it exists, that submissions to it are required (not optional), and that it is reviewed regularly.
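If you outgrow the spreadsheet, the registry entry maps naturally onto a small record type. This sketch mirrors the fields listed above; the field names and status values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistryEntry:
    name: str
    owner: str
    tools: list = field(default_factory=list)            # tool access permissions
    data_read: list = field(default_factory=list)        # data classifications readable
    external_writes: list = field(default_factory=list)  # external systems writable
    model: str = ""                                      # model or framework used
    approval_status: str = "pending"                     # pending / approved / rejected

# Hypothetical entry for a CRM-summarizing agent.
entry = AgentRegistryEntry(
    name="crm-summarizer",
    owner="jane.doe",
    tools=["crm.read", "email.send"],
    data_read=["Internal", "Confidential"],
    external_writes=["smtp"],
    model="gpt-4o",
)
```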
Pillar 2: Classify — Understand the Risk Level
Not all agents are equally risky. A classification framework lets you apply proportionate governance effort: light-touch for low-risk agents, rigorous review for high-risk ones.
Risk Dimensions
Classify each agent across four dimensions:
- Data sensitivity: What is the classification of the most sensitive data the agent can access? (Public / Internal / Confidential / Restricted)
- Action scope: Can the agent take irreversible actions? (Read-only / Reversible writes / Irreversible actions)
- External exposure: Can the agent send data outside the organization's control perimeter? (None / Logging only / External API writes / Public web)
- Blast radius: If the agent were compromised or misconfigured, how many records or systems could be affected? (Individual / Team / Department / Enterprise-wide)
Risk Tiers
Map the four dimensions to three tiers:
- Tier 1 (Standard): Public or Internal data only, read-only or reversible writes, no external exposure, individual blast radius. Lightweight approval, standard logging.
- Tier 2 (Enhanced): Confidential data or irreversible actions or external API writes, bounded blast radius. Full security review, enhanced logging, quarterly access review.
- Tier 3 (Critical): Restricted data, irreversible actions AND external exposure, enterprise-wide blast radius. Full security review, CISO approval, continuous monitoring, human-in-the-loop gates for high-impact actions.
Most agents deployed by knowledge workers will be Tier 1 or Tier 2. Tier 3 agents are typically built by development teams for business-critical automation and should be rare.
Pillar 3: Approve — Structured Decision Process
The approval process should be proportionate to the risk tier and fast enough that employees do not route around it.
Tier 1 Approval
Self-service with manager acknowledgment. The agent owner completes the registry entry, selects the appropriate tools from an approved tool catalog, and their manager acknowledges the submission. No security team involvement required. Target turnaround: same business day.
Tier 2 Approval
Security review required. The agent owner submits the registry entry with a completed data access and tool permission form. Security reviews the submission, focusing on data access justification and external write permissions. Target turnaround: 3–5 business days. Optional: automated pre-screening that fast-tracks common low-complexity patterns.
Tier 3 Approval
Full review with CISO sign-off. The submitter completes a detailed security assessment form covering threat modeling, incident response procedures, and data handling controls. Security conducts a technical review. CISO approves. Target turnaround: 10–15 business days. This timeline is a feature, not a bug—it creates natural pressure to avoid unnecessary Tier 3 deployments.
The Approved Tool Catalog
Parallel to the approval process, maintain an approved tool catalog: a list of tools (APIs, MCP servers, browser automation libraries, etc.) that agents are permitted to use. Tools not on the catalog require security review before they can be added. This prevents the combinatorial explosion of novel tool combinations that would otherwise make each agent review a greenfield assessment.
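A minimal catalog gate is enough to start, assuming tools are identified by string IDs. In practice the catalog would live in a database or a config repo rather than a module constant.

```python
# Illustrative catalog contents -- replace with your approved tool IDs.
APPROVED_TOOLS = {"crm.read", "docs.search", "email.send"}

def unapproved_tools(requested):
    """Return the requested tools that need security review before use."""
    return sorted(set(requested) - APPROVED_TOOLS)
```

A submission with an empty `unapproved_tools` result can proceed through the normal tier process; anything else routes to security for a catalog-addition review first.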
Pillar 4: Monitor — Ongoing Visibility
Approval is a point-in-time control. Monitoring provides continuous assurance that agents are operating within their approved parameters.
What to Log
At minimum, capture: every tool invocation (which tool, which agent, which user context, timestamp), data classification of inputs (if determinable), external write destinations, and error/exception events. This requires structured logging at the agent gateway layer, not just prompt/completion logs.
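A gateway-layer invocation record might look like the sketch below; the field names are illustrative, and a real gateway would emit these to your SIEM rather than return them.

```python
import json
import datetime

def log_tool_invocation(agent, user, tool, destination=None, data_class=None):
    """Build a structured JSON record for one tool invocation."""
    record = {
        "event": "tool_invocation",
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "user": user,
        "tool": tool,
        "external_destination": destination,      # None for internal-only calls
        "data_classification": data_class,        # None if not determinable
    }
    return json.dumps(record)
```

The key property is that every field needed for the alerting rules below (agent identity, tool, destination, classification) is present on every record, so alerts never require joining back to prompt logs.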
Alerting Thresholds
Configure alerts for: agents accessing data classifications above their approved level, agents writing to external destinations not listed in their registry entry, read volume spikes (>3x baseline for Tier 2/3 agents), and agents spawning sub-agents not listed in their registry entry.
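Two of these alerts reduce to comparing observed behavior against the registry entry. A hedged sketch, assuming the entry is a dict with an `external_writes` list and that you maintain a per-agent read-volume baseline:

```python
def check_alerts(entry, observed_dests, read_count, baseline):
    """Return alert strings for registry drift and read-volume spikes."""
    alerts = []
    drift = set(observed_dests) - set(entry["external_writes"])
    if drift:
        alerts.append(f"unregistered external destinations: {sorted(drift)}")
    if baseline and read_count > 3 * baseline:
        alerts.append(f"read volume spike: {read_count} vs baseline {baseline}")
    return alerts
```

The same drift pattern extends to the other two alerts: compare accessed data classifications and spawned sub-agents against the registry entry's approved lists.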
Periodic Review Cadence
- Tier 1: Annual registry review — confirm the agent is still in use and access is still appropriate.
- Tier 2: Quarterly access review — review tool permissions and data access patterns against baseline.
- Tier 3: Continuous monitoring with monthly review — security team reviews anomaly reports and confirms human-in-the-loop gates are functioning.
Implementation Roadmap
For most organizations, the path from zero to operational framework takes 6–8 weeks:
- Weeks 1–2: Discovery exercise — surface existing agents, establish initial registry.
- Weeks 2–3: Classification framework — define tiers, classify discovered agents.
- Weeks 3–4: Approval process design — build intake forms, define review workflow, identify reviewers.
- Weeks 4–5: Tool catalog — define approved tools, document catalog maintenance process.
- Weeks 5–6: Monitoring setup — instrument agent gateways, configure alerts, define review cadence.
- Week 6+: Communicate to organization — announce framework, provide self-service path for Tier 1 submissions, set a sunset date after which unapproved agents are blocked.
A governance framework that is operational and enforced at 80% coverage is more valuable than a perfect framework that exists only in documentation. Start with the inventory, build the classification, and let the rest follow.