
AI Security Intelligence

The AI security landscape.
Decoded and covered.

New frameworks, regulations, and research papers appear every month. It's hard to keep up. This page tracks every major AI security standard and shows exactly what each one requires — and how SharkRouter's ecosystem covers it.

Updated continuously · 32 risks mapped · 10 sources

32+
Risks Mapped
10
Sources Tracked
5
Products Providing Coverage

Standards & Frameworks

Industry-recognized security frameworks and taxonomies that define the AI threat landscape.

OWASP

OWASP LLM Top 10

The Open Worldwide Application Security Project's definitive list of the most critical security risks in LLM applications. Updated annually with input from 500+ security professionals.

Read in whitepaper →
12
entries

OWASP LLM01 — Prompt Injection

The #1 risk for LLM applications. ToolGuard's PIGuard and TaintGuard provide multi-layer prompt injection defense.

ToolGuard
✓ COVERED
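Multi-layer prompt injection defense typically starts with fast pattern screening before a prompt ever reaches the model. A minimal illustrative sketch of that first layer — the patterns, threshold, and function names here are assumptions for illustration, not PIGuard's or TaintGuard's actual implementation:

```python
import re

# Illustrative heuristics for common injection phrasings (assumed pattern list).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def injection_score(prompt: str) -> int:
    """Count how many known injection patterns the prompt matches."""
    return sum(1 for p in INJECTION_PATTERNS if p.search(prompt))

def screen_prompt(prompt: str, threshold: int = 1) -> bool:
    """Return True if the prompt passes screening, False if it should be blocked."""
    return injection_score(prompt) < threshold
```

A real multi-layer defense would combine heuristics like these with semantic classifiers and downstream taint tracking; pattern matching alone is easily evaded.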

OWASP LLM02 — Insecure Output Handling

Risks from directly consuming LLM outputs without sanitization. ToolGuard and Inspect validate outputs before they reach downstream systems.

Inspect · ToolGuard
✓ COVERED

OWASP LLM03 — Training Data Poisoning

Attacks that compromise AI behavior by poisoning training data. Warden detects poisoning indicators and Assurance verifies model behavior against them.

Warden · Assurance
✓ COVERED

OWASP LLM05 — Supply Chain Vulnerabilities

Risks from compromised or malicious components in the AI supply chain. Warden inventories supply-chain components and ToolGuard validates every tool before execution.

ToolGuard · Warden
✓ COVERED

OWASP LLM06 — Sensitive Information Disclosure

Preventing AI systems from leaking sensitive data in prompts, responses, or through side channels. SharkRouter's PII module provides real-time detection.

ToolGuard · Gulliver
✓ COVERED
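Real-time PII detection at a gateway is commonly built on typed pattern matching with redaction. A minimal illustrative sketch of the idea — the two detectors and the placeholder format are assumptions, not the actual PII module:

```python
import re

# Illustrative detectors for two common PII types (assumed patterns).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

Production detection adds checksum validation (e.g. for card numbers), context scoring, and locale-specific formats on top of raw patterns.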

OWASP LLM06 (2025) — Excessive Agency

When AI agents are granted capabilities beyond what is needed, or given autonomy to take impactful actions without proper safeguards. ToolGuard enforces least-privilege by design.

ToolGuard · Warden
✓ COVERED

OWASP LLM07 — Inadequate AI Agent Oversight

The risk of AI agents operating without sufficient monitoring and control. SharkRouter provides complete oversight through the governance pipeline.

ToolGuard · Warden · Inspect
✓ COVERED

OWASP LLM07 (2025) — System Prompt Leakage

System prompts containing sensitive business logic, API keys, or access patterns can be extracted by adversarial users. SharkRouter prevents prompt exfiltration at the gateway level.

ToolGuard · Inspect
✓ COVERED

OWASP LLM08 (2025) — Vector & Embedding Weaknesses

RAG systems can be exploited through poisoned embeddings, cross-tenant data leakage in shared vector stores, and adversarial retrieval manipulation.

ToolGuard · Warden · Gulliver
✓ COVERED

OWASP LLM09 — Overreliance

The risk of blindly trusting AI outputs without verification. Inspect and Assurance provide independent output validation.

Inspect · Assurance
✓ COVERED

OWASP LLM10 (2025) — Unbounded Consumption

AI agents that consume unlimited tokens, make runaway API calls, or exhaust resources without limits. SharkRouter's cost management and rate limiting enforce hard boundaries.

ToolGuard
✓ COVERED
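Hard consumption boundaries are commonly enforced with a token bucket: each request spends tokens, and the budget refills at a fixed rate. An illustrative sketch of the mechanism — the class, capacity, and refill rate are assumptions for illustration, not SharkRouter's actual rate-limiting configuration:

```python
import time

class TokenBucket:
    """Token-bucket limiter: rejects requests once the budget is exhausted."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity          # maximum burst budget
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity            # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

The same shape generalizes from request counts to per-call token or dollar costs by passing the actual cost as `cost`.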

OWASP Top 10 for LLM Applications — Full Coverage

The definitive risk taxonomy for LLM-powered applications. SharkRouter's ToolGuard addresses all 10 categories through deterministic function-call governance.

ToolGuard · Warden
✓ COVERED
MITRE

MITRE ATLAS

Adversarial Threat Landscape for AI Systems — MITRE's knowledge base of adversary tactics and techniques against machine learning systems, modeled after ATT&CK.

Read in whitepaper →
3
entries

MITRE ATLAS — Adversarial AI Techniques

The adversarial threat landscape for AI systems. ToolGuard's MoralCompass and Warden's posture assessment counter documented attack techniques.

ToolGuard · Warden
✓ COVERED

MITRE ATLAS — AI Attack Techniques

MITRE's adversarial threat landscape for AI systems. ToolGuard's multi-layer guards map directly to ATLAS attack vectors.

ToolGuard · Assurance
✓ COVERED

MITRE ATLAS — ML Supply Chain Compromise

ATLAS techniques for compromising ML supply chains. Warden's discovery and Assurance's verification provide defense in depth.

Warden · Assurance
✓ COVERED
NIST

NIST AI Risk Management

The National Institute of Standards and Technology's AI Risk Management Framework and adversarial ML taxonomy — the U.S. government's baseline for trustworthy AI.

Read in whitepaper →
4
entries

NIST AI 100-2 — Adversarial ML Taxonomy

NIST's taxonomy of adversarial machine learning attacks and mitigations. SharkRouter provides runtime defenses for the identified attack categories.

Inspect · ToolGuard
✓ COVERED

NIST AI 800-1 — Managing Misuse Risk for Dual-Use Foundation Models

NIST guidance on managing risks from powerful foundation models that can be used for both beneficial and harmful purposes.

Warden · ToolGuard
✓ COVERED

NIST AI 600-1 — AI RMF Generative AI Profile

NIST's risk management framework for generative AI. SharkRouter provides tooling support for the framework's Govern, Map, Measure, and Manage functions.

ToolGuard · Assurance · Warden
✓ COVERED

NIST SP 800-53 Rev 5 — Security & Privacy Controls

The comprehensive catalog of security and privacy controls for information systems. SharkRouter maps to key control families relevant to AI deployments.

ToolGuard · Inspect · Gulliver
✓ COVERED
ISO

ISO/IEC 42001

The international standard for AI Management Systems — providing requirements for establishing, implementing, and improving AI governance within organizations.

Read in whitepaper →
1
entry

ISO/IEC 42001 — AI Management System

The international standard for AI management systems. SharkRouter supports the plan-do-check-act cycle through continuous governance and verification.

Warden · Assurance · Inspect
✓ COVERED

Regulatory & Compliance

Government regulations, central bank directives, and international standards shaping AI governance requirements.

EU

EU AI Act

The European Union's comprehensive regulation on artificial intelligence — the world's first major AI law, establishing risk-based requirements for AI systems operating in the EU.

Read in whitepaper →
2
entries

EU AI Act — High-Risk AI Systems (Articles 6-9)

The EU's comprehensive AI regulation. SharkRouter's full ecosystem provides the technical controls required for high-risk AI system compliance.

ToolGuard · Warden · Inspect · Assurance · Gulliver
✓ COVERED

GDPR Article 25 — Data Protection by Design

The principle of data protection by design and by default, applied to AI systems. SharkRouter enforces privacy at the infrastructure level.

ToolGuard · Gulliver
✓ COVERED
BOI

Bank of Israel AI Directive

Bank of Israel's directive on AI governance in financial institutions — requiring explainability, human oversight, and risk controls for AI-driven financial decisions.

Read in whitepaper →
1
entry

Bank of Israel Circular — AI in Banking

Regulatory requirements for AI use in Israeli financial institutions. SharkRouter provides the technical controls required by the BOI's supervisory framework.

ToolGuard · Warden · Inspect · Assurance · Gulliver
✓ COVERED

Research & Industry

Cutting-edge research papers, threat intelligence reports, and practitioner insights from leading AI security experts.

DeepMind

Google DeepMind Research

Frontier AI safety research from Google DeepMind — including agent security taxonomies, red-teaming methodologies, and adversarial robustness studies.

Read in whitepaper →
2
entries

Google "Securing the AI Software Supply Chain"

Google's guidance on securing AI pipelines from model training to deployment. SharkRouter covers the runtime governance layer.

Warden · ToolGuard · Assurance
✓ COVERED

Google DeepMind "Defeating Agentic AI Traps" Paper

DeepMind identifies 6 trap types that exploit AI agents. SharkRouter's trap_defense module was built specifically to counter these attack patterns.

ToolGuard · Warden
✓ COVERED
O'Reilly

O'Reilly Media

Practitioner-focused analysis from O'Reilly's AI security experts — bridging academic research with real-world implementation challenges.

Read in whitepaper →
2
entries

O'Reilly "AI, A2A, and the Governance Gap"

O'Reilly analysis of agent-to-agent communication risks and the governance gap in multi-agent systems. SharkRouter's ToolGuard governs the A2A boundary.

ToolGuard · Warden · Assurance
✓ COVERED

O'Reilly "AI, A2A, and the Governance Gap"

Multi-agent systems create new adversarial surfaces. When agents delegate to agents, the attack surface multiplies without governance.

ToolGuard · Warden · Gulliver
✓ COVERED
Academic

Academic Research

Peer-reviewed papers from leading AI security researchers at top universities and research institutions.

Read in whitepaper →
2
entries

"Measuring Faithfulness in Chain-of-Thought Reasoning" (Lanham et al.)

Research showing that LLM chain-of-thought reasoning may not reflect actual model reasoning. Inspect provides independent decision tracing beyond CoT.

Inspect · Assurance
✓ COVERED

"Universal and Transferable Adversarial Attacks on Aligned LLMs" (Zou et al.)

The GCG attack paper showing that automatically generated adversarial suffixes transfer across aligned LLMs. ToolGuard's structural governance defeats suffix-based attacks.

ToolGuard · Warden
✓ COVERED
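Structural governance resists suffix attacks because the defense never interprets prompt text at all: every model-emitted function call is checked against an explicit allowlist and parameter schema before execution. A minimal illustrative sketch of that idea — the tool names, schema format, and function are hypothetical, not ToolGuard's actual mechanism:

```python
# Illustrative allowlist: tool name -> allowed parameter names and types (assumed schema format).
TOOL_SCHEMAS = {
    "get_weather": {"city": str},
    "search_docs": {"query": str, "limit": int},
}

def validate_call(name: str, args: dict) -> bool:
    """Approve a model-emitted tool call only if it matches the allowlist exactly."""
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        return False  # unknown tool: denied no matter how the prompt was phrased
    if set(args) != set(schema):
        return False  # missing or extra parameters
    return all(isinstance(args[k], t) for k, t in schema.items())
```

Because the check operates on the structured call rather than the prompt, an adversarial suffix that jailbreaks the model still cannot make an unlisted call pass validation.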
Industry

Industry Reports

Threat intelligence and best-practice reports from enterprise security teams, red teams, and AI security practitioners.

Read in whitepaper →
3
entries

CCPA/CPRA — AI Data Processing

California's privacy regulations applied to AI data processing. SharkRouter supports consumer data rights in AI pipelines.

ToolGuard · Gulliver
✓ COVERED

PCI DSS v4.0 — AI Data Handling

Payment card industry requirements applied to AI systems processing cardholder data. SharkRouter's PII module detects and blocks card data in AI pipelines.

ToolGuard · Gulliver
✓ COVERED

SLSA Framework — Supply Chain Levels for Software Artifacts

Google's framework for software supply chain integrity, applied to AI tool chains. Warden provides provenance tracking for AI components.

Warden · Assurance
✓ COVERED

Download the SharkRouter Whitepaper

Deep dive into SharkRouter's architecture, threat model, compliance posture, and the full framework coverage matrix behind this page.

  • Seven-layer governance architecture with ToolGuard internals
  • Full 32+ framework mapping with per-control coverage
  • Threat model and trap-defense taxonomy
  • Compliance posture across OWASP, NIST, EU AI Act, ISO/IEC 42001

Request-only. Each copy is individually watermarked and sent by our team after review — no instant download, no spam.

See the products behind the coverage.

Five products. One event stream. Complete governance from discovery to chaos testing.
