AI Compliance
Readiness Assessment

5 questions. Know exactly where your AI stack stands — and what to fix first.

Question 1 / 5
What kind of data does your AI process?
PII or PHI
Names, emails, medical records, financial data
Internal business data
Employee records, contracts, IP, customer comms
General / public data
No sensitive or regulated content
Question 2 / 5
Which regulations apply to your AI use cases?
HIPAA
Healthcare data — highest enforcement risk
SOC 2 or GDPR
SaaS, EU customers, or enterprise sales
EU AI Act
AI deployed to EU users after Aug 2026
None yet
No regulatory obligations identified
Question 3 / 5
Do you have audit trails for AI decisions?
Yes — structured, queryable, retained
Every request logged with action, timestamp, and model
Partial — some logging, not compliance-grade
Application logs exist but not structured for audit
No — nothing tracked
No per-request logging of AI interactions
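A compliance-grade audit trail means each AI request is captured as a structured, queryable record rather than free-text application logs. A minimal sketch of such a record, assuming illustrative field names (`action`, `model`, `outcome` and the `audit_log_entry` helper are hypothetical; align them with your own framework's requirements):

```python
import json
import time
import uuid


def audit_log_entry(action, model, user_id, outcome):
    """Build one structured audit record for a single AI request."""
    return {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "action": action,       # e.g. "chat.completion"
        "model": model,         # which model served the request
        "user_id": user_id,     # who triggered it
        "outcome": outcome,     # e.g. "allowed", "blocked", "redacted"
    }


# Emit one JSON line per request so the trail stays machine-queryable.
entry = audit_log_entry("chat.completion", "gpt-4o", "user-123", "allowed")
print(json.dumps(entry))
```

Writing one JSON object per line keeps records retained and queryable with standard log tooling, which is the gap between "application logs exist" and "structured for audit."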
Question 4 / 5
Do you enforce runtime policy on AI outputs?
Yes — jailbreak detection + PII redaction active
Automated guardrails before content reaches users
Partial — system prompts only, no runtime filter
Instructions baked into prompts but no interception layer
No — raw LLM output sent to users
No filtering or policy enforcement in place
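Runtime enforcement means intercepting model output before it reaches users, not just instructing the model in a system prompt. A minimal redaction sketch, assuming regex-based matching of two common PII patterns (real guardrails would add jailbreak detection and broader entity coverage):

```python
import re

# Illustrative patterns only: emails and US SSN-shaped numbers.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]


def redact(text: str) -> str:
    """Replace matched PII spans before the response is returned."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

The point is the interception layer itself: output passes through `redact` (or a richer policy engine) on every request, so enforcement does not depend on the model honoring its prompt.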
Question 5 / 5
How do you handle AI incidents or policy violations?
Documented runbook — alert, contain, report
Written process with breach notification timeline
Ad hoc — we'd figure it out when it happens
No written process, relies on team judgment
No process — haven't thought about it
No plan for AI-specific incidents or violations