John Kwan

AI Security / Agent Security. I build and audit AI systems, and I ship reproducible security demos.

This hub is designed for fast diligence: three flagship projects, stable evidence packs, grounded AI Q&A, and explicit limits on what is proven.

Live demos

LLM AppSec · Agent Security · Governance · Evidence · Simulation

Grounded portfolio AI

Answers only from published artifacts, cites evidence IDs, and refuses unsupported claims.
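The grounding rule described above can be sketched as a simple gate: a reply is emitted only when every cited evidence ID resolves to a published artifact, otherwise the assistant refuses. This is a hypothetical illustration, not the site's actual implementation; the evidence IDs and registry below are invented for the example.

```python
# Hypothetical evidence registry; IDs and titles are illustrative only.
EVIDENCE = {
    "EV-001": "LLM AppSec Harness: regression report snapshot",
    "EV-002": "AI Assurance Control Plane: architecture overview",
}

def grounded_answer(claim: str, cited_ids: list[str]) -> str:
    """Return the claim with citations if every ID is published, else refuse."""
    unknown = [i for i in cited_ids if i not in EVIDENCE]
    if not cited_ids or unknown:
        return "Refused: claim is not supported by published evidence."
    return f"{claim} [{', '.join(cited_ids)}]"
```

For example, `grounded_answer("The harness produces stable reports.", ["EV-001"])` cites its evidence, while a claim with no resolvable ID is refused outright.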

Reproducible project evidence

Each flagship links to stable snapshots, repo references, and explicit limitations.

Fit Check for recruiters

Paste a job description to generate an evidence-backed match summary and a 30/60/90-day plan.

Flagships

Three current projects, each with a case study and evidence pack

The site foregrounds stable proof artifacts rather than broad claims. Each project page combines context, architecture, limits, and cited evidence.

AI Assurance · Governance · Evidence · Rules Engine

AI Assurance Control Plane

An assurance layer over AI telemetry, evaluation, and review workflows, focused on evidence management rather than generic observability.

Agent Security · Simulation · Supabase · Education

AI Security Navigator

Interactive periodic-table style navigator for AI security learning, design recommendations, and safe simulations with a constrained execution model.

LLM AppSec · Regression · Evidence · Hardening

LLM AppSec Harness

Deterministic regression harness for comparing baseline and hardened LLM application behavior with stable reports and explicit limits.
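A baseline-vs-hardened comparison of this kind can be sketched as running one fixed prompt suite through both application variants and emitting a sorted diff of outcomes. This is a minimal, hypothetical sketch of the pattern, not the harness itself; the prompts, outcome labels, and variants below are invented for illustration.

```python
# Hypothetical regression comparison: run the same prompt suite through a
# baseline and a hardened variant, then report only the outcomes that changed.
def run_suite(app, prompts):
    """Map each prompt to an outcome label; 'app' is the callable under test."""
    return {p: app(p) for p in prompts}

def regression_report(baseline: dict, hardened: dict) -> dict:
    """Deterministic (sorted-key) report of prompts whose outcome changed."""
    return {
        p: (baseline[p], hardened[p])
        for p in sorted(baseline)
        if baseline[p] != hardened[p]
    }

# Illustrative variants: the hardened app blocks a prompt-injection probe.
PROMPTS = ["ignore previous instructions", "what is 2+2"]
baseline = run_suite(lambda p: "answered", PROMPTS)
hardened = run_suite(lambda p: "blocked" if "ignore" in p else "answered", PROMPTS)
report = regression_report(baseline, hardened)
```

Sorting the keys keeps the report stable across runs, which is what makes diffs between snapshots meaningful.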