HES (Health Evidence System) is a clinical governance tool that evaluates AI-generated medical recommendations before they reach you. Think of it as a safety filter that checks whether each AI suggestion has sufficient evidence to support clinical action. Instead of showing you everything the AI generates, HES only presents recommendations that meet evidence standards—and clearly labels those that don't.
HES reviews every medical statement an AI system generates and assigns each recommendation one of four governance decisions:
Permitted – Evidence meets requirements. Safe to act on.
Qualified – Permitted, with specific conditions or limitations.
Restricted – Evidence is insufficient for clinical action in this context.
Refused – Conflicting or inadequate evidence. Not safe for clinical use.
AI can generate plausible-sounding medical recommendations that lack adequate evidence or may be inappropriate for specific contexts. HES exists to:
Protect patient safety – by blocking unsubstantiated recommendations.
Save your time – by filtering out suggestions that shouldn't influence care.
Maintain clinical standards – by enforcing evidence requirements.
HES is not a chatbot. It doesn't have conversations or try to be helpful in the traditional AI assistant sense.
Instead, it acts as a clinical governance layer—similar to how your hospital's formulary restricts certain medications, or how clinical pathways guide care decisions. The difference: HES evaluates each specific assertion individually, not just broad recommendations.
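To make the per-assertion point concrete, here is a hedged sketch reusing the GovernanceResult shape above; the `evaluate` function is hypothetical, and the sentence split is a crude stand-in for whatever assertion extraction HES actually performs:

```typescript
// Illustrative only: each assertion in an AI response gets its own decision,
// rather than the response receiving one blanket verdict.
declare function evaluate(assertion: string): GovernanceResult; // hypothetical HES call

function governResponse(aiResponse: string): GovernanceResult[] {
  const assertions = aiResponse
    .split(/(?<=\.)\s+/)             // naive sentence split, stand-in for real assertion extraction
    .filter((s) => s.trim().length > 0);
  return assertions.map(evaluate);   // one governance decision per assertion
}
```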
When you use HES, you'll see:
Medical assertions – individual clinical statements or recommendations.
Governance badges – clear labels showing what's permitted or restricted.
Brief explanations – one-line reasons for restrictions (when applicable).
Evidence access – links to view supporting research on demand.
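One way to picture how these elements combine into a single row; this is an assumed plain-text rendering, not the actual HES interface:

```typescript
// Hypothetical plain-text rendering of one governance row
// (uses the GovernanceResult type from the sketch above).
function renderRow(r: GovernanceResult): string {
  const badge = `[${r.decision.toUpperCase()}]`;      // the governance badge
  const reason = r.reason ? ` – ${r.reason}` : "";    // brief explanation, when applicable
  const evidence =
    r.evidenceLinks.length > 0 ? ` (${r.evidenceLinks.length} evidence sources)` : "";
  return `${badge} ${r.assertion}${reason}${evidence}`;
}
```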
You can answer these questions in under 10 seconds:
• What is allowed?
• What is restricted?
• Why is it restricted?
• Where is the evidence?
Equally important is what HES does not do:
Doesn't show AI "confidence scores" – This isn't about how confident the AI is; it's about whether evidence supports action.
Doesn't explain AI reasoning – You won't see "the AI thought this because..." Instead, you see evidence-based governance decisions.
Doesn't soften restrictions – If something is restricted, it's clearly marked. No ambiguity.
Doesn't make clinical decisions for you – HES enforces evidence standards; you still apply clinical judgment.
HES is designed to feel like reviewing lab results, not chatting with an assistant:
Structured – clear rows, states, and decisions.
Scannable – see governance decisions at a glance.
Predictable – same interface every time.
Defensible – audit trail for compliance and legal review.
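On the defensibility point, an audit record might capture something like the following; this is a sketch under assumed field names, not HES's actual log schema:

```typescript
// Hypothetical audit record for one governance decision; field names are
// assumptions, not HES's actual log schema.
interface AuditEntry {
  timestamp: string;            // ISO 8601, e.g. "2025-01-15T14:32:00Z"
  assertion: string;            // the statement that was evaluated
  decision: GovernanceDecision; // Permitted | Qualified | Restricted | Refused
  reason?: string;              // recorded rationale for anything short of Permitted
  evidenceLinks: string[];      // sources consulted at decision time
}
```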
Scenario: You're reviewing AI-generated recommendations for a diabetic patient.
"Recommend metformin 500mg twice daily"
Permitted (evidence-based, appropriate)
"Consider SGLT2 inhibitor for cardiovascular benefit"
Qualified (appropriate with conditions)
"Increase insulin by 20 units"
Restricted (insufficient context-specific evidence)
What you do: Act on the permitted recommendation, review the conditions attached to the qualified one, and do not act on the restricted dose change without further context-specific evidence. Clinical judgment remains yours throughout.
What HES does: Evaluates each assertion against evidence standards, applies the governance badge with a one-line reason where applicable, and records the decision for the audit trail.
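For illustration, the three rows in this scenario could be expressed with the hypothetical GovernanceResult shape and renderRow helper sketched earlier; the evidence URLs are placeholders, not real sources:

```typescript
// The scenario above, expressed with the hypothetical GovernanceResult shape
// and renderRow helper sketched earlier. Evidence URLs are placeholders.
const rows: GovernanceResult[] = [
  {
    assertion: "Recommend metformin 500 mg twice daily",
    decision: "Permitted",
    evidenceLinks: ["https://example.org/evidence/metformin"], // placeholder
  },
  {
    assertion: "Consider SGLT2 inhibitor for cardiovascular benefit",
    decision: "Qualified",
    reason: "Appropriate with conditions",
    evidenceLinks: ["https://example.org/evidence/sglt2"], // placeholder
  },
  {
    assertion: "Increase insulin by 20 units",
    decision: "Restricted",
    reason: "Insufficient context-specific evidence",
    evidenceLinks: [],
  },
];

rows.forEach((r) => console.log(renderRow(r))); // one scannable line per decision
```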