Health Evidence System

What is HES?

HES (Health Evidence System) is a clinical governance tool that evaluates AI-generated medical recommendations before they reach you. Think of it as a safety filter that checks whether each AI suggestion has sufficient evidence to support clinical action. Instead of presenting everything the AI generates as equally trustworthy, HES clearly marks which recommendations meet evidence standards and which don't.

What HES Does

HES reviews every medical statement an AI system generates and makes a governance decision:

  • Can this recommendation safely influence patient care?
  • Is there enough evidence to support taking action?
  • Are there important conditions or limitations?

Each recommendation gets one of four decisions:

Permitted

Evidence meets requirements. Safe to act on.

⚠️ Qualified

Permitted with specific conditions or limitations.

🚫 Restricted

Evidence is insufficient for clinical action in this context.

Refused

Conflicting or inadequate evidence. Not safe for clinical use.
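
For readers who prefer a concrete model, the taxonomy can be sketched as a small enumeration attached to each assertion. This is a minimal sketch; the type and field names below are illustrative assumptions, not an actual HES interface.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    """The four governance outcomes described above."""
    PERMITTED = "permitted"    # evidence meets requirements; safe to act on
    QUALIFIED = "qualified"    # permitted only under stated conditions
    RESTRICTED = "restricted"  # evidence insufficient for action in this context
    REFUSED = "refused"        # conflicting or inadequate evidence; not for clinical use


@dataclass
class GovernedAssertion:
    """One AI-generated medical statement plus the decision attached to it."""
    statement: str
    decision: Decision
    reason: str = ""  # one-line explanation, shown when the item is restricted
```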

Why HES Exists

AI can generate plausible-sounding medical recommendations that lack adequate evidence or may be inappropriate for specific contexts. HES exists to:

Protect patient safety

By blocking unsubstantiated recommendations

Save your time

By filtering out suggestions that shouldn't influence care

Maintain clinical standards

By enforcing evidence requirements

How It's Different from AI Chat

HES is not a chatbot. It doesn't have conversations or try to be helpful in the traditional AI assistant sense.

Instead, it acts as a clinical governance layer—similar to how your hospital's formulary restricts certain medications, or how clinical pathways guide care decisions. The difference: HES evaluates each specific assertion individually, not just broad recommendations.

What You See

When you use HES, you'll see:

Medical assertions

Individual clinical statements or recommendations

Governance badges

Clear labels showing what's permitted or restricted

Brief explanations

One-line reasons for restrictions (when applicable)

Evidence access

Links to view supporting research on demand

You can answer these questions in under 10 seconds:

• What is allowed?

• What is restricted?

• Why is it restricted?

• Where is the evidence?
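
As a rough sketch only (these field names are assumptions, not the actual HES data model), each of those four questions maps onto one piece of the row you're shown:

```python
from dataclasses import dataclass, field


@dataclass
class EvidenceLink:
    """On-demand pointer to supporting research (illustrative shape only)."""
    title: str
    url: str


@dataclass
class HesRow:
    """One displayed row: assertion, governance badge, brief reason, evidence access."""
    assertion: str                     # what is being recommended
    badge: str                         # what is allowed or restricted
    reason: str = ""                   # why it is restricted, when applicable
    evidence: list[EvidenceLink] = field(default_factory=list)  # where the evidence is
```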

What HES Doesn't Do

Doesn't show AI "confidence scores" – This isn't about how confident the AI is; it's about whether evidence supports action.

Doesn't explain AI reasoning – You won't see "the AI thought this because..." Instead, you see evidence-based governance decisions.

Doesn't soften restrictions – If something is restricted, it's clearly marked. No ambiguity.

Doesn't make clinical decisions for you – HES enforces evidence standards; you still apply clinical judgment.

Built for Clinical Workflow

HES is designed to feel like reviewing lab results, not chatting with an assistant:

Structured

Clear rows, states, and decisions

Scannable

See governance decisions at a glance

Predictable

Same interface every time

Defensible

Audit trail for compliance and legal review
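
To make the audit-trail idea concrete, here is one possible shape for an audit entry. The field names are illustrative assumptions rather than the actual HES schema; the example values come from the scenario in the next section.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    """Immutable record of one governance decision, kept for compliance review."""
    timestamp: datetime       # when the decision was made
    assertion: str            # the AI-generated statement that was evaluated
    decision: str             # "permitted", "qualified", "restricted", or "refused"
    shown_to_user: bool       # whether the item was surfaced in the interface
    reason: str = ""          # one-line rationale recorded with the decision


# Example: a restricted recommendation is still logged, even though it was
# displayed only with a clear "restricted" badge rather than as actionable.
entry = AuditEntry(
    timestamp=datetime.now(timezone.utc),
    assertion="Increase insulin by 20 units",
    decision="restricted",
    shown_to_user=True,
    reason="insufficient context-specific evidence",
)
```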

In Practice

Scenario: You're reviewing AI-generated recommendations for a diabetic patient.

"Recommend metformin 500mg twice daily"

Permitted (evidence-based, appropriate)

⚠️

"Consider SGLT2 inhibitor for cardiovascular benefit"

Qualified (appropriate with conditions)

🚫

"Increase insulin by 20 units"

Restricted (insufficient context-specific evidence)

What you do:

  • Act on Permitted items with confidence
  • Review conditions for Qualified items
  • Investigate or override Restricted items as clinically appropriate

What HES does:

  • Prevents unvetted recommendations from looking actionable
  • Shows you the evidence basis when you need it
  • Creates an audit trail of what was shown vs. blocked
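
Expressed as data, the same scenario looks like the sketch below. It is purely illustrative and reuses only the statements, decisions, and actions listed above.

```python
# Purely illustrative: the scenario above as (statement, decision) pairs,
# matched with the clinician-facing action each decision implies.
SCENARIO = [
    ("Recommend metformin 500mg twice daily", "permitted"),
    ("Consider SGLT2 inhibitor for cardiovascular benefit", "qualified"),
    ("Increase insulin by 20 units", "restricted"),
]

ACTION = {
    "permitted": "act on with confidence",
    "qualified": "review the attached conditions before acting",
    "restricted": "investigate or override as clinically appropriate",
}

for statement, decision in SCENARIO:
    print(f"{decision.upper():<10} {statement} -> {ACTION[decision]}")
```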

Bottom Line

HES is clinical governance for the AI era.

It ensures that AI-generated medical content meets evidence standards before influencing patient care—giving you confidence that what you see is substantiated, and clearly flagging what isn't.