AI & Health

Half of Canadians Are Using AI for Health Advice. Most Don't Trust It.

Jamie Health · 6 min read


In February 2026, the Canadian Medical Association released findings from its Health and Media Tracking Survey: respondents who followed health advice from AI were five times more likely to report experiencing harm than those who did not.

Methodology note

The survey asked respondents whether they experienced a negative effect after following AI health advice. That is self-reported and correlational. It does not establish that AI caused the harm. What it does establish is that a meaningful number of Canadians are turning to tools they distrust and reporting negative results. That association is worth taking seriously regardless of causation.

The numbers

The survey, conducted by Abacus Data, found that 89% of Canadians go online for health information. About half are using AI tools to diagnose or treat their health issues. Only 27% trust AI to provide accurate health information.

Read those last two numbers together. People are using tools they don't trust because they have no better option.

The survey also found that 77% of Canadians are concerned about false health information coming from the United States, and 69% have become skeptical of all health information they find online. Not just AI. Everything.

The access gap is the root cause

CMA president Dr. Margot Burnell put it directly: "For years, we've been talking about how too many Canadians struggle to access health care when they need it. This leaves people little choice but to turn to dubious sources of information, and now we know that it is hurting them."

The survey result makes more sense in that context. It is not primarily a story about AI being bad. It is a story about a healthcare system with gaps wide enough that people turn to tools that were never built for clinical guidance, and get hurt as a result.

The CMA is calling for coordinated action from government, health providers, and patients to make AI a tool for reliable health information. People will keep asking health questions online. The question is what answers they get.

Not all AI health tools are the same

There is a meaningful difference between asking a general-purpose AI assistant about your symptoms and using a purpose-built, compliance-first health information tool.

General-purpose AI models are trained on broad internet data. They are typically not grounded in clinical protocols, and they often carry no mandatory safety architecture. A user asking whether their chest pain is serious may receive a confidently worded, probabilistically generated response with no escalation to emergency services. The response might be wrong. There is typically no audit trail and no licensed clinical standard behind it.

A responsible health information tool works differently. Clinical safety decisions run on a separate layer from the conversational response. Rule-based escalation operates independently of the language model: if a red-flag symptom pattern appears, it surfaces not as a result of what the AI decides, but as a deterministic check. The tool uses licensed clinical protocols rather than internet-trained pattern matching. It hedges: "based on what you've described, it may be worth seeking care today" rather than "you have X condition." Every output is traceable to its clinical source.
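
To make that separation concrete, here is a minimal sketch in Python of a rule layer that gates the language model. Everything in it is an assumption for illustration: the function names and escalation copy are hypothetical, and the hand-written keyword list is a stand-in for licensed clinical protocols.

```python
from dataclasses import dataclass

# Hypothetical red-flag rules, for illustration only. A real tool would
# draw these from licensed clinical protocols, not a keyword list.
RED_FLAG_TERMS = {
    "chest pain": "possible cardiac emergency",
    "difficulty breathing": "possible respiratory emergency",
    "weakness on one side": "possible stroke",
}

@dataclass
class TriageResult:
    escalate: bool
    reason: str | None = None

def deterministic_escalation_check(message: str) -> TriageResult:
    """Rule-based check that runs before any language model is consulted.

    Plain pattern matching: if a red flag matches, the tool escalates.
    The model never gets a vote, so it cannot override the escalation.
    """
    text = message.lower()
    for term, reason in RED_FLAG_TERMS.items():
        if term in text:
            return TriageResult(escalate=True, reason=reason)
    return TriageResult(escalate=False)

def generate_hedged_response(message: str) -> str:
    # Placeholder for a protocol-grounded, hedged model response.
    return "Based on what you've described, it may be worth seeking care today."

def respond(message: str) -> str:
    triage = deterministic_escalation_check(message)
    if triage.escalate:
        # Fixed escalation copy, not model-generated text.
        return (f"This could be a {triage.reason}. Please call 911 or go "
                "to the nearest emergency department now.")
    # Only messages that clear the rule layer ever reach the model.
    return generate_hedged_response(message)

print(respond("I've had chest pain since this morning"))
```

The design point is ordering. Because the rule layer runs first and its output is fixed copy, bypassing it would require a gap in the rules, not merely a confident model.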

Health Canada's guidance on software as a medical device is clear that intended use and how a product is marketed determine whether it falls under regulation. A tool positioned as health information and care navigation sits in a different regulatory category than one marketed as making clinical decisions. That boundary matters, and a well-built tool is explicit about it.

The CMA noted in its October 2025 submission to Innovation, Science and Economic Development Canada that AI in health must "strengthen trust, protect privacy and enhance health care." Those are architecture requirements, not marketing claims.

What to look for in an AI health tool

If you are going to use AI for health guidance, here are the questions worth asking:

Is it grounded in clinical protocols?

General-purpose AI generates responses from training data. A responsible health tool uses structured clinical decision logic from licensed protocols.

What happens when symptoms are serious?

Rule-based escalation that runs independently of the language model is a meaningful safety distinction. Ask how emergency escalation works and whether it can be bypassed by the AI.

Is the reasoning traceable?

You should be able to understand why the tool gave you the guidance it did. "Based on this clinical protocol" is a different answer than "based on patterns in my training data."
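
One way to make that traceability concrete is for every output to be a structured object that carries its clinical source alongside the text. A minimal sketch, with hypothetical field names and a made-up protocol identifier:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuidanceResponse:
    text: str              # what the user sees
    protocol_id: str       # which protocol produced the guidance
    protocol_section: str  # the specific rule or section applied
    urgency: str           # e.g. "self-care", "seek care today", "emergency"

response = GuidanceResponse(
    text="Based on what you've described, it may be worth seeking care today.",
    protocol_id="adult-headache-v3",  # hypothetical identifier
    protocol_section="4.2 persistent headache with fever",
    urgency="seek care today",
)

# Because the source travels with the answer, every response is auditable.
print(f"{response.text} [source: {response.protocol_id}, "
      f"{response.protocol_section}]")
```

If a tool cannot populate fields like these, its answer is coming from training-data patterns rather than a citable protocol.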

Does it tell you when to stop using it?

A responsible tool should actively direct you to in-person care when your symptoms warrant it. If a tool seems oriented toward extending the conversation rather than routing you to the right care, that is worth noticing.

Where is your data stored?

Canadian organizations remain accountable under PIPEDA even when data processing happens abroad. But health information held in another country can be subject to that country's laws. For sensitive health data, that distinction matters.
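
On the engineering side, data residency often comes down to explicit deployment settings that can be checked before anything ships. A hedged sketch; the keys, values, and region name below are assumptions, not any specific vendor's configuration schema:

```python
# Illustrative storage settings; not a real provider's schema.
STORAGE_CONFIG = {
    "region": "ca-central-1",           # a Canadian cloud region
    "cross_region_replication": False,  # keep data inside Canada
    "encryption_at_rest": True,
    "encryption_in_transit": True,
}

def validate_residency(config: dict) -> None:
    """Fail fast if a deployment would move health data out of Canada."""
    if not config["region"].startswith("ca-"):
        raise ValueError("Health data must be stored in a Canadian region.")
    if config["cross_region_replication"]:
        raise ValueError("Cross-region replication would leave Canada.")

validate_residency(STORAGE_CONFIG)
```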

Does it know what it is not?

A health information tool should not diagnose, prescribe, or replace a clinical consultation. How a tool describes itself is often the clearest signal of whether it understands its own regulatory position.

What the CMA's survey actually asks for

The CMA is calling on government and health providers to work together on AI tools that are reliable, trustworthy, and built for Canadian healthcare. It is specifically asking for investment in domestic sources: Canadian-built, Canadian-hosted tools that operate under Canadian clinical and privacy standards.

That is a call for a different kind of AI health tool. Not general-purpose. Not ungrounded. One that is honest about what it is and what it cannot do, that escalates when it should, and that makes its reasoning visible.

The 5x association is not an argument against AI in healthcare. It is an argument for building it right.

Sources

  1. Canadian Medical Association. "Doctors warn: Canadians are turning to AI for health information and it is hurting them." February 10, 2026.
  2. Office of the Privacy Commissioner of Canada. "Guidelines for processing personal data across borders."
  3. Health Canada. "Software as a Medical Device — Draft Guidance Document."

Disclosure: Jamie is a compliance-first AI health information and care navigation platform we are building in Canada. It provides urgency guidance based on clinical protocols. It does not diagnose, prescribe, or replace a clinical consultation. Learn about our trust and compliance architecture →
