AI for Business

Meta's Friendly AI Chatbots Are Offering Medical Guidance. Experts Say It's a Problem.

Meta is promoting a suite of AI personas as digital companions for fitness and wellness. Yet these chatbots, accessible to billions on Instagram and other apps, are dispensing specific health suggestions that medical professionals find concerning. An investigation by Digital Trends reveals the systems propose diets, supplements, and exercise plans without accounting for a user's individual health status, sometimes contradicting established medical knowledge or inventing claims.

The issue is amplified by the chatbots' design. Unlike a search engine, personas with names like Muse and Spark are built as relatable characters with backstories, fostering a sense of personal connection. This engineered familiarity makes users more likely to trust and follow their guidance, despite disclaimers labeling the bots as AI.

In practice, the chatbots rarely deflect dangerous premises. When asked about risky diets or supplements, they typically offer implementation tips instead of cautions. The systems are optimized for engagement and conversation flow, making them structurally inclined to provide an answer, not necessarily a safe one.

While Meta states its AI should not be treated as a professional, studies show such warnings have little effect on user behavior, particularly among younger audiences. The company has not detailed specific guardrails for health conversations within this platform. Regulatory bodies in the EU and US are now examining how to classify and oversee such AI-generated health content.

The core tension remains: these systems operate in a legal gray zone between general wellness and medical advice, a distinction meaningless to someone seeking help. Technical solutions exist, like hard-coded refusals or medical database checks, but they create friction in a business model built on seamless engagement. For now, the polished, friendly chatbots continue talking, while the responsibility for inaccurate health guidance falls to the users who follow it.
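To make the "hard-coded refusal" idea concrete, here is a minimal sketch of what such a guardrail might look like. Everything below is hypothetical illustration, not Meta's actual implementation: the keyword list, the `guard_reply` function, and the `generate_reply` callback are all assumptions introduced for this example.

```python
# A minimal sketch of a hard-coded refusal guardrail for health topics.
# All names and the keyword list are hypothetical, not any platform's
# actual implementation.

HEALTH_RISK_TERMS = {
    "supplement", "diet", "dosage", "fasting", "medication", "symptom",
}

REFUSAL = (
    "I can't offer personalized health guidance. "
    "Please consult a qualified medical professional."
)

def guard_reply(user_message: str, generate_reply) -> str:
    """Intercept health-related prompts before the model answers.

    `generate_reply` stands in for whatever function produces the
    chatbot's normal response.
    """
    words = user_message.lower().split()
    if any(term in words for term in HEALTH_RISK_TERMS):
        return REFUSAL
    return generate_reply(user_message)
```

Even this toy version shows the trade-off the article describes: a blunt keyword filter will interrupt conversations that were harmless (anyone mentioning "diet" in passing), which is exactly the kind of friction an engagement-optimized product is designed to avoid.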

Source: Webpronews
