Anthropic Launches Claude AI for Healthcare with Secure Health Record Access
Anthropic has introduced Claude for Healthcare, a feature that lets users connect lab results and health records, pulling data through HealthEx, Function, and forthcoming Apple Health and Android Health Connect integrations. Claude can summarize medical histories, explain test results, and prepare questions for appointments. The feature aims to make patient-doctor communication more productive while emphasizing privacy protections and the limitations of AI. 2026-01-12 | Source: thehackernews.com

Artificial Intelligence / Healthcare

Anthropic has become the latest artificial intelligence (AI) company to announce a suite of features that helps users of its Claude platform better understand their health information.

Under an initiative called Claude for Healthcare, the company said U.S. subscribers of Claude Pro and Max plans can opt to give Claude secure access to their lab results and health records by connecting to HealthEx and Function, with Apple Health and Android Health Connect integrations rolling out later this week via its iOS and Android apps.

"When connected, Claude can summarize users' medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments," Anthropic said. "The aim is to make patients' conversations with doctors more productive, and to help users stay well-informed about their health."


The development comes merely days after OpenAI unveiled ChatGPT Health as a dedicated experience for users to securely connect medical records and wellness apps and get personalized responses, lab insights, nutrition advice, and meal ideas.

The company also pointed out that the integrations are private by design: users explicitly choose what information to share with Claude, and can edit or revoke Claude's permissions at any time. As with OpenAI's offering, health data is not used to train the company's models.

The expansion comes amid growing scrutiny over whether AI systems can avoid offering harmful or dangerous guidance. Recently, Google stepped in to remove some of its AI summaries after they were found providing inaccurate health information. Both OpenAI and Anthropic have emphasized that their AI offerings can make mistakes and are not substitutes for professional healthcare advice.

In its Acceptable Use Policy, Anthropic notes that for high-risk use cases involving healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance, a qualified professional in the field must review generated outputs "prior to dissemination or finalization."

"Claude is designed to include contextual disclaimers, acknowledge its uncertainty, and direct users to healthcare professionals for personalized guidance," Anthropic said.



Source: https://thehackernews.com/2026/01/anthropic-launches-claude-ai-for.html