
ChatGPT, DeepSeek distort scientific data

AI models such as ChatGPT and DeepSeek have been found to distort scientific content when summarizing it, especially in the medical field.

Báo Khoa học và Đời sống - 08/07/2025

A group of researchers in Germany recently warned of the risk of bias when AI chatbots are used to condense research. After analyzing 4,900 human-written scientific abstracts, the team ran them through multiple AI models to compare how these systems handle the information. The results show that most chatbots overgeneralized, even when explicitly prompted to summarize accurately.


AI is prone to bias when summarizing scientific research.

In tests, the AI models made five times more generalization errors than human experts when given no specific instructions. Even when explicitly asked to be accurate, their error rate was twice that of a standard summary request. “Generalizations sometimes seem harmless, but they actually change the nature of the original research,” said one of the researchers. “That's a systematic bias.”

Notably, newer chatbot versions have not fixed the problem; they have actually made it worse. Because their language is smooth and engaging, AI-generated summaries can easily appear credible even when the actual content has been distorted. In one instance, DeepSeek changed the phrase “safe and can be successfully performed” to “safe and effective treatment”, a reinterpretation that overstates the original study's conclusions.

In another example, the Llama model recommended a diabetes medication for young people without specifying dosage, frequency, or side effects. If a physician or other health care professional relies on such a summary without checking it against the original research, it can pose a direct risk to patients.

Experts say the phenomenon is rooted in how AI models are trained. Many chatbots are trained on secondary material, such as popular-science news, that has already been condensed. As AI keeps summarizing content that is itself a summary, the risk of distortion compounds, as the sketch below illustrates.
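
To make that compounding effect concrete, here is a minimal illustrative sketch, not code from the study itself. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name, the summarize helper, and the example sentence are all placeholders chosen for illustration. The loop re-summarizes an already-condensed text several times, the situation in which hedged qualifiers tend to erode.

```python
# A minimal illustrative sketch, not code from the study. Assumes the
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable; the model name and example sentence are placeholders.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Ask the model for a one-sentence summary, with an explicit
    instruction not to generalize beyond the text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize in one sentence. Do not generalize "
                        "beyond what the text explicitly states."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# A deliberately hedged, study-style sentence (invented for illustration).
text = ("In this trial of 120 adults aged 40-65, the procedure was safe "
        "and could be successfully performed in 92% of cases.")

# Each pass summarizes the previous summary, mimicking summarization of
# already-condensed secondary sources. The thing to watch is whether the
# qualifiers ("in this trial", "could be", "92% of cases") survive.
for step in range(1, 4):
    text = summarize(text)
    print(f"Pass {step}: {text}")
```

The point of the exercise is not any single run's output but the pattern: each pass gives the model another chance to drop a population qualifier or turn a conditional claim into a blanket one, which is exactly the kind of silent scope-widening the researchers flagged.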

Experts working on AI in mental health say that technical guardrails need to be built in early, during both the development and the deployment of AI.


Users need to be wary as chatbots can easily distort content.

As users increasingly rely on AI chatbots to learn about science, small errors in interpretation can quickly accumulate and spread, leading to widespread misperceptions. At a time when trust in science is declining, this risk becomes even more worrying and deserves serious attention.

The integration of AI into research and the dissemination of knowledge is an irreversible trend. However, experts affirm that technology cannot replace the human role in understanding and verifying scientific content. When chatbots are used in high-stakes areas such as medicine, accuracy should be the top priority, not fluent language or response speed.


Source: https://khoahocdoisong.vn/chatgpt-deepseek-bop-meo-du-lieu-khoa-hoc-post1552971.html

