May 24, 2024
ChatGPT produces half-baked answers most of the time, but people trust it anyway
A new study from researchers at Purdue University has found that 52 percent of ChatGPT’s responses to programming questions contained misinformation. Even so, people were more likely to trust its answers because the chatbot is so damn polite and well-spoken. It’s the Buster Scruggs of AI.
The team examined ChatGPT’s attempts at 517 programming questions. Over half of the chatbot’s responses contained misleading information, and its answers were also significantly wordier than human-written solutions, with 77 percent deemed verbose. The researchers additionally found inconsistencies between ChatGPT’s responses and those given by human programmers.
An analysis of 2,000 randomly selected ChatGPT responses also revealed a distinct stylistic fingerprint: more formal, analytical language devoid of “negative sentiment.” The researchers suggest that this bland, overly optimistic tone is a feature of AI communication, often lacking the nuance and critical thinking found in human responses.
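To get a feel for what that kind of stylistic analysis looks like in practice, here is a minimal sketch that scores two made-up answers for length and sentiment using NLTK’s off-the-shelf VADER analyzer. The example answers and the choice of library are assumptions for illustration only, not details taken from the Purdue study.

```python
# Illustrative sketch only -- not the Purdue team's code or data.
# VADER is used here as a stand-in for whatever sentiment tooling the study employed.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Hypothetical answers to the same programming question.
chatgpt_answer = (
    "Certainly! You can absolutely solve this with a list comprehension, "
    "which offers a wonderfully concise and elegant approach to the problem."
)
human_answer = "Don't bother with the loop. Just use a list comprehension."

for label, text in [("ChatGPT", chatgpt_answer), ("Human", human_answer)]:
    scores = sia.polarity_scores(text)
    print(f"{label}: {len(text.split())} words, "
          f"negative sentiment = {scores['neg']:.2f}, positive = {scores['pos']:.2f}")
```

On toy inputs like these, the AI-style answer tends to come out longer and more relentlessly positive, which is the pattern the researchers describe.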
Despite the high error rate, a small survey run by the researchers found that 35 percent of participating programmers actually preferred ChatGPT’s answers. Participants also failed to catch nearly 40 percent of the AI’s mistakes.
“The follow-up interviews revealed that the polite language, articulated and textbook-style answers, and comprehensiveness were some of the main reasons that made ChatGPT answers look more convincing,” the researchers noted. In essence, programmers let their guard down because of ChatGPT’s pleasant demeanor and overlooked the fundamental inaccuracies in its answers.