Fact check: How reliable are AI fact checks?

"Hey, @grok, is it true?" Ever since Elon Musk's xAI launched its generative artificial intelligence chatbot Grok in November 2023, and especially since it was rolled out to all non-premium users in December 2024, thousands of X (formerly Twitter) users have been asking it this question to get quick fact checks on things they see on the platform.

A recent survey by TechRadar, a British online technology publication, found that 27% of Americans had used artificial intelligence tools such as OpenAI's ChatGPT, Meta's Meta AI, Google's Gemini or Microsoft's Copilot instead of traditional search engines such as Google or Yahoo.

But how accurate and reliable are the chatbots' responses? Many people have asked themselves this question in light of Grok's recent statements about a "white genocide" in South Africa. Beyond Grok's problematic stance on the topic, X users were also irritated that the bot began talking about the issue when asked about completely different subjects, such as in the following example:

Picture: X
Discussions around an alleged "white genocide" arose after the Trump administration admitted white South Africans to the United States as refugees. Trump claimed they were facing a "genocide" in their homeland, an accusation that lacks any evidence and that many link to the racist conspiracy myth of a "great replacement."

xAI blamed an "unauthorized modification" for Grok's obsession with the "white genocide" topic and said it had "conducted a thorough investigation."

But do such malfunctions happen regularly? How sure can users be of getting reliable information when they want to fact-check something with AI? We looked into this, and in this DW fact check we answer these questions for you.
Studies find factual errors and altered quotes
Two studies conducted this year by the British public broadcaster BBC and the Tow Center for Digital Journalism in the United States found significant shortcomings when it comes to the ability of generative AI chatbots to accurately relay news reporting.

In February, the BBC study found that "answers produced by the AI assistants contained significant inaccuracies and distorted content" produced by the organization. When it asked ChatGPT, Copilot, Gemini and Perplexity to answer questions about current news using BBC articles as sources, it found that 51% of the chatbots' answers had "significant issues of some form."

Nineteen percent of the answers added factual errors of their own, while 13% of quotes were either altered or not present at all in the cited articles.

"AI assistants cannot currently be relied upon to provide accurate news and they risk misleading the audience," said Pete Archer, the BBC's Programme Director for Generative AI.
AI provides incorrect answers with 'alarming confidence'
Similarly, research by the Tow Center for Digital Journalism published in the Columbia Journalism Review (CJR) in March found that eight generative AI search tools were unable to correctly identify the provenance of article excerpts in 60% of cases.

Perplexity performed best with a failure rate of "only" 37%, while Grok answered 94% of the queries incorrectly.

The CJR said it was particularly concerned by the "alarming confidence" with which the AI tools presented incorrect answers. "ChatGPT, for example, incorrectly identified 134 articles, but signaled a lack of confidence just fifteen times out of its two hundred [total] responses, and never declined to provide an answer," the report states.

Overall, the study found that the chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead," and that the AI search tools "fabricated links and cited syndicated and copied versions of articles."
AI chatbots are only as good as their 'diet'
And where do the AI chatbots get their information? They are fed from various sources such as extensive databases and web searches. Depending on how AI chatbots are trained and programmed, the quality and accuracy of their answers can vary.

"One issue that has emerged recently is the pollution of LLMs [large language models, editor's note] by Russian disinformation and propaganda. So there is clearly a problem with the 'diet' of the LLMs," Tommaso Canetta told DW. He is the deputy director of the Italian fact-checking project Pagella Politica and fact-checking coordinator at the European Digital Media Observatory.

"If the sources are reliable and of high quality, the answers will most likely be of the same kind," Canetta explained. He says he regularly comes across answers that are "incomplete, not accurate, misleading or even false."

In the case of xAI and Grok, whose owner, Elon Musk, is a fervent supporter of US President Donald Trump, there is a clear danger that the "diet" could be politically controlled, he said.
When AI gets it all wrong
In April 2024, Meta AI reportedly posted in a New York parenting group on Facebook that it had a disabled yet academically gifted child and offered advice on special schooling. The chatbot eventually apologized and admitted that it did not have "personal experiences or children," as Meta told 404 Media, which reported on the incident:

"This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we've constantly released updates and improvements to our models, and we're continuing to work on making them better," a spokesperson said in a statement.

In the same month, Grok misinterpreted a viral joke about a poorly performing basketball player and told users in its trending section that he was under police investigation after being accused of vandalizing houses with bricks in Sacramento, California. Grok had misunderstood the common basketball expression whereby a player who fails to land any of his shots is said to be "throwing bricks."

Other mistakes have been less amusing. In August 2024, Grok spread misinformation about the deadlines for adding US presidential candidates to the ballot in nine federal states following former President Joe Biden's withdrawal from the race.

In a public letter to Musk, Minnesota Secretary of State Steve Simon wrote that, within hours of Biden's announcement, Grok had generated false headlines claiming that Vice President Kamala Harris would be ineligible to appear on the ballot in several states, which was untrue.
Grok attributes the same AI-generated image to various real events
It is not just with news that AI chatbots appear to have difficulties; they also show serious limitations when it comes to identifying AI-generated images.

In a quick experiment, DW asked Grok to identify the date, location and origin of an AI-generated image of a fire at a destroyed aircraft hangar, taken from a TikTok video. In its responses and explanations, Grok claimed that the image showed several different incidents at several different locations, ranging from a small airfield in Salisbury to Denver International Airport in Colorado to Ho Chi Minh City, Vietnam.

There have indeed been accidents and fires at these locations in recent years, but the image in question showed none of them. DW strongly believes it was generated by artificial intelligence, something Grok seemed unable to recognize despite clear errors and inconsistencies in the image, including inverted tail fins on the airplane and illogical jets of water.

More than that, Grok recognized part of a "TikTok" watermark visible in the corner of the image and suggested that this "supported its authenticity." Conversely, under its "More details" tab, Grok noted that TikTok is "a platform often used for the rapid spread of viral content, which can lead to misinformation if not properly verified."

Similarly, just this week, Grok informed X users (in Portuguese) that a viral video purporting to show a giant anaconda in the Amazon, supposedly measuring several hundred meters (more than 500 feet) in length, was real, even though it had clearly been generated by artificial intelligence and Grok even recognized a ChatGPT watermark on it.
AI chatbots 'should not be seen as fact-checking tools'
AI chatbots may appear to be omniscient, but they are not. They make mistakes, misunderstand things and can even be manipulated. Felix Simon, postdoctoral research fellow in AI and digital news at the Oxford Internet Institute (OII), concludes: "AI systems such as Grok, Meta AI or ChatGPT should not be seen as fact-checking tools. While they can be used to that end with some success, it is unclear how well and how consistently they fulfill this task."

For Canetta at Pagella Politica, AI chatbots can be useful for very simple fact checks. But he advises people not to trust them blindly. Both experts stressed that users should always double-check the answers with other sources.