Throughout history, truth has frequently been manipulated. Damnatio memoriae, the deliberate erasure of individuals from the historical record, is a practice that dates back to Ancient Egypt. Today, owing to rapid advances in information and communication technologies (ICT) and their increasing pervasiveness, disingenuous information can be produced easily and in realistic formats, including with artificial intelligence (AI) techniques, and disseminated to targeted audiences at unprecedented speed and scale. The world is currently contending with an influx of AI-powered websites and multimedia that spread false information and remove human judgement from the process. False information and rumours have been spreading rapidly for years; AI now acts as an amplifier of such events, posing a challenge for the entire globe. Without question, the human mind is a tremendous tool, and no AI-powered website can match its knowledge and intelligence. Yet we rely on such machines and on automated AI output, believing them superior to the human minds that invented them, which is erroneous.
Websites powered by AI use algorithms and user history to generate material tailored to each user's existing biases, thereby spinning a false story. Such AI-generated material narrows users' perspectives and aids the spread of misinformation and fake news. AI methods contribute to the online disinformation epidemic in two ways. First, AI techniques open up new possibilities for producing or editing text, images, audio, and video content. Second, the efficient and rapid spread of misinformation online is considerably facilitated by the AI technologies that online platforms create and deploy to increase user engagement. It is these latter methods that cause most of the difficulty. This circumstance has numerous ethical ramifications that need to be carefully considered.
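The engagement-driven narrowing described above can be illustrated with a minimal sketch. This is a deliberately simplified toy model, not any platform's actual ranking algorithm: the catalogue, topic labels, and scoring rule are all assumptions made for illustration. Items are scored purely by how often their topic appears in the user's click history, so the feed collapses toward a single topic after only a few interactions.

```python
# Toy sketch of an engagement-driven ranker (illustrative only; the
# catalogue, topics, and scoring rule are assumptions for this example).
from collections import Counter

CATALOG = [
    ("a1", "politics"), ("a2", "politics"), ("a3", "science"),
    ("a4", "science"), ("a5", "sports"), ("a6", "politics"),
]

def rank(catalog, click_history):
    """Score each item by how often its topic appears in the click history."""
    topic_counts = Counter(topic for _, topic in click_history)
    # Counter returns 0 for unseen topics, so unfamiliar items sink.
    return sorted(catalog, key=lambda item: topic_counts[item[1]], reverse=True)

history = []
for _ in range(3):
    # Recommend unseen items, ranked by similarity to past clicks.
    feed = [item for item in rank(CATALOG, history) if item not in history]
    history.append(feed[0])  # the user clicks the top recommendation

topics = [topic for _, topic in history]
print(topics)  # → ['politics', 'politics', 'politics']
```

After one initial click on a "politics" item, every subsequent top recommendation is also "politics": the ranker never resurfaces the other topics, which is the filter-bubble dynamic the text describes.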
The widespread, unchecked, and often systematic dissemination of false and misleading information online and through social media poses a significant risk to society. Current countermeasures based on journalistic corrections do not appear to scale, as a variety of cognitive, social, and computational biases contribute to the spread of digital disinformation. Computational social scientists are uniquely positioned to contribute to the fight against fake news in two ways: first, by elucidating the fundamental mechanisms that make us susceptible to online disinformation; and second, by developing practical methods to refute it.