Intelligence agencies and national security experts have warned that artificial intelligence could become a powerful tool in the hands of extremist armed groups, enabling them to recruit new members, produce fake images, and enhance their cyberattack capabilities.
National security experts believe the Daesh terrorist group recognized years ago that social media is an effective tool for recruitment and spreading disinformation, so it is unsurprising that the group is now experimenting with artificial intelligence, according to the Associated Press.
Last month, an individual posting on a website sympathetic to Daesh urged supporters to integrate artificial intelligence into their operations, writing in English, “One of the best things about AI is how easy it is to use.”
Intelligence agencies also fear that AI could be used to recruit extremists, turning some of their worst nightmares into reality, according to the report.
For extremist groups, as for any malicious actor, AI can be used to produce propaganda and fake images at scale.
John Laliberte, a former vulnerability researcher at the National Security Agency and current CEO of cybersecurity firm ClearVector, said, “For any adversary, artificial intelligence makes things much easier to carry out. Even a small group with limited financial resources can have an impact.”
Armed groups began using artificial intelligence after the launch of tools such as ChatGPT, and more recently they have relied increasingly on these tools to generate images and videos that look ever more realistic.
According to the report, combining such fabricated content with social media algorithms helps recruit new supporters, confuse and intimidate adversaries, and spread propaganda on a wide scale.
For example, AI-generated propaganda videos circulated on social media to attract recruits after a Daesh-claimed attack killed 140 people at a concert hall in Russia last year.
Cybercriminals are already using synthetic audio and video in phishing campaigns, impersonating senior corporate or government officials to gain access to sensitive networks.
They can also use artificial intelligence to write malicious software or automate certain aspects of cyberattacks.
Most concerning, according to the report, is the possibility that armed groups may try to use artificial intelligence to help produce biological or chemical weapons, compensating for their lack of technical expertise.
The risk has been included in the updated “Homeland Threat Assessment” released earlier this year by the U.S. Department of Homeland Security.
U.S. lawmakers have since taken action, introducing several proposals aimed at curbing the growing threat.
Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, put forward a proposal to facilitate information sharing among artificial intelligence developers on how their products are being used by malicious actors, including extremist groups, hackers, and spies.
During recent congressional hearings on extremist threats, U.S. lawmakers learned that Daesh and al-Qaeda held training workshops to help their followers learn how to use artificial intelligence.
Last month, the House of Representatives passed legislation requiring homeland security officials to conduct an annual assessment of the AI risks posed by such groups.