The Weaponization of Artificial Intelligence (AI)

Artificial Intelligence (AI) is a revolutionary technology impacting people, organizations, and governments across the globe. The vast majority of industries and sectors have been directly or indirectly affected by the emergence of AI. Automation of jobs and processes, enhanced technological systems, and improved efficiency are all outcomes of AI implementation. For some, AI integration has produced positive results, such as the automation of repetitive tasks. For others, AI implementation has wreaked havoc, such as on the battlefield with Unmanned Aerial Systems (UAS). As with all emerging technologies, the extent to which AI will impact our world has yet to be fully explored. If AI works as developers promise, it has the potential to enhance our quality of life; however, the potential to weaponize AI is a reality that cyber professionals must be aware of and take steps to protect against.

AI Weaponized Against Us

AI is a revolutionary technology with extensive applications, including nefarious ones. Three key areas of potential weaponization are AI-enabled cyber attacks, AI-generated manipulative content, and AI-empowered weapons deployed to cause harm. While these three areas are not an exhaustive list of AI's weaponization potential, they are the areas of most immediate concern to industry leaders, policymakers, and concerned citizens. AI is a dual-use technology: the same tools that can enhance a supply chain's efficiency can also be used to cause great harm to humankind. Additionally, the application, development, and management of AI technologies present numerous ethical considerations that go well beyond the scope of established cyber policy (Clancy, Bode, and Zhu 2023). Given the potential for weaponization, it is important that stakeholders are aware of both the opportunities and the threats that AI presents to the modern world.

AI-Enabled Cyber Attacks

Historically, cyber attacks were directed by humans, often operating remotely, far from the target's physical location. These human-directed hacks would use compromised credentials to log into a computer-based system, or launch a pre-orchestrated attack to get around the network's cybersecurity features. AI-facilitated cyber attacks have capabilities that greatly expand on those of traditional, human-directed attacks (Guembe et al. 2022). Instead of being limited by static programming or human cognition, AI-enabled cyber attacks can learn as they go, evolving and responding to new information in ways that allow them to work around the cyber defenses they encounter during a breach (Guembe et al. 2022). This level of intelligent intrusion greatly exceeds anything previously encountered with traditional cyber hacks.

AI does not have to be directing a cyberattack to be assisting in it. Another notable way AI enables cyber attacks is by lowering the technical skill required to launch and execute one (Zucca and Fiorinelli 2025). Thanks to AI, relatively novice cyber criminals can commit very disruptive cyber crimes, especially if the target lacks adequate cybersecurity. The upside is that AI can also be used to strengthen an organization's cybersecurity systems (Hurlburt 2024) and to enhance the effectiveness of its response to cyber attacks (Javed 2021). This requires strategic integration into organizations' and networks' information systems, and given how new these applications are, cyber professionals will need to carefully implement, monitor, and revise their AI-assisted cybersecurity strategies as new information and developments emerge. If the AI tools deployed to protect a network are not themselves secure, they add a new vulnerability to the overall information system.
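To make the defensive side concrete, below is a minimal sketch of one common approach: training an unsupervised anomaly detector on historical network telemetry so that unusual activity can be flagged for human review. The feature set, thresholds, and data here are illustrative assumptions, not a production design or any specific vendor's method; the sketch uses scikit-learn's IsolationForest.

```python
# Minimal sketch: flag anomalous login events with an unsupervised model.
# Assumptions: the three features and all data are synthetic stand-ins for
# real telemetry; a production system would need curated features, tuning,
# and analyst review of every flagged event.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" logins: (hour of day, failed attempts, MB transferred)
normal_history = np.column_stack([
    rng.normal(13, 3, 500),   # activity clustered around business hours
    rng.poisson(1, 500),      # occasional failed attempts
    rng.normal(50, 15, 500),  # typical data-transfer volume
])

# Train on presumed-normal history; contamination is an assumed anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_history)

# Score new events: a 3 a.m. login with many failures and a huge transfer,
# versus a routine midafternoon session.
events = {
    "suspicious": np.array([[3, 12, 900]]),
    "routine": np.array([[14, 0, 45]]),
}
for label, event in events.items():
    verdict = "ANOMALY" if detector.predict(event)[0] == -1 else "normal"
    print(f"{label} event -> {verdict}")
```

The design choice matters: a tool like this only surfaces candidates, and human analysts make the call, which is exactly the careful implementation and monitoring described above. It also illustrates the paragraph's closing warning, since the detector itself becomes part of the attack surface and must be secured and retrained as traffic patterns change.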

AI-Generated Manipulative Content  

There is an extensive amount of content online that people consume every day, and some of that content is not real. AI can be used to create and disseminate false, manipulative information such as deepfakes and mis/disinformation (Hurlburt 2024). This content can be used by bad actors, criminals, terrorist groups, or foreign adversaries to mislead American citizens and allies toward harmful end states. Thanks to AI's advanced capabilities, a deepfake video of Queen Elizabeth performing a TikTok dance may appear to the viewer to be real, but it is not (Mogg 2020). Additionally, AI can be used to create large amounts of mis/disinformation about current events, trusted institutions, and public health and safety, intended to steer viewers toward false narratives and beliefs. This manipulative information can spur people to actions that are not based on facts, as observed with the violence in Myanmar (Whitten-Woodring et al. 2020). Just because false information is posted online does not mean its impacts are restricted to cyberspace.

A key challenge in curbing AI-generated manipulative content is the protection of free speech, which is often invoked when attempting to address these AI-related cyber threats. For example, a social media post that shares disinformation may be protected under the First Amendment, even if it is inaccurate and manipulative (Huang 2022). In the United States, the government cannot simply remove deepfakes or mis/disinformation from the Internet. While some of the fake content comes from foreign adversaries and terrorist groups, it is shared across citizens' personal social media accounts and may contribute to citizen-created follow-on content. Thus, censorship is not a feasible solution for combating AI-generated manipulative content.

AI-Empowered Weapons

The potential for weapons to be AI-empowered opens up a much broader aperture for the future of warfare and terrorism. AI-empowered weapons may far exceed traditional weapons in lethality and destructive capability. Examples of AI on the modern battlefield include AI-generated target recommendations in the Middle East and unmanned vehicles in Eastern Europe (Pusztaszeri and Harding 2025). Critics of these AI-empowered defense technologies argue that the tools have not been properly tested to ensure effectiveness and, given their high degrees of lethality, should not be released onto the battlefield without lengthy periods of testing (Pusztaszeri and Harding 2025). AI also allows for greater autonomy in weapons systems, meaning there may not be a human operator managing an attack from end to end (Bondar 2025). This type of autonomous weapons system is unlike anything warfare has ever seen.

Additionally, the development of AI-empowered weaponry presents a multitude of ethical and legal conundrums about the level of human responsibility that applies to autonomous weapons systems (Caruso 2024). For example, if an automated drone swarm not under the control of a human operator destroys a civilian hospital, resulting in hundreds of casualties, who is responsible for the attack? Is it the country or the terrorist group that purchased the drone system? With AI, there may not be a specific drone operator, or even a specific military unit or nation-state, to identify as the responsible party. Such expanded technological capabilities are likely to shape the development of warfare, views on casualties, and overall perspectives on using lethal force against humans. The availability and applications of AI-empowered weapons complicate the justification of killing in the world of defense (Calhoun 2021). The technology has the potential to change the perspective of the parties involved, and to require updates to the ethical codes and doctrines that govern more traditional warfare, such as hand-to-hand combat.

Countering Weaponized AI Threats

The weaponization of AI presents many challenges to the modern and future world. As the examples above show, AI is already being weaponized in a variety of ways across multiple sectors, and the full extent of its weaponization potential has yet to be realized. Three solutions proposed to confront these challenges are: 1) cessation of AI commercialization until further research can be conducted, 2) enhanced laws and policies related to AI, and 3) increased education regarding AI applications and their subsequent impacts.

Option 1: Curbing Commercial Access

The first proposed solution, halting AI commercialization until further research can be done, would allow academia, lawmakers, and regulatory bodies to further explore the weaponization potential of AI and its attendant concerns. It would also allow multiple stakeholders to collaborate on effective guardrails to protect people from the negative impacts of AI weaponization. Given the extensive availability of AI technology across the globe, however, this proposal has likely already passed its expiration date. Countries around the world are actively engaged in AI development and application, from the battlefield to the boardroom; disrupting our own nation's commercial exploration of AI could therefore erode its technological capabilities and put us at a disadvantage in the event of an attack by our adversaries.

Bottom-line: Not a good option. 

Option 2: Increased Regulations

The second proposed solution, enhancing laws and policies related to AI, is certainly needed, as general technology, information, and communications regulations may not encompass the full scope of AI applications. For example, laws regarding the use of lethal force by the U.S. military assume there is a human soldier directing the act of aggression; but what if a target is killed via an autonomous lethal weapons system? School policies regarding cyberharassment assume the originator of such content is a human in proximity, such as another student; but what if the harassment is a deepfake originating from another country? What if an illegal media operation is using AI to steal and republish an author's written works? These are all examples of gaps in the current legal structure related to AI. Enhancing AI-related policies and laws is a needed initiative; however, such additions to regulatory frameworks will take time, and will likely lag far behind the technology's rapid development.

Bottom-line: Not a timely option. 

Option 3: More User Education 

The third proposed solution is increased education across our population regarding AI applications and their subsequent impacts. Many people are unaware of the weaponization potential of AI. This lack of awareness extends beyond consumers to industry professionals, elected officials, and political leaders. People see the benefits of AI in our world, such as increased efficiency, but fail to see its potential for malicious application. Unfortunately, this awareness often comes only after something devastating has happened, such as a disruptive, large-scale cyberattack on critical infrastructure. Educating Americans on the weaponization potential of AI could go a long way toward informing and equipping the public to navigate this new era of technology safely. Such education could come in the form of public service announcements (PSAs), K-12 initiatives, county extension programs, and other accessible methods of information delivery. The downside to this approach is deciding who sets the messaging. Should AI companies determine what is distributed, or should nonprofits or the government? And who is responsible for financing such a wide-scale solution: taxpayers, or the technology companies that stand to profit from AI's commercial products?

Bottom-line: This option needs some exploration and action. 

Conclusion: Pandora’s Box of AI Has Been Opened

There is no one-size-fits-all solution for addressing the weaponization potential of AI in the modern world. Given the interconnectedness of our society and economy and their reliance on technology, the casualties of AI weaponization will likely extend beyond a traditional battlefield and impact civilians and sectors outside of defense. Thus, it is critical that stakeholders recognize the current and future applications of AI weaponization and work to identify and implement effective solutions. Humankind has not yet experienced the full potential of AI-empowered harm, and waiting for the destruction to take place before identifying safeguards and responses will only set society up to fail. AI ushers in a new era of cyber threats that requires innovative solutions. This revolutionary technology has already been unleashed on the world and cannot simply be suspended. Humans across the globe must contend with the realities of AI's Pandora's box, and it would be wise to be proactive about addressing the threats it presents alongside the opportunities.

References

Bondar, Kateryna. 2025. Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare. Center for Strategic and International Studies (CSIS), March 6, 2025. https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare 

Calhoun, Laurie. 2021. “The Real Problem with Lethal Autonomous Weapons Systems (LAWS).” Peace Review 33 (2): 182–89. https://doi.org/10.1080/10402659.2021.1998746

Caruso, Catherine. 2024. “The Risks of Artificial Intelligence in Weapons Design.” Harvard Medical School News, August 7, 2024. https://hms.harvard.edu/news/risks-artificial-intelligence-weapons-design

Clancy, Rockwell, Ingvild Bode, and Qin Zhu. 2023. “The Need for and Nature of a Normative, Cultural Psychology of Weaponized AI (Artificial Intelligence).” Ethics and Information Technology 25 (1).

Guembe, Blessing, Ambrose Azeta, Sanjay Misra, Victor Chukwudi Osamor, Luis Fernandez-Sanz, and Vera Pospelova. 2022. “The Emerging Threat of AI-Driven Cyber Attacks: A Review.” Applied Artificial Intelligence 36 (1): 1–34. https://doi.org/10.1080/08839514.2022.2037254

Huang, Tzu-Chiang. 2022. “Private Censorship, Disinformation and the First Amendment: Rethinking Online Platforms Regulation in the Era of a Global Pandemic.” Michigan Technology Law Review 29, no. 1: 137. https://doi.org/10.36645

Hurlburt, George. 2024. “An Ethical Trio—AI, Cybersecurity, and Coding at Scale.” IT Professional 26, no. 2 (March–April): 4–9. https://doi.org/10.1109/MITP.2024.3386050

Javed, Zeeshan. 2021. “The Role of Artificial Intelligence in the Enhancement of Cyber Security of Pakistan.” Journal of Contemporary Studies 10 (2): 1–15. https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip&db=aph&AN=161047008&site=ehost-live&scope=site.

Mogg, Trevor. 2020. “Watch Deepfake Queen Perform TikTok Dance during Annual Message.” Digital Trends, December 28, 2020. https://www.digitaltrends.com/news/watch-deepfake-queen-perform-tiktok-dance-during-annual-message/ 

Pusztaszeri, Aosheng, and Emily Harding. 2025. “Technological Evolution on the Battlefield.” Center for Strategic and International Studies (CSIS), September 16, 2025. https://www.csis.org/analysis/chapter-9-technological-evolution-battlefield

Whitten-Woodring, Jenifer, Mona S. Kleinberg, Ardeth Thawnghmung, and Myat Thitsar. 2020. “Poison If You Don’t Know How to Use It: Facebook, Democracy, and Human Rights in Myanmar.” The International Journal of Press/Politics 25, no. 3: 407–25. https://doi.org/10.1177/1940161220919666

Zucca, Maria Vittoria, and Gaia Fiorinelli. 2025. “Regulating AI to Combat Tech-Crimes: Fighting the Misuse of Generative AI for Cyber Attacks and Digital Offenses.” Technology and Regulation 2025: 247–62. https://doi.org/10.71265/23nqtq40
