This paper is part of a research project, “Countering AI Disinformation and Implications for the US-ROK Alliance,” conducted by the Stimson Center’s Korea Program and generously sponsored by the Korea Foundation. For additional papers in this series, click here.
Artificial intelligence (AI) is reshaping the global threat landscape, and the Democratic People’s Republic of Korea’s (North Korea or DPRK) adoption of these technologies reflects a calculated effort to amplify its asymmetric strategies. Historically, the regime has pursued tools that deliver disproportionate impact relative to cost, and AI fits squarely within that framework. Today, DPRK actors are leveraging AI to bypass traditional barriers, secure foreign employment, and enhance cyber operations, all while funneling resources into domestic priorities and military modernization.
These efforts are not isolated but represent a layered approach that combines opportunistic tactics with emerging technologies, creating a threat that evolves faster than most defensive measures. Understanding how North Korea has integrated AI into its economic, cyber, and military programs provides critical insight into where these trends are headed and what steps must be taken to mitigate their impact. Addressing these risks will require security teams to invest in technologies capable of identifying synthetic media, AI-assisted deception, and behavioral anomalies as they occur. Equally important is coordinated collaboration across industry and government to ensure that defensive postures keep pace with the speed of DPRK adaptation.
Historical Context
North Korea has historically approached technology with a strategic mindset, viewing it as a tool to advance national objectives and maintain regime stability. Its interest in artificial intelligence reflects this philosophy. Early indications of AI-related activity suggest that the DPRK sees these technologies as force multipliers, enabling capabilities that align with its asymmetric strategy. Rather than competing through conventional means, the DPRK has consistently sought methods to amplify its influence and achieve strategic goals despite resource limitations.
Initially, AI development in the DPRK was centered on industrial automation to overcome resource scarcity and improve domestic productivity. Projects aimed to enhance efficiency in sectors such as agriculture, manufacturing, and logistics, using automation and robotics to compensate for labor and material shortages. The apparent aim of these efforts has been to stabilize internal systems and strengthen economic resilience under sanctions. Over time, this domestic focus evolved alongside ambitions to integrate AI into broader strategic frameworks.
By leveraging foreign technologies and adapting them to local needs, the DPRK has sought to reduce dependency on traditional supply chains outside its control while maintaining operational flexibility. This approach is fundamental to the juche (self-reliance) philosophy and underscores a long-standing principle in the DPRK’s strategic thinking: prioritize tools and efforts that deliver disproportionate impact relative to cost. It also signals an intent to adapt quickly to global technological trends, ensuring that even incremental progress in AI can strengthen the regime’s ability to conduct operations, maintain control, and project influence beyond its borders.
Historically, this pattern of adaptation under pressure has allowed the DPRK to successfully circumvent sanctions and resource constraints, innovating to preserve strategic advantage under austere conditions.
Current State
North Korea’s current use of artificial intelligence reflects a deliberate effort to enhance operational efficiency and expand the scope of its cyber and IT worker programs. These activities demonstrate a clear intent to leverage AI as a force multiplier, enabling the regime to maximize revenue generation, improve infiltration success rates, and accelerate internal technological development.
Initially, North Korean operators developed methods to bypass identity verification on platforms requiring Know Your Customer (KYC) compliance without using AI-driven techniques. Instead, they relied on prompt injection strategies to manipulate photo submissions and pass live verification checks. This approach allowed DPRK-linked actors to create and maintain fraudulent accounts without resorting to deepfake technologies, which were previously considered a future capability rather than an operational necessity.
However, this dynamic began to shift in Q3 2025, when deepfake technologies started appearing in active use. Recent observations show North Korean IT workers incorporating deepfake imagery and voice synthesis into their workflows more frequently. Voice modification, including the adoption of female voice profiles, has been documented alongside AI-enabled noise cancellation tools designed to mask environmental sounds during interviews and virtual meetings. These enhancements improve credibility and reduce detection risk, signaling a move toward more sophisticated identity obfuscation techniques.
AI is also being used extensively to support job acquisition strategies. North Korean IT workers rely on large language models (LLMs) to craft resumes tailored to specific roles and to assist during technical interviews. In numerous observed instances, operators feed real-time subtitles from interviews into AI models to generate accurate and contextually relevant responses, significantly improving their chances of securing employment. These methods allow DPRK actors to operate at scale, managing multiple job roles simultaneously and significantly increasing the volume of revenue funneled back to the regime.
This financial activity is not limited to a single channel. Funds are routed back to IT workers embedded in established hacking groups, academic institutions, and front companies such as Korea Ryonbong General Corporation, which is among the entities sanctioned by the Office of Foreign Assets Control (OFAC). The IT workforce is highly stratified: most operators focus on generating income, while others hold elevated privileges and assume specialized responsibilities such as acquiring intellectual property or targeting specific technologies. These privileged roles often include ad hoc tasking, in which operators are assigned opportunistic or short-notice objectives that align with emerging priorities.
Revenue streams from these workers do not exclusively support weapons programs; they can also finance domestic priorities such as port development, infrastructure projects, and other initiatives that reinforce regime stability. This layered approach demonstrates how North Korea uses AI-enabled employment schemes not only as a source of hard currency but as a mechanism to advance broader strategic objectives.
In one observed case, a North Korean IT worker demonstrated how these tactics play out in real time.[1] The individual had cultivated a relationship with a legitimate staffing recruiter based in Florida and secured an interview with an AI-focused organization. During the session, the worker navigated multiple browser tabs, reviewing company details while simultaneously leveraging an LLM to craft responses. Subtitles from the interview were copied and fed into the model, enabling quick, contextually accurate answers to technical questions. The interview appeared successful, but what stood out was the scale of activity behind the scenes. On that same day, the worker received interest from roughly 20 other companies, all offering roles tied to AI development. This pattern illustrates how DPRK operators pursue targeted positions not only for income generation but potentially for access to sensitive technologies and intellectual property.
Beyond employment-related activities, subsets of North Korean IT teams exhibit varying levels of privilege and responsibility, with some authorized to steal intellectual property from their employers. Some of these workers actively seek positions within AI-focused organizations or roles involving AI development. This approach provides on-the-job training and access to proprietary technologies, which, after theft and sometimes extortion, can then be repurposed to advance domestic AI initiatives. Reporting suggests that new AI research units, such as the Reconnaissance General Bureau’s (RGB) Unit/Research Center 227, are being established near colleges that traditionally produce cyber, IT worker, and AI technical talent (e.g., Kim Chaek University of Technology and Kim Il Sung University). These facilities appear to serve as hubs for integrating skills acquired abroad with internal development programs, reinforcing North Korea’s long-term strategy of technological self-sufficiency.
Open-source disclosures further indicate that the DPRK is experimenting with AI applications in military contexts, including autonomous systems and suicide attack drones. While some of these developments remain in early stages, they underscore the regime’s intent to incorporate AI into both defensive and offensive capabilities.
Beyond military integration, North Korean advanced persistent threat (APT) groups have adopted AI to enhance the effectiveness of their cyber operations. Observed tactics include the use of AI-driven tools to generate malicious code, such as variants produced through platforms like WormGPT, and other code-generation models designed for offensive purposes. These operators also leverage AI to refine phishing campaigns, creating highly convincing emails and lure documents that increase the likelihood of compromise. In addition, AI models assist in debugging and optimizing malicious payloads, enabling faster development cycles and reducing operational errors. By combining these capabilities with traditional exploit development and vulnerability discovery, APT groups have significantly expanded their reach and efficiency.
North Korea’s ability to integrate AI into both IT worker schemes and APT operations amplifies the threat posed by these programs, transforming them from resourceful enterprises into highly adaptive and technologically sophisticated networks.
Outlook
Assessing North Korea’s trajectory with artificial intelligence requires looking at past behaviors, current patterns, and the technologies available for exploitation. The DPRK remains an opportunistic adversary, often favoring the path of least resistance over original development. Historical examples include repurposing open-source projects from platforms like GitHub with minimal modifications and leveraging dark web marketplaces to acquire breached credentials rather than sourcing them independently. This approach reflects a criminal enterprise that prioritizes speed and efficiency, adopting emerging technologies before defenders can fully adapt.
Recent developments underscore the growing role of AI-generated disinformation. During the South Korean presidential election, deepfake content circulated widely, though official attribution to foreign actors was absent. While these incidents were linked to domestic individuals, it is highly likely that North Korea, among other nation-states, observed these tactics and recognized their potential. Coupled with the DPRK’s documented interest in deepfake technologies for identity obfuscation and social engineering, the prospect of AI-driven influence operations targeting external elections or political discourse is increasingly plausible.
The rise of AI-enabled threats has prompted notable defensive responses. Organizations are deploying detection tools capable of identifying synthetic media, voice alterations, and behavioral anomalies during interviews or remote engagements. These technologies are being integrated into existing security stacks, offering real-time alerts when indicators of deepfake usage or AI-assisted deception are present. While these efforts are commendable, the speed and adaptability of North Korean operators demand equally agile defensive strategies. Static controls will not suffice against adversaries who continuously refine their methods.
North Korea’s past successes in cyber operations without heavy reliance on AI suggest that its integration will accelerate both scale and sophistication. Activity across multiple domains can be expected. Cyber operations will likely see more advanced malware development and phishing campaigns, with generative AI used to craft realistic lure documents and tailored social engineering. Military applications will continue to evolve, with experimentation in autonomous systems, AI-guided weaponry, and AI-based command structures for strategic forces. Information warfare will also expand, with AI enabling propaganda and psychological operations both externally and internally. These tools can analyze regional dialects, political views, cultural norms, and social sentiment to shape messages that feel authentic to the intended audience. AI-enabled content can also replicate the tone, cadence, and emotional style of trusted sources, making influence efforts more persuasive while reducing the likelihood of immediate detection.
Domestic surveillance will remain a priority. AI can augment North Korea’s already pervasive monitoring systems through facial recognition, voice synthesis, and multi-object tracking. These capabilities can enable hyper-targeted propaganda and behavioral control, reinforcing regime stability while providing a blueprint for future external influence campaigns.
Countering these trends requires a multilayered approach. Technical defenses must evolve to detect AI-generated artifacts in real time, including model fingerprinting, voice clone detection, and image and video forensics that surface inconsistencies in timing, lighting, and acoustic features. For online influence campaigns, defenses should combine content provenance signals, cross-platform burst analysis, and behavioral anomaly detection to flag coordinated inauthentic activity at the moment of impact rather than after narratives have taken hold.
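To make the burst-analysis idea concrete, the following is a minimal sketch, not a description of any deployed system: the post-record shape, the `find_coordinated_bursts` helper, and the thresholds are all illustrative assumptions, and a production pipeline would use fuzzy or perceptual content hashing rather than exact matches.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (platform, account, timestamp, content_hash).
# In practice, content_hash would come from fuzzy matching (e.g., MinHash
# over normalized text), not exact string equality.

def find_coordinated_bursts(posts, window=timedelta(minutes=10),
                            min_accounts=5, min_platforms=2):
    """Flag content posted by many distinct accounts across multiple
    platforms within a short time window - a crude signal of coordinated
    inauthentic activity. Thresholds are illustrative, not calibrated."""
    by_hash = defaultdict(list)
    for platform, account, ts, chash in posts:
        by_hash[chash].append((platform, account, ts))

    flagged = []
    for chash, events in by_hash.items():
        events.sort(key=lambda e: e[2])  # order by timestamp
        start = 0
        for end in range(len(events)):
            # Shrink the window from the left until it spans <= `window`.
            while events[end][2] - events[start][2] > window:
                start += 1
            in_window = events[start:end + 1]
            accounts = {a for _, a, _ in in_window}
            platforms = {p for p, _, _ in in_window}
            if len(accounts) >= min_accounts and len(platforms) >= min_platforms:
                flagged.append(chash)
                break  # one qualifying burst is enough to flag this content
    return flagged
```

In this sketch, a narrative pushed by six sock-puppet accounts across two platforms within minutes would be flagged, while the same content shared organically over days would not.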
Policy frameworks should also mandate stronger identity verification and continuous anomaly detection for remote hiring processes, including liveness checks, device and network attestation, keystroke and response pattern analysis during interviews, and escalation paths when synthetic media indicators are detected. Collaboration between governments, private sector entities, and threat intelligence teams will be critical to closing gaps before adversaries exploit them.
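As one illustration of the response-pattern analysis described above, consider a minimal sketch built on an assumed intuition: a candidate relaying every interview question through an external model tends to show pauses that are both long and unusually uniform, regardless of question difficulty. The `flag_interview_latency` heuristic and its thresholds below are hypothetical, not a description of any existing tool.

```python
from statistics import mean, stdev

def flag_interview_latency(latencies_s, min_mean=4.0, max_cv=0.25):
    """Flag an interview session whose per-question response latencies
    (in seconds) are both long on average and unusually consistent.
    Human answers typically vary with question difficulty; relayed
    model answers tend toward a uniform lag. Thresholds are
    illustrative assumptions, not calibrated values."""
    if len(latencies_s) < 3:
        return False  # too few samples to judge
    m = mean(latencies_s)
    # Coefficient of variation: spread relative to the average delay.
    cv = stdev(latencies_s) / m if m > 0 else 0.0
    return m >= min_mean and cv <= max_cv
```

Such a signal would only ever be one input to an escalation path of the kind described above, combined with liveness checks and device attestation rather than used as a standalone verdict.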
North Korea’s integration of AI into its cyber and military programs represents a force multiplier for an already resilient threat actor. Without proactive measures, the convergence of AI and the DPRK’s asymmetric strategies will amplify risks across economic, political, and security domains, creating challenges that demand both speed and precision from defenders.