In an era where technology advances at an unprecedented pace, the world stands at the threshold of a new generation of warfare—one no longer managed by the human mind alone, but increasingly planned and executed by digital
systems that know no mercy, hold no conscience, and operate without emotion. A feverish race intensifies daily as new AI models emerge and systems are continuously updated to meet evolving requirements across every sector. This raises the central question: how will these developments shape the military domain and the future of warfare?
The ongoing competition and rapid technological progress indicate that wars of the future will undergo radical shifts in their essence and mechanisms. Warfare—an enduring human phenomenon—will not be exempt from the sweeping transformations produced by advances in data and information systems. Tomorrow’s wars will be reshaped by AI algorithms, becoming more lethal, faster, more accurate, and more complex. Conflict will shift from traditional confrontation to advanced digital engagement requiring strategic readiness, continuous modernization, and a careful balance between military effectiveness and ethical constraints.
Warfare will become a blend of technology and destructive force. Combat systems will become intelligent: armed robots, unmanned vehicles, smart robotic dogs, advanced drones of all types, and autonomous armored vehicles capable of independent decision-making in response to field conditions. Precision in aerial and ground strikes will improve, along with early threat detection and neutralization, fighter aircraft guidance, advanced intelligence gathering, predictive analysis, and enhanced operational effectiveness—ensuring constant readiness for battle.
Operational plans and battle orders will also be generated by AI, enabling faster, more precise force employment. This future is not distant. Many countries have already established new departments and units dedicated to AI-driven warfare, and have begun designing and integrating comprehensive systems powered by artificial intelligence. Automated models will analyze massive datasets in extremely short timeframes, surveying, assessing, and planning without human involvement. These modern systems rely on microprocessors, sequenced instructions, and self-improving programs that replicate human capabilities, and may even surpass them in decision-making speed and responsiveness to changing events.
What we are witnessing is a completely new technological revolution that will push many current classes of weapons into premature retirement. New systems will surprise the world, and it would be no exaggeration to say that the human fighter's role may eventually be reduced to remote oversight, retained chiefly for ethical reasons, until the day comes when humans are replaced entirely.
There is an important distinction between the terms "artificial intelligence" and "industrial intelligence," though the two are often confused. Artificial intelligence refers broadly to the development of intelligent systems that mimic human behavior across many domains. Industrial intelligence is a subset of this field applied specifically to industrial environments, including the military applications that are now expanding rapidly into highly specialized fields.
AI applications in the military domain are characterized by objectivity and human-like abilities such as independent decision-making. They are defined by speed and logical reasoning and require extraordinary computing power and bandwidth to meet military demands. These systems are trained to absorb vast quantities of data, recognize speech, understand thought patterns, solve problems, and predict outcomes based on available information.
Achieving superiority in AI technology is no longer a luxury but a strategic necessity for great powers competing to shape their position in the global order. In 2019, China released its new White Paper on Defense titled "China’s National Defense in the New Era," outlining its military ambitions and its intent to enhance its global influence. The paper emphasized that modern warfare is increasingly tied to information systems and AI, requiring major advancements in automation, information integration, and AI development—along with the creation of specialized units within China’s military structure.
Amid intensifying technological competition between the United States and China, Beijing is advancing rapidly to secure AI supremacy—driven by a deeply rooted national conviction that future wars will hinge on AI progress, as stated in its defense doctrine.
To strengthen its military technological capabilities, China has invested heavily in AI development, relying on local innovations while adapting Western technologies. Recently, China revealed a new military-oriented AI model called ChatBIT, developed by Chinese scientists. This model is based on the open-source LLaMA-2 framework—one of Meta’s pre-trained and commercially usable language models capable of generating text and producing code.
The emergence of China’s ChatBIT marks a qualitative shift in military dynamics. Open-source models—whose code can be viewed, modified, enhanced, and redistributed—allow nations with relatively limited resources to achieve practical advantages when equipped with specialized expertise. By adapting LLaMA-2 for military use, Chinese researchers demonstrated the enormous potential of open-source systems, sparking broad debate about the ethical, security, and strategic implications of such technologies should other nations follow the same path.
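To illustrate how low that barrier has become, consider a minimal sketch of adapting an openly licensed model with the Hugging Face transformers, peft, and datasets libraries. The weights ID, corpus file, and hyperparameters below are illustrative placeholders; ChatBIT's actual training pipeline has not been published, and this sketch shows only the generic fine-tuning pattern that open-source models make available to anyone.

```python
# Minimal sketch: LoRA fine-tuning of an open-weight model on a domain corpus.
# The weights ID, dataset file, and hyperparameters are illustrative placeholders;
# this is generic adaptation code, not ChatBIT's (unpublished) pipeline.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-2-13b-hf"          # gated: requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token    # LLaMA-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all 13B base parameters,
# which is what makes adaptation feasible on modest hardware.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Any plain-text domain corpus can be substituted here.
corpus = load_dataset("text", data_files="domain_corpus.txt")["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-model",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # next-token labels
)
trainer.train()
```

The point of the sketch is strategic rather than technical: because the heavy lifting (the pretrained base model) is freely downloadable, the specialization step requires only a modest corpus and commodity hardware.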
Chinese military researchers tailored ChatBIT to meet specific operational needs, including high-precision processing of massive datasets and real-time responses to complex battlefield queries. The system is designed to provide effective solutions to multifaceted challenges faced by the Chinese military, including multi-source intelligence analysis, short-term tactical recommendations, long-term strategic planning, and support for highly coordinated joint operations. It has outperformed competing models such as Vicuna-13B in intelligence and military-oriented operational tasks.
ChatBIT reportedly achieved nearly 90% of GPT-4o's answer accuracy despite a vast difference in scale: the LLaMA-2 variant underlying ChatBIT has only 13 billion parameters, while frontier models such as OpenAI's GPT-4o are estimated to be far larger, running into the hundreds of billions.
In Washington, the announcement generated a mixture of concern and vigilance. The United States—long the global leader in AI and its military applications—now feels mounting pressure from China. A silent race has emerged between Chinese and American AI models in the military sphere, shaping the contours of future conflict. AI technologies are being adapted to overcome battlefield uncertainty by detecting enemy intent, predicting movements, maintaining situational awareness, reducing response time, and accelerating the detect–decide–neutralize cycle.
The U.S. military has a long history of experimenting with AI. In 1991, an AI program called the Dynamic Analysis and Replanning Tool (DART) was used to schedule logistics operations during war. Today, the military seeks to leverage new AI systems—such as OpenAI’s models—for greater operational efficiency. Among these, ChatGPT stands out as a versatile text-generation tool capable of supporting military planning, intelligence gathering, and smart battle management through several key roles.
ChatGPT enhances planning units’ ability to gather and analyze open-source intelligence, offering fast, accurate insights essential for forming a responsive intelligence picture. It assists with drafting operational documents, tactical reports, analytical studies, and lessons-learned reports with high editorial quality. The model also provides predictive analytics that support rapid, well-reasoned decision-making based on real-time data, optimizing logistics and resource allocation during operations.
Moreover, ChatGPT can function as a virtual “operations staff officer,” improving decision-making efficiency, shortening response cycles, assessing threats, and supporting frontline coordination. Logistically, it aids in managing military inventory, analyzing supply documents, maintaining up-to-date resource visibility, and enhancing real-time support during battle. It also plays a major role in training by generating smart training plans, scenario-based exercises, and objective evaluations.
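As a concrete illustration of the drafting and logistics roles described above, here is a minimal sketch using OpenAI's public Python SDK. The report text, prompt, and model choice are invented placeholders for illustration, not any fielded military system.

```python
# Minimal sketch: a chat model as a drafting assistant for a logistics summary.
# The report contents, prompt, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

supply_report = """
Fuel reserves: 72% of planned stock; resupply convoy delayed six hours.
Medical kits: 40 units on hand, 15 consumed in the last 24 hours.
Rations: sufficient for nine days at current consumption.
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a logistics analyst. Summarize the report, "
                    "flag shortfalls, and recommend resupply priorities."},
        {"role": "user", "content": supply_report},
    ],
)
print(response.choices[0].message.content)
```

Even in this toy form, the pattern shows why oversight matters: the model's recommendation arrives fluently and instantly, whether or not its reading of the data is correct.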
In reality, ChatGPT has become a strategic AI tool for military planning and analysis. It enables more intelligent and effective command of military operations. No longer merely an assistant, it has become a form of “digital commander,” capable of analyzing thousands of scenarios in seconds and making critical recommendations without hesitation. But this raises a profound dilemma: What happens when life-and-death decisions are entrusted to a machine—one that feels no pain, no loss, no human emotion?
Integrating AI models into war planning brings security and ethical hazards. Among the primary risks is data leakage during processing through insecure platforms or vulnerabilities exploitable by malicious actors—threatening national security. Users may also be targeted by sophisticated cyberattacks employing AI-generated deceptive content against military assets. Excessive reliance on AI may lead to incorrect assessments or misinterpretations that produce flawed conclusions, potentially escalating conflict unintentionally.
Ethically, training ChatGPT on incomplete or unbalanced data may result in biased or inaccurate outputs that affect military decision-making—decisions that are often sensitive, urgent, and consequential. This raises major legal and moral questions: Who bears responsibility for AI-influenced decisions in combat? Such concerns carry significant implications for crisis escalation and global peace.
To mitigate these risks, strict policies must be enacted to verify AI outputs, secure systems against intrusion, and ensure continuous human oversight. For this reason, the U.S. Department of Defense mandates that commanders and operators of autonomous and semi-autonomous weapons maintain appropriate levels of human judgment over the use of force.
In January 2023, the DoD updated Directive 3000.09, "Autonomy in Weapon Systems." The revised policy states:
“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
While AI can support informed decision-making, the human element—not the machine—remains the final authority.
Recently, OpenAI quietly removed its explicit ban on military use of its AI tools, despite retaining general prohibitions against using them to harm others or develop weapons. A company spokesperson told CNBC that the policy update aimed to provide clarity and allow permissible national-security use cases.
The spokesperson added:
"Our policy does not permit using our tools to harm people, develop weapons, monitor communications, or destroy property. However, there are national-security use cases aligned with our mission."
Future wars are likely to be more brutal and deadly than ever before. The absence of human conscience in command and control—and the rise of autonomous AI-driven weapons—mean that machines may one day wage war without regard for mercy or morality. Autonomous weapons may attack targets without human intervention, as has already occurred. Killing machines will not stop when compassion is exhausted, but only when power runs out—following instructions written by humans and refined by algorithms designed to enhance lethality against other humans.
So the question remains:
Will we wait until the end of humanity is written by the hand of a machine?
Or will we act now to build a future governed by values and ethics, not by algorithms, dominance, and the struggle for influence?