
It seems some countries are now switching to drone swarms.


From Syria to Libya to Nagorno-Karabakh, this new method of military offense has been brutally effective. We are witnessing a revolution in the history of warfare, one that is causing panic, particularly in Europe.

In an analysis written for the European Council on Foreign Relations, Gustav Gressel, a senior policy fellow, argues that the extensive (and successful) use of military drones by Azerbaijan in its recent conflict with Armenia over Nagorno-Karabakh holds “distinct lessons for how well Europe can defend itself.”

Gressel warns that Europe would be doing itself a disservice if it simply dismissed the Nagorno-Karabakh fighting as “a minor war between poor countries.” In this, Gressel is correct – the military defeat inflicted on Armenia by Azerbaijan was not a fluke, but rather a manifestation of the perfection of the art of drone warfare by Baku’s major ally in the fighting, Turkey. Gressel’s conclusion – that “most of the [European Union’s] armies… would do as miserably as the Armenian Army” when faced with such a threat – is spot on.

“During the summer disturbances in Washington, D.C., a top local military police officer asked the D.C. National Guard about deploying two military systems that seem to come out of science fiction. One, the Active Denial System (ADS), makes the target’s skin feel like it’s on fire. The other, called the Long Range Acoustic Device (LRAD), directs intense sound in a narrow cone. The sound is so clear and so powerful that it was nicknamed ‘the voice of God.’ I encountered both systems, one at Quantico, Virginia, the other in Falluja, Iraq. Here’s what I saw.”


DOD has two crowd-control systems that are straight out of science fiction. One uses directed energy to create a burning sensation on exposed skin. The other is so loud that it sounds like the voice of God.
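To see why a device like the LRAD needs both an extreme source level and a narrow beam, here is a minimal sketch of free-field sound falloff in Python. The source level is illustrative, not an official LRAD specification, and a real directed beam decays more slowly on-axis than this simple point-source model suggests.

```python
import math

def spl_at_distance(spl_1m_db: float, r_m: float) -> float:
    """Sound pressure level at r metres from the source, assuming
    free-field spherical spreading: -20 dB per decade of distance."""
    return spl_1m_db - 20 * math.log10(r_m)

# Illustrative source level only -- not an official LRAD spec.
SOURCE_SPL_1M_DB = 150.0

for r in (1, 10, 100, 500):
    print(f"{r:>4} m: {spl_at_distance(SOURCE_SPL_1M_DB, r):5.1f} dB SPL")
```

Even under plain spherical spreading, a 150 dB source is still roughly 110 dB, painfully loud, at 100 metres, which is why such systems can address or deter crowds far beyond conversational range.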

Given the threat posed by orbital debris, space warfare will look very different from what Star Wars and Call of Duty have led us to imagine. Watch this video to find out what it will look like!



Space warfare is often depicted as battles between warships and marines. In this video, however, I discuss why the future of space warfare has far more to do with satellites, orbital debris, and cyber warfare.
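A rough back-of-the-envelope calculation shows why debris dominates that picture. The closing speed below is an illustrative figure for a near-head-on collision in low Earth orbit, not a measurement from any particular event.

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v_leo = circular_orbit_speed(400e3)   # ~7.7 km/s at 400 km altitude
closing_speed = 10e3                  # illustrative head-on closing speed, m/s
fragment_mass = 0.001                 # a 1-gram bolt fragment, kg

energy_j = 0.5 * fragment_mass * closing_speed**2
print(f"LEO orbital speed: {v_leo / 1e3:.1f} km/s")
print(f"A 1 g fragment at {closing_speed / 1e3:.0f} km/s carries {energy_j / 1e3:.0f} kJ")
```

Fifty kilojoules from a one-gram fragment is several times the muzzle energy of a .50-caliber rifle round, which is why destroying a satellite endangers every other spacecraft in similar orbits.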


Space Force members will be known as “Guardians” from now on, Vice President Michael R. Pence announced Dec. 18.

“Soldiers, Sailors, Airmen, Marines, and Guardians will be defending our nation for generations to come,” he said at a Dec. 18 White House ceremony celebrating the Space Force’s upcoming birthday.

As the Space Force turns 1 year old on Dec. 20, abandoning the moniker of “Airman” is one of the most prominent moves made so far to distinguish space personnel from the Air Force they came from. An effort to crowdsource options brought in more than 500 responses earlier this year, including “sentinel” and “vanguard.”

Popular media and policy-oriented discussions on the incorporation of artificial intelligence (AI) into nuclear weapons systems frequently focus on matters of launch authority—that is, whether AI, especially machine learning (ML) capabilities, should be incorporated into the decision to use nuclear weapons, thereby reducing the role of human control in the decisionmaking process. This is a future we should avoid. Yet while the extreme case of automating nuclear weapons use carries existential stakes, there are many other areas of potential AI adoption into the nuclear enterprise that require assessment. Moreover, as the conventional military moves rapidly to adopt AI tools in a host of mission areas, the overlapping consequences for the nuclear mission space, including in nuclear command, control, and communications (NC3), may be underappreciated.

AI may be used in ways that do not directly involve senior decisionmakers or are not immediately visible to them. These areas of AI application sit far to the left of an operational decision or a decision to launch, and include four priority sectors: security and defense; intelligence activities and indications and warning; modeling and simulation, optimization, and data analytics; and logistics and maintenance. Given the rapid pace of development, even if algorithms are not used to launch nuclear weapons, ML could shape the design of the next-generation ballistic missile or be embedded in the underlying logistics infrastructure. ML vision models may undergird the intelligence process that detects the movement of adversary mobile missile launchers and optimize the tipping and cueing of overhead surveillance assets, even as a human decisionmaker remains firmly in the loop for any ultimate decision about nuclear use. Understanding and navigating these developments in the context of nuclear deterrence and escalation risks will require the analytical attention of the nuclear community, and likely the adoption of risk-management approaches, especially where the exclusion of AI is not reasonable or feasible.
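As a purely hypothetical sketch of that “tip and cue, human decides” pattern, the following Python toy keeps the model's role limited to prioritizing an analyst queue; every name, score, and threshold here is invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str
    location: tuple    # (lat, lon) in degrees
    confidence: float  # model score in [0, 1]

def tip_and_cue(detections, threshold=0.8):
    """Route high-confidence ML detections into a prioritized analyst
    queue. The model only prioritizes collection and review; it never
    triggers any action on its own."""
    return sorted(
        (d for d in detections if d.confidence >= threshold),
        key=lambda d: d.confidence,
        reverse=True,
    )

def human_review(queue):
    """The mandatory human-in-the-loop step: every queued detection
    waits for analyst confirmation before any further tasking."""
    for d in queue:
        print(f"ANALYST REVIEW: {d.sensor_id} at {d.location} "
              f"(score {d.confidence:.2f}) -- awaiting confirmation")

if __name__ == "__main__":
    sample = [
        Detection("sat-07", (48.1, 44.5), 0.93),
        Detection("sat-12", (47.9, 44.8), 0.61),
    ]
    human_review(tip_and_cue(sample))
```

The design point is that the highest-confidence detections reach human eyes soonest, while no code path leads from a model score to an action without an analyst's confirmation.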

One good reason for the rarity of radical designs is the enormous expense of the research. Engineers can learn only so much by running tests on the ground, using computational fluid-flow models and hypersonic wind tunnels, which themselves cost a pretty penny (and simulate only some limited aspects of hypersonic flight). Engineers really need to fly their creations, and usually when they do, they use up the test vehicle. That makes design iteration very costly.

“What we’re looking at now is not just an attack that is ongoing, that is not just highly sophisticated, but also we cannot trust the supply chain. We can no longer trust that any third-party application in these systems has not been compromised by Russia,” says NYT’s Nicole Perlroth.
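One baseline mitigation that comes up in supply-chain discussions is verifying that a third-party artifact matches a digest the vendor publishes out-of-band. Below is a minimal Python sketch; the file name and demo digest are placeholders. Note that this catches tampering in transit, but not a compromised vendor build system, which is exactly what made the SolarWinds case so damaging.

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large artifacts need not
    fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_hex: str) -> bool:
    """Compare against a digest obtained out-of-band from the vendor,
    using a timing-safe comparison."""
    return hmac.compare_digest(sha256_of(path), expected_hex)

if __name__ == "__main__":
    # Demo with a throwaway file; real use would check a vendor download
    # against the digest the vendor publishes on a separate channel.
    with open("demo-artifact.bin", "wb") as f:
        f.write(b"example payload")
    expected = sha256_of("demo-artifact.bin")  # stand-in for the published digest
    print("verified" if verify_artifact("demo-artifact.bin", expected)
          else "HASH MISMATCH -- do not install")
```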

Artificial intelligence helped co-pilot a U-2 “Dragon Lady” spy plane during a test flight Tuesday, the first time artificial intelligence has been used in such a way aboard a US military aircraft.

Mastering artificial intelligence, or “AI,” is increasingly seen as critical to the future of warfare, and Air Force officials said Tuesday’s training flight represented a major milestone.

“The Air Force flew artificial intelligence as a working aircrew member onboard a military aircraft for the first time, December 15,” the Air Force said in a statement, adding that the flight signaled “a major leap forward for national defense in the digital age.”