
The terrorist or psychopath of the future, however, will have not just the Internet or drones—called “slaughterbots” in this video from the Future of Life Institute—but also synthetic biology, nanotechnology, and advanced AI systems at their disposal. These tools make wreaking havoc across international borders trivial, which raises the question: Will emerging technologies make the state system obsolete? It’s hard to see why not. What justifies the existence of the state, English philosopher Thomas Hobbes argued, is a “social contract.” People give up certain freedoms in exchange for state-provided security, whereby the state acts as a neutral “referee” that can intervene when people get into disputes, punish people who steal and murder, and enforce contracts signed by parties with competing interests.

The trouble is that if anyone anywhere can attack anyone anywhere else, then states will become—and are becoming—unable to satisfy their primary duty as referee.


In The Future of Violence, Benjamin Wittes and Gabriella Blum discuss a disturbing hypothetical scenario. A lone actor in Nigeria, "home to a great deal of spamming and online fraud activity," tricks women and teenage girls into downloading malware that enables him to monitor and record their activity for the purposes of blackmail. The real story involved a California man whom the FBI eventually caught and who was sent to prison for six years, but had he been elsewhere in the world he might have gotten away with it. Many countries, as Wittes and Blum note, "have neither the will nor the means to monitor cybercrime, prosecute offenders, or extradite suspects to the United States."

Technology is, in other words, enabling criminals to target anyone anywhere and, as these tools are democratized, increasingly at scale. Emerging bio-, nano-, and cyber-technologies are becoming more and more accessible. The political scientist Daniel Deudney has a word for what can result: "omniviolence." The ratio of killers to killed, or "K/K ratio," is falling. For example, computer scientist Stuart Russell has vividly described how a small group of malicious agents might engage in omniviolence: "A very, very small quadcopter, one inch in diameter, can carry a one- or two-gram shaped charge," he says. "You can order them from a drone manufacturer in China. You can program the code to say: 'Here are thousands of photographs of the kinds of things I want to target.' A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone's head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don't have to be very effective; only 5 or 10% of them have to find the target." Manufacturers will be producing millions of these drones, available for purchase just as guns are now, Russell points out, "except millions of guns don't matter unless you have a million soldiers. You need only three guys to write the program and launch." In this scenario, the K/K ratio could be perhaps 3/1,000,000, assuming 10-percent accuracy and only a single one-gram shaped charge per drone.
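
To make the arithmetic behind that figure concrete, here is a minimal back-of-envelope sketch in Python. Every number is taken from the quoted passage (three attackers, roughly 10 million drones, a 10-percent hit rate); the variable names are illustrative only, not part of Russell's analysis.

# Back-of-envelope K/K ("killers to killed") calculation for Russell's
# hypothetical drone-swarm scenario. All figures come from the quoted
# passage above; nothing here is an independent estimate.

attackers = 3              # "You need only three guys to write the program and launch."
drones = 10_000_000        # three semi-trailers at roughly three million drones each
hit_rate = 0.10            # "only 5 or 10% of them have to find the target"

victims = int(drones * hit_rate)   # about 1,000,000 lethal hits
kk_ratio = attackers / victims     # 3 / 1,000,000

print(f"K/K ratio = {attackers}/{victims:,} ({kk_ratio:.0e})")

Run as written, this prints "K/K ratio = 3/1,000,000 (3e-06)", the figure cited above; using Russell's lower 5-percent estimate instead only doubles the ratio.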


Kate Aronoff recently wrote about a compelling idea: turning Rikers Island into a solar farm. Converting a jail complex built on heaps of trash into a source of clean energy would have many benefits.

Her article dives into both the island's past and its present, and she argues that each mirrors the story of the United States itself. The island was owned by slaveholders from the 1660s and later played a central role in a kidnapping ring that sold black people in the North back into slavery in the South under the Fugitive Slave Act. It was sold in 1884, became a penal colony, and was eventually redesigned as a massive jail complex.

Today, 80% of the island's landmass is landfill. Aronoff paints a picture of an island filled with decomposing garbage and prisoners, 90% of whom are people of color. "Heat in the summer can be unbearable, which has lent to its ominous nickname: The Oven," she wrote. She also references an account by Raven Rakia, who described the island's "environmental justice horror show" and noted that six of its ten facilities have no air conditioning.

“Facial recognition is a uniquely dangerous form of surveillance. This is not just some Orwellian technology of the future — it’s being used by law enforcement agencies across the country right now, and doing harm to communities right now,” Fight for the Future deputy director Evan Greer said in a statement shared with VentureBeat and posted online.


Members of the United States Congress introduced a bill today, The Facial Recognition and Biometric Technology Moratorium Act of 2020, that would prohibit the use of U.S. federal funds to acquire facial recognition systems or "any biometric surveillance system," and would bar their use by federal government officials. It would also withhold federal funding through the Byrne grant program for state and local governments that use the technology.

The bill is sponsored by Senators Ed Markey (D-MA) and Jeff Merkley (D-OR) as well as Representatives Ayanna Pressley (D-MA) and Pramila Jayapal (D-WA). Pressley previously introduced a bill prohibiting use of facial recognition in public housing, while Merkley introduced a facial recognition moratorium bill in February with Senator Cory Booker (D-NJ).

The news comes a day after the Boston City Council, in Pressley's congressional district, unanimously passed a facial recognition ban, making Boston one of the largest cities in the United States to do so. News also emerged this week about Robert Williams, who is thought to be the first person falsely accused of a crime and arrested due to misidentification by facial recognition.

A collective of more than 1,000 researchers, academics, and experts in artificial intelligence is speaking out against soon-to-be-published research that claims to use neural networks to "predict criminality." At the time of writing, more than 50 employees working on AI at companies like Facebook, Google, and Microsoft had signed an open letter opposing the research and imploring its publisher to reconsider.

The controversial research is set to be highlighted in an upcoming book series by Springer, the publisher of Nature. Its authors make the alarming claim that their automated facial recognition software can predict if a person will become a criminal, citing the utility of such work in law enforcement applications for predictive policing.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Harrisburg University professor and co-author Nathaniel J.S. Ashby said.

More than a million Uighurs and others belonging to Muslim minority groups are believed to be detained in China's Xinjiang region. China calls the facilities "transformation camps" built to prevent extremism from spreading. However, reports indicate they are more like prisons. BBC News correspondent John Sudworth got exclusive access to one of the facilities.

The coronavirus has killed dozens of federal prisoners and infected more than 6,000 others. Prisoners say they have been stuck in grim conditions that make social distancing impossible. To support their claims, some have used contraband cell phones smuggled into prisons to post videos on Facebook and other social media sites.

VICE News contacted one of the prisoners, 34-year-old Aaron Campbell, held at a federal prison in Ohio, who said he was punished for making his video by being sent to solitary confinement. In a letter, Campbell said officials told him he would not face additional discipline if he issued a statement saying the video was fake. He refused. (The federal Bureau of Prisons did not respond to questions about his allegations.)



IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”