Celestial tensions mount with U.S. as both sides aim for supremacy.
PALO ALTO, U.S. — China on Tuesday accused the U.S. of ignoring international space law and urged Washington to act responsibly after two near-collisions this year between Beijing’s new orbiting station and satellites launched by Elon Musk’s exploration company SpaceX.
It may have seemed like an obscure United Nations conclave, but a meeting this week in Geneva was followed intently by experts in artificial intelligence, military strategy, disarmament and humanitarian law.
The reason for the interest? Killer robots — drones, guns and bombs that decide on their own, with artificial brains, whether to attack and kill — and what should be done, if anything, to regulate or ban them.
Once the domain of science fiction films like the “Terminator” series and “RoboCop,” killer robots, more technically known as Lethal Autonomous Weapons Systems, have been invented and tested at an accelerated pace with little oversight. Some prototypes have even been used in actual conflicts.
NSO Group, an Israeli tech firm, developed malware to hack iPhones by creating a “computer within a computer” capable of stealing sensitive data and sitting undetected for months or even years, researchers at Google have revealed.
The malware is part of NSO Group’s Pegasus software tool, which it is thought to have sold to countries including Azerbaijan, Bahrain, Saudi Arabia, India and the United Arab Emirates. US lawmakers have called for sanctions against the firm.
Referring to Tesla’s Autopilot and Full Self-Driving features.
In an interview with the Financial Times, Elon Musk has claimed that no other CEO cares as much about safety as he does.
In a year that has seen his private wealth balloon like never before, Musk has also been showered with titles, beginning with richest person in the world and, more recently, Time magazine’s Person of the Year. The Time accolade is probably one of many titles Musk will collect as he embarks on his mission to send humanity to the Moon with his space company, SpaceX.
Before we get there, though, there are some issues with his other company, Tesla, that need addressing. The company’s short history is peppered with incidents that have risked human lives as it pushes the boundaries of autonomous driving. Tesla offers features called Autopilot and Full Self-Driving (FSD), which are still in beta and have been involved in accidents. In August this year, the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) launched an investigation into the Autopilot feature covering 750,000 Tesla vehicles.
Speaking to the FT, Musk said he hasn’t misled Tesla buyers about Autopilot or FSD. “Read what it says when you order a Tesla. Read what it says when you turn it on. It’s very, very clear,” he said during the interview. He also cited the high safety ratings Tesla cars have achieved and pointed to NASA entrusting SpaceX with sending humans into space as evidence of his focus on safety. He went a step further, saying he doesn’t see any other CEO on the planet who cares as much about safety as he does.
Although Musk is spot on about the cars’ high safety ratings and about NASA’s faith in SpaceX to ferry its astronauts, the Tesla website does not make clear that Autopilot and FSD are in beta and cannot be completely relied upon. On the contrary, a promotional video even claims that the person in the driver’s seat is there only for legal reasons, and shows him without his hands on the steering wheel at all times, even though keeping hands on the wheel is a requirement for using Autopilot according to Tesla’s own terms.
A new video released by nonprofit The Future of Life Institute (FLI) highlights the risks posed by autonomous weapons or ‘killer robots’ – and the steps we can take to prevent them from being used. It even has Elon Musk scared.
Its original Slaughterbots video, released in 2017, was a short Black Mirror-style narrative showing how small quadcopters equipped with artificial intelligence and explosive warheads could become weapons of mass destruction. Initially developed for the military, the Slaughterbots end up being used by terrorists and criminals. As Professor Stuart Russell points out at the end of the video, all the technologies depicted already existed, but had not been put together.
Now the technologies have been put together, and lethal autonomous drones able to locate and attack targets without human supervision may already have been used in Libya.
Octopuses, crabs and lobsters will receive greater welfare protection in UK law following an LSE report which demonstrates that there is strong scientific evidence that these animals have the capacity to experience pain, distress or harm.
The UK government has today confirmed that the scope of the Animal Welfare (Sentience) Bill will be extended to all decapod crustaceans and cephalopod molluscs.
This move follows the findings of a government-commissioned independent review led by Dr Jonathan Birch. The review drew on over 300 existing scientific studies to evaluate evidence of sentience in cephalopods (including octopuses, squid and cuttlefish) and decapods (including crabs, lobsters and crayfish).
Experts are sounding the alarm about the threat of asteroids to life on Earth — and warning that the United States does not have a clear plan to prevent catastrophe.
Though NASA says the odds are about one in a millennium, no US agency is explicitly responsible if space rocks are headed our way.
“No one is tasked with mitigation,” former Air Force space strategist Peter Garretson, an expert in planetary defense, told Politico. “Congress did put in law that the White House identify who should be responsible, but fully four subsequent administrations so far have blown off their request.”
Informed consent is not something we hear a lot about these days, which is kind of odd, given all the drugs our government currently insists that we take and how often those very same legal concepts are invoked in aboriginal rights and sexual assault cases.
“Informed consent” is a well understood legal doctrine in healthcare, requiring the healthcare provider (traditionally a doctor) to educate patients about the risks, benefits, and alternatives of any given recommended procedure or intervention, allowing the patient to make informed and “voluntary” decisions about whether to undergo the procedure.