While accidents have happened, one of the most appealing things about autonomous vehicles is their potential to make our roads safer. Now, insurance companies are starting to offer financial incentives to promote adoption.
Britain’s largest automobile insurance company, Direct Line, has announced a 5 percent discount for customers who activate Autopilot functionality in their Tesla. It follows in the footsteps of Root, a startup that offers a similar promotion across nine states in the US.
AlphaZero, the AI built by Google’s DeepMind, achieved “superhuman performance” in chess in just four hours.
Essentially, the AI worked out in one-sixth of a day what humanity has learned over the entire history of chess, and then figured out how to beat anyone or anything.
After being programmed with the rules of the game (not the strategy), AlphaZero played 100 games against Stockfish, the world-champion chess program. AlphaZero won 25 games playing as white (which has first-mover advantage) and three playing as black. The remaining 72 games were draws, with AlphaZero recording no losses and Stockfish no wins.
A few months after its predecessor AlphaGo Zero demonstrated dominance over the game of Go, DeepMind’s AlphaZero AI has trounced the world’s top-ranked chess engine, and it did so without any prior knowledge of the game beyond its rules and after just four hours of self-training.
AlphaZero is now the most dominant chess-playing entity on the planet. In a one-on-one tournament against Stockfish 8, the reigning computer chess champion, the DeepMind-built system didn’t lose a single game, winning or drawing all of the 100 matches played.
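For a sense of what learning from nothing but the rules looks like in practice, the sketch below outlines a self-play training loop of the kind AlphaZero-style systems use: the agent plays games against itself and updates its network from the outcomes, with no human games involved. The ChessEnv and PolicyNet classes are toy stand-ins (random play, no real learning), not DeepMind’s implementation, which also guides each move with Monte Carlo tree search.

```python
# Minimal sketch of a self-play training loop: the agent knows only the rules
# (the environment), plays against itself, and learns from the outcomes.
import random

class ChessEnv:
    """Stand-in for a rules-only chess environment (no strategy knowledge)."""
    def reset(self):
        self.moves_played = 0
        return "start_position"

    def legal_moves(self, state):
        return ["move_a", "move_b", "move_c"]          # placeholder move list

    def step(self, state, move):
        self.moves_played += 1
        done = self.moves_played >= 40                  # cap game length
        outcome = random.choice([1, 0, -1]) if done else 0
        return f"position_{self.moves_played}", outcome, done

class PolicyNet:
    """Stand-in for the policy/value network; here just uniform-random play."""
    def choose(self, state, moves):
        return random.choice(moves)

    def update(self, game_history, outcome):
        pass                                            # gradient step would go here

def self_play_game(env, net):
    state, history, done, outcome = env.reset(), [], False, 0
    while not done:
        move = net.choose(state, env.legal_moves(state))
        history.append((state, move))
        state, outcome, done = env.step(state, move)
    return history, outcome

# Training loop: play a game, then learn from the result -- no human data involved.
env, net = ChessEnv(), PolicyNet()
for game in range(100):
    history, outcome = self_play_game(env, net)
    net.update(history, outcome)
```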
There are a lot of people in the world who need glasses on a daily basis. Despite their often expensive price tag, they do little more than correct poor eyesight. Let Glass updates glasses for the 21st century by integrating them with smart home connectivity.
While maintaining a slim form factor, Let Glass features audio entertainment, telephone communication, and voice interaction. Using Alexa and a built-in microphone, these frames allow users to control their smartphones without fumbling through their pockets. Simply tapping the legs of the smart glasses activates remote-control functions, while voice commands handle everything else. In addition to Amazon Alexa, Apple Siri and Google Now are also supported.
Keeping with a traditional appearance, audio is produced using bone conduction technology. Instead of using a speaker, the glasses send vibrations through the bones of the skull to the inner ear to produce sound. This also keeps the ears open to other sounds, ensuring users remain aware of their surroundings. The setup lets users listen to music, track activity, use voice navigation, call a friend, and more.
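As a rough illustration of the interaction model described above, here is a hypothetical dispatch routine: taps on the frame map to local remote-control actions, while spoken phrases are forwarded to whichever assistant is paired. The event names, actions, and routing are assumptions for illustration, not Let Glass’s actual API.

```python
# Hypothetical input dispatcher: frame taps trigger local actions, voice goes
# to the paired assistant (Alexa, Siri, or Google Now).
TAP_ACTIONS = {
    "single_tap": "play_pause_audio",
    "double_tap": "answer_or_end_call",
    "long_press": "wake_voice_assistant",
}

def handle_input(event_type, payload=None, assistant="alexa"):
    """Route a frame tap to a local action, or a spoken phrase to the assistant."""
    if event_type == "tap":
        return TAP_ACTIONS.get(payload, "ignore")
    if event_type == "voice":
        # In practice the phrase would be forwarded over Bluetooth to the phone's assistant.
        return f"forward to {assistant}: {payload!r}"
    return "ignore"

print(handle_input("tap", "single_tap"))
print(handle_input("voice", "navigate home", assistant="siri"))
```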
In the mid-1900s, art historian Maurits Michel van Dantzig developed a system to identify artists by their brush or pen strokes, which he called Pictology. Van Dantzig found that shape, length, direction, and pressure all contribute to a kind of stroke signature unique to each artist.
New research with contributions from The Hague suggests that Pictology might be the key to helping machines understand art, allowing systems to quickly verify whether brushstrokes were from an original painter or a forger.
After analyzing 80,000 brushstrokes from 297 digitized sketches and drawings, an AI system was able to spot forged drawings in the style of Pablo Picasso, Henri Matisse, and Egon Schiele with 100% accuracy. The “fakes” were commissioned recreations of specific drawings, which the algorithms had not been shown previously.
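To make the approach concrete, here is an illustrative sketch of a stroke-level classification pipeline built on the kinds of features van Dantzig catalogued (length, direction, shape, pressure). The data is synthetic and the random-forest model is a stand-in; the published system analysed digitized strokes with neural networks, but the overall flow, extracting per-stroke features, training a classifier, and aggregating stroke-level predictions per drawing, is the same idea.

```python
# Illustrative only: classify strokes as "original artist" vs "imitator" from
# per-stroke features. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_strokes(n, pressure_mean, length_mean, label):
    """Generate fake stroke feature rows: [length, direction, curvature, pressure]."""
    feats = np.column_stack([
        rng.normal(length_mean, 5.0, n),      # stroke length (px)
        rng.uniform(0, 180, n),               # stroke direction (degrees)
        rng.normal(0.3, 0.1, n),              # curvature / shape proxy
        rng.normal(pressure_mean, 0.05, n),   # pen pressure proxy
    ])
    return feats, np.full(n, label)

# Pretend the original artist presses harder and draws longer strokes.
x_orig, y_orig = synth_strokes(400, pressure_mean=0.8, length_mean=35, label=1)
x_fake, y_fake = synth_strokes(400, pressure_mean=0.6, length_mean=25, label=0)
X = np.vstack([x_orig, x_fake])
y = np.concatenate([y_orig, y_fake])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A whole drawing would be judged by aggregating its stroke-level predictions.
stroke_votes = clf.predict(X_test)
print("held-out stroke accuracy:", (stroke_votes == y_test).mean())
```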
Max Tegmark’s Life 3.0 tries to rectify the situation. Written in an accessible and engaging style, and aimed at the general public, the book offers a political and philosophical map of the promises and perils of the AI revolution. Instead of pushing any one agenda or prediction, Tegmark seeks to cover as much ground as possible, reviewing a wide variety of scenarios concerning the impact of AI on the job market, warfare and political systems.
Yuval Noah Harari responds to Max Tegmark’s account of the artificial intelligence era and argues that we are profoundly ill-prepared to deal with future technology.
Of course, it’s not actually all that doom and gloom: the child AI is really only capable of a specific task, image recognition. Google’s AI-building AI, AutoML, created the child using a technique called reinforcement learning. The approach works much like ordinary machine learning, except that the design process is entirely automated: AutoML acts as a controller network that proposes and refines the architecture of its task-driven AI child.
Known as NASNet, the child AI was tasked with recognising objects in a video in real time. AutoML would then evaluate how well NASNet performed at its task and use that data to improve its architecture, creating a superior version of NASNet with each iteration.
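A stripped-down sketch of that search loop is shown below: a controller samples candidate architectures from a small search space, each candidate gets a score, and the score is fed back to bias the controller toward better choices. The search space, the evaluate() stand-in, and the preference-update rule are illustrative assumptions, not Google’s implementation, which trains real child networks and uses a recurrent RL controller.

```python
# Toy neural-architecture-search loop: propose, score, feed the score back.
import math
import random

SEARCH_SPACE = {
    "num_layers":  [4, 8, 12],
    "filter_size": [3, 5, 7],
    "width":       [32, 64, 128],
}

# Controller keeps a preference score per option; sampling is softmax over them.
prefs = {knob: {opt: 0.0 for opt in opts} for knob, opts in SEARCH_SPACE.items()}

def sample_architecture():
    arch = {}
    for knob, opts in SEARCH_SPACE.items():
        weights = [math.exp(prefs[knob][opt]) for opt in opts]
        arch[knob] = random.choices(opts, weights=weights)[0]
    return arch

def evaluate(arch):
    """Stand-in reward: in reality this trains the child net and returns its accuracy."""
    return (arch["num_layers"] * arch["width"]) / (1000 * arch["filter_size"])

baseline = 0.0
for step in range(200):
    arch = sample_architecture()
    reward = evaluate(arch)
    advantage = reward - baseline              # REINFORCE-style baseline
    baseline = 0.9 * baseline + 0.1 * reward
    for knob, choice in arch.items():
        prefs[knob][choice] += 0.1 * advantage  # nudge the chosen options by the advantage

best = {knob: max(opts, key=opts.get) for knob, opts in prefs.items()}
print("controller's preferred architecture:", best)
```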
The show Robosapiens (about #robots) aired last night and had about a five-minute section on my #transhumanism work. The footage is from a while back but just aired yesterday. My part is in English:
“Rather a computer holding the nuclear codes than Trump? Transhumanist Zoltan Istvan is convinced that artificial intelligence will one day be able to replace politicians. More in Robo sapiens, tonight at 20:15 on @NPO2 https://twitter.com/vpro/status/937260132219502595/video/1”