
Deep-learning neural networks have come a long way in the past several years—we now have systems that are capable of beating people at complex games such as shogi, Go and chess. But is the progress of such systems limited by their basic architecture? Shimon Ullman, with the Weizmann Institute of Science, addresses this question in a Perspectives piece in the journal Science and suggests some ways computer scientists might reach beyond simple AI systems to create artificial general intelligence (AGI) systems.

Deep-learning networks are able to learn because they have been programmed to create artificial neurons and the connections between them. As they encounter new data, new neurons and communication paths between them are formed, much like the way the human brain operates. But such systems require extensive training (and a feedback system) before they are able to do anything useful, which stands in stark contrast to the way that humans learn. We do not need to watch thousands of people in action to learn to follow someone’s gaze, for example, or to figure out that a smile is something positive.

Ullman suggests this is because humans are born with what he describes as preexisting network structures that are encoded into our neural circuitry. Such structures, he explains, provide growing infants with an understanding of the physical world in which they exist: a base upon which they can build the more complex abilities that lead to general intelligence. If computers had similar structures, they, too, might develop physical and social skills without the need for thousands of examples.
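
For a sense of what "extensive training and a feedback system" means in practice, here is a minimal, self-contained sketch (illustrative only, not from Ullman's article) of the feedback-driven training loop a deep network relies on: a tiny two-layer network whose randomly initialised weights only become useful after thousands of repeated passes over labelled examples.

```python
# Minimal sketch (illustrative only): a tiny two-layer network trained on the
# XOR problem by gradient descent. The randomly initialised "neurons" and the
# connections between them only become useful after thousands of feedback
# (error-gradient) updates.
import numpy as np

rng = np.random.default_rng(0)

# Four labelled examples of XOR, shown to the network over and over.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: the network's "neurons" and the connections between them.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):                 # thousands of training passes
    h = sigmoid(X @ W1 + b1)             # hidden-layer activations
    p = sigmoid(h @ W2 + b2)             # network's current guess
    d2 = p - y                           # feedback: how wrong the guess was
    d1 = (d2 @ W2.T) * h * (1 - h)       # feedback pushed back to layer 1
    W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)

# After training, the predictions approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Even for a toy problem like XOR, the network only becomes reliable after thousands of these feedback updates, which is exactly the gap the innate structures Ullman describes are meant to close.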

Read more

Germaphobes rejoice: you can now check in with confidence, thanks to this nifty little device.


The robot uses UV light to scour surfaces – including bed sheets – without the need for harmful chemicals or manual labour. The method is reported to be effective against 99.9 per cent of the pathogens tucked away in the fabric of hotel suites.

Read more

Finland knows it doesn’t have the resources to compete with China or the United States for artificial intelligence supremacy, so it’s trying to outsmart them. “People are comparing this to electricity – it touches every single sector of human life,” says Nokia chairman Risto Siilasmaa. From its foundations as a pulp mill 153 years ago, Nokia is now one of the companies helping to drive a very quiet, very Finnish AI revolution.


The small Nordic country is betting on education to give it a decisive edge in the age of AI.

Read more

With the development of deepfakes and social media bots, there is growing concern about the use of AI in crime. This paper by Floridi is a great analysis of the problems that may arise: from the above-mentioned deepfakes, to AI copying someone’s social media account onto another platform and impersonating them, to AI-driven financial bots gathering insider information for use in market manipulation.

The last idea reminds me of the scenes in Transcendence in which the AI Will Caster makes a fortune in the markets.


Artificial intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts.

Read more

Circa 2018


The experimental mastery of complex quantum systems is required for future technologies like quantum computers and quantum encryption. Scientists from the University of Vienna and the Austrian Academy of Sciences have broken new ground: they used quantum systems more complex than two-dimensionally entangled qubits and could thus increase the information capacity with the same number of particles. The developed methods and technologies could in the future enable the teleportation of complex quantum systems. The results of their work, “Experimental Greenberger-Horne-Zeilinger entanglement beyond qubits,” were recently published in the renowned journal Nature Photonics.

Similar to bits in conventional computers, qubits are the smallest unit of information in quantum computers. Big companies like Google and IBM are competing with research institutes around the world to produce an increasing number of entangled qubits and to develop a functioning quantum computer. But a research group at the University of Vienna and the Austrian Academy of Sciences is pursuing a new path to increase the information capacity of complex quantum systems.

The idea behind it is simple: instead of just increasing the number of particles involved, the complexity of each particle is increased. “The special thing about our experiment is that for the first time, it entangles three photons beyond the conventional two-dimensional nature,” explains Manuel Erhard, first author of the study. For this purpose, the Viennese physicists used quantum systems with more than two possible states, in this particular case the angular momentum of individual light particles. These individual photons now have a higher information capacity than qubits. However, the entanglement of these light particles turned out to be difficult on a conceptual level. The researchers overcame this challenge with a groundbreaking idea: a computer algorithm that autonomously searches for an experimental implementation.
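
As a concrete illustration (a numerical sketch, not the optical setup itself), the state targeted in this kind of experiment is a GHZ state in which each of the three photons carries three possible values rather than two, (|000⟩ + |111⟩ + |222⟩)/√3. The short script below builds that state vector and checks two of its textbook properties.

```python
# Numerical sketch (not the optical experiment): a three-photon GHZ state
# generalised to three levels per photon, (|000> + |111> + |222>) / sqrt(3).
# Each photon carries a d = 3 degree of freedom (in the experiment, modes of
# the photons' angular momentum), so the joint state lives in a 3^3 = 27-
# dimensional space instead of the 2^3 = 8 dimensions of a qubit GHZ state.
import numpy as np

d, n = 3, 3                               # levels per photon, number of photons

def basis(k, dim=d):
    """Basis vector |k> of a single d-level system (a 'qutrit')."""
    v = np.zeros(dim, dtype=complex)
    v[k] = 1.0
    return v

def kron_all(states):
    """Tensor product of a list of single-particle states."""
    out = np.array([1.0 + 0.0j])
    for s in states:
        out = np.kron(out, s)
    return out

ghz = sum(kron_all([basis(k)] * n) for k in range(d)) / np.sqrt(d)

print(ghz.shape, np.vdot(ghz, ghz).real)  # (27,) ~1.0 -> normalised

# Tracing out two photons leaves the third maximally mixed over its 3 levels,
# the hallmark of maximal high-dimensional entanglement with the other two.
rho = np.outer(ghz, ghz.conj()).reshape(d, d**2, d, d**2)
print(np.round(np.trace(rho, axis1=1, axis2=3).real, 3))  # ~ identity / 3
```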

Read more

Space architecture startup AI SpaceFactory achieved second place in the latest phase of a NASA-led competition, pitting several groups against each other in pursuit of designing a 3D-printed Mars habitat and physically demonstrating some of the technologies needed to build it.

With a focus on ease of scalable 3D printing and inhabitants’ quality of life, as well as the use of modular imported components like windows and airlocks, MARSHA lends itself exceptionally well to SpaceX’s goal of establishing a sustainable human presence on Mars as quickly, safely, and affordably as possible with the support of its Starship/Super Heavy launch vehicle.

Read more

Circa 2018


Computer scientists have created an AI called Bayou that is able to write its own software code, reports Futurity. Though there have been past attempts at creating software that writes its own code, programmers generally had to write as much or more code telling the program what kind of application they wanted as they would have written coding the app themselves. That has all changed with Bayou.

The AI studies code posted on GitHub and uses it to learn to write its own code. Using a process called neural sketch learning, the AI reads the code and associates an “intent” with each piece. Now, when a human asks Bayou to create an app, Bayou matches the intent it has learned from code on GitHub to the user’s request and begins writing the app it thinks the user wants.
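
To make the “intent” idea concrete, here is a deliberately simplified toy (not Bayou’s actual model or API): it maps intent keywords to an abstract code sketch and fills in the details, whereas Bayou learns this mapping from millions of GitHub methods with a neural network rather than a hand-written table.

```python
# Conceptual toy (not Bayou's actual model or API): "neural sketch learning"
# pairs an intent (e.g. the API a developer mentions) with an abstract code
# sketch and then fills in the concrete details. The table and template below
# are invented for illustration; Bayou learns this mapping from GitHub code
# with a neural network, and it targets Java API usage patterns.
from string import Template

# Hypothetical intent -> sketch table.
SKETCHES = {
    "read file": Template(
        "try (BufferedReader r = new BufferedReader(new FileReader($path))) {\n"
        "    String line;\n"
        "    while ((line = r.readLine()) != null) {\n"
        "        // process line\n"
        "    }\n"
        "}"
    ),
}

def synthesize(request: str, path: str) -> str:
    """Return the sketch whose intent matches the request (toy matching)."""
    for intent, sketch in SKETCHES.items():
        if intent in request.lower():
            return sketch.substitute(path=path)
    raise LookupError(f"no sketch learned for request: {request!r}")

print(synthesize("Read file line by line", '"notes.txt"'))
```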

Read more