Forget large arrays of sensors and radars. Forget hard-coded road rules. British startup Wayve taught a car to teach itself to drive, and using only a handful of cameras, a sat-nav, and 20 hours of experience, it’s already driving itself short distances on unfamiliar UK roads.
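
Wayve has described this kind of training as reinforcement learning, with the reward tied to how far the car drives before a safety driver has to intervene. The toy sketch below illustrates only that idea; the ToyRoad environment, the one-parameter steering policy, and the hill-climbing update are hypothetical stand-ins, not Wayve’s actual system.

```python
# Toy sketch of "learning to drive from experience": a policy is rewarded
# for the distance covered before a (simulated) safety-driver intervention.
# Everything here -- ToyRoad, the reward, the policy -- is a hypothetical
# stand-in, not Wayve's code.
import random

class ToyRoad:
    """Stub environment: state is lane offset; episode ends if we drift too far."""
    def reset(self):
        self.offset = 0.0
        return self.offset

    def step(self, steer):
        self.offset += steer + random.uniform(-0.1, 0.1)  # steering + road noise
        done = abs(self.offset) > 1.0                     # "safety-driver intervention"
        return self.offset, (0.0 if done else 1.0), done  # reward = distance survived

def policy(offset, gain):
    return -gain * offset  # steer back toward the lane centre

def episode_return(env, gain):
    state, total, done = env.reset(), 0.0, False
    while not done and total < 200:                       # cap episode length
        state, reward, done = env.step(policy(state, gain))
        total += reward
    return total

# Crude hill-climbing on the single policy parameter: keep a perturbation
# if it lengthens the drive, i.e. more reward between interventions.
env, gain = ToyRoad(), 0.1
for _ in range(50):
    candidate = gain + random.uniform(-0.05, 0.05)
    if episode_return(env, candidate) >= episode_return(env, gain):
        gain = candidate
print(f"learned steering gain: {gain:.2f}")
```

Real systems replace the one-parameter policy with a deep network over camera images and the hill-climbing step with a gradient-based reinforcement-learning update, but the feedback loop has the same shape.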

Read more

Instead of throwing away your broken boots or cracked toys, why not let them fix themselves? Researchers at the University of Southern California Viterbi School of Engineering have developed 3D-printed rubber materials that can do just that.

Assistant Professor Qiming Wang works in the world of 3D-printed materials, creating new functions for a variety of purposes, from flexible electronics to sound control. Now, working with Viterbi students Kunhao Yu, An Xin, and Haixu Du, and with University of Connecticut Assistant Professor Ying Li, Wang’s team has made a new material that can be manufactured quickly and can repair itself if it becomes fractured or punctured. The material could be game-changing for industries like footwear, tires, soft robotics, and even electronics, decreasing manufacturing time while increasing product durability and longevity.

The material is manufactured with a 3D-printing method based on photopolymerization, a process in which light solidifies a liquid resin into a desired shape or geometry. To make the material self-healing, the researchers had to dive a little deeper into the chemistry behind it.

Read more

New book calls Google, Facebook, Amazon, and six more tech giants “the new gods of A.I.” who are “short-changing our futures to reap immediate financial gain”.


A call-to-arms about the broken nature of artificial intelligence, and the powerful corporations that are turning the human-machine relationship on its head.

We like to think that we are in control of the future of “artificial” intelligence. The reality, though, is that we—the everyday people whose data powers AI—aren’t actually in control of anything. When, for example, we speak with Alexa, we contribute that data to a system we can’t see and have no input into—one largely free from regulation or oversight. The big nine corporations—Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple—are the new gods of AI and are short-changing our futures to reap immediate financial gain.

In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI—the people working on the systems, their motivations, the technology itself—are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways that defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.

Read more

The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren’t meant to become policy or regulation, or to interfere with either; instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and to figure out how best to implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake-news-generating algorithms, it’s likely that more governments will take a stand on the ethical concerns AI brings to the table.


The EU wants AI that’s fair and accountable, respects human autonomy, and prevents harm.

Read more

Human agency and oversight: AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision the software makes.

Technical robustness and safety: AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

Privacy and data governance: Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.

Transparency: The data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.

Diversity, non-discrimination, and fairness: Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics, and systems should not be biased along these lines.

Environmental and societal well-being: AI systems should be sustainable (i.e., ecologically responsible) and should “enhance positive social change.”

Accountability: AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
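
Taken together, the seven requirements read like a compliance checklist, which is how the Commission’s assessment list is meant to be used. Below is a minimal sketch of how a team might encode such a self-audit; the check questions and the run_assessment helper are illustrative stand-ins, not the Commission’s official assessment list.

```python
# Hypothetical self-assessment checklist built around the seven EU
# requirements. The check questions are illustrative placeholders,
# not the Commission's official wording.
ASSESSMENT_LIST = {
    "Human agency and oversight": "Can a human intervene in or override every decision?",
    "Technical robustness and safety": "Has the system been tested against adversarial inputs?",
    "Privacy and data governance": "Is personal data access-controlled and encrypted?",
    "Transparency": "Can operators explain each decision the system makes?",
    "Diversity, non-discrimination, and fairness": "Has the system been audited for bias?",
    "Environmental and societal well-being": "Has the system's energy footprint been measured?",
    "Accountability": "Is there an audit trail and a channel for reporting harms?",
}

def run_assessment(answers):
    """Return the requirements whose checks were not satisfied."""
    return [req for req, passed in answers.items() if not passed]

# Example: a deployment review where transparency is still unresolved.
answers = {req: True for req in ASSESSMENT_LIST}
answers["Transparency"] = False
for req in run_assessment(answers):
    print(f"FAILED: {req} -- {ASSESSMENT_LIST[req]}")
```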


AI technologies should be accountable, explainable, and unbiased, says EU.

Read more

Tune in beginning at 6:30 a.m. EDT on Monday, as Anne McClain of NASA and David Saint-Jacques of the Canadian Space Agency set up a redundant power supply for the station’s robotic arm. Watch live coverage here:

Read more

Japan’s space agency wants to create a moon base with the help of robots that can work autonomously, with little human supervision.

The project, which has racked up three years of research so far, is a collaboration between the Japan Aerospace Exploration Agency (JAXA), the construction company Kajima Corp., and three Japanese universities: Shibaura Institute of Technology, The University of Electro-Communications, and Kyoto University.

Recently, the collaborators conducted an automated-construction experiment at the Kajima Seisho Experiment Site in Odawara, in central Japan.

Read more