
In this talk, Professor Bell breaks down the very foundations of AI, viewed as an inescapable and univocal technology, and opens up a space for other truths and possibilities by visiting AI's alternative stories in the past, present, and future. By doing so, she claims, we might make room for a more sustainable, safe, and responsible AI, and ultimately a more human-centric one.

Genevieve Bell is a cultural anthropologist and technologist who has spent her career at the intersection of places, people, and things. From growing up in Indigenous communities in Australia's outback to Silicon Valley, from Stanford University and Intel Corporation back to Australia's only national university, she has always questioned what it means to be human in an increasingly digital world. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

VR Boots for gaming.


Last year I did a VR experience meant to simulate what it's like to be at the US-Mexico border wall. The tall, foreboding wall towered above me, and as I turned from side to side there were fields of grass with some wildlife and a deceptively harmless-looking border patrol station. I wanted to explore more, so I took a few steps toward the wall, hoping to catch a glimpse of the Mexico side through its tall metal slats.

“Oops!” a voice called out. A hand landed lightly on my arm. “Look out, you’re about to run into the wall.” The “wall” was in fact a curtain—the experience took place in a six-foot-by-eight-foot booth alongside dozens of similar VR booths—and I had, in fact, just about walked through it.

Virtual reality is slowly getting better, but there are all kinds of improvements that could make it feel more lifelike. More detailed graphics and higher screen resolution can make the visual aspect (which is, of course, most important) more captivating, and haptic gloves or full haptic suits can lend a sense of touch to the experience.

Autonomous tractors for farming.


OSAKA — Kubota has partnered with U.S. chipmaker Nvidia to develop highly sophisticated self-driving farm tractors, the Japanese machinery maker said Tuesday.

The tractors will be equipped with Nvidia graphics processing units and artificial intelligence, coupled with cameras to instantly process collected data.

The farming technology is expected to provide a labor-saving solution that will help address the shortage of workers in Japan’s agricultural industry.

With artificial intelligence (AI) tools and machine learning algorithms now making their way into a wide variety of settings, it is crucial to assess their security and ensure that they are protected against cyberattacks. As most AI algorithms and models are trained on large online datasets and third-party databases, they are vulnerable to a variety of attacks, including neural Trojan attacks.

A neural Trojan attack occurs when an attacker inserts what is known as a hidden Trojan trigger or backdoor inside an AI model during its training. This trigger allows the attacker to hijack the model’s prediction at a later stage, causing it to classify data incorrectly. Detecting these attacks and mitigating their impact can be very challenging, as a targeted model typically performs well and in alignment with a developer’s expectations until the Trojan backdoor is activated.
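To make the mechanism concrete, here is a minimal sketch of how such a backdoor can be planted through data poisoning. This is a toy illustration rather than any specific published attack: the trigger (a small white square), the 5% poisoning rate, and the target label are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Plant a Trojan trigger in a fraction of the training set.

    images: float array of shape (N, H, W) with values in [0, 1]
    labels: int array of shape (N,)
    The trigger is a 3x3 white square in the bottom-right corner;
    poisoned samples are relabeled to the attacker's target class.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # stamp the trigger patch
    labels[idx] = target_label    # flip the labels to the target class
    return images, labels

# A model trained on the poisoned set behaves normally on clean inputs,
# but any input carrying the 3x3 patch is steered toward target_label.
clean_x = np.random.rand(1000, 28, 28)
clean_y = np.random.randint(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y, target_label=7)
```

Because only a small fraction of the samples is altered, the model's accuracy on clean test data stays high, which is exactly why such backdoors are hard to spot.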

Researchers at the University of California San Diego have recently created CLEANN, an end-to-end framework designed to protect embedded neural networks from Trojan attacks. This framework, presented in a paper pre-published on arXiv and set to be presented at the 2020 IEEE/ACM International Conference on Computer-Aided Design, was found to perform better than previously developed Trojan shields and detection methods.
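CLEANN recovers inputs using sparse approximation and dictionary learning; the sketch below illustrates only that general idea in simplified form, not the paper's actual algorithm. The intuition is that inputs carrying a Trojan trigger reconstruct poorly from a dictionary learned on clean data. The patch size, dictionary size, and detection threshold here are assumptions, and scikit-learn's MiniBatchDictionaryLearning stands in for the paper's optimized implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fit_clean_dictionary(clean_patches, n_atoms=64):
    """Learn a sparse dictionary from flattened patches of trusted, clean data."""
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, transform_algorithm="omp",
        transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(clean_patches)
    return dico

def reconstruction_residual(dico, patches):
    codes = dico.transform(patches)   # sparse codes via orthogonal matching pursuit
    recon = codes @ dico.components_  # reconstruct patches from dictionary atoms
    return np.linalg.norm(patches - recon, axis=1)

# Calibrate a threshold on clean data, then flag inputs whose residual
# exceeds it as potentially carrying a Trojan trigger.
clean = np.random.rand(500, 64)
dico = fit_clean_dictionary(clean)
threshold = np.percentile(reconstruction_residual(dico, clean), 99)
suspicious = reconstruction_residual(dico, np.random.rand(10, 64)) > threshold
```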

It looks like the food of the future will be bugs. A factory in France will grow bugs as a food source.


Enter the insects. Or, more appropriately in this case, enter Ÿnsect, the French company with big ambitions to help change the way the world eats. Ÿnsect raised $125 million in Series C funding in early 2019, and at the time already had $70 million worth of aggregated orders to fill. Now they’re building a bug-farming plant to churn out tiny critters in record numbers.

You've probably heard of vertical farms in the context of plants; most existing vertical farms use LED lights and a precise mixture of nutrients and water to grow leafy greens or other produce indoors. They maximize growing area by stacking several layers of plants on top of one another; the method may not offer as much total space as outdoor fields, but it can yield far more than you might expect.

Ÿnsect’s new plant will use layered trays too, except they’ll be cultivating beetle larvae instead of plants. The ceilings of the facility are 130 feet high—that’s a lot of vertical space to grow bugs in. Those of us who are grossed out by the thought will be glad to know that the whole operation will be highly automated; robots will tend to and harvest the beetles, and AI will be employed to keep tabs on important growing conditions like temperature and humidity.

What the researchers have achieved is remarkable: by replacing the traditional H.264 video codec with a neural network, they have reduced the bandwidth required for a video call by orders of magnitude. In one example, the required data rate fell from 97.28 KB per frame to a measly 0.1165 KB per frame, roughly 0.12% of the original bandwidth (an 835-fold reduction).


NVIDIA Research has invented a way to use AI to dramatically reduce video call bandwidth while simultaneously improving quality.
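NVIDIA reportedly achieves this by sending a reference image once and then transmitting only a compact set of facial keypoints per frame, with a generative network on the receiving end re-synthesizing the face. The back-of-envelope script below simply reproduces the arithmetic from the figures quoted above; the frame rate, keypoint count, and per-value precision are assumptions for illustration, not NVIDIA's actual wire format.

```python
# Rough bandwidth comparison between a conventional codec and the
# neural codec, using the per-frame figures quoted in the article.
H264_KB_PER_FRAME = 97.28      # reported H.264 data rate
NEURAL_KB_PER_FRAME = 0.1165   # reported neural-codec data rate
FPS = 30                       # assumed frame rate

def kbps(kb_per_frame, fps=FPS):
    """Convert kilobytes per frame to kilobits per second."""
    return kb_per_frame * fps * 8

print(f"H.264:  {kbps(H264_KB_PER_FRAME):8.1f} kbit/s")
print(f"Neural: {kbps(NEURAL_KB_PER_FRAME):8.1f} kbit/s")
print(f"Reduction: {H264_KB_PER_FRAME / NEURAL_KB_PER_FRAME:.0f}x")

# Sanity check on the payload size: ~15 facial keypoints at two float32
# coordinates each is 15 * 2 * 4 = 120 bytes per frame, the same order
# of magnitude as the reported 0.1165 KB/frame.
print(f"Hypothetical keypoint payload: {15 * 2 * 4} bytes/frame")
```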