
At Google I/O today Google Cloud announced Vertex AI, a new managed machine learning platform that is meant to make it easier for developers to deploy and maintain their AI models. It’s a bit of an odd announcement at I/O, which tends to focus on mobile and web developers and doesn’t traditionally feature a lot of Google Cloud news, but the fact that Google decided to announce Vertex today goes to show how important it thinks this new service is for a wide range of developers.

The launch of Vertex is the result of quite a bit of introspection by the Google Cloud team. “Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”

Can this be true?


An unmanned aircraft was brought down by a powerful electromagnetic pulse in what could be the first reported test of an advanced new weapon in China.

A paper published in the Chinese journal Electronic Information Warfare Technology did not give details of the timing and location of the experiment, which are classified, but it may be the country’s first openly reported field test of an electromagnetic pulse (EMP) weapon.

China is racing to catch up in the field after the US demonstrated a prototype EMP weapon that brought down 50 drones with one shot in 2019.

An international research team with participants from several universities, including FAU Erlangen-Nürnberg, has proposed a standardized registry for artificial intelligence (AI) work in biomedicine to improve the reproducibility of results and create trust in the use of AI algorithms in biomedical research and, in the future, in everyday clinical practice. The scientists presented their proposal in the journal Nature Methods.

In recent decades, new technologies have made it possible to develop a wide variety of systems that generate huge amounts of biomedical data, for example in cancer research. At the same time, completely new possibilities have emerged for examining and evaluating this data using AI methods. AI algorithms in intensive care units, for example, can predict circulatory failure at an early stage from the large volumes of data produced by several monitoring systems, processing complex information from different sources simultaneously in a way that is far beyond human capabilities.
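To make the idea concrete, here is a deliberately minimal sketch of a multi-signal early-warning score. Every signal name, "normal" value, weight, and threshold below is an invented placeholder; real ICU models are learned from data, not hand-tuned like this.

```python
# Toy early-warning score combining several monitoring signals, in the
# spirit of the ICU example above. All signal names, "normal" values,
# weights, and the alarm threshold are illustrative assumptions, not
# taken from any real clinical model.

def risk_score(vitals):
    """Weighted sum of deviations from assumed normal ranges."""
    normals = {"heart_rate": 75, "map_mmhg": 90, "lactate": 1.0}
    weights = {"heart_rate": 0.02, "map_mmhg": 0.05, "lactate": 0.8}
    return sum(weights[k] * abs(vitals[k] - normals[k]) for k in normals)

def circulatory_failure_alarm(vitals, threshold=2.0):
    """Raise an alarm when the combined deviation score is too high."""
    return risk_score(vitals) > threshold

stable = {"heart_rate": 78, "map_mmhg": 88, "lactate": 1.1}
deteriorating = {"heart_rate": 120, "map_mmhg": 60, "lactate": 3.5}
print(circulatory_failure_alarm(stable))         # False
print(circulatory_failure_alarm(deteriorating))  # True
```

The point of the sketch is only the shape of the problem: many concurrent signals are fused into one decision, which is exactly the part that is hard for a human to do continuously and at scale.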

This great potential of AI systems has led to an almost unmanageable number of biomedical AI applications. Unfortunately, the corresponding reports and publications do not always adhere to best practices, or provide only incomplete information about the algorithms used or the origin of the data. This makes assessment and comprehensive comparison of AI models difficult. The decisions of AIs are not always comprehensible to humans, and results are seldom fully reproducible. This situation is untenable, especially in clinical research, where trust in AI models and transparent research reports are crucial to increasing the acceptance of AI algorithms and to developing improved AI methods for basic biomedical research.

Dr. Valentin Robu, Associate Professor and academic PI of the project, says this work was part of NCEWS (the Network Constraints Early Warning System project), a collaboration between Heriot-Watt University and Scottish Power Energy Networks, part-funded by Innovate UK, the United Kingdom’s applied research and innovation agency. “The project’s results greatly exceeded our expectations, and it illustrates how advanced AI techniques (in this case, deep learning neural networks) can address important practical challenges emerging in modern energy systems.”


Power networks worldwide are faced with increasing challenges. The fast rollout of distributed renewable generation (such as rooftop solar panels or community wind turbines) can lead to considerable unpredictability. The previously used fit-and-forget mode of operating power networks is no longer adequate, and more active management is required. Moreover, new types of demand (such as from the rollout of EV charging) can also be a source of unpredictability, especially if concentrated in particular areas of the distribution grid.

Network operators are required to keep power and voltage within safe operating limits at all connection points, as out-of-bounds fluctuations can damage expensive equipment and connected devices. Hence, good estimates of which areas of the network could be at risk and require intervention (such as strengthening the network, or adding storage to smooth fluctuations) are increasingly a key requirement.

Privacy-sensitive machine learning

Smart meter data analysis holds great promise for identifying at-risk areas in distribution networks. Yet using smart meter data presents significant practical constraints. In many countries and regions, the smart meter rollout does not provide full coverage, as installation is voluntary and many customers may decline to have a smart meter installed in their home. Moreover, even in places with a successful rollout, privacy restrictions must be taken into account; in practice, regulators considerably constrain which smart meter data network operators may access.
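As a toy sketch of the kind of at-risk-area estimate described above: all feeder names, meter readings, and capacity figures below are invented for illustration, and a real system (such as the deep-learning approach the project used) would forecast from far richer, and only partially covered, data.

```python
# Hypothetical sketch: flag distribution feeders whose observed
# coincident peak load approaches capacity. All feeder names, meter
# readings, and capacity figures are invented for illustration.

def at_risk_feeders(readings, capacity_kw, margin=0.8):
    """Return feeders whose summed peak exceeds margin * capacity."""
    totals = {}
    for (feeder, _meter), series in readings.items():
        acc = totals.setdefault(feeder, [0.0] * len(series))
        for t, kw in enumerate(series):
            acc[t] += kw  # coincident load: sum meters per timestep
    return sorted(
        feeder for feeder, series in totals.items()
        if max(series) > margin * capacity_kw[feeder]
    )

# half-hourly kW readings per (feeder, meter); coverage is incomplete
readings = {
    ("feeder_A", "m1"): [2.0, 3.5, 4.1],
    ("feeder_A", "m2"): [1.2, 2.8, 3.9],
    ("feeder_B", "m3"): [0.5, 0.7, 0.6],
}
capacity_kw = {"feeder_A": 9.0, "feeder_B": 5.0}
print(at_risk_feeders(readings, capacity_kw))  # ['feeder_A']
```

Even this crude proxy shows why partial coverage matters: an unmetered household simply never enters the sum, so the observed peak systematically understates the true one.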

AI and machine learning systems have proven a boon to scientific research in a variety of academic fields in recent years. They have assisted scientists in identifying targets ripe for cutting-edge treatments, in the discovery of potent new compounds, and more. Throughout this period, however, AI/ML systems have often been relegated to processing large data sets and performing brute-force computations, rather than leading the research themselves.

But Dr. Hiroaki Kitano, CEO of Sony Computer Science Laboratories, envisions a “hybrid form of science that shall bring systems biology and other sciences into the next stage” by creating an AI that is just as capable as today’s top scientific minds. To that end, Kitano has proposed the Nobel Turing Challenge.

“The distinct characteristic of this challenge is to field the system into an open-ended domain to explore significant discoveries rather than rediscovering what we already know or trying to mimic speculated human thought processes,” Kitano writes. “The vision is to reformulate scientific discovery itself and to create an alternative form of scientific discovery.”

Two AIs talking to each other, powered by GPT-3!


Here we look at a conversation between two AIs.
The AIs were built using GPT-3, a language model whose command of English is arguably unmatched by anything else in the world right now.

I prompt GPT-3 with just three lines:
“The following is a conversation between two AIs. The AIs are both clever, humorous, and intelligent.
Hal: Good Evening, Sophia.
Sophia: It’s great to see you again, Hal.”

The rest of the conversation is generated. This is the first conversation I generated.
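The setup is easy to sketch. The prompt below is the one described above; the `complete` function is a placeholder (an assumption, not the author's code) standing in for whatever GPT-3 completion client was actually used.

```python
# The three-line prompt described above, plus a stand-in for the
# GPT-3 call. `complete` is a hypothetical placeholder: substitute a
# real completions-API client call to reproduce the experiment.

PROMPT = (
    "The following is a conversation between two AIs. "
    "The AIs are both clever, humorous, and intelligent.\n"
    "Hal: Good Evening, Sophia.\n"
    "Sophia: It's great to see you again, Hal.\n"
)

def complete(prompt):
    """Placeholder completion; a real model continues the dialogue."""
    return "Hal: I was hoping you'd say that.\n"

# The model sees only the prompt; everything after it is generated.
transcript = PROMPT + complete(PROMPT)
print(transcript)
```

Note the design of the prompt: the one-sentence framing sets the genre and tone, and the two seeded turns establish the speakers' names, so the model's most likely continuation is simply more alternating "Hal:"/"Sophia:" lines.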

I create individual videos for each AI using synthesia.io. I splice the videos together so that it looks like a real conversation, but that is all the editing I do: I do not edit the text of the conversation at all, only the video, to make it seem like a back-and-forth.

The AIs discuss existential dread, love, and even somewhat assume gender roles. These are three big issues as we think about sentient AI. We are going through the singularity right now, so it’s very important we keep AI safe and aligned with humans.

Hurray.


Tesla has started to hire roboticists to build its recently announced “Tesla Bot,” a humanoid robot intended to become a new vehicle for its AI technology.

When Elon Musk explained the rationale behind Tesla Bot, he argued that Tesla was already making most of the components needed to create a humanoid robot equipped with artificial intelligence.

The automaker’s computer vision system developed for self-driving cars could be leveraged for use in the robot, which could also use things like Tesla’s battery system and suite of sensors.

A Tesla semi-truck with a very Tesla-worthy aesthetic, highlighted by a contoured yet sharp design language that in a way reminds me of the iPhone 12!

Tesla’s visionary Semi all-electric truck, powered by four independent motors on the rear, is scheduled for production in 2022. The Semi is touted to be the safest, most comfortable truck around, with 0–60 mph acceleration in just 20 seconds and a range of 300–500 miles. While the prototype version looks absolutely badass, how the final version will look is anybody’s guess.

Proteins are essential to life, and understanding their 3D structure is key to unpicking their function. To date, only 17% of the human proteome is covered by an experimentally determined structure. Two papers in this week’s issue dramatically expand our structural understanding of proteins. Researchers at DeepMind, Google’s London-based sister company, present the latest version of their AlphaFold neural network. Using an entirely new architecture informed by intuitions about protein physics and geometry, it makes highly accurate structure predictions, and was recognized at the 14th Critical Assessment of Techniques for Protein Structure Prediction last December as a solution to the long-standing problem of protein-structure prediction. The team applied AlphaFold to 20,296 proteins, representing 98.5% of the human proteome.