
Solar power, hydrogen fuel from seawater, automatic wingsails, a six-year journey.


The Energy Observer set sail on a six-year world tour in 2017, testing new technologies, from onboard hydrogen electrolysis to fully automated sails. It’s hoped the rugged ocean environment will prove the technologies’ durability and usefulness at home.

My prediction is that around the late 2030s machines will start to own assets and liabilities and through this, they will rise to the status of ‘banking clients’.

How did I arrive at this conclusion?

Step 1: Mezzanine AI

My prediction is that weak, narrow AI will gradually and smoothly mature into ‘mezzanine AI’.

For a lot of smaller companies, AI isn’t part of the picture—not yet, at least. “Big companies are adopting,” says Brynjolfsson, “but most companies in America—Joe’s pizzeria, the dry cleaner, the little manufacturing company—they are just not there yet.”


A big study by the US Census Bureau finds that only about 9 percent of firms employ tools like machine learning or voice recognition—for now.

The COVID-19 pandemic didn’t just transform how we work and communicate. It also accelerated the need for more proactive measures against chronic health problems tied to diet. Such problems have emerged as a top risk factor for coronavirus, and people with poor metabolic health accounted for half of COVID-19 hospitalizations in some regions of the world. The resulting high numbers led the authors of a report in The Lancet to issue a call for more resources to tackle metabolic health and avoid needless deaths.

Thankfully, new tools have been developed to offer a more comprehensive understanding of nutrition. This expertise and technology won’t just help us tackle metabolic health – it could also help us finally realize the full power of plants to improve health and wellness outcomes.

Tesla has released its quarterly “Tesla Vehicle Safety Report.” One of the top reasons — if not the #1 reason — I bought a Tesla Model 3 last year was because of its record-setting safety rating, so I’m always interested in seeing new stats on this topic.

The second quarter of 2020 saw a slightly worse result for Tesla than the first quarter in terms of accidents per million miles driven with Tesla Autopilot engaged (see graph below), but keep in mind that the first quarter had a record result. Additionally, the difference was so small that it was probably not statistically significant. On the other hand, Tesla’s Q2 figure was far better than the US average — about 10 times better with Autopilot engaged.
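
As a rough illustration of the arithmetic behind these comparisons, the short Python sketch below computes miles per accident and the resulting ratio. The figures in it are placeholder values chosen only to show the calculation, not Tesla’s or NHTSA’s reported numbers.

```python
# Sketch of the "miles per accident" comparison behind the safety report.
# All numbers below are illustrative placeholders, not reported figures.

def miles_per_accident(total_miles: float, accidents: int) -> float:
    """Average miles driven between reported accidents."""
    return total_miles / accidents

# Hypothetical inputs for illustration only.
autopilot = miles_per_accident(total_miles=450e6, accidents=100)    # ~4.5M miles per accident
us_average = miles_per_accident(total_miles=480e6, accidents=1000)  # ~0.48M miles per accident

print(f"Autopilot:  ~{autopilot / 1e6:.2f} million miles per accident")
print(f"US average: ~{us_average / 1e6:.2f} million miles per accident")
print(f"Ratio: ~{autopilot / us_average:.1f}x")  # roughly 9-10x with these placeholder values
```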

Breaking the lowest oxygen abundance record.

New results, obtained by combining big data captured by the Subaru Telescope with the power of machine learning, have revealed a galaxy with an extremely low oxygen abundance of 1.6% of the solar abundance, breaking the previous record for the lowest oxygen abundance. The measured oxygen abundance suggests that most of the stars in this galaxy formed very recently.

To understand galaxy evolution, astronomers need to study galaxies in various stages of formation and evolution. Most of the galaxies in the modern Universe are mature galaxies, but standard cosmology predicts that there may still be a few galaxies in the early formation stage in the modern Universe. Because these early-stage galaxies are rare, an international research team searched for them in wide-field imaging data taken with the Subaru Telescope. “To find the very faint, rare galaxies, deep, wide-field data taken with the Subaru Telescope was indispensable,” emphasizes Dr. Takashi Kojima, the leader of the team.
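
The report doesn’t spell out the team’s pipeline, but a common pattern for this kind of rare-object search is to train a classifier on photometric features and flag the highest-scoring sources for follow-up spectroscopy. The sketch below is a generic illustration under that assumption; the features, labels, and model are invented for the example and are not the Subaru team’s actual method.

```python
# Illustrative sketch only: a generic ML candidate search over a photometric catalog.
# Feature names, labels, and the model are assumptions, not the Subaru team's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend catalog: photometric colors (e.g., g-r, r-i, i-z) for many sources.
n_sources = 100_000
colors = rng.normal(size=(n_sources, 3))

# Training set: a small sample labeled 1 for known low-metallicity, star-forming
# dwarf galaxies and 0 for everything else (labels here are synthetic).
train_X = rng.normal(size=(2_000, 3))
train_y = (train_X[:, 0] < -1.5).astype(int)  # toy rule standing in for real labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train_X, train_y)

# Score the full catalog and keep the highest-probability candidates
# for follow-up spectroscopy (where oxygen abundance is actually measured).
scores = clf.predict_proba(colors)[:, 1]
candidates = np.argsort(scores)[::-1][:100]
print(f"Top candidate indices: {candidates[:10]}")
```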

Skoltech scientists have shown that quantum enhanced machine learning can be used on quantum (as opposed to classical) data, overcoming a significant slowdown common to these applications and opening a “fertile ground to develop computational insights into quantum systems.” The paper was published in the journal Physical Review A.

Quantum computers utilize quantum mechanical effects to store and manipulate information. While quantum effects are often claimed to be counterintuitive, it is these effects that enable quantum enhanced calculations to dramatically outperform the best supercomputers. In 2019, the world saw a prototype of this when Google demonstrated quantum computational superiority.

Quantum algorithms have been developed to enhance a range of different computational tasks; more recently, this has grown to include quantum enhanced machine learning. Quantum machine learning was partly pioneered by Skoltech’s Laboratory for Quantum Information Processing, led by Jacob Biamonte, a coauthor of this paper. “Machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is not surprising that quantum computers might outperform classical computers on machine learning tasks,” he says.
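
As a heavily hedged illustration of what “machine learning on quantum data” can look like in practice, here is a minimal variational-circuit sketch using the PennyLane library. The library choice, circuit, and data are assumptions made for the example and have no connection to the methods in the Physical Review A paper.

```python
# Generic sketch: a tiny variational quantum circuit acting on quantum data.
# Library, circuit, and data are illustrative assumptions, not the paper's method.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def model(weights, state):
    # Load a two-qubit state vector -- the "quantum data" -- into the circuit.
    qml.AmplitudeEmbedding(features=state, wires=[0, 1], normalize=True)
    # A small trainable layer whose parameters would be adjusted during learning.
    qml.RY(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # The expectation value serves as the model's output (e.g., a class score).
    return qml.expval(qml.PauliZ(0))

weights = np.array([0.1, -0.2], requires_grad=True)
state = np.array([1.0, 0.0, 0.0, 1.0])  # toy datum: an (unnormalized) Bell-like state
print(model(weights, state))
```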

Computer programming has never been easy. The first coders wrote programs out by hand, scrawling symbols onto graph paper before converting them into large stacks of punched cards that could be processed by the computer. One mark out of place and the whole thing might have to be redone.

Nowadays coders use an array of powerful tools that automate much of the job, from catching errors as you type to testing the code before it’s deployed. But in other ways, little has changed. One silly mistake can still crash a whole piece of software. And as systems get more and more complex, tracking down these bugs gets more and more difficult. “It can sometimes take teams of coders days to fix a single bug,” says Justin Gottschlich, director of the machine programming research group at Intel.
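
As a toy example of the kind of automated checking described above, the snippet below shows a unit test that catches the sort of one-character mistake that can otherwise slip into production. The code is a made-up illustration and has nothing to do with Intel’s machine programming research.

```python
# Toy illustration: an automated test that catches a one-character logic bug
# (e.g., writing `!=` where `==` belongs) before the code ships.
# Hypothetical example, unrelated to Intel's machine programming work.

def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_is_leap_year():
    assert is_leap_year(2020)        # divisible by 4
    assert not is_leap_year(2019)    # not divisible by 4
    assert not is_leap_year(1900)    # century year not divisible by 400
    assert is_leap_year(2000)        # the boundary case a one-character slip would break

if __name__ == "__main__":
    test_is_leap_year()
    print("all tests passed")
```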

Over the past decade, researchers have developed a growing number of deep neural networks that can be trained to complete a variety of tasks, including recognizing people or objects in images. While many of these computational techniques have achieved remarkable results, they can sometimes be fooled into misclassifying data.

An adversarial attack is a type of cyberattack that specifically targets deep neural networks, tricking them into misclassifying data. It does this by creating adversarial data that closely resembles, yet subtly differs from, the data typically analyzed by a deep neural network, prompting the network to make incorrect predictions because it fails to recognize the slight differences between real and adversarial data.
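
To make the mechanism concrete, here is a minimal sketch of one widely known way to craft such adversarial data, the fast gradient sign method (FGSM), written in PyTorch. The article does not name a specific attack, so this is a generic illustration; the model is an untrained placeholder rather than a real detector.

```python
# Minimal FGSM sketch in PyTorch: perturb an input slightly so that it still
# looks like the original but pushes the classifier toward a wrong prediction.
# Generic illustration only; the model below is an untrained placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for a real image
true_label = torch.tensor([3])

# Compute the gradient of the loss with respect to the input image.
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM: take a small step in the direction that *increases* the loss.
epsilon = 0.05  # small enough that the change is hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```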

In recent years, this type of attack has become increasingly common, highlighting the vulnerabilities and flaws of many deep neural networks. One specific variant that has emerged entails the addition of adversarial patches (e.g., logos) to images. This attack has so far primarily targeted models that are trained to detect objects or people in 2-D images.