In February of last year, the San Francisco–based research lab OpenAI announced that its AI system could now write convincing passages of English. Feed the beginning of a sentence or paragraph into GPT-2, as it was called, and it could continue the thought for as long as an essay with almost human-like coherence.

Now, the lab is exploring what would happen if the same algorithm were instead fed part of an image. The results, which were given an honorable mention for best paper at this week’s International Conference on Machine Learning, open up a new avenue for image generation, ripe with opportunity and consequences.
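
Both results rest on the same autoregressive recipe: predict the next token given everything so far, append it, and repeat. Here is a minimal toy sketch of that loop (a stand-in model, not OpenAI's actual network); for GPT-2 the tokens are word pieces, while for the image version each token is a pixel's color value.

```python
import numpy as np

def toy_model(prefix, vocab_size=256):
    """Stand-in for a trained network: return a probability
    distribution over the next token given the prefix. A real
    GPT computes this with a transformer; here it is random
    but deterministic per prefix."""
    rng = np.random.default_rng(abs(hash(tuple(prefix))) % (2**32))
    logits = rng.normal(size=vocab_size)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def complete(prefix, n_new, vocab_size=256, seed=0):
    """Autoregressive completion: sample the next token, append it,
    and condition on the longer sequence -- the same loop that lets
    GPT-2 finish a sentence or the image version finish a picture."""
    rng = np.random.default_rng(seed)
    tokens = list(prefix)
    for _ in range(n_new):
        probs = toy_model(tokens, vocab_size)
        tokens.append(int(rng.choice(vocab_size, p=probs)))
    return tokens

# Text: tokens are word pieces. Images: each token is a pixel value,
# and the "prefix" is, say, the top half of a photo.
print(complete([12, 87, 43], n_new=5))
```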

How do you beat Tesla, Google, Uber, and the entire multi-trillion-dollar automotive industry, with massive brands like Toyota, General Motors, and Volkswagen, to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper.

It’s called Deep Teaching.

Perhaps not surprisingly, it works by taking human effort out of the equation.
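
The details of Deep Teaching haven't been disclosed, but the general trick behind removing human effort is to let the data supervise itself, so no one has to pay for labels. A toy sketch under that assumption, using next-frame prediction on unlabeled video as the free training signal (the frames and the linear model here are hypothetical stand-ins, not the company's method):

```python
import numpy as np

# Hypothetical stand-in for unlabeled dashcam video: 1,000 frames,
# 64 features each. No human has annotated anything here.
frames = np.random.rand(1000, 64)

# Self-supervision: the next frame IS the label, so labeling costs
# nothing -- that zero is where the claimed savings would come from.
X, y = frames[:-1], frames[1:]

# A linear next-frame predictor fit by least squares. A real system
# would use a deep network, but the supervision signal is the same.
W, *_ = np.linalg.lstsq(X, y, rcond=None)
print("prediction error:", float(np.mean((X @ W - y) ** 2)))
```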

Ancient Egyptians used hieroglyphs over four millennia ago to engrave and record their stories. Today, only a select group of people know how to read or interpret those inscriptions.

To read and decipher ancient hieroglyphic writing, researchers and scholars have relied on the Rosetta Stone, an irregularly shaped slab of dark granodiorite.

In 2017, game developer Ubisoft launched an initiative to use AI and machine learning to understand the written language of the Pharaohs.

A new method developed at Cold Spring Harbor Laboratory (CSHL) uses DNA sequencing to efficiently map long-range connections between different regions of the brain. The approach dramatically reduces the cost of mapping brain-wide connections compared to traditional microscopy-based methods.

Neuroscientists need anatomical maps to understand how information flows from one region of the brain to another. “Charting the cellular connections between different parts of the brain—the connectome—can help reveal how the nervous system processes information, as well as how faulty wiring contributes to a range of disorders,” says Longwen Huang, a postdoctoral researcher in CSHL Professor Anthony Zador’s lab. Creating these maps has been expensive and time-consuming, demanding massive efforts that are out of reach for most research teams.

Researchers usually follow neurons’ paths using fluorescent labels, which can highlight how individual cells branch through a tangled neural network to find and connect with their targets. But the palette of fluorescent labels suitable for this work is limited. Researchers can inject different colored dyes into two or three parts of the brain, then trace the connections emanating from those regions. They can repeat this process, targeting new regions, to visualize additional connections. To generate a brain-wide map, this must be done hundreds of times, using new research animals each time.
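
The sequencing method sidesteps that bottleneck with barcoding: each source neuron is tagged with a unique RNA barcode that travels down its axon, so sequencing the dissected target regions recovers every connection from a single brain at once. A toy simulation of that counting logic (the regions, neuron counts, and projections are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 5 brain regions, 300 barcoded source neurons.
regions = ["V1", "M1", "S1", "A1", "PFC"]
n_neurons = 300

# Each neuron carries a unique random barcode plus a home region;
# the barcode is shipped down the axon to wherever the neuron projects.
barcodes = ["".join(rng.choice(list("ACGT"), 20)) for _ in range(n_neurons)]
home = {bc: int(rng.integers(len(regions))) for bc in barcodes}

# Simulate what sequencing each dissected target region recovers:
# the barcodes of every neuron whose axon terminates there.
reads = {r: [] for r in range(len(regions))}
for bc in barcodes:
    targets = rng.choice(len(regions), size=int(rng.integers(1, 4)), replace=False)
    for t in targets:
        reads[int(t)].append(bc)

# One sequencing run yields the whole matrix -- no per-pair dye
# injections, no new animal for every batch of regions.
conn = np.zeros((len(regions), len(regions)), dtype=int)
for target, found in reads.items():
    for bc in found:
        conn[home[bc], target] += 1

print(conn)  # rows: source region, columns: target region
```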

No industry will be spared.

The pharmaceutical business is perhaps the only industry on the planet where taking a product from idea to market requires about a decade and several billion dollars, with roughly a 90% chance of failure. It is very different from the IT business, where only the paranoid survive; it is a business where executives need to plan decades ahead and execute. So when the revolution in artificial intelligence fueled by credible advances in deep learning hit in 2013–2014, pharmaceutical industry executives got interested but did not immediately jump on the bandwagon. Many pharmaceutical companies started investing heavily in internal data science R&D, but without a coordinated strategy it looked more like a re-branding exercise, with many heads of data science, digital, and AI in one organization, and often in one department. And while some pharmaceutical companies invested in AI startups, no sizable acquisitions have been made to date. Most discussions with AI startups started with “Show me a clinical asset in Phase III where you identified a target and generated a molecule using AI” or “How are you different from the myriad of other AI startups?”, often coming from newly minted heads of data science strategy who, in theory, ought to know the market.

However, some pharmaceutical companies have managed to demonstrate very impressive results in individual segments of drug discovery and development. For example, around 2018 AstraZeneca started publishing on generative chemistry, and by 2019 it had published several impressive papers that were noticed by the community. Several other pharmaceutical companies demonstrated notable internal capabilities, and Eli Lilly built an impressive AI-powered robotics lab in cooperation with a startup.

Until now, however, it was not possible to get a comprehensive overview and comparison of the major pharmaceutical companies that claim to be doing AI research and utilizing big data in preclinical and clinical development. On June 15th, an article titled “The upside of being a digital pharma player” was accepted and quietly went online in Drug Discovery Today, a reputable peer-reviewed industry journal. I was notified about the article by Google Scholar because it referenced several of our papers. I was about to dismiss it as just another industry perspective, but then I looked at the author list and saw a group of heavy-hitting academics, industry executives, and consultants: Alexander Schuhmacher from Reutlingen University, Alexander Gatto from Sony, Markus Hinder from Novartis, Michael Kuss from PricewaterhouseCoopers, and Oliver Gassmann from the University of St. Gallen.

Japanese researchers have created a smart face mask that has a built-in speaker and can translate speech into eight different languages.

We live in a world full of technology, but it was a world without smart masks, until now!

The Japanese technology company Donut Robotics has taken the initiative to create the first smart face mask that connects to your phone. Of course, we couldn’t have battled the coronavirus with a simple mask that still does the job of protecting us perfectly well. We as a race need to bring technology into everything, all the more so if it does an array of extremely important, life-saving things like using a speaker to amplify a person’s voice, converting a person’s speech into text, and then translating it into eight different languages through a smartphone app.
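
Donut Robotics hasn’t published the app’s internals, but the pipeline it describes (transcribe speech, then translate the text) maps onto off-the-shelf components. A rough sketch of one leg of that pipeline, assuming the open-source speech_recognition library for speech-to-text and an open MarianMT model for Japanese-to-English translation; neither is the company’s actual stack, and utterance.wav is a hypothetical recording:

```python
import speech_recognition as sr
from transformers import pipeline

# The mask itself is essentially a Bluetooth microphone; the heavy
# lifting happens on the phone. One leg of that pipeline: Japanese
# speech -> Japanese text -> English text. Other language pairs
# would swap in a different translation model.

recognizer = sr.Recognizer()
with sr.AudioFile("utterance.wav") as source:  # hypothetical recording
    audio = recognizer.record(source)

# Step 1: speech-to-text (a cloud STT service, as a phone app might use).
text_ja = recognizer.recognize_google(audio, language="ja-JP")

# Step 2: machine translation with an open MarianMT model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")
text_en = translator(text_ja)[0]["translation_text"]

print(text_ja, "->", text_en)
```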

No one wants to walk with a walker, but age has a way of making people compromise on their quality of life. The team behind Superflex, which spun out of SRI International in May, thinks there could be another way.

The company is building wearable robotic suits, plus other types of clothing, that can make it easier for soldiers to carry heavy loads or for elderly or disabled people to perform basic tasks. A current prototype is a soft suit that fits over most of the body. It delivers a jolt of supporting power to the legs, arms, or torso exactly when needed to reduce the burden of a load or correct for the body’s shortcomings.

A walker is a “very cost-effective” solution for people with limited mobility, but “it completely disempowers, removes dignity, removes freedom, and causes a whole host of other psychological problems,” SRI Ventures president Manish Kothari says. “Superflex’s goal is to remove all of those areas that cause psychological-type encumbrances and, ultimately, redignify the individual.”
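
Superflex hasn’t detailed its control logic, but delivering power “exactly when needed” suggests a threshold-triggered assist loop: stay passive during normal movement, and have the actuators absorb part of the load when a strain sensor reports an effort spike. A toy sketch under that assumption (the threshold, gain, and sensor trace are all invented):

```python
import numpy as np

ASSIST_THRESHOLD = 0.6  # invented: normalized effort level that triggers help
ASSIST_GAIN = 0.5       # invented: fraction of the load the suit absorbs

def control_loop(load_signal):
    """Toy assist loop: watch a (simulated) strain sensor and fire the
    actuators only when effort spikes, so the suit stays out of the way
    during normal movement and helps during lifts or sit-to-stand."""
    for t, load in enumerate(load_signal):
        if load > ASSIST_THRESHOLD:
            print(f"t={t}: load={load:.2f} -> actuators supply {ASSIST_GAIN * load:.2f}")
        else:
            print(f"t={t}: load={load:.2f} -> passive")

# Simulated trace: quiet standing, then a sit-to-stand effort.
control_loop(np.concatenate([np.full(3, 0.2), np.linspace(0.3, 0.9, 4)]))
```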