
Roche and its Genentech subsidiary have committed up to $12 billion to Recursion in return for access to its Recursion Operating System (OS), which will be used to advance therapies across 40 programs that include “key areas” of neuroscience and an undisclosed oncology indication.

Recursion OS applies machine learning and high-content screening methods in what the companies said would be a “transformational” model for tech-enabled target and drug discovery.

The integrated, multi-faceted OS is designed to generate, analyze and glean insights from large-scale proprietary biological and chemical datasets—in this case, extensive single-cell perturbation screening data from Roche and Genentech—by integrating wet-lab and dry-lab biology at scale to phenomically capture chemical and genetic alterations in neuroscience-related cell types and select cancer cell lines.
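The core idea behind phenomic screening of this kind is that each chemical or genetic perturbation can be reduced to an embedding vector summarizing how it changes cell appearance, so that relationships between perturbations fall out of simple vector comparisons. The sketch below is purely illustrative (the embeddings, their size, and the similarity threshold are invented, not details of Recursion OS):

```python
import numpy as np

rng = np.random.default_rng(42)

def cosine_similarity(a, b):
    """Cosine similarity between two phenomic embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings for three perturbations (the 128-dim size is arbitrary).
gene_knockout = rng.normal(size=128)
drug_a = gene_knockout + 0.1 * rng.normal(size=128)  # mimics the knockout's phenotype
drug_b = rng.normal(size=128)                        # unrelated phenotype

sim_related = cosine_similarity(gene_knockout, drug_a)
sim_unrelated = cosine_similarity(gene_knockout, drug_b)
# A compound whose phenomic profile closely matches a gene knockout is a
# candidate modulator of that gene's pathway, which is the kind of
# relationship large-scale perturbation screens are built to surface.
```

In practice the embeddings come from models trained on huge image datasets rather than random vectors, but the map-and-compare structure is the same.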


In an interview with the Financial Times, Elon Musk claimed that no other CEO cares as much about safety as he does.

In a year that has seen his private wealth balloon like never before, Musk has also been showered with titles, beginning with richest person in the world and, more recently, Time magazine’s Person of the Year. The Time accolade is probably one of many titles Musk will collect as he pursues his mission to send humanity to the Moon with his space company, SpaceX.

Before we get there, though, there are some issues with his other company, Tesla, that need addressing. The company’s short history is peppered with incidents that have risked human lives as it pushes the boundaries of autonomous driving. Tesla offers features called Autopilot and Full Self-Driving (FSD), both still in beta, that have been involved in accidents. In August this year, the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) launched an investigation into the Autopilot feature covering some 750,000 Tesla vehicles.

Speaking to the FT, Musk said that he hasn’t misled Tesla buyers about Autopilot or FSD. “Read what it says when you order a Tesla. Read what it says when you turn it on. It’s very, very clear,” said Musk during the interview. He also cited the high safety ratings Tesla cars have achieved, and pointed to NASA’s reliance on SpaceX to send humans into space as evidence of his focus on safety. He went a step further, saying he doesn’t see any other CEO on the planet who cares as much about safety as he does.

Although Musk is spot on about the cars’ high safety ratings, and even about NASA’s faith in SpaceX to ferry its astronauts, the Tesla website does not give the impression that Autopilot or FSD is in beta and cannot be completely relied upon. Rather, a promotional video goes so far as to claim that the person in the driver’s seat is there only for legal reasons, and it shows him without his hands on the steering wheel at all times, a requirement for enabling Autopilot according to Tesla’s own terms.


Controversial facial recognition company, Clearview AI, which has amassed a database of some 10 billion images by scraping selfies off the Internet so it can sell an identity-matching service to law enforcement, has been hit with another order to delete people’s data.

France’s privacy watchdog said today that Clearview has breached Europe’s General Data Protection Regulation (GDPR).

In an announcement of the breach finding, the CNIL also gives Clearview formal notice to stop its “unlawful processing” and says it must delete user data within two months.

Wasn’t it science-fiction writer, futurist, inventor, undersea explorer, and television series host, Arthur C. Clarke, who said, “Any sufficiently advanced technology is indistinguishable from magic?” Yes, in fact, it was! And the same is true today! Technology is magic! And the great thing about living in the future is we get to reap the benefits of all this technological advancement. Who doesn’t want laser-precise internet? Why not take a vacation in outer space? Where is the driverless car taking us? These are the questions we face when we take a look at the future — up close… 15 Emerging Technologies That Will Change Our World.


DeepMind has just publicly released its GPT-3 competitor, a language model called Gopher, which it says outperforms existing language models on the large majority of the benchmark tasks tested, at a much better efficiency level. DeepMind said that larger models are more likely to generate toxic responses when provided with toxic prompts, but they can also more accurately classify toxicity. Model scale does not significantly improve results in areas like logical reasoning and common-sense tasks. The research team found that Gopher’s capabilities exceed those of existing language models on a number of key tasks. This includes the Massive Multitask Language Understanding (MMLU) benchmark, where Gopher demonstrates a significant advance towards human-expert performance over prior work.


That was a key takeaway from a recent conversation between psychologist Daniel Kahneman and MIT professor of brain and cognitive science Josh Tenenbaum at the Conference on Neural Information Processing Systems (NeurIPS). The pair spoke during the virtual event about the shortcomings of human judgment and what we can learn from them while building A.I.

Kahneman, a Nobel Prize winner in economic sciences and the author of Thinking, Fast and Slow, noted an instance in which humans use judgment heuristics—shortcuts, essentially—to answer questions they don’t know the answer to. In the example, people are given a small amount of information about a student: She’s about to graduate, and she was reading fluently when she was 4 years old. From that, they’re asked to estimate her grade point average.

Using this information, many people will estimate the student’s GPA to be 3.7 or 3.8. To arrive there, Kahneman explained, they assign her a percentile on the intelligence scale—usually very high, given what they know about her reading ability at a young age. Then they assign her a GPA in what they estimate to be the corresponding percentile.
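The matching-by-percentile heuristic can be sketched numerically with Python's standard library. The GPA distribution parameters below are illustrative assumptions for the sketch, not figures from the talk:

```python
from statistics import NormalDist

# Illustrative assumption: GPAs are roughly normally distributed with the
# made-up mean and spread below (clipped at 4.0 in reality, ignored here).
gpa_dist = NormalDist(mu=3.0, sigma=0.4)

def matched_gpa(intelligence_percentile: float) -> float:
    """Matching heuristic: carry a percentile on one scale (perceived
    intelligence) directly over to the same percentile on the GPA scale."""
    return gpa_dist.inv_cdf(intelligence_percentile)

# Reading fluently at age 4 places the student very high on the
# intelligence scale, say around the 96th percentile.
estimate = matched_gpa(0.96)  # roughly 3.7, matching the typical answer
```

Kahneman’s broader point in Thinking, Fast and Slow is that this matching is nonregressive: because age-4 reading ability only weakly predicts college GPA, a statistically sound estimate should sit much closer to the mean than the matched value.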

Most often, we recognize deep learning as the magic behind self-driving cars and facial recognition, but what about its ability to safeguard the quality of the materials that make up these advanced devices? Professor of Materials Science and Engineering Elizabeth Holm and Ph.D. student Bo Lei have adopted computer vision methods for microstructural images that not only require a fraction of the data deep learning typically relies on but can save materials researchers an abundance of time and money.

Quality control in materials processing requires the analysis and classification of complex material microstructures. For instance, the properties of some high-strength steels depend on the amount of lath-type bainite in the material. However, identifying bainite in microstructural images is time-consuming and expensive, as researchers must first use two types of microscopy to take a closer look and then rely on their own expertise to identify bainitic regions. “It’s not like identifying a person crossing the street when you’re driving a car,” Holm explained. “It’s very difficult for humans to categorize, so we will benefit a lot from integrating a machine learning approach.”

Their approach is very similar to that of the wider computer-vision community that drives facial recognition. The model is trained on existing material microstructure images to evaluate new images and interpret their classification. While companies like Facebook and Google train their models on millions or billions of images, materials scientists rarely have access to even ten thousand images. Therefore, it was vital that Holm and Lei use a “data-frugal method,” and train their model using only 30–50 microscopy images. “It’s like learning how to read,” Holm explained. “Once you’ve learned the alphabet you can apply that knowledge to any book. We are able to be data-frugal in part because these systems have already been trained on a large database of natural images.”
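The data-frugal recipe described above (reuse a feature extractor already trained on natural images, then fit a very small classifier on the scarce labelled micrographs) can be sketched as follows. Everything here is a stand-in, not the authors' pipeline: a fixed random projection plays the role of the pretrained backbone, and Gaussian arrays play the role of micrographs from two microstructure classes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a CNN backbone pretrained on natural images: a fixed,
# frozen projection from raw pixels to a feature vector. It is never
# updated during "training".
D_PIXELS, D_FEAT = 32 * 32, 64
W_frozen = rng.normal(size=(D_PIXELS, D_FEAT))

def extract_features(images):
    """Frozen feature extractor: linear map plus ReLU."""
    return np.maximum(images @ W_frozen, 0.0)

def make_class(mean, n):
    """Synthetic flattened 'micrographs' for one texture class."""
    return rng.normal(loc=mean, scale=1.0, size=(n, D_PIXELS))

# Data-frugal budget: ~25 labelled images per class, echoing the
# 30-50 image training sets described in the article.
train_X = np.vstack([make_class(0.0, 25), make_class(1.0, 25)])
train_y = np.array([0] * 25 + [1] * 25)

# Only this tiny classifier is trained: one centroid per class.
feats = extract_features(train_X)
centroids = np.stack([feats[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(images):
    """Assign each image to the nearest class centroid in feature space."""
    f = extract_features(images)
    d = np.linalg.norm(f[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

test_X = np.vstack([make_class(0.0, 10), make_class(1.0, 10)])
test_y = np.array([0] * 10 + [1] * 10)
accuracy = (classify(test_X) == test_y).mean()
```

In real use the frozen features would come from a network pretrained on a large natural-image dataset such as ImageNet; the property the sketch preserves is that only the small final stage ever sees the scarce labelled micrographs.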

Stanford’s made a lot of progress over the years with its gecko-inspired robotic hand. In May, a version of the “gecko gripper” even found its way onto the International Space Station to test its ability to perform tasks like collecting debris and fixing satellites.

In a paper published today in Science Robotics, researchers at the university are demonstrating a far more terrestrial application for the tech: picking delicate objects. It’s something that’s long been a challenge for rigid robot hands, leading to a wide range of different solutions, including soft robotic grippers.

The team is showing off FarmHand, a four-fingered gripper inspired by both the dexterity of the human hand and the unique gripping capabilities of geckos. Of the latter, Stanford notes that the adhesive surface “creates a strong hold via microscopic flaps,” exploiting the van der Waals force, a weak intermolecular force that results from subtle differences in the positions of electrons on the outsides of molecules.
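For a rough sense of scale, a standard textbook approximation for the van der Waals attraction between a rounded tip and a flat surface is F = A·R / (6·D²), where A is the Hamaker constant, R the tip radius, and D the separation. The numbers below are generic order-of-magnitude values, not measurements from the paper:

```python
# Sphere-plane van der Waals force: F = A * R / (6 * D**2).
# All values are generic textbook assumptions, not FarmHand data.
A = 1e-19   # Hamaker constant, joules (typical for solids across air)
R = 1e-6    # effective contact-tip radius, metres (micron scale)
D = 3e-10   # separation at contact, metres (a few angstroms)

force_per_contact = A * R / (6 * D**2)  # newtons, well under a micronewton

# A single micro-contact contributes almost nothing; useful grip comes
# from engaging millions of flaps in parallel.
contacts_needed = 1.0 / force_per_contact  # contacts per newton of grip
```

This is why the adhesive only holds when the flaps are engaged flat against the surface: the force falls off as 1/D², so a tiny increase in separation releases the grip.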

A team of researchers affiliated with multiple institutions in Korea has developed a robot hand that has abilities similar to human hands. In their paper published in the journal Nature Communications, the group describes how they achieved a high level of dexterity while keeping the hand’s size and weight low enough to attach to a robot arm.

Creating hands with the dexterity, strength and flexibility of human hands is a challenging task for engineers—typically, some attributes are discarded to allow for others. In this new effort, the researchers developed a new robot hand based on a linkage-driven mechanism that allows it to articulate similarly to the human hand. They began their work by conducting a survey of existing robotic hands and assessing their strengths and weaknesses. They then drew up a list of features they believed their hand should have, such as fingertip force, a high degree of controllability, low cost and high dexterity.

The researchers call their new hand an integrated, linkage-driven dexterous anthropomorphic (ILDA) robotic hand. Like its human counterpart, it has four fingers and a thumb, each with three joints, as well as fingertip sensors. The hand is just 22 centimeters long. Overall, it has 20 joints that give it 15 degrees of freedom; it is also strong, able to exert a crushing force of 34 newtons, yet it weighs just 1.1 kg.
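A linkage-driven design like this means the 20 joints are not all independently actuated: 15 actuated degrees of freedom drive 20 joint angles through fixed mechanical couplings. A minimal way to model that relationship (with a made-up coupling pattern, not the paper's actual linkage geometry) is a linear map from actuator commands to joint angles:

```python
import numpy as np

N_JOINTS, N_DOF = 20, 15  # figures from the paper; the coupling below is invented

# Coupling matrix: each actuated degree of freedom drives one joint
# directly, and the five "extra" joints follow an actuated one at a
# fixed ratio, much as a distal finger joint follows the middle joint.
C = np.zeros((N_JOINTS, N_DOF))
C[:N_DOF, :] = np.eye(N_DOF)
for k in range(N_JOINTS - N_DOF):   # joints 15..19 are mechanically coupled
    C[N_DOF + k, k] = 0.7           # hypothetical 0.7 follow ratio

def joint_angles(actuator_cmd):
    """Map 15 actuator commands to 20 joint angles via the linkage."""
    return C @ actuator_cmd

cmd = np.full(N_DOF, 0.5)   # radians, arbitrary demo command
angles = joint_angles(cmd)  # 20 joint angles from 15 inputs
```

The benefit of under-actuation like this is fewer motors to fit into a 22 cm, 1.1 kg package, at the cost of the coupled joints not being independently controllable.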