
Houston-based ThirdAI, a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding.

Neotribe Ventures, Cervin Ventures and Firebolt Ventures co-led the investment, which will be used to hire additional employees and invest in computing resources, Anshumali Shrivastava, ThirdAI co-founder and CEO, told TechCrunch.

Shrivastava, who has a mathematics background, has long been interested in artificial intelligence and machine learning, and especially in rethinking how AI could be developed more efficiently. While at Rice University, he looked into how to make that approach work for deep learning, and he started ThirdAI in April with some Rice graduate students.

Mathematicians have proved that a geometric object called the Fargues-Fontaine curve can connect arithmetic and geometry. The work is a major advance in one of the most ambitious projects in mathematics.


The grandest project in mathematics has received a rare gift, in the form of a mammoth 350-page paper posted in February that will change the way researchers around the world investigate some of the field’s deepest questions. The work fashions a new geometric object that fulfills a bold, once fanciful dream about the relationship between geometry and numbers.

“This truly opens up a tremendous amount of possibilities. Their methods and constructions are so new they’re just waiting to be explored,” said Tasho Kaletha of the University of Michigan.

The work is a collaboration between Laurent Fargues of the Institute of Mathematics of Jussieu in Paris and Peter Scholze of the University of Bonn. It opens a new front in the long-running “Langlands program,” which seeks to link disparate branches of mathematics — like calculus and geometry — to answer some of the most fundamental questions about numbers.

Summary: Combining artificial intelligence, mathematical modeling, and brain imaging data, researchers shed light on the neural processes that occur when people use mental abstraction.

Source: UCL

By using a combination of mathematical modeling, machine learning and brain imaging technology, researchers have discovered what happens in the brain when people use mental abstractions.

Experimental facilities around the globe are facing a challenge: their instruments are becoming increasingly powerful, leading to a steady increase in the volume and complexity of the scientific data they collect. At the same time, these tools demand new, advanced algorithms to take advantage of these capabilities and enable ever-more intricate scientific questions to be asked—and answered. For example, the ALS-U project to upgrade the Advanced Light Source facility at Lawrence Berkeley National Laboratory (Berkeley Lab) will result in 100 times brighter soft X-ray light and feature superfast detectors that will lead to a vast increase in data-collection rates.

To make full use of modern instruments and facilities, researchers need new ways to decrease the amount of data required and to address data-acquisition rates that humans can no longer keep pace with. A promising route lies in an emerging field known as autonomous discovery, in which algorithms learn from comparatively little input data and decide on the next steps themselves, allowing multi-dimensional parameter spaces to be explored more quickly, more efficiently, and with minimal human intervention.

“More and more experimental fields are taking advantage of this new optimal and autonomous data acquisition because, when it comes down to it, it’s always about approximating some function, given noisy data,” said Marcus Noack, a research scientist in the Center for Advanced Mathematics for Energy Research Applications (CAMERA) at Berkeley Lab and lead author on a new paper on Gaussian processes for autonomous data acquisition published July 28 in Nature Reviews Physics. The paper is the culmination of a multi-year, multinational effort led by CAMERA to introduce innovative autonomous discovery techniques across a broad scientific community.
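The Gaussian-process approach Noack describes is, at its core, an acquisition loop: fit a surrogate model to the measurements taken so far, then measure next wherever the model is least certain. The paper itself is not reproduced here; the following is a minimal self-contained sketch of that loop, using a pure-NumPy GP with an RBF kernel and an uncertainty-based acquisition rule. The kernel parameters and the toy "instrument response" are invented for illustration only.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and standard deviation of a GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_inv = np.linalg.solve(K, np.eye(len(x_train)))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks.T @ K_inv @ y_train
    cov = Kss - Ks.T @ K_inv @ Ks
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x)               # hypothetical hidden instrument response
x_train = np.array([0.1, 0.5, 0.9])
y_train = f(x_train) + 0.01 * rng.standard_normal(3)
x_grid = np.linspace(0.0, 1.0, 200)

for _ in range(5):                         # autonomous acquisition loop
    mean, std = gp_posterior(x_train, y_train, x_grid)
    x_next = x_grid[np.argmax(std)]        # measure where we know least
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next) + 0.01 * rng.standard_normal())

mean, std = gp_posterior(x_train, y_train, x_grid)
print(f"max posterior std after 5 acquisitions: {std.max():.3f}")
```

Each pass through the loop shrinks the model's uncertainty where it was largest, which is why the technique can explore a parameter space with far fewer measurements than a fixed scan.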

Summary: A new study found a person’s math ability was linked to levels of GABA and glutamate in the brain. In children, greater math fluency was associated with higher GABA levels in the left intraparietal sulcus, while lower levels of GABA were linked to math ability in adults. The reverse was true for glutamate in both children and adults.

Source: PLOS

The neurotransmitters GABA and glutamate have complementary roles — GABA inhibits neurons, while glutamate makes them more active.

Nvidia today announced the release of TensorRT 8, the latest version of its software development kit (SDK) designed for AI and machine learning inference. Built for deploying AI models that can power search engines, ad recommendations, chatbots, and more, Nvidia claims that TensorRT 8 cuts inference time in half for language queries compared with the previous release of TensorRT.

Models are growing increasingly complex, and demand is on the rise for real-time deep learning applications. According to a recent O’Reilly survey, 86.7% of organizations are now considering, evaluating, or putting into production AI products. And Deloitte reports that 53% of enterprises adopting AI spent more than $20 million in 2019 and 2020 on technology and talent.

TensorRT essentially tunes a model's mathematical operations to balance the smallest model size against the highest accuracy for the system it will run on. Nvidia claims that TensorRT-based apps perform up to 40 times faster than CPU-only platforms during inference, and that TensorRT 8-specific optimizations allow BERT-Large, one of the most popular Transformer-based models, to run in 1.2 milliseconds.
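Nvidia does not spell out these optimizations here, but one well-known lever behind the size-versus-accuracy balance is reduced numerical precision. The sketch below is not TensorRT code; it only illustrates that tradeoff with symmetric int8 weight quantization on invented values: a 4x smaller representation at the cost of a bounded rounding error.

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.standard_normal((256, 256)).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

size_ratio = quantized.nbytes / weights.nbytes   # int8 vs. float32 storage
max_err = np.abs(weights - dequantized).max()    # bounded by scale / 2
print(f"size ratio: {size_ratio:.2f}, max abs error: {max_err:.4f}")
```

Real deployments typically calibrate the scale per layer or per channel on representative data, precisely to keep this rounding error from degrading accuracy.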

A new set of equations can precisely describe the reflections of the Universe that appear in the warped light around a black hole.

The proximity of each reflection depends on the angle of observation with respect to the black hole and on the rate of the black hole’s spin, according to a mathematical solution worked out by physics student Albert Sneppen of the Niels Bohr Institute in Denmark.

This is really cool in its own right, but it is not just cool: it also potentially gives us a new tool for probing the gravitational environment around these extreme objects.
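The scaling that Sneppen's equations generalize is simplest for a non-rotating (Schwarzschild) black hole, where each successive image of the background sky appears closer to the shadow's edge by a fixed factor. A sketch of that classic relation (the full rotating solution is in Sneppen's paper and is not reproduced here):

```latex
% Non-rotating (Schwarzschild) case: each successive reflected image
% lies closer to the shadow's edge by a constant factor e^{2\pi}.
\[
  \frac{\delta_{n+1}}{\delta_n} \;\approx\; e^{-2\pi} \;\approx\; \frac{1}{535},
\]
% where \delta_n is the angular separation of the n-th reflected image
% from the edge of the black hole's shadow. Sneppen's solution replaces
% the exponent 2\pi with a function of the spin and the viewing angle.
```

That spin dependence is what makes the result a potential probe: measuring the spacing of successive reflections would, in principle, constrain how fast the black hole rotates.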

Circa 2019


MIT’s new mini cheetah robot is the first four-legged robot to do a backflip. At only 20 pounds, the limber quadruped can bend and swing its legs wide, enabling it to walk either right-side up or upside down. The robot can also trot over uneven terrain at about twice the average person’s walking speed. (Learn more: http://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304)


The Massachusetts Institute of Technology is an independent, coeducational, privately endowed university in Cambridge, Massachusetts. Our mission is to advance knowledge; to educate students in science, engineering, and technology; and to tackle the most pressing problems facing the world today. We are a community of hands-on problem-solvers in love with fundamental science and eager to make the world a better place.

The MIT YouTube channel features videos about all types of MIT research, including the robot cheetah, LIGO, gravitational waves, mathematics, and bombardier beetles, as well as videos on origami, time capsules, and other aspects of life and culture on the MIT campus. Our goal is to open the doors of MIT and bring the Institute to the world through video.

A team of researchers affiliated with multiple institutions in China, working at the University of Science and Technology of China, has achieved another milestone in the development of a usable quantum computer. The group has written a paper describing its latest efforts and has uploaded it to the arXiv preprint server.

Back in 2019, a team at Google announced that it had achieved “quantum supremacy” with its Sycamore machine, a 54-qubit processor that carried out a calculation that would have taken a traditional supercomputer approximately 10,000 years to complete. That mark was soon surpassed by other teams, including one at Honeywell and one in China. The team in China used a different technique, one that involved photonic qubits, but its machine was also a one-trick pony. In this new effort, the team in China, again led by Jian-Wei Pan, who also headed the prior effort at the University of Science and Technology of China, has achieved another milestone.

The new effort was conducted with a two-dimensional programmable quantum computer called Zuchongzhi, equipped to run with 66 qubits. In their demonstration, the researchers used only 56 of those qubits to tackle a well-known computational problem: sampling the output distribution of random quantum circuits. The task draws on mathematical analysis, matrix theory, computational complexity and probability theory, and was approximately 100 times more challenging than the one carried out by Sycamore just two years earlier. Prior research has suggested the task set before the Chinese machine would take a conventional computer approximately eight years to complete; Zuchongzhi finished it in less than an hour and a half. The achievement showed that the Zuchongzhi machine is capable of tackling more than just one kind of task.
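Random-circuit sampling itself is easy to state, even though simulating it classically at 56 qubits is intractable: apply layers of random single-qubit rotations interleaved with entangling gates, then sample bitstrings from the resulting output distribution. The toy below is a rough illustration only, at 5 qubits, with the gate choices and layer structure invented rather than taken from the Zuchongzhi paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                     # toy size; Zuchongzhi used 56
dim = 2 ** n

def apply_single(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, q, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    return np.moveaxis(state, 0, q).reshape(dim)

def apply_cz(state, q1, q2):
    """Controlled-Z: flip the sign of amplitudes where both qubits are 1."""
    state = state.reshape([2] * n)
    idx = [slice(None)] * n
    idx[q1] = 1
    idx[q2] = 1
    state[tuple(idx)] *= -1
    return state.reshape(dim)

def random_su2(rng):
    """Haar-random single-qubit unitary via QR decomposition."""
    m = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                            # start in |00000>

for layer in range(8):                    # random rotations + entangling CZs
    for q in range(n):
        state = apply_single(state, random_su2(rng), q)
    for q in range(layer % 2, n - 1, 2):
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
samples = rng.choice(dim, size=1000, p=probs / probs.sum())
print(f"norm: {probs.sum():.6f}, distinct bitstrings: {len(set(samples))}")
```

At 5 qubits the statevector has 32 entries; at 56 qubits it would have about 7 x 10^16, which is why verifying such samples is where the classical cost explodes.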