Last month, our Azure Cognitive Services team, comprising researchers and engineers with expertise in AI, achieved a milestone in commonsense language understanding. When given a question that requires drawing on prior knowledge, along with five answer choices, our latest model, KEAR (Knowledgeable External Attention for commonsense Reasoning), performs better than people answering the same question, where human performance is calculated as the majority vote among five individuals. KEAR reaches an accuracy of 89.4 percent on the CommonsenseQA leaderboard, compared with 88.9 percent human accuracy. While the CommonsenseQA benchmark is in English, we applied a similar technique to multilingual commonsense reasoning and topped the X-CSR leaderboard.
Although recent large deep learning models trained with big data have made significant breakthroughs in natural language understanding, they still struggle with commonsense knowledge about the world, information that we, as people, have gathered in our day-to-day lives over time. Commonsense knowledge is often absent from task input but is crucial for language understanding. For example, take the question “What is a treat that your dog will enjoy?” To select an answer from the choices salad, petted, affection, bone, and lots of attention, we need to know that dogs generally enjoy food such as bones for a treat. Thus, the best answer would be “bone.” Without this external knowledge, even large-scale models may generate incorrect answers. For example, the DeBERTa language model selects “lots of attention,” which is not as good an answer as “bone.”
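The effect of supplying external knowledge can be illustrated with a toy sketch. This is not the actual KEAR architecture; retrieved knowledge text is simply concatenated to the input before the answer choices are scored, and a bag-of-words overlap stands in for the transformer. The question, choices, and knowledge snippet come from the example above.

```python
# Toy illustration of "external knowledge as extra input" (invented for this
# article, NOT the KEAR model): appending a retrieved commonsense fact to the
# question changes which answer choice scores highest.

def score(context: str, choice: str) -> int:
    """Count how many of the choice's words appear in the context."""
    ctx_words = set(context.lower().split())
    return sum(word in ctx_words for word in choice.lower().split())

def answer(question: str, choices: list[str], knowledge: str = "") -> str:
    # External knowledge is simply concatenated to the model input.
    context = question + " " + knowledge
    return max(choices, key=lambda c: score(context, c))

question = "What is a treat that your dog will enjoy?"
choices = ["salad", "petted", "affection", "bone", "lots of attention"]
knowledge = "dogs enjoy food such as a bone for a treat"

print(answer(question, choices))             # no knowledge: no choice matches
print(answer(question, choices, knowledge))  # with knowledge: "bone"
```

With no knowledge, every choice scores zero and the pick is arbitrary; the appended fact is the only thing that makes “bone” win, which mirrors why models lacking external knowledge can choose a plausible-sounding but weaker answer.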
Scientists and institutions dedicate more resources each year to the discovery of novel materials to fuel the world. As natural resources diminish and the demand for higher value and advanced performance products grows, researchers have increasingly looked to nanomaterials.
Nanoparticles have already found their way into applications ranging from energy storage and conversion to quantum computing and therapeutics. But given the vast compositional and structural tunability nanochemistry enables, serial experimental approaches to identify new materials impose insurmountable limits on discovery.
Now, researchers at Northwestern University and the Toyota Research Institute (TRI) have successfully applied machine learning to guide the synthesis of new nanomaterials, reducing the barriers associated with materials discovery. The trained algorithm combed through a defined dataset to accurately predict new structures with potential applications in the clean energy, chemical, and automotive industries.
With the spread of the omicron variant, not everyone can or is eager to travel for the winter break. But what if virtual touch could bring you assurance that you were not alone?
At the USC Viterbi School of Engineering, computer scientist and roboticist Heather Culbertson has been exploring various methods of simulating touch. In a new study, Culbertson, a senior author, along with researchers at Stanford, her alma mater, wanted to see whether two companions (platonic or romantic) could communicate and express care and emotion remotely. People perceive a partner’s true intentions through in-person touch an estimated 57 percent of the time. When interacting with a device that simulated human touch, respondents were able to discern the touch’s intention 45 percent of the time. The devices in this study thus reached roughly 79 percent of in-person accuracy (45/57 ≈ 0.79).
Our sense of touch is unique. In fact, people have a “touch language,” says Culbertson, the WiSE Gabilan Assistant Professor and Assistant Professor of Computer Science and Aerospace and Mechanical Engineering at USC. Thus, she says, creating virtual touch that people can direct toward their loved ones is quite complex: not only do we have differences in our comfort with social touch and levels of “touchiness,” but we also may have distinct ways of communicating different emotions such as sympathy, love, or sadness. The challenge for the researchers was to create an algorithm flexible enough to incorporate the many dimensions of touch.
Lightelligence, a Boston-based photonics company, has revealed the world’s first small-form-factor, photonics-based computing device, one that uses light to perform compute operations. The company claims the unit is “hundreds of times faster than a typical computing unit, such as NVIDIA RTX 3080.” It is 350 times faster, to be exact, but that figure only applies to certain types of applications.
However, the PACE achieves that coveted specialization through a different field of computing, which makes the system not only faster but also far more efficient. While traditional semiconductor systems suffer from the excess heat that results from running current through nanometre-scale features at sometimes ludicrous frequencies, the photonic system processes its workloads with zero Ohmic heating: no heat is produced by current flowing through a resistance. Instead, it’s all about light.
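The Ohmic heating the article refers to is ordinary Joule dissipation: for a current $I$ flowing through a resistance $R$, the power lost as heat is

```latex
P_{\text{dissipated}} = I^{2} R
```

so a signal path that carries photons rather than electrons, with no current through a resistive element, produces no resistive heat at all.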
Lightelligence is built around its CEO’s Ph.D. thesis and the legitimacy it provides. When “Deep Learning with Coherent Nanophotonic Circuits” was published in Nature Photonics in 2017, Lightelligence’s CEO and founder Yichen Shen had already foreseen a path for optical circuits to be at the forefront of machine learning computing. By 2020, the company had received $100 million in funding and employed around 150 people. A year later, Lightelligence has achieved a demo product that it says is “hundreds of times faster than a typical computing unit, such as NVIDIA RTX 3080”: 350 times faster, to be clear.
The PACE’s debut aims to attract enough capital for the company to comfortably reach its goal of launching a pilot AI accelerator product in 2022. Even that, however, is only an intermediate step in the company’s vision: its goal is to develop and distribute a mass-market, photonics-based hardware solution as early as 2023, targeting the cloud AI, finance, and retail markets. Considering that Lightelligence improved on its 2019 COMET design’s performance by a factor of a million with PACE in the span of two years, it will be interesting to see where its efforts lead once the product launches.
WASHINGTON – The National Geospatial-Intelligence Agency has selected a team of commercial and academic partners to build an artificial intelligence system with synthetic data, an effort that will help the agency determine how it builds machine learning algorithms moving forward.
Orbital Insight was issued a Phase II Small Business Innovation Research contract by the NGA, the company announced Dec. 16. It will collaborate with Rendered.ai and the University of California, Berkeley, to develop a computer vision model.
As the organization charged with analyzing satellite imagery for the intelligence community, NGA has put increased emphasis on using AI for its mission. The agency sees human-machine pairing as critical to its success, with machine learning algorithms taking over the rote task of processing the torrent of satellite data to find potential intelligence, freeing human operators for higher-level analysis and tasks.
For all that neural networks can accomplish, we still don’t really understand how they operate. Sure, we can program them to learn, but making sense of a machine’s decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted.
If a model were trying to classify an image of said puzzle, for example, it could encounter well-known but annoying adversarial attacks, or more run-of-the-mill data or processing issues. But a new, subtler type of failure recently identified by MIT scientists is another cause for concern: “overinterpretation,” where algorithms make confident predictions based on details that don’t make sense to humans, like random patterns or image borders.
This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars, or medical diagnostics for diseases that need immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand surroundings and then make quick, safe decisions. In the study, a network used specific backgrounds, edges, or particular patterns of the sky to classify traffic lights and street signs, irrespective of what else was in the image.
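One simple way to probe for this kind of failure can be sketched as follows: occlude everything except an image's border and check whether a classifier remains confident. The masking helper below is invented for illustration (the MIT work instead searched for minimal pixel subsets that preserve the model's confidence); only the masking step is shown, with the classifier left out.

```python
import numpy as np

# Illustrative probe for "overinterpretation": keep only a border of pixels
# and zero out the interior. A model that stays confident on the result is
# relying on border statistics rather than the object itself.
# (Hypothetical helper, not from the MIT study.)

def keep_border(img: np.ndarray, width: int = 4) -> np.ndarray:
    """Return a copy of `img` with everything except a `width`-pixel border zeroed."""
    masked = np.zeros_like(img)
    masked[:width, :] = img[:width, :]    # top rows
    masked[-width:, :] = img[-width:, :]  # bottom rows
    masked[:, :width] = img[:, :width]    # left columns
    masked[:, -width:] = img[:, -width:]  # right columns
    return masked

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
border_only = keep_border(img)

print(border_only[32, 32])              # interior removed -> 0.0
print(border_only[0, 0] == img[0, 0])   # border preserved -> True
```

Feeding `border_only` to a classifier and comparing its confidence against the full image is the kind of sanity check that makes this failure mode visible.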
Researchers at Kobe University and Osaka University have successfully developed artificial intelligence technology that can extract hidden equations of motion from regular observational data and create a model that is faithful to the laws of physics.
This technology could enable researchers to discover the hidden equations of motion behind phenomena for which the laws were considered unexplainable. For example, it may be possible to use physics-based knowledge and simulations to examine ecosystem sustainability.
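The general idea of recovering a hidden equation of motion from observations can be sketched with plain regression over candidate terms. This is not the authors' method (which uses deep learning designed to respect the geometric structure of physics); it is a minimal data-driven stand-in, with the oscillator and candidate library invented for the demo.

```python
import numpy as np

# Hedged sketch of equation-of-motion discovery via regression, NOT the
# Kobe/Osaka method. Hidden ground truth generating the "observations":
# x'' = -omega^2 * x with omega = 2, i.e. x(t) = cos(2t).

dt = 0.001
t = np.arange(0.0, 10.0, dt)
x = np.cos(2.0 * t)                      # simulated observational data

# Estimate acceleration from the trajectory by central finite differences.
acc = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2
xs = x[1:-1]

# Candidate right-hand-side terms: [x, x^3]; least squares picks coefficients.
library = np.column_stack([xs, xs**3])
coef, *_ = np.linalg.lstsq(library, acc, rcond=None)
print(coef)  # ~[-4, 0]: recovers x'' = -4 x, i.e. omega = 2
```

Least squares drives the cubic term's coefficient to zero and recovers the linear restoring force, illustrating how an equation of motion can be read off from trajectory data alone.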
The research group consisted of Associate Professor YAGUCHI Takaharu and Ph.D. student CHEN Yuhan (Graduate School of System Informatics, Kobe University), and Associate Professor MATSUBARA Takashi (Graduate School of Engineering Science, Osaka University).
A black hole laser in analogues of gravity amplifies Hawking radiation, which is unlikely to be measured in real black holes, and makes it observable. There have been proposals to realize such black hole lasers in various systems. However, no progress has been made in electric circuits for a long time, despite their many advantages such as high-precision electromagnetic wave detection. Here we propose a black hole laser in Josephson transmission lines incorporating metamaterial elements capable of producing Hawking-pair propagation modes and a Kerr nonlinearity due to the Josephson nonlinear inductance. A single dark soliton obeying the nonlinear Schrödinger equation produces a black hole-white hole horizon pair that acts as a laser cavity through a change in the refractive index due to the Kerr effect.
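For reference, the defocusing nonlinear Schrödinger equation and its stationary dark ("black") soliton in standard textbook form, using normalized units (the paper's circuit realization will carry its own coefficients):

```latex
i\,\frac{\partial \psi}{\partial t}
  + \frac{1}{2}\,\frac{\partial^{2} \psi}{\partial x^{2}}
  - |\psi|^{2}\,\psi = 0,
\qquad
\psi(x,t) = \psi_{0}\,\tanh(\psi_{0}\,x)\,e^{-i\,\psi_{0}^{2}\,t},
```

where $\psi_{0}$ is the background amplitude. The density dip of such a soliton locally changes the effective refractive index via the Kerr nonlinearity, which is what lets it carve out the horizon pair described above.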