
Microsoft and Intel have a long-standing relationship, which grows stronger today with a new collaboration on a seamless artificial intelligence (AI) and machine learning (ML) experience, from Azure in the cloud to a wide range of high-performance edge devices powered by Intel Movidius vision processing units (VPUs). The collaboration will give developers a more consistent experience across the intelligent cloud and the intelligent edge.
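The announcement does not spell out the tooling, but the cloud-to-edge flow it describes typically amounts to training a model in the cloud, exporting it to a portable format such as ONNX, and running it on an Intel VPU through an accelerated runtime. The sketch below is only an illustration of that general pattern, not the announced product; the model file name and the availability of the OpenVINO execution provider are assumptions.

```python
import numpy as np
import onnxruntime as ort

# Prefer the OpenVINO execution provider (which can target Intel VPUs) when it
# is installed, and fall back to the default CPU provider otherwise.
wanted = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in wanted if p in ort.get_available_providers()]

# "model.onnx" is a placeholder for a network exported from the cloud.
session = ort.InferenceSession("model.onnx", providers=providers)
input_name = session.get_inputs()[0].name

# Run one inference on dummy image-shaped data to check the pipeline end to end.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```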

Ivan Smirnov, Leading Research Fellow of the Laboratory of Computational Social Sciences at the Institute of Education of HSE University, has created a computer model that can distinguish high academic achievers from lower ones based on their social media posts. The prediction model uses a mathematical textual analysis that registers users’ vocabulary (its range and the semantic fields from which concepts are taken), characters and symbols, post length, and word length.

Every word has its own rating (a kind of IQ). Scientific and cultural topics, English words, and longer words and posts rank highly and serve as indicators of good academic performance. An abundance of emojis, words or whole phrases written in capital letters, and vocabulary related to horoscopes, driving, and military service indicate lower grades in school. At the same time, posts do not need to be long: even tweets are informative enough. The study was supported by a grant from the Russian Science Foundation (RSF), and an article detailing the study’s results was published in EPJ Data Science.
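The paper itself is not reproduced here, but the kind of surface features it describes (post length, word length, emoji use, capitalisation) are straightforward to compute. The following sketch is purely illustrative: the feature set, the toy posts, and their labels are assumptions for demonstration, not the study’s data or model.

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOJI = re.compile(r"[\U0001F300-\U0001FAFF]")  # rough emoji codepoint range

def features(post):
    """Surface features loosely inspired by the study's indicators."""
    words = post.split()
    return [
        len(post),                                              # post length
        np.mean([len(w) for w in words]) if words else 0.0,     # mean word length
        len(EMOJI.findall(post)) / max(len(post), 1),           # emoji density
        sum(w.isupper() for w in words) / max(len(words), 1),   # ALL-CAPS share
    ]

# Toy examples with made-up labels (1 = higher achiever, 0 = lower achiever).
posts = [
    "Reading Chekhov tonight and practising English grammar before the exam.",
    "OMG!!! 😂😂😂 MY HOROSCOPE SAYS TOMORROW IS MY LUCKY DAY",
]
X = np.array([features(p) for p in posts])
y = np.array([1, 0])

model = LogisticRegression().fit(X, y)
print(model.predict(X))
```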

Foreign studies have long shown that users’ social media behavior—their posts, comments, likes, profile features, user pics, and photos—can be used to paint a comprehensive portrait of them. A person’s social media behavior can be analyzed to determine their lifestyle, personal qualities, individual characteristics, and even their mental health status. It is also very easy to determine a person’s socio-demographic characteristics, including their age, gender, race, and income. This is where profile pictures, Twitter hashtags, and Facebook posts come in.

Cafe X Robot Coffee Bar in San Francisco employs assembly line-style robots to build your coffee orders for you. This robot barista can make two drinks in under a minute and will get your order right every time.

For More Info:

https://cafexapp.com


Moving from “one algorithm” to “one brain” (a single system that can handle many tasks rather than one) is one of the biggest open challenges in AI. A one-brain AI would still not be a true intelligence, only a better general-purpose AI—Legg’s multi-tool. But whether they’re shooting for AGI or not, researchers agree that today’s systems need to be made more general-purpose, and for those who do have AGI as the goal, a general-purpose AI is a necessary first step.

Humans regularly tackle and solve a variety of complex visuospatial problems. In contrast, most machine learning and computer vision techniques developed so far are designed to solve individual tasks, rather than applying a set of capabilities to any task they are presented with.

Researchers at York University in Canada have been trying to better understand the mechanisms that allow humans to actively observe their environment and solve the wide range of perception tasks that they encounter every day, with the hope of informing the development of more sophisticated computer vision systems. In a paper pre-published on arXiv, they presented a new experimental setup called PESAO (psychophysical experimental setup for active observers), which is specifically designed to investigate how humans actively observe the world around them and engage with it.

“The hallmark of human vision is its generality,” Prof. John K. Tsotsos, one of the researchers who carried out the study, told TechXplore. “The same brain allows one to play tennis, drive a car, perform surgery, view photo albums, read a book, gaze into your loved one’s eyes, go online shopping, solve 1000-piece jigsaw puzzles, find lost keys, chase after his/her young daughter when she appears in danger and so much more. The reality is that as incredible as AI successes have been so far, it is humbling to acknowledge how far there still is to go.”

A pair of statisticians at the University of Waterloo has proposed a mathematical approach that might allow AI systems to be trained without the need for a large dataset. Ilia Sucholutsky and Matthias Schonlau have written a paper describing their idea and published it on the arXiv preprint server.

Artificial intelligence (AI) applications have been the subject of much research lately. With the development of deep learning networks, researchers in a wide range of fields began finding uses for them, including creating deepfake videos, board game applications and medical diagnostics.

Deep learning networks require large datasets in order to detect patterns revealing how to perform a given task, such as picking a certain face out of a crowd. In this new effort, the researchers wondered if there might be a way to reduce the size of the dataset. They noted that children only need to see a couple of pictures of an animal to recognize other examples. Being statisticians, they wondered if there might be a way to use mathematics to solve the problem.
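Their proposal builds on the idea of “soft” labels: instead of one hard-labelled example per class, a handful of carefully placed points can each carry a probability distribution over several classes, so fewer examples than classes can still define a classifier. The numbers below are an invented toy rather than the authors’ construction, but they show the mechanism with a simple distance-weighted soft-label nearest-neighbour rule.

```python
import numpy as np

# Two prototype points on a line jointly encode three classes via soft labels.
prototypes = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # class probabilities attached to the prototype at x = 0
    [0.0, 0.4, 0.6],   # class probabilities attached to the prototype at x = 1
])

def predict(x):
    """Distance-weighted blend of the prototypes' label distributions."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-9
    w = 1.0 / d
    probs = (w[:, None] * soft_labels).sum(axis=0) / w.sum()
    return probs.argmax(), probs.round(3)

for point in [-0.2, 0.5, 1.2]:
    print(point, predict(np.array([point])))
# Points near x = 0 fall in class 0, points near x = 1 in class 2, and the
# region in between goes to class 1, which has no example of its own.
```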

SOUTHLAKE, Texas, Oct. 22, 2020 /PRNewswire/ — Sabre Corporation (NASDAQ: SABR), the leading software and technology company that powers the global travel industry, today announced that Sabre and Google are developing an Artificial Intelligence (AI)-driven technology platform that is an industry first in travel. The technology, known as Sabre Travel AI™, is infused with Google’s state-of-the-art AI technology and advanced machine-learning capabilities, and is designed to help customers deliver highly relevant, personalized content more quickly, better meet the demands of today’s traveler, and create expanded revenue and margin growth opportunities. The Company is integrating Sabre Travel AI into certain products in its existing portfolio, with plans to bring those to market in early 2021.

“Sabre Travel AI is a game-changer. We are proud to be working with Google to build technologies that will seek to re-define the way travel companies do business, and turn the insights derived from analyses into repeatable, scalable operations. The development of Sabre Travel AI marks a milestone in our technology transformation and a significant step toward achieving our 2025 vision of personalized retailing,” said Sundar Narasimhan, president of Sabre Labs. “With the creation of Sabre Travel AI, we are rebuilding our platform on cloud-native, data-driven technology that can be integrated into the existing and future products that Sabre offers. We are combining Google Cloud’s infrastructure, AI and machine-learning capabilities with Sabre’s deep travel domain knowledge to create, not next, but third-generation solutions that we believe are smarter, faster and more cost-effective – a first of its kind in travel.”

Machine learning (ML) is making incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Yet many businesses, eager to capitalize on advancements in ML, have not scrutinized the security of their ML systems. Today, Microsoft, together with MITRE and with contributions from 11 organizations including IBM, NVIDIA, and Bosch, is releasing the Adversarial ML Threat Matrix, an industry-focused open framework that empowers security analysts to detect, respond to, and remediate threats against ML systems.

During the last four years, Microsoft has seen a notable increase in attacks on commercial ML systems. Market reports are also bringing attention to this problem: Gartner’s Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that “Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.” Despite these compelling reasons to secure ML systems, Microsoft’s survey of 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five of the 28 businesses indicated that they don’t have the right tools in place to secure their ML systems, and they are explicitly looking for guidance. This lack of preparation is not limited to smaller organizations: the respondents included Fortune 500 companies, governments, non-profits, and small and mid-sized organizations.

Our survey pointed to marked cognitive dissonance, especially among security analysts, who generally believe that risk to ML systems is a futuristic concern. This is a problem because cyberattacks on ML systems are now on the uptick. For instance, in 2020 we saw the first CVE for an ML component in a commercial system, and SEI/CERT issued its first vulnerability note drawing attention to how many current ML systems can be subjected to arbitrary misclassification attacks that compromise the confidentiality, integrity, and availability of ML systems. The academic community has been sounding the alarm since 2004 and has routinely shown that ML systems, if not mindfully secured, can be compromised.
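One of the best-known threats in this space is the adversarial sample: a tiny, targeted perturbation of an input that flips a model’s prediction. The snippet below is a generic sketch of the fast gradient sign method in PyTorch, included only to make the threat concrete; it is not part of the Adversarial ML Threat Matrix itself, and the model and data are assumed to be supplied by the caller.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Craft an adversarial input with the fast gradient sign method (FGSM).

    model: a differentiable classifier; x: input batch scaled to [0, 1];
    y: the true labels for x.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Nudge every input element slightly in the direction that increases the loss.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```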

The second law of thermodynamics delineates an asymmetry in how physical systems evolve over time, known as the arrow of time. In macroscopic systems, this asymmetry has a clear direction (e.g., one can easily notice if a video showing a system’s evolution over time is being played normally or backward).

In the microscopic world, however, this direction is not always apparent. In fact, fluctuations in microscopic systems can lead to clear violations of the second law, causing the arrow of time to become blurry and less defined. As a result, when watching a video of a microscopic process, it can be difficult, if not impossible, to determine whether it is being played normally or backwards.

Researchers at the University of Maryland developed a machine learning algorithm that can infer the direction of the thermodynamic arrow of time in both macroscopic and microscopic processes. This algorithm, presented in a paper published in Nature Physics, could ultimately help to uncover new physical principles related to thermodynamics.
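The paper’s own models are more involved, but the core task can be phrased very simply: show a learner many trajectories, some played forward and some reversed, and ask it to tell which is which. The toy below is an illustration of that framing under invented assumptions (a drifting random walk and a logistic regression), not the method from the Nature Physics paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def trajectory(n_steps=200, drift=0.05, noise=1.0):
    """A particle drifting in one direction with thermal-like noise."""
    return np.cumsum(drift + noise * rng.standard_normal(n_steps))

X, y = [], []
for _ in range(2000):
    traj = trajectory()
    forward = rng.random() < 0.5
    if not forward:
        traj = traj[::-1]          # play the same "movie" backwards
    X.append(np.diff(traj))        # increments carry the irreversibility signal
    y.append(int(forward))

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("accuracy on training trajectories:", clf.score(np.array(X), np.array(y)))
```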