Real-time health monitoring and sensing abilities in robots require soft electronics, but a key challenge with such materials lies in their reliability. Unlike rigid devices, elastic and pliable materials give less repeatable performance. This variation in a sensor's response is known as hysteresis.
Guided by the theory of contact mechanics, a team of researchers from the National University of Singapore (NUS) came up with a new sensor material that has significantly less hysteresis. This advance enables more accurate wearable health technology and robotic sensing.
The research team, led by Assistant Professor Benjamin Tee from the Institute for Health Innovation & Technology at NUS, published their results in the prestigious journal Proceedings of the National Academy of Sciences on 28 September 2020.
DiCarlo and Yamins, who now runs his own lab at Stanford University, are part of a coterie of neuroscientists using deep neural networks to make sense of the brain’s architecture. In particular, scientists have struggled to understand the reasons behind the specializations within the brain for various tasks. They have wondered not just why different parts of the brain do different things, but also why the differences can be so specific: Why, for example, does the brain have an area for recognizing objects in general but also for faces in particular? Deep neural networks are showing that such specializations may be the most efficient way to solve problems.
Neuroscientists are finding that deep-learning networks, often criticized as “black boxes,” can be good models for the organization of living brains.
In this video, I’m going to talk about how an AI camera mistook a soccer ref’s bald head for the ball. Technology and sports have a fairly mixed relationship already. Log on to Twitter during a soccer match (or football, as it’s properly known) and, as well as people tweeting ambiguous statements like “YESSS” and “oh no mate” to about 20,000 inexplicable retweets, you’ll likely see a lot of complaints about the video assistant referee (VAR) and occasionally goal-line technology not doing its job. Fans of Scottish football team Inverness Caledonian Thistle FC experienced a hilarious new technological glitch during a match last weekend, but in all honesty, you’d be hard-pressed to say it didn’t improve the viewing experience dramatically.
The club announced a few weeks ago that it was moving from human camera operators to cameras controlled by AI. The club proudly announced at the time that the new “Pixellot system uses cameras with in-built, AI, ball-tracking technology” and would be used to capture HD footage of all home matches at Caledonian Stadium, broadcast directly to season-ticket holders’ homes. The AI camera appeared to mistake the linesman’s bald head for the ball for much of the match, repeatedly swinging back to follow the linesman instead of the actual game. Many viewers complained they missed their team scoring a goal because the camera “kept thinking the Lino bald head was the ball,” and some even suggested the club would have to provide the linesman with a toupee or hat.
With no fans allowed in the stadium due to Covid-19 restrictions, supporters of Inverness Caledonian Thistle FC and their opponents Ayr United could only watch via the cameras, and so were treated mostly to a view of the linesman’s head rather than the exciting moments of the match occurring off-camera, though some fans saw this as a bonus given the usual quality of performance.

The cognitive capabilities of current AI architectures are very limited, using only a simplified version of what intelligence is capable of. The human mind, for instance, has come up with ways to reason far beyond formal measure, and to find logical explanations for different occurrences in life. A problem that the human mind finds straightforward may be challenging to solve computationally. This gives rise to two classes of models: structuralist and functionalist. Structuralist models aim to loosely mimic the basic intelligence operations of the mind, such as reasoning and logic. Functionalist models instead focus on correlating data with its computed counterpart. The overall research goal of artificial intelligence is to create technology that allows computers and machines to function intelligently. The general problem of simulating (or creating) intelligence has been broken down into sub-problems: particular traits or capabilities that researchers expect an intelligent system to display. That’s all for today.
Researchers at Stanford University have developed a CRISPR-based “lab on a chip” to detect COVID-19 and are working with automaker Ford to develop their prototype into a market-ready product.
The goal is an automated, handheld device that can deliver a coronavirus test result anywhere within 30 minutes.
In a study published this week in the Proceedings of the National Academy of Sciences, the test spotted active infections quickly and cheaply, using electric fields to purify fluids from a nasal swab sample and drive DNA-cutting reagents within the system’s tiny passages.
Boeing has hired a former SpaceX and Tesla executive with autonomous technology experience to lead its software development team.
Effective immediately, Jinnah Hosein is Boeing’s vice-president of software engineering, a new position that includes oversight of “software engineering across the enterprise”, Boeing says.
“Hosein will lead a new, centralised organisation of engineers who currently support the development and delivery of software embedded in Boeing’s products and services,” the Chicago-based airframer says. “The team will also integrate other functional teams to ensure engineering excellence throughout the product life cycle.”
Another argument for government to bring AI into its quantum computing program is the fact that the United States is a world leader in the development of computer intelligence. Congress is close to passing the AI in Government Act, which would encourage all federal agencies to identify areas where artificial intelligence could be deployed. And government partners like Google are making some amazing strides in AI, even demonstrating a system that can sound convincingly human in phone conversations. It would probably be relatively easy for Google to merge some of its AI development with its quantum efforts.
The other aspect that makes merging quantum computing with AI so interesting is that AI could probably help reduce some of the so-called noise in quantum results. I’ve always said that the way forward for quantum computing right now is pairing a quantum machine with a traditional supercomputer. The quantum computer would return results as it always does, with the correct outcome muddled in among a lot of wrong answers, and humans would then program a traditional supercomputer to help eliminate the erroneous results. The problem with that approach is that it’s fairly labor-intensive, and you still have the bottleneck of running results through a normal computing infrastructure. It would be a lot faster than giving the entire problem to the supercomputer, because you are only fact-checking a limited number of results pared down by the quantum machine, but the supercomputer would still have to work through them one at a time.
But imagine if we could simply train an AI to look at the data coming from the quantum machine and figure out what makes sense and what is probably wrong, without human intervention. If that AI were itself driven by a quantum computer, the results could be returned without any hardware-based delays. And if we also employed machine learning, the AI could improve over time: the more problems it was fed, the more accurate it would get.
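The filtering idea is easy to make concrete. The sketch below is a deliberately simple stand-in for the learned filter described above: everything in it (the 4-bit problem, the shot counts, the noise rate, the threshold) is an illustrative assumption, not real quantum hardware output. It simulates noisy measurement shots where the correct answer merely occurs more often than chance, then flags outcomes whose frequency is far above the uniform-noise baseline.

```python
import random
from collections import Counter

# Toy stand-in for noisy quantum measurement results (assumption:
# the correct 4-bit answer shows up in ~40% of shots, the rest is noise).
random.seed(0)
CORRECT = "1011"

def sample_shot():
    if random.random() < 0.4:
        return CORRECT
    return "".join(random.choice("01") for _ in range(4))

shots = [sample_shot() for _ in range(2000)]

# Post-processing step: instead of a human programming a supercomputer
# to discard wrong answers one by one, a statistical filter keeps only
# the outcomes that occur far more often than pure chance would allow
# (1/16 of shots for a uniform 4-bit noise model).
counts = Counter(shots)
chance_rate = len(shots) / 16
likely_answers = [bits for bits, n in counts.items() if n > 3 * chance_rate]

print(likely_answers)  # only the correct bitstring survives the filter
```

A trained model would replace the hand-set `3 * chance_rate` threshold with a decision rule learned from many past problems, which is exactly the "gets better the more problems it sees" behavior the paragraph above describes.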
Looks like inventory robots won’t be replacing humans in Walmart for now. 😃
I’m a bit sad for the supplier of the robots, but I’m glad that people will keep their jobs at Walmart.
Bitter Reality
Unfortunately, the news was devastating for Bossa Nova, the robotics firm that provided Walmart with its inventory robots. The firm, a Carnegie Mellon University-born startup, laid off half of its staff as it tries to drum up replacement business.
“We see an improvement in stores with the robots, but we don’t see enough of an improvement,” Walmart told Bossa Nova, according to a person familiar with the deal who spoke to the WSJ.
Military observers said the disruptive technologies – those that fundamentally change the status quo – might include such things as sixth-generation fighters, high-energy weapons like laser and rail guns, quantum radar and communications systems, new stealth materials, autonomous combat robots, orbital spacecraft, and biological technologies such as prosthetics and powered exoskeletons.
Speeding up the development of ‘strategic forward-looking disruptive technologies’ is a focus of the country’s latest five-year plan.
EPFL engineers have developed a computer chip that combines two functions—logic operations and data storage—into a single architecture, paving the way to more efficient devices. Their technology is particularly promising for applications relying on artificial intelligence.
It’s a major breakthrough in the field of electronics. Engineers at EPFL’s Laboratory of Nanoscale Electronics and Structures (LANES) have developed a next-generation circuit that allows for smaller, faster and more energy-efficient devices—which would have major benefits for artificial-intelligence systems. Their revolutionary technology is the first to use a 2-D material for what’s called a logic-in-memory architecture, or a single architecture that combines logic operations with a memory function. The research team’s findings appear today in Nature.
Until now, the energy efficiency of computer chips has been limited by the von Neumann architecture they currently use, where data processing and data storage take place in two separate units. That means data must constantly be transferred between the two units, using up a considerable amount of time and energy.
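The cost of that constant shuttling can be seen with a back-of-the-envelope tally. The sketch below is an idealized toy model, not figures from the EPFL paper: it simply counts how many values must cross the memory–processor boundary to compute a dot product when compute and storage are separate units, versus when logic happens where the data lives.

```python
# Toy accounting of bus transfers under the two architectures.
# All counts are illustrative assumptions for a simple dot product.

def transfers_von_neumann(n):
    # Each of the n steps fetches one weight and one input from the
    # separate memory unit; the final result is written back once.
    fetches = 2 * n
    writebacks = 1
    return fetches + writebacks

def transfers_logic_in_memory(n):
    # Logic runs inside the memory array, so in this idealized model
    # only the final result crosses the boundary.
    return 1

n = 1024
print(transfers_von_neumann(n))      # 2049 values moved across the bus
print(transfers_logic_in_memory(n))  # 1 value moved
```

Real chips complicate this with caches and wider buses, but the asymmetry is the point: in the von Neumann model the traffic grows with the size of the computation, while a logic-in-memory design keeps most of it on-chip.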
The Float is a concept car by Yunchen Chai. It won the design competition hosted by Renault and Central Saint Martins. The participants of the competition had to design a car that emphasized electric power, autonomous driving, and connected technology.
The car uses maglev technology, is non-directional, and has a magnetic belt for attaching multiple pods. The Float would even come with a companion app. This could be the future of car design.