Robotics researchers are developing exoskeleton legs that can think and make control decisions on their own, using an artificial-intelligence system called ExoNet.
THE PROBLEM
The current generation of exoskeleton legs must be controlled manually by users via smartphones or joysticks. This creates a problem: the motors' operating mode has to be switched by hand every time the wearer begins a new activity or moves onto different terrain.
By combining a conformable electrical interface, serving as the electrical modulating unit, with a Venus flytrap as the actuating unit, researchers have created a biohybrid actuator that is power-efficient and responsive, and that can be wirelessly controlled from a smartphone.
Energy-efficient light-emitting diodes (LEDs) have been part of everyday life for decades. But the quest for better LEDs, offering both lower costs and brighter colors, has recently drawn scientists to a material called perovskite. A joint research project co-led by a scientist from City University of Hong Kong (CityU) has now developed a 2-D perovskite material for highly efficient LEDs.
From household lighting to mobile phone displays, from the pinpoint illumination needed for endoscopy procedures to light sources for growing vegetables in space, LEDs are everywhere. Yet current high-quality LEDs still need to be processed at high temperatures using elaborate deposition technologies, which makes them expensive to produce.
Scientists have recently realized that metal halide perovskites, semiconductor materials with the same structure as the mineral calcium titanate but a different elemental composition, are extremely promising candidates for next-generation LEDs. These perovskites can be processed into LEDs from solution at room temperature, which greatly reduces production costs. Yet the electroluminescence performance of perovskites in LEDs still has room for improvement.
A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.
Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.
Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
The Apple Car. Quite possibly the most hotly anticipated rumour of this decade. And last decade. Years in the making, and still years from its first appearance, what do we know about the Apple Car?
The late Apple co-founder Steve Jobs was said to be thinking about the company’s involvement in the automotive industry way back in 2008, the era of the iPhone 3G. Fast forward a few years and the Project Titan name began to be thrown around: an Apple project destined to bring autonomous transport to life. More than 1,000 employees were transferred onto the project in its early days.
Apple seemingly put all its eggs into this basket, though in 2016 rumours had it that Project Titan was getting axed. After major staffing changes and leadership issues, the project remains in operation today with John Giannandrea, Apple’s artificial intelligence and machine learning chief, at the wheel.
The State of the Edge report is based on analysis of the potential growth of edge infrastructure from the bottom up across multiple sectors modeled by Tolaga Research. The forecast evaluates 43 use cases spanning 11 vertical industries.
The one thing these use cases have in common is a growing need to process and analyze data at the point where it is being created and consumed. Historically, IT organizations have deployed applications that process data in batch mode overnight. As organizations embrace digital business transformation initiatives, it’s becoming more apparent that data needs to be processed and analyzed at the edge in near real time.
Of course, there are multiple classes of edge computing platforms, ranging from smartphones and internet of things (IoT) gateways to complete hyperconverged infrastructure (HCI) platforms that are being employed to process data at scale at the edge of a telecommunications network.
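The shift from overnight batch jobs to near-real-time processing at the edge can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the threshold, sensor values, and function name are assumptions, not taken from the report): the device handles routine readings locally and forwards only the significant ones upstream.

```python
def edge_filter(readings, threshold=30.0):
    """Keep routine sensor readings local; forward only those above
    a local threshold to the cloud (hypothetical criterion)."""
    return [r for r in readings if r > threshold]

# A batch pipeline would upload all five readings overnight;
# the edge filter uploads only the two that matter, right away.
sensor_readings = [21.5, 35.2, 19.8, 42.0, 28.3]
to_cloud = edge_filter(sensor_readings)
# → [35.2, 42.0]
```

The same pattern scales from an IoT gateway running a threshold check to an HCI cluster running full analytics; what changes is the sophistication of the local decision, not the architecture.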
Most new achievements in artificial intelligence (AI) require very large neural networks. They consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very ‘deep’ network structures. These large, deep neural networks consume a lot of energy. Networks used for image classification (e.g. face and object recognition) are particularly energy-intensive, because in every time cycle they have to send a great many numerical values from one neuron layer to the next with high precision.
Computer scientist Wolfgang Maass, together with his Ph.D. student Christoph Stöckl, has now found a design method for artificial neural networks that paves the way for energy-efficient, high-performance AI hardware (e.g. chips for driver-assistance systems, smartphones and other mobile devices). The two researchers from the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) have optimized artificial neural networks for image classification in computer simulations so that the neurons, much like neurons in the brain, need to send out signals only relatively rarely, and the signals they do send are very simple. The classification accuracy achieved with this design is nevertheless very close to the current state of the art in image classification.
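The idea of neurons firing rarely, with very simple signals, can be illustrated with a toy version of sparse spike coding. This is only a hedged sketch, not the TU Graz method itself: it assumes each spike carries a fixed binary weight 2^-i, so an analog activation in [0, 1) is approximated by a handful of on/off spikes instead of a high-precision number.

```python
def few_spike_encode(x, n_spikes=4):
    """Approximate an activation x in [0, 1) with at most n_spikes
    binary spikes, spike i carrying the fixed weight 2**-(i+1)."""
    spikes, remainder = [], x
    for i in range(1, n_spikes + 1):
        weight = 2.0 ** -i
        fire = 1 if remainder >= weight else 0
        spikes.append(fire)
        remainder -= fire * weight
    return spikes

def few_spike_decode(spikes):
    """Reconstruct the approximate activation from the spike train."""
    return sum(s * 2.0 ** -(i + 1) for i, s in enumerate(spikes))

spikes = few_spike_encode(0.6)     # [1, 0, 0, 1]: only two spikes sent
approx = few_spike_decode(spikes)  # 0.5625, close to 0.6
```

With four spikes the worst-case error is below 2^-4, and accuracy can be traded directly against the number of spikes sent between layers, which is the energy-relevant quantity.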
Today, machine learning permeates everyday life, with millions of users every day unlocking their phones through facial recognition or passing through AI-enabled automated security checks at airports and train stations. These tasks are possible thanks to sensors that collect optical information and feed it to a neural network in a computer.
Scientists in China have presented a new nanoscale AI optical circuit trained to perform unpowered all-optical inference at the speed of light for enhanced authentication solutions. Combining smart optical devices with imaging sensors, the system performs complex functions easily, achieving a neural density equal to 1/400th that of the human brain and a computational power more than 10 orders of magnitude higher than electronic processors.
Imagine empowering the sensors in everyday devices to perform artificial-intelligence functions without a computer, as simply as putting glasses on them. The integrated holographic perceptrons developed by the research team at the University of Shanghai for Science and Technology, led by Professor Min Gu, a foreign member of the Chinese Academy of Engineering, can make that a reality. In the future, their neural density is expected to be 10 times that of the human brain.
Phononic crystals as a nanomechanical computing platform.
Without electronics and photonics, there would be no computers, smartphones, sensors, or information and communication technologies. In the coming years, the new field of phononics may further expand these options. The field is concerned with understanding and controlling lattice vibrations (phonons) in solids. To realize phononic devices, however, lattice vibrations have to be controlled as precisely as is routinely achieved for electrons or photons.