
Wearables and other connected devices have been available to help treat chronic conditions like asthma and heart disease for a while now. But thus far, the nation’s 30 million diabetics haven’t seen much to help them improve their health or reduce the daily grind of finger pricks and needle pokes.

The $2.5 billion connected-care industry may be off to a late start in diabetes, but it’s making up for lost time. A new breed of connected glucometers, insulin pumps and smartphone apps is hitting the market. They promise to make it easier for diabetics to manage the slow-progressing disease and keep them motivated with feedback and support. In as little as two years, the industry plans to take charge of the entire uncomfortable, time-consuming routine of checking and regulating blood-sugar levels with something called an artificial pancreas. Such systems mimic the functions of a healthy pancreas by blending continuous glucose monitoring, remote-controlled insulin pumps and artificial intelligence to maintain healthy blood-sugar levels automatically.
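To make the closed-loop idea concrete, here is a minimal sketch of one control cycle. It is purely illustrative, not a medical algorithm: the read_glucose and deliver_insulin device interfaces, the gain, and the safety cap are all assumptions made for the sake of the example.

```python
# Minimal sketch of one closed-loop "artificial pancreas" control cycle.
# Purely illustrative: real systems use clinically validated dosing models,
# safety interlocks, and regulator-approved hardware. read_glucose() and
# deliver_insulin() are hypothetical device interfaces, not a real API.

TARGET_MG_DL = 110        # desired blood-glucose level (assumed target)
GAIN = 0.01               # proportional dosing factor (made-up number)
MAX_BOLUS_UNITS = 2.0     # hard safety cap on any single dose

def control_step(read_glucose, deliver_insulin):
    glucose = read_glucose()              # continuous glucose monitor reading
    error = glucose - TARGET_MG_DL
    if error > 0:                         # above target: dose some insulin
        bolus = min(error * GAIN, MAX_BOLUS_UNITS)
        deliver_insulin(bolus)            # remote-controlled insulin pump
    # below target, a real system would suspend insulin and alert the user

# Demo with stand-in device functions:
control_step(read_glucose=lambda: 150,
             deliver_insulin=lambda units: print(f"dosing {units:.2f} U"))
```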

For Jeroen Tas, CEO of Philips’ Connected Care and Health Informatics unit, diabetes management is also personal: his daughter Kim is diabetic.

Read more

Virtual and augmented reality are taking giant leaps every day, both in the mainstream and in research labs. In a recent TechEmergence interview, Biomedical Engineer and Founder of g.tec Medical Engineering Christopher Guger said the next big steps will be in brain-computer interfaces (BCIs) and embodiment.

Image credit: HCI International

If you’re unfamiliar with the term, embodiment is the moment when a person truly “feels” at one with a device controlled by their thoughts, while sensing that device as a part of, or an extension of, themselves. While researchers are taking big strides toward that concept, Guger believes those are only baby steps toward what is to come.

While augmented or virtual reality can take us away for a brief period, Guger said true embodiment will require far more BCI development. There has been a lot of work recently in robotic embodiment using BCI.

“We have the robotic system, which is learning certain tasks. You can train the robotic system to pick up objects, to play a musical instrument and, after the robotic system has learned, you’re just giving the high-level command for the robotic system to do it for you,” he said. “This is like a human being, where you train yourself for a certain task and you have to learn it. You need your cortex and a lot of neurons to do the task. Sometimes, it’s pre-programmed and (sometimes) you’re just making the high-level decision to do it.”
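Guger’s description maps onto a familiar software pattern: low-level skills are trained once, then a single high-level decision selects one. A minimal sketch of that dispatch pattern follows; the skill names and functions are hypothetical placeholders, not g.tec’s actual robotic system.

```python
# Sketch of the "high-level command" pattern described above: low-level
# skills are learned (or pre-programmed) once, then a single high-level
# decision selects one. The skill functions are placeholders, not g.tec's
# actual robotic system.
from typing import Callable, Dict

def pick_up_object() -> None:
    print("executing learned grasp sequence")

def play_instrument() -> None:
    print("executing learned note sequence")

# The trained repertoire: each entry stands in for a learned policy.
skills: Dict[str, Callable[[], None]] = {
    "pick_up": pick_up_object,
    "play_music": play_instrument,
}

def execute(command: str) -> None:
    """Dispatch one high-level command, e.g. one decoded from a BCI."""
    if command not in skills:
        raise ValueError(f"no learned skill for command: {command}")
    skills[command]()

execute("pick_up")
```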

Another tool at work in the study of embodiment is what Guger called “virtual avatars.” These virtual avatars allow researchers to experiment with embodiment, learning how avatars need to behave while also helping humans grow more comfortable with the concept of embodiment inside the avatar. Being at ease inside the avatar, he said, makes it easier for one to learn tasks and train, or re-train, for specific functions.

As an example, Guger cited a stroke patient working to regain movement in his hand. Placed inside a virtual avatar, the patient can “see” the avatar’s hand moving in the same manner that he wants his own hand to move. This connection activates mirror neurons in the patient’s brain, which helps the brain rewire itself and regain a sense of the hand.

“We also do functional electrical stimulation (where) the hand is electrically stimulated, so you also get the same type of movement. This, altogether, has a very positive effect on the remobilization of the patient,” Guger said. “Your movement and the virtual movement, that’s all feeding back to the artificial systems in the cortex again and is affecting brain plasticity. This helps people learn to recover faster.”

One hurdle that researchers are still working to overcome is the concept of a “break in presence” (discussed in the article under the sub-heading ‘head-tracking module’). Basically, this is the moment when one’s immersion in a virtual-reality world is interrupted by an outside influence, leading to the loss of embodiment. Avoiding that loss of embodiment, he said, is what researchers are striving for in order to make virtual reality a more effective technology.
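One plausible way a head-tracking module could detect such a break is to watch for tracking dropouts and degrade the scene gracefully instead of freezing. The sketch below is a hypothetical illustration; the timeout value and the callback name are assumptions, not details from the article.

```python
# Hypothetical illustration: flag a "break in presence" when no fresh
# head-tracking sample arrives within a timeout, so the renderer can fade
# to a neutral scene instead of freezing. The 0.1 s threshold and the
# callback name are assumptions, not details from the article.
import time

TRACKING_TIMEOUT_S = 0.1

class PresenceMonitor:
    def __init__(self, on_break):
        self.last_sample = time.monotonic()
        self.on_break = on_break

    def sample_received(self):
        # Called by the tracking driver for every fresh head pose.
        self.last_sample = time.monotonic()

    def check(self):
        # Called once per rendered frame.
        if time.monotonic() - self.last_sample > TRACKING_TIMEOUT_S:
            self.on_break()   # e.g. fade the scene, pause haptics

monitor = PresenceMonitor(on_break=lambda: print("break in presence"))
monitor.check()   # fires only after a real tracking dropout
```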

Though Guger believes mainstream BCI use and true embodiment are still a ways off, other applications of BCI and embodiment are already happening in the medical field. In addition to helping stroke patients regain their mobility, there are BCI systems that allow doctors to assess the brain activity of coma patients, which provides some level of communication for both the patient and the family. Further, ALS patients can take advantage of BCI technology to improve their quality of life through virtual movement and communication.

“For the average person on the street, it’s very important that the BCI system is cheap and working, and it has to be faster or better compared to other devices that you might have,” he said. “The embodiment work shows that you can really be embodied in another device; this is only working if you are controlling it mentally, like the body is your own, because you don’t have to steer the keyboard or the mouse. It’s just your body and it’s doing what you want it to do. And then you gain something.”

There are many opportunities in the VR/AR space for enterprise apps, platforms, and services. Over the years we have all seen opportunities missed because companies did not do a proper value-map assessment and apply their findings to their own product roadmaps. I have created my own value map of VR and AR opportunities across various industries and their business capabilities, and I hope that others have done the same for this technology.

But augmented reality might be the best stepping stone.

Read more

The Internet is full of incredible DIY projects that make you wish you had the years of experience required to build your own Batmobile, flaming Mad Max guitar, or hoverboard. Thankfully, with this underlit miniskirt, we’ve come across a DIY item that looks awesome and is still easy to make.

This wearable was inspired by the Hikaru skirt, a programmable LED miniskirt that took certain corners of the Japanese Internet by storm earlier this year.
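For the curious, driving a skirt’s worth of addressable LEDs takes only a few lines. Here is a minimal CircuitPython-style sketch that assumes 30 NeoPixels wired to pin D6 of a CircuitPython board; it is illustrative only, not the Hikaru skirt’s actual firmware.

```python
# Minimal CircuitPython-style sketch for an underlit LED skirt: cycle a
# strip of addressable LEDs through a few colors. Assumes 30 NeoPixels
# wired to pin D6 of a CircuitPython board; illustrative only, not the
# Hikaru skirt's actual firmware.
import time
import board       # CircuitPython pin definitions
import neopixel    # Adafruit NeoPixel driver

pixels = neopixel.NeoPixel(board.D6, 30, brightness=0.3)

COLORS = [(255, 0, 80), (0, 120, 255), (120, 0, 255)]  # assumed palette

while True:
    for color in COLORS:
        pixels.fill(color)   # light the whole hem in one color
        time.sleep(0.5)
```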

Read more

A team of Stanford researchers has developed a novel means of teaching artificial intelligence systems to predict a human’s response to their actions. They’ve given their knowledge base, dubbed Augur, access to the online writing community Wattpad and its archive of more than 600,000 stories. This information will enable support vector machines (basically, learning algorithms) to better predict what people do in the face of various stimuli.

“Over many millions of words, these mundane patterns [of people’s reactions] are far more common than their dramatic counterparts,” the team wrote in their study. “Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings.”

In its initial field tests, using an Augur-powered wearable camera, the system correctly identified objects and people 91 percent of the time. It correctly predicted their next move 71 percent of the time.
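To illustrate the general idea, here is a toy sketch of a support vector machine mapping a textual context to a likely next action, with three made-up sentences standing in for Wattpad’s 600,000-story corpus. This is not the Stanford team’s actual pipeline.

```python
# Toy sketch of the general idea: a linear SVM maps a textual context to a
# likely next human action. Three made-up sentences stand in for Wattpad's
# 600,000-story corpus; this is not the Stanford team's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

contexts = [
    "she walked into the dark room",
    "his phone rang during the meeting",
    "he paid her a compliment",
]
next_actions = ["turn on the lights", "ignore the phone", "blush"]

model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(contexts, next_actions)

# With real training data this would generalize far better.
print(model.predict(["she stepped into the dark hallway"]))
```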

Read more

K-Glass, the augmented-reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014 and followed by a second version in 2015, is back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, lets users text a message or type in keywords for Internet surfing by offering a virtual keyboard for text, and even one for a piano.

Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery lives, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimized for wearable smart glasses. Recently, gaze recognition was proposed for HMDs including K-Glass 2, but gaze alone is insufficient to realize a natural user interface (UI) and user experience (UX) with features such as gesture recognition, due to its limited interactivity and lengthy gaze-calibration time, which can take up to several minutes.

As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands. This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.
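In software terms, the chip’s job resembles the three-stage pipeline sketched below: stereo pre-processing, deep-learning scene recognition, then rendering, all inside a 33-millisecond frame budget. The stage functions here are placeholders, not KAIST’s firmware.

```python
# Sketch of the three-stage pipeline the K-Glass 3 processor implements in
# silicon: stereo pre-processing, deep-learning scene recognition, then
# rendering, all inside a 33 ms (30 fps) frame budget. The stage functions
# are placeholders, not KAIST's firmware.
import time

FRAME_BUDGET_S = 0.033   # real-time target reported for scene recognition

def preprocess_stereo(frame_pair):
    return {"depth_map": "stereo depth from left/right images"}  # stand-in

def recognize_scene(features):
    return "hand over virtual keyboard"   # stand-in for the CNN cores

def render_overlay(label):
    return f"draw UI for: {label}"        # stand-in for the render engine

def process_frame(frame_pair):
    start = time.monotonic()
    label = recognize_scene(preprocess_stereo(frame_pair))
    overlay = render_overlay(label)
    elapsed = time.monotonic() - start
    assert elapsed < FRAME_BUDGET_S, "missed the real-time deadline"
    return overlay

print(process_frame(("left image", "right image")))
```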

Read more

And this will be only the beginning: with the lighter-weight materials that have been developed, we will see some amazing VR suits coming.


Virtual reality could one day incorporate all the senses, creating a rich and immersive experience, but existing virtual-reality headsets only simulate things you can see and hear. Now, a group of engineers wants to help people “touch” virtual environments in a more natural way, and they have built a wearable suit to do just that.

Designed by Lucian Copeland, Morgan Sinko and Jordan Brooks while they were students at the University of Rochester, in New York, the suit looks something like a bulletproof vest or light armor. Each section of the suit has a small motor in it, not unlike the one that makes a mobile phone vibrate to signal incoming messages. In addition, there are small accelerometers embedded in the suit’s arms.

The vibrations provide a sense of touch when a virtual object hits that part of the body, and the accelerometers help orient the suit’s limbs in space, the researchers said.
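The core feedback loop is easy to picture: when the game engine reports a virtual impact on a body region, the motor in that panel fires with an intensity scaled to the hit. The sketch below is a hypothetical illustration; the region names and the set_motor interface are assumptions, not the students’ actual design.

```python
# Hypothetical illustration of the suit's feedback mapping: a virtual
# impact on a body region drives the motor in that panel, scaled by the
# hit's force. Region names and set_motor() are assumptions, not the
# students' actual interface.
SUIT_REGIONS = {"chest", "back", "left_arm", "right_arm"}

def set_motor(region: str, intensity: float) -> None:
    print(f"{region} motor -> {intensity:.2f}")   # stand-in for hardware I/O

def on_virtual_impact(region: str, force: float) -> None:
    if region not in SUIT_REGIONS:
        return
    # Stronger hits vibrate harder, capped at full power.
    set_motor(region, min(force / 10.0, 1.0))

on_virtual_impact("chest", 6.0)   # a virtual object strikes the chest
```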

Read more

Your smartwatch screen may soon be rather more impressive: this 4.7-inch organic LCD is flexible enough to wrap right around a wrist.

Produced by FlexEnable of the UK, the screen squeezes a full-color organic LCD onto a sheet just one hundredth of an inch thick, which makes it highly conformable. The company claims that it can easily run vivid color and smooth video content, which is a sight better than most wearables.

It’s not the first flexible display, of course. LG already has an 18-inch OLED panel that has enough flexibility to roll into a tube that’s an inch across. But this concept—which, sadly, is all it is right now—is the first large, conformable OLCD designed for wearables that we’ve seen.

Read more

The jury may still be out on the usefulness of the Internet of Things, but payments giant Visa is 100 percent sure that it doesn’t want to miss out. Today, it announced plans to push Visa payments into numerous fields. We’re talking “wearables, automobiles, appliances, public transportation services, clothing, and almost any other connected device” — basically anything that can or will soon connect to the internet.

Visa imagines a future where you’ll be able to pay for parking from your car dashboard or order a grocery delivery from your fridge. It makes sense, then, that Samsung is one of the first companies to sign up to the Visa Ready Program, alongside Accenture, universal payment card company Coin and Fit Pay. Chronos and Pebble are also working to integrate secure payments inside their devices.

To show off the technology, which works with any credit card, Visa or otherwise, the company has teamed up with Honda to develop an in-car app that helps automate payments. Right now they have two demos, the first of which concerns refueling. It warns the driver when their fuel level is low and directs them to the nearest gas station. Once the car arrives at the pump, the app calculates the expected cost and allows the driver to pay for the fuel without having to leave the vehicle.
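As described, the demo’s logic is a simple flow: detect low fuel, route to a station, estimate the cost, pay from the dash. Here is a hedged sketch of that flow; every function name is hypothetical, not Visa’s or Honda’s actual API.

```python
# Hedged sketch of the refueling demo's flow as described: warn on low
# fuel, route to the nearest station, estimate the fill-up cost, then pay
# from the dashboard. Every function here is hypothetical, not Visa's or
# Honda's actual API.
LOW_FUEL_THRESHOLD = 0.15   # warn below 15% of tank capacity (assumed)

def refuel_flow(fuel_level, tank_liters, price_per_liter, pay):
    if fuel_level >= LOW_FUEL_THRESHOLD:
        return
    print("Fuel low: routing to the nearest gas station")
    liters_needed = tank_liters * (1.0 - fuel_level)
    expected_cost = liters_needed * price_per_liter
    print(f"Expected cost: ${expected_cost:.2f}")
    pay(expected_cost)   # tokenized in-dash payment, no card swipe

refuel_flow(0.08, 50.0, 1.45,
            pay=lambda amount: print(f"Paid ${amount:.2f} from the dashboard"))
```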

Read more