AppleInsider Staff
“User interface started with the command prompt, moved to graphics, then touch, and then gestures,” Microsoft research executive Yoram Yaakobi told the Wall Street Journal. “It’s now moving to invisible UI, where there is nothing to operate. The tech around you understands you and what you want to do. We’re putting this at the forefront of our efforts.”
With the push, dubbed “UI.Next,” Microsoft is pursuing a future in which users do not need to tell their device what to do — by touching or speaking to it, for instance — and instead passively consume information that the device has already prepared in anticipation of their needs.
Both Apple and Google have already nodded in this direction, though the technology is far from mature. Apple’s Passbook, for instance, can dynamically surface information such as event tickets based on the user’s location, while Google Now will adjust a user’s schedule based on traffic conditions.