
In an effort to mine precious metals potentially worth trillions of dollars and to aid interstellar travel, China has unveiled plans to build a base on an asteroid "in the near future".

Ye Peijian, the chief commander and designer of the Chinese Lunar Exploration Program, revealed details of a plan that could put an unmanned craft on an asteroid and mine the rock for metals such as palladium and platinum, which are used in items such as smartphones and cars.

“In the near future, we will study ways to send robots or astronauts to mine suitable asteroids and transport the resources back to Earth,” Ye said in comments reported by China Daily.

Read more

Abbott received the European CE Mark and is introducing its Confirm Rx Insertable Cardiac Monitor (ICM). Still sporting St. Jude Medical’s logo, now part of Abbott, the Confirm Rx features wireless Bluetooth connectivity to a paired app on the patient’s smartphone. This allows for transmission of cardiac event data to the patient’s cardiologist from just about anywhere there is cellular connectivity.

Unlike similar devices, the Confirm Rx needs no separate transmitter taking up space near the bed, such as the Merlin system that has typically been employed. Patients can also travel freely without having to bring another dedicated device.

Cardiac monitors such as these are used to detect heart arrhythmias in order to help identify their causes and triggers. Patients have them implanted under the skin in a procedure that takes only a few minutes, and then go about their usual days while being continuously monitored, with data uploading to a central hub on a regular basis.
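One way to picture what such a monitor looks for: the spacing between consecutive heartbeats (the R-R intervals) should be roughly even, and large beat-to-beat variation can signal an arrhythmia. The sketch below is purely illustrative (it is not Abbott's algorithm), with a hypothetical `is_irregular` rule and a made-up 15 percent tolerance.

```python
# Illustrative sketch (not Abbott's detection algorithm): flag a
# possible arrhythmia when beat-to-beat (R-R) intervals vary too much.

def rr_intervals(beat_times_ms):
    """Differences between consecutive heartbeat timestamps (ms)."""
    return [b - a for a, b in zip(beat_times_ms, beat_times_ms[1:])]

def is_irregular(beat_times_ms, tolerance=0.15):
    """Hypothetical rule: irregular if any interval deviates from the
    mean interval by more than `tolerance` (15% by default)."""
    intervals = rr_intervals(beat_times_ms)
    mean = sum(intervals) / len(intervals)
    return any(abs(i - mean) / mean > tolerance for i in intervals)

steady = [0, 800, 1600, 2400, 3200]    # ~75 bpm, evenly spaced beats
erratic = [0, 800, 1400, 2500, 3100]   # uneven spacing

print(is_irregular(steady))   # False
print(is_irregular(erratic))  # True
```

A real device applies far more sophisticated signal processing to the raw electrogram; the point here is only that the implant reduces a continuous signal to events worth transmitting.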

Read more



(Tech Xplore)—RethinkX, an independent think tank that analyzes and forecasts disruptive technologies, has released an astonishing report predicting a far more rapid transition to EV/autonomous vehicles than experts are currently predicting. The report is based on an analysis of the so-called technology-adoption S-curve that describes the rapid uptake of truly disruptive technologies like smartphones and the internet. Additionally, the report addresses in detail the massive economic implications of this prediction across various sectors, including energy, transportation and manufacturing.
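The technology-adoption S-curve the report invokes is typically modeled as a logistic function: adoption starts slowly, accelerates, then saturates. The parameters below are illustrative only, not RethinkX's fitted values.

```python
import math

def adoption(t, midpoint, steepness, ceiling=1.0):
    """Logistic S-curve: fraction of the market adopted at time t."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Illustrative parameters only: adoption crosses 50% five years
# after regulatory approval and rises steeply on either side.
for year in range(0, 11, 2):
    print(year, round(adoption(year, midpoint=5, steepness=1.2), 3))
```

The defining feature of such curves is how deceptively slow the early years look: almost nothing happens before the midpoint, then most of the transition occurs within a few years.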

Rethinking Transportation 2020–2030 suggests that within 10 years of regulatory approval, by 2030, 95 percent of U.S. passenger miles traveled will be served by on-demand autonomous electric vehicles (AEVs). The primary driver of this unfathomably huge change in American life is economics: The cost savings of using transport-as-a-service (TaaS) providers will be so great that consumers will abandon individually owned vehicles. The report predicts that the cost of TaaS will save the average family $5,600 annually, the equivalent of a 10 percent raise in salary. This, the report suggests, will lead to the biggest increase in consumer spending in history.
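The report's two figures, $5,600 in annual savings and "a 10 percent raise," pin down the household income they imply. A quick back-of-the-envelope check:

```python
annual_savings = 5600   # TaaS savings per family, per the report
raise_fraction = 0.10   # "equivalent of a 10 percent raise"

# Income at which $5,600 amounts to exactly a 10% raise:
implied_income = annual_savings / raise_fraction
print(implied_income)  # 56000.0
```

An implied income of $56,000 is roughly in line with U.S. median household income at the time, which is consistent with the report framing the savings around an average family.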

Consumers are already beginning to adapt to TaaS with the broad availability of ride-sharing services; additionally, the report says, Uber, Lyft and Didi are investing billions in developing technologies and services to help consumers overcome psychological and behavioral hurdles to shared transportation, such as habit, fear of strangers and affinity for driving. In 2016, 550,000 passengers chose TaaS services in New York City alone.

Read more

The “WatchSense” prototype uses a small depth camera attached to the arm, mimicking a depth camera on a smartwatch. It could make typing easier, or, in a music program, let the user raise the volume simply by lifting a finger. (credit: Srinath Sridhar et al.)

If you wear a smartwatch, you know how limiting it is to type on or otherwise operate it. Now European researchers have developed an input method that uses a depth camera (similar to the Kinect game controller) to track fingertip touch and location on the back of the hand or in mid-air, allowing for precision control.

The researchers have created a prototype called “WatchSense,” worn on the user’s arm. It captures the movements of the thumb and index finger on the back of the hand or in the space above it. It would also work with smartphones, smart TVs, and virtual-reality or augmented-reality devices, explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.
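The caption's volume example boils down to mapping one quantity the depth camera can measure (a fingertip's height above the hand) onto a control value. The mapping below is hypothetical, not WatchSense's actual code; the millimeter range is invented for illustration.

```python
def finger_height_to_volume(height_mm, min_mm=20, max_mm=220):
    """Map a fingertip's height above the hand (as a depth camera
    might measure it) to a 0-100 volume level, clamped to the
    working range. Hypothetical mapping, not WatchSense's code."""
    clamped = max(min_mm, min(max_mm, height_mm))
    return round(100 * (clamped - min_mm) / (max_mm - min_mm))

print(finger_height_to_volume(20))    # 0   (finger at rest)
print(finger_height_to_volume(120))   # 50  (midway up)
print(finger_height_to_volume(400))   # 100 (clamped at max)
```

The hard part of the research is not this mapping but reliably recovering the fingertip position from noisy depth data in the first place.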

Read more

UK artificial intelligence (AI) startup Babylon has raised $60 million (£47 million) for its smartphone app which aims to put a doctor in your pocket.

The latest funding round, which comes just over a year after the startup’s last fundraise, means that the three-year-old London startup now has a valuation in excess of $200 million (£156 million), according to The Financial Times.

Babylon’s app has been downloaded over a million times, and it allows people in the UK, Ireland, and Rwanda to ask a chatbot a series of questions about their condition without having to visit a GP.

Read more

In the past 10 years, the best-performing artificial-intelligence systems—such as the speech recognizers on smartphones or Google’s latest automatic translator—have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.
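The original McCulloch-Pitts neuron is simple enough to state in a few lines: a unit outputs 1 when the weighted sum of its binary inputs reaches a threshold, which already suffices to build logic gates. A minimal sketch:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 if the weighted sum of
    binary inputs reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be on to reach the threshold.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
# OR gate: either input alone suffices.
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Modern deep networks replace the hard threshold with smooth activation functions and learn the weights from data, but the basic unit is recognizably the same.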

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

Read more

Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustrations with waiting a few extra seconds for our emails to push through don’t mean we have to simply stand by.

To help us make the most of these “micro-moments,” researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a series of apps called “WaitSuite” that test you on vocabulary words during idle moments, like when you’re waiting for an instant message or for your phone to connect to WiFi.

Building on micro-learning apps like Duolingo, WaitSuite aims to leverage moments when a person wouldn’t otherwise be doing anything — a practice that its developers call “wait-learning.”
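The wait-learning idea reduces to wrapping a blocking operation so that the otherwise-dead wait is filled with a flashcard prompt. The sketch below is an illustrative stand-in, not WaitSuite's implementation; the real apps hook into actual wait events like instant messaging and WiFi connection.

```python
import random
import time

# Toy vocabulary deck standing in for WaitSuite's study material.
DECK = {"bonjour": "hello", "merci": "thank you", "chat": "cat"}

def wait_learn(slow_operation):
    """Run a blocking operation, filling the user's wait with a
    vocabulary prompt (a caricature of 'wait-learning')."""
    start = time.monotonic()
    word, meaning = random.choice(list(DECK.items()))
    print(f"While you wait: what does '{word}' mean?")
    result = slow_operation()          # the thing we're waiting on
    print(f"Answer: '{word}' means '{meaning}'.")
    print(f"(wait lasted {time.monotonic() - start:.1f}s)")
    return result

wait_learn(lambda: time.sleep(0.2))  # simulate a 200 ms network wait
```

The design insight is that the prompt is tied to a wait the user was going to endure anyway, so the learning feels free rather than like an interruption.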

Read more

Drawing inspiration from the plant world, researchers have invented a new electrode that could boost our current solar energy storage by an astonishing 3,000 percent.

The technology is flexible and can be attached directly to solar cells — which means we could finally be one step closer to smartphones and laptops that draw their power from the Sun, and never run out.

A major problem with reliably using solar energy as a power source is finding an efficient way to store it for later use without leakage over time.

Read more

I’ve been reading about Gcam, the Google X project that was first sparked by the need for a tiny camera to fit inside Google Glass, before evolving to power the world-beating camera of the Google Pixel. Gcam embodies an atypical approach to photography in seeking to find software solutions for what have traditionally been hardware problems. Well, others have tried, but those have always seemed like inchoate gimmicks, so I guess the unprecedented thing about Gcam is that it actually works. But the most exciting thing is what it portends.

I think we’ll one day be able to capture images without any photographic equipment at all.

Now I know this sounds preposterous, but I don’t think it’s any more so than the internet or human flight might have once seemed. Let’s consider what happens when we tap the shutter button on our cameraphones: light information is collected and focused by a lens onto a digital sensor, which converts the photons it receives into data that the phone can understand, and the phone then converts that into an image on its display. So we’re really just feeding information into a computer.
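The capture chain described above (lens focuses light, sensor converts photons to data, phone converts data to an image) can be caricatured in a few lines. The full-well capacity and 8-bit output here are illustrative assumptions, not any particular phone's specs.

```python
# Caricature of the paragraph's pipeline: a sensor records photon
# counts per pixel; the "phone" maps counts to display values.
def photons_to_image(photon_counts, full_well=1000):
    """Map raw per-pixel photon counts to 0-255 display values,
    saturating at the sensor's full-well capacity (illustrative)."""
    return [[min(255, round(255 * min(c, full_well) / full_well))
             for c in row]
            for row in photon_counts]

sensor_readout = [[0, 250, 500],
                  [750, 1000, 1200]]   # last pixel over-saturated
print(photons_to_image(sensor_readout))
# [[0, 64, 128], [191, 255, 255]]
```

Seen this way, the optics are just one possible front end for feeding light information into a computer, which is exactly the door Gcam-style computational photography pushes open.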

Read more