
According to Astro Teller, the Google self-driving car is “close to graduating from X.” Parsing the meaning of that string of words is a little complicated, but basically it means that Alphabet no longer thinks of self-driving cars as a crazy “moonshot” so much as a thing that’s just about ready to be a standalone business that could actually generate revenue.

If you’re not a close follower of Google, though, more explanation might still be in order. It’s coming, in the form of a segment on tonight’s NBC Nightly News with Lester Holt. They’ll be airing an inside look at X, the division inside Alphabet. That’s the group you know as Google X; after last year’s corporate reorg, we’re all still getting used to the new naming conventions.

Holt interviewed Astro Teller and Obi Felten, who have the cheeky titles “Chief of Moonshots” and “Director of X Foundry,” respectively. It’ll likely be an overview of the projects that X is currently running, including self-driving cars, Project Loon, Project Wing, and Makani. Teller will also be candid about X’s failures. Failure is actually a favorite topic of his; Holt tells us that inside X, “if you have an idea that crashes and burns, they give you a sticker.”

Read more

Well, the US DoE and EPA have already been using AI for a very, very long time to monitor and proactively act on any waste release. How do I know? I was one of the lead architects and developers of the solution.


As computers get smarter, scientists look at new ways to enlist them in environmental protection.

Read more

Nice; however, I also see 3D printing, along with machine learning, becoming part of cosmetic procedures and surgeries.


With an ever-increasing volume of electronic data being collected by the healthcare system, researchers are exploring the use of machine learning—a subfield of artificial intelligence—to improve medical care and patient outcomes. An overview of machine learning and some of the ways it could contribute to advancements in plastic surgery are presented in a special topic article in the May issue of Plastic and Reconstructive Surgery®, the official medical journal of the American Society of Plastic Surgeons (ASPS).

“Machine learning has the potential to become a powerful tool in plastic surgery, allowing surgeons to harness complex clinical data to help guide key clinical decision-making,” write Dr. Jonathan Kanevsky of McGill University, Montreal, and colleagues. They highlight some key areas in which machine learning and “Big Data” could contribute to progress in plastic and reconstructive surgery.

Machine Learning Shows Promise in Plastic Surgery Research and Practice

Machine learning analyzes historical data to develop algorithms capable of knowledge acquisition. Dr. Kanevsky and coauthors write, “Machine learning has already been applied, with great success, to process large amounts of complex data in medicine and surgery.” Projects with healthcare applications include the IBM Watson Health cognitive computing system and the American College of Surgeons’ National Surgical Quality Improvement Program.
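To make that concrete, here is a minimal sketch (mine, not from the article or the journal) of the kind of supervised learning being described: a model "acquires knowledge" by fitting a mapping from historical patient features to an outcome label. All feature names, data and thresholds below are hypothetical.

```python
# Hypothetical example: predict a post-surgical complication from historical records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical historical records: age, BMI, smoker flag, operative time (minutes)
X = np.column_stack([
    rng.normal(45, 12, n),    # age
    rng.normal(27, 4, n),     # BMI
    rng.integers(0, 2, n),    # smoker (0/1)
    rng.normal(120, 30, n),   # operative time
])
# Hypothetical label: 1 = wound-healing complication occurred
y = (0.04 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, n) > 2.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```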

Read more

Economist Robin Hanson says we’re on the brink of a strange new era. Read an excerpt of “The Age of Em: Work, Love, and Life when Robots Rule the Earth” below.

Digital city. Image: Eugene Sergeev / Shutterstock.

What will the next great era be like, after the eras of foraging, farming, and industry?

Read more

The odds are now better than ever that future explorers, both robotic and human, will be able to take samples of the Moon’s hidden interior in deep impact basins like Crisium and Moscoviense. This gives planners more options for where to site the first science colony.


Finding and sampling the Moon’s ancient mantle, one of the science drivers for sending robotic spacecraft and future NASA astronauts to the Moon’s South Pole-Aitken basin, may be just as achievable at similar deep impact basins scattered around the lunar surface.

At least that’s the view reached by planetary scientists who have been analyzing the most recent data from NASA’s Gravity Recovery And Interior Laboratory (GRAIL) and its Lunar Reconnaissance Orbiter (LRO) missions as well as from Japan’s SELENE (Kaguya) lunar orbiter.

The consensus is that the lunar crust is actually thinner than previously thought.

Read more

I do love Nvidia!


During the past nine months, an Nvidia engineering team built a self-driving car with one camera, one Drive-PX embedded computer and only 72 hours of training data. Nvidia published an academic preprint of the DAVE2 project’s results, entitled End to End Learning for Self-Driving Cars, on arXiv.org, which is hosted by the Cornell University Library.

The Nvidia project, called DAVE2, is named after a 10-year-old Defense Advanced Research Projects Agency (DARPA) project known as DARPA Autonomous Vehicle (DAVE). Although neural networks and autonomous vehicles seem like just-invented technologies, researchers such as Google’s Geoffrey Hinton, Facebook’s Yann LeCun and the University of Montreal’s Yoshua Bengio have collaboratively researched this branch of artificial intelligence for more than two decades. And the DARPA DAVE project’s application of neural networks to autonomous vehicles was itself preceded by the ALVINN project, developed at Carnegie Mellon in 1989. What has changed is that GPUs have made building on their research economically feasible.

Neural networks and image recognition applications such as self-driving cars have exploded recently for two reasons. First, the graphics processing units (GPUs) used to render graphics in mobile phones became powerful and inexpensive. GPUs densely packed onto board-level supercomputers are very good at solving massively parallel neural network problems and are inexpensive enough for every AI researcher and software developer to buy. Second, large, labeled image datasets have become available to train massively parallel neural networks implemented on GPUs to see and perceive the world of objects captured by cameras.
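For readers who want to see the shape of the idea, here is a minimal sketch in PyTorch (a framework choice of mine; the preprint does not prescribe it) of an end-to-end steering network: convolutional layers read a single camera frame and a few fully connected layers regress a steering angle. The layer sizes loosely follow the preprint’s description but should be treated as illustrative.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Maps one front-facing camera frame directly to a steering command."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),            # predicted steering angle
        )

    def forward(self, x):                # x: (batch, 3, 66, 200) camera frames
        return self.regressor(self.features(x))

# Training-loop sketch: minimize squared error between the network's output and
# the human driver's recorded steering angles (the "72 hours of training data").
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

frames = torch.randn(8, 3, 66, 200)      # stand-in for real camera frames
angles = torch.randn(8, 1)               # stand-in for recorded steering angles
optimizer.zero_grad()
loss = loss_fn(model(frames), angles)
loss.backward()
optimizer.step()
```

The point of the end-to-end approach is visible in the sketch: there is no hand-coded lane detection or path planner, only pixels in and a steering value out.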

Read more

Closing the instability gap.


(Phys.org)—It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. Errors arise because of the constant interaction between the qubits and their environment, which can result in photon loss, which in turn causes the qubits to randomly flip to an incorrect state.

In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error-correction techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correcting them before they can proliferate. These approaches typically have a very large overhead, in which a large portion of the computing power goes to correcting errors.
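As a rough illustration of that active, measurement-based style (a toy classical simulation, not the physics in Kapit’s paper), a three-copy repetition code can detect and undo a random flip each cycle before it spreads:

```python
# Toy model: a logical bit stored as three copies, with occasional random flips
# standing in for qubit/environment noise, and repeated "syndrome" checks that
# detect and correct the flip.  Names and probabilities are illustrative only.
import random

def encode(bit):
    return [bit, bit, bit]                        # 3-copy repetition code

def apply_noise(bits, p_flip=0.05):
    return [b ^ (random.random() < p_flip) for b in bits]

def correct(bits):
    # Majority vote plays the role of syndrome measurement plus recovery.
    majority = int(sum(bits) >= 2)
    return [majority] * 3

random.seed(1)
logical = encode(1)
for _ in range(1000):                             # repeated correction cycles
    logical = correct(apply_noise(logical))
print("logical bit after 1000 noisy cycles:", logical[0])
```

The constant checking is exactly the overhead described above, and it is what Kapit’s passive, autonomously stabilizing approach aims to avoid.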

In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.

Read more

Excellent read, and a true point about the need for some additional data laws in our ever-exploding, information-overloaded world.


Laws for Mobility, IoT, Artificial Intelligence and Intelligent Process Automation

If you are the VP of Sales, it is quite likely you want and need up-to-date sales numbers, pipeline status and forecasts. If you are meeting with a prospect to close a deal, it is quite likely that having up-to-date business intelligence and CRM information would be useful. Likewise, traveling to a remote job site to check on the progress of an engineering project is an obvious trigger that you will need the latest project information. Developing solutions, integrated with mobile applications, that can anticipate your needs based upon your Code Halo data (the information that surrounds people, organizations, projects, activities and devices) and act upon it automatically is where a large amount of productivity gains will be found in the future.

There needs to be a law, like Moore’s famous law, that states, “The more data that is collected and analyzed, the greater the economic value it has in aggregate.” I believe this law is accurate, and my colleagues at the Center for the Future of Work wrote a book titled Code Halos that documents evidence of it. I would also like to submit an additional law: “Data has a shelf-life, and the economic value of data diminishes over time.” In other words, if I am negotiating a deal today but can’t get the critical business data I need for another week, the data will not be as valuable to me then. The same is true if I am trying to optimize, in real time, the schedules of 5,000 service techs but don’t have up-to-date job status information. Receiving job status information tomorrow does not help me optimize schedules today.
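As a toy illustration of that second, proposed law (my own sketch, not from the article), one might model data value as decaying exponentially with an assumed half-life, so that week-old figures are worth only a fraction of today’s:

```python
# Hypothetical model: the economic value of a piece of data decays exponentially
# with age.  The half-life of three days is an arbitrary assumption.
def data_value(initial_value, age_days, half_life_days=3.0):
    return initial_value * 0.5 ** (age_days / half_life_days)

for age in (0, 1, 3, 7):
    print(f"age {age} days -> value {data_value(100.0, age):.1f}")
```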

Read more