The bio-inspired robot sweats for the same reason we do.
Category: robotics/AI
Dr. Watson will see you now…
IBM Watson is rapidly earning its place in medical diagnosis. It has recently proven its ability to diagnose patients accurately, and far faster than conventional doctors can. The world of medical diagnosis is going to change very rapidly in the next few years.
#crowdfundthecure #aging
Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity.
Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.
So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk?
What I don’t understand is why we haven’t seen more news media, radio, and the like enhanced by AI, or in some cases news-desk anchors replaced by it, especially given how well AI can emulate a person, not to mention AR/VR technology. Could a Bill O’Reilly, a Megyn Kelly, or an MSNBC anchor be replaced by AI in the coming 3 to 5 years? Radio in particular should consider it.
If you’re reading about the US election, some of that news is likely to come to you from a “bot.”
Automated systems known as “bots” or “robo-journalism” have been around for years, but they are playing a bigger role in coverage this year amid technology advances and stretched media resources.
The New York Times, Washington Post, CNN, NBC, Yahoo News and the non-profit Pro Publica are among news organizations using automated technology or messaging bots for coverage in the runup to Tuesday’s vote or on election night.
AI is good for internal back-office work and some limited front-office activities; however, we still need to see more adoption of QC in the network and infrastructure before companies expose their services and information to the public net and infrastructure.
Deep learning, as explained by tech journalist Michael Copeland on Blogs.nvidia.com, is the newest and most powerful computational development thus far, building on prior research in artificial intelligence (AI) and machine learning. At its most fundamental level, Copeland explains, deep learning uses algorithms to peruse massive amounts of data, and then learn from that data to make decisions or predictions. The Defense Advanced Research Projects Agency (DARPA), as Wired reports, calls this method “probabilistic programming.”
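The core idea Copeland describes, learning from labeled data to make predictions, can be shown with a toy sketch: a single artificial neuron (a perceptron) that adjusts its weights from examples. This is an illustrative simplification of my own, not DARPA’s probabilistic programming or any system named in the article.

```python
# Toy illustration of "learning from data to make predictions":
# a single artificial neuron (perceptron) nudges its weights until
# its outputs match the labeled examples. Pure Python, no ML libraries.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with labels 0/1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # 0 when the prediction is right
            w1 += lr * err * x1         # nudge weights toward the data
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Learn the logical AND function from four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)
print([predict(w, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Deep learning stacks many such units into layers and trains them with more sophisticated update rules, but the loop is the same: compare predictions to data, adjust, repeat.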
Mimicking the human brain’s billions of neural connections by creating artificial neural networks was thought to be the path to AI in the early days, but it was too “computationally intensive.” It was the arrival of Nvidia’s powerful graphics processing unit (GPU) that allowed Andrew Ng, a scientist at Google, to create algorithms by “building massive artificial neural networks” loosely inspired by connections in the human brain. This was the breakthrough that changed everything. Now, according to Thenextweb.com, Google’s DeepMind platform has proven able to teach itself without any human input.
In fact, earlier this year an AI named AlphaGo, developed by Google’s DeepMind division, beat Lee Sedol, a world master of the 3,000-year-old Chinese game Go, often described as the most complex game known to exist. AlphaGo’s creators and followers now say this deep-learning AI proves that machines can learn and may even demonstrate something like intuition. This AI victory has changed our world forever.
This is what scares me: autonomous warfare.
WASHINGTON: DARPA is taking another step toward building autonomous electronic warfare systems with a small contract award to BAE Systems.
Artificial intelligence and autonomy loom large in the Pentagon these days. And electronic warfare, much more quietly, dominates a great deal of thinking across the services after we’ve watched how the Russians operate against Ukraine and in Syria. So DARPA’s additional $13.3 million award announced today is worth noting.
Why does all this matter? One of the biggest challenges facing the F-35 program, for example, is the creation of a huge digital threat library (known as mission data files) for the airplane. It includes electronic spectrum information for a wide array of emitters — radar, radio and other sources.
In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.
But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.
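The “automated experiments” the article alludes to often take the form of occlusion or ablation probes: treat the trained model as a black box, knock out one input feature at a time, and measure how much the output changes. The sketch below is a generic illustration of that idea (my own toy stand-in model, not the MIT method, which instead trains the network to produce rationales).

```python
# Sketch of an automated "which features matter?" experiment on a
# black-box model: an occlusion/ablation probe. We only call the model,
# never inspect its internals.

def occlusion_scores(model, x):
    """Score each feature by how much zeroing it changes the model's output."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = 0.0               # knock out one feature
        scores.append(abs(base - model(occluded)))
    return scores

# A stand-in "black box" for demonstration; in practice this would be
# a trained neural network's prediction function.
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[2]      # secretly depends on features 0 and 2

scores = occlusion_scores(black_box, [1.0, 1.0, 1.0])
print(scores)  # → [3.0, 0.0, 0.5]: feature 0 dominates, feature 1 is ignored
```

Probes like this work reasonably well for images, where occluding a patch is meaningful; as the article notes, text is harder, because deleting a word can change a sentence’s meaning rather than merely hiding a feature.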
At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.