Think Java code-completion on steroids
Code boffins at Rice University in Texas have developed a system called Bayou to partially automate the writing of Java code with the help of deep-learning algorithms and training data sampled from GitHub.
There is an enduring fear in the music industry that artificial intelligence will replace the artists we love, and end creativity as we know it.
As ridiculous as this fear may sound, it’s grounded in concrete developments. Last December, an AI-composed song populated several New Music Friday playlists on Spotify, with full support from Spotify execs. An entire startup ecosystem is emerging around services that give artists automated songwriting recommendations, or enable the average internet user to generate customized instrumental tracks at the click of a button.
But AI’s long-term impact on music creation isn’t so cut and dried. In fact, if we as an industry are already thinking so reductively and pessimistically about AI from the beginning, we’re sealing our own fates as slaves to the algorithm. Instead, if we take the long view on how technological innovation has made it progressively easier for artists to realize their creative visions, we can see AI’s genuine potential as a powerful tool and partner, rather than as a threat.
In a talk given today at the American Association for Cancer Research’s annual meeting, Google researchers described a prototype of an augmented reality microscope that could be used to help physicians diagnose patients. When pathologists are analyzing biological tissue to see if there are signs of cancer — and if so, how much and what kind — the process can be quite time-consuming. And it’s a practice that Google thinks could benefit from deep learning tools. But in many places, adopting AI technology isn’t feasible. The company, however, believes this microscope could allow groups with limited funds, such as small labs and clinics, or developing countries to benefit from these tools in a simple, easy-to-use manner. Google says the scope could “possibly help accelerate and democratize the adoption of deep learning tools for pathologists around the world.”
The microscope is an ordinary light microscope, the kind used by pathologists worldwide. Google just tweaked it a little in order to introduce AI technology and augmented reality. First, neural networks are trained to detect cancer cells in images of human tissue. Then, after a slide with human tissue is placed under the modified microscope, the same image a person sees through the scope’s eyepieces is fed into a computer. AI algorithms then detect cancer cells in the tissue, which the system then outlines in the image seen through the eyepieces (see image above). It’s all done in real time and works quickly enough that it’s still effective when a pathologist moves a slide to look at a new section of tissue.
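To make that pipeline concrete, here is a rough, hypothetical sketch of a detect-and-outline loop in Python: grab a frame from a camera in the microscope's optical path, run a pre-trained segmentation model, and draw the detected outlines back onto the live view. The model file, input size, and threshold are assumptions for illustration, not details of Google's prototype.

```python
# Hypothetical sketch of a real-time detect-and-outline loop, in the spirit of
# the augmented-reality microscope described above. The model path, input size,
# and threshold are assumptions, not Google's actual system.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("tumor_segmentation.h5")  # hypothetical model
cap = cv2.VideoCapture(0)  # camera tapping the microscope's optical path

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Resize to the model's expected input and predict a per-pixel tumor probability map.
    inp = cv2.resize(frame, (512, 512)).astype(np.float32) / 255.0
    prob_map = model.predict(inp[None, ...])[0, ..., 0]

    # Threshold the probabilities and trace the outlines of suspicious regions.
    mask = (cv2.resize(prob_map, (frame.shape[1], frame.shape[0])) > 0.5).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Draw the outlines onto the live view, mimicking the overlay seen in the eyepiece.
    overlay = frame.copy()
    cv2.drawContours(overlay, contours, -1, (0, 255, 0), 2)
    cv2.imshow("AR microscope view (sketch)", overlay)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```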
Google today announced a pair of new artificial intelligence experiments from its research division that let web users dabble in semantics and natural language processing. For Google, a company whose primary product is a search engine that traffics mostly in text, these advances in AI are integral to its business and to its goals of making software that can understand and parse elements of human language.
A new website will house these and future interactive AI language tools, and Google is calling the collection Semantic Experiences. The primary sub-field of AI it’s showcasing is known as word vectors, a type of natural language understanding that maps “semantically similar phrases to nearby points based on equivalence, similarity or relatedness of ideas and language.” It’s a way to “enable algorithms to learn about the relationships between words, based on examples of actual language usage,” say Ray Kurzweil, notable futurist and director of engineering at Google Research, and product manager Rachel Bernstein in a blog post. Google has published its work on the topic in a paper, and it’s also made a pre-trained module available on its TensorFlow platform for other researchers to experiment with.
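The blog post doesn't include code, but the idea of semantically similar phrases mapping to nearby points can be sketched with a pre-trained sentence encoder from TensorFlow Hub. The module URL below is an assumption (the article only says a pre-trained module was released on TensorFlow); the phrases are illustrative.

```python
# Minimal sketch of "semantically similar phrases map to nearby points",
# using a pre-trained sentence encoder from TensorFlow Hub. The module URL
# is an assumption; the article only says a pre-trained module was released.
import numpy as np
import tensorflow_hub as hub

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

phrases = [
    "How do I bake bread at home?",
    "What is a simple recipe for homemade bread?",
    "Why is the sky blue?",
]
embeddings = encoder(phrases).numpy()  # one vector per phrase

# Cosine similarity: related phrases score close to 1, unrelated ones lower.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = normed @ normed.T
print(np.round(similarity, 2))
```

The first two phrases land close together in the vector space even though they share few keywords, which is the behavior the experiments are built on.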
The first of the two publicly available experiments released today is called Talk to Books, and it quite literally lets you converse with a machine learning-trained algorithm that surfaces answers to questions with relevant passages from human-written text. As described by Kurzweil and Bernstein, Talk to Books lets you “make a statement or ask a question, and the tool finds sentences in books that respond, with no dependence on keyword matching.” The duo add that, “In a sense you are talking to the books, getting responses which can help you determine if you’re interested in reading them or not.”
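A toy version of that "responds without keyword matching" behavior can be built on the same kind of sentence embeddings: embed the question and a set of candidate book sentences, then return whichever sentence lies nearest in the vector space. The sentences and module URL below are illustrative placeholders, not Google's actual index or service.

```python
# Toy Talk-to-Books-style retrieval: rank candidate book sentences by how
# closely their embeddings match the question's embedding. The sentences and
# module URL are illustrative, not the real service's book index.
import numpy as np
import tensorflow_hub as hub

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

question = "Why do we dream?"
book_sentences = [
    "Dreams may help the brain consolidate memories gathered during the day.",
    "The recipe calls for two cups of flour and a pinch of salt.",
    "Sleep researchers still debate the precise function of dreaming.",
]

vectors = encoder([question] + book_sentences).numpy()
q, candidates = vectors[0], vectors[1:]

# Score by cosine similarity and print the best-responding sentence.
scores = candidates @ q / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(q))
best = int(np.argmax(scores))
print(book_sentences[best], f"(score={scores[best]:.2f})")
```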
Researchers proposed implementing the residential energy scheduling algorithm by training three action dependent heuristic dynamic programming (ADHDP) networks, one for each weather type: sunny, partly cloudy, or cloudy. ADHDP networks are considered ‘smart,’ as their responses can adapt to changing conditions.
“In the future, we expect to have various types of power supplies to every household including the grid, windmills, solar panels and biogenerators. The issues here are the varying nature of these power sources, which do not generate electricity at a stable rate,” said Derong Liu, a professor with the School of Automation at the Guangdong University of Technology in China and an author on the paper. “For example, power generated from windmills and solar panels depends on the weather, and they vary a lot compared to the more stable power supplied by the grid. In order to improve these power sources, we need much smarter algorithms in managing/scheduling them.”
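As a rough illustration of the "one network per weather type" idea, the sketch below gives each weather class its own action network and routes the current household state to the matching one to decide how much to charge or discharge a battery. The state variables, network sizes, and (random) weights are placeholders, not the authors' trained ADHDP models.

```python
# Illustrative sketch of dispatching to one of three per-weather scheduling
# networks, in the spirit of the ADHDP setup described above. The state
# variables, network sizes, and random weights are placeholders, not the
# authors' trained models.
import numpy as np

rng = np.random.default_rng(0)

class ActionNetwork:
    """Tiny two-layer network mapping a household state to a battery action."""

    def __init__(self, state_dim=4, hidden=16):
        self.w1 = rng.normal(scale=0.1, size=(state_dim, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, 1))

    def act(self, state):
        h = np.tanh(state @ self.w1)
        # Output in [-1, 1]: negative = discharge the battery, positive = charge it.
        return (np.tanh(h @ self.w2)).item()

# One action network per weather type, mirroring the three-ADHDP-network design.
schedulers = {
    "sunny": ActionNetwork(),
    "partly cloudy": ActionNetwork(),
    "cloudy": ActionNetwork(),
}

def schedule(weather, state):
    """Route the current state to the network trained for today's weather."""
    return schedulers[weather].act(state)

# Example state: [hour of day / 24, grid price, solar output, battery level].
state = np.array([0.5, 0.12, 0.8, 0.4])
print(schedule("sunny", state))
```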
The details were published in the January 10 issue of the IEEE/CAA Journal of Automatica Sinica, a joint bimonthly publication of the IEEE and the Chinese Association of Automation.
People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes naturally to us humans. However, automatic speech separation, the task of separating an audio signal into its individual speech sources, remains a significant challenge for computers despite being well studied.
In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise. In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed. Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context. We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking.
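To give a feel for what an audio-visual model of this kind looks like, here is a schematic Keras sketch, not the paper's architecture: it fuses an audio spectrogram stream with a per-frame face-embedding stream for the selected speaker and predicts a time-frequency mask that keeps that speaker's speech and suppresses everything else. The tensor shapes and layer sizes are assumptions.

```python
# Schematic audio-visual speech-separation model: fuse audio spectrogram
# features with face-embedding features and predict a time-frequency mask
# for the selected speaker. Shapes and layer sizes are assumptions, not the
# architecture from "Looking to Listen at the Cocktail Party".
import tensorflow as tf
from tensorflow.keras import layers

TIME_STEPS, FREQ_BINS, FACE_DIM = 100, 257, 512

# Audio stream: spectrogram frames of the noisy mixture.
audio_in = layers.Input(shape=(TIME_STEPS, FREQ_BINS), name="mixture_spectrogram")
audio_feat = layers.Conv1D(256, 5, padding="same", activation="relu")(audio_in)

# Visual stream: one face embedding per video frame for the chosen speaker.
face_in = layers.Input(shape=(TIME_STEPS, FACE_DIM), name="face_embeddings")
face_feat = layers.Dense(256, activation="relu")(face_in)

# Fuse the two streams and model temporal context with a bidirectional LSTM.
fused = layers.Concatenate()([audio_feat, face_feat])
context = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(fused)

# Predict a soft mask in [0, 1]; multiplying it with the mixture keeps the
# selected speaker's time-frequency bins and suppresses the rest.
mask = layers.Dense(FREQ_BINS, activation="sigmoid")(context)
enhanced = layers.Multiply()([audio_in, mask])

model = tf.keras.Model([audio_in, face_in], enhanced)
model.summary()
```

The key design point the sketch tries to convey is the conditioning on the visual stream: because the mask depends on the selected face's embeddings, choosing a different face in the video yields a different enhanced audio track from the same mixture.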