
Google today announced a pair of new artificial intelligence experiments from its research division that let web users dabble in semantics and natural language processing. For Google, a company whose primary product is a search engine that traffics mostly in text, these advances in AI are integral to its business and to its goal of making software that can understand and parse elements of human language.

Google is gathering its interactive AI language tools on a single website and calling the collection Semantic Experiences. The primary sub-field of AI it's showcasing is known as word vectors, a type of natural language understanding that maps "semantically similar phrases to nearby points based on equivalence, similarity or relatedness of ideas and language." It's a way to "enable algorithms to learn about the relationships between words, based on examples of actual language usage," write Ray Kurzweil, notable futurist and director of engineering at Google Research, and product manager Rachel Bernstein in a blog post. Google has published its work on the topic in a paper, and it has also made a pre-trained module available on its TensorFlow platform for other researchers to experiment with.
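The core idea behind word and sentence vectors can be sketched in a few lines. The snippet below is a toy illustration, not Google's actual model: the three-dimensional "embeddings" are invented for demonstration, whereas a real encoder such as the pre-trained TensorFlow module produces high-dimensional vectors learned from text. What carries over is the nearest-neighbor ranking by cosine similarity, with no keyword matching involved.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional "embeddings" (made up for illustration; a real
# sentence encoder produces much higher-dimensional vectors).
embeddings = {
    "How do plants make food?":       np.array([0.9, 0.1, 0.0]),
    "Photosynthesis converts light.": np.array([0.8, 0.2, 0.1]),
    "The stock market fell today.":   np.array([0.0, 0.1, 0.9]),
}

query = embeddings["How do plants make food?"]
# Rank the other sentences by semantic closeness to the query.
ranked = sorted(
    (s for s in embeddings if s != "How do plants make food?"),
    key=lambda s: cosine_similarity(query, embeddings[s]),
    reverse=True,
)
print(ranked[0])  # the photosynthesis sentence lands nearest
```

Note that the two semantically related sentences share no important keywords, yet their vectors sit close together; that is the property Talk to Books exploits.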

The first of the two publicly available experiments released today is called Talk to Books, and it quite literally lets you converse with a machine learning-trained algorithm that surfaces answers to questions with relevant passages from human-written text. As described by Kurzweil and Bernstein, Talk to Books lets you “make a statement or ask a question, and the tool finds sentences in books that respond, with no dependence on keyword matching.” The duo add that, “In a sense you are talking to the books, getting responses which can help you determine if you’re interested in reading them or not.”

Read more

Okay, if you’ve got some spare time, check out this amazing website called Stuff in Space. It’s a simulation of every satellite (alive or dead), space station, and large piece of space junk orbiting the Earth right now.

You can zoom in and out, and rotate the Earth and its satellites. Pick any object to discover more information about it. Or just leave it running and watch everything buzz around in real time. Humans have been busy launching a lot of stuff, and the pace is only going to increase.

The simulation was made by James Yoder, an incoming Electrical and Computer Engineering freshman at the University of Texas at Austin, and it's based on data supplied by Space Track, which is a service of the Joint Space Operations Center. They have a bunch of handy data feeds and APIs that you can use to track orbital objects, but I've never seen anything as creative as this.
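Those data feeds typically serve standard two-line element sets (TLEs). As a rough sketch of what you can do with one, the snippet below derives an orbital period from the mean-motion field of a TLE's second line. The sample line is illustrative (its values resemble a low-Earth-orbit satellite like the ISS), and this is not Space Track's own client library, just a demonstration of the fixed-column format.

```python
def orbital_period_minutes(tle_line2: str) -> float:
    """Derive the orbital period from the mean-motion field of TLE line 2.

    Mean motion (revolutions per day) occupies columns 53-63 of the
    second line of a standard two-line element set.
    """
    mean_motion = float(tle_line2[52:63])  # revs per day
    return 1440.0 / mean_motion            # minutes per day / revs per day

# Sample TLE line 2 (values illustrative of a low-Earth-orbit satellite).
line2 = "2 25544  51.6416 247.4627 0006703 130.5360 325.0288 15.72125391563537"
period = orbital_period_minutes(line2)
print(round(period, 1))  # roughly 91.6 minutes at ~15.7 revs/day
```

A satellite circling the Earth every hour and a half is what makes the real-time view so lively: the objects visibly sweep across the globe as you watch.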

Read more

DIYers can bioprint living human organs by modifying an off-the-shelf 3D printer costing about $500, say researchers who published the plans as open source, enabling anyone to build their own system. [This article first appeared on LongevityFacts. Author: Brady Hartman.]

Scientists at Carnegie Mellon University (CMU) developed a low-cost 3D bioprinter to print living tissue by modifying a standard desktop 3D printer and released the design as open source so that anyone can build their own system.

The biomedical engineering team, led by CMU Associate Professor Adam Feinberg, Ph.D., and BME postdoctoral fellow TJ Hinton, Ph.D., has just published a paper in the journal HardwareX describing the low-cost 3D bioprinter. The article contains complete instructions for modifying nearly any commercial plastic printer, as well as for printing and installing the syringe-based, large-volume extruder.

Read more

“Although smartphones and tablets are ubiquitous, many of the companies that make our everyday consumer products still rely on paper trails and manually updated spreadsheets to keep track of their production processes and delivery schedules,” says Leyuan Shi, a professor of industrial and systems engineering at the University of Wisconsin-Madison.

That’s what she hopes to change with a research idea she first published almost two decades ago.

During the past 16 years, Shi has visited more than 400 companies in the United States, China, Europe, and Japan to personally observe their production processes. “And I have used that insight to develop tools that can make these processes run much more smoothly,” she says.

Read more

Engineering and construction is behind the curve in implementing artificial intelligence solutions. Based on extensive research, we survey applications and algorithms to help bridge the technology gap.

The engineering and construction (E&C) sector is worth more than $10 trillion a year. And while its customers are increasingly sophisticated, it remains severely underdigitized. To lay out the landscape of technology, we conducted a comprehensive study of current and potential use cases in every stage of E&C, from design to preconstruction to construction to operations and asset management. Our research revealed a growing focus on technological solutions that incorporate artificial intelligence (AI)-powered algorithms. These emerging technologies focus on helping players overcome some of the E&C industry’s greatest challenges, including cost and schedule overruns and safety concerns.

Read more

Mitsubishi Hitachi Power Systems (MHPS) and Carnegie Mellon University (CMU) today announced the release of the 2018 Carnegie Mellon Power Sector Carbon Index, at CMU Energy Week, hosted by the Wilton E. Scott Institute for Energy Innovation. The Index tracks the environmental performance of U.S. power producers and compares current emissions to more than two decades of historical data collected nationwide. This release marks the one-year anniversary of the Index, developed as a new metric to track power sector carbon emissions performance trends.

“The Carnegie Mellon Power Sector Carbon Index provides a snapshot of critical data regarding energy production and environmental performance,” said Costa Samaras, Assistant Professor of Civil and Environmental Engineering. “We’ve found this index to provide significant insight into trends in generation and emissions. In particular, the data have shown that emissions intensity has fallen to the lowest level on record, as a combination of natural gas and renewable power have displaced more intensive coal-fired power generation.”

The latest data revealed the following findings: U.S. power plant emissions averaged 967 lb CO2 per megawatt hour (MWh) in 2017, down 3.1 percent from the prior year and down 26.8 percent from the annual value of 1,321 lb CO2 per MWh in 2005. The 2016 result was initially reported as 1,001 lb CO2/MWh but was later revised downward to 998 lb CO2/MWh.
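Those percentage declines follow directly from the reported values; a quick sanity check using the figures quoted above confirms the arithmetic:

```python
def percent_decline(old: float, new: float) -> float:
    """Percentage drop from old to new."""
    return (old - new) / old * 100

emissions_2005 = 1321  # lb CO2/MWh, from the Index release
emissions_2016 = 998   # revised 2016 value
emissions_2017 = 967

print(round(percent_decline(emissions_2016, emissions_2017), 1))  # 3.1
print(round(percent_decline(emissions_2005, emissions_2017), 1))  # 26.8
```

Note that the year-over-year figure is consistent only against the revised 998 lb CO2/MWh value, not the initially reported 1,001.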

Read more

According to the United Nations, 30,000 people die each week from the consumption and use of unsanitary water. Although the vast majority of these fatalities occur in developing nations, the U.S. is no stranger to unanticipated water shortages, especially after hurricanes, tropical storms and other natural disasters that can disrupt supplies without warning.

Led by Guihua Yu, associate professor of materials science and mechanical engineering at The University of Texas at Austin, a research team in UT Austin's Cockrell School of Engineering has developed a cost-effective and compact technology using combined gel-polymer hybrid materials. Possessing both hydrophilic (water-attracting) qualities and semiconducting (solar-absorbing) properties, these "hydrogels" (networks of polymer chains known for their high water absorbency) enable the production of clean, safe drinking water from any source, whether the oceans or contaminated supplies.

The findings were published in the most recent issue of the journal Nature Nanotechnology.

Read more

Rumors of commercial quantum computing systems have been coming hot and heavy these past few years, but there are still a number of issues to work out in the technology. For example, researchers at the Moscow Institute of Physics and Technology have begun using silicon carbide to create a system that releases single photons in ambient (i.e., room-temperature) conditions. To maintain security, quantum systems need to output quantum bits, essentially single photons. This currently requires supercooled materials that prove unworkable in the real world. From the release:

Photons — the quanta of light — are the best carriers for quantum bits. It is important to emphasize that only single photons can be used, otherwise an eavesdropper might intercept one of the transmitted photons and thus get a copy of the message. The principle of single-photon generation is quite simple: An excited quantum system can relax into the ground state by emitting exactly one photon. From an engineering standpoint, one needs a real-world physical system that reliably generates single photons under ambient conditions. However, such a system is not easy to find. For example, quantum dots could be a good option, but they only work well when cooled below −200 degrees Celsius, while the newly emerged two-dimensional materials, such as graphene, are simply unable to generate single photons at a high repetition rate under electrical excitation.

Silicon carbide was used in early LEDs and has been used to create electroluminescent electronics in the past. This new system will allow manufacturers to place silicon carbide emitters right on quantum computer chips, a massive improvement over the complex systems used today.

Read more

Perfect vision is great. But like any advantage, it comes with limitations. Those who see with ease don't develop the same unique senses and strengths as someone who must overcome obstacles, people like Lana Awad, a neurotech engineer at CTRL-labs in New York, who as a teenager in Syria diagnosed her own degenerative eye disease with a high school science textbook and went on to teach at Harvard University.

Though they see themselves as clear leaders, visionaries with all the obvious advantages, like Elon Musk and Mark Zuckerberg, can be blind in their own way, lacking the context needed to guide us if they don't recognize their counterintuitive limitations. This is problematic for humanity because we're all relying on them to create the tools that increasingly rule every aspect of our lives. The internet is just the start.

Tools that will meld mind and machine are already a reality. Neurotech is a huge business, with applications being developed for gaming, the military, medicine, social media, and much more to come. Neurotech Report projected in 2016 that the $7.6 billion market could reach $12 billion by 2020. Wired magazine called 2017 "a coming-out year for the brain-machine interface (BMI)."

Read more

Researchers from the School of Informatics, Computing, and Engineering are part of a group that has received a multi-million dollar grant from IU's Emerging Areas of Research program.

Amr Sabry, a professor of informatics and computing and the chair of the Department of Computer Science, and Alexander Gumennik, assistant professor of Intelligent Systems Engineering, are part of the "Center for Quantum Information Science and Engineering" initiative led by Gerardo Ortiz, a professor of physics in IU's College of Arts and Sciences. The initiative will focus on harnessing the power of quantum entanglement, a quantum-mechanical phenomenon in which the quantum state of two or more particles must be described in reference to one another, even if the objects are spatially separated.

“Bringing together a unique group of physicists, computer scientists, and engineers to solve common problems in quantum sensing and computation positions IU at the vanguard of this struggle,” Gumennik said. “I believe that this unique implementation approach, enabling integration of individual quantum devices into a monolithic quantum computing circuit, is capable of taking the quantum information science and engineering to a qualitatively new level.”

Read more