The same laser system being developed to blast tiny spacecraft between the stars could also launch human missions to Mars, protect Earth from dangerous asteroids and help get rid of space junk, project leaders say.
Last month, famed physicist Stephen Hawking and other researchers announced Breakthrough Starshot, a $100 million project that aims to build prototype light-propelled “wafersats” that could reach the nearby Alpha Centauri star system just 20 years after launch.
The basic idea behind Breakthrough Starshot has been developed primarily by astrophysicist Philip Lubin of the University of California, Santa Barbara, who has twice received funding from the NASA Innovative Advanced Concepts (NIAC) program to develop the laser propulsion system.
SAN JOSE, CA — 05/24/16 — UltraMemory Inc. (UltraMemory) has selected NanoSpice™ and NanoSpice Giga™ from ProPlus Design Solutions, Inc., the leading technology provider of giga-scale parallel SPICE simulation, SPICE modeling solutions and Design-for-Yield (DFY) applications, to simulate its super-broadband, super large-scale memory design.
UltraMemory is developing an innovative 3D DRAM chip that includes a Through Chip Interface (TCI), enabling lower-cost and lower-power wireless communication between stacked DRAM dies than TSV technology.
Highly accurate, high-capacity SPICE simulation was necessary because UltraMemory needed to simulate several DRAM chips with analog functions. UltraMemory’s decision to adopt NanoSpice, a high-performance parallel SPICE simulator, and NanoSpice Giga, the industry’s only giga-scale SPICE simulator, came after an extensive evaluation of commercial SPICE and FastSPICE circuit simulators. NanoSpice and NanoSpice Giga have been integrated into UltraMemory’s existing design flows, replacing other SPICE and FastSPICE simulators, to provide a full circuit simulation solution from small-block simulation to full-chip verification.
“We have developed a hydrogel based rapid E. coli detection system that will turn red when E. coli is present,” says Professor Sushanta Mitra, Lassonde School of Engineering. “It will detect the bacteria right at the water source before people start drinking contaminated water.”
The new technology has cut down the time taken to detect E. coli from a few days to just a couple of hours. It is also an inexpensive way to test drinking water (C$3 per test estimated), which is a boon for many developing countries, as much as it is for remote areas of Canada’s North.
“This is a significant improvement over the earlier version of the device, the Mobile Water Kit, that required more steps, handling of liquid chemicals and so on,” says Mitra, Associate Vice-President of Research at York U. “The entire system is developed using a readily available plunger-tube assembly. It’s so user-friendly that even an untrained person can do the test using this kit.”
The software startup launching out of a garage or a dorm room is now the stuff of legend. We can all name the stories of people who got together in a garage with a few computers and ended up disrupting massive, established corporations — or creating something the world never even knew it wanted.
Until now, this hasn’t really been true for physical products built from the ground up. The cost of tools and production has been too high, and for top quality you still had to go the traditional manufacturing route.
If you’ve ever seen a “recommended item” on eBay or Amazon that was just what you were looking for (or maybe didn’t know you were looking for), it’s likely the suggestion was powered by a recommendation engine. In a recent interview, Raefer Gabriel, co-founder of machine learning startup Delvv, Inc., said these applications for recommendation engines and collaborative filtering algorithms are just the beginning of a powerful and broad-reaching technology.
Gabriel noted that content discovery on services like Netflix, Pandora, and Spotify is most familiar to people because of the way those services seem to “speak” to one’s preferences in movies, games, and music. Their relatively narrow focus on entertainment is a common thread that has made them successful as constrained domains. The challenge lies in developing recommendation engines for unbounded domains, like the internet, where there is more or less unlimited information.
“Some of the more unbounded domains, like web content, have struggled a little bit more to make good use of the technology that’s out there. Because there is so much unbounded information, it is hard to represent well, and to match well with other kinds of things people are considering,” Gabriel said. “Most of the collaborative filtering algorithms are built around some kind of matrix factorization technique and they definitely tend to work better if you bound the domain.”
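The matrix factorization technique Gabriel mentions can be illustrated with a small sketch. The idea is to approximate a sparse user-item rating matrix as the product of two low-rank factors, so that the unobserved entries get filled in with predictions. The data and hyperparameters below are purely illustrative; the interview does not describe any specific implementation.

```python
import numpy as np

# Toy user-item rating matrix (0 = unobserved). Illustrative data only.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def factorize(R, k=2, steps=5000, lr=0.01, reg=0.02, seed=0):
    """Factor R ~ U @ V.T over the observed entries via gradient descent."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    mask = R > 0                      # fit only the observed ratings
    for _ in range(steps):
        E = (R - U @ V.T) * mask      # error on observed entries only
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U, V

U, V = factorize(R)
pred = U @ V.T   # dense matrix: the zeros of R are now predicted ratings
```

The zeros in `R` now hold predicted ratings, which is exactly what a recommender ranks. Bounding the domain keeps the item dimension of `V` manageable, which is one reason constrained domains like music and movies have been easier targets.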
Of all the recommendation engines and collaborative filters on the web, Gabriel cites Amazon as the most ambitious. The eCommerce giant utilizes a number of strategies to make item-to-item recommendations, suggest complementary purchases, model user preferences, and more. The key to developing those recommendations lies less in the algorithm than in the value of the data Amazon is able to feed into it initially; once a critical mass of data on user preferences is reached, it becomes much easier to create recommendations for new users.
“In order to handle those fresh users coming into the system, you need to have some way of modeling what their interest may be based on that first click that you’re able to extract out of them,” Gabriel said. “I think that intersection point between data warehousing and machine learning problems is actually a pretty critical intersection point, because machine learning doesn’t do much without data. So, you definitely need good systems to collect the data, good systems to manage the flow of data, and then good systems to apply models that you’ve built.”
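The cold-start strategy Gabriel describes, modeling a fresh user's interests from their very first click, can be sketched as a simple back-off scheme: rank items that co-occur with the clicked item in the existing interaction log, and fall back to global popularity when the click gives nothing to go on. All names and data here are hypothetical.

```python
from collections import Counter

# Hypothetical interaction log of (user, item) events already collected.
log = [("u1", "laptop"), ("u1", "mouse"), ("u2", "laptop"),
       ("u2", "keyboard"), ("u3", "mouse"), ("u3", "keyboard")]

# Group items by user, then count item co-occurrences within each user.
by_user = {}
for user, item in log:
    by_user.setdefault(user, set()).add(item)

co = Counter()
for items in by_user.values():
    for a in items:
        for b in items:
            if a != b:
                co[(a, b)] += 1

def recommend_for_fresh_user(first_click, top_n=2):
    """With only one click to go on, rank items that co-occur with it."""
    scores = Counter({b: n for (a, b), n in co.items() if a == first_click})
    if not scores:  # item never seen before: fall back to global popularity
        scores = Counter(item for _, item in log)
        scores.pop(first_click, None)
    return [item for item, _ in scores.most_common(top_n)]
```

This also illustrates Gabriel's point about the intersection of data warehousing and machine learning: the model is only as good as the collected log it is built from.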
Beyond consumer-oriented uses, Gabriel has seen recommendation engines and collaborative filter systems used in a narrow scope for medical applications and in manufacturing. In healthcare for example, he cited recommendations based on treatment preferences, doctor specialties, and other relevant decision-based suggestions; however, anything you can transform into a “model of relationships between items and item preferences” can map directly onto some form of recommendation engine or collaborative filter.
One of the most important drivers of the development of recommendation engines and collaborative filtering algorithms was the Netflix Prize, Gabriel said. The competition, which offered a $1 million prize to anyone who could design an algorithm that improved upon Netflix’s proprietary recommendation engine, allowed entrants to use pieces of the company’s own user data to develop a better algorithm. The competition spurred a great deal of interest in the potential applications of collaborative filtering and recommendation engines, he said.
In addition, relatively easy access to an abundance of cheap memory is another driving force behind the development of recommendation engines. An eCommerce company like Amazon, with millions of items, needs plenty of memory to store millions of pieces of item and correlation data while also storing user data in potentially large blocks.
“You have to think about a lot of matrix data in memory. And it’s a matrix, because you’re looking at relationships between items and other items and, obviously, the problems that get interesting are ones where you have lots and lots of different items,” Gabriel said. “All of the fitting and the data storage does need quite a bit of memory to work with. Cheap and plentiful memory has been very helpful in the development of these things at the commercial scale.”
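A back-of-envelope calculation shows why the memory point matters. A dense item-item similarity matrix grows with the square of the catalog size, while a sparse representation that keeps only each item's top neighbors grows linearly. The figures below are illustrative, not Amazon's.

```python
# Back-of-envelope memory estimates for an item-item similarity matrix.
# Illustrative numbers only.

def dense_matrix_bytes(n_items, bytes_per_entry=4):
    """Dense float32 matrix: every item against every item."""
    return n_items * n_items * bytes_per_entry

def sparse_matrix_bytes(n_items, avg_neighbors=100, bytes_per_entry=12):
    """Top-k neighbors per item: 8-byte index + 4-byte value per entry."""
    return n_items * avg_neighbors * bytes_per_entry

print(dense_matrix_bytes(1_000_000) / 1e12, "TB")   # dense: 4.0 TB
print(sparse_matrix_bytes(1_000_000) / 1e9, "GB")   # sparse: 1.2 GB
```

A million-item catalog stored densely needs terabytes, while a pruned top-100 neighbor list fits in a couple of gigabytes, which is why cheap, plentiful memory and sparse data structures together made these systems commercially practical.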
Looking forward, Gabriel sees recommendation engines and collaborative filtering systems evolving more toward predictive analytics and getting a handle on the unbounded domain of the internet. While those efforts may ultimately be driven by the Google Now platform, he foresees a time when recommendation-driven data will merge with search data to provide search results before you even search for them.
“I think there will be a lot more going on at that intersection between the search and recommendation space over the next couple years. It’s sort of inevitable,” Gabriel said. “You can look ahead to what someone is going to be searching for next, and you can certainly help refine and tune into the right information with less effort.”
While “mind-reading” search engines may still seem a bit like science fiction at present, the capabilities are evolving at a rapid pace, with predictive analytics leading the way.
I find this amusing because much of the top US AI talent has worked for many decades in the national labs, not only in academia. National labs are often a mix of top scientists and engineers as well as academics, not academics only. Granted, universities such as GA Tech, VA Tech, and Rensselaer Polytechnic Institute do incubate research; however, the bulk of AI and other patented innovations has truly come out of national labs such as X10, Los Alamos, and Argonne over the years.
The high demand for AI talent at giant corporations means academia is directly affected: its smartest AI experts are rapidly transferring to the corporate world and leaving the academy behind.
AR is working; I cannot wait to see what we do with AR in many of the other enterprise apps.
Augmented reality is transforming field maintenance. With DAQRI Smart Helmet™, workers get real-time visual instructions, equipment diagnostics, and operational data, turning every user into a maintenance expert.
By combining DAQRI’s innovative design with Intel’s powerful technology, DAQRI Smart Helmet helps workers be more productive and less error-prone. As an example of how powerful augmented reality can be, Kazakhstan Seamless Pipe (KSP Steel) used the helmet to achieve a 40% increase in worker productivity and a 50% reduction in factory downtime.