
Level 4 – Awareness + World model: Systems with a modeling capacity complex enough to create a world model: a sense of other, without a sense of self – e.g., dogs. Level 4 capabilities include static behaviors and rudimentary learned behavior.

Level 5 – Awareness + World model + Primarily subconscious self model = Sapient or Lucid: Lucidity means to be meta-aware – that is, to be aware of one’s own awareness, aware of abstractions, aware of one’s self, and therefore able to actively analyze each of these phenomena. If a given animal is meta-aware to any extent, it can therefore make lucid decisions. Level 5 capabilities include the following: The “sense of self”; Complex learned behavior; Ability to predict the future emotional states of the self (to some degree); The ability to make motivational tradeoffs.

Level 6 – Awareness + World model + Dynamic self model + Effective control of subconscious: The dynamic sense of self can expand from “the small self” (directed consciousness) to the big self (“social group dynamics”). The “self” can include features that cross barriers between biological and non-biological – e.g., features resulting from cybernetic additions, like smartphones.

Level 7 – Global awareness – Hybrid biological-digital awareness = Singleton: Complex algorithms and/or networks of algorithms that have capacity for multiple parallel simulations of multiple world models, enabling cross-domain analysis and novel temporary model generation. This level includes the ability to contain a vastly larger number of biases, many held paradoxically. Perspectives are maintained in separate modules, which are able to dynamically switch between identifying with the local module of awareness/perspective or the global awareness/perspective. Level 7 capabilities involve the same type of dynamic that exists between the subconscious and directed consciousness, but massively parallelized, beyond biological capacities.

Read more

Quantum computers aren’t yet practical, but Microsoft has already developed a programming language for them. Q# works inside Visual Studio, just like most other languages, and could offer aspiring programmers a chance to learn the basics of quantum physics through trial-and-error.
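Q# is its own language, but the quantum basics the article mentions – superposition and probabilistic measurement – can be sketched in a few lines of plain Python/NumPy. The toy single-qubit simulator below is an illustration, not Q# code; the gate and state names are standard textbook notation.

```python
# A toy single-qubit simulator in plain Python/NumPy, sketching the kind
# of quantum basics (superposition, measurement) one explores in Q#.
# This is an illustration only, not Q# itself.
import numpy as np

rng = np.random.default_rng(0)

ket0 = np.array([1.0, 0.0])                   # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0  # equal superposition of |0> and |1>

def measure(state, shots, rng):
    # Born rule: outcome probabilities are the squared amplitudes.
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], size=shots, p=probs)

samples = measure(state, shots=1000, rng=rng)
print(samples.mean())  # close to 0.5: each outcome occurs ~50% of the time
```

Repeatedly measuring and tallying outcomes like this is exactly the trial-and-error loop that makes a simulator useful for learning the basics.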

Read more

Engineers at Caltech have developed a new control algorithm that enables a single drone to herd an entire flock of birds away from the airspace of an airport. The algorithm is presented in a study in IEEE Transactions on Robotics.

The project was inspired by the 2009 “Miracle on the Hudson,” when US Airways Flight 1549 struck a flock of geese shortly after takeoff and pilots Chesley Sullenberger and Jeffrey Skiles were forced to land in the Hudson River off Manhattan.

“The passengers on Flight 1549 were only saved because the pilots were so skilled,” says Soon-Jo Chung, an associate professor of aerospace and Bren Scholar in the Division of Engineering and Applied Science as well as a JPL research scientist, and the principal investigator on the drone herding project. “It made me think that next time might not have such a happy ending. So I started looking into ways to protect from birds by leveraging my research areas in autonomy and robotics.”
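As a rough illustration only – the published controller is derived from a detailed model of flocking dynamics, which this does not reproduce – here is a toy 2D sketch of the herding idea: birds are repelled by the drone, and the drone stations itself between the flock and the protected airspace so the repulsion pushes the birds away. All gains and geometry below are made-up assumptions.

```python
# Toy 2D herding sketch (not the Caltech algorithm): birds are repelled
# by the drone with a 1/r^2 falloff, and the drone hovers just beyond the
# rightmost bird so every bird is pushed away from the airspace at +x.
import numpy as np

rng = np.random.default_rng(1)
birds = rng.normal(loc=0.0, scale=1.0, size=(20, 2))  # flock positions
start = birds.copy()

def repel(birds, drone, gain=2.0, dt=0.1):
    # Each bird moves directly away from the drone; push decays as 1/r^2.
    diff = birds - drone
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    return birds + dt * gain * diff / dist**3

for _ in range(200):
    # Station the drone on the airport side of the flock, so the
    # repulsion drives every bird in the -x direction.
    drone = np.array([birds[:, 0].max() + 1.0, birds[:, 1].mean()])
    birds = repel(birds, drone)

print(birds[:, 0].max() - start[:, 0].max())  # negative: flock herded away
```

The real difficulty, and the paper's contribution, is choosing the drone's path so the flock moves coherently instead of scattering – a naive repulsion like this one can split a flock if the drone flies into it.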

Read more

The state of artificial intelligence (AI) in smart homes nowadays might be likened to a smart but moody teenager: It’s starting to hit its stride and discover its talents, but it doesn’t really feel like answering any questions about what it’s up to and would really rather be left alone, OK?

William Yeoh, assistant professor of computer science and engineering in the School of Engineering & Applied Science at Washington University in St. Louis, is working to help smart-home AI grow up.

The National Science Foundation (NSF) awarded Yeoh a $300,000 grant to help develop smart-home AI algorithms that can determine what a user wants by both asking questions and making smart guesses, and then plan and schedule accordingly. Beyond being smart, the system needs to be able to communicate, and to explain to the user why it is proposing a given schedule.
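The ask-versus-guess behavior described above can be sketched in a few lines. This is a toy illustration, not Yeoh's system; every name, option, and threshold in it is an invented assumption.

```python
# Toy sketch of an ask-vs-guess scheduler: it keeps a belief over what the
# user wants, guesses when confident, asks a clarifying question otherwise,
# and always explains its proposal. All values here are illustrative.
CONFIDENCE = 0.8  # guess only when one option is at least this likely

def propose(belief, ask_user):
    """belief: dict option -> probability. ask_user: fallback question."""
    option, prob = max(belief.items(), key=lambda kv: kv[1])
    if prob >= CONFIDENCE:
        # Confident enough to guess; the explanation cites the evidence.
        return option, f"Scheduling {option}: chosen {prob:.0%} of past evenings."
    # Too uncertain: ask the user instead of guessing.
    return ask_user(list(belief)), "You told me just now."

belief = {"6 pm": 0.85, "7 pm": 0.10, "8 pm": 0.05}
choice, why = propose(belief, ask_user=lambda opts: opts[0])
print(choice, "-", why)  # 6 pm - Scheduling 6 pm: chosen 85% of past evenings.
```

The hard part the grant targets is everything this sketch elides: learning the belief from observation, deciding when a question is worth its annoyance cost, and generating explanations a person actually finds useful.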

Read more

There is. Our engagement with AI will transform us. Technology always does, even while we are busy using it to reinvent our world. The introduction of the machine gun by Richard Gatling during America’s Civil War, and its massive role in World War I, obliterated our ideas of military gallantry and chivalry and emblazoned in our minds Wilfred Owen’s imagery of young men who “die as cattle.” The computer revolution beginning after World War II ushered in a way of understanding and talking about the mind in terms of hardware, wiring and rewiring that still dominates neurology. How will AI change us? How has it changed us already? For example, what does reliance on navigational aids like Waze do to our sense of adventure? What happens to our ability to make everyday practical judgments when so many of these judgments—in areas as diverse as creditworthiness, human resources, sentencing, police force allocation—are outsourced to algorithms? If our ability to make good moral judgments depends on actually making them—on developing, through practice and habit, what Aristotle called “practical wisdom”—what happens when we lose the habit? What becomes of our capacity for patience when more and more of our trivial interests and requests are predicted and immediately met by artificially intelligent assistants like Siri and Alexa? Does a child who interacts imperiously with these assistants take that habit of imperious interaction to other aspects of her life? It’s hard to know how exactly AI will alter us. Our concerns about the fairness and safety of the technology are more concrete and easier to grasp. But the abstract, philosophical question of how AI will impact what it means to be human is more fundamental and cannot be overlooked. The engineers are right to worry. But the stakes are higher than they think.

Read more

In machine learning, one of the biggest problems practitioners face is choosing the right set of hyperparameters, and tuning them to push up the accuracy numbers takes a lot of time.

For instance, take SVC from the well-known library Scikit-Learn: the class implements the Support Vector Machine algorithm for classification and exposes more than ten hyperparameters. Adjusting all of them to minimize the loss is very difficult by trial and error alone. Scikit-Learn does provide Grid Search and Random Search, but Grid Search is brute-force and exhaustive, and Random Search samples blindly; hyperopt, by contrast, implements a distributed asynchronous algorithm for hyperparameter optimization.

Read more

“The eyes are the window of the soul.” Cicero said that. But it’s a bunch of baloney.

Unless you’re a state-of-the-art set of machine-learning algorithms with the ability to demonstrate links between eye movements and four of the big five personality traits.

If that’s the case, then Cicero was spot on.

Read more

Closing in on molecular manufacturing…


XTPL (http://xt-pl.com) received an honorable mention from I-Zone judges for its innovative product that prints extremely fine film structures using nanomaterials. XTPL’s interdisciplinary team is developing and commercializing an innovative technology that enables ultra-precise printing of electrodes up to several hundred times thinner than a human hair – conductive lines as thin as 100 nm. XTPL is facilitating the production of a new generation of transparent conductive films (TCFs) that are widely used in manufacturing. XTPL’s solution is potentially disruptive in the production of displays, monitors, touchscreens, printed electronics, wearable electronics, smart packaging, automotive, medical devices, photovoltaic cells, biosensors, and anti-counterfeiting measures. The technology is also applicable to the open-defect repair industry (the repair of broken metallic connections in thin-film electronic circuits) and offers cost-effective, non-toxic, flexible, industry-adapted solutions.

XTPL’s technology might be the only one in the world offering a cost-effective, non-toxic, flexible, industry-adapted solution for the markets of TFT/LCD/OLED displays, integrated circuits (ICs), printed circuit boards (PCBs), multichip modules (MCMs), photolithographic masks, and solar cells.

XTPL also delivers solutions for research & prototyping, including the printing head, electronics, and software algorithms that form the core of the system driving the electric field and the assembly of nanoparticles in XTPL’s Nanometric Lab Printer. The device offers the functionality needed to test, evaluate, and use XTPL’s line-forming technology with nanometric precision, and enables precise positioning of the printing head with micrometric resolution.

Official video explaining XTPL’s technology: https://youtu.be/WMerzxzCXuw

Filmed at the I-Zone demo and prototype area at SID Display Week, the world’s largest and best exhibition for electronic information display technology.

Read more