A team of quantum physicists in the Martinis Lab has come a step closer to creating the circuitry that would let quantum computers take on supercomputing tasks. The promise lies in quantum bits (qubits), which go beyond classical computing by giving the system far greater reliability and speed, laying the foundation for large-scale superconducting quantum computers.
Until now, computing has relied on classical bits that are in either state 0 or state 1; a qubit, by contrast, can exist in both states at once. This property is called ‘superposition’. One of the difficulties, however, is keeping qubits stable enough to reproduce the same result each time: superposition leaves qubits prone to ‘flipping’ into the wrong state, which makes them difficult to work with.
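As a rough illustration (an addition here, not part of the original article), a qubit can be modeled as a two-component state vector, and a bit-flip error as the Pauli-X operation that swaps its amplitudes. The Python sketch below, using NumPy, is a toy model only and says nothing about the Martinis Lab hardware.

    import numpy as np

    # A qubit |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    alpha, beta = np.sqrt(0.8), np.sqrt(0.2)   # illustrative amplitudes
    psi = alpha * ket0 + beta * ket1

    # A bit-flip error corresponds to the Pauli-X matrix, which swaps the amplitudes.
    X = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    flipped = X @ psi

    # Measurement probabilities are the squared magnitudes of the amplitudes.
    print(np.abs(psi) ** 2)      # [0.8, 0.2]
    print(np.abs(flipped) ** 2)  # [0.2, 0.8] -- the flip inverts the outcome odds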
Julian Kelly, graduate student researcher and co-lead author of a research paper published in the journal Nature, said:
A computer simulation of a cognitive model entirely made up of artificial neurons learns to communicate through dialog starting from a state of tabula rasa —
A group of researchers from the University of Sassari (Italy) and the University of Plymouth (UK) has developed a cognitive model, made up of two million interconnected artificial neurons, that is able to learn to communicate in human language, starting from a state of ‘tabula rasa’, purely through conversation with a human interlocutor. The model is called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning) and is described in an article published in PLOS ONE. This research sheds light on the neural processes that underlie the development of language.
How does our brain develop the ability to perform complex cognitive functions, such as those needed for language and reasoning? This is a question we all ask ourselves at some point, and one that researchers cannot yet answer completely. We know that the human brain contains about one hundred billion neurons that communicate by means of electrical signals, and we have learned a great deal about how those signals are produced and transmitted between neurons. There are also experimental techniques, such as functional magnetic resonance imaging, that allow us to see which parts of the brain are most active when we are engaged in different cognitive activities. But detailed knowledge of how a single neuron works and of what the various parts of the brain do is not enough to answer the initial question.
The subtitle to this post is a variation of William Gibson’s famous remark: “The future is already here — it’s just not very evenly distributed.” An obvious follow-up question is: if the future is already here, where can I find it?
Because of its unique chemical and physical properties, graphene has helped scientists design new gadgets from tiny computer chips to salt water filters. Now a team of researchers from MIT has found a new use for the 2D wonder material: in infrared sensors that could replace bulky night-vision goggles, or even add night vision capabilities to high-tech windshields or smartphone cameras. The study was published last week in Nano Letters.
Night vision technology picks up on infrared wavelengths, energy usually emitted in the form of heat that humans can’t see with the naked eye. Researchers have known for years that, because of how it conducts electricity, graphene is an excellent infrared detector, and they wanted to see if they could create something less bulky than current night-vision goggles, which rely on cryogenic cooling to reduce the excess heat that might muddle the image. To create the sensor, the researchers integrated graphene with tiny silicon-based devices called MEMS, then suspended the chip over an air pocket so that it picks up incoming heat without the cooling mechanisms found in other infrared-sensing devices. The signal is then transmitted to another part of the device, which creates a visible image. When the researchers tested their sensor, they found that it clearly and successfully picked up the image of a human hand.
The year is 2050 and super-intelligent robots have emerged as the masters of Earth. Unfortunately, you have no idea of that fact, because you are immersed in a computer simulation set decades in the past. Everything you see and touch has been created and programmed by machines that use mankind for their own benefit. This radical theory, depicted in numerous books and science-fiction films, has been and is still regarded by some scientists as possible. Moreover, scientists are taking the idea to a cosmic level: they argue that if even one extraterrestrial civilization in the universe reached the technological level needed to “emulate” an entire “multiverse,” then even our probes and space telescopes, out there exploring the universe, would belong to that “creepy simulation.”
Robert Lawrence Kuhn, author and host of the Closer to Truth program, recently explored this theory in an episode where he interviewed several scholars, including Nick Bostrom, a philosopher at Oxford University, who argues that the scenario presented in the movie The Matrix might be true, but that “instead of brains connected to a virtual simulator, our own brains would also be part of the multiverse simulation.”
Feeling like the typical four or eight (Extreme Edition) cores in your current Core i7 processor are holding you back? Well, you’re in luck. Intel is going to offer up their very first Core i7 with ten processing cores before the end of next summer.
While it’ll be the first desktop-class CPU with that many cores, it won’t actually be Intel’s first 10-core processor. Intel has been making Xeon chips with at least 10 cores since 2011, and some with as many as 15, though those are aimed primarily at servers and enterprise-class workstations. Next year, however, the company will finally offer up a deca-core processor for the consumer market.
That chip will be the Core i7-6950X, a 10-core beast with Hyper-Threading support that allows it to work on 20 independent threads at any given time. It’s based on Intel’s new 14nm process, down from the 22nm used for Ivy Bridge and Haswell. The 6950X should be clocked at 3GHz, but it’s not yet known where Turbo Boost will top out.
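As a generic illustration of that core/thread split (an addition here, not tied to the 6950X specifically), the short Python sketch below compares the logical CPU count the operating system reports with the physical core count; it assumes the third-party psutil package is installed.

    import os
    import psutil  # third-party; assumed available for the physical-core count

    logical = os.cpu_count()                    # hardware threads visible to the OS
    physical = psutil.cpu_count(logical=False)  # physical cores

    print(f"{physical} physical cores, {logical} logical CPUs")
    # On a Hyper-Threaded 10-core part such as the i7-6950X this would print:
    # 10 physical cores, 20 logical CPUs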
In the past couple of years, Google has been trying to improve more and more of its services with artificial intelligence. Google also happens to own a quantum computer — a system capable of performing certain computations faster than classical computers.
It would be reasonable to think that Google would try running AI workloads on the quantum computer it got from startup D-Wave, which is kept at NASA’s Ames Research Center in Mountain View, California, right near Google headquarters.
Google is keen on advancing its capabilities in a type of AI called deep learning, which involves training artificial neural networks on a large supply of data and then getting them to make inferences about new data.
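As a generic illustration of that train-then-infer loop (an addition here, and not Google’s actual setup or anything that runs on the D-Wave hardware), the sketch below fits a small neural network with scikit-learn on synthetic data and then scores it on data it has never seen.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for "a large supply of data".
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

    # Train a small multi-layer neural network on the known data...
    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    net.fit(X_train, y_train)

    # ...then make inferences about data it has never seen.
    print("accuracy on new data:", net.score(X_new, y_new))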
According to Steve Jurvetson, venture capitalist and board member at pioneering quantum computing company D-Wave (as well as others, such as Tesla and SpaceX), Google has what may be a “watershed” quantum computing announcement scheduled for early next month. This comes as D-Wave, which notably also counts the Mountain View company as a customer, has just sold a 1,000+ qubit D-Wave 2X quantum computer to national security research institution Los Alamos…
It’s not exactly clear what this announcement will be (besides important for the future of computing), but Jurvetson says to “stay tuned” for more information coming on December 8th. This is the first we’ve heard of a December 8th date for a Google announcement, and considering its purported potential to be a turning point in computing, this could perhaps mean an actual event is in the cards.
Notably, Google earlier this year entered a new deal with NASA and D-Wave to continue its research in quantum computing. D-Wave’s press release at the time had this to say:
Oh Japan, how I love your beautiful insanity.
If you’re a fan of virtual musicians with computer-generated bodies and voices, and you live in North America, then do I have news for you.
Hatsune Miku, Japan’s “virtual pop star,” is coming to the US and Canada next year for a seven-city, synth-filled tour—her first tour in this neck of the woods. Miku herself may be a digital illusion, but her unique impact on the music industry is very real.
Her bio describes her to her 2.5 million Facebook fans as “a virtual singer who can sing any song that anybody composes.” She debuted in 2007 as a singing voice built on Vocaloid software by Crypton Future Media, a Sapporo-based music technology company. Vocaloid software generates a human-sounding singing voice, but without any actual humans.
Yes, conceivably. And if/when we achieve the levels of technology necessary for simulation, the universe will become our playground. Eagleman’s latest book is “The Brain: The Story of You” (http://goo.gl/2IgDRb).
Transcript — The big picture in modern neuroscience is that you are the sum total of all the pieces and parts of your brain. It’s a vastly complicated network of neurons, almost 100 billion neurons, each of which has 10,000 connections to its neighbors. So we’re talking a thousand trillion connections. It’s a system of such complexity that it bankrupts our language. But, fundamentally it’s only three pounds and we’ve got it cornered and it’s right there and it’s a physical system.
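For clarity (an addition, not part of Eagleman’s transcript), the arithmetic behind that figure:

    neurons = 100_000_000_000        # ~10^11 neurons
    connections_per_neuron = 10_000  # ~10^4 connections each
    total_connections = neurons * connections_per_neuron
    print(f"{total_connections:.0e}")  # 1e+15, i.e. a thousand trillion connections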
The computational hypothesis of brain function suggests that the physical wetware isn’t the stuff that matters; it’s the algorithms that are running on top of the wetware. In other words: What is the brain actually doing? What is it implementing, software-wise, that matters? Hypothetically we should be able to take the physical stuff of the brain and reproduce what it’s doing. In other words, reproduce its software on other substrates. So we could take your brain and reproduce it out of beer cans and tennis balls and it would still run just fine. And if we said, “Hey, how are you feeling in there?” this beer can/tennis ball machine would say, “Oh, I’m feeling fine. It’s a little cold, whatever.”
It’s also hypothetically possible that we could copy your brain and reproduce it in silico, which means on a computer, in zeroes and ones, and actually run the simulation of your brain. The challenges of reproducing a brain shouldn’t be underestimated. It would take something like a zettabyte of computational capacity to run a simulation of a human brain. And that is the entire computational capacity of our planet right now.
There’s a lot of debate about whether we’ll get to a simulation of the human brain in 50 years or 500 years, but those would probably be the bounds. It’s going to happen somewhere in there. It opens up the whole universe for us because, you know, these meat puppets that we come to the table with aren’t any good for interstellar travel. But if we could, you know, put you on a flash drive or whatever the equivalent of that is a century from now and launch you into outer space and your consciousness could be there, that could get us to other solar systems and other galaxies. We will really be entering an era of post-humanism or trans-humanism at that point.