
YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the Euro nor the American flavor is a Manhattan Project-scale undertaking in the sense of urgency and motivation; they’re more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion US bucks, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do — you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant: it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time, we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine, or something close enough for us to regard as such, is without doubt going to happen; it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, an initially 15-year international project, was completed 5 years ahead of schedule, due largely to advances in brute-force computational capability (in the not so digital 1990s). All that computery stuff like, you know, gets better a lot faster these days. Just sayin.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

A secret agent travels to a secret underground desert base, used to develop space weapons, to investigate a series of mysterious murders. The agent finds that a secret transmitter was built into the supercomputer that controls the base, and that a stealth plane flying overhead is controlling the computer and causing the deaths. The agent does battle with two powerful robots in the climax of the story.

Gog is a great story worthy of a sci-fi action epic today – and it was originally made in 1954. Why can’t they just remake these movies word for word and scene for scene with as few changes as possible? The terrible job done on so many remade sci-fi classics is really a mystery. How can such great special effects and actors be used to murder a perfect story that had already been told well once? Amazing.

In contrast to Gog we have Stealth, released in 2005, which has talent, special effects, and probably the worst story ever conceived. An artificially intelligent fighter plane going off the reservation? The rip-off of HAL from 2001 is so ridiculous.

Fantastic Voyage (1966) was a not so good story that succeeded in spite of stretching suspension of disbelief beyond the limit. It was a great movie and might succeed today if instead of miniaturized and injected into a human body it was instead a submarine exploring a giant organism under the ice of a moon in the outer solar system. Just an idea.

And then there is one of the great sci-fi movies of all time if one can just forget the ending. The Abyss of 1989 was truly a great film in that aquanauts and submarines were portrayed in an almost believable way.

From wiki: The cast and crew endured over six months of grueling six-day, 70-hour weeks on an isolated set. At one point, Mary Elizabeth Mastrantonio had a physical and emotional breakdown on the set, and on another occasion, Ed Harris burst into spontaneous sobbing while driving home. Cameron himself admitted, “I knew this was going to be a hard shoot, but even I had no idea just how hard. I don’t ever want to go through this again.”

Again, The Abyss, like Fantastic Voyage, brings to mind those oceans under the icy surface of several moons in the outer solar system.

I recently watched Lockout with Guy Pearce and was as disappointed as I thought I would be. Great actors and expensive special effects just cannot make up for a bad story. When will they learn? It is sad to think they could have just remade Gog and had a hit.

The obvious futures represented by these different movies are worthy of consideration in that even in 1954 the technology to come was being portrayed accurately. In 2005 we have a box-office bomb that, as a waste of money, parallels the military-industrial complex and its too-good-to-be-true wonder weapons that rarely work as advertised. In Fantastic Voyage and The Abyss we see scenarios that point to space missions to the sub-surface oceans of the outer-planet moons.

And in Lockout we find a prison in space where the prisoners are the victims of cryogenic experimentation and are going insane as a result. Being an advocate of cryopreservation for deep space travel, I found the storyline… extremely disappointing.

The Truth about Space Travel is Stranger than Fiction


I have been corresponding with John Hunt and have decided that perhaps it is time to start moving toward forming a group that can accomplish something.

The recent death of Neil Armstrong has people thinking about space. The explosion of a meteor over Britain and the Curiosity rover on Mars are also in the news. But there is really nothing new under the sun. There is nothing that will hold people’s attention for very long outside of their own immediate comfort and basic needs. Money is the central idea of our civilization and everything else is soon forgotten. But this idea of money as the center of all activity is a death sentence. Human beings die and species eventually become extinct, just as worlds and suns are also destroyed or burn out. Each of us is in the position of a circus freak on death row. Bizarre, self-centered, doomed; a cosmic joke. Of all the creatures on this planet, we are the freaks the other creatures would come to mock – if they were like us. If they were supposedly intelligent like us. But are we actually the intelligent ones? The argument can be made that we lack a necessary characteristic to be considered truly intelligent life forms.

Truly intelligent creatures would be struggling with three problems if they found themselves in our situation as human beings on Earth in the first decades of this 21st century:

1. Mortality. With technology possible to delay death and eventually reverse the aging process, intelligent beings would be directing the balance of planetary resources towards conquering “natural” death.

2. Threats. With technology not just possible, but available, to defend the earth from extinction level events, the resources not being used to seek an answer to the first problem would necessarily be directed toward this second danger.

3. Progress. With science advancing and accelerating, the future prospects for engineering humans for greater intelligence and eventually building super intelligent machines are clear. Crystal clear. Not addressing these prospects is a clear warning that we are, as individuals, as a species, and as a living planet, headed not toward a bright future, but in the opposite direction toward a dead and final end.

One engineered pathogen will destroy us forever. One impact larger than average will destroy us forever. The reasoning that death is somehow “natural” which drives us to ignore the subject of destruction will destroy us forever. Earth changes are inevitable and taking place now- despite our faith in television and popular culture that everything is fun and games. Man is not the measure of all things. We think tomorrow will come just like yesterday- but it will not.

The Truth about Space Travel is that there are no stargates or warp drives that will take us across the galaxy like commercial airliners or cruise ships take us across oceans. If we do wake up and change our course, space voyages will take centuries and human expansion will be measured in millennia. We will be frozen when we travel to distant stars. And this survivable freezing will mark the beginning of a new age, since being able to delay death by freezing will completely transform life. The first such successful procedure will mean the end of the world as we know it – and the beginning of a new civilization.

Though unknown to the public, the atomic bomb and then the hydrogen bomb marked the true beginning of the Space Age. Hydrogen bombs can push cities in space, hollow moons, to some percentage of the speed of light. These cities can travel to other stars, such as Epsilon Eridani with its massive asteroid belt. And there, more artificial hollow moons can be mass produced to provide new worlds to live in. This is not fiction I am speaking of but something we could do right now – today. We only lack the procedure to freeze and successfully revive a human being. It is, indeed, stranger than fiction.

In Beam Propulsion we have the answer to bending the rocket equation to our will and allowing millions and eventually billions of human beings to migrate into space. Just as Verne and Wells made accurate predictions of the decades to come, we now are seeing the possible obvious future unfolding before our eyes.

But the most possible and probable obvious future at this moment is destruction. The end of days. Unless we do something.
You and I and everyone you know are involved in this. Let’s get started.

Whether via spintronics or some quantum breakthrough, artificial intelligence and the bizarre idea of intellects far greater than ours will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega mind coming into existence within the next few decades. I am actually not intentionally trying to write anything bizarre- it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

GadgetBridge is currently just a concept. It might start its life as a discussion forum, later turn into a network or an organisation, and hopefully inspire a range of similar activities.

We will soon be able to use technology to make ourselves more intelligent, feel happier or change what motivates us. When the use of such technologies is banned, the nations or individuals who manage to cheat will soon lord it over their more obedient but unfortunately much dimmer fellows. When these technologies are made freely available, a few terrorists and psychopaths will use them to cause major disasters. Societies will have to find ways to spread these mind enhancement treatments quickly among the majority of their citizens, while keeping them from the few who are likely to cause harm. After a few enhancement cycles, the most capable members of such societies will all be “trustworthy” and use their skills to stabilise the system (see “All In The Mind”).

But how can we manage the transition period, the time in which these technologies are powerful enough to be abused but no social structures are yet in place to handle them? It might help to use these technologies for entertainment purposes, so that many people learn about their risks and societies can adapt (see “Should we build a trustworthiness tester for fun”). But ideally, a large, critical and well-connected group of technology users should be part of the development from the start and remain involved in every step.

To do that, these users would have to spend large amounts of money and dedicate considerable manpower. Fortunately, the basic spending and working patterns are in place: People already use a considerable part of their income to buy consumer devices such as mobile phones, tablet computers and PCs and increasingly also accessories such as blood glucose meters, EEG recorders and many others; they also spend a considerable part of their time to get familiar with these devices. Manufacturers and software developers are keen to turn any promising technology into a product and over time this will surely include most mind measuring and mind enhancement technologies. But for some critical technologies this time might be too long. GadgetBridge is there to shorten it as follows:

- GadgetBridge spreads its philosophy — that mind-enhancing technologies are only dangerous when they are allowed to develop in isolation — that spreading these technologies makes a freer world more likely — and that playing with innovative consumer gadgets is therefore not just fun but also serves a good cause.

- Contributors make suggestions for new consumer devices based on the latest brain research and their personal experiences. Many people have innovative ideas but few are in a position to exploit them. Contributors would rather donate their ideas than see them wither away or be claimed by somebody else.

- All ideas are immediately published and offered free of charge to anyone who wants to use them. Companies select and implement the best options. Users buy their products and gain hands-on experience with the latest mind measurement and mind enhancement technologies. When risks become obvious, concerned users and governments look for ways to cope with them before they get out of hand.

- Once GadgetBridge produces results, it might attract funding from the companies that have benefited or hope to benefit from its services. GadgetBridge might then organise competitions, commission feasibility studies or develop a structure that provides modest rewards to successful contributors.

Your feedback is needed! Please be honest rather than polite: Could GadgetBridge make a difference?

Twenty years ago, way back in the primordial soup of the early Network in an out-of-the-way electromagnetic watering hole called USENET, this correspondent entered the previous millennium’s virtual nexus of survival-of-the-weirdest via an accelerated learning process calculated to evolve a cybernetic avatar from the Corpus Digitalis. Now, as columnist, sci-fi writer and independent filmmaker [Cognition Factor — 2009, with Terence McKenna], I have filmed rocket launches and solar eclipses for South African Astronomical Observatories, and produced educational programs for the Southern African Large Telescope (SALT). Latest efforts include videography for the International Astronautical Congress in Cape Town, October 2011, and a completed, soon-to-be-released autobiography, draft-titled “Journey to Everywhere”.

Cognition Factor attempts to be the world’s first ‘smart movie’, digitally orchestrated for the fusion of Left and Right Cerebral Hemispheres in order to decode civilization into an articulate verbal and visual language structured from sequential logical hypothesis based upon the following ‘Big Five’ questions,

1.) Evolution Or Extinction?
2.) What Is Consciousness?
3.) Is God A Myth?
4.) Fusion Of Science & Spirit?
5.) What Happens When You Die?

Even if you believe that imagination is more important than knowledge, you’ll need a full deck to solve the ‘Arab Spring’ epidemic, which may be a logical step in the ‘Global Equalisation Process’ as more and more of our Planet’s Alumni fling their hats in the air and emit primal screams approximating:
“we don’t need to accumulate (so much) wealth anymore”, in a language comprising ‘post-Einsteinian’ mathematics…

Good luck to you if you do…

Schwann Cybershaman

The Nature of Identity Part 3
(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we have is finding out how neural signals translate into sensory experience – how neural information is translated into the language we understand – that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons do not emit any color nor any scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Many years ago I was a member of what was called the Artorga Research Group – a group including some of the early cyberneticists – who were focussed on Artificial Organisms.

One of the main areas of concern was, of course, Memory.

One of our group was a young German engineer who suggested that perhaps memories were in fact re-synthesised in accordance with remembered rules, as opposed to storing huge amounts of data.

Since then similar ideas have arisen in such areas as computer graphics.

Here is an example,

It shows a simple picture on a computer screen. We want to store (memorize) this information.

One way is to store the information about each pixel on the screen – is it white or is it black? With a typical screen resolution that could mean over 2.5 million bits of information.

But there is another way….

In this process one simply specifies the start point (A) in terms of its co-ordinates (300 Vertically, 100 Horizontally); and its end point (B) (600 Vertically, 800 Horizontally); and simply instructs – “Draw a line of thickness w between them”.

The whole picture is specified in just a few bits.

The first method, specifying bit by bit, known as the Bit Mapped Protocol (.BMP), uses up lots of memory space.

The other method, based on re-synthesising according to stored instructions, is used in some data reduction formats; and is, essentially, just what that young engineer suggested, many years before.
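The contrast between the two methods can be put in numbers. A minimal sketch, in which the screen size and the bit counts are illustrative assumptions rather than any real graphics format:

```python
# Comparing the two storage methods described above. The screen size and
# the instruction encoding are illustrative assumptions.

WIDTH, HEIGHT = 1920, 1080  # a typical modern screen

# Method 1: bit-mapped (.BMP-style). One bit per pixel, regardless of
# what the picture actually shows.
bitmap_bits = WIDTH * HEIGHT  # over 2 million bits for a black/white picture

# Method 2: re-synthesis. Store the drawing instruction, not the pixels:
# "Draw a line of thickness w from A to B" needs only a few small numbers.
start = (300, 100)         # A: (vertical, horizontal)
end = (600, 800)           # B: (vertical, horizontal)
thickness = 1              # w
instruction_bits = 16 * 5  # five 16-bit values, opcode included

savings = bitmap_bits // instruction_bits  # tens of thousands to one
```

The second method wins by four orders of magnitude here, which is why instruction-based (vector) formats dominate wherever the picture can be described by rules.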

On your computer you will have a screen saver – almost certainly a colorful scene – and of course that is stored, so that if you are away from the computer for a time it can automatically come on to replace what was showing, and in this way “save” your screen.

So – where are those colors in your screensaver stored, where are the shapes shown in it stored? Is there in the computer a Color Storage Place? Is there a Shape Storage Place?

Of course not.

Yet these are the sort of old, sodden concepts that are sometimes still applied in thinking about the brain and memories.

Patterned streams of binary bits, not unlike neural signals (but about 70 times larger), are fed to a computer screen. And then the screen takes these patterns of bits as instructions to re-synthesise glowing colors and shapes.

We cannot actually perceive the binary signals, and so they are translated by the screen into a language that we can understand. The screen is a translator – that is its sole function.

This is exactly analogous to the point made earlier about perception and neural signals.

The main point here, though, is that what is stored in the computer memory are not colors and shapes but instructions.

And inherent in these instructions as a whole, there must exist a “map”.

Each instruction must not only tell its bit of the screen what color to glow – but it must also specify the co-ordinates of that bit. If the picture is the head of a black panther with green eyes, we don’t want to see a green head and black eyes. The map has to be right. It is important.

Looking at it in another way the map can be seen as a connectivity table – specifying what goes where. Just two different ways of describing the same thing.

As well as simple perception there are derivatives of what has been perceived that have to be taken into account, for example, the factor called movement.

Movement is not in itself perceptible (as we shall presently show); it is a computation.

Take for example, the following two pictures shown side-by-side.

I would like to suggest that one of these balls is moving. And to ask — which one is moving?

If movement had a visual attribute then one could see which one it was – but movement has no visual attributes – it is a computation.

To determine the speed of something, one has to observe its current position, compare that with the record (memory) of its previous position; check the clock to determine the interval between the two observations; and then divide the distance between the two positions, s; by the elapsed time, t; to determine the speed, v,

s/t = v.
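The computation above can be sketched directly; the observation values are invented for illustration:

```python
def speed(prev_pos, curr_pos, prev_time, curr_time):
    """v = s / t: distance between two observed positions over elapsed time."""
    s = abs(curr_pos - prev_pos)   # distance between the two positions
    t = curr_time - prev_time      # interval between the two observations
    return s / t

# A ball observed at 2.0 m, then at 5.0 m one and a half seconds later:
v = speed(2.0, 5.0, 0.0, 1.5)  # 2.0 m/s
```

Note that nothing in either observation is "movement"; the speed exists only as the result of comparing a current position against a remembered one.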

This process is carried out automatically (subconsciously) in more elaborate organisms by having two eyes spaced apart by a known distance and having light receptors – the retina – where each has a fast turn-on and a slow (about 40 ms) turn-off, all followed by a bit of straightforward neural circuitry.

Because of this system, one can look at a TV screen and see someone in a position A, near the left hand edge, and then very rapidly, a series of other still pictures in which the person is seen being closer and closer to B, at the right hand edge.

If the stills are shown fast enough – more than 25 a second — then we will see the person walking across the screen from left to right. What you see is movement – except you don’t actually see anything extra on the screen. Being aware of movement as an aid to survival is very old in evolutionary terms. Even the incredibly old fish, the coelacanth, has two eyes.

The information provided is a derivate of the information provided by the receptors.

And now we ought to look at information in a more mathematical way – as in the concept of Information Space (I-space).

For those who are familiar with the term, it is a Hilbert Space.

Information Space is not “real” space – it is not distance space – it is not measurable in metres and centimetres.

As an example, consider Temperature Space. Take the temperature of the air going in to an air-conditioning (a/c) system; the temperature of the air coming out of the a/c system; and the temperature of the room. These three provide the three dimensions of a Temperature Space. Every point in that space correlates to an outside air temperature, an a/c output temperature and the temperature of the room. No distances are involved – just temperatures.

This is an illustration of what it would look like if we re-mapped it into a drawing.

The drawing shows the concept of a 3-dimensional Temperature Space (T-space). The darkly outlined loop is shown here as a way of indicating the “mapping” of a part of T-space.
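A point in such a space is just a triple of readings; the temperatures below are invented for illustration:

```python
# One point in the 3-dimensional Temperature Space described above.
# No distances involved, just temperatures (degrees C, illustrative values).
outside_air = 32.0   # temperature of air going into the a/c system
ac_output = 16.0     # temperature of air coming out of the a/c system
room = 22.5          # temperature of the room

t_point = (outside_air, ac_output, room)  # one point in T-space
```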

But what we are interested in here is I-space. And I-space will have many more dimensions than T-space.

In I-space each location is a different item of information, and the fundamental rule of I-space – indeed of any Hilbert space – is,

Similarity equals Proximity.

This would mean that the region concerned with Taste, for example, would be close to the area concerned with Smell, since the two are closely related.

Pale Red would be closer to Medium Red than to Dark Red.
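The "similarity equals proximity" rule can be made concrete with a toy I-space; the coordinates below are invented purely for illustration, and a real I-space would have vastly more dimensions:

```python
import math

# A toy 2-dimensional I-space. The fundamental rule: similarity equals
# proximity, so related items sit near each other.
ispace = {
    "pale red":   (1.0, 0.0),
    "medium red": (2.0, 0.0),
    "dark red":   (4.0, 0.0),
    "taste":      (9.0, 5.0),
    "smell":      (9.5, 5.0),
}

def proximity(a, b):
    """Distance between two items in I-space; smaller means more similar."""
    return math.dist(ispace[a], ispace[b])
```

Under this layout, pale red is nearer to medium red than to dark red, and taste sits next to smell, exactly as the text describes.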

Perception then would be a matter of connectivity.

An interconnected group we could refer to as a Composition or Feature.

Connect 4 legs & fur & tail & bark & the word dog & the sound of the word dog – and we have a familiar feature.
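The dog example can be sketched as a toy connectivity lookup; the item names and the matching rule are illustrative assumptions, not a model of real neural wiring:

```python
# A "feature" as a set of interconnected items, with a second feature
# added for contrast.
features = {
    "dog": {"4 legs", "fur", "tail", "bark", "word 'dog'", "sound of 'dog'"},
    "cat": {"4 legs", "fur", "tail", "meow", "word 'cat'"},
}

def seen_as(percepts):
    """What a thing is seen as: the feature sharing the most connected items."""
    return max(features, key=lambda name: len(features[name] & percepts))
```

Given fur, tail and bark, the lookup lands on the dog feature; swap bark for meow and it lands on the cat.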

Features are patterns of interconnections; and it is these features that determine what a thing or person is seen as. What they are seen as is taken as their identity. It is the identity as seen from outside.

To oneself one is here and now, a 0,0 reference point. To someone else one is not the 0,0 point – one is there — not here, and to that person it is they who are the 0,0 point.

This 0,0 or reference point is crucially important. One could upload a huge mass of data, but if there was no 0,0 point that is all it would be – a huge mass of data.

The way forward towards this evolutionary goal is not to concentrate on being able to upload more and more data, faster and faster – but instead to concentrate on being able to identify the 0,0 point; and to be able to translate from neural code to the language of perception.

The vulnerability of the bio body is the source of most threats to its existence.

We have looked at the question of uploading the identity by uploading the memory contents, on the assumption that the identity is contained in the memories. I believe this assumption has been shown to be almost certainly wrong.

What we are concentrating on is the identity as the viewer of its perceptions, the centroid or locus of perception.

It is the fixed reference point. And the locus of perception is always Here, and it is always Now. This is abbreviated here to 0,0.

What more logical place to find the identity than the place it considers Here and Now – its residence in Space Time?

It would surely be illogical to start searching for the identity somewhere it considers to be Somewhere Else, or in Another Time.

We considered the fact that the human being accesses the outside world through its senses, and that its information processing system is able to present that information as being “external.” A hand is pricked with a pin. The sensory information – a stream of neural impulses, all essentially identical – progresses to the upper brain, where the pattern is read and the sensation of pain is felt. That sensation, however, is projected or mapped back onto the exact point it originated from.

One feels the pain at the place the neural disturbance came from. It is an illusion — a very useful illusion.

In the long slow progress of evolution from a single cell to the human organism, and to the logical next step — the “android” (we must find a better word) – this mapping function must be one of the most vital survival strategies. If the predator is gnawing at your tail, it’s smart to know where the pain is coming from.

It wasn’t just structure that evolved, but “smarts” too… smarter systems.

Each sensory channel conveys not just sensory information but information regarding where it came from. Like a set of outgoing information vectors. But there is also a complementary set of incoming vectors. The array of sensory vectors from visual, audible, tactile, and so on, all converge on one location – a locus of perception. And the channels cross-correlate. The hand is pricked – we immediately look at the place the pain came from. And… one can “follow one’s nose” to see where the barbecue is.
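The cross-correlation of channels can be sketched as vectors expressed from the one shared 0,0 locus, so that a vector delivered by one channel can steer another. The coordinates below are invented for illustration:

```python
# Sensory vectors converge on one locus of perception, so a location
# reported by the tactile channel can direct the visual channel.
import math

prick_at = (0.4, -0.3, 0.2)   # tactile vector from the 0,0 locus (invented)

def direction(v):
    # Unit vector pointing from the locus toward the stimulus.
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

# The eyes are aimed along the same vector the pain arrived on:
gaze = direction(prick_at)
```

Because both channels share the same origin, no translation between private coordinate systems is needed; the hand is pricked and the gaze goes straight to the spot.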

Dr Shu can use both his left hand and arm; and his right hand and arm in coordination to lift up the $22M Ming vase he is in the process of stealing.

Left/right coordination — so obvious and simple it gets overlooked.

A condition known as Synesthesia [http://hplusmagazine.com/editors-blog/sight-synesthesia-what-happens-when-senses-can-be-rewired ] provides an example of how two channels can get confused — for example, being able to see sounds or hear movement.

Perhaps the most interesting example is the rubber hand experiment from UC Riverside. In this the subject places their hands palm down on a table. The left arm and hand are screened off, and a substitute left “arm” and rubber hand are installed. After a while, the subject reacts as though the substitute was their real hand.

It is on YouTube at http://www.youtube.com/watch?v=93yNVZigTsk.

This phenomenon has been attributed to neuroplasticity.

A simpler explanation would be changed coordinates – something that people who row or who ride bicycles are familiar with, even if they have never analysed it. The vehicle becomes part of oneself. It becomes a part of the system, an extension. What about applying the same sense on a grander scale? Such a simple and common observation may have just as much relevance to the next step in evolution as the number of teraflops.

So, we can get the sensory vectors to be re-deployed. But one of the fundamental questions would be – can we get the 0,0 locus, the centroid of perception, to shift to another place?

Our environment, the environment we live in, is made of perception. Outside there may be rocks and rivers and rain and wind and thunder… but not in the head. Outside this “theater in the head,” there is a world of photons and particles and energy and radiation — reality — but what we see is what is visible, what we hear is what is audible, what we feel is what is tangible … that is our environment, that is where we live.

However, neurons do not emit any light, neurons do not make any sound, and they are not a source of pressure or temperature – so what the diddly are we watching and listening to?

We live in a world of perception. Thanks to powerful instrumentation and a great deal of scientific research we know that behind this world of perception there are neurons, unknown to us all the time working away providing us with colors and tones and scents….

But they do not emit colors or tones or scents – the neuronal language is binary – fired or not fired.

Somewhere the neuronal binary (Fired/Not Fired) language has to be translated into the language of perception – the range of colors, the range of tones, the range of smells … these are each continuous variables; not two-state variables as in the language of neurons.

There has been a great flurry of research activity in the area of neurons, and what was considered to be “Gospel” 10 years ago is no longer so.

IBM and ARM in the UK have (summer 2011) announced prototype brains with hyper-connectivity – a step in the right direction but the fundamental question of interpretation/translation is side-stepped.

I hope someone will prove me wrong, but I am not aware of anyone doing any work on the translator question. This is a grievous error.

(To be continued)