
AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega mind coming into existence within the next few decades. I am not intentionally trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

I have just watched this video by Global Futures 2045.

This is my list of things I disagree with:

It starts with scary words about how every crisis comes faster and faster. However, this is untrue. Many countries have been running deficits for decades. The financial crisis is no surprise. The reason the US has such high energy costs goes back to government decisions made in the 1970s. And many things that used to be crises no longer happen, like the Black Plague. We have big problems, but we’ve also got many resources we’ve built up over the centuries to help. Many of the challenges we face are political and social, not technical.

We will never fall into a new Dark Age. The biggest problem is that we aren’t advancing as fast as we could, and many are still starving, sick, etc. However, it has always been this way. The 20th century was very brutal! But we are advancing, and it is mostly known threats like WMDs which could cause a disaster. In the main, the world is getting safer every day as we better understand it.

We aren’t going to build a new human. It is more like a Renaissance. Those who lost limbs will get increasingly better robotic ones, but they will still be humans. The best reason to build a robotic arm is to attach it to a human.

The video had a collectivist and authoritarian perspective when it said:

“The world’s community and leaders should encourage mankind instead of wasting resources on solving momentary problems.”

This sentence needs to be deconstructed:

1. Government acts via force. Government’s job is to maintain civil order, so having it also out there “encouraging” everyone to never waste resources is creepy. Do you want your policeman to also be your nanny? Here is a quote from C.S. Lewis:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”

2. It is wrong to think government is the solution to our problems. Most of the problems that exist today, like the Greek debt crisis and the US housing crisis, were caused by governments trying to do too much.

3. There is no such thing as the world’s leaders. There is the UN, which doesn’t act in a humanitarian crisis until after everyone is dead. In any case, we don’t need governments to act. We built Wikipedia.

4. “Managing resources” is a code word for socialism. If their goal is to help with the development of new technologies, then the task of managing existing resources is totally unrelated. If your job is to build robots, then your job is not also to worry about whether the water and air are dirty. Any scientist who talks about managing resources is actually a politician. Here is a quote from Friedrich Hayek:

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design. Before the obvious economic failure of Eastern European socialism, it was widely thought that a centrally planned economy would deliver not only “social justice” but also a more efficient use of economic resources. This notion appears eminently sensible at first glance. But it proves to overlook the fact that the totality of resources that one could employ in such a plan is simply not knowable to anybody, and therefore can hardly be centrally controlled.”

5. We should let individuals decide what to spend their resources on. People don’t only invest in momentary things. People build houses. In fact, if you are looking for an excuse to drink, being poor because you live in a country with 70% taxes is a good one.

The idea of tasking government with finding the solutions, doing all the futuristic research, and shoving new products down our throats is wrong and dangerous. We want individuals, and collections of them (corporations), to do it because they will best put it to use in ways that actually improve our lives. Everything is voluntary, which encourages good customer relationships. Money will flow towards the products people actually care about, instead of what some mastermind bureaucrat thinks we should spend it on. There are many historical examples of how government doesn’t innovate as well as the private sector: the French telephone system, Cuba, expensive corn-based ethanol, the International Space Station, healthcare. The free market is imperfect, but it leads to the fastest technological and social progress for the reasons Friedrich Hayek explained. A lot of government research today is wasted because it never gets put to use commercially. There are many things that can be done to make the private sector more vibrant. There are many ways government can do a better job, and all that evidence should be a warning not to use governments to endorse programs with the goal of social justice. NASA has done great things, but it was only possible because it existed in a modern society.

They come up with a nice list of things that humanity can do, but they haven’t listed one of the most important first steps: more Linux. We aren’t going to get cool and smart robots, etc. without a lot of good free software first.

The video says:

“What we need is not just another technological revolution, but a new civilization paradigm, we need philosophy and ideology, new ethics, new culture, new psychology.”

It minimizes the technology aspect, when it is the hard work of disparate scientists that will bring us the most benefits.

It is true that we need to refine our understanding of many things, but we are not starting over, just evolving. Anyone who thinks we need to start over doesn’t realize what we’ve already built and all the smart people who’ve come before. The basis of good morals from thousands of years ago still applies. It will just be extended to deal with new situations, like cloning. The general rules of math, science, and biology will remain. In many cases, we are going back to the past. The Linux and free software movement is simply returning computer software to the centuries-old tradition of science. Sometimes the idea has already been discovered, but it isn’t widely used yet. It is a social problem, not a technical one.

The repeated use of the word “new”, etc. makes this video feel like propaganda. Cults try to get people to reset their perspective into a new world, and to convince them that only they have the answers. This video comes off as a sales pitch with them as the solution to our problems, ignoring that it will take millions of people. Their lists of technologies are random. Some of these problems we could have solved years ago, and some we can’t solve for decades, and they mix both kinds of examples. It seems they do not know what is coming next, given how disorganized they are. They also pick multiple words that are related, and so repeat themselves. Repetition is used to create an emotional impact, another trait of propaganda.

The thing about innovation and the future is that it is surprising. Many futurists get things wrong. If these guys really had the answers, they’d have invented it and made money on it. And compared to some of the tasks, we are like cavemen.

Technology evolves in a stepwise fashion, and so looking at it as some clear end results on some day in the future is wrong.

For another example: the video makes it sound like going beyond Earth and then beyond the Solar System is a two-step process when in fact it is many steps, and the journey is the reward. If they were that smart, they’d endorse the space elevator, which is the only cheap way to get out there, and we could do it in 10 years.

The video suggests that humanity doesn’t have a masterplan, when I just explained that you couldn’t make one.

It also suggests that individuals are afraid of change, when in fact that is a characteristic of governments as well. The government class has known for decades that Social Security is going bankrupt, but it would rather criticize anyone who wants to reform it than fix the underlying problem. This video is again trying to urge collectivism with its criticism of the “mistakes” people make. The video is very arrogant in how it looks down at “the masses.” This is another common characteristic of collectivism.

Here is the first description of their contribution:

“We integrate the latest discoveries and developments from the sciences: physics, energetics, aeronautics, bio-engineering, nanotechnology, neurology, cybernetics, cognitive science.”

That sentence is laughable because it is an impossible task. To understand all of the latest advances would involve talking with millions of scientists. If they are doing all this integration work, what have they produced? They want everyone to join up today, work to be specified later.

The challenge for nuclear power is not the science; it is the lawyers who outlawed new plants in the 1970s and have basically halted all advancement in building safer and better ones. These challenges are mostly political, not scientific. We need to get engineers in corporations like GE, supervised by governments, building safer and cleaner nuclear power.

If you wanted to create all of what they offer, you’d have to hire a million different people. If you were building the pyramids, you could get by with most of your workers having one skill, the ability to move heavy things around. However, the topics they list are so big and complicated, I don’t think you could build an organization that could understand it all, let alone build it.

They mention freedom and speak in egalitarian terms, but this is contradicted by their earlier words. In their world, we will all be happy worker bees, working “optimally” for their collective. Beware of masterminds offering to efficiently manage your resources.

I support discussion and debate. I am all for think tanks and other institutions that hire scientists. However, those that lobby government to act on their behalf are scary. I don’t want every scientist lobbying the government to institute their pet plan, no matter how good it sounds. The government would get so overwhelmed that it couldn’t do its actual job. The powers of the US Federal government are very limited and generally revolve around an army and a currency. Social welfare is supposed to be handled by the states.

Some of their ideas cannot be turned into laws by the US Congress because it doesn’t have this authority — the States do. Obamacare is likely to be ruled unconstitutional, and their ideas are potentially much more intrusive towards individual liberty. They would require a Constitutional Amendment, which would never pass and which we don’t need.

They offer a social network where scientists can plug in and figure out what they need to do. This could also be considered an actual concrete example of something they are working on. However, there are already social networks where people are advancing the future. SourceForge.net is the biggest community of programmers. There is also Github.com with 1,000,000 projects. Sage has a community advancing the state of mathematics.

If they want to create their own new community solving some aspect, that is great, especially if they have money. But the idea that they are going to make it all happen is impossible. And it will never replace all the other great communities that already exist. Even science happens on Facebook, when people chat about their work.

If they want to add value, they need to specialize. Perhaps they will come up with millions of dollars and can do research in specific areas. However, their fundamental research would very likely get used by other people in ways they never imagined. The more fundamental the discovery, the less any one team can possibly take advantage of all its aspects.

They say they have a research lab working on cybernetics. However, they don’t demonstrate any results. I don’t imagine they can be that far ahead of the rest of the world, which provides them the technology they use to do their work. Imagine a competitor to Henry Ford: could he really have built a much better car given the technology available at the time? My response to anyone who claims some advancement is: turn it into a demo or a useful product and sell it. All this video offers as evidence is CGI, which any artist can make.

I support the idea of flying cars. First we need driverless cars and cheaper energy. Unless they are a car or airplane company, I don’t see what this organization will have to do with that task. I have nothing against futuristic videos, but this one doesn’t make clear what their involvement is, and such ambiguity should be noted.

They are wrong when they say we won’t understand consciousness until 2030, because we already understand it at some level today. Neural networks have been around for decades. IBM’s Jeopardy-playing Watson was a good recent example. However, it is proprietary, so not much will come of that particular example. Fortunately, Watson was built on lots of free software, and the community will get there. Google is very proprietary with their AI work. Wolfram Alpha is also proprietary. Etc. We’ve got enough technical people for an amazing world if we can just get them to work together in free software and Python.

The video’s last sentence suggests that spiritual self-development is the new possibility. But people can work on that today. And again, enlightenment is not a destination but a journey.

We are a generation away from immortality unless things greatly change. I think about LibreOffice, cars that drive themselves, and the space elevator, but faster progress in biology is also possible if people will follow the free software model. The Microsoft-style proprietary development model has infected many fields.

Steamships, locomotives, electricity; these marvels of the industrial age sparked the imagination of futurists such as Jules Verne. Perhaps no other writer or work inspired so many to reach the stars as did this Frenchman’s famous tale of space travel. Later developments in microbiology, chemistry, and astronomy would inspire H.G. Wells and the notable science fiction authors of the early 20th century.

The submarine, aircraft, the spaceship, time travel, nuclear weapons, and even stealth technology were all predicted in some form by science fiction writers many decades before they were realized. The writers were not simply making up such wonders from fanciful thought or children’s rhymes. As science advanced in the mid-19th and early 20th century, the probable future developments this new knowledge would bring about were in some cases quite obvious. Though powered flight seems a recent miracle, it was long expected, as hydrogen balloons and parachutes had been around for over a century and steam propulsion went through a long gestation before ships and trains were driven by the new engines. Solid rockets were ancient, and even multiple stages to increase altitude had been in use by fireworks makers for a very long time before the space age.

Some predictions were seen to come about in ways far removed from, yet still connected to, their fictional counterparts. The U.S. Navy-flagged, steam-driven Nautilus swam the ocean blue under nuclear power not long before rockets took men to the moon. While Verne predicted an electric submarine, his notional Florida space gun never did take three men into space. However, there was a Canadian weapons designer named Gerald Bull who met his end while trying to build such a gun for Saddam Hussein. The insane Invisible Man of Wells took the form of invisible aircraft playing a less than human role in the insane game of mutually assured destruction. And a true time machine was found easily enough in the mathematics of Einstein: simply going fast enough through space will take a human being millions of years into the future. However, traveling back in time is still as much an impossibility as the anti-gravity Cavorite from The First Men in the Moon. Wells missed on occasion but was not far off with his story of alien invaders defeated by germs; except we are the aliens invading the natural world’s ecosystem with our genetically modified creations, and could very well soon meet our end as a result.

While Verne’s Captain Nemo made war on the death merchants of his world with a submarine ram, our own more modern anti-war device was found in the hydrogen bomb: an agent so destructive that no new world war has been possible since nuclear weapons were stockpiled in the second half of the last century. Neither Verne nor Wells imagined the destructive power of a single missile submarine able to incinerate all the major cities of earth. The dozens of such superdreadnoughts even now cruising in the icy darkness of the deep ocean prove that truth is more often stranger than fiction. It may seem the golden age of predictive fiction has passed, as exceptions to the laws of physics prove impossible despite advertisements to the contrary. Science fiction has given way to science fantasy, and the suspension of disbelief possible in the last century has turned to disappointment and the distractions of whimsical technological fairy tales. “Beam me up” was simply a way to cut production costs for special effects, and warp drive the only trick that would make a one-hour episode work. Unobtainium and wishalloy, handwavium and technobabble: they have watered down what our future could be into childish wish fulfillment and escapism.

The triumvirate of the original visionary authors of the last two centuries is completed with E.E. “Doc” Smith. With this less famous author the line between predictive fiction and science fantasy was first truly crossed and the new genre of “Space Opera” most fully realized. The film industry has taken Space Opera and run with it in the Star Wars franchise and the works of Canadian filmmaker James Cameron. Though of course quite entertaining, these movies showcase all that is magical and fantastical, and wrong, concerning science fiction as a predictor of the future. The collective imagination of the public has now been conditioned to violate the reality of what is possible through the violent maiming of basic scientific tenets. This artistic license was something Verne at least tried not to resort to, Wells trespassed upon more frequently, and Smith indulged in without reservation. Just as Madonna found the secret to millions by shocking a jaded audience into pouring money into her bloomers, the formula for ripping off the future has been discovered in the lowest kind of sensationalism. One need only attend a viewing of the latest Transformers movie or download Battlestar Galactica to appreciate that the entertainment industry has cashed in on the ignorance of a poorly educated society by selling intellect-decaying brain candy. It is cowboys vs. aliens and has nothing of value to contribute to our culture… well, on second thought, I did get watery-eyed when the young man died in Harrison Ford’s arms. I am in no way criticizing the profession of acting, and I value the talent of these artists; it is rather the greed that corrupts the ancient art of storytelling I am unhappy with. Directors are not directors unless they make money, and I feel sorry that these incredibly creative people find themselves less than free to pursue their craft.

The archetype of the modern science fiction movie was 2001, and like many legendary screen epics, a Space Odyssey was not as original as the marketing made it out to be. In an act of cinema cold war, many elements were lifted from a Soviet movie. Even though the fantasy element was restricted to a single device in the form of an alien monolith, every artifice of this film has so far proven non-predictive. Interestingly, the propulsion system of the spaceship in 2001 was originally going to use atomic bombs, which are still, a half century later, the only practical means of interplanetary travel. Stanley Kubrick, fresh from Dr. Strangelove, was tired of nukes and passed on portraying this obvious future.

As with the submarine, airplane, and nuclear energy, the technology to come may be predicted with some accuracy if the laws of physics are not insulted but rather just rudely addressed. Though in some cases, the line is crossed and what is rude turns disgusting. A recent proposal for a “NautilusX” spacecraft is one example of a completely vulgar denial of reality. Chemically propelled, with little radiation shielding, and exhibiting a ridiculous doughnut centrifuge, such advertising vehicles are far more dishonest than cinematic fabrications in that they deceive the public without the excuse of entertaining them. In the same vein, space tourism is presented as space exploration when in fact the obscene spending habits of the ultra-wealthy have nothing to do with exploration and everything to do with the attendant taxpayer-subsidized business plan. There is nothing to explore in Low Earth Orbit except the joys of zero-G bordellos. Rudely undressing by way of the profit motive is followed by a rude address to physics when the key private space scheme for “exploration” is exposed. This supposed key is a false promise of things to come.

While very large and very expensive Heavy Lift Rockets have been proven successful in escaping earth’s gravitational field with human passengers, the inferior lift vehicles being marketed as “cheap access to space” are in truth cheap and nasty taxis to space stations going in endless circles. The flim-flam investors are basing their hopes of big profit on cryogenic fuel depots and transfer in space. Like the filling station every red-blooded American stops at to fill his personal spaceship with fossil fuel, depots are the solution to all the holes in the private space plan for “commercial space.” Unfortunately, storing and transferring hydrogen as a liquefied gas a few degrees above absolute zero in a zero-G environment has nothing in common with filling a car with gasoline. It will never work as advertised. It is a trick. A way to get those bordellos in orbit courtesy of taxpayer dollars. What a deal.

So what is the obvious future that our present level of knowledge presents to us when entertaining the possible and the impossible? More to come.

Greetings fellow travelers, please allow me to introduce myself; I’m Mike ‘Cyber Shaman’ Kawitzky, independent film maker and writer from Cape Town, South Africa, one of your media/art contributors/co-conspirators.

It’s a bit daunting posting to such an illustrious board, so let me try to imagine, with you, how to regard the present with nostalgia while looking forward to the past, knowing that a millisecond away in the future exist thoughts to think; it’s the mode of neural text, reverse causality, non-locality and quantum entanglement, where the traveller is the journey into a world in transition; after 9/11, after the economic meltdown, after the oil spill, after the tsunami, after Fukushima, after 21st Century melancholia upholstered by anti-psychotic drugs helps us forget ‘the good old days’; because it’s business as usual for the 1%; the rest continue downhill with no brakes. Can’t wait to see how it all works out.

Please excuse me, my time machine is waiting…
Post cyberpunk and into Transhumanism

The Nature of Identity Part 3
(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we have is finding out how neural signals translate into sensory signals – how neural information is translated into the language we understand – that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons do not emit any color nor any scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Many years ago I was a member of what was called the Artorga Research Group – a group including some of the early cyberneticists – who were focussed on Artificial Organisms.

One of the main areas of concern was, of course, Memory.

One of our group was a young German engineer who suggested that perhaps memories were in fact re-synthesised in accordance with remembered rules, as opposed to storing huge amounts of data.

Since then similar ideas have arisen in such areas as computer graphics.

Here is an example,

It shows a simple picture on a computer screen. We want to store (memorize) this information.

One way is to store the information about each pixel on the screen – is it white or is it black? With a typical screen resolution, that could mean over 2.5 million bits of information.

But there is another way….

In this process one simply specifies the start point (A) in terms of its co-ordinates (300 Vertically, 100 Horizontally); and its end point (B) (600 Vertically, 800 Horizontally); and simply instructs – “Draw a line of thickness w between them”.

The whole picture is specified in just a few bits.

The first method, specifying bit by bit, known as the Bit Mapped Protocol (.BMP), uses up lots of memory space.

The other method, based on re-synthesising according to stored instructions, is used in some data reduction formats; and is, essentially, just what that young engineer suggested, many years before.
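To make the contrast concrete, here is a minimal Python sketch of the two approaches; the resolution and bit counts are illustrative assumptions, not exact file-format figures.

```python
# Two ways to "remember" the same black-and-white line drawing.
# Numbers here are illustrative assumptions.

WIDTH, HEIGHT = 2048, 1280            # an assumed screen resolution

# Method 1: bit-mapped (.BMP style) -- one bit per pixel, white or black.
bitmap_bits = WIDTH * HEIGHT          # 2,621,440 bits: over 2.5 million

# Method 2: re-synthesis -- store only the instruction and redraw on demand.
# "Draw a line of thickness w from A to B."
instruction = {"A": (300, 100), "B": (600, 800), "w": 1}
instruction_bits = 5 * 16             # five small integers at 16 bits each

print(bitmap_bits)       # 2621440
print(instruction_bits)  # 80 -- the whole picture in just a few bits
```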

On your computer you will have a screen saver – almost certainly a colorful scene – and of course that is stored, so that if you are away from the computer for a time it can automatically come on to replace what was showing, and in this way “save” your screen.

So – where are those colors in your screensaver stored, where are the shapes shown in it stored? Is there in the computer a Color Storage Place? Is there a Shape Storage Place?

Of course not.

Yet these are the sort of old, sodden concepts that are sometimes still applied in thinking about the brain and memories.

Patterned streams of binary bits, not unlike neural signals (but about 70 times larger), are fed to a computer screen. And then the screen takes these patterns of bits as instructions to re-synthesise glowing colors and shapes.

We cannot actually perceive the binary signals, and so they are translated by the screen into a language that we can understand. The screen is a translator – that is its sole function.

This is exactly analogous to the point made earlier about perception and neural signals.

The main point here, though, is that what is stored in the computer memory are not colors and shapes but instructions.

And inherent in these instructions as a whole, there must exist a “map”.

Each instruction must not only tell its bit of the screen what color to glow – but it must also specify the co-ordinates of that bit. If the picture is the head of a black panther with green eyes, we don’t want to see a green head and black eyes. The map has to be right. It is important.

Looking at it in another way the map can be seen as a connectivity table – specifying what goes where. Just two different ways of describing the same thing.

As well as simple perception there are derivatives of what has been perceived that have to be taken into account, for example, the factor called movement.

Movement is not in itself perceptible (as we shall presently show); it is a computation.

Take for example, the following two pictures shown side-by-side.

I would like to suggest that one of these balls is moving. And to ask — which one is moving?

If movement had a visual attribute then one could see which one it was – but movement has no visual attributes – it is a computation.

To determine the speed of something, one has to observe its current position, compare that with the record (memory) of its previous position; check the clock to determine the interval between the two observations; and then divide the distance between the two positions, s; by the elapsed time, t; to determine the speed, v,

s/t = v.
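As a minimal sketch, with invented numbers, the computation looks like this in Python:

```python
def apparent_speed(x1, x2, t1, t2):
    """Derive speed v = s / t from two position observations and a clock."""
    s = abs(x2 - x1)   # distance between the two recorded positions
    t = t2 - t1        # elapsed time between the two observations
    return s / t

# A ball recorded at x = 0.0 m, and 40 ms later at x = 0.1 m:
print(apparent_speed(0.0, 0.1, 0.000, 0.040))  # 2.5 (m/s) -- a computation, not a percept
```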

This process is carried out automatically, (subconsciously), in more elaborate organisms by having two eyes spaced apart by a known distance and having light receptors – the retina – where each has a fast turn-on and a slow (about 40 ms) turn off, all followed by a bit of straightforward neural circuitry.

Because of this system, one can look at a TV screen and see someone in a position A, near the left hand edge, and then very rapidly, a series of other still pictures in which the person is seen being closer and closer to B, at the right hand edge.

If the stills are shown fast enough – more than 25 a second — then we will see the person walking across the screen from left to right. What you see is movement – except you don’t actually see anything extra on the screen. Being aware of movement as an aid to survival is very old in evolutionary terms. Even the incredibly old fish, the coelacanth, has two eyes.

The movement information is a derivative of the information provided by the receptors.

And now we ought to look at information in a more mathematical way – as in the concept of Information Space (I-space).

For those who are familiar with the term, it is a Hilbert Space.

Information Space is not “real” space – it is not distance space – it is not measurable in metres and centimetres.

As an example, consider Temperature Space. Take the temperature of the air going in to an air-conditioning (a/c) system; the temperature of the air coming out of the a/c system; and the temperature of the room. These three provide the three dimensions of a Temperature Space. Every point in that space correlates to an outside air temperature, an a/c output temperature and the temperature of the room. No distances are involved – just temperatures.

This is an illustration of what it would look like if we re-mapped it into a drawing.

The drawing shows the concept of a 3-dimensional Temperature Space (T-space). The darkly outlined loop is shown here as a way of indicating the “mapping” of a part of T-space.

But what we are interested in here is I-space. And I-space will have many more dimensions than T-space.

In I-space each location is a different item of information, and the fundamental rule of I-space – indeed of any Hilbert space – is,

Similarity equals Proximity.

This would mean that the region concerned with Taste, for example, would be close to the area concerned with Smell, since the two are closely related.

Pale Red would be closer to Medium Red than to Dark Red.
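A minimal sketch of this rule, with invented coordinates in a toy three-dimensional I-space (a real I-space would have many more dimensions):

```python
import math

# Three shades of red as points in a toy 3-dimensional I-space.
points = {
    "pale red":   (1.0, 0.6, 0.6),
    "medium red": (1.0, 0.3, 0.3),
    "dark red":   (0.5, 0.0, 0.0),
}

def distance(p, q):
    """Euclidean distance: smaller distance = greater similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance(points["pale red"], points["medium red"]))  # ~0.42: closer...
print(distance(points["pale red"], points["dark red"]))    # ~0.98: ...than this
```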

Perception then would be a matter of connectivity.

An interconnected group we could refer to as a Composition or Feature.

Connect 4 & legs & fur & tail & bark & the word dog & the sound of the word dog – and we have a familiar feature.

Features are patterns of interconnections; and it is these features that determine what a thing or person is seen as. What they are seen as is taken as their identity. It is the identity as seen from outside.

To oneself one is here and now, a 0,0 reference point. To someone else one is not the 0,0 point – one is there — not here, and to that person it is they who are the 0,0 point.

This 0,0 or reference point is crucially important. One could upload a huge mass of data, but if there was no 0,0 point that is all it would be – a huge mass of data.

The way forward towards this evolutionary goal is not to concentrate on being able to upload more and more data, faster and faster – but instead to concentrate on being able to identify the 0,0 point; and to be able to translate from neural code to the language of perception.

The vulnerability of the bio body is the source of most threats to its existence.

We have looked at the question of uploading the identity by uploading the memory contents, on the assumption that the identity is contained in the memories. I believe this assumption has been proved to be almost certainly wrong.

What we are concentrating on is the identity as the viewer of its perceptions, the centroid or locus of perception.

It is the fixed reference point. And the locus of perception is always Here, and it is always Now. This is abbreviated here to 0,0.

What more logical place to find the identity than where it considers Here and Now – its residence in Space Time?

It would surely be illogical to start searching for the identity where it considers to be Somewhere Else or in Another Time.

We considered the fact that the human being accesses the outside world through its senses, and that its information processing system is able to present that information as being “external.” A hand is pricked with a pin. The sensory information – a stream of neural impulses, all essentially identical — progress to the upper brain where the pattern is read and the sensation of pain is felt. That sensation, however, is projected or mapped onto the exact point it originated from.

One feels the pain at the place the neural disturbance came from. It is an illusion — a very useful illusion.

In the long slow progress of evolution from a single cell to the human organism, and to the logical next step — the “android” (we must find a better word) – this mapping function must be one of the most vital survival strategies. If the predator is gnawing at your tail, it’s smart to know where the pain is coming from.

It wasn’t just structure that evolved, but “smarts” too… smarter systems.

Each sensory channel conveys not just sensory information but information regarding where it came from. Like a set of outgoing information vectors. But there is also a complementary set of incoming vectors. The array of sensory vectors from visual, audible, tactile, and so on, all converge on one location – a locus of perception. And the channels cross-correlate. The hand is pricked – we immediately look at the place the pain came from. And… one can “follow one’s nose” to see where the barbecue is.

Dr Shu can use his left hand and arm and his right hand and arm in coordination to lift up the $22M Ming vase he is in the process of stealing.

Left/right coordination — so obvious and simple it gets overlooked.

A condition known as Synesthesia [http://hplusmagazine.com/editors-blog/sight-synesthesia-what-happens-when-senses-can-be-rewired ] provides an example of how two channels can get confused — for example, being able to see sounds or hear movement.

Perhaps the most interesting example is the rubber hand experiment from UC Riverside. In this the subject places their hands palm down on a table. The left arm and hand are screened off, and a substitute left “arm” and rubber hand are installed. After a while, the subject reacts as though the substitute was their real hand.

It is on YouTube at http://www.youtube.com/watch?v=93yNVZigTsk.

This phenomenon has been attributed to neuroplasticity.

A simpler explanation would be changed coordinates — something that people who row or who ride bicycles are familiar with — even if they have never analysed it. The vehicle becomes part of oneself. It becomes a part of the system, an extension. What about applying the same sense on a grander scale? Such a simple and common observation may have just as much relevance to the next step in evolution as the number of teraflops per second.

So, we can get the sensory vectors to be re-deployed. But one of the fundamental questions would be – can we get the 0,0 locus, the centroid of perception, to shift to another place?

Our environment, the environment we live in, is made of perception. Outside there may be rocks and rivers and rain and wind and thunder… but not in the head. Outside this “theater in the head,” there is a world of photons and particles and energy and radiation — reality — but what we see is what is visible, what we hear is what is audible, what we feel is what is tangible … that is our environment, that is where we live.

However, neurons do not emit any light, neurons do not make any sound, and they are not a source of pressure or temperature – so what the diddly are we watching and listening to?

We live in a world of perception. Thanks to powerful instrumentation and a great deal of scientific research we know that behind this world of perception there are neurons, unknown to us all the time working away providing us with colors and tones and scents….

But they do not emit colors or tones or scents – the neuronal language is binary – fired or not fired.

Somewhere the neuronal binary (Fired/Not Fired) language has to be translated into the language of perception – the range of colors, the range of tones, the range of smells … these are each continuous variables; not two-state variables as in the language of neurons.

There has been a great flurry of research activity in the area of neurons, and what was considered to be “Gospel” 10 years ago is no longer so.

IBM and ARM in the UK have (summer 2011) announced prototype brains with hyper-connectivity – a step in the right direction but the fundamental question of interpretation/translation is side-stepped.

I hope someone will prove me wrong, but I am not aware of anyone doing any work on the translator question. This is a grievous error.

(To be continued)

I have been asked to mention the following.
The Nature of The Identity — with Reference to Androids

The nature of the identity is intimately related to information and information processing.

The importance and the real nature of information is only now being gradually realised.

But the history of the subject goes back a long way.

In ancient Greece, those who studied Nature – the predecessors of our scientists – considered that what they studied – material reality – Nature – had two aspects – form and substance.

Until recent times all the emphasis was on substance — what substance(s) subjected to sufficient stress would transmute into gold; what substances in combination could be triggered into releasing vast amounts of energy – money and weapons – the usual Homo Sap stuff.

You take a block of marble – that is substance. You have a sculptor create a beautiful statue from it – that is form.

The form consists of the shapes imposed by the sculptor; and the shapes consist of information. Now, if you were an unfeeling materialistic bastard you could describe the shapes in terms of equations. And if you were an utterly depraved unfeeling materialistic bastard you could have a computer compare the sets of equations from many examples to find out what is considered to be beauty.

Dr Foxglove, the Great Maestro of Leipzig, is seated at the concert grand, playing on a Steinway (of course) with great verve (as one would expect). In front of him, under a low light, there is a sheet of paper with black marks – information of some kind – the music for Chopin’s Nocturne Op. 9, No. 2.

Aahh! Wonderful.

Sublime….

But … all is not as it seems….

Herr Doktor Foxglove thinks he is playing music.

A grand illusion my friend! You see, the music – it is, how you say — all in the heads of the listeners.

What the Good Doktor is doing, and doing manfully — is operating a wooden acoustic-wave generator – albeit very skilfully, and not just any old wooden acoustic-wave generator – but a Steinway wooden acoustic-wave generator.

There is no music in the physical world. The acoustic waves are not music. They are just pressure waves in the atmosphere. The pressure waves actuate the eardrum. And that in turn actuates a part of the inner ear called the cochlea. And that in turn causes streams of neural impulses to progress up into the higher brain.

Dr Foxglove hits a key on the piano corresponding to 440 acoustic waves per second; this is replicated in a slightly different form within the inner ear, until it becomes a stream of neural impulses….

But what the listener hears is not 440 waves or 440 neural impulses or 440 anything – what the listener hears is one thing – a single tone.

The tone is an exact derivative of the pattern of neural impulses. There are no tones in physical reality.

Tones exist only in the experience of the listener – only in the experience of the observer.

And thanks to some fancy processing not only will the listener get the illusion that 440 cycles per second is actually a “tone” – but a further illusion is perpetrated – that the tone is coming from a particular direction, that what one is hearing is Dr. Foxglove at the Steinway, over there, under the lights – that is where the sound is.

But no, my friend….

What the listener is actually listening to is his eardrums. He is listening to a derivative of a derivative … of his eardrums rattling.

His eardrums are rattling because someone is operating an acoustic wave generator in the vicinity.

But what he is hearing is pure information.

And as for the music ….

A single note – a tone – is neither harmonious nor disharmonious in itself. It is only harmonious or disharmonious in relation to another note.

Music is derived from ratios – a still further derivative — and ratios are pure information.

Take for example the ratio of 20 kg to 10 kg.

The ratio of 20 kg to 10 kg is not 2 kg.

The ratio of 20 kg to 10 kg is 2 – just 2 – pure information.

20 kg/10 kg = 2.

Similarly, we can also show that there is no colour in reality, there are no shapes in reality; depth perception is a derivative – and just as what one is listening to is the rattling of one’s eardrums – so what one is watching is the inside of one’s eyeballs – one is watching the shuddering impact of photons on one’s retina.

The sensations of sound, of light and colour and shapes are all in one’s mind – as decodings of neural messages – which in turn are derivatives of physical processes.

The wonderful aroma coming from the barbecue is all in one’s head.

There are no aromas or tastes in reality – all are conjurations of the mind.

Like the Old Guy said, all is maya, baby….

The only point that is being made here is that Information is too important a subject to be so neglected.

What you are doing here is at the leading edge beyond the leading edge and in that future Information will be a significant factor.

What we, away back in the dim, distant and bewildered early 21st Century, called Information Technology (I.T.) will be seen as Computer Technology (CT), which is all it ever was – but there will be a real IT in the future.

Similarly what has been referred to for too long as Information Science will be seen for what it is — Library Technology.

Now – down to work.

One of the options – the android – is to upload all stored data from a smelly old bio body to a cool Designer Body (DB).

This strategy is based on the unproven but popular belief that one’s identity is contained by one’s memory.

There are two critical points that need to be addressed.

The observer is the cameraman — not the picture. Unless you are looking in a mirror or at a film of yourself, then you are the one person who will not appear in your memory.

There will be memories of that favourite holiday place, of your favourite tunes, of the emotions that you felt when … but you will only “appear” in your memories as the point of observation.

You are the cameraman – not the picture.

So, we should view with skepticism ideas that uploading the memory will take the identity with it.

If somebody loses their memory – they do not become someone else – hopping and skipping down the street,

‘Hi – I’m Tad Furlong, I’m new in town….’

If somebody loses their memory – they may well say – ‘I do not know my name….’

That does not mean they have become someone else – what they mean is ‘I cannot remember my name….’

The fact that this perplexes them indicates that it is still the same person – it is someone who has lost their name.

If a person changes their name they do not become someone else; nor do they become someone else if they can’t remember their name – or as it is more commonly, and more dramatically, and more loosely put – “cannot remember who they are”.

So, what is the identity?

There is the observer – whatever that is – and there are observations.

There are different forms of information – visual, audible, tactile, olfactory … which together form the environment of the observer. By “projection” the environment is observed as being external. The visual image from one eye is compared with that of the other eye to give depth perception. The sound from one ear is compared with that from the other ear to give surround sound. You are touched on the arm and immediately the tactile sensation – which actually occurs in the mind, is mapped as though coming from that exact spot on your arm.

You live and have your being in a world of sensation.

This is not to say that the external world does not exist – only that our world is the world “inside” – the place where we hear, and see, and feel, and taste….

And all those projections are like “vectors” leading out from a projection spot – a locus of projection – the 0,0 spot – the point which is me seeing and me tasting and me hearing and me scenting, even though through the magic of projection I have the idea that the barbecue smells, that there is music in the piano, that the world is full of color, and that my feet feel cold.

This locus of projection is the “me” – it is the point of observation, the 0,0 reference point. This, the observer not the observation, is the identity … the me, the 0,0.

And that 0,0 may be a lot easier to shift than a ton and a half of squashed memories. Memories of being sick; of being tired; of the garden; of your dog; of the sound of chalk on the blackboard, of the humourless assistant bank manager; of the 1982 Olympics; of Sadie Trenton; of Fred’s tow bar; and so on and on and on –

So – if memory ain’t the thing — how do we do it … upload the identity?
(To be continued)

Most of the threats to human survival come down to one factor – the vulnerability of the human biological body.

If a tiny fraction of the sums being spent on researching or countering these threats were used to address the question of a non-biological alternative, a good team could research and develop a working prototype in a matter of years.

The fundamental question does not lie in the perhaps inappropriately named “Singularity” (of the AI kind), but rather in the means by which neural impulses are translated into sensory experience – sounds, colors, tastes, odours, tactile sensations.

By what means is the TRANSLATION effected?

It is well known that, leading up to sensory experience such as music, it is not just a matter of neural impulses or even patterns of neural impulses, but patterns of patterns – derivatives of derivatives of derivatives – yet beyond that, translation still has to occur.

Many of the threats to human existence, including over-population and all that it brings – can be handled by addressing the basic problem, instead of addressing each threat separately.

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with a flexibility similar to an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily utilized in today’s real-world problems: mature machine learning algorithms may profitably exploit patterns in customer behaviour, find correlations in scientific data, or even predict negotiation strategies, for example [1] [2], as may genetic algorithms. With the next upcoming technology for organizing knowledge on the net, called the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may start inventing early parts of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts and to enable our computers to build interfaces to real-world concepts and coherences more autonomously. Actually getting from expert systems to AGI will require approaches to bootstrap self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past, we have seen new kinds of security challenges: DoS attacks, email and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were and are among the first serious incidents related to the Internet. But still, all of these followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU virus”). To understand the implications of strong AI first means realizing that, if AGI takes off hard enough, there probably won’t be any human-predictable hardware, software, or interfaces around for long.

To grasp the new security implications, it’s important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. For example, the application of the simplest mathematical equations can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating new dots based on which dots, generated by the same rule, were observed in the previous step. Many of these rules can be encoded in as little as 4 letters (32 bits), and generate astounding complexity.

Cellular automaton, produced by a simple recursive formula
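As a minimal sketch, an elementary cellular automaton of the kind described can be written in a few lines of Python; this one uses Wolfram’s Rule 110, whose entire rule table fits in 8 bits, and the width and step count are arbitrary choices.

```python
# Elementary cellular automaton: a whole rule in one small integer.
RULE = 110            # famously complex despite its tiny encoding
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1   # a single seed cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Each new cell is looked up from its three-cell neighbourhood.
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 +
                     row[i] * 2 +
                     row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```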

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental increase which can be visualized as a complex spiral pattern, resembling a snail shell’s design and many other patterns in nature. A combination of Fibonacci spirals, for example, can resemble the motif of the head of a sunflower. A thorough understanding of this ‘simple’ Fibonacci sequence is also sufficient to model some fundamental but important dynamics of systems as complex as the stock market and the global economy.

Sunflower head showing a Fibonacci sequence pattern
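A minimal sketch of that recursive equation, F(n) = F(n-1) + F(n-2), showing the ratio of successive terms converging on the golden ratio that governs such spirals:

```python
def fibonacci(n):
    """Generate the first n Fibonacci numbers from the short recursion."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])   # the whole rule, in one line
    return seq

fib = fibonacci(12)
print(fib)                # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(fib[-1] / fib[-2])  # ~1.618, the golden ratio
```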

Traditional software is many orders of magnitude higher in complexity than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence may be expected to work with even more complex rules than low-level computer programs, of a complexity comparable to natural human language, which would place it yet several orders of magnitude higher in complexity than traditional software. The security implications have not yet been researched systematically, but they are likely to be as hard as one may now expect.

Practical security is not about achieving perfection, but about mitigating risks to a minimum. A current consensus among strong AI researchers is that we can only improve the chances for an AI to be friendly, i.e. an AI acting in a secure manner and having a positive long-term effect on humanity rather than a negative one [5], and that this must be a crucial design aspect from the beginning. Research into Friendly AI started out with a serious consideration of the Asimov Laws of robotics [6] and is based on the application of probabilistic models, cognitive science and social philosophy to AI research.

Many researchers who believe in the viability of AGI take it a step further and predict a technological singularity. Just like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress much more rapidly than what we are used to from the history of humanity, i.e. beyond the current ‘laws’ of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, just as we believe the creation of the universe depended on its initial conditions (in the Big Bang case, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on the initial conditions as well, i.e. the design of the bootstrapping software. If we succeed in manufacturing a general-purpose decision-making mind, then its whole point would be self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration and the initial conditions of a strong AI, which could be influenced mostly by getting the initial iteration of its hardware and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology is a useful example here, pointing in the general direction such development should go. In rocket science and space technology, all measurements and mathematical equations must be as precise as possible by our current technological standards. Also, multiple redundancies must be present for every system, since every single aspect of a system can be expected to fail. Despite this, many rocket launches still fail today, although we are steadily improving on error rates.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be convinced by an AGI to remove its limitations. Since an AGI can be expected to be very convincing if we expect it to exceed human intellect, we should not only focus on physical limitations but also on making the AGI ‘friendly’. But even in designing this ‘friendliness’, the way our mind works is largely unprepared to deal with the consequences of the complexity of an AGI, because the way we perceive and deal with potential issues and risks stems from evolution. As a product of natural evolution, our behaviour helps us deal with animal predators, interact in human societies and care about our children, but not anticipate the complexity of man-made machines. Natural behavioural traits of our human perception and cognition are a result of evolution, and are called cognitive biases.

Sadly, as helpful as they may be in natural (i.e., non-technological) environments, these are the very same behaviours which are often counterproductive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you don’t really see the primary importance of cognitive biases to the security of future AI at this point, you’re probably in good company. But there are good reasons why this is a crucial issue that researchers, developers and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute for AI [3] was to get the basics right, including grasping the cognitive biases, which necessarily influence the technological design of AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT security issues that need to be addressed in computer programs are: input validation, access limitations, avoiding buffer overflows, safe conversion of data types, setting resource limits, and secure error handling (a minimal sketch of two of these follows at the end of this section). All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI. However, we must avoid underestimating the design goals for a strong AI, mitigating the risk on all levels from the beginning. To do this, we must care about more than the traditional IT security issues. An AGI will interface with the human mind, through text and direct communication and interaction. Thus, we must also estimate the errors that we may not see, and do our best to be aware of flaws in human logic and cognitive biases, which may include:

  • Loss aversion: “the dis-utility of giving up an object is greater than the utility associated with acquiring it”.
  • Positive outcome bias: a tendency in prediction to overestimate the probability of good things happening to oneself.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

The above cognitive biases are a modest selection from Wikipedia’s list [7], which contains over a hundred more. Struggling with some of the known cognitive biases, and the social components involved, in complex technological situations may be quite familiar to many of us, from managing modern business processes to investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, but based mostly on safety, such as Buffett’s margin-of-safety concept. Even with all the factors gained from social and technological experience taken into account in an AGI design that strives to optimize both cognitive and IT security, its designers still cannot afford to forget that perfect and complete security remains an illusion.
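As promised above, here is a minimal, hypothetical sketch of two of the traditional checks from the earlier list, input validation and resource limits, applied to an imagined query interface; all names and limits are invented for illustration, not a prescription for real AGI security.

```python
# Hypothetical illustration of input validation and resource limits.
# Names and limits are invented; real systems need far more than this.

MAX_QUERY_LENGTH = 4096                 # resource limit: bound input size
ALLOWED_CHARS = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789 .,;:?!'\"()-"
)

def validated_query(raw: str) -> str:
    """Validate untrusted input before it reaches the rest of the system."""
    if len(raw) > MAX_QUERY_LENGTH:
        raise ValueError("query exceeds resource limit")
    if not set(raw) <= ALLOWED_CHARS:
        raise ValueError("query contains disallowed characters")
    return raw

# Secure error handling: fail closed rather than passing unchecked data on.
try:
    query = validated_query("What is two plus two?")
except ValueError:
    query = None
print(query)
```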

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed Feb 25, 2011].
[3] The Singularity Institute for Artificial intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E., 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI, Accessed Feb 25, 2011
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases, Accessed Feb 25, 2011

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume shall be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays: (around 7,000 words): 30 September 2011
  • Notifications: 30 February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.

Purpose of this volume

Central questions

Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume. Essays that are three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language that is divorced from speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).

(More details)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University