
JUSTIN.SPACE.ROBOT.GUY
A Point too Far to Astronaut

It’s cold out there beyond the blue. Full of radiation. Low on breathable air. Vacuous.
Machines and organic creatures, keeping them functioning and/or alive — it’s hard.
Space to-do lists are full of dangerous, fantastically boring, and super-precise stuff.

We technological mammals assess thusly:
Robots. Robots should be doing this.

Enter Team Space Torso
As covered by IEEE a few days ago, the DLR (das German Aerospace Center) released a new video detailing the ins & outs of their tele-operational haptic feedback-capable Justin space robot. It’s a smooth system, and eventually ground-based or orbiting operators will just strap on what look like two extra arms, maybe some VR goggles, and go to work. Justin’s target missions are the risky, tedious, and very precise tasks best undertaken by something human-shaped, but preferably remote-controlled. He’s not a new robot, but Justin’s skillset is growing (video is down at the bottom there).

Now, Meet the Rest of the Gang:
SPACE.TORSO.LINEUPS
NASA’s Robonaut2 (full coverage), the first and only humanoid robot in space, has of late been focusing on the ferociously mundane tasks of button pushing and knob turning, but hey, WHO’S IN SPACE, HUH? Then you’ve got Russia’s elusive SAR-400, which probably exists, but seems to hide behind… an iron curtain? Rounding out the team is another German, AILA. The nobody-knows-why-it’s-feminized AILA is another DLR-funded project from a university robotics and A.I. lab with a 53-syllable name that takes too long to type but there’s a link down below.

Why Humanoid Torso-Bots?
Robotic tools have been up in space for decades, but they’ve basically been iterative improvements on the same multi-joint single-arm grabber/manipulator. NASA’s recent successful Robotic Refueling Mission is an expansion of mission-capable space robots, but as more and more vital satellites age, collect damage, and/or run out of juice, and more and more humans and their stuff blast into orbit, simple arms and auto-refuelers aren’t going to cut it.

Eventually, tele-operable & semi-autonomous humanoids will become indispensable crew members, and the why of it breaks down like this: 1. space stations, spacecraft, internal and extravehicular maintenance terminals, these are all designed for human use and manipulation; 2. what’s the alternative, a creepy human-to-spider telepresence interface? and 3. humanoid space robots are cool and make fantastic marketing platforms.

A space humanoid, whether torso-only or legged (see: Robonaut’s new legs), will keep astronauts safe, keep them focused on tasks machines can’t do, and prevent the space craziness that comes from trying to hold a tiny pinwheel perfectly still next to an air vent for 2 hours — which, in fact, is slated to become one of Robonaut’s ISS jobs.

Make Sciencey Space Torsos not MurderDeathKillBots
As one is often wont to point out, rather than finding ways to creatively dismember and vaporize each other, it would be nice if we humans could focus on the lovely technologies of space travel, habitation, and exploration. Nations competing over who can make the most useful and sexy space humanoid is an admirable step, so let the Global Robot Space Torso Arms Race begin!

“Torso Arms Race!”
Keepin’ it real, yo.

• • •

DLR’s Justin Tele-Operation Interface:

• • •

[JUSTIN TELE-OPERATION SITUATION — IEEE]

Robot Space Torso Projects:
[JUSTIN — GERMANY/DLR | FACEBOOK | TWITTER]
[ROBONAUT — U.S.A./NASA | FACEBOOK | TWITTER]
[SAR-400 — RUSSIA/ROSCOSMOS — PLASTIC PALS | ROSCOSMOS FACEBOOK]
[AILA — GERMANY/DAS DFKI]

This piece originally appeared at Anthrobotic.com on February 21, 2013.

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do — you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant; it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time, we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine or something close enough for us to regard as such is without doubt going to happen, it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But, a working draft of the human genome, an initially 15-year international project, was completed 5 years ahead of schedule due largely to advances in brute force computational capability (in the not so digital 1990s). All that computery stuff like, you know, gets better a lot faster these days. Just sayin.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

It appears now that human intelligence is being largely superseded by robots and artificial singularity agents. Education and technology have no chance of making us far more intelligent. The question now is what our place is in this new world where we are no longer the most intelligent kind of species.

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than us if these approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains, but I am not so sure that irrational domains cannot also be handled by machines.


LEFT: Activelink Power Loader Light — RIGHT: The Latest HAL Suit

New Japanese Exoskeleton Pushing into HAL’s (potential) Marketshare
We of the robot/technology nerd demo are well aware of the non-ironically, ironically named HAL (Hybrid Assistive Limb) exoskeletal suit developed by Professor Yoshiyuki Sankai’s also totally not meta-ironically named Cyberdyne, Inc. Since its 2004 founding in Tsukuba City, just north of the Tokyo metro area, Cyberdyne has developed and iteratively refined the force-amplifying exoskeletal suit, and through the HAL FIT venture, they’ve also created a legs-only force resistance rehabilitation & training platform.

Joining HAL and a few similar projects here in Japan (notably Toyota’s & Honda’s) is Kansai-based & Panasonic-owned Activelink’s new Power Loader Light (PLL). Activelink has developed various human force amplification systems since 2003, and this latest version of the Loader looks a lot less like its big brother the walking forklift, and a lot more like the bottom half & power pack of a HAL suit. Activelink intends to connect an upper-body unit, and if successful, the PLL will become HAL’s only real competition here in Japan.
And for what?

Well, along with general human force amplification and/or rehab, this:


福島第一原子力発電所事故 — Fukushima Daiichi Nuclear Disaster Site

Fukushima Cleanup & Recovery: Heavy with High-Rads
As with Cyberdyne’s latest radiation-shielded, self-cooling HAL suit (the metallic gray model), Activelink’s PLL was ramped up after the 2011 Tohoku earthquake, tsunami, and resulting disaster at the Fukushima Daiichi Power Plant. Cleanup at the disaster area and responding to future incidents will of course require humans in heavy radiation suits with heavy tools, possibly among heavy debris. While specific details on both exoskeletons’ recent upgrades and deployment timelines and/or capabilities are sparse, clearly the HAL suit and the PLL are conceptually ideal for the job. One assumes both will incorporate something like 20–30 kg (45–65 lbs.) per limb of force amplification along with fully supporting the weight of the suit itself, and like HAL, the PLL will have to work in a measure of radiological shielding with the design considerations that entails. So for now, HAL is clearly in the lead here.

Exoskeleton Competition Motivation Situation
Now, the HAL suit is widely known, widely deployed, and far and away the most successful of its kind ever made. No one else in Japan — in the world — is actually manufacturing and distributing powered exoskeletons at comparable scale. And that’s awesome and all due props to Professor Sankai and his team, but in taking stock of the HAL project’s 8 years of ongoing development, objectively one doesn’t see a whole lot of fundamental advancement. Sure, lifting capacity has increased incrementally and the size of the power source & overall bulk have decreased a bit. And yeah, no one else is doing what Cyberdyne’s doing, but that just might be the very reason why HAL seems to be treading water — and until recently, e.g., Activelink’s PLL, no one’s come along to offer up any kind of alternative.

Digressively Analogizing HAL with Japan & Vice-Versa Maybe
What follows is probably anecdotal, but probably right: See, Japanese economic and industrial institutions, while immensely powerful and historically cutting-edge, are also insular, proud, and — weirdly — often glacially slow to innovate or embrace new technologies. With a lot of relatively happy workers doing excellent engineering with unmatched quality control and occasional leaps of innovation, Japan’s had a healthy electronics & general tech advantage for a good long time. Okay, but now, thorough and integrated globalization has monkeywrenched the J-system, and while the Japanese might be just as good as ever, the world has caught up. For example, Korea’s big two — Samsung & LG — are now selling more TVs globally than all Japanese makers combined. Okay yeah, TVs ain’t robots, but across-the-board competition has arrived in a big way, and Japan’s tech & electronics industries are faltering and freaking out, and it’s illustrative of a wider socioeconomic issue. Cyberdyne, can you dig the parallel here?

Back to the Robot Stuff: Get on it, HAL/Japan — or Someone Else Will
A laundry list of robot/technology outlets, including Anthrobotic & IEEE, puzzled at how the first robots able to investigate at Fukushima were the American iRobot PackBots & Warriors. It really had to sting that in robot loving, automation saturated, theretofore 30% nuclear-powered Japan, there was no domestically produced device nimble enough and durable enough to investigate the facility without getting a radiation BBQ (the battle-tested PackBots & Warriors — no problem). So… ouch?

For now, HAL & Japan lead the exoskeletal pack, but with a quick look at Andra Keay’s survey piece over at Robohub it’s clear that HAL and the PLL are in a crowded and rapidly advancing field. So, if the U.S. or France or Germany or Korea or the Kiwis or whomever are first to produce a nimble, sufficiently powered, appropriately equipped, and ready-for-market & deployment human amplification platform, Japanese energy companies and government agencies and disaster response teams just might add those to cart instead. Without rapid and inspired development and improvement, HAL & Activelink, while perhaps remaining viable for Japan’s aging society industry, will be watching emergency response and cleanup teams at home with their handsome friend Asimo and his pet Aibo, wondering whatever happened to all the awesome, innovative, and world-leading Japanese robots.

It’ll all look so real on an 80-inch Samsung flat-panel HDTV.

Activelink Power Loader — Latest Model


Cyberdyne, Inc. HAL Suit — Latest Model
http://youtu.be/xwzYjcNXlFE

SOURCES & INFO & STUFF
[HAL SUIT UPGRADE FOR FUKUSHIMA — MEDGADGET]
[HAL RADIATION CONTAMINATION SUIT DETAILS — GIZMAG]
[ACTIVELINK POWER LOADER UPDATE — DIGINFO.TV]

[TOYOTA PERSONAL MOBILITY PROJECTS & ROBOT STUFF]
[HONDA STRIDE MANAGEMENT & ASSISTIVE DEVICE]

[iROBOT SENDING iROBOTS TO FUKUSHIMA — IEEE]
[MITSUBISHI NUCLEAR INSPECTION BOT]

For Fun:
[SKELETONICS — CRAZY HUMAN-POWERED PROJECT: JAPAN]
[KURATAS — EVEN CRAZIER PROJECT: JAPAN]

Note on Multimedia:
Main images were scraped from the above Diginfo.tv & AFPBBNEWS
YouTube videos, respectively. Because there just aren’t any decent stills
out there — what else is a pseudo-journalist of questionable competency to do?

This piece originally appeared at Anthrobotic.com on January 17, 2013.

I’d like to announce the start of the Indiegogo.com campaign for Software Wars, the movie. It is called Software Wars, but it also talks about biotechnology, the space elevator and other futuristic topics. This movie takes many of the ideas I’ve posted here and puts them into video form. It will be understandable to normal people but interesting to people like us. I would appreciate the support of Lifeboat for this project.

A response to McClelland and Plaut’s
comments in the Phys.org story:

Do brain cells need to be connected to have meaning?

Asim Roy
Department of Information Systems
Arizona State University
Tempe, Arizona, USA
www.lifeboat.com/ex/bios.asim.roy

Article reference:

Roy A. (2012). “A theory of the brain: localist representation is used widely in the brain.” Front. Psychology 3:551. doi: 10.3389/fpsyg.2012.00551

Original article: http://www.frontiersin.org/Journal/FullText.aspx?s=196&name=cognitive_science&ART_DOI=10.3389/fpsyg.2012.00551

Comments by Plaut and McClelland: http://phys.org/news273783154.html

Note that most of the arguments of Plaut and McClelland are theoretical, whereas the localist theory I presented is very much grounded in four decades of evidence from neurophysiology. Note also that McClelland may have inadvertently subscribed to the localist representation idea with the following statement:

“Even here, the principles of distributed representation apply: the same place cell can represent very different places in different environments, for example, and two place cells that represent overlapping places in one environment can represent completely non-overlapping places in other environments.”

The notion that a place cell can “represent” one or more places in different environments is very much a localist idea. It implies that the place cell has meaning and interpretation. I start with responses to McClelland’s comments first. Please reference the Phys.org story to find these quotes from McClelland and Plaut and see the contexts.

1. McClelland – “what basis do I have for thinking that the representation I have for any concept – even a very familiar one – is associated with a single neuron, or even a set of neurons dedicated only to that concept?”

There are four decades of research in neurophysiology on receptive field cells in the sensory processing systems and on hippocampal place cells showing that single cells can encode a concept — from motion detection, color coding and line orientation detection to identifying a particular location in an environment. Neurophysiologists have also found category cells in the brains of humans and animals. See the next response, which has more details on category cells. The neurophysiological evidence is substantial that single cells encode concepts, starting as early as the retinal ganglion cells. Hubel and Wiesel won a Nobel Prize in physiology and medicine in 1981 for breaking this “secret code” of the brain. Thus there is enough basis to think that a single neuron can be dedicated to a concept, even at a very low level (e.g., for a dot, a line or an edge).

2. McClelland – “Is each such class represented by a localist representation in the brain?”

Cells that represent categories have been found in human and animal brains. Fried et al. (1997) found some MTL (medial temporal lobe) neurons that respond selectively to gender and facial expression and Kreiman et al. (2000) found MTL neurons that respond to pictures of particular categories of objects, such as animals, faces and houses. Recordings of single-neuron activity in the monkey visual temporal cortex led to the discovery of neurons that respond selectively to certain categories of stimuli such as faces or objects (Logothetis and Sheinberg, 1996; Tanaka, 1996; Freedman and Miller, 2008).

I quote Freedman and Miller (2008): “These studies have revealed that the activity of single neurons, particularly those in the prefrontal and posterior parietal cortices (PPCs), can encode the category membership, or meaning, of visual stimuli that the monkeys had learned to group into arbitrary categories.”

Lin et al. (2007) report finding “nest cells” in mouse hippocampus that fire selectively when the mouse observes a nest or a bed, regardless of the location or environment.

Gothard et al. (2007) found single neurons in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor. They found one neuron that responded in particular to threatening monkey faces. Their general observation is (p. 1674): “These examples illustrate the remarkable selectivity of some neurons in the amygdala for broad categories of stimuli.”

Thus the evidence is substantial that category cells exist in the brain.

References:

  1. Fried, I., McDonald, K. & Wilson, C. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron 18, 753–765.
  2. Kreiman, G., Koch, C. & Fried, I. (2000) Category-specific visual responses of single neurons in the human medial temporal lobe. Nat. Neurosci. 3, 946–953.
  3. Freedman DJ, Miller EK (2008) Neural mechanisms of visual categorization: insights from neurophysiology. Neurosci Biobehav Rev 32:311–329.
  4. Logothetis NK, Sheinberg DL (1996) Visual object recognition. Annu Rev Neurosci 19:577–621.
  5. Tanaka K (1996) Inferotemporal cortex and object vision. Annu Rev Neurosci 19:109–139.
  6. Lin, L. N., Chen, G. F., Kuang, H., Wang, D., & Tsien, J. Z. (2007). Neural encoding of the concept of nest in the mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 104, 6066–6071.
  7. Gothard, K.M., Battaglia, F.P., Erickson, C.A., Spitler, K.M. & Amaral, D.G. (2007). Neural Responses to Facial Expression and Face Identity in the Monkey Amygdala. J. Neurophysiol. 97, 1671–1683.

3. McClelland – “Do I have a localist representation for each phase of every individual that I know?”

Obviously more research is needed to answer these types of questions. But Saddam Hussein and Jennifer Aniston type cells may provide the clue someday.

4. McClelland – “Let us discuss one such neuron – the neuron that fires substantially more when an individual sees either the Eiffel Tower or the Leaning Tower of Pisa than when he sees other objects. Does this neuron ‘have meaning and interpretation independent of other neurons’? It can have meaning for an external observer, who knows the results of the experiment – but exactly what meaning should we say it has?”

On one hand, this obviously brings into focus a lot of the work in neurophysiology. It could boil down to asking who is to interpret the activity of receptive fields, place and grid cells and so on, and whether such interpretation can be independent of other neurons. In neurophysiology, the interpretations of these cells (e.g., for motion detection, color coding, edge detection, place cells and so on) are obviously being verified independently in various research labs throughout the world and with repeated experiments. So it is not that some researcher is arbitrarily assigning meaning to cells and that such results can’t be replicated and verified. For many such cells, assignment of meaning is being verified by different labs.

On the other hand, this probably is a question about whether that cell is a category cell and how to assign meaning to it. The interpretation of a cell that responds to pictures of the Eiffel Tower and the Leaning Tower of Pisa, but not to other landmarks, could be somewhat similar to a place cell that responds to a certain location or it could be similar to a category cell. Similar cells have been found in the MTL region — a neuron firing to two different basketball players, a neuron firing to Luke Skywalker and Yoda, both characters of Star Wars, and another firing to a spider and a snake (but not to other animals) (Quiroga & Kreiman, 2010a). Quian Quiroga et al. (2010b, p. 298) had the following observation on these findings: “…. one could still argue that since the pictures the neurons fired to are related, they could be considered the same concept, in a high level abstract space: ‘the basketball players,’ ‘the landmarks,’ ‘the Jedi of Star Wars,’ and so on.”

If these are category cells, there is obviously the question of what other objects are included in the category. But it’s clear that the cells have meaning, although the category might include other items.

References:

  1. Quian Quiroga, R. & Kreiman, G. (2010a). Measuring sparseness in the brain: Comment on Bowers (2009). Psychological Review, 117, 1, 291–297.
  2. Quian Quiroga, R. & Kreiman, G. (2010b). Postscript: About Grandmother Cells and Jennifer Aniston Neurons. Psychological Review, 117, 1, 297–299.

5. McClelland – “In the context of these observations, the Cerf experiment considered by Roy may not be as impressive. A neuron can respond to one of four different things without really having a meaning and interpretation equivalent to any one of these items.”

The Cerf experiment is not impressive? What McClelland is really questioning is the existence of highly selective cells in the brains of humans and animals and the meaning and interpretation associated with those cells. This obviously has a broader implication and raises questions about a whole range of neurophysiological studies and their findings. For example, are the “nest cells” of Lin et al. (2007) really category cells sending signals to the mouse brain that there is a nest nearby? Or should one really believe that Freedman and Miller (2008) found category cells in the monkey visual temporal cortex that identify certain categories of stimuli such as faces or objects? Or should one believe that Gothard et al. (2007) found category cells in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor? And how about that one neuron that Gothard et al. (2007) found that responded in particular to threatening monkey faces? And does this question about the meaning and interpretation of highly selective cells also apply to simple and complex receptive fields in the retina ganglion and the primary visual cortex? Note that a Nobel Prize has already been awarded for the discovery of these highly selective cells.

The evidence for the existence of highly selective cells in the brains of humans and animals is substantive and irrefutable although one can theoretically ask “what else does it respond to?” Note that McClelland’s question contradicts his own view that there could exist place cells, which are highly selective cells.

6. McClelland – “While we sometimes (Kumeran & McClelland, 2012 as in McClelland & Rumelhart, 1981) use localist units in our simulation models, it is not the neurons, but their interconnections with other neurons, that gives them meaning and interpretation…. Again we come back to the patterns of interconnections as the seat of knowledge, the basis on which one or more neurons in the brain can have meaning and interpretation.”

“one or more neurons in the brain can have meaning and interpretation” – that sounds like localist representation, but obviously that’s not what is meant. Anyway, there’s no denying that there is knowledge embedded in the connections between the neurons, but that knowledge is integrated by the neurons to create additional knowledge. So the neurons have additional knowledge that does not exist in the connections. And single cell studies are focused on discovering the integrated knowledge that exists only in the neurons themselves. For example, the receptive field cells in the sensory processing systems and the hippocampal place cells show that some cells detect direction of motion, some code for color, some detect orientation of a line and some detect a particular location in an environment. And there are cells that code for certain categories of objects. That kind of knowledge is not easily available in the connections. In general, consolidated knowledge exists within the cells and that’s where the general focus has been of single cell studies.

7. Plaut – “Asim’s main argument is that what makes a neural representation localist is that the activation of a single neuron has meaning and interpretation on a stand-alone basis. This is about how scientists interpret neural activity. It differs from the standard argument on neural representation, which is about how the system actually works, not whether we as scientists can make sense of a single neuron. These are two separate questions.”

Doesn’t “how the system actually works” depend on our making “sense of a single neuron?” The representation theory has always been centered around single neurons, whether they have meaning on a stand-alone basis or not. So how does making “sense of a single neuron” become a separate question now? And how are these two separate questions addressed in the literature?

8. Plaut – “My problem is that his claim is a bit vacuous because he’s never very clear about what a coherent ‘meaning and interpretation’ has to be like…. but never lays out the constraints that this is meaning and interpretation, and this isn’t. Since we haven’t figured it out yet, what constitutes evidence against the claim? There’s no way to prove him wrong.”

In the article, I used the standard definition from cognitive science for localist units, which is a simple one: localist units have meaning and interpretation. There is no need to invent a new definition for localist representation. The standard definition is widely accepted by the cognitive science community, and I draw attention to that in the article with verbatim quotes from Plate, Thorpe and Elman. Here they are again.

  • Plate (2002): “Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”
  • Thorpe (1995, p. 550): “With a local representation, activity in individual units can be interpreted directly … with distributed coding individual units cannot be interpreted without knowing the state of other units in the network.”
  • Elman (1995, p. 210): “These representations are distributed, which typically has the consequence that interpretable information cannot be obtained by examining activity of single hidden units.”

The terms “meaning” and “interpretation” are not bounded in any way other than by the alternative representation scheme, where the “meaning” of a unit is dependent on other units. That’s how it’s constrained in the standard definition, and that’s been there for a long time.
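To make that standard distinction concrete, here is a small, purely illustrative sketch (mine, not from the article; the concepts and activation values are invented) of why a localist unit can be read in isolation while a distributed unit cannot:

```python
import numpy as np

concepts = ["cat", "dog", "car"]

# Localist (one-hot style) code: one unit per concept, so unit 1 being
# active means "dog" no matter what the other units are doing.
localist = np.eye(3)

# Distributed code: each concept is a pattern over all units, so unit 1
# being active is ambiguous until the rest of the pattern is known.
distributed = np.array([
    [0.9, 0.8, 0.1],   # "cat"
    [0.2, 0.8, 0.7],   # "dog"
    [0.8, 0.1, 0.9],   # "car"
])

def concepts_consistent_with(code, unit, threshold=0.5):
    """Which concepts are compatible with this single unit being active?"""
    return [c for c, row in zip(concepts, code) if row[unit] > threshold]

print(concepts_consistent_with(localist, 1))     # ['dog'] -- interpretable alone
print(concepts_consistent_with(distributed, 1))  # ['cat', 'dog'] -- needs the other units
```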

Neither Plaut nor McClelland has questioned the fact that receptive fields in the sensory processing systems have meaning and interpretation. Hubel and Wiesel won the Nobel Prize in physiology and medicine in 1981 for breaking this “secret code” of the brain. Here’s part of the Nobel Prize citation:

“Thus, they have been able to show how the various components of the retinal image are read out and interpreted by the cortical cells in respect to contrast, linear patterns and movement of the picture over the retina. The cells are arranged in columns, and the analysis takes place in a strictly ordered sequence from one nerve cell to another and every nerve cell is responsible for one particular detail in the picture pattern.”

Neither Plaut nor McClelland has questioned the fact that place cells have meaning and interpretation. McClelland, in fact, accepts that place cells indicate locations in an environment, which means he accepts that they have meaning and interpretation.

9. Plaut – “If you look at the hippocampal cells (the Jennifer Aniston neuron), the problem is that it’s been demonstrated that the very same cell can respond to something else that’s pretty different. For example, the same Jennifer Aniston cell responds to Lisa Kudrow, another actress on the TV show Friends with Aniston. Are we to believe that Lisa Kudrow and Jennifer Aniston are the same concept? Is this neuron a Friends TV show cell?”

I want to clarify three things here. First, localist cells are not necessarily grandmother cells. Grandmother cells are a special case of localist cells, and this has been made clear in the article. For example, in the primary visual cortex, there are simple and complex cells that are tuned to visual characteristics such as orientation, color, motion and shape. They are localist cells, but not grandmother cells.

Second, the analysis in the article of the interactive activation (IA) model of McClelland and Rumelhart (1981) shows that a localist unit can respond to more than one concept in the next higher level. For example, a letter unit can respond to many word units. And the simple and complex cells in the primary visual cortex will respond to many different objects.

Third, there are indeed category cells in the brain. Response No. 2 above to McClelland’s comments cites findings in neurophysiology on category cells. So the Jennifer Aniston/Lisa Kudrow cell could very well be a category cell, much like the one that fired to spiders and snakes (but not to other animals) and the one that fired for both the Eiffel Tower and the Tower of Pisa (but not to other landmarks). But category cells have meaning and interpretation too. The Jennifer Aniston/Lisa Kudrow cell could be a Friends TV show cell, as Plaut suggested, but it still has meaning and interpretation. However, note that Koch (2011, p. 18, 19) reports finding another Jennifer Aniston MTL cell that didn’t respond to Lisa Kudrow:

One hippocampal neuron responded only to photos of actress Jennifer Aniston but not to pictures of other blonde women or actresses; moreover, the cell fired in response to seven very different pictures of Jennifer Aniston.

References:

  1. Koch, C. (2011). Being John Malkovich. Scientific American Mind, March/April, 18–19.

10. Plaut – “Only a few experiments show the degree of selectivity and interpretability that he’s talking about…. In some regions of the medial temporal lobe and hippocampus, there seem to be fairly highly selective responses, but the notion that cells respond to one concept that is interpretable doesn’t hold up to the data.”

There are place cells in the hippocampus that identify locations in an environment. Locations are concepts. And McClelland admits place cells represent locations. There is also plenty of evidence for the existence of category cells in the brain (see Response No. 2 above to McClelland’s comments), and categories are, of course, concepts. And simple and complex receptive fields also represent concepts such as direction of motion, line orientation, edges, shapes, color and so on. There is thus an abundance of data in neurophysiology showing that “cells respond to one concept that is interpretable,” and that evidence is growing.

The existence of highly tuned and selective cells that have meaning and interpretation is now beyond doubt, given the volume of evidence from neurophysiology over the last four decades.

The historical context in which Brain Computer Interfaces (BCI) have emerged was addressed in a previous article called “To Interface the Future: Interacting More Intimately with Information” (Kraemer, 2011). This review addresses the methods that have formed current BCI knowledge, the directions in which the field is heading, and the emerging risks and benefits. Why neural stem cells can help establish better BCI integration is also addressed, as is the overall mapping of where various cognitive activities occur and how a future BCI could potentially provide direct input to the brain instead of only receiving and processing information from it.

EEG Origins of Thought Pattern Recognition
Early BCI work to study cognition and memory involved implanting electrodes into rats’ hippocampi and recording their EEG patterns in very specific circumstances while the animals explored a track, both when awake and when sleeping (Foster & Wilson, 2006; Tran, 2012). Some of these patterns are later replayed by the rat in reverse chronological order, indicating retrieval of the memory both when awake and asleep (Foster & Wilson, 2006). Dr. John Chapin has shown that thoughts of movement can be written to a rat, which can then be remotely controlled (Birhard, 1999; Chapin, 2008).

A few human paraplegics have volunteered for somewhat similar electrode implants into their brains, using the enhanced BrainGate2 hardware and software as a primary data input device (UPI, 2012; Hochberg et al., 2012). Clinical trials of an implanted BCI are underway with the BrainGate2 Neural Interface System (BrainGate, 2012; Tran, 2012). Currently, the integration of the electrodes into the brain or peripheral nervous system can be somewhat slow and incomplete (Grill et al., 2001). Nevertheless, research to optimize the electro-stimulation patterns and voltage levels in the electrodes, to combine cell cultures and neurotrophic factors into the electrode, and to enhance “endogenous pattern generators” through rehabilitative exercises is likely to bring integration closer to full functional restoration in prostheses (Grill et al., 2001) and to improve functionality in other BCIs as well.

When integrating neuro-chips into the peripheral nervous system for artificial limbs, or even directly into the cerebral sensorimotor cortex as has been done for some military veterans, neural stem cells would likely help heal the damage at the site of the lost limb and speed up the rate at which the neuro-chip is integrated into the innervating tissue (Grill et al., 2001; Park, Teng, & Snyder, 2002). These neural stem cells are better known for their natural regenerative ability, which would also help re-establish the effectiveness of the damaged original neural connections (Grill et al., 2001).

Neurochemistry and Neurotransmitters to be Mapped via Genomics
Cognition is electrochemical, and thus the electrodes only tell part of the story. The chemicals are more clearly coded for by specific genes. Jaak Panksepp is breeding one line of rats that are particularly prone to joy and social interaction and another that tends toward sadness and more solitary behavior (Tran, 2012). He asserts that emotions emerged from genetic causes (Panksepp, 1992; Tran, 2012) and plans to genome-sequence members of both lines to determine the genomic causes of, or correlations with, these core dispositions (Tran, 2012). Such causes are quite likely to apply to humans, as similar or homologous genes are likely to be present in the human genome. Candidate chemicals like dopamine and serotonin may be confirmed genetically, new neurochemicals may be identified, or both. It is a promising long-term study, and large databases of human genomes accompanied by the medical histories of each individual could result in similar discoveries. A private study of the medical and genomic records of the population of Iceland is underway and has in the last 10 years made unique genetic diagnostic tests for increased risk of type 2 diabetes, breast cancer, prostate cancer, glaucoma, high cholesterol/hypertension and atrial fibrillation, as well as a personal genomic testing service for these genetic factors (deCODE, 2012; Weber, 2002). By breeding two lines of rats based on whether or not they display joyful behavior, the two lines should likewise come to carry distinctly different genetic markers in their respective populations (Tran, 2012).

fMRI and fNIRS Studies to Map the Flow of Thoughts into a Connectome
Though EEG-based BCIs have been effective in translating movement intentionality of the cerebral motor cortex for neuroprostheses or for moving a computer cursor or other directional or navigational device, they have not advanced the understanding of the underlying processes of other types or modes of cognition or experience (NPG, 2010; Wolpaw, 2010). The use of functional Magnetic Resonance Imaging (fMRI), functional Near-Infrared Spectroscopy (fNIRS), and sometimes Positron Emission Tomography (PET) scans for literally deeper insights into the functioning of brain metabolism, and thus neural activity, has increased in order to determine the relationships or connections of regions of the brain, now known collectively as the connectome (Wolpaw, 2010).

Dr. Read Montague explained broadly how his team had several fMRI centers around the world linked to each other across the Internet so that various economic games could be played and the region-specific brain activity of all the participant players could be recorded in real time at each step of the game (Montague, 2012). The publication on this fMRI experiment shows the interaction between baseline suspicion in the amygdala and the ongoing evaluation of the specific situation, which may increase or decrease that suspicion and which occurred in the parahippocampal gyrus (Bhatt et al., 2012). Since fMRI equipment is very large, immobile and expensive, it cannot be used in many situations (Solovey et al., 2012). To essentially substitute for the fMRI, the fNIRS was developed, which can be worn on the head and is far more convenient than the traditional full-body fMRI scanner that requires a sedentary or prone position to work (Solovey et al., 2012).

In a study of people multitasking on the computer with the head-mounted fNIRS device called Brainput, the system worked with two remotely controlled robots, automatically modifying their behavior whenever Brainput detected an information overload in the brain of the human navigating both robots simultaneously over several differently designed terrains (Solovey et al., 2012).
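As a purely hypothetical sketch of the kind of adaptive loop that study describes (the signal name, threshold and mode names below are invented, not the actual Brainput implementation), the core idea amounts to switching robot autonomy on a measured-workload threshold:

```python
# Hypothetical sketch of a workload-adaptive control loop in the spirit of
# the Brainput study; threshold and mode names are invented for illustration.
def robot_mode(workload_estimate: float, overload_threshold: float = 0.7) -> str:
    """Pick a robot behavior mode from an fNIRS-derived workload estimate (0..1)."""
    if workload_estimate > overload_threshold:
        return "increase_autonomy"        # robot self-navigates, asks less of the operator
    return "follow_operator_commands"     # normal tele-operation

for workload in (0.3, 0.9):
    print(workload, "->", robot_mode(workload))
```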

Writing Electromagnetic Information to the Brain?
These two examples of the Human Connectome Project, led by the National Institutes of Health (NIH) in the US and also underway in other countries, show how early the mapping of brain-region interaction is for higher cognitive functions beyond sensory-motor interactions. Nevertheless, one Canadian neuroscientist has taken volunteers for an early example of writing electromagnetic input into the human brain to induce paranormal kinds of subjective experience, and has been doing so since 1987 (Cotton, 1996; Nickell, 2005; Persinger, 2012). Dr. Michael Persinger uses small electrical signals across the temporal lobes in an environment with partial audio-visual isolation to reduce neural distraction (Persinger, 2003). These microtesla magnetic fields, especially when applied to the right hemisphere of the temporal lobes, often induced a sense of an “other” presence, generally described as supernatural in origin by the volunteers (Persinger, 2003). This early example shows how input can be received directly by the brain as well as recorded from it.

Higher Resolution Recording of Neural Data
Electrodes from EEGs and electromagnets from fMRI and fNIRS still record or send data at the macro level of entire regions or areas of the brain. Work on intracellular recording, such as the nanotube transistor, allows for better understanding at the level of individual neurons (Gao et al., 2012). Of course, when introducing micro-scale recording or transmitting equipment into the human brain, safety is a major issue. Some progress has been made: an ingestible microchip called the Raisin can transmit information gathered during its voyage through the digestive system (Kessel, 2009). Dr. Robert Freitas has designed many nanoscale devices such as Respirocytes, Clottocytes and Microbivores to replace or augment red blood cells, platelets and phagocytes respectively; these designs can in principle be fabricated and do appear to meet the miniaturization and propulsion requirements necessary to get into the bloodstream and arrive at the targeted system they are programmed to reach (Freitas, 1998; Freitas, 2000; Freitas, 2005; Freitas, 2006).

The primary obstacle is the tremendous gap between assembling at the microscopic level and assembling at the molecular level. Dr. Richard Feynman described the crux of this struggle to bridge the divide between atoms in his now famous talk given on December 29, 1959, called “There’s Plenty of Room at the Bottom” (Feynman, 1959). To encourage progress towards the ultimate goal of molecular manufacturing by enabling theoretical and experimental work, the Foresight Institute has awarded annual Feynman Prizes every year since 1997 for contributions to this field, called nanotechnology (Foresight, 2012).

The Current State of the Art and Science of Brain Computer Interfaces
Many neuroscientists think that cellular or even atomic-level resolution is probably necessary to understand, and certainly to interface with, the brain at the level of conceptual thought, memory storage and retrieval (Ptolemy, 2009; Koene, 2010), but at this early stage of the Human Connectome Project this evaluation is quite preliminary. The convergence of noninvasive brain scanning technology with implantable devices among volunteer patients, supplemented with neural stem cells and neurotrophic factors to facilitate the melding of biological and artificial intelligence, will allow for many medical benefits, for paraplegics at first and later for others such as intelligence analysts, soldiers and civilians.

Some scientists and experts in Artificial Intelligence (AI), such as Ben Goertzel, Ray Kurzweil, Kevin Warwick, Stephen Hawking, Nick Bostrom, Peter Diamandis, Dean Kamen and Hugo de Garis, express the concern that AI software is on track to exceed human biological intelligence before the middle of the century (Bostrom, 2009; de Garis, 2009; Ptolemy, 2009). The need for fully functioning BCIs that integrate higher-order conceptual thinking, memory recall and imagination into cybernetic environments gains ever more urgency if we consider the existential risk to the long-term survival of the human species, or the eventual natural descendant of that species. This call for an intimate and fully integrated BCI then acts as a shield against the possible emergence of an AI as a life form independent of us, and thus a possible rival and intellectually superior threat to human heritage and dominance on this planet and in its immediate solar-system vicinity.

References

Bhatt MA, Lohrenz TM, Camerer CF, Montague PR. (2012). Distinct contributions of the amygdala and parahippocampal gyrus to suspicion in a repeated bargaining game. Proc. Nat’l Acad. Sci. USA, 109(22):8728–8733. Retrieved October 15, 2012, from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365181/pdf/pnas.201200738.pdf.

Birhard, K. (1999). The science of haptics gets in touch with prosthetics. The Lancet, 354(9172), 52–52. Retrieved from http://search.proquest.com/docview/199023500

Bostrom, N. (2009). When Will Computers Be Smarter Than Us? Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/superintelligence-humanity-oxford-opinions-contributors-artificial-intelligence-09-bostrom.html.

BrainGate. (2012). BrainGate — Clinical Trials. Retrieved October 15, 2012, from http://www.braingate2.org/clinicalTrials.asp.

Chapin, J. (2008). Robo Rat — The Brain/Machine Interface [Video]. Retrieved October 19, 2012, from http://www.youtube.com/watch?v=-EvOlJp5KIY.

Cotton, I. (1997, 96). Dr. Persinger’s God Machine. Free Inquiry, 17, 47–51. Retrieved from http://search.proquest.com/docview/230100330.

de Garis, H. (2009, June 22). The Coming Artilect War. Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/cosmist–terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html.

deCODE genetics. (2012). deCODE genetics – Products. Retrieved October 26, 2012, from http://www.decode.com/products.

Feynman, R. (1959, December 29). There’s Plenty of Room at the Bottom, An Invitation to Enter a New Field of Physics. Caltech Engineering and Science. 23(5)22–36. Retrieved October 17, 2012, from http://calteches.library.caltech.edu/47/2/1960Bottom.pdf.

Foresight Institute. (2012). FI sponsored prizes & awards. Retrieved October 17, 2012, from http://www.foresight.org/FI/fi_spons.html.

Foster, D. J., & Wilson, M. A. (2006). Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084), 680–3. doi: 10.1038/nature04587.

Freitas, R. (1998). Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell, Artificial Cells, Blood Substitutes, and Immobil. Biotech.26(1998):411–430. Retrieved October 15, 2012, from http://www.foresight.org/Nanomedicine/Respirocytes.html.

Freitas, R. (2000, June 30). Clottocytes: Artificial Mechanical Platelets. Foresight Update (41)9–11. Retrieved October 15, 2012, from http://www.imm.org/publications/reports/rep018.

Freitas, R. (2005. April). Microbivores: Artificial Mechanical Phagocytes using Digest and Discharge Protocol. J. Evol. Technol. (14)55–106. Retrieved October 15, 2012, from http://www.jetpress.org/volume14/freitas.pdf.

Freitas, R. (2006. September). Pharmacytes: An Ideal Vehicle for Targeted Drug Delivery. J. Nanosci. Nanotechnol. (6)2769–2775. Retrieved October 15, 2012, from http://www.nanomedicine.com/Papers/JNNPharm06.pdf.

Gao, R., Strehle, S., Tian, B., Cohen-Karni, T. Xie, P., Duan, X., Qing, Q., & Lieber, C.M. (2012). “Outside looking in: Nanotube transistor intracellular sensors” Nano Letters. 12(3329−3333). Retrieved September 7, 2012, from http://cmliris.harvard.edu/assets/NanoLet12-3329_RGao.pdf.

Grill, W., McDonald, J., Peckham, P., Heetderks, W., Kocsis, J., & Weinrich, M. (2001). At the interface: convergence of neural regeneration and neural prostheses for restoration of function. Journal Of Rehabilitation Research & Development, 38(6), 633–639.

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., Donoghue, J. P. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372–5. Retrieved from http://search.proquest.com/docview/1017604144.

Kessel, A. (2009, June 8). Proteus Ingestible Microchip Hits Clinical Trials. Retrieved October 15, 2012, from http://singularityhub.com/2009/06/08/proteus–ingestible-microchip-hits-clinical-trials.

Koene, R.A. (2010). Whole Brain Emulation: Issues of scope and resolution, and the need for new methods of in-vivo recording. Presented at the Third Conference on Artificial General Intelligence (AGI2010). March, 2010. Lugano, Switzerland. Retrieved August 29, 2010, from http://rak.minduploading.org/publications/publications/koene.AGI2010-lecture.pdf?attredirects=0&d=1.

Kraemer, W. (2011, December). To Interface the Future: Interacting More Intimately with Information. Journal of Geoethical Nanotechnology. 6(2). Retrieved December 27, 2011, from http://www.terasemjournals.com/GNJournal/GN0602/kraemer.html.

Montague, R. (2012, June). What we’re learning from 5,000 brains. Retrieved October 15, 2012, from http://video.ted.com/talk/podcast/2012G/None/ReadMontague_2012G-480p.mp4.

Nature Publishing Group (NPG). (2010, December). A critical look at connectomics. Nature Neuroscience. p. 1441. doi:10.1038/nn1210-1441.

Nickell, J. (2005, September). Mystical experiences: Magnetic fields or suggestibility? The Skeptical Inquirer, 29, 14–15. Retrieved from http://search.proquest.com/docview/219355830

Panksepp, J. (1992). A Critical Role for “Affective Neuroscience” in Resolving What Is Basic About Basic Emotions. 99(3)554–560. Retrieved October 14, 2012, from http://www.communicationcache.com/uploads/1/0/8/8/10887248/a_critical_role_for_affective_neuroscience_in_resolving_what_is_basic_about_basic_emotions.pdf.

Park, K. I., Teng, Y. D., & Snyder, E. Y. (2002). The injured brain interacts reciprocally with neural stem cells supported by scaffolds to reconstitute lost tissue. Nature Biotechnology, 20(11), 1111–7. doi: 10.1038/nbt751.

Persinger, M. (2003). The Sensed Presence Within Experimental Settings: Implications for the Male and Female Concept of Self. Journal of Psychology, 137(1), 5–16. Retrieved October 14, 2012, from http://search.proquest.com/docview/213833884.

Persinger, M. (2012). Dr. Michael A. Persinger. Retrieved October 27, 2012, from http://142.51.14.12/Laurentian/Home/Departments/Behavioural+Neuroscience/People/Persinger.htm?Laurentian_Lang=en-CA

Ptolemy, R. (Producer & Director). (2009). Transcendent Man [Film]. Los Angeles: Ptolemaic Productions, Therapy Studios.

Solovey, E., Schermerhorn, P., Scheutz, M., Sassaroli, A., Fantini, S. & Jacob, R. (2012). Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input. Retrieved August 5, 2012, from http://web.mit.edu/erinsol/www/papers/Solovey.CHI.2012.Final.pdf.

Tran, F. (Director). (2012). Dream Life of Rats [Video]. Retrieved September 21, 2012, from http://www.hulu.com/watch/388493.

UPI. (2012, May 31). People with paralysis control robotic arms to reach and grasp using brain computer interface. UPI Space Daily. Retrieved from http://search.proquest.com/docview/1018542919

Weber, J. L. (2002). The iceland map. Nature Genetics, 31(3), 225–6. doi: http://dx.doi.org/10.1038/ng920

Wolpaw, J. (2010, November). Brain-computer interface research comes of age: traditional assumptions meet emerging realities. Journal of Motor Behavior. 42(6)351–353. Retrieved September 10, 2012, from http://www.tandfonline.com/doi/pdf/10.1080/00222895.2010.526471.

Greetings to the Lifeboat Foundation community and blog readers! I’m Reno J. Tibke, creator of Anthrobotic.com and new advisory board member. This is my inaugural post, and I’m honored to be here and grateful for the opportunity to contribute a somewhat… different voice to technology coverage and commentary. Thanks for reading.

This Here Battle Droid’s Gone Haywire
There’s a new semi-indy sci-fi web series up: DR0NE. After one episode, it’s looking pretty clear that the series is most likely going to explore shenanigans that invariably crop up when we start using semi-autonomous drones/robots to do some serious destruction & murdering. Episode 1 is pretty and well made, and stars 237, the android pictured above looking a lot like Abe Sapien’s battle exoskeleton. Active duty drones here in realityland are not yet humanoid, but now that militaries, law enforcement, the USDA, private companies, and even citizens are seriously ramping up drone usage by land, air, and sea, the subject is timely and watching this fiction is totally recommended.

(Update: DR0NE, Episode 2 now available)

It would be nice to hope for some originality, and while DR0NE is visually and means-of-productionally and distributionally novel, it’s looking like yet another angle on a psychology & set of issues that fiction has thoroughly drilled — like, for centuries.

Higher-Def Old Hat?
Okay, so the modern versions go like this: one day an android or otherwise humanlike machine is damaged or reprogrammed or traumatized or touched by Jesus or whatever, and it miraculously “wakes up,” or its neural network remembers a previous life, or what have you. Generally the machine becomes severely bi-polar about its place in the universe; while it often struggles with the guilt of all the murderdeathkilling it did at others’ behest, it simultaneously develops some serious self-preservation instinct and has little compunction about laying waste to its pursuers, i.e., former teammates & commanders who’d done the behesting.

Admittedly, DR0NE’s episode 2 has yet to be released, but it’s not too hard to see where this is going; the trailer shows 237 delivering some vegetablizing kung-fu to its human pursuers, and dude, come on — if a human is punched in the head hard enough to throw them across a room and into a wall, or is uppercut into a spasticating backflip, they’re probably just going to embolize and die where they land. Clearly 237 already has the stereotypical post-revelatory per-the-plot justifiable body count.

Where have we seen this pattern before? Without Googling, from the top of one robot dork’s head, we’ve got: Archetype, Robocop, I, Robot (film), Iron Giant, Short Circuit, Blade Runner, Rossum’s Universal Robots, and going way, way, way back, the golem.

Show Me More Me
Seems we really, really dig on this kind of story.

A secret agent travels to a secret underground desert base, used to develop space weapons, to investigate a series of mysterious murders. The agent finds that a secret transmitter was built into the supercomputer that controls the base, and that a stealth plane flying overhead is controlling the computer and causing the deaths. The agent does battle with two powerful robots in the climax of the story.

Gog is a great story, worthy of a sci-fi action epic today — and it was originally made in 1954. Why can’t they just remake these movies word for word and scene for scene, with as few changes as possible? The terrible job done on so many remade sci-fi classics is really a mystery. How can such great special effects and actors be used to murder a perfect story that had already been told well once? Amazing.

In contrast to Gog we have the fairly recent movie Stealth, released in 2005, which has talent, special effects, and probably the worst story ever conceived. An artificially intelligent fighter plane going off the reservation? The rip-off of HAL from 2001 is so ridiculous.

Fantastic Voyage (1966) was a not-so-good story that succeeded in spite of stretching suspension of disbelief beyond the limit. It was a great movie and might succeed today if, instead of being miniaturized and injected into a human body, the craft were a submarine exploring a giant organism under the ice of a moon in the outer solar system. Just an idea.

And then there is one of the great sci-fi movies of all time if one can just forget the ending. The Abyss of 1989 was truly a great film in that aquanauts and submarines were portrayed in an almost believable way.

From wiki: The cast and crew endured over six months of grueling six-day, 70-hour weeks on an isolated set. At one point, Mary Elizabeth Mastrantonio had a physical and emotional breakdown on the set, and on another occasion, Ed Harris burst into spontaneous sobbing while driving home. Cameron himself admitted, “I knew this was going to be a hard shoot, but even I had no idea just how hard. I don’t ever want to go through this again.”

Again, The Abyss, like Fantastic Voyage, brings to mind those oceans under the icy surface of several moons in the outer solar system.

I recently watched Lockout, with Guy Pearce, and was as disappointed as I thought I would be. Great actors and expensive special effects just cannot make up for a bad story. When will they learn? It is sad to think they could have just remade Gog and had a hit.

The obvious futures represented by these different movies are worthy of consideration, in that even in 1954 the technology to come was being portrayed accurately. In 2005 we got a box-office bomb that, as a waste of money, parallels the military-industrial complex and its too-good-to-be-true wonder weapons that rarely work as advertised. In Fantastic Voyage and The Abyss we see scenarios that point to space missions to the sub-surface oceans of the outer-planet moons.

And in Lockout we find a prison in space where the prisoners are the victims of cryogenic experimentation and are going insane as a result. Being an advocate of cryopreservation for deep space travel, I found the story line… extremely disappointing.

The precursor to manned space exploration of new worlds is typically unmanned exploration, and NASA has made phenomenal progress with remote-controlled rovers on the Martian surface in recent years with MER-A Spirit, MER-B Opportunity and now MSL Curiosity. However, for all our success in reliance on AI in such rovers — similar to, if not more advanced than, the AI technology we see around us in the automotive and aviation industries, such as operational real-time clear-air turbulence prediction in aviation — such AI typically aids control systems rather than mission-level decision making. NASA still controls the rover via detailed commands transmitted directly from Earth: typically 225 kbit/day of commands are transmitted to the rover, at a data rate of 1–2 kbit/s, during a 15-minute transmit window, with the larger volumes of data collected by the rover returned via satellite relay — a one-way communication that incorporates a delay of, on average, 12 or so light minutes. This becomes less and less practical the further away the rover is.

If, for example, we landed a similar rover on Titan in the future, I would expect the current method of step-by-step remote control to render the mission impractical — Saturn being typically at least 16 times more distant, depending on the time of year.
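A rough back-of-the-envelope calculation (approximate distances only, not mission figures) shows why the one-way delay alone makes step-by-step control impractical at Saturn:

```python
# Approximate one-way light-time for command uplinks; distances are
# rough illustrative values, not mission data.
C_KM_S = 299_792          # speed of light, km/s
AU_KM = 149_597_871       # one astronomical unit, km

def one_way_delay_minutes(distance_au: float) -> float:
    return distance_au * AU_KM / C_KM_S / 60.0

print(f"Mars at ~1.5 AU:   {one_way_delay_minutes(1.5):.1f} min one way")   # roughly 12 min
print(f"Saturn at ~8.5 AU: {one_way_delay_minutes(8.5):.0f} min one way")   # over an hour
```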

With the tasks of the science labs well determined in advance, it should be practical to develop AI engines that react to hazards, change the course of analysis depending on the data processed, and so on — the perfect playground for advanced AI programmes. The current Curiosity mission incorporates tasks such as: 1. determine the mineralogical composition of the Martian surface and near-surface geological materials; 2. attempt to detect chemical building blocks of life (bio-signatures); 3. interpret the processes that have formed and modified rocks and soils; 4. assess long-timescale (i.e., 4-billion-year) Martian atmospheric evolution processes; 5. determine the present state, distribution, and cycling of water and carbon dioxide; and 6. characterize the broad spectrum of surface radiation, including galactic radiation, cosmic radiation, solar proton events and secondary neutrons. All of these are very deterministic processes in terms of mapping results to action points, which could be the foundation for shaping them into an AI learning engine, so that such rovers can be entrusted with making their own mission-level decisions on the next phases of exploration based on such AI analyses.
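One way to picture that “results to action points” mapping is as a small, deterministic rule table that an onboard engine could evaluate and, over time, learn to re-weight. The sensor names, thresholds and actions below are invented for illustration and are in no way actual MSL flight software:

```python
from dataclasses import dataclass

@dataclass
class SiteAnalysis:
    radiation_dose: float      # reading from the radiation detector (arbitrary units)
    water_signature: float     # 0..1 confidence of hydrated minerals at the site
    organic_signature: float   # 0..1 confidence of possible bio-signatures
    slope_deg: float           # terrain hazard estimate from navigation cameras

def next_action(a: SiteAnalysis) -> str:
    """Map the latest analysis results to the rover's next mission-level action."""
    # Hazard rules take priority over science rules.
    if a.slope_deg > 25:
        return "retreat_and_replan_route"
    if a.radiation_dose > 100:
        return "log_radiation_event_and_continue"
    # Science rules: stronger evidence triggers more expensive analyses.
    if a.organic_signature > 0.7:
        return "drill_sample_and_run_full_chemistry"
    if a.water_signature > 0.5:
        return "spectrometer_scan_and_flag_for_relay"
    return "drive_to_next_waypoint"

print(next_action(SiteAnalysis(radiation_dose=12.0, water_signature=0.6,
                               organic_signature=0.2, slope_deg=8.0)))
# -> spectrometer_scan_and_flag_for_relay
```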

Whilst the current exploration on Mars works quite well with the remote-control strategy, it would show great foresight for NASA to engineer such unmanned rovers to operate in a more independent fashion, with AI handling mission-level control — learning to adapt to the environment as the rover explores the terrain, with only the return link in use in the main, to relay back the analyzed data, and the low-bandwidth control link reserved for maintenance and corrective action only. NASA has taken great strides in the last decade with unmanned missions. One can expect the next generation to be even more fascinating — and perhaps a trailblazer for advanced AI-based technology.