
The recent scandal involving the surveillance of the Associated Press and Fox News by the United States Justice Department has focused attention on the erosion of privacy and freedom of speech in recent years. But before we simply attribute these events to the ethical failings of Attorney General Eric Holder and his staff, we should also consider the technological revolution powering this incident, and thousands like it. It would appear that bureaucrats are simply seduced by the ease with which information can be gathered and manipulated. At the rate that technologies for the collection and fabrication of information are evolving, what is now available to law enforcement and intelligence agencies in the United States, and around the world, will soon be available to individuals and small groups.

We must come to terms with the current information revolution and take the first steps to form global institutions that will ensure that our society, and our governments, can continue to function through this chaotic and disconcerting period. The exponential increase in the power of computers will bring changes that go far beyond the limits of slow-moving human government. We will need to build new institutions to address this crisis, institutions that are substantial and long-term. It will not be a matter that can be solved by adding a new division to Homeland Security or Google.

We do not have any choice. To make light of the crisis is to allow shadowy organizations to usurp immense power through the collection and distortion of information. Failure to keep up with technological change in an institutional sense will mean that in the future government will be at best a symbolic façade with little real authority or capacity to respond to the threats of information manipulation. In the worst-case scenario, corporations and government agencies could degenerate into warring factions, a new form of feudalism in which invisible forces use their control of information to wage murky wars for global domination.

No degree of moral propriety among public servants, or corporate leaders, can stop the explosion of spying and the propagation of false information that we will witness over the next decade. The most significant factor behind this development will be Moore’s Law, which stipulates that the number of transistors that can be placed economically on a chip will double every 18 months (while the cost of storage halves every 14 months) — and not the moral decline of citizens. This exponential increase in our capability to gather, store, share, alter and fabricate information of every form will offer tremendous opportunities for the development of new technologies. But the rate of change of computational power is so much faster than the rate at which human institutions can adapt — let alone the rate at which the human species evolves — that we will face devastating existential challenges to human civilization.
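To put rough numbers on that gap, here is a quick back-of-the-envelope sketch in Python of the compounding this paragraph describes; the 18-month and 14-month periods come from the text above, and the time horizons are purely illustrative.

```python
# Back-of-the-envelope sketch: compound growth implied by the doubling
# periods quoted above (transistor count doubles every 18 months; storage
# cost halves every 14 months).

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiplicative change after `years`, given a doubling period in months."""
    return 2 ** (years * 12 / doubling_months)

for years in (5, 10):
    compute = growth_factor(years, 18)           # transistor-count multiplier
    storage_cost = 1 / growth_factor(years, 14)  # fraction of today's cost
    print(f"{years:>2} years: ~{compute:.0f}x compute, "
          f"storage at ~{storage_cost:.1%} of today's cost")
```

On these assumptions, a decade yields roughly a hundredfold increase in compute while storage falls to a fraction of a percent of today's cost, which is the institutional mismatch the paragraph is pointing at.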

The Challenges We Face as a Result of the Information Revolution

The dropping cost of computational power means that individuals can gather gigantic amounts of information and integrate it into meaningful intelligence about thousands, or millions, of individuals with minimal investment. The ease of extracting personal information from garbage, from recordings of people walking up and down the street, and from aerial photographs, then combining it with other seemingly worthless material and organizing it in a meaningful manner, will increase dramatically. Facial recognition, speech recognition and instantaneous speech-to-text will become trivial. Inexpensive, tiny surveillance drones will be readily available to collect information on people 24/7 for analysis. My son recently received a helicopter drone with a camera as a present that cost less than $40. In a few years, elaborate tracking of the activities of thousands, or millions, of people will become literally child’s play.

Continue reading “The Impending Crisis of Data: Do We Need a Constitution of Information?”

I have seen the future of Bitcoin, and it is bleak.

The Promise of Bitcoin

If you were to peek into my bedroom at night (please don’t), there’s a good chance you would see my wife sleeping soundly while I stare at the ceiling, running thought experiments about where Bitcoin is going. Like many other people, I have come to the conclusion that distributed currencies like Bitcoin are going to eventually be recognized as the most important technological innovation of the decade, if not the century. It seems clear to me that the rise of distributed currencies presents the biggest (and riskiest) investment opportunity I am likely to see in my lifetime; perhaps in a thousand lifetimes. It is critically important to understand where Bitcoin is going, and I am determined to do so.

Continue reading “Bitcoin’s Dystopian Future”

A response to McClelland and Plaut’s comments in the Phys.org story:

Do brain cells need to be connected to have meaning?

Asim Roy
Department of Information Systems
Arizona State University
Tempe, Arizona, USA
www.lifeboat.com/ex/bios.asim.roy

Article reference:

Roy A. (2012). “A theory of the brain: localist representation is used widely in the brain.” Front. Psychology 3:551. doi: 10.3389/fpsyg.2012.00551

Original article: http://www.frontiersin.org/Journal/FullText.aspx?s=196&name=cognitive_science&ART_DOI=10.3389/fpsyg.2012.00551

Comments by Plaut and McClelland: http://phys.org/news273783154.html

Note that most of the arguments of Plaut and McClelland are theoretical, whereas the localist theory I presented is very much grounded in four decades of evidence from neurophysiology. Note also that McClelland may have inadvertently subscribed to the localist representation idea with the following statement:

“Even here, the principles of distributed representation apply: the same place cell can represent very different places in different environments, for example, and two place cells that represent overlapping places in one environment can represent completely non-overlapping places in other environments.”

The notion that a place cell can “represent” one or more places in different environments is very much a localist idea. It implies that the place cell has meaning and interpretation. I start with responses to McClelland’s comments. Please refer to the Phys.org story to find these quotes from McClelland and Plaut and to see their contexts.

1. McClelland – “what basis do I have for thinking that the representation I have for any concept – even a very familiar one – is associated with a single neuron, or even a set of neurons dedicated only to that concept?”

There are four decades of research in neurophysiology on receptive field cells in the sensory processing systems and on hippocampal place cells showing that single cells can encode a concept – from motion detection, color coding and line orientation detection to identifying a particular location in an environment. Neurophysiologists have also found category cells in the brains of humans and animals. See the next response, which has more details on category cells. The neurophysiological evidence is substantial that single cells encode concepts, starting as early as the retinal ganglion cells. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Thus there is ample basis for thinking that a single neuron can be dedicated to a concept, even at a very low level (e.g. for a dot, a line or an edge).

2. McClelland – “Is each such class represented by a localist representation in the brain?”

Cells that represent categories have been found in human and animal brains. Fried et al. (1997) found some MTL (medial temporal lobe) neurons that respond selectively to gender and facial expression and Kreiman et al. (2000) found MTL neurons that respond to pictures of particular categories of objects, such as animals, faces and houses. Recordings of single-neuron activity in the monkey visual temporal cortex led to the discovery of neurons that respond selectively to certain categories of stimuli such as faces or objects (Logothetis and Sheinberg, 1996; Tanaka, 1996; Freedman and Miller, 2008).

I quote Freedman and Miller (2008): “These studies have revealed that the activity of single neurons, particularly those in the prefrontal and posterior parietal cortices (PPCs), can encode the category membership, or meaning, of visual stimuli that the monkeys had learned to group into arbitrary categories.”

Lin et al. (2007) report finding “nest cells” in mouse hippocampus that fire selectively when the mouse observes a nest or a bed, regardless of the location or environment.

Gothard et al. (2007) found single neurons in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor. They found one neuron that responded in particular to threatening monkey faces. Their general observation is (p. 1674): “These examples illustrate the remarkable selectivity of some neurons in the amygdala for broad categories of stimuli.”

Thus the evidence is substantial that category cells exist in the brain.

References:

  1. Fried, I., McDonald, K. & Wilson, C. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron, 18, 753–765.
  2. Kreiman, G., Koch, C. & Fried, I. (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience, 3, 946–953.
  3. Freedman, D. J. & Miller, E. K. (2008). Neural mechanisms of visual categorization: insights from neurophysiology. Neuroscience & Biobehavioral Reviews, 32, 311–329.
  4. Logothetis, N. K. & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
  5. Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109–139.
  6. Lin, L. N., Chen, G. F., Kuang, H., Wang, D. & Tsien, J. Z. (2007). Neural encoding of the concept of nest in the mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 104, 6066–6071.
  7. Gothard, K. M., Battaglia, F. P., Erickson, C. A., Spitler, K. M. & Amaral, D. G. (2007). Neural responses to facial expression and face identity in the monkey amygdala. Journal of Neurophysiology, 97, 1671–1683.

3. McClelland – “Do I have a localist representation for each phase of every individual that I know?”

Obviously more research is needed to answer these types of questions. But Saddam Hussein and Jennifer Aniston type cells may someday provide clues.

4. McClelland – “Let us discuss one such neuron – the neuron that fires substantially more when an individual sees either the Eiffel Tower or the Leaning Tower of Pisa than when he sees other objects. Does this neuron ‘have meaning and interpretation independent of other neurons’? It can have meaning for an external observer, who knows the results of the experiment – but exactly what meaning should we say it has?”

On one hand, this obviously brings into focus a lot of the work in neurophysiology. It boils down to asking who is to interpret the activity of receptive fields, place and grid cells and so on, and whether such interpretation can be independent of other neurons. In neurophysiology, the interpretations of these cells (e.g. for motion detection, color coding, edge detection, place coding and so on) are being verified independently in various research labs throughout the world, with repeated experiments. So it is not the case that some researcher is arbitrarily assigning meaning to cells, or that such results cannot be replicated and verified. For many such cells, the assignment of meaning has been verified by different labs.

On the other hand, this probably is a question about whether that cell is a category cell and how to assign meaning to it. The interpretation of a cell that responds to pictures of the Eiffel Tower and the Leaning Tower of Pisa, but not to other landmarks, could be somewhat similar to a place cell that responds to a certain location, or it could be similar to a category cell. Similar cells have been found in the MTL region — a neuron firing to two different basketball players, a neuron firing to Luke Skywalker and Yoda, both characters of Star Wars, and another firing to a spider and a snake (but not to other animals) (Quian Quiroga & Kreiman, 2010a). Quian Quiroga and Kreiman (2010b, p. 298) had the following observation on these findings: “…. one could still argue that since the pictures the neurons fired to are related, they could be considered the same concept, in a high level abstract space: ‘the basketball players,’ ‘the landmarks,’ ‘the Jedi of Star Wars,’ and so on.”

If these are category cells, there is obviously the question of what other objects are included in the category. But it is clear that such cells have meaning, even though the category might include other items.

References:

  1. Quian Quiroga, R. & Kreiman, G. (2010a). Measuring sparseness in the brain: Comment on Bowers (2009). Psychological Review, 117, 1, 291–297.
  2. Quian Quiroga, R. & Kreiman, G. (2010b). Postscript: About Grandmother Cells and Jennifer Aniston Neurons. Psychological Review, 117, 1, 297–299.

5. McClelland – “In the context of these observations, the Cerf experiment considered by Roy may not be as impressive. A neuron can respond to one of four different things without really having a meaning and interpretation equivalent to any one of these items.”

The Cerf experiment is not impressive? What McClelland is really questioning is the existence of highly selective cells in the brains of humans and animals and the meaning and interpretation associated with those cells. This obviously has broader implications and raises questions about a whole range of neurophysiological studies and their findings. For example, are the “nest cells” of Lin et al. (2007) really category cells sending signals to the mouse brain that there is a nest nearby? Or should one really believe that Freedman and Miller (2008) found category cells in the monkey visual temporal cortex that identify certain categories of stimuli such as faces or objects? Or should one believe that Gothard et al. (2007) found category cells in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor? And how about that one neuron that Gothard et al. (2007) found that responded in particular to threatening monkey faces? And does this question about the meaning and interpretation of highly selective cells also apply to simple and complex receptive fields in the retinal ganglion cells and the primary visual cortex? Note that a Nobel Prize has already been awarded for the discovery of these highly selective cells.

The evidence for the existence of highly selective cells in the brains of humans and animals is substantive and irrefutable, although one can always theoretically ask “what else does it respond to?” Note that McClelland’s question contradicts his own view that there could exist place cells, which are highly selective cells.

6. McClelland – “While we sometimes (Kumeran & McClelland, 2012 as in McClelland & Rumelhart, 1981) use localist units in our simulation models, it is not the neurons, but their interconnections with other neurons, that gives them meaning and interpretation….Again we come back to the patterns of interconnections as the seat of knowledge, the basis on which one or more neurons in the brain can have meaning and interpretation.”

“one or more neurons in the brain can have meaning and interpretation” – that sounds like localist representation, but obviously that is not what is meant. Anyway, there is no denying that there is knowledge embedded in the connections between the neurons, but that knowledge is integrated by the neurons to create additional knowledge. So the neurons have additional knowledge that does not exist in the connections, and single-cell studies are focused on discovering the integrated knowledge that exists only in the neurons themselves. For example, the receptive field cells in the sensory processing systems and the hippocampal place cells show that some cells detect direction of motion, some code for color, some detect the orientation of a line and some detect a particular location in an environment. And there are cells that code for certain categories of objects. That kind of knowledge is not readily available in the connections. In general, consolidated knowledge exists within the cells, and that is where the focus of single-cell studies has been.

7. Plaut – “Asim’s main argument is that what makes a neural representation localist is that the activation of a single neuron has meaning and interpretation on a stand-alone basis. This is about how scientists interpret neural activity. It differs from the standard argument on neural representation, which is about how the system actually works, not whether we as scientists can make sense of a single neuron. These are two separate questions.”

Doesn’t “how the system actually works” depend on our making “sense of a single neuron?” The representation theory has always been centered around single neurons, whether they have meaning on a stand-alone basis or not. So how does making “sense of a single neuron” become a separate question now? And how are these two separate questions addressed in the literature?

8. Plaut – “My problem is that his claim is a bit vacuous because he’s never very clear about what a coherent ‘meaning and interpretation’ has to be like…. but never lays out the constraints that this is meaning and interpretation, and this isn’t. Since we haven’t figured it out yet, what constitutes evidence against the claim? There’s no way to prove him wrong.”

In the article, I used the standard definition from cognitive science for localist units, which is a simple one: localist units have meaning and interpretation. There is no need to invent a new definition of localist representation. The standard definition is widely accepted by the cognitive science community, and I draw attention to that in the article with verbatim quotes from Plate, Thorpe and Elman. Here they are again.

  • Plate (2002): “Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”
  • Thorpe (1995, p. 550): “With a local representation, activity in individual units can be interpreted directly … with distributed coding individual units cannot be interpreted without knowing the state of other units in the network.”
  • Elman (1995, p. 210): “These representations are distributed, which typically has the consequence that interpretable information cannot be obtained by examining activity of single hidden units.”

The terms “meaning” and “interpretation” are not bounded in any way other than by contrast with the alternative representation scheme, where the “meaning” of a unit depends on other units. That is how they are constrained in the standard definition, and that constraint has been there for a long time.
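To make the contrast in these definitions concrete, here is a small illustrative sketch in Python. It is my toy example, not anything from the article or from Plate, Thorpe or Elman: in a localist code a single unit can be read off in isolation, while in a distributed code a unit’s activity only means something relative to the whole pattern.

```python
import numpy as np

# Toy illustration of the Plate/Thorpe/Elman definitions quoted above
# (made-up vectors, purely for exposition).

concepts = ["dot", "line", "edge"]

# Localist code: one unit per concept. Reading off the single most
# active unit is enough to interpret the state in isolation.
localist = np.array([0.0, 1.0, 0.0])
print("localist:", concepts[int(np.argmax(localist))])  # -> "line"

# Distributed code: each concept is a pattern over all units, so no
# single unit can be read in isolation; decoding needs the whole vector.
patterns = {"dot":  np.array([0.9, 0.1, 0.4]),
            "line": np.array([0.2, 0.8, 0.7]),
            "edge": np.array([0.5, 0.6, 0.1])}
state = np.array([0.25, 0.75, 0.65])
best = max(patterns, key=lambda c: float(patterns[c] @ state))
print("distributed:", best)  # nearest stored pattern -> "line"
```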

Neither Plaut nor McClelland has questioned the fact that receptive fields in the sensory processing systems have meaning and interpretation. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Here is part of the Nobel Prize citation:

“Thus, they have been able to show how the various components of the retinal image are read out and interpreted by the cortical cells in respect to contrast, linear patterns and movement of the picture over the retina. The cells are arranged in columns, and the analysis takes place in a strictly ordered sequence from one nerve cell to another and every nerve cell is responsible for one particular detail in the picture pattern.”

Neither Plaut nor McClelland has questioned the fact that place cells have meaning and interpretation. McClelland, in fact, accepts that place cells indicate locations in an environment, which means he accepts that they have meaning and interpretation.

9. Plaut – “If you look at the hippocampal cells (the Jennifer Aniston neuron), the problem is that it’s been demonstrated that the very same cell can respond to something else that’s pretty different. For example, the same Jennifer Aniston cell responds to Lisa Kudrow, another actress on the TV show Friends with Aniston. Are we to believe that Lisa Kudrow and Jennifer Aniston are the same concept? Is this neuron a Friends TV show cell?”

I want to clarify three things here. First, localist cells are not necessarily grandmother cells. Grandmother cells are a special case of localist cells, and this is made clear in the article. For example, in the primary visual cortex there are simple and complex cells that are tuned to visual characteristics such as orientation, color, motion and shape. They are localist cells, but not grandmother cells.

Second, the analysis in the article of the interactive activation (IA) model of McClelland and Rumelhart (1981) shows that a localist unit can respond to more than one concept at the next higher level. For example, a letter unit can respond to many word units, and the simple and complex cells in the primary visual cortex will respond to many different objects. (A toy sketch of this point appears at the end of this response.)

Third, there are indeed category cells in the brain. Response No. 2 above to McClelland’s comments cites findings in neurophysiology on category cells. So the Jennifer Aniston/Lisa Kudrow cell could very well be a category cell, much like the one that fired to spiders and snakes (but not to other animals) and the one that fired for both the Eiffel Tower and the Tower of Pisa (but not to other landmarks). But category cells have meaning and interpretation too. The Jennifer Aniston/Lisa Kudrow cell could be a Friends TV show cell, as Plaut suggested, but it still has meaning and interpretation. However, note that Koch (2011, p. 18, 19) reports finding another Jennifer Aniston MTL cell that didn’t respond to Lisa Kudrow:

“One hippocampal neuron responded only to photos of actress Jennifer Aniston but not to pictures of other blonde women or actresses; moreover, the cell fired in response to seven very different pictures of Jennifer Aniston.”

References:

  1. Koch, C. (2011). Being John Malkovich. Scientific American Mind, March/April, 18–19.
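As promised in the second point above, here is a minimal sketch of how a single localist letter unit can feed many word units. It is a toy illustration only, not McClelland and Rumelhart’s actual IA simulation; the words and the weighting scheme are made up.

```python
# Toy sketch (not McClelland & Rumelhart's actual IA simulation): a single
# localist letter unit feeding several word units.

words = ["ABLE", "TRAP", "TAKE"]
letter_active = {"A": 1.0}  # the localist "A" unit is firing

# Bottom-up input to each word unit: summed activity from the letter
# units for the distinct letters the word contains (position ignored).
word_input = {
    word: sum(letter_active.get(ch, 0.0) for ch in set(word))
    for word in words
}
print(word_input)  # one interpretable "A" unit excites ABLE, TRAP and TAKE alike
```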

10. Plaut – “Only a few experiments show the degree of selectivity and interpretability that he’s talking about…. In some regions of the medial temporal lobe and hippocampus, there seem to be fairly highly selective responses, but the notion that cells respond to one concept that is interpretable doesn’t hold up to the data.”

There are place cells in the hippocampus that identify locations in an environment. Locations are concepts. And McClelland admits place cells represent locations. There is also plenty of evidence for the existence of category cells in the brain (see Response No. 2 above to McClelland’s comments), and categories are, of course, concepts. And simple and complex receptive fields also represent concepts such as direction of motion, line orientation, edges, shapes, color and so on. There is thus an abundance of data in neurophysiology showing that “cells respond to one concept that is interpretable,” and that evidence is growing.

The existence of highly tuned and selective cells that have meaning and interpretation is now beyond doubt, given the volume of evidence from neurophysiology over the last four decades.


The 100,000 Stars Google Chrome Galactic Visualization Experiment Thingy

So, Google has these things called Chrome Experiments, and they like, you know, do that. 100,000 Stars, their latest, simulates our immediate galactic zip code and provides detailed information on many of the massive nuclear fireballs nearby.

Zoom in & out of the interactive galaxy: state, city, neighborhood, so to speak.

It’s humbling, beautiful, and awesome. Now, is 100,000 Stars perfectly accurate and practical for anything other than having something pretty to look at and explore and educate and remind us of the enormity of our quaint little galaxy among the likely 170 billion others? Well, no — not really. But if you really feel the need to evaluate it that way, you are an unimaginative jerk and your life is without joy and awe and hope and wonder and you probably have irritable bowel syndrome. Deservedly.

The New Innovation Paradigm Kinda Revisited
Just about exactly one year ago, technosnark cudgel Anthrobotic.com was rapping about the changing innovation paradigm in large-scale technological development. There’s chastisement for Neil deGrasse Tyson and others who, paraphrasically (totally a word), have declared that private companies won’t take big risks, won’t do bold stuff, won’t push the boundaries of scientific exploration because of bottom lines and restrictive boards and such. But new business entities like Google, SpaceX, Virgin Galactic, & Planetary Resources are kind of steadily proving this wrong.

Google in particular, a company whose U.S. ad revenue now eclipses all other ad-based businesses combined, does a load of search-unrelated, interesting little and not-so-little research. Their mad scientists have churned out innovative, if sometimes impractical, projects like Wave, Lively, and SketchUp. There’s the mysterious Project X, rumored to be filled with robots and space elevators and probably endless lollipops as well. There’s Project Glass, the self-driving cars, and they have also just launched Ingress, a global augmented reality game.

In contemporary America, this is what cutting-edge, massively well-funded pure science is beginning to look like, and it’s commendable. So, in lieu of a national flag, would we be okay with a SpaceX visitor center on the moon? Come on, really — a flag is just a logo anyway!

Let’s hope Google keeps not being evil.

[VIA PC MAG]
[100,000 STARS ANNOUNCEMENT — CHROME BLOG]

(this post originally published at www.anthrobotic.com)

FutureICT have submitted their proposal to the FET Flagship Programme, an initiative that aims to facilitate breakthroughs in information technology. The vision of FutureICT is to

integrate the fields of information and communication technologies (ICT), social sciences and complexity science, to develop a new kind of participatory science and technology that will help us to understand, explore and manage the complex, global, socially interactive systems that make up our world today, while at the same time paving the way for a new paradigm of ICT systems that will leverage socio-inspired self-organisation, self-regulation, and collective awareness.

The project could provide us with profound insights into societal behaviour and improve policymaking. The project echoes the Large Hadron Collider at CERN in its scope and vision, only here we are trying to understand the state of the world. The FutureICT project combines the creation of a ‘Planetary Nervous System’ (PNS), where Big Data will be collated and organised, a ‘Living Earth Simulator’ (LES), and the ‘Global Participatory Platform’ (GPP). The LES will simulate the data and provide models for analysis, while the GPP will provide the data, models and methods to everyone. People will be able to collaborate and research in a very different way. The availability of Big Data to participants will strengthen our ability to understand complex socio-economic systems, and it could help build a new dialogue between nations on how we solve complex global societal challenges.

FutureICT aim to develop a ‘Global Systems Science’, which will

lay the theoretical foundations for these platforms, while the focus on socio-inspired ICT will use the insights gained to identify suitable designs for socially interactive systems and the use of mechanisms that have proven effective in society as operational principles for ICT systems.

It is exciting to think about the possible breakthroughs. What new insights and scientific discoveries could be made? What new technologies could emerge? The Innovation Accelerator (IA) is one feature of the venture that could be disruptive to both technology and politics. Next year will open up a new world of possibilities. This could be a project for the Lifeboat Foundation to get involved in.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of “computes” itself based on the principles of quantum mechanics. Right now, humanity’s fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we’ve effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven’t already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can’t answer this — we can’t know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]

Second — Maybe we see it working: The universe seems to be metaphorically “pixelated.” This means that even though it’s a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we’ll eventually see a bunch of discrete chunks of matter, or quantums, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the “pixels” would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Thirdly — Turns out there’s a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there’s a much simpler and more straightforward issue to consider: there’s this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we’ve got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn’t give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom’s Simulation Hypothesis]
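For flavor, here is a toy Python version of the bookkeeping behind the argument. The ratio follows the fraction-of-simulated-observers formula in Bostrom’s paper, as I understand it; the input numbers are pure illustration.

```python
# Toy bookkeeping behind the simulation argument: if a fraction f_p of
# civilizations reach a simulating stage and each runs n ancestor
# simulations on average, what fraction of all observers are simulated?

def simulated_fraction(f_p: float, n_sims: float) -> float:
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Illustrative numbers only; the point is how fast the fraction climbs.
for f_p, n in [(0.001, 1), (0.001, 1000), (0.1, 1000)]:
    print(f"f_p={f_p}, sims per civilization={n}: "
          f"{simulated_fraction(f_p, n):.1%} of observers simulated")
```

Even with quite pessimistic assumptions, a modest number of simulations per civilization pushes the simulated fraction toward one, which is why option 3 is hard to wave off.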

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why older polytheistic religions involved the gods just kinda setting guidelines for behavior, without necessarily demanding the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, just as it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could make some general rules but just couldn’t manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you’re right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don’t behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless
inventions by game characters, or evolutionary cul-de-sacs including but not limited to:
The duck-billed platypus, hippies, meat in a can, reality TV, the TSA,
mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots,
people who don’t like homos, singers under 21, hangovers, coffee made
from cat shit, passionfruit iced tea, and the Pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. For those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation and to guess at whether or not we too might someday be capable of simulating ourselves, well — these are some goddamn profound ideas.

So, no need for hand-wringing — let’s get on with our character arcs and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, it is rather insulting. So if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn’t sinking in, the brilliant
Dinosaur Comics might do a better job explaining:

(This post originally published I think like two days ago at technosnark hub www.anthrobotic.com.)

I cannot let the day pass without commenting on the incredible ruling of multiple manslaughter against six top Italian geophysicists for not predicting an earthquake that left 309 people dead in 2009. When those who are entrusted with safeguarding humanity (here on a local level) are persecuted when they fail to do so, despite acting to the best of their abilities in an inexact science, we have surely returned to the dark ages, where those who practice science are demonized by those who misunderstand it.

http://www.aljazeera.com/news/europe/2012/10/20121022151851442575.html

I hope I do not misrepresent other members of staff here at the Lifeboat Foundation when, speaking on behalf of the Foundation, I wish these scientists a successful appeal against a court ruling that has shocked the scientific community. I stand behind the 5,000 members of the scientific community who sent an open letter to Italy’s President Giorgio Napolitano denouncing the trial. This court ruling was ape-mentality at its worst.

On January 28 2011, three days into the fierce protests that would eventually oust the Egyptian president Hosni Mubarak, a Twitter user called Farrah posted a link to a picture that supposedly showed an armed man as he ran on a “rooftop during clashes between police and protesters in Suez”. I say supposedly, because both the tweet and the picture it linked to no longer exist. Instead they have been replaced with error messages that claim the message – and its contents – “doesn’t exist”.

Few things are more explicitly ephemeral than a Tweet. Yet it’s precisely this kind of ephemeral communication – a comment, a status update, sharing or disseminating a piece of media – that lies at the heart of much of modern history as it unfolds. It’s also a vital contemporary historical record that, unless we’re careful, we risk losing almost before we’ve been able to gauge its importance.

Consider a study published this September by Hany SalahEldeen and Michael L Nelson, two computer scientists at Old Dominion University. Snappily titled “Losing My Revolution: How Many Resources Shared on Social Media Have Been Lost?”, the paper took six seminal news events from the last few years – the H1N1 virus outbreak, Michael Jackson’s death, the Iranian elections and protests, Barack Obama’s Nobel Peace Prize, the Egyptian revolution, and the Syrian uprising – and established a representative sample of tweets from Twitter’s entire corpus discussing each event specifically.

It then analysed the resources being linked to by these tweets, and whether these resources were still accessible, had been preserved in a digital archive, or had ceased to exist. The findings were striking: one year after an event, on average, about 11% of the online content referenced by social media had been lost and just 20% archived. What’s equally striking, moreover, is the steady continuation of this trend over time. After two and a half years, 27% had been lost and 41% archived.
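Those two data points are roughly consistent with a constant rate of loss. The back-of-the-envelope extrapolation below is my own naive straight-line fit, not the authors’ analysis.

```python
# The study's two data points (11% lost after 1 year, 27% lost after 2.5
# years) suggest a roughly constant loss rate. The straight-line fit and
# extrapolation below are an illustrative assumption, not the paper's.

years = [1.0, 2.5]
lost_pct = [11.0, 27.0]

slope = (lost_pct[1] - lost_pct[0]) / (years[1] - years[0])  # ~10.7 points/year
intercept = lost_pct[0] - slope * years[0]

for t in (4, 7):
    estimate = min(100.0, intercept + slope * t)
    print(f"after {t} years: ~{estimate:.0f}% of linked content lost "
          f"(naive linear extrapolation)")
```

On that crude fit, something like half of the content a tweet links to would be gone within five years, which is the scale of the problem the article is worried about.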

Continue reading “The decaying web and our disappearing history”

I have been meaning to read a book coming out soon called Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. It’s written by Harvard biologist George Church and science writer Ed Regis. Church is doing stunning work on a number of fronts, from creating synthetic microbes to sequencing human genomes, so I am definitely interested in what he has to say. I don’t know how many other people will be, so I have no idea how well the book will do. But in a tour de force of biochemical publishing, he has created 70 billion copies. Instead of paper and ink, or PDFs and pixels, he’s used DNA.

Much as PDFs are built on a digital system of 1s and 0s, DNA is a string of nucleotides, which can be one of four different types. Church and his colleagues turned his whole book, including illustrations, into a 5.27-megabit file, which they then translated into a sequence of DNA. They stored the DNA on a chip and then sequenced it to read the text. The book is broken up into little chunks of DNA, each of which carries a portion of the book itself as well as an address to indicate where it should go. They recovered the book with only 10 wrong bits out of 5.27 million. Using standard DNA-copying methods, they duplicated the DNA into 70 billion copies.
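To give a feel for the chunk-plus-address idea, here is a toy Python sketch. It is a simplified illustration only, not Church’s actual encoding: his published scheme stored one bit per base, while this one packs two bits per base for brevity and uses a made-up two-byte address header.

```python
# Toy sketch of the chunk-plus-address idea described above. Simplified
# illustration, not Church's actual scheme (which used one bit per base).

BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T (two bits per base)

def to_dna(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def encode_book(text: bytes, chunk_size: int = 8) -> list[str]:
    """Split data into chunks, each prefixed with a 2-byte address."""
    chunks = []
    for addr in range(0, len(text), chunk_size):
        header = addr.to_bytes(2, "big")  # where this chunk belongs
        chunks.append(to_dna(header + text[addr:addr + chunk_size]))
    return chunks

for fragment in encode_book(b"Regenesis: How Synthetic ..."):
    print(fragment)
```

Because each fragment carries its own address, the fragments can be sequenced in any order and still be reassembled, which is what makes the scheme workable for a medium you cannot read front to back.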

Scientists have stored little pieces of information in DNA before, but Church’s book is about 1,000 times bigger. I doubt anyone would buy a DNA edition of Regenesis on Amazon, since they’d need some expensive equipment and a lot of time to translate it into a format our brains can comprehend. But the costs are crashing, and DNA is a far more stable medium than that hard drive on your desk that you’re waiting to die. In fact, Regenesis could endure for centuries in its genetic form. Perhaps librarians of the future will need to get a degree in biology…

Link to Church’s paper

Source

One question that has fascinated me in the last two years is: can we ever use data to control systems? Could we go as far as not only describing and quantifying and mathematically formulating and perhaps predicting the behavior of a system, but also using this knowledge to control a complex system, to control a social system, to control an economic system?

We have always lived in a connected world, except we were not so much aware of it. We were aware of it down the line, that we are not independent of our environment, that we are not independent of the people around us. We are not independent of the many economic and other forces. But for decades we never perceived connectedness as being quantifiable, as being something that we can describe, that we can measure, that we have ways of quantifying the process. That has changed drastically in the last decade, at many, many different levels.
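As a trivial illustration of what quantifying connectedness can look like, here is a toy Python sketch; the graph and the choice of metrics are my own illustrative assumptions, not anything from the interview.

```python
# Minimal sketch of "quantifying connectedness": a tiny made-up network
# and two of the simplest measures one can compute over it.

from collections import defaultdict

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]

neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

# Degree: how connected each node is individually.
for node in sorted(neighbors):
    print(node, "degree =", len(neighbors[node]))

# Density: how connected the system is as a whole (edges present / possible).
n = len(neighbors)
print("density =", 2 * len(edges) / (n * (n - 1)))
```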

Continue reading “Thinking in Network Terms” and watch the hour-long video interview