
I recently announced a new protocol layer being built on top of bitcoin. You can read the details here: https://bitcointalk.org/index.php?topic=265488.0

I’m pleased that I have already raised over $200k worth of bitcoins from investors, including myself. Investments will continue to be accepted through the end of this month (August 2013).

By Avi Roy, University of Buckingham

In his essay “Fifty Years Hence”, Winston Churchill speculated, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”

At an event in London today, the first hamburger made entirely from meat grown through cell culture will be cooked and consumed before a live audience. In June at the TED Global conference in Edinburgh, Andras Forgacs took a step even beyond Churchill’s hopes. He unveiled the world’s first leather made from cells grown in the lab.

These are historic events, ones that will change the discussion about lab-grown meat from blue-skies science to a potential consumer product that may soon be found on supermarket shelves and in retail stores. And while some may perceive this development as a drastic shake-up in the world of agriculture, it is really part of the trajectory that agricultural technology is already following.

Creating abundance

While modern humans have been around for 160,000 years or so, agriculture only developed about 10,000 years ago, probably helping the human population to grow. A stable food source had tremendous impact on the development of our species and culture, as the time and effort once put towards foraging could now be put towards intellectual achievement and the development of our civilisation.

In recent history, though, agricultural technology has developed with the goal of securing the food supply. We have been using greenhouses to control the environment where crops grow. We use pesticides, fertilisers and genetic techniques to control and optimise output. We have created efficiencies in plant cultivation to produce more plants that yield more food than ever before.

These patterns in horticulture can be seen in animal husbandry too. From hunting to raising animals for slaughter, and from factory farming to the use of antibiotics, hormones and genetic techniques, meat production today is so efficient that we grow more and bigger animals faster than ever before. By 2012, the global herd had reached 60 billion land animals to feed 7 billion people.

The trouble with meat

Now, civilisation has come to a point where we are recognising that there are serious problems with the way we produce food. Mass-produced food contributes to our disease burden, challenges food safety, ravages the environment, and plays a major role in deforestation and loss of biodiversity. For meat production in particular, manipulating animals has led to an epidemic of viruses, resistant bacteria and food-borne illness, quite apart from animal-welfare issues.

But we may be seeing change brought by consumer demand. The public has started caring about the ethical, environmental and health impacts of food production. And beyond consumer demand for thoughtful products, ecological limits are forcing us to evaluate the way food is produced.

A damning report by the United Nations shows that livestock raised for meat today uses more than 80% of Earth’s agricultural land and 27% of Earth’s potable water supply. Livestock production generates 18% of global greenhouse-gas emissions, and the massive quantities of manure produced heavily pollute water. Deforestation and degradation of wildlife habitats occur in large part to create feed crops, and factory-farming conditions are breeding grounds for dangerous diseases.

Making everyone on the planet take up vegetarianism is not an option. While there is much merit to reducing (and rejecting) meat consumption, sustainable dietary changes in the Western world will be more than compensated for by the meat intake of the growing middle class in developing countries like China and India.

The future is cultured

The logical next step in the evolution of humanity’s food-production capacity is to make meat from cells rather than animals. After all, the meat we consume is simply a collection of tissues. So why grow the whole animal when we can grow only the parts we eat?

By doing this we avoid slaughter, animal-welfare issues and disease development. This method, if commercialised, is also more sustainable: animals do not have to be raised from birth, and no resources are shunted towards non-meat tissues. Compared to conventionally grown meat, cultured meat would require up to 99% less land, 96% less water and 45% less energy, and produce up to 96% less greenhouse-gas emissions.

Even without modern scientific tools, we have been using bacterial, yeast and fungal cells for food purposes for hundreds of years. With recent advances in tissue engineering, culturing mammalian cells for meat production seems a sensible next step.

Efficiency has been the primary driver of agricultural development in the past. Now it should be health, environment and ethics. We need cultured meat to go beyond proof of concept. We need it on supermarket shelves soon.

Avi Roy does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.

The history of humans, short when considered in the light of all time, if such a consideration can be made, is a web of intricacies and intentions, of acts and non-acts, of silence and sound (internal and external), of growth and decay. Although we do have records of this history, in the land, air and water, in objects, in ourselves, in text, and despite the proliferation of data and information, particularly since the printing press, we still do not know everything; what we do not know outmeasures what we do, and always will. So while we have stories and myths, what we have mostly is uncertainty.

Something about the human animal, who in the main is attached to petty things (reputation and praise, punishment and fear, egoic notions and material satisfactions), rejects uncertainty and rebels against a blank state. The process of logically manifesting an order in response to uncertainty is tabulated within the brain. But is the brain the seat of Man, or is it that, because over several millennia humanity has organized phenomenal existence primarily through brain activity, we now believe it to be the natural leadership in our lives? Is it possible that although it has led, it is not the (natural) leader? Does Man, each a vortex of inter-dimensional energy, operate optimally through one lead, anyway?

Does intelligence permeate the entirety of Man’s being and the entirety of the known cosmic habitat? Is that intelligence being? Is that being existence? And is existence what is? And if this is true, and all that is, is, why this idea that the brain is the seat of intelligence? The brain is one known interpreter, receptacle for, perceiver (and maybe also creator) of intelligence within the human biological cosmology. The brain is also a foe when not well aired, crafting for Man a separateness from phenomena, acting as the chief architect of his differentiation. The brain is more the functionary of the literal and the common, the go-to tool for navigating physical space and for generating concepts, including calculations. But is the brain the lord of Man, even with the pineal gland?

Man in wholeness and complexity should not be particularized piecemeal. One facet of human biology, like the brain, can only be understood through its relationship with the entire biological structure. Hierarchies of organs and functions within the human body can only lead to ultimate ignorance about not only the organ or function in question and the full ecosystem within which it resides, but also that ecosystem’s relationship with the outside, both the known and unknown, seen and unseen.

Today we know that machines can outpace humans in carrying out certain functions which, as far as we know, are manifested in humans through the brain. There is also an intrigue with human biology’s seeming inability to regenerate, and with its decay into a state of supposed non-being. And yet we also know that energy cannot be destroyed and that the human body is an energetic system living in an energetic world. So for the human being, does death technically exist? Do we in fact ever have “death” in the universe?

Although a machine’s adroitness at managing many functions of the human brain more efficiently than a human can manage them is a stellar feat, it is localized and decontextualized from that function as it occurs in a human being, and is therefore not an equivalent comparison to human use of thought. A human operating in comprehensive intelligence may or may not want or need to perform brain-based functions as a machine performs them. The machine doing what it does is not therefore “smarter” or “better” than a human; it is simply able to enact specific functionality in certain instances according to a set standard.

To evoke smarter and better we must introduce measurement and also a set of criteria by which we evaluate. Why do we evaluate? By what measure? By what criteria?

How is it also that we define death? Is our common notion of death not defined from a time when a majority of humanity thought the body inert matter animated by spirit? Does this notion perhaps carry into today’s desire to evade death? Is it a mistaken concept of separateness (maybe spiritual separateness), a concept characterized through the brain’s activity, that is motioning humans toward an exercise spawned for the denial of death? Is this the brain operating in egoic selfishness calling for its own immortality? The immortality of the personality? Is an enduring personality immortality? Is memory immortality? Is accumulated and preserved experience immortality? And is the brain, the generator of Man’s fantasy of independence vis-à-vis the outside world, enhanced in certain functions by rapidly processing machines, the artifact through which humans can become immortal?

A full view of the future has to consider a huge range of time scales. Freeman Dyson pointed this out (as D. Hutchinson alerted me). I borrowed his idea in the following passage from my book, The Human Race to the Future, published by the Lifeboat Foundation.


Our journey into the future begins by asking what the next hundred years will be like. Call that century-long time frame the “first generation” of future history. After a baker’s dozen or so chapters we then move to the second generation — the next order of magnitude after a hundred — the next thousand years. The seventh generation then has a ten million year horizon, the very distant future. Beyond the seventh generation are time horizons above even ten million years. This “powers of ten” scaling of future history was used by well-known physicist Freeman Dyson in chapter 4 of his 1997 book, Imagined Worlds.

Technical update on the ebook edition: many Kindle devices and reader apps have a menu item for jumping to the table of contents, and another for jumping to the “beginning” of a book, however that is defined. I found out how to build an ebook that defines these locations so that the menu items work, using basic HTML commands. To define the location of the table of contents, insert the following HTML command into the book’s source, right where the table of contents begins:

<a name="toc"></a>

And now when the user clicks the device or reader software’s “table of contents” menu item, they go straight to the table of contents!

To define the “beginning” of the book where you, as the author, want users to go when they click the “beginning” menu item (title page? Chapter 1? You decide), just put the following html command at that location in the ebook’s html source:

<a name="start"></a>

…and now that works too!

Of course you can use MS Word, Dreamweaver, etc., instead of editing the raw HTML, but ultimately those editors work by inserting the same HTML commands.
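Putting both anchors in context, a stripped-down ebook source might look like the sketch below. The headings and chapter structure here are invented for illustration; only the two anchor tags are essential:

```html
<html>
<body>
  <!-- the "beginning" menu item jumps here -->
  <a name="start"></a>
  <h1>My Book</h1>

  <!-- the "table of contents" menu item jumps here -->
  <a name="toc"></a>
  <h2>Table of Contents</h2>

  <h2>Chapter 1</h2>
  <p>…</p>
</body>
</html>
```

Here the “beginning” anchor sits on the title page, but you could just as well place it before Chapter 1.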

The imposition of compositional structure within the craft of writing was recently pointed out to me. As students we are told repeatedly to open, elaborate and conclude a piece of writing. This carries on into so-called professional life. Indeed, the questions that arise during the course of any given piece of writing are treated as outside the scope of the work itself; the material of the work deals with facts and recommendations, which are based on our conclusions. To end a piece of professional or student work without conclusions, and with questions, would be seen as a lack of seriousness.

We believe time invested in investigation is only worthy if we emerge with answers. And the answers we are to have begin with our original questions and are influenced by the way we approach those questions. Yet we approach the questions knowing they will need to be answered, and so our opening approach is very limited. We not only formulate opening questions we feel we will have a good chance of answering; our entire attention, for the duration of looking at the question, is focused on finding an answer. So where is the originality in our thought? And where is the opportunity to explore the limitations of thought itself as it is applied to the complexity and urgency of matters in the world?

If my opening point of inquiry is designed to be something I know I can find an answer for, then certainly I have no opportunity to go beyond what I know in order to address it, and so there is nothing new. And if I begin a problem knowing I will be judged on finding an answer for it, then I will necessarily limit or eliminate any point of fact or inquiry that takes me from that task. The generally accepted process and presentation of writing today, in academic and professional contexts, is linear and monolithic.
We talk about complexity and interrelatedness but we judge, evaluate and reward a written approach to that complexity and interrelatedness according to how well it fits into what we already know and according to the standards we have already found to be acceptable. Because we are bound to our knowledge and our processes of merit through training, repetition, various forms of aggrandizement and institutional awareness, however subtle or overt, we disregard or penalize information and modalities that fall outside our realm of knowing. Therein, the places we go to fulfill our knowing may expand (geographically or otherwise) but the way we approach and arrive at knowing remains the same. Although some may develop original technical innovations, those technical innovations will be used as tools to serve the knowledge system that is already established within any given realm of inquiry.

Our assumptions and biases about knowledge creation are interwoven with our experiences, our interpretations of those experiences, and our identification with the experiences and interpretations. Patterns emerge and we craft a self through the mosaic and soon that mosaic can stand in for our self. When that mosaic of experience and interpretation is cultivated through authority and the authority of our own experience and sense of self, we will extend our sense of authority into the realm of that which we already know. In this we are setting up a subtle preoccupation with what we know and with the familiar way we arrive at knowledge while simultaneously we derive a prejudice against what we do not know and also any unknown means to cultivate the known.

For example, pretend I am a teacher with a PhD: many people have applauded my research, and I publish books, give famous lectures and have tenure at a prestigious school. I feel confident in my work and consider myself an authority in my field. A student comes along who does not know me and takes my class for the first time. She questions my logic and says my class is a bore. She tells me my exams do not test her knowledge of the subject but instead test her ability to repeat my version of the subject. She writes a paper calling into doubt the major premises of my field, to which I have contributed the most popularly followed lines of inquiry; she proposes an entirely new approach to the field and ends her paper with grand questions about the nature of intellectual thought.

How do I approach this? In a typical situation I would question the student’s credibility as a student. I would consider her farfetched, someone incapable of understanding the subject matter. I would have trouble finding a way to give her a passing course score. She would be a problem to fix or to solve or to ignore. Never would I consider that perhaps she had a point. Why? Because I assumed the ascendancy of my own knowledge, based on my own sense of authority. Because the student operated outside my realm of knowledge and outside my sense of appropriateness in the acquisition of knowledge, I decided she was wrong. Invisible to me are my own assumptions of authority, including my assumption that authority has validity. Even though I have a wide set of experiences related to a branch of knowledge, I am unable to see that those experiences are necessarily limited, because I have only had a certain set of them, no matter how vaunted, and that knowledge itself is limited because it is always about what is already known. So I approach my student as if she is a problem instead of approaching her as a person with insight that may also be valid and should be explored.
If we use something that is already known to approach what is new, how can we really approach it? The new will consistently be framed according to its relationship or lack of relationship with what has been established. And as has already been stated, what has been established is where authority has been placed, including our reverence for all the things we have already authorized.

Many of us operate in this field of inquiry, discovery and selfhood, and it is apparent when we review our written forays into the realms of global problem-solving discourse. So often we conclude. So often we have answers and set approaches to solving problems. So often we solicit recommendations for action. But rarely do we simply ponder, except in what we have relegated to philosophy. In the realms of activity (politics, business, economics, education, health, environment, etc.) we theorize action, take action or meet to form a new activity. We say events and circumstances are too urgent to stop for too much thought, but in our haste, our actions themselves lead to further reasons to have to meet again to reorient ourselves. Our writing becomes a part of this process. We write in order to validate our next action, and we guide that writing according to what we think that action should be. We rarely write to discover the appropriate terms upon which our action should be based. We rarely question the terms upon which our previous action has been based. We rarely inquire into our standards; we just try to find novel ways to meet them.

What if our grand questions about the world ended with “I don’t know”? Would that harm us? Does not knowing have to be accompanied by feelings of panicked desperation? Must we think ourselves inert if we do not have answers? What if we started with I don’t know? What will we do with Palestine and Israel? I don’t know. What will we do about hunger and miseducation? I don’t know. How will we live peaceably, without war and conflict? I don’t know. Is not “I don’t know” a better place to start than our usual conclusions, ideals and ideas? How is referencing what has already happened and what has already been thought (and what has not worked) a correct way to address how to move forward from right now? Perhaps in the ground of not knowing we have more possibility to create something new. We can put aside our predispositions and knowledge and simply give matters our attention. Indeed this may take more time. Or it may take no time at all. But the lunches, dinners, breakfasts, meetings, flights and arrangements accompanying our usual fast way of gathering together, sometimes for several days or weeks, to swiftly arrive at answers also take time, and over the course of years have substantially little to show as far as solving our grand world problems. It is obvious from the overall state of global affairs that we don’t know, so saying so ought not to be too challenging.

Let us all stop pretending. The urge to be right and definitive is ingrained in us. We adore confidence and conclusions, especially when they are accompanied by a new technology and somebody says the word “science”. We write long reports about everything we know and every state of being we think we should have, including bullet points for things we can do to be better. But rarely do we write about the world as it actually exists right now, and how we have been utterly incapable of doing anything fundamentally different in it. This is not pessimism. Saying we are optimistic and having positive thoughts is not a substitute for critical inquiry. We are not going to smile ourselves into a better world. And just because we are smiling does not mean we care.

Our writing reflects this: a tome of high-sounding phrases, deferred promises and volumes of technical bureaucratic lexicon. To what end? Perhaps it is our desire to conclude that hampers us. Perhaps it is our desire to know. Perhaps it is desire itself, or desire operating with other facets of our personality. But certainly we are operating within a certain structure, and we seem thus far unable to work ourselves beyond its limits.

Maybe we can free ourselves with our writing. I am not suggesting this as a method or as an exclusive approach. But so often we come to know through the words that we read, and too often it is only in the realm of fiction that we allow ourselves imagination, curiosity, the unknown. Let us introduce imagination into our non-fictive selves. When we seriously consider policy and law and action, can we remove ourselves from tradition? Can we start only with what is now? How else can we speak to the moment if not from the place it exists? Now is only informed by history if we are living in the past.

This is not a call to replace an old ideology with a new one, or to discount traditions which have lasted because they are just. It is simply a question about how we might write differently about the circumstances in the world burning for our attention. It is an offer to approach our writing about the world’s most serious matters from a point of doubt regarding our own understanding. It is a somewhat diffident rejection of conclusions, and of the way conclusions as a construct have been organized into our lives as authority: through the way we have been taught to write and to express ourselves through writing, and finally to arrive at knowledge of ourselves and the world through what we write and what we have read. It is a question about how writing affects our thought, how our thought affects our world, and how we remain ignorant of a process we have created ourselves and silently abide by.

(I feel an internal pressure to “wrap up” what I have written. But my intent was not to begin to reach a goal, an end. It was to explore a question. And for now that exploration is complete, although not in conclusion.)

Short Summary of a New Idea: Cryodynamics

Otto E. Rossler, Faculty of Science, University of Tubingen, Germany

Abstract

A brief history and description of cryodynamics is offered. While still in its infancy, it is already strong in basic findings and predictions. It is a classical science the quantum version of which still waits to be formulated. It is highly promising technologically. A new fundamental science is a rare event in history. The basic insight is to picture randomly moving hyperbolic tree trunks in Sinai’s “rolling tennis ball in an orchard game” (Harry Thomas’ term), but flipped upside down so that the trees are hollow funnels pointing downwards.

- — - -

Cryodynamics is a classical field that appears to be new. It is a sister discipline to thermodynamics and automatically has as many implications as the latter, despite its belated discovery. So far only a few features have been elaborated. For example, its deterministic entropy function is identical to the Sackur-Tetrode equation as given by Diebner, but with inverted sign (“ectropy”). If confirmed, this allows for a combined entropic and ectropic model of the universe; all direction-of-time-bound models of the universe would then lose their validity. The problem of black-hole recycling which poses itself in this case is still unsolved, in spite of Hawking’s early stab.
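For context, the Sackur-Tetrode entropy of a classical monatomic ideal gas, the function whose sign is here proposed to flip, can be written in one standard form as

```latex
S = N k_B \left[ \ln\!\left( \frac{V}{N \lambda^{3}} \right) + \frac{5}{2} \right],
\qquad
\lambda = \frac{h}{\sqrt{2\pi m k_B T}} ,
```

so the proposed “ectropy” would simply be -S. (This is the textbook statistical form; Diebner’s deterministic derivation, referenced above, is claimed to arrive at the same closed expression.)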

Held against this big scenario, what is presently on hand is still limited. It is the discovery that if you subject a fast-moving, low-energy classical particle to successive grazing-type encounters with attractive, rich-in-kinetic-energy particles, the low-energy particle on average loses kinetic energy to the high-energy ones (“energetic capitalism”). This is very unexpected and paradoxical. Nevertheless the idea goes back to Zwicky in 1929 and Chandrasekhar in 1943, although it was not elaborated at the time.

The “miracle” is that if you invert the direction of time, the opposite behavior is implied, and all of the conceptual problems of thermodynamics are re-encountered. The second major feature is that the new phenomenon is numerically elusive for stiffness reasons. While the increasing disorder of entropy increase, valid in the repulsive case, is a numerically stable feature in statistical thermodynamics, the decreasing order of ectropy increase, valid in the attractive case, is not numerically stable: very minor numerical deviations suffice to destroy the ongoing decrease of entropy. This explains why, in the thousands of multi-particle simulations done so far in galactic dynamics, to mention only this subcase, the phenomenon has never been encountered numerically.

Another reason for the lack of resonance up to now is that thermodynamics has always been understood as a statistical theory, described with probability-theoretic axioms. While this is not false, it eschews the underlying deterministic, chaos-theoretic mechanism. The intrinsic inaccuracy thereby incurred has not caused much damage in thermodynamics so far, but it cannot be carried over to cryodynamics: cryodynamics does not emerge without prior acknowledgement of deterministic chaos as its root. (This new fact strongly constrains the accuracy of quantum mechanics — backwards in time — which is quite unexpected.)

Let me explain the simplest example, which also worked numerically in the first two successful simulations so far. A fast-moving, low-mass particle is subjected to encounters with a Newtonian potential trough into which it dips and then climbs out again. If the trough is periodically or nonperiodically approaching and receding (modulated in its depth), a net effect results: a loss of energy for the traversing fast particle. If we invert time after a while, the exact opposite occurs back to the initial point, from then on giving way to the previous behavior, but now in the opposite direction of time.
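As a purely illustrative toy, not the published simulations of Sonnleitner or Movassagh, one can sketch the single-particle setup in a few lines. All parameter values are invented, and a Gaussian well stands in for the Newtonian trough for numerical convenience; since the genuine cryodynamic effect is stated above to be numerically delicate, this sketch only sets up the experiment (one traversal of a depth-modulated attractive trough) and makes no claim about the sign of the net energy change:

```python
import math

def traverse(mod_amp, v0=2.0, depth=1.0, width=1.0, omega=3.0, dt=1e-3):
    """Integrate a unit-mass particle crossing a depth-modulated trough
    V(x, t) = -(depth + mod_amp*sin(omega*t)) * exp(-(x/width)**2)
    and return its kinetic energy (before, after) the traversal, both
    measured far from the trough where V is negligible."""
    def force(x, t):
        d = depth + mod_amp * math.sin(omega * t)
        # F = -dV/dx for the Gaussian well above
        return -d * (2.0 * x / width**2) * math.exp(-(x / width) ** 2)

    x, v, t = -10.0 * width, v0, 0.0
    e_in = 0.5 * v0 ** 2
    for _ in range(200_000):              # hard step cap in case of trapping
        a = force(x, t)                   # velocity-Verlet step
        x += v * dt + 0.5 * a * dt ** 2
        v += 0.5 * (a + force(x, t + dt)) * dt
        t += dt
        if x > 10.0 * width:
            break
    return e_in, 0.5 * v ** 2
```

With the modulation switched off (`mod_amp=0`) the traversal should conserve kinetic energy, a useful sanity check on the integrator; flipping the sign of `depth` turns the trough into the repulsive mound of the ball-in-a-forest picture discussed next.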

The best way to understand all this is to invert the sign of the potentials. Then the opposite phenomenon, familiar from statistical thermodynamics, occurs: The periodically modulated trough is now replaced by a periodically modulated mound or tree. It is obvious that the recurrent unequal increases and decreases in the height of the hyperbolic mound amount to a qualitatively different effect in their sums.

To see this, think of a ball running frictionlessly through a forest of (at first fixed) trees with softly rising features. Then the ball will from time to time climb up a little and come down again – without losing or gaining in its net kinetic energy. Now let the trees be moving slowly at random (or periodically). Then the two cases – of the tree approaching the path of the up-climbing particle or receding from it – have different strengths (different mean heights). This explains dissipation. On inverting time after a while, the net gain becomes a net loss for the moving particle — until the initial condition is re-arrived at. Then the gaining streak sets in again, now in the new direction of time.

When we leave the repulsive case by inverting the tree stems into mirror-symmetric troughs, then the opposite thing happens to a ball running on the surface of this inverted landscape. This is the new phenomenon of cryodynamics, proved to the mental eye.

After this geometric proof, the numerical challenge clearly is on – especially so after the successful two cases published by Klaus Sonnleitner and Ramis Movassagh, respectively. The new science is waiting to be put on a broader computational basis.

Why is this important? The new cosmology that is implied is clearly not a sufficient motivation, given that almost everyone is happy with the old paradigm. So all that remains as a convincing reason for further research is an economically compelling application.

Such an application could be provided by ITER, a hot-fusion reactor based on the Tokamak design: a torus-shaped, millions-of-degrees-hot plasma that is magnetically confined in a metal ring. The plasma must not touch the (necessarily much colder) confining walls. This design is dynamically unstable: the plasma tends to break out of the toroidal magnetic confinement and suddenly touch the wall somewhere, letting the overall temperature collapse. Decades of work have yet to produce a working prototype, and the current hope, that after another quarter of a century the machine will work, is upheld with many billions of euros already sunk in. Here cryodynamics can be of help in principle. The paradoxical option: apply a heat bath of even hotter attractive particles at the location of the budding instability. These hotter attractive particles – like the inverted tree trunks – will then cool the too-hot nucleons, curbing the budding local protrusion.

“Cooling by hotter attractive particles” is the essence of cryodynamics. The hotter particles could be electrons shot in concentrically towards the budding hot spot. This poses no problem in principle, since very much hotter electrons are easy to generate in small, dirigible-beam accelerators.

The idea was published under the title “Is hot fusion made feasible by the discovery of Cryodynamics?” in Advances in Intelligent Systems and Computing, Volume 192, pp. 1–4, Springer-Verlag 2013. It can still be patented, since no design details were disclosed. This is a potentially lucrative technological proposal, yet no country has shown interest so far, nor have the oil companies.

Acknowledgments

I am grateful to have been allowed to tell the whole story in as brief a form as I could. I thank Dan Stein, Eric Klien, Christophe Letellier, Nico Heller, Heinz Clement and Jozsef Fortagh for discussions. Paper presented at the “CQ Colloquium” of the University of Tubingen on June 28, 2013. For J.O.R. (Submitted to Nature.)

The arXiv blog at MIT Technology Review recently reported a breakthrough, ‘Physicists Discover the Secret of Quantum Remote Control’ [1], which led some to ask whether this could be used as an FTL communication channel. To appreciate the significance of the paper on quantum teleportation of dynamics [2], note that experiments have already bounded the speed of any hypothetical influence between members of an entangled pair at *at least* 10,000 times the speed of light [3]. The next big communications breakthrough?


In what could turn out to be a major breakthrough for long-distance communications in space exploration, a fundamental problem would be resolved: if a civilization were eventually established on a star system many light years away, for example on one of the recently discovered Goldilocks-zone super-Earths in the Gliese 667C system [4], then communications back to people on Earth might after all be instantaneous.

The implications do not stop there. As recently reported in The Register [5], researchers at the Hebrew University of Jerusalem have shown that quantum entanglement can span both time and space [6]. Their paper, “Entanglement Between Photons that have Never Coexisted” [7], describes how photon-to-photon entanglement can connect photons with photons in their past or future, opening the possibility of engineering technology that communicates not just instantaneously across space, but across space-time.

Whilst in the past many have questioned what benefits have been gained from quantum physics research, and in particular from large projects such as the LHC, it would seem that quantum entanglement may be one of the big pay-offs. Whilst it has yet to be categorically proven that entanglement can be used as a communication channel, and the majority opinion dismisses the possibility, one can expect much activity in quantum entanglement over the next decade. It may yet spearhead the next technological revolution.

[1] www.technologyreview.com/view/516636/physicists-discover-the-secret-of-quantum-remote-control
[2] Quantum Teleportation of Dynamics http://arxiv.org/abs/1304.0319
[3] Bounding the speed of ‘spooky action at a distance’ http://arxiv.org/abs/1303.0614
[4] http://www.universetoday.com/103131/three-potentially-habitable-planets-found-orbiting-gliese-667c/
[5] The Register — Biting the hand that feeds IT — http://www.theregister.co.uk/
[6] http://www.theregister.co.uk/2013/06/03/quantum_boffins_get_spooky_with_time/
[7] Entanglement Between Photons that have Never Coexisted http://arxiv.org/abs/1209.4191


Originally posted via The Advanced Apes

Through my writings I have tried to communicate how unique our intelligence is and how it continues to evolve. Intelligence is the most bizarre of biological adaptations: it appears to be an adaptation of infinite reach. Whereas organisms can only be so fast and efficient at running, swimming, flying, or any other evolved skill, the same finite limits do not appear to apply to intelligence.

What does this mean for our lives in the 21st century?

First, we must be prepared to accept that the 21st century will be nothing like the 20th. All too often I encounter people who expect change in the 21st century to mirror the pace humanity experienced in the 20th. This will simply not be the case. Just as cosmologists are well aware of the accelerating expansion of the universe, evolutionary theorists are well aware of the accelerating pace of techno-cultural change. This acceleration shows no signs of slowing down, and few models that incorporate technological evolution predict that it will.
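The gap between extrapolating at a constant 20th-century pace and allowing for acceleration can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions (one arbitrary "unit" of change per decade, doubling each decade), not a forecast:

```python
# Toy comparison: linear extrapolation of change vs. an accelerating
# (here, doubling-per-decade) trajectory. Units are hypothetical.

def linear_projection(rate_per_decade, decades):
    """Total change if each decade delivers the same amount as the last."""
    return rate_per_decade * decades

def exponential_projection(rate_per_decade, growth, decades):
    """Total change if each decade delivers `growth` times the previous one."""
    total = 0.0
    step = rate_per_decade
    for _ in range(decades):
        total += step
        step *= growth
    return total

decades = 10  # the ten decades of the 21st century
linear = linear_projection(1.0, decades)
accel = exponential_projection(1.0, 2.0, decades)
print(linear, accel)  # → 10.0 1023.0
```

Under these stylized assumptions, the accelerating projection yields two orders of magnitude more change than the linear one over the same century, which is why linear intuitions so badly underestimate what is coming.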

The result of this increased pace of change will likely not just be quantitative; it will be qualitative as well. This means that communication and transportation capabilities will not just become faster: they will become meaningfully different, in ways difficult for contemporary humans to understand. It is in this strange world of qualitative evolutionary change that I will focus on two major processes most futurists currently predict.

Qualitative evolutionary change produces interesting differences in experience. This change is often referred to as a “metasystem transition”: a group of subsystems coordinates its goals and intents in order to solve more problems than the constituent systems could alone. There have been a few notable metasystem transitions in the history of biological evolution:

  • Transition from non-life to life
  • Transition from single-celled life to multi-celled life
  • Transition from decentralized nervous system to centralized brains
  • Transition from communication to complex language and self-awareness

All these transitions share the characteristic of subsystems coordinating to form a larger system that solves more problems than they could individually. All of them increased the rate of change in the universe (i.e., the local reduction of entropy). The qualitative nature of the change is important to understand, and may best be explored through a thought experiment.

Imagine you are a single-celled organism on the early Earth. You exist within a planetary network of single-celled life of considerable variety, all adapted to different primordial chemical niches. This has been the nature of the planet for well over 2 billion years. Then, some single-cells start to accumulate in denser and denser agglomerations. One of the cells comes up to you and says:

I think we are merging together. I think the remainder of our days will be spent in some larger system that we can’t really conceive. We will each become adapted for a different specific purpose to aid the new higher collective.

Surely that cell would be seen as deranged. Yet as the agglomerations of single cells became denser, formerly autonomous cells came to rely more and more on each other to exploit previously unattainable resources. As the process accelerated, this integrated network formed something novel, and more complex than had ever existed before: the first multicellular organisms.

The difference between living as an autonomous single cell and living within a multicellular organism is not just quantitative (being able to exploit more resources) but also qualitative (shifting from complete autonomy to being one small part of an integrated whole). Such a shift is difficult to conceive of before it becomes a new normative layer of complexity within the universe.

Another example of such a transition, one that may require less imagination, is the transition to complex language and self-awareness. Language is certainly the most important phenomenon separating our species from the rest of the biosphere. It allows us to engage in a new evolution, techno-cultural evolution, which is itself a new normative layer of complexity in the universe. For this transition too, the qualitative leap is important to understand. If you were an australopithecine, your mode of communication would not be much more efficient than that of any modern great ape. Like all other organisms, your mind would be essentially isolated. Your deepest thoughts, feelings, and emotions could not be fully expressed to, or understood by, other minds within your species. Furthermore, an entire range of thought would be completely unimaginable to you. Anything abstract would not be communicable. You could communicate that you were hungry, but you could not communicate what you thought of particular foods, for example. Language changed all that; it unleashed a new thought frontier. Not only could ideas be exchanged at a faster rate, but the range of ideas that could be thought at all increased.

And so after that digression we come to the main point: the metasystem transition of the 21st century. What will it be? There are two dominant, non-mutually exclusive, frameworks for imagining this transition: technological singularity and the global brain.

The technological singularity is essentially the point in time when the agent of techno-cultural change itself changes. At the moment, the modern human mind is that agent. But artificial intelligence is likely to emerge this century, and a true artificial intelligence may be the last machine that we (i.e., biological humans) ever invent.

The second framework is the global brain: the idea that a collective planetary intelligence is emerging from the Internet, created by increasingly dense information pathways. This would essentially give the Earth an actual centralized, sensing nervous system, and its evolution would mirror, in a sense, the evolution of the brain in organisms and the development of higher-level consciousness in modern humans.

In a sense, both processes could be seen as the phenomena that will continue to enable trends identified by global brain theorist Francis Heylighen:

The flows of matter, energy, and information that circulate across the globe become ever larger, faster and broader in reach, thanks to increasingly powerful technologies for transport and communication, which open up ever-larger markets and forums for the exchange of goods and services.

Some view the technological singularity and the global brain as competing futurist hypotheses. I see them instead as deeply symbiotic phenomena. If the metaphor of a global brain is apt, the Internet at the moment forms a type of primitive and passive intelligence. But as the Internet takes on an ever-greater role in human life, and as human minds gravitate towards communicating and interacting in this medium, it should start to become an intelligent mediator of human interaction. Heylighen explains how this should be achieved:

the intelligent web draws on the experience and knowledge of its users collectively, as externalized in the “trace” of preferences that they leave on the paths they have traveled.

This is essentially how the brain organizes itself: individual neurons register shapes, emotions, and movements, and their connections combine to communicate a “global picture”, an individual consciousness.
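Heylighen's "trace" idea can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not Heylighen's actual algorithm): every link a user traverses gets reinforced, so the network self-organizes around well-travelled paths, loosely analogous to the strengthening of frequently used synapses.

```python
from collections import defaultdict

class TraceWeb:
    """Toy model of a web that learns from the traces users leave."""

    def __init__(self):
        # weight[page][next_page] = accumulated traversals of that link
        self.weight = defaultdict(lambda: defaultdict(float))

    def record_visit(self, path):
        """Reinforce each link along one user's browsing path."""
        for a, b in zip(path, path[1:]):
            self.weight[a][b] += 1.0

    def recommend(self, page):
        """Suggest the most-reinforced next page, if any."""
        links = self.weight[page]
        if not links:
            return None
        return max(links, key=links.get)

web = TraceWeb()
web.record_visit(["home", "articles", "global-brain"])
web.record_visit(["home", "articles", "singularity"])
web.record_visit(["home", "articles", "global-brain"])
print(web.recommend("articles"))  # "global-brain" has the strongest trace
```

No single user designed the recommendation; it emerged from the aggregate of individual traversals, which is the sense in which the intelligent web "draws on the experience and knowledge of its users collectively".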

The technological singularity fits naturally within this evolution. The biological human brain can only connect so deeply with the Internet; we must externalize our experience of it through (increasingly small) devices like laptops and smartphones. Artificial intelligence, or biological intelligence enhanced with nanotechnology, could form a far deeper connection. Such a development could, in theory, create an all-encompassing information-processing system. Our minds (largely “artificial”) would form the neurons of the system, and a decentralized order would emerge from their dynamic interactions, quite analogous to the way higher-level complexity has emerged in the past.

So what does this mean for you? Futurists debate the likely timing of this transition, but current predictions converge on a median of 2040–2050. As we approach this era we should expect many fundamental aspects of our current institutions to change profoundly. Several new ethical issues will also arise, including issues of individual privacy and of government and corporate control: all issues that deserve a separate post.

Fundamentally, this also means that your consciousness and your nature will change considerably throughout this century. The thought may sound bizarre and even frightening, but only if you believe that human intelligence and nature are static and unchanging. In reality they are an ever-evolving process. The only difference in this transition is that you will actually be conscious of the evolution itself.

Consciousness has never experienced a metasystem transition (since the last metasystem transition was towards higher-level consciousness!). So in a sense, a post-human world can still include your consciousness. It will just be a new and different consciousness. I think it is best to think about it as the emergence of something new and more complex, as opposed to the death or end of something. For the first time, evolution will have woken up.


Amber Alert for Human Freedom

By Michael Lee

We’re witnessing increased violations of the air space of sovereign nations by drones and of the privacy of individuals by a variety of surveillance and monitoring technologies. And, as Al Gore pointed out in his latest book, The Future, the quality of democracy has been degraded in our times, largely as a result of the role of big money lobby groups influencing public policy to the exclusion of “citizen power”. Ironically, deep below the surface of the high tech, liberating mobile-digital world evolving in front of our bedazzled eyes, our Western concept of freedom is undergoing its sternest test since the end of the Cold War.

For behind the glittering success of the communications revolution powered by internet and mobile telephony, a battle is being waged for global control of the means of information. In the eyes of owners of the digital means of information, whether governments or corporations, the privacy of the individual has been subordinated to the value and leverage of digital content.

Regarding the threat to our privacy represented by these developments, on a scale from low (green) to severe (red)[1], I’d suggest we’ve already reached an amber (orange) alert, the second highest level. Let’s assess intrusions into air space and into our privacy.

  • Edward Snowden has made disclosures about a state surveillance program called PRISM, a clandestine national-security electronic surveillance program operated by the National Security Agency (NSA) since 2007.[2] It appears to empower the government to take customer information from telecommunications companies like Verizon and internet giants like Google and Facebook, enabling the monitoring of phone calls and emails of private citizens. Some senior EU officials say they are shocked by these reports of state spying on private persons, as well as by the bugging of EU offices in Brussels and Washington DC. The PRISM program is clearly intrusive. And it is endorsed by a democratically elected liberal president.
  • Drones, whether reconnaissance craft or ones armed with missiles and bombs, are being used extensively on both domestic and foreign soil under Obama's presidency. They are operated by the US Air Force and the CIA. FBI Director Robert Mueller has recently admitted in public that drones have been used domestically for surveillance of some American citizens.[3] (My jaw literally dropped when I saw him matter-of-factly confirm this inappropriate Big Brother-style deployment of military technology, however “targeted”.)
  • Google Street View and satellite photography peep inside the perimeters of people’s homes to expose them to unsolicited public and governmental viewing. Without permission, Street View cameras take photos from an elevated position, overlooking hedges and walls specifically erected to preclude public viewing of some areas of private homes. It seems homes and back gardens are now under the spotlight of Google and the government as they attempt to digitally map out our lives as comprehensively as they can. This is virtual trespassing into the world’s residential areas.[4] It’s an Orwellian practice. To read Google’s viewpoint see http://www.google.com/help/maps/streetview/privacy.html[5]


  • Physical movements of citizens in cities and towns are under increasing surveillance by a growing number of CCTV cameras as well as GPS devices in mobile phones.
  • Financial transactions are all tracked by card associations, networks and financial institutions. In addition, card schemes are aggressively attempting to inaugurate the cashless society in order to obliterate the anonymity and privacy afforded by cash payments, while card and bank details on customer databases are all-too frequently hacked and stolen by fraudsters and identity thieves.
  • Digital profiles of individuals are routinely compiled by both corporations and governments for marketing and monitoring purposes.
  • Social media broadcast personal (and sometimes intimate) photos and comments on their global platforms, exposing the material to unsolicited viewing by undesirable persons such as sexual predators and online bullies: an unintended consequence, since the material was voluntarily submitted by the users themselves.

In addition to these forms of surveillance of individuals and populations, there are also the ingrained practices of thought control and groupthink often at work in academia and the media. Political correctness, for example, has created an atmosphere of reverse intolerance, in which primarily conservative thinkers, and billions of religious persons who may hold traditional values frowned upon by the advocates of correctness, are subjected to the very name-calling and insults one would assume political correctness would be keen to eradicate if it aspired to be even-handed.[6] True scientific thinking, by contrast, creates an ideology-neutral atmosphere for healthy, open-minded intellectual discussion and the efficient production of knowledge.

In sum, the following aspects of an individual’s life are being tracked, mapped and monitored: his/her private home, physical whereabouts, transactions, data, personal communications, thoughts and values. Taken together, all these kinds of intrusion into private lives of individuals and populations add up to total surveillance. That equates to a subtle, but comprehensive, assault on privacy.

Figure 1: Amber alert

Figure 1 shows technologies monitoring, mapping and policing our physical world: drones used for assassinations and for surveillance, Google videos of residential areas, satellite photos of our homes and gardens and CCTV cameras recording our movements in cities and on urban premises. Digital profiles of our homes and lives are assembled, which can be used for both marketing and surveillance, usually without our consent. On top of that, as already mentioned, there’s increased thought control in education and in the media under the regime of correctness.

What does this brave new world of total surveillance of the individual mean for human freedom? Are we all destined to become unwitting Trumans in a reality show written, directed and produced by powerful figures in corporations and governments?


Figure 2: Poster for the Truman Show

In the global struggle to control the means of information, we’re being offered a deal, a Faustian pact for the digital age. We’re being asked to exchange our claim to privacy for a kind of remotely monitored freedom and security. While there are undoubtedly some public benefits arising from this increased surveillance, such as fraud prevention in financial services and provision of CCTV evidence for crimes and misdemeanours, even a cursory cost-to-benefit analysis shows that we’re already paying too high a price for increased security measured against the diminishment of our freedom.

As long as we hand over our basic privacy to our digital masters, and don't make a stink like Edward Snowden, we'll be left in peace. Until we start to think and believe differently from the mainstream, that is. Then individuals can be targeted with soft power like spying software and phone taps, with medium power like vilification in the media or, in exceptional cases, with hard power from a military drone, which can blow up human targets in their vehicles in any public place on earth with pinpoint accuracy.

So we can be free as long as Google street cars and satellite cameras can film our back yards and driveways, as long as drones can violate the air space above us, as long as our emails and phone conversations can be tapped, as long as we conform to political correctness.

In essence, the freedom on offer for our digital age is being stripped of human privacy and of its distinctly human character of independence. But freedom without privacy and independence lacks real substance. Freedom to think and to believe differently is being sucked out of public life. Freedom is becoming an empty shell.

I don’t believe human populations are going to continue to buy this false offer of compromised freedom or to put up forever with surveillance “creep” by the owners of the global means of information. This kind of conformist, highly monitored, impersonal freedom sucks.

As today’s digital masters continue their conquest of the physical world, trying to map it out, digitize it and control it, we notice there’s no underlying social contract between the current powers-that-be and the populations of nations. That’s because the digital world is largely unregulated and borderless, whereas social contracts (such as democracy), which govern politics and society in the real world, go back to a previous era dominated by largely democratic nation-states.

But there's no social contract for the Information Age. There's no social contract for the Internet. Nothing has been negotiated between population groups and the digital powers of corporations and governments. This represents a dangerous global power vacuum. No wonder our human freedom is dissipating. And all the while, technology is accelerating faster than both knowledge accumulation and the evolution of governance.

It’s the creeping totality of surveillance and the combined use of intrusive technologies which is menacing, not Google Street View on its own, or drone attacks, or the tracking of financial transactions, or the use of CCTV on street corners. Absent a social contract for the digital age, these technologies and practices are turning us into the unfree subjects of a new commercially driven world we can call Globurbia.

Privacy is dying all around us. Independence of thought is shrinking. Democracy is being diluted and undermined by abuses of virtually unbridled power. Information and data are being aggregated and bureaucratized. Nothing is private, nothing is sacred. We’re in the thrall of the owners of the means of information, the new masters of society. It’s not so much a police state as a soul-less vacuum.

If we’re not vigilant, we’ll all end up, within this generation, living in a conformist, militarized, paranoid, drone-policed society of powerless, remotely monitored, robotic individuals, spiritually drugged by consumerism and mentally bullied by correctness.

Conclusions

We urgently need a new international political protocol for the digital age which promotes the protection of privacy, freedom and independent thinking. We need to embrace an ideology-neutral scientific ethos for solving common human and social problems.

The next steps for halting surveillance creep in order to protect human freedom would be for Google to permanently ban Street View cars from all residential areas, for the current imperialistic drone policy[7] to be completely overhauled, for a new Digital Age political protocol to be developed and for global scientific progress, guaranteeing freedom and independence of thought, to be embraced.

Michael Lee’s book Knowing our Future – the startling case for futurology is available at the publisher http://www.infideas.com/pages/store/products/ec_view.asp?PID=1804 or on Amazon.com.

Acknowledgements & websites

Bergen, P. & Braun, M. September 19, 2012. Drone is Obama's weapon of choice. http://www.cnn.com/2012/09/05/opinion/bergen-obama-drone

Center for Civilians in Conflict. http://civiliansinconflict.org

Clarke, R. (Original of 15 August 1997, latest revs. 16 September 1999, 8 December 2005, 7 August 2006). Introduction to Dataveillance and Information Privacy, and Definitions of Terms. http://www.rogerclarke.com/DV/Intro.html

Gallagher, R. Separating fact from fiction in NSA surveillance scandal. http://www.dallasnews.com/opinion/sunday-commentary/20130628-ryan-gallagher-separating-fact-from-fiction-in-nsa-surveillance-scandal.ece

Gore, A. 2013. The Future. New York: Random House Publishing Group.

http://www.algore.com

Surveillance Studies Network (SSN). 2006. A Report on the Surveillance Society: Summary Report

http://library.queensu.ca/ojs/index.php/surveillance-and-society/index

Surveillance & Society — http://www.surveillance-studies.net/

Wikipedia — http://en.wikipedia.org/wiki/Drone_attacks_in_Pakistan


[1] Homeland Security employed a system rating terror threats with a five-color code reflecting the probability of a terrorist attack and its potential gravity: Red: severe risk; Orange: high risk; Yellow: significant risk; Blue: general risk; Green: low risk.

[2] http://en.wikipedia.org/wiki/PRISM_(surveillance_program) PRISM is a government code name for a data collection effort known officially by the SIGAD US-984XN. The program is operated under the supervision of the United States Foreign Intelligence Surveillance Court pursuant to the Foreign Intelligence Surveillance Act (FISA). Its existence was leaked by NSA contractor Edward Snowden, who claimed the extent of mass data collection was far greater than the public knew, and included “dangerous” and “criminal” activities.

[3] http://www.washingtontimes.com/news/2013/jun/21/surveillance-scandals-swirl-obama-sit-down-privacy/

[5] Google’s Privacy statement regarding Street View reads: “Your privacy and security are important to us. The Street View team takes a number of steps to help protect the privacy and anonymity of individuals when images are collected for Street View, including blurring faces and license plates. You can easily contact the Street View team if you see an image that should be protected or if you see a concerning image.” My view is that we own our homes and none of us have given personal permission for our premises to be photographed and videotaped to be watched by a global audience. Not that Google asked us anyway.

[6] Political correctness is defined as “the avoidance of forms of expression or action that are perceived to exclude, marginalize, or insult groups of people who are socially disadvantaged or discriminated against.” (The New Oxford Dictionary of English, p. 1435.) Paradoxically, though, this doctrine has itself become discriminatory, promoting stereotyping of some social groups which are not perceived as socially disadvantaged. Consequently, political correctness has loaded the dice. The oppressive atmosphere of linguistic bias created by political correctness has hardened into a form of thought control which has entirely lost its sense of proportion and blunted the academic search for truth, independent thought and healthy discourse. Prejudice in reverse is prejudice nonetheless.

[7] “Covert drone strikes are one of President Obama’s key national security policies. He has already authorized 283 strikes in Pakistan, six times more than the number during President George W. Bush’s eight years in office. As a result, the number of estimated deaths from the Obama administration’s drone strikes is more than four times what it was during the Bush administration — somewhere between 1,494 and 2,618.” Bergen, P & Braun, M. Drone. September 19, 2012. Drone is Obama’s weapon of choice. http://www.cnn.com/2012/09/05/opinion/bergen-obama-drone