
Quoted: “Bitcoin technology offers a fundamentally different approach to vote collection with its decentralized and automated secure protocol. It solves the problems of both paper ballot and electronic voting machines, enabling a cost effective, efficient, open system that is easily audited by both individual voters and the entire community. Bitcoin technology can enable a system where every voter can verify that their vote was counted, see votes for different candidates/issues cast in real time, and be sure that there is no fraud or manipulation by election workers.”

Read the article here » http://www.entrepreneur.com/article/239809?hootPostID=ba473face1754ce69f6a80aacc8412c7
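The verification property claimed in the quote, that every voter can confirm their vote was counted, can be illustrated with a toy commitment scheme. This is a hedged sketch of the general idea, not Bitcoin's actual protocol: each voter publishes a hash commitment of their ballot plus a secret nonce to a public ledger, and later checks that their commitment appears there.

```python
import hashlib
import secrets

def commit(ballot: str) -> tuple[str, str]:
    """Return (nonce, commitment) for a ballot; the voter keeps the nonce secret."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}:{ballot}".encode()).hexdigest()
    return nonce, digest

def verify(ledger, nonce, ballot) -> bool:
    """Recompute the commitment and check it appears on the public ledger."""
    return hashlib.sha256(f"{nonce}:{ballot}".encode()).hexdigest() in ledger

# Each voter commits; the commitments go on a public ledger
# (in practice, a blockchain rather than a Python list).
nonce, commitment = commit("candidate-A")
public_ledger = [commitment]

# Later, the voter confirms their ballot was recorded as cast.
assert verify(public_ledger, nonce, "candidate-A")
assert not verify(public_ledger, nonce, "candidate-B")
```

Because only the hash is published, observers can tally and audit commitments without learning how any individual voted until (or unless) the nonce is revealed.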

Quoted: “The Factom team suggested that its proposal could be leveraged to execute some of the crypto 2.0 functionalities that are beginning to take shape on the market today. These include creating trustless audit chains, property title chains, record keeping for sensitive personal, medical and corporate materials, and public accountability mechanisms.

During the AMA, the Factom president was asked how the technology could be leveraged to shape the average person’s daily life.”

Kirby responded:

“Factom creates permanent records that can’t be changed later. In a Factom world, there’s no more robo-signing scandals. In a Factom world, there are no more missing voting records. In a Factom world, you know where every dollar of government money was spent. Basically, the whole world is made up of record keeping and, as a consumer, you’re at the mercy of the fragmented systems that run these records.”

» Read the article here » http://www.coindesk.com/factom-white-paper-outlines-record-keeping-layer-bitcoin/

» Visit Factom here » http://www.factom.org/
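Kirby's claim that Factom "creates permanent records that can't be changed later" rests on the tamper-evidence of chained hashes. As a hedged illustration (not Factom's actual data structures), here is a minimal append-only hash chain in which altering any past entry invalidates every later link:

```python
import hashlib

def link(prev_hash: str, record: str) -> str:
    """Hash a record together with the previous link's hash."""
    return hashlib.sha256(f"{prev_hash}:{record}".encode()).hexdigest()

def build_chain(records):
    """Return the list of chained hashes for a sequence of records."""
    hashes, prev = [], "genesis"
    for rec in records:
        prev = link(prev, rec)
        hashes.append(prev)
    return hashes

records = ["title deed #1", "mortgage signed", "title transferred"]
chain = build_chain(records)

# Tampering with an early record changes every subsequent hash,
# so an auditor holding only the final hash detects the edit.
tampered = build_chain(["title deed #1 (forged)", "mortgage signed", "title transferred"])
assert chain[-1] != tampered[-1]
```

An auditor who anchors only the final hash in a public ledger can later detect any rewrite of the underlying records, which is the mechanism behind the "trustless audit chains" mentioned above.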

Preamble: Bitcoin 1.0 is currency — the deployment of cryptocurrencies in applications related to cash, such as currency transfer, remittance, and digital payment systems. Bitcoin 2.0 is contracts — the whole slate of economic, market, and financial applications using the blockchain that extend beyond simple cash transactions: stocks, bonds, futures, loans, mortgages, titles, smart property, and smart contracts.

Bitcoin 3.0 is blockchain applications beyond currency, finance, and markets, particularly in the areas of government, health, science, literacy, culture, and art.

Read the article here » http://ieet.org/index.php/IEET/more/swan20141110

What follows is my position piece for London’s FutureFest 2013, the website for which no longer exists.

Medicine is a very ancient practice. In fact, it is so ancient that it may have become obsolete. Medicine aims to restore the mind and body to their natural state relative to an individual’s stage in the life cycle. The idea has been to live as well as possible but also die well when the time came. The sense of what is ‘natural’ was tied to statistically normal ways of living in particular cultures. Past conceptions of health dictated future medical practice. In this respect, medical practitioners may have been wise but they certainly were not progressive.

However, this began to change in the mid-19th century when the great medical experimenter, Claude Bernard, began to champion the idea that medicine should be about the indefinite delaying, if not outright overcoming, of death. Bernard saw organisms as perpetual motion machines in an endless struggle to bring order to an environment that always threatens to consume them. That ‘order’ consists in sustaining the conditions needed to maintain an organism’s indefinite existence. Toward this end, Bernard enthusiastically used animals as living laboratories for testing his various hypotheses.

Historians identify Bernard’s sensibility with the advent of ‘modern medicine’, an increasingly high-tech and aspirational enterprise, dedicated to extending the full panoply of human capacities indefinitely. On this view, scientific training trumps practitioner experience, radically invasive and reconstructive procedures become the norm, and death on a physician’s watch is taken to be the ultimate failure. Humanity 2.0 takes this way of thinking to the next level, which involves the abolition of medicine itself. But what exactly would that mean – and what would replace it?

The short answer is bioengineering, the leading edge of which is ‘synthetic biology’. The molecular revolution in the life sciences, which began in earnest with the discovery of DNA’s function in 1953, came about when scientists trained in physics and chemistry entered biology. What is sometimes called ‘genomic medicine’ now promises to bring an engineer’s eye to improving the human condition without presuming any limits to what might count as optimal performance. In that case, ‘standards’ do not refer to some natural norm of health, but to features of an organism’s design that enable its parts to be ‘interoperable’ in service of its life processes.

In this brave new ‘post-medical’ world, there is always room for improvement and, in that sense, everyone may be seen as ‘underperforming’ if not outright disabled. The prospect suggests a series of questions for both the individual and society: (1) Which dimensions of the human condition are worth extending – and how far should we go? (2) Can we afford to allow everyone a free choice in the matter, given the likely skew of the risky decisions that people might take? (3) How shall these improvements be implemented? While bioengineering is popularly associated with nano-interventions inside the body, of course similarly targeted interventions can be made outside the body, or indeed many bodies, to produce ‘smart habitats’ that channel and reinforce desirable emergent traits and behaviours that may even leave long-term genetic traces.

However these questions are answered, it is clear that people will be encouraged, if not legally required, to learn more about how their minds and bodies work. At the same time, there will no longer be any pressure to place one’s fate in the hands of a physician, who instead will function as a paid consultant on a need-to-know and take-it-or-leave-it basis. People will take greater responsibility for the regular maintenance and upgrading of their minds and bodies – and society will learn to tolerate the diversity of human conditions that will result from this newfound sense of autonomy.

In 1906 the great American pragmatist philosopher William James delivered a public lecture entitled, ‘The Moral Equivalent of War’. James imagined a point in the foreseeable future when states would rationally decide against military options to resolve their differences. While he welcomed this prospect, he also believed that the abolition of warfare would remove an important pretext for people to think beyond their own individual survival and toward some greater end, perhaps one that others might end up enjoying more fully. What then might replace war’s altruistic side?

It is telling that the most famous political speech to adopt James’ title was US President Jimmy Carter’s 1977 call for national energy independence in response to the Arab oil embargo. Carter characterised the battle ahead as really about America’s own ignorance and complacency rather than some Middle Eastern foe. While Carter’s critics pounced on his trademark moralism, they should have looked instead to his training as a nuclear scientist. Historically speaking, nothing can beat a science-led agenda to inspire a long-term, focused shift in a population’s default behaviours. Louis Pasteur perhaps first exploited this point by declaring war on the germs that he had shown lay behind not only human and animal disease but also France’s failing wine and silk industries. Moreover, Richard Nixon’s ‘war on cancer’, first declared in 1971, continues to be prosecuted on the terrain of genomic medicine, even though arguably a much greater impact on the human condition could have been achieved by equipping the ongoing ‘war on poverty’ with comparable resources and resoluteness.

Science’s ability to step in as war’s moral equivalent has less to do with whatever personal authority scientists command than with the universal scope of scientific knowledge claims. Even if today’s science is bound to be superseded, its import potentially bears on everyone’s life. Once that point is understood, it is easy to see how each person could be personally invested in advancing the cause of scientific research. In the heyday of the welfare state, that point was generally understood. Thus, in The Gift Relationship, perhaps the most influential work in British social policy of the past fifty years, Richard Titmuss argued, by analogy with voluntary blood donation, that citizens have a duty to participate as research subjects, but not because of the unlikely event that they might directly benefit from their particular experiment. Rather, citizens should participate because they would have already benefitted from experiments involving their fellow citizens and will continue to benefit similarly in the future.

However, this neat fit between science and altruism has been undermined over the past quarter-century on two main fronts. One stems from the legacy of Nazi Germany, where the duty to participate in research was turned into a vehicle to punish undesirables by studying their behaviour under various ‘extreme conditions’. Indicative of the horrific nature of this research is that even today few are willing to discuss any scientifically interesting results that might have come from it. Indeed, the pendulum has swung the other way. Elaborate research ethics codes enforced by professional scientific bodies and university ‘institutional review boards’ protect both scientist and subject in ways that arguably discourage either from having much to do with the other. Even defenders of today’s ethical guidelines generally concede that had such codes been in place over the past two centuries, science would have progressed at a much slower pace.

The other and more current challenge to the idea that citizens have a duty to participate in research comes from the increasing privatisation of science. If a state today were to require citizen participation in drug trials, as it might jury duty or military service, the most likely beneficiary would be a transnational pharmaceutical firm capable of quickly exploiting the findings for profitable products. What may be needed, then, is not a duty but a right to participate in science. This proposal, advanced by Sarah Chan at the University of Manchester’s Institute for Bioethics, looks like a slight shift in legal language. But it is the difference between science appearing as an obligation and as an opportunity for the ordinary citizen. In the latter case, one does not simply wait for scientists to invite willing subjects. Rather, potential subjects are invited to organise themselves and lobby the research community with their specific concerns. In our recent book, The Proactionary Imperative, Veronika Lipinska and I propose the concept of ‘hedgenetics’ to capture just this prospect for those who share socially relevant genetic traits. It may mean that scientists no longer exert final control over their research agenda, but the benefit is that they can be assured of steady public support for their work.

Question: A Counterpoint to the Technological Singularity?


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

And, for instance:

“… Technology is the savior for everything. That’s the point of this course. Technology is accelerating, everything is going to be good, technology is your friend … I think that’s a load of crap …” — Dr. Jonathan White

Back to the core of the White Swan argument:

That debate can wait for some future renaissance, not now. Resisting this idea would be wildly counterproductive to ensuring that Earth’s inhabitants are not annihilated.

People who neutralize outrageous Black Swans well in advance, making extraordinary preparations for known and unknown outliers and thereby practicing, in all practicality, the successful and prevailing White Swan and Transformative and Integrative Risk Management interdisciplinary problem-solving methodology, include:

(1.-) Sir Martin Rees PhD (cosmologist and astrophysicist), Astronomer Royal, Cambridge University Professor and former Royal Society President.

(2.-) Dr. Stephen William Hawking CH CBE FRS FRSA, English theoretical physicist, cosmologist, and author; Director of Research at the Centre for Theoretical Cosmology, University of Cambridge; formerly Lucasian Professor of Mathematics at the University of Cambridge.

(3.-) Prof. Nick Bostrom Ph.D. is a Swedish philosopher at St. Cross College, University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, the reversal test, and consequentialism. He holds a PhD from the London School of Economics (2000). He is the founding director of both The Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology as part of the Oxford Martin School at Oxford University.

(4.-) The US National Intelligence Council (NIC) […] The National Intelligence Council supports the Director of National Intelligence in his role as head of the Intelligence Community (IC) and is the IC’s center for long-term strategic analysis […] Since its establishment in 1979, the NIC has served as a bridge between the intelligence and policy communities, a source of deep substantive expertise on intelligence issues, and a facilitator of Intelligence Community collaboration and outreach […] The NIC’s National Intelligence Officers — drawn from government, academia, and the private sector — are the Intelligence Community’s senior experts on a range of regional and functional issues.

(5.-) U.S. Homeland Security’s FEMA (Federal Emergency Management Agency).

(6.-) The CIA and other U.S. government agencies.

(7.-) Stanford Research Institute (now SRI International).

(8.-) GBN (Global Business Network).

(9.-) Royal Dutch Shell.

(10.-) British Doomsday Preppers.

(11.-) Canadian Doomsday Preppers.

(12.-) Australian Doomsday Preppers.

(13.-) American Doomsday Preppers.

(14.-) The Disruptional Singularity book (ASIN: B00KQOEYLG).

(15.-) Scientific Prophets of Doom at https://www.youtube.com/watch?v=9bUe2-7jjtY

White Swans are always preparing for known and unknown outliers, fluidly changing the theater of operations by continually updating and upgrading their designated preparations.

Authored by Mr. Andres Agostini
White Swan Book Author
www.linkedin.com/in/andresagostini
www.amazon.com/author/Agostini

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst-case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn – the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst-case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase that the British philosopher Dylan Evans has coined for a demonstrated capacity that people have to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before it, would draw a similar conclusion.

As the old social bonds unravel, philosopher and member of the Lifeboat Foundation’s advisory board Professor Steve Fuller asks: can we balance free expression against security?


Justice has always been about modes of interconnectivity. Retributive justice – ‘eye for an eye’ stuff – recalls an age when kinship was how we related to each other. In the modern era, courtesy of the nation-state, bonds have been forged in terms of common laws, common language, common education, common roads, etc. The internet, understood as a global information and communication infrastructure, is both enhancing and replacing these bonds, resulting in new senses of what counts as ‘mine’, ‘yours’, ‘theirs’ and ‘ours’ – the building blocks of a just society…

Read the full article at IAI.TV

A presentation of the future strategic options available to both Tesla and the Chevy Volt, using the Holistic Business Model, as published in the book Reengineering Strategies & Tactics. Note: as a correction, GM will be investing $449 million, not the $1.4 billion I had stated in the video.

In Part 1, I show the strategic structural positions Tesla & Chevy Volt occupy. In Part 2, I show the future strategic options available to both, and potential mistakes they could be making.

If, after reviewing the videos, you would like a similar half-day review of your business, please do contact me.
