
Quoted: “If you understand the core innovations around the blockchain idea, you’ll realize that the technology concept behind it is similar to that of a database, except that the way you interact with that database is very different.

“The blockchain concept represents a paradigm shift in how software engineers will write software applications in the future, and it is one of the key concepts behind the Bitcoin revolution that need to be well understood. In this post, I’d like to explain 5 of these concepts, and how they interrelate to one another in the context of this new computing paradigm that is unravelling in front of us. They are: the blockchain, decentralized consensus, trusted computing, smart contracts and proof of work / stake. This computing paradigm is important, because it is a catalyst for the creation of decentralized applications, a next-step evolution from distributed computing architectural constructs.”
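To make the link between the blockchain, decentralized consensus and proof of work a little more concrete, here is a minimal Python sketch of a toy chain of hashed blocks. It is an illustration only, assuming SHA-256 hashing and an arbitrary difficulty of four leading zeros; it is not the Bitcoin protocol, and the transaction strings are hypothetical.

```python
# Toy blockchain: each block commits to the hash of its predecessor, and
# "mining" a block means searching for a nonce whose hash meets a difficulty
# target (proof of work). Illustration only; not the Bitcoin protocol.
import hashlib
import json
import time


def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()


def mine_block(prev_hash: str, transactions: list, difficulty: int = 4) -> dict:
    """Find a nonce so the block hash starts with `difficulty` zeros."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
        "nonce": 0,
    }
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block


# Tampering with an earlier block changes its hash and breaks every later
# block's prev_hash link, which is what makes the shared ledger tamper-evident.
genesis = mine_block("0" * 64, ["genesis"])
block1 = mine_block(hash_block(genesis), ["alice pays bob 5"])  # hypothetical tx
print(hash_block(block1))
```

Decentralized consensus then amounts to many independent nodes agreeing to extend whichever valid chain embodies the most cumulative work, so no single party controls the shared database.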


Read the article here > http://startupmanagement.org/2014/12/27/the-blockchain-is-the-new-database-get-ready-to-rewrite-everything/

A whole nation found it fitting that a university professor with a lifelong calling for endocrinology (hormonal diseases) was turned, by state decree, into a university professor of gastroenterology (diseases of the digestive tract), a field in which she had never specialized. Her conscientious “no” stripped her of her honor, her title, her pension and her inherited house. The book on the biological foundations of ageing that she wrote with a prestigious publisher, while hoping for the courts to help, did not prevent the raiding of her house in the presence of watchful police. State TV defamed her as “Germany’s laziest professor.”

The same resilience shown here by a human being I can, for once, attribute to a mere brainchild – Einstein’s most famous natural constant, c. She, too, was degraded, stripped of her global validity and allowed to retain only her local validity. A bit like the non-removed M.D. of a degraded professor. Here, too, the whole profession decided to keep its head down rather than come to the rescue of the honor of the unjustly defamed one.

There is a difference, of course. The honor of a person ranks much higher than the dignity of a constant. The scandal nonetheless is even bigger here: The physical survival of every person and the planet itself is tied to the rehabilitation of the poor constant.

Can the hatred displayed by a nation towards a person really be paralleled by the hatred displayed by a worldwide profession towards a constant? The poor Einsteinian constant c-global has been shown to make black holes unsafe. Therefore, the prestigious attempt to produce black holes down on earth (#1) would at first be undetectable if successful, and (#2) would, with a probability in the percentage range, shrink the planet to 2 cm after a few years’ delay.

I do not defend this result here; it has been published repeatedly over the past six years and has never been called into question in the scientific literature. As with any result, there remains a chance that a counterproof can still be found. I only draw attention to the public hatred – self-hatred – displayed: the planet is watching while an international institution attempts to produce black holes down on earth, scheduled to start at twice world-record energies in ten weeks’ time. My complaint is only this: “Why not first update the six-year-old safety report (LSAG)?” The danger of self-annihilation remains un-disproved while the planet behaves like a hypnotized chicken, eyes open. Einstein’s constant c-global weeps, and so does every stone.

Are there no intellectuals left in our time? My late friend John Wheeler, who coined the name “black hole” (which proves he was a major soul), would not believe this. And the man who said “j’accuse” 120 years ago was still heard. To date, the world public allows CERN to let the message “Honey, I shrunk the earth” stand for years. Even if I am wrong, the fact remains that CERN believes it can afford to refuse to defend the claim that what it is going to do is safe for humankind. Every mother joins me: this movie without us!

You are no doubt reminded of the collective feeblemindedness displayed to date in the face of Ebola: there exists a method, an air bridge, by which the ongoing exponential growth could be halted, but no one demands it, I guess because it is too expensive. The exponential growth of a black hole inside the earth, by contrast, is unstoppable owing to c-global. We still have a chance to prevent the worst if you insist that CERN renew its safety report before re-starting at twice the previous energy. All I am asking is: WHY NOT, PLEASE???

Dedicated to Alfred Dreyfus and the Great French nation.

(This flyer is written for the attention of President Obama, as well as for you who have just now read it.)

Quoted: “Ethereum will also be a decentralised exchange system, but with one big distinction. While Bitcoin allows transactions, Ethereum aims to offer a system by which arbitrary messages can be passed to the blockchain. More to the point, these messages can contain code, written in a Turing-complete scripting language native to Ethereum. In simple terms, Ethereum claims to allow users to write entire programs and have the blockchain execute them on the creator’s behalf. Crucially, Turing-completeness means that in theory any program that could be made to run on a computer should run in Ethereum.” And, quoted: “As a more concrete use-case, Ethereum could be utilised to create smart contracts, pieces of code that once deployed become autonomous agents in their own right, executing pre-programmed instructions. An example could be escrow services, which automatically release funds to a seller once a buyer verifies that they have received the agreed products.”
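The escrow use-case quoted above can be pictured as a small state machine. The sketch below is a hedged illustration in Python with hypothetical party names and amounts; an actual Ethereum smart contract would be written in a contract language such as Solidity and executed by the network itself rather than run locally like this.

```python
# The escrow contract from the quote, sketched as a plain state machine.
# Party names and the amount are hypothetical; on Ethereum this logic would be
# written in a contract language such as Solidity and enforced by the network.
class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.state = "AWAITING_PAYMENT"
        self.balances = {buyer: amount, seller: 0}

    def deposit(self, sender: str) -> None:
        """The buyer locks the agreed funds in the contract."""
        if sender != self.buyer or self.state != "AWAITING_PAYMENT":
            raise ValueError("only the buyer may deposit, and only once")
        self.balances[self.buyer] -= self.amount
        self.state = "AWAITING_DELIVERY"

    def confirm_delivery(self, sender: str) -> None:
        """The buyer confirms receipt; funds are released to the seller."""
        if sender != self.buyer or self.state != "AWAITING_DELIVERY":
            raise ValueError("only the buyer may confirm a pending delivery")
        self.balances[self.seller] += self.amount
        self.state = "COMPLETE"


escrow = EscrowContract(buyer="alice", seller="bob", amount=100)
escrow.deposit("alice")            # funds are now held by the contract
escrow.confirm_delivery("alice")   # buyer verifies the goods arrived
print(escrow.balances)             # {'alice': 0, 'bob': 100}
```

The point of deploying such logic on a blockchain rather than on one party’s server is that, once deployed, neither buyer nor seller can alter the rules; the code itself acts as the autonomous escrow agent described in the quote.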

Read Part One of this Series here » Ethereum — Bitcoin 2.0? And, What Is Ethereum.

Read Part Two of this Series here » Ethereum — Opportunities and Challenges.

Read Part Three of this Series here » Ethereum — A Summary.

Quoted: “Bitcoin technology offers a fundamentally different approach to vote collection with its decentralized and automated secure protocol. It solves the problems of both paper ballot and electronic voting machines, enabling a cost effective, efficient, open system that is easily audited by both individual voters and the entire community. Bitcoin technology can enable a system where every voter can verify that their vote was counted, see votes for different candidates/issues cast in real time, and be sure that there is no fraud or manipulation by election workers.”
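As a rough illustration of the verification claim in the quote, the sketch below records each ballot on an append-only ledger as a salted hash commitment, hands the voter the salt as a receipt, and lets anyone recount the ledger. It is a toy model under simplifying assumptions (no ballot secrecy, no safeguard against double voting, a plain Python list standing in for a blockchain), not an actual Bitcoin-based voting protocol.

```python
# Toy model of publicly verifiable voting: each ballot is appended to a public
# ledger together with a salted hash commitment, the voter keeps the salt as a
# receipt, and anyone can recount the ledger. Candidates are stored in plain
# text so the running tally is visible; a real system would also need ballot
# secrecy and protection against double voting.
import hashlib
import secrets

ledger = []  # stand-in for an append-only public blockchain


def cast_vote(candidate: str) -> str:
    """Record a ballot and return the voter's private receipt (the salt)."""
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{salt}:{candidate}".encode()).hexdigest()
    ledger.append({"commitment": commitment, "candidate": candidate})
    return salt


def verify_vote(receipt: str, candidate: str) -> bool:
    """Voter checks that their exact ballot appears on the ledger."""
    expected = hashlib.sha256(f"{receipt}:{candidate}".encode()).hexdigest()
    return any(entry["commitment"] == expected for entry in ledger)


receipt = cast_vote("candidate_a")
print(verify_vote(receipt, "candidate_a"))  # True: the vote was counted

tally = {}
for entry in ledger:
    tally[entry["candidate"]] = tally.get(entry["candidate"], 0) + 1
print(tally)  # anyone can recompute the result from the public ledger
```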

Read the article here » http://www.entrepreneur.com/article/239809?hootPostID=ba473face1754ce69f6a80aacc8412c7

The audio in this archive file was compiled from a 1984 meeting of futurists, transhumanists, and progressives. The main topic of the meeting was the most appropriate ways to engage or advance these philosophies within government. For example, one significant point of discussion centered around whether running for office was an effective way to drive change.

In the course of the discussion, the primary viewpoint FM-2030 espoused was that some aspects of government — especially the concept of leadership — would become obsolete or be replaced by other aspects of society (see Part 1). However, he also expressed what he believed the core of a ‘true’ democracy might look like. This archive file is assembled from excerpts of that section of the discussion.

The audio in this archive file was compiled from a 1984 meeting of futurists, transhumanists & progressives. The main topic of the meeting was the most appropriate ways to engage or advance these philosophies within government. For example, one significant point of discussion centered around whether running for office was an effective way to drive change.

The excerpts in this archive file collect many of futurist FM 2030’s thoughts over the course of the discussion.

About FM 2030: FM 2030 was, at various points in his life, an Iranian Olympic basketball player, a diplomat, a university teacher, and a corporate consultant. He developed his views on transhumanism in the 1960s and evolved them over the next thirty-something years. He was placed in cryonic suspension on July 8, 2000.

What follows is my position piece for London’s FutureFest 2013, the website for which no longer exists.

Medicine is a very ancient practice. In fact, it is so ancient that it may have become obsolete. Medicine aims to restore the mind and body to their natural state relative to an individual’s stage in the life cycle. The idea has been to live as well as possible but also die well when the time came. The sense of what is ‘natural’ was tied to statistically normal ways of living in particular cultures. Past conceptions of health dictated future medical practice. In this respect, medical practitioners may have been wise but they certainly were not progressive.

However, this began to change in the mid-19th century when the great medical experimenter, Claude Bernard, began to champion the idea that medicine should be about the indefinite delaying, if not outright overcoming, of death. Bernard saw organisms as perpetual motion machines in an endless struggle to bring order to an environment that always threatens to consume them. That ‘order’ consists in sustaining the conditions needed to maintain an organism’s indefinite existence. Toward this end, Bernard enthusiastically used animals as living laboratories for testing his various hypotheses.

Historians identify Bernard’s sensibility with the advent of ‘modern medicine’, an increasingly high-tech and aspirational enterprise, dedicated to extending the full panoply of human capacities indefinitely. On this view, scientific training trumps practitioner experience, radically invasive and reconstructive procedures become the norm, and death on a physician’s watch is taken to be the ultimate failure. Humanity 2.0 takes this way of thinking to the next level, which involves the abolition of medicine itself. But what exactly would that mean – and what would replace it?

The short answer is bioengineering, the leading edge of which is ‘synthetic biology’. The molecular revolution in the life sciences, which began in earnest with the discovery of DNA’s function in 1953, came about when scientists trained in physics and chemistry entered biology. What is sometimes called ‘genomic medicine’ now promises to bring an engineer’s eye to improving the human condition without presuming any limits to what might count as optimal performance. In that case, ‘standards’ do not refer to some natural norm of health, but to features of an organism’s design that enable its parts to be ‘interoperable’ in service of its life processes.

In this brave new ‘post-medical’ world, there is always room for improvement and, in that sense, everyone may be seen as ‘underperforming’ if not outright disabled. The prospect suggests a series of questions for both the individual and society: (1) Which dimensions of the human condition are worth extending – and how far should we go? (2) Can we afford to allow everyone a free choice in the matter, given the likely skew of the risky decisions that people might take? (3) How shall these improvements be implemented? While bioengineering is popularly associated with nano-interventions inside the body, of course similarly targeted interventions can be made outside the body, or indeed many bodies, to produce ‘smart habitats’ that channel and reinforce desirable emergent traits and behaviours that may even leave long-term genetic traces.

However these questions are answered, it is clear that people will be encouraged, if not legally required, to learn more about how their minds and bodies work. At the same time, there will no longer be any pressure to place one’s fate in the hands of a physician, who instead will function as a paid consultant on a need-to-know and take-it-or-leave-it basis. People will take greater responsibility for the regular maintenance and upgrading of their minds and bodies – and society will learn to tolerate the diversity of human conditions that will result from this newfound sense of autonomy.

In 1906 the great American pragmatist philosopher William James delivered a public lecture entitled, ‘The Moral Equivalent of War’. James imagined a point in the foreseeable future when states would rationally decide against military options to resolve their differences. While he welcomed this prospect, he also believed that the abolition of warfare would remove an important pretext for people to think beyond their own individual survival and toward some greater end, perhaps one that others might end up enjoying more fully. What then might replace war’s altruistic side?

It is telling that the most famous political speech to adopt James’ title was US President Jimmy Carter’s 1977 call for national energy independence in response to the Arab oil embargo. Carter characterised the battle ahead as really about America’s own ignorance and complacency rather than some Middle Eastern foe. While Carter’s critics pounced on his trademark moralism, they should have looked instead to his training as a nuclear scientist. Historically speaking, nothing can beat a science-led agenda to inspire a long-term, focused shift in a population’s default behaviours. Louis Pasteur perhaps first exploited this point by declaring war on the germs that he had shown lay behind not only human and animal disease but also France’s failing wine and silk industries. Moreover, Richard Nixon’s ‘war on cancer’, first declared in 1971, continues to be prosecuted on the terrain of genomic medicine, even though arguably a much greater impact on the human condition could have been achieved by equipping the ongoing ‘war on poverty’ with comparable resources and resoluteness.

Science’s ability to step in as war’s moral equivalent has less to do with whatever personal authority scientists command than with the universal scope of scientific knowledge claims. Even if today’s science is bound to be superseded, its import potentially bears on everyone’s life. Once that point is understood, it is easy to see how each person could be personally invested in advancing the cause of scientific research. In the heyday of the welfare state, that point was generally understood. Thus, in The Gift Relationship, perhaps the most influential work in British social policy of the past fifty years, Richard Titmuss argued, by analogy with voluntary blood donation, that citizens have a duty to participate as research subjects, but not because of the unlikely event that they might directly benefit from their particular experiment. Rather, citizens should participate because they would have already benefitted from experiments involving their fellow citizens and will continue to benefit similarly in the future.

However, this neat fit between science and altruism has been undermined over the past quarter-century on two main fronts. One stems from the legacy of Nazi Germany, where the duty to participate in research was turned into a vehicle to punish undesirables by studying their behaviour under various ‘extreme conditions’. Indicative of the horrific nature of this research is that even today few are willing to discuss any scientifically interesting results that might have come from it. Indeed, the pendulum has swung the other way. Elaborate research ethics codes enforced by professional scientific bodies and university ‘institutional review boards’ protect both scientist and subject in ways that arguably discourage either from having much to do with the other. Even defenders of today’s ethical guidelines generally concede that had such codes been in place over the past two centuries, science would have progressed at a much slower pace.

The other and more current challenge to the idea that citizens have a duty to participate in research comes from the increasing privatisation of science. If a state today were to require citizen participation in drug trials, as it might jury duty or military service, the most likely beneficiary would be a transnational pharmaceutical firm capable of quickly exploiting the findings for profitable products. What may be needed, then, is not a duty but a right to participate in science. This proposal, advanced by Sarah Chan at the University of Manchester’s Institute for Bioethics, looks like a slight shift in legal language. But it is the difference between science appearing as an obligation and an opportunity for the ordinary citizen. In the latter case, one does not simply wait for scientists to invite willing subjects. Rather, potential subjects are invited to organize themselves and lobby the research community with their specific concerns. In our recent book, The Proactionary Imperative, Veronika Lipinska and I propose the concept of ‘hedgenetics’ to capture just this prospect for those who share socially relevant genetic traits. It may mean that scientists no longer exert final control over their research agenda, but the benefit is that they can be assured of steady public support for their work.

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn, the model for Stanley Kubrick’s Dr Strangelove, routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase the British philosopher Dylan Evans has coined for the demonstrated capacity people have to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Scenarios like Bostrom’s superintelligence were already being handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before it, would draw a similar conclusion.

I am very pleased to announce the publication of my book “Reengineering Strategies & Tactics”.

The book is based on more than two decades in manufacturing & management consulting, and presents a new business model, the Holistic Business Model, which ties together operations, revenue generation and business strategy. It also enables strategy sensitivity analysis, and much more. Watch the video. Buy the book & enjoy rethinking & re-strategizing your company.

I might add that this is much better than anything you can get out of McKinsey, Boston Consulting Group, Booz Allen Hamilton or Bain Capital.

The book details are:

Title: Reengineering Strategies & Tactics
Subtitle: Know Your Company’s and Your Competitors’ Strategies and Tactics Using Public Information
Publisher: Universal Publishers
Date: July, 2014
Pages: 315
ISBN-10: 1627340157
ISBN-13: 9781627340151

Publisher’s Link: http://www.universal-publishers.com/book.php?method=ISBN&book=1627340157

First 25 pages (free): http://www.bookpump.com/upb/pdf-b/7340157b.pdf

Synopsis:
The Holistic Business Model identifies, in a structured manner, the 48 structural positions and 32 strategies your company can effect, resulting in 2 million variations in your company’s strategic environment. This complexity is handled by three layers, consisting of the Operations Layer, the Revenue Transaction Layer and the Business Management Layer.

Strategy is the migration from one structural position to another in the Business Management Layer. Therefore, the Model prevents investors, business owners and corporate managers from making incorrect moves, while both enabling them to see their future options and enhancing the quality of their management decisions.

The Operations Layer explains why lean manufacturing (JIT and Kanbans) works when it does, when it does not, and the important considerations when setting up a manufacturing operation using lessons learned from the semiconductor and Fast Moving Consumer Goods industries. The Revenue Transaction Layer identifies how your company generates its revenue.

Based on 20+ years in manufacturing and management consulting in multinational, large, medium & small companies, Solomon invented the Holistic Business Model, which requires only public information to determine your company’s and your competitors’ strategies. Four case studies are presented: a manufacturing operation, a home builder, a non-profit and a sea port.