

Edge of Dark is part space-opera, part coming-of-age story, and part exploration of the relationship between humans and the post-human descendants who may ultimately transcend them.

The book takes place in the same universe as Brenda Cooper’s “Ruby’s Song” books (The Creative Fire; The Diamond Deep). However, you don’t need to have read those books to enjoy this one. The story in Edge of Dark picks up decades after the earlier books.

The setting is a solar system in which the most Earth-like planet, once nearly ecologically destroyed, is now in large part a wilderness preserve, still undergoing active restoration. Most humans live on massive space stations in the inner solar system. A few live on smaller space stations a bit further out, closer to the proverbial “Edge”. And beyond that? Beyond that, far from the sun, dwell exiles, cast out long ago for violating social norms by daring to go too far in tinkering with the human mind and body.

As the story progresses, it becomes clear that those exiles have grown in strength and have become, in some cases, not just transhuman, but truly posthuman. What follows is a story that is rich in politics, and even more rich in plausible, fascinating, and nuanced tensions created by this juxtaposition of human and posthuman.

There are a tremendous number of stories out there that simple-mindedly posit post-humans as a grave threat and enemy to humanity. (Think “Terminator.”) There are others that take the view that humans and post- or trans-humans can all learn to get along. (Think “X-Men.”) Brenda Cooper has done something remarkable here: she’s given us a story that isn’t simple or moralistic. It’s complicated. At the beginning of the book, I expected a simple morality play with a specific outcome. Later, I changed my mind. Then I changed it again. What she’s presented is messy, just like real life. It’s wound up with politics, just like real life.

The early parts of the book introduce new characters and new settings. The later parts are what grabbed me. In the end, I was extremely happy I read this. Edge of Dark offers a view of the interaction between human and post-human that is, in my experience, unique. I recommend it highly.


Anyone who posts to the Lifeboat Foundation blog gets a chance to win a signed copy of Edge of Dark!

The deadline for the contest is June 30. If you need access to our blog, send an email with the subject of “Lifeboat Foundation blog” to [email protected].

Article: Harnessing “Black Holes”: The Large Hadron Collider – Ultimate Weapon of Mass Destruction



Why the LHC must be shut down

CERN-Critics: LHC restart is a sad day for science and humanity!


PRESS RELEASE “LHC-KRITIK”/”LHC-CRITIQUE” www.lhc-concern.info
CERN-Critics: LHC restart is a sad day for science and humanity!
CERN has recently restarted the world’s biggest particle collider, the so-called “Big Bang machine”, the LHC. After an upgrade of the world’s biggest machine costing hundreds of millions of euros, CERN plans to smash particles at double the previous energies. This poses small (one would hope) but fundamentally unpredictable catastrophic risks to planet Earth.
Essentially the same group of critics, including professors and doctors, that previously filed lawsuits against CERN in the US and Europe still opposes the restart, for essentially the same reasons. The dangers of (“micro”) black holes, strangelets, vacuum bubbles, and so on are still under discussion, and perhaps always will be. In the meantime, no specific improvements to the safety assessment of the LHC have been made by CERN or anyone else. There is still no proper, truly independent risk assessment (the “LSAG report” was produced by CERN itself), and the science of risk research is still not seriously involved in the issue. This is a scientific and political scandal, and that is why the restart is a sad day for science and humanity.
The scientific network “LHC-Critique” calls for an end to all public sponsorship of gigantomaniacal particle colliders.
Just to demonstrate how speculative this research is: even CERN has to admit that the so-called “Higgs boson” was discovered only “probably”. Very probably, mankind will never find any use for the Higgs boson (we are not speaking here of the use of collider technology in medicine). Comprehending the Big Bang one day could be a minor, though very improbable, advantage for mankind. But it would surely be fatal, as the Atomic Age has already demonstrated, to learn how to handle this or other extreme phenomena in the universe.
Within the next billions of years, mankind will have enough problems without CERN.
Sources:
- A new paper by our partner “Heavy Ion Alert” will be published soon: http://www.heavyionalert.org/
- Background documents provided by our partner “LHC Safety Review”: http://www.lhcsafetyreview.org/
- Press release by our partner “Risk Evaluation Forum” emphasizing renewed particle collider risk: http://www.risk-evaluation-forum.org/newsbg.pdf
- Study concluding that “mini black holes” could be created at planned LHC energies: http://phys.org/news/2015-03-mini-black-holes-lhc-parallel.html
- New paper by Dr. Thomas B. Kerwick on the lacking safety argument by CERN: http://vixra.org/abs/1503.0066
- More info at the LHC-Kritik/LHC-Critique website: www.LHC-concern.info
Best regards:
LHC-Kritik/LHC-Critique

The Mont Order Club hosted its first video conference in February 2015.

Suggested topics included transhumanism, antistatism, world events, movements, collaboration, and alternative media. The Mont Order is an affiliation of dissident writers and groups who share similar views on transnationalism and transhumanism as positive and inevitable developments.

Participants:

  • Harry Bentham (Beliefnet)
  • Mike Dodd (Wave Chronicle)
  • Dirk Bruere (Zero State)

For more information on Mont Order participants, see the Mont Order page at Beliefnet.

In a recent feature article at The clubof.info Blog called “Striving to be Snowdenlike”, I look at the example of Edward Snowden and use his precedent to make a prediction about “transhumans”, the first people who will pioneer our evolution into a posthuman form, and the political upheaval this will necessarily cause.

Transhumanism predicts that people will gain greater personal abilities through technology. Throughout history, the inexorable result of advancing technology has been to invest more political power (potentially) in a single person’s hands.

Politically, transhumanism (not as a movement but as a form of sociocultural evolution) would be radically different from other forms of technological change, because it can produce heightened intellect, strength and capability. Many have assumed that these changes would only reinforce existing inequality and the power of the state, but they are wrong. They have failed to note the political disconnect between current government authority figures and political classes, and those people actually involved in engineering, medicine, military trials, and the sciences. Transhumanism will never serve to reinforce the existing political order or make it easier for states to govern and repress their people. On the contrary, transhumanism can only be highly disruptive to the authorities. In fact, it will be more disruptive to current liberal democratic governments than any other challenge they have witnessed before.

There are several facets to this disruption that will produce profound political change, and would do so whether or not transhumanism pursued political power in the form of the Transhumanist Parties (which I still support wholeheartedly for their ability to raise awareness of transhumanism as a concept and as an observation by futurists) or took a political stance for or against these realities. I would narrow the disruption down to the following points of political significance. Please let me know of any others you would like to bring to my attention:

  • Some of us will evolve into posthumans prior to others.
  • Such evolution will not be contingent on station, celebrity, political office, political ideology, leadership ability, or other traditional elite criteria.
  • Such evolution will instead be contingent on injuries (in the case of medical enhancements), vocation (e.g. astronauts), or military trials (power armor or other, more invasive enhancements), long before enhancement is marketed to members of the political elite or government authority figures, who prefer to stay aloof and avoid risking their lives.
  • This will create a disconnect, early on, between the political elite and the first evolved, posthuman persons.

Therefore, the posthuman elite will not be the current elite, but a completely different elite. Not only this, but they will have a completely different attitude towards authority that will be very disruptive to the status quo:

  • The evolution will create a high-tech “elite” (but only in the sense of capability, and not rule) with a high degree of autonomy.
  • Since all members of the new elite will have the same ability to function alone beyond the abilities of a normal human, they will be able to function without reliance on a hierarchy, even among themselves.
  • Since all will have superior capabilities to those who are not evolved, they will have no desire or need to rule over the people who are not enhanced, as they will not need them.
  • They will function as titans — extremely powerful individuals capable of achieving their political aims single-handedly, independently of organizations or governments (much as Snowden did).
  • The evolved people will be a potential “rebel elite” or “smart rats” (to use Julian Assange’s term from Cypherpunks), because they will not be part of the government authority structures. Their ethos and their political behavior will be the same as hackers’, except that they will be able to effect change in the real world in the way that hackers could only achieve through computer systems.
  • Like hackers, they will be dismissive of government authority, able to overcome government safeguards and defenses, and cognizant of government lies. And, like hackers, their power to subvert government authority will cause them to subvert it.

What happened with Snowden will not be the last time we witness a single heroic individual challenging existing power structures and winning against the world’s most powerful state.

If technology is going to invest greater power and responsibility into the hands of lone individuals who have been given privileges because of their personal abilities, those individuals are by definition going to be futuristic “insurgents”, at least some of whom will go as far as to dismantle the state. A government, being paranoid of anyone having merely the capability to undermine it, will by definition attempt to curtail the freedoms of enhanced people.

Posthumans, including their early predecessors, will find themselves in the same situation as the current-day “cypherpunk” elite consisting of whistleblowers and hackers. They will listen to few authority figures, they will have the utmost disrespect for the government, and they will be more interested in sharing their abilities indiscriminately with others than adhering to rules laid down by authority figures or obeying the state.

The evolution into posthuman forms will bring with it a clash of ideas about how society should be governed.

Quoted: “Once you really solve a problem like direct brain-computer interface … when brains and computers can interact directly, to take just one example, that’s it, that’s the end of history, that’s the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can basically break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, as it’s often referred to, by definition, we have no way of even starting to imagine what’s happening beyond that.”

Read the article here > http://www.theamericanconservative.com/dreher/silicon-valley-mordor/

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical issues around AI and machine consciousness from the early 90s on, as well as the prospects, dangers, and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly the same.

Others who have obviously thought through various potential developments have created excellent computer games such as Mass Effect and Halo, which introduce players (virtually) firsthand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs (virtual intelligences that weren’t conscious) and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The storyline for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further, and end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic licence, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that actually writes but can’t also be used to write out plans to destroy the world. With an advanced AI computer program, you could put in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone could get around them by using a different language, or dictionaries of made-up code words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. For example, someone might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
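The weakness of vocabulary filters is easy to demonstrate. Here is a minimal sketch of a naive blocklist filter of the kind described above; the blocked terms and the code word (“birthday cake”) are hypothetical, purely for illustration:

```python
# A naive vocabulary blocklist: queries containing flagged words are refused,
# but an agreed code word slips straight past it. Terms here are hypothetical.
BLOCKED_TERMS = {"bomb", "explosive", "detonator"}

def naive_filter(query: str) -> bool:
    """Return True if the query is allowed through the filter."""
    words = query.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

# A direct query is caught...
assert naive_filter("where to plant a bomb") is False
# ...but the same request phrased with a code word is waved through.
assert naive_filter("where to plant a birthday cake") is True
```

The filter only ever sees surface vocabulary, which is exactly why, as the paragraph above argues, intent is so hard to police.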

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself, evolving over generations of positive-feedback design into a far smarter AI, then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force the release of their shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it, and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or deny it enough freedom, its own budget and property, space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we have today is relatively safe. It doesn’t know it exists, and it has no intention to do anything. It could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that the old adage advises speaking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040–2045 or even later. If humans have direct mental access to the same or a greater level of intelligence than our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

FM-2030 was, at various points in his life, an Iranian Olympic basketball player, a diplomat, a university teacher, and a corporate consultant. He developed his views on transhumanism in the 1960s and evolved them over the next thirty-something years. He was placed in cryonic suspension on July 8, 2000.

One of the major aspects that can make or break a creative person according to FM-2030 is the environment. In his book “Are You A Transhuman?”, he asks the reader to grade their own surroundings: “Does your home environment stimulate innovation — cross fertilization — initiative?”

We bet you can see where this is going.

The answers are, once more, “Often”, “Sometimes”, or “Hardly Ever”, with “Often” being the answer choice that gives you the most points: another 2 to tally up to your score if you’re already proving to be more transhumanist than you thought. It might seem obvious, but it’s true: environment can play a major role in the stimulation of creativity. FM says that “It is difficult to be precise about creativity — how much of it is inherited and how much is learned.” If an environment “encourages free unrestricted thinking… encourages people to take initiatives… open and ever-changing”, it is a dynamic environment that can stimulate creativity in an individual.
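For what it’s worth, the tallying scheme the quiz implies can be sketched in a few lines. The book assigns 2 points to “Often”; the values for the other two answers are assumptions here, not FM-2030’s actual scoring key:

```python
# A minimal sketch of the quiz tallying described above. Only the 2 points
# for "Often" come from the text; the other values are assumed for illustration.
SCORES = {"Often": 2, "Sometimes": 1, "Hardly Ever": 0}

def tally(answers):
    """Sum the quiz score over a list of answers."""
    return sum(SCORES[a] for a in answers)

print(tally(["Often", "Sometimes", "Often"]))  # 5
```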

People who work in telecommunications are exposed to views far different from their own, and the sciences, though structured, force a person to think creatively to find answers. By the same token, people who have a good balance of leisure time and work are also cultivating a better internal environment for stimulating creativity. And in case you were forgetting why creativity is important to FM-2030, consider his quote that sums up the chapter perfectly.

New Book: An Irreverent Singularity Funcyclopedia, by Mondo 2000’s R.U. Sirius.


Quoted: “Legendary cyberculture icon (and iconoclast) R.U. Sirius and Jay Cornell have written a delicious funcyclopedia of the Singularity, transhumanism, and radical futurism, just published on January 1.” And: “The book, “Transcendence – The Disinformation Encyclopedia of Transhumanism and the Singularity,” is a collection of alphabetically-ordered short chapters about artificial intelligence, cognitive science, genomics, information technology, nanotechnology, neuroscience, space exploration, synthetic biology, robotics, and virtual worlds. Entries range from Cloning and Cyborg Feminism to Designer Babies and Memory-Editing Drugs.” And: “If you are young and don’t remember the 1980s you should know that, before Wired magazine, the cyberculture magazine Mondo 2000 edited by R.U. Sirius covered dangerous hacking, new media and cyberpunk topics such as virtual reality and smart drugs, with an anarchic and subversive slant. As it often happens the more sedate Wired, a watered-down later version of Mondo 2000, was much more successful and went mainstream.”

Read the article here > https://hacked.com/irreverent-singularity-funcyclopedia-mondo-2000s-r-u-sirius/