
Social Software Society for Safety.

Is there any real scarcity? Perhaps friendship, because it requires time, shared history, and attention, is the ultimate scarcity; but must it always be so?

A thoroughgoing naturalist, I stipulate that the value of all objects supervenes on their natural properties; rational evaluation of them is constrained by the facts. If I choose one car instead of its identical copy simply because one has been stamped with a “brand,” this is the very definition of irrationality: if the two objects are exactly the same, you must be indifferent or violate the axioms of decision theory and identity theory. If I used a Replicator Ray to duplicate the Hope Diamond, which would you choose: the original, based on its history (it was stolen, traveled around the world, etc.), or the duplicate? They are identical!

What happens to the value of the original? Is it worth half as much because now there are two? If I make a third copy, is it worth a third? Nonsense: value has nothing to do with scarcity. A piece of feces may be totally unique in shape, just like a snowflake, but it has no value. The intrinsic value of objects depends on their properties. Instrumental value depends on what they can be used for (converted to intrinsic value).

Now I switch the two Hope Diamonds, so that neither of us knows which is which. Do you pout and refuse to take one?

Now I duplicate your parents. I’m going to kill one set of them—which do you save? The originals. You owe them a duty simply because of the authenticity of the past relationship—the history you share with them. This is the difference between subjects and objects.

In an October 5, 2007 article in the Wall Street Journal (also a Summer 2007 feature in The New Atlantis), Christine Rosen argues that “because friendship depends on mutual revelations that are concealed from the rest of the world, it can flourish only within the bounds of privacy; the idea of public friendship is an oxymoron.”

What, then, does the arrival of the transparent society bode for friendship? With ubiquitous computing (devices built into our clothes and embedded in the environment, cameras linked to our retinal displays, recording and streaming everything to our digital backup, our “life log”), social networking software will be integrated into all interactions. Your face recognition software will pull up the profile of anyone you meet, instantly searching keywords for common interests, friends, past lovers, and so on. Video testimonial files will pop up: don’t date this guy! You will get an accomplishment “rating” for different areas: career, hobbies, and more. Edited montages of your greatest hits and misses will populate the net; spin control, reputation management, and prestige brokering will be the function of the “banks” of the future. Myspace and Facebook are nothing compared to what will come.
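As a toy illustration of the kind of instant matching imagined above, here is a minimal sketch. The profile fields, sample data, and scoring rule are my own hypothetical inventions, not any real social network’s API.

```python
# Hypothetical sketch of the "instant profile match" idea above.
# Profile fields, sample data, and the score are invented for illustration;
# no real social-networking API is implied.

def common_ground(profile_a: dict, profile_b: dict) -> dict:
    """Return the items two hypothetical profiles have in common, plus a crude score."""
    fields = ("interests", "friends", "past_employers")
    overlap = {
        field: sorted(set(profile_a.get(field, [])) & set(profile_b.get(field, [])))
        for field in fields
    }
    overlap["score"] = sum(len(overlap[field]) for field in fields)
    return overlap

me = {"interests": ["bodybuilding", "transhumanism"], "friends": ["alice"]}
stranger = {"interests": ["transhumanism", "poker"], "friends": ["alice", "bob"]}
print(common_ground(me, stranger))
# {'interests': ['transhumanism'], 'friends': ['alice'], 'past_employers': [], 'score': 2}
```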

Rosen argues that Facebook and Myspace dilute the word “friend.” With such a quantity of “friends,” we diminish the intensity and quality of relationships. We already rank “real world” friends unconsciously; Myspace makes it explicit with a top friends list. Rosen is aware that social network sites create a new type of accountability: records of IMs, personal news feeds, and the like mean that you can never claim to have been unavailable; you will get caught in your white lie.

But the true potential of the transparent society lies in what I call “ruthless objectivity.” I’ve begun practicing this myself as a form of cognitive behavior therapy: confronting yourself on tape or video forces you to see how you interact with the world, allowing you to overcome negativity, if you can take the heat.

Within a decade, “omniveillance” and life logging may be the rule. Acknowledge your failings and insecurities and they no longer have power—except of course if they are things you can’t change. Thus the technological transhumanist imperative to overcome limitations—what is disease, but just such an unfair limitation?

Here is the key point for those of us involved with the Lifeboat Foundation. We can design defense systems forever, but at the end of the day the best we can do is minimize accidental harm. You will never stop 100% of the people determined to go on a rampage or commit acts of terrorism. You can see this already with gun control: you’ll never build a gun smart enough not to be shot in anger unless you undermine the technology itself. You can’t wait for the gun to authenticate in the heat of battle, unless, I suppose, it were hooked up to instant face recognition software (we could postulate scenarios all day!). When events like Columbine or the Virginia Tech shootings happen, I am always shocked, not that they occurred, but that we have as few rampages each year as we do!

Most of our social institutions (our factory-style education system, for one) are set up to create winners and losers: artificial scarcity that breeds resentment, failure, exclusion, marginalization, and anger. No wonder people react unreasonably to an unreasonable world; in a twisted way, it is only reasonable. People are actually more resilient than they are given credit for.

The Federalist Papers, written during the debate over the formation of the institutions of the US government, famously argue for a system of checks and balances, so that ambition will counter ambition and greed will counter greed. We could expect reasoned debate and participatory democracy from a government of angels, but we have a government of men, so we must assume the worst and design accordingly. Every man is not a Socrates.

This design approach won’t work in the 21st century. You can only get so far assuming we are sociopaths. We are about to reverse engineer the brain. The mystery of empathy, how Gandhi, Mother Teresa, or Jesus managed to care for the unwashed masses, will become apparent. Anyone who wants to be more moral will be able to work at it, just like going to the gym. The science of empathy is no more mysterious than that of muscle building.

Empathy enhancements, along with constant cognitive therapy thanks to total omniveillance, can make us a much more tolerant and humane, and therefore SAFER, society. Of course, people will have to rethink the idea of privacy. My prediction is that once cheap surveillance technology arrives, the only stable endpoint is total recording of everything, mundane and sensational. Privacy has no intrinsic value; it derives instrumental value only from the fact that people are evil and will use information to hurt one another. Futurists often seem to miss the essential point that $1 spent on lessening the chances that somebody is going to be alienated can make us a lot safer in aggregate than $1 million spent on an elaborate technical solution. Not that these approaches are mutually exclusive; I argue that the only real hope for humanity is to rewrite our neurological source code.

Facebook is still rather crude; it will give way to the next generation. We control the use to which our technology is put; it does not control us unless we allow it to. When the Virginia Tech “massacre” occurred on April 16, 2007, I scoffed at the memorial groups that sprang up, like an emotional echo chamber: thousands of people quite distant from the actual events (not direct friends or family members) created pages and testimonials.

Another symptom of our ADD society: only four days before, Don Imus had been fired over his “nappy headed hoes” remark; if the timing had been a bit different, his scandal would have been forgotten before it got started.

My initial reaction was wrong: if these people want to “grieve” this way, perhaps it is of some comfort to the survivors and the victims’ families. It certainly doesn’t hurt anyone.

The Net is radically democratic and empowering. It isn’t one-to-many broadcasting, but many-to-many. I don’t like blogs: who cares about your mundane life, what you had for dinner at some restaurant? I don’t want to start a regular blog because if I got a following I’d have to keep cultivating it; you’re only as good as your last post.

And yet I’m guilty of the same narcissism, uploading myself to YouTube now, logging my bodybuilding photos for all to see. At least I try to be interesting; my pictures and slideshows, if ridiculous, are more entertaining than 50% of the material out there. As expected, I am starting to attract a gay following on YouTube; at least they appreciate the male physique. There is a difference between blogging your life and sharing an area you’ve devoted 13 years to and achieved something in; ultimately, I think that comes through.

As for relationships, today there are considerable limits to our empathy and attention; the latest studies show an average of five “close” friends. The superlative “best” friend admits of only one, regardless of how many BFFs you may claim. Polyamory (multiple-person marriage) doesn’t work well with our current cognitive architecture, and love triangles are socially unstable despite what geometry might say (though at a 60% divorce rate, regular marriage ain’t doing a lot better).

A God or superintelligence might have the cognitive capacity to attend and respond to every aspect of your being, multiplied by six billion, and truly be everyone’s best friend. Until then, we’ll have to be content to use our new social software to relate, not alienate. You never know who’s watching.

Joseph

University of Pittsburgh researchers injected a therapy previously found to protect cells from radiation damage into the bone marrow of mice, then dosed them with some 950 roentgens of radiation — nearly twice the amount needed to kill a person in just five hours. Nine in 10 of the therapy-receiving mice survived, compared to 58 percent of the control group.

Between 30 and 330 days, there were no differences in survival rates between the experimental and control group mice, indicating that systemic MnSOD-PL treatment was not itself harmful to survival.

The researchers will need to verify whether this treatment would work in humans.

This is part of the early development of genetic modification to increase people’s biological defences (shields) against nuclear, biological, and chemical threats. We may not be able to prevent all attacks, so we should improve our toughness and survivability. We should still try to stop attacks and create the conditions for fewer of them.

There are dozens of published existential risks; there are undoubtedly many more that Nick Bostrom did not think of in his paper on the subject. Ideally, the Lifeboat Foundation and other organizations would identify each of these risks and take action to combat them all, but this simply isn’t realistic. We have a finite budget and a finite number of man-hours to spend on the problem, and our resources aren’t even particularly large compared with other non-profit organizations. If Lifeboat or other organizations are going to take serious action against existential risk, we need to identify the areas where we can do the most good, even at the expense of ignoring other risks. Humans like to totally eliminate risks, but this is a cognitive bias; it does not correspond to the most effective strategy. In general, when assessing existential risks, there are a number of useful heuristics:

- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.

- Any risk which is totally natural (could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without getting killed off. To estimate the probability of these risks, use Laplace’s Law of Succession (see the sketch after this list).

- Risks which we cannot affect the probability of can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday, if we can’t do anything about it.
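To make the Laplace heuristic concrete, here is a minimal sketch. The 100,000-year figure is the one used in the heuristic above; the choice of a one-year forecast window is mine, for illustration only.

```python
def laplace_rule_of_succession(occurrences: int, trials: int) -> float:
    """Laplace's Law of Succession: estimated probability that an event
    occurs on the next trial, given `occurrences` in `trials` past trials."""
    return (occurrences + 1) / (trials + 2)

# Treat each year of human existence as a trial in which a natural
# extinction-level event did not occur (zero occurrences so far).
years_of_human_existence = 100_000
p_next_year = laplace_rule_of_succession(0, years_of_human_existence)
print(f"Estimated annual probability of a purely natural extinction event: {p_next_year:.2e}")
# ~1.0e-05, i.e. roughly 1 in 100,002
```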

Some specific risks which can be safely ignored:

- Particle accelerator accidents. We don’t yet know enough high-energy physics to say conclusively that a particle accelerator could never create a true vacuum, stable strangelet, or another universe-destroying particle. Luckily, we don’t have to; cosmic rays have been bombarding us for the past four billion years, with energies a million times higher than anything we can create in an accelerator. If it were possible to annihilate the planet with a high-energy particle collision, it would have happened already.

- The simulation gets shut down. The idea that “the universe is a simulation” is equally good at explaining every outcome; no matter what happens in the universe, you can concoct some reason why the simulators would engineer it. Which specific actions would make the universe safer from being shut down? We have no clue, and barring a revelation from On High, we have no way to find out. If we do try to take action to stop the universe from being shut down, it could just as easily make the risk worse.

- A long list of natural scenarios. To quote Nick Bostrom: “solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios.” We can’t prevent most of these anyway, even if they were serious risks.

Some specific risks which should be given lower priority:

- Asteroid impact. This is a serious risk, but it still has a fairly low probability, on the order of one in 10^5 to 10^7 for something that would threaten the human species within the next century or so. Mitigation is also likely to be quite expensive compared to other risks.

- Global climate change. While this is fairly probable, the impact of it isn’t likely to be severe enough to qualify as an existential risk. The IPCC Fourth Assessment Report has concluded that it is “very likely” that there will be more heat waves and heavy rainfall events, while it is “likely” that there will be more droughts, hurricanes, and extreme high tides; these do not qualify as existential risks, or even anything particularly serious. We know from past temperature data that the Earth can warm by 6–9 °C on a fairly short timescale, without causing a permanent collapse or even a mass extinction. Additionally, climate change has become a political problem, making it next to impossible to implement serious measures without a massive effort.

- Nuclear war is a special case, because although we can’t do much to prevent it, we can take action to prepare for it in case it does happen. We don’t even have to think about the best ways to prepare; there are already published, reviewed books detailing what can be done to seek safety in the event of a nuclear catastrophe. I firmly believe that every transhumanist organization should have a contingency plan in the event of nuclear war, economic depression, a conventional WWIII or another political disaster. This planet is too important to let it get blown up because the people saving it were “collateral damage”.

- Terrorism. It may be the bogeyman-of-the-decade, but terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive. While terrorists might do something stupid which results in an existential risk, “terrorism” isn’t a special case that we need to separately plan for; a virus, nanoreplicator or UFAI is just as deadly regardless of where it comes from.

Never underestimate the power of a “do-over.”

Video gamers know exactly what I’m talking about: the ability to face a challenge over and over again, in most cases with a “reset” of the environment to the initial conditions of the fight (or trap, or puzzle, etc.). With a consistent situation and setting, the player is able to experiment with different strategies. Typically, the player will find the approach that works, succeed, then move on to the next challenge; occasionally, the player will try different winning strategies in order to find the one with the best results, putting the player in a better position to meet the next obstacle.
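A minimal sketch of that do-over loop follows; the environment, the strategies, and the scoring numbers are toy inventions of mine, not taken from any actual game.

```python
import copy

# Toy sketch of the "do-over" loop: reset the environment to identical
# initial conditions, try a different strategy each time, and keep the
# one with the best outcome. All values here are invented.

INITIAL_STATE = {"budget": 100, "population_at_risk": 1000}

def attempt(strategy: str) -> int:
    """Play one run from a fresh copy of the initial state; return people saved."""
    state = copy.deepcopy(INITIAL_STATE)   # the "reset" that real life doesn't offer
    effectiveness = {"do_nothing": 0, "evacuate": 6, "build_defenses": 8}
    return min(state["population_at_risk"], state["budget"] * effectiveness[strategy])

strategies = ["do_nothing", "evacuate", "build_defenses"]
best = max(strategies, key=attempt)
print("Best strategy found by repeated do-overs:", best, "->", attempt(best), "saved")
```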

Real life, of course, doesn’t have do-overs. But one of the fascinating results of the increasing sophistication of virtual world and game environments is their ability to serve as proxies for the real world, allowing users to practice tasks and ideas in a sufficiently realistic setting that the results provide useful real life lessons. This capability is based upon virtual worlds being interactive systems, where one’s actions have consequences; these consequences, in turn, require new choices. The utility of the virtual world as a rehearsal system is dependent upon the plausibility of the underlying model of reality, but even simplified systems can elicit new insights.

The classic example of this is Sim City (which I’ve written about at length before), but with the so-called “serious games” movement, we’re seeing the overlap of gaming and rehearsal become increasingly common.

The latest example is particularly interesting to me. The United Nations International Strategy for Disaster Reduction group has teamed up with the UK game design studio Playerthree to create the Flash-based “Stop Disasters” game. The goal of the game is to reduce the harmful results of catastrophic natural events — the disaster that gets stopped isn’t the event itself, but its impact on human life.

The game mechanisms are fairly straightforward. The player chooses what kind of disaster is to be faced (earthquake, hurricane, tsunami, wildfire or flood), then has a limited amount of time to prepare for the inevitable. The player can build new buildings, retrofit or demolish old ones, install appropriate defensive infrastructure (such as mangroves along tsunami-prone shorelines or firebreaks around water towers), institute preparedness training, install sirens and evacuation signs, and so forth — all with a limited budget, and with ancillary goals that must be met for success, such as building schools and hospitals for community development, or bringing in hotels for local economic support.

Once the money is spent (or the time runs out), the preordained disaster strikes, and the player gets to see whether his or her choices were the right ones. At the easy level, there’s generally enough money to protect the small map and limited population; at the harder levels, the player must make difficult choices about who and what to save. The overall complexity reminds me of the very first version of Sim City, but don’t take that as a criticism: the first Sim City arguably offered the clearest demonstration of urban complexity of the four versions, in large measure because of its spartan interface and simplicity.
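For readers who want a feel for this budget-then-disaster structure, here is a toy sketch. The mitigation options, costs, and effectiveness numbers are mine, invented purely for illustration; they are not taken from the actual Stop Disasters game.

```python
# Toy model of the structure described above: spend a limited budget on
# mitigations, then the preordained disaster strikes and the remaining
# damage is tallied. All options and numbers are invented for illustration.

MITIGATIONS = {  # name: (cost, fraction of remaining damage avoided)
    "retrofit_buildings": (30, 0.40),
    "early_warning_sirens": (10, 0.25),
    "preparedness_training": (15, 0.20),
}

def play(budget: int, choices: list) -> float:
    """Apply affordable mitigations in order, then return the fraction of
    unmitigated damage that still gets through when the disaster hits."""
    damage = 1.0
    for name in choices:
        cost, reduction = MITIGATIONS[name]
        if cost > budget:
            continue                      # can't afford it, as on the harder maps
        budget -= cost
        damage *= 1 - reduction
    return round(damage, 3)

print(play(budget=40, choices=["retrofit_buildings", "early_warning_sirens"]))  # 0.45
```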

Stop Disasters is billed as a children’s game, and it’s true that the folks at Architecture for Humanity aren’t going to use it for planning purposes. That’s not the goal, of course. This isn’t a rehearsal tool for the people who have to plan for disasters, but for the people who have to live with that planning — and those people who will choose to help their communities during large-scale emergencies.

I suspect that there would be an audience for a more complex version of Stop Disasters, one which puts more demands on the player to accommodate citizen needs. It’s a bit too easy to simply demolish old buildings rather than retrofit them in the UN/ISDR game, for example, and I would love to see more economic tools. I’d also like to see a wider array of disasters, beyond the short, sharp, shock events of quakes and storms. What would a Stop Disaster global warming scenario look like, for example — not trying to prevent climate change, but to deal with its consequences?

If we really want to get our hands dirty, we’d need to build up Stop Disasters scenarios for the advent of molecular manufacturing, self-aware artificial intelligence, global pandemic, peak oil and asteroid strikes.

Not because such games would tell us what we should do, but because they’d help us see how our choices could play out — and, more importantly, they’d remind us that our choices matter.