
Within the next few years, robots will move from the battlefield and the factory into our streets, offices, and homes. What impact will this transformative technology have on personal privacy? I begin to answer this question in a chapter on robots and privacy in the forthcoming book, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge: MIT Press).

I argue that robots will implicate privacy in at least three ways. First, they will vastly increase our capacity for surveillance. Robots can go places humans cannot go, see things humans cannot see. Recent developments include everything from remote-controlled insects to robots that can soften their bodies to squeeze through small enclosures.

Second, robots may introduce new points of access to historically private spaces such as the home. At least one study has shown that several of today’s commercially available robots can be remotely hacked, granting the attacker access to video and audio of the home. With sufficient process, governments will also be able to access robots connected to the Internet.

There are clearly ways to mitigate these implications. Strict policies could rein in police use of robots for surveillance, for instance; consumer protection laws could require adequate security. But there is a third way robots implicate privacy, related to their social meaning, that is not as readily addressed.

Study after study has shown that we are hardwired to react to anthropomorphic technology such as robots as though a person were actually present. Reports have emerged of soldiers risking their lives on the battlefield to save a robot under enemy fire. No less than people, therefore, the presence of a robot can interrupt solitude—a key value privacy protects. Moreover, the way we interact with these machines will matter as never before. No one much cares about the uses to which we put our car or washing machine. But the record of our interactions with a social machine might contain information that would make a psychotherapist jealous.

My chapter discusses each of these dimensions—surveillance, access, and social meaning—in detail. Yet it only begins a conversation. Robots hold enormous promise and we should encourage their development and adoption. Privacy must be on our minds as we do.

Originally posted @ Perspective Intelligence

Two events centered on New York City, separated by five days, demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.

The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or vehicle-borne IEDs (VBIEDs), have a long history; incidentally, one of the first was the 1920 "cart and horse bomb" on Wall Street, which killed 38 people. VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We now live within a highly virtualized system, and the dizzying stock market crash of May 6, 2010 shows how vulnerable that system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and instead disrupt the data centers, software, and routers that make the global financial system tick. Shahzad's attempted car bomb was from another age and posed no overarching risk to Western societies. The same cannot be said of the vulnerable and highly unstable financial system.

Computer-aided crash (proof of concept for a future cyber attack)

There has yet to be a definitive explanation of how stocks such as Procter & Gamble plunged 47% and the normally solid Accenture fell from roughly $40 to one cent, with no external input of information into the financial system. In recent years the SEC has issued directives boosting competition and lowering commissions, which has had the effect of fragmenting equity trading around the US and making it highly automated. This has created four leading exchanges: NYSE Euronext, Nasdaq OMX Group, BATS Global Markets, and Direct Edge. Secondary exchanges include the International Securities Exchange, the Chicago Board Options Exchange, the CME Group, and the Intercontinental Exchange. There are also broker-run matching systems, like those run by Knight and ITG, and so-called "dark pools", where trades are matched privately and prices are posted publicly only after the trades are done. A similar picture has emerged in Europe, where rules allowing competition with established exchanges, known by the acronym "MiFID", have led to a similar explosion of trading venues.

To navigate this confusing picture, traders have to rely on "smart order routers": electronic systems that seek the best price across all of the platforms. Trades are therefore done in vast data centers, not in exchange buildings. This total automation of trading allows a variety of "trading algorithms" to be used to manage investment themes. The best known of these is the "volume algo", which ensures that throughout the day a trader maintains his holding in a share at a pre-set percentage of that share's overall volume, automatically adjusting buy and sell instructions to keep that percentage stable whatever the market conditions. Algorithms such as this have been blamed for exacerbating the rapid price moves on May 6th. High-frequency traders are the biggest users of algos, and they account for up to 60% of US equity trading.
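The participation logic of such a volume algo can be sketched in a few lines. This is a hypothetical toy model, not any exchange's or trader's actual code; the function name, the 5% target, and the volume figures are all invented for illustration.

```python
def participation_order(my_filled, market_volume, target_pct):
    """Shares to trade now so that cumulative fills track target_pct of
    cumulative market volume (toy model, not real trading code)."""
    desired = market_volume * target_pct
    return desired - my_filled  # positive: trade more; zero or negative: hold back

# As cumulative market volume grows through the day, the algo keeps pace.
orders, filled = [], 0.0
for volume in [10_000, 25_000, 60_000, 120_000]:  # hypothetical cumulative volume
    qty = participation_order(filled, volume, 0.05)  # 5% participation target
    filled += qty
    orders.append(round(qty))
print(orders)  # each order tops the position up to 5% of the volume so far
```

Whatever the market does, the algorithm mechanically issues whatever order restores its target fraction, which is exactly why such strategies can amplify a one-sided move.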

The most likely cause of the collapse on May 6th was a slowdown, or near stop, on one side of the trading pool. In very basic terms, a large number of sell orders started backing up on one side of the system (at the speed of light) with no counter-parties taking the orders on the other side of the trade. The counter-party side of the trade slowed or stopped, causing an almost instant pile-up of orders. The algorithms on the selling side, finding no buyer for their stocks, kept offering lower prices (as their software dictates) until they attracted a buyer. But since no buyers appeared on the still slowed or stopped counter-party side, prices tumbled at an alarming rate. Fingers have pointed at the NYSE for causing the slowdown on one side of the trading pool when it instituted some kind of circuit breaker, which caused all the other exchanges to pile up on the other side of the trade. There has also been a focus on one particular trade, which may have been the spark that ignited the NYSE "circuit breaker". Whatever the precise cause, once events were set in train, the system's safeguards had in no way caught up with the new realities of automated trading and diversified exchanges.
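The feedback loop described above can be illustrated with a toy simulation. All numbers, the tick size, and the one-cent floor are hypothetical; this is a sketch of the dynamic, not a model of any real exchange.

```python
def cascade(start_price, tick_down, max_steps, buyer_appears):
    """A sell algorithm keeps lowering its offer until a buyer takes it;
    if the counter-party side has stalled, the price collapses."""
    price, path = start_price, [start_price]
    for step in range(max_steps):
        if buyer_appears(price, step):
            break  # a buyer finally takes the offer
        price = max(0.01, round(price - tick_down, 2))  # lower the offer again
        path.append(price)
    return path

# Normal market: buyers step in after a few ticks, so the price barely moves.
normal = cascade(40.0, 0.5, 200, lambda p, s: s >= 3)
# Stalled counter-party side: nothing bids until the one-cent stub quote.
stalled = cascade(40.0, 0.5, 200, lambda p, s: p <= 0.01)
print(normal[-1], stalled[-1])  # modest dip vs. collapse to $0.01
```

The only difference between the two runs is whether the other side of the trade responds; the selling logic itself is identical, which mirrors how a $40 stock could print at one cent with no external news.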

More nodes, same assumptions

On one level this seems to defy conventional thinking about security: more diversity means greater strength, since not all nodes in a network can be compromised at the same time. By having a greater number of exchanges, surely the US and global financial system is more secure? In this case, however, the theory collapses quickly if we switch from examining the physical to the virtual. While all of the exchanges are physically and operationally separate, they seemingly share the same software and, crucially, trading algorithms built on some of the same assumptions. On May 6th they all assumed that, because they could find no counter-party to the trade, they needed to lower the price (at the speed of light). The system is therefore highly vulnerable, because it relies on one set of assumptions programmed into lightning-fast algorithms. If a national circuit breaker could be implemented (which remains doubtful), it could slow a rapid descent, but it would not take away the power of the algorithms, which will always act in certain fundamental ways, i.e. continue to lower the offer price if no buy order arrives. What needs to be understood are the fundamental ways in which all the trading algorithms move in concert. Each will have variances, but they will all share key similarities, and understanding these should lead to the design of logic circuit breakers.
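A "logic circuit breaker" of the kind suggested here could, in its simplest form, watch for a price move that is too large within too short a window and halt matching. The sketch below is a hypothetical illustration of that idea; the window length and 10% threshold are invented, not any exchange's actual parameters.

```python
def logic_circuit_breaker(prices, window, max_drop):
    """Return the index of the first tick at which the price has fallen by
    more than max_drop (a fraction) from its recent high within `window`
    ticks, i.e. the point at which trading should halt; None if never."""
    for i, price in enumerate(prices):
        recent_high = max(prices[max(0, i - window):i + 1])
        if price < recent_high * (1 - max_drop):
            return i
    return None

ticks = [40.0, 39.8, 39.9, 30.0, 12.0, 0.01]  # hypothetical tick series
halt_at = logic_circuit_breaker(ticks, window=3, max_drop=0.10)
print(halt_at)  # halts at the 30.0 print, before the cascade reaches one cent
```

Unlike a volume- or time-based halt, a rule of this shape keys directly on the shared behavior of the algorithms (relentless price-lowering), which is the point the paragraph above is making.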

New Terrorism

For now, however, the system looks desperately vulnerable to both generalized and targeted cyber attack, and this is the opportunity for the next generation of terrorists. There has been little discussion of whether the events of last Thursday were prompted by malicious means, but it is certainly worth considering. At a time when Greece was burning, launching a cyber attack against this part of the US financial system would clearly have been stunningly effective. Combining political instability with a cyber attack against the US financial system would create enough doubt about the cause of a market drop for the collapse to gain rapid traction. Using targeted cyber attacks to stop one side of the trade within these exchanges (which are all highly automated and networked) would, as has now been demonstrated, cause a dramatic collapse. This could also be adapted and targeted at specific companies or asset classes to cause a collapse in price. A scenario whereby one of the exchanges slows down its trades in the stock of a company the bad actor is targeting seems both plausible and effective.

A hybrid cyber and kinetic attack could cause similar damage. Since most trades are now conducted within data centers, one has to ask why there are armed guards outside the NYSE; the building retains some symbolic value, but security resources would be better placed outside the data centers where these trades are actually conducted. A kinetic attack against the financial data centers responsible for these trades would surely have a devastating effect, and finding the location of these data centers is as simple as conducting a Google search.

In order for terrorism to have impact in the future, it needs to shift its focus from the weapons of the 20th century to those of the present day. Using their current tactics, the Pakistani Taliban and their assorted fellow-travelers cannot fundamentally damage Western society. That battle is over. However, the next era of conflict has dawned: motivated by a radicalism rooted in as-yet-unknown grievances, fueled by a globally networked Generation Y, and waged with cyber weapons of choice, the precise application of ultra-violence, and information spin. Five days in Manhattan flashed a light on this new era.

Roderick Jones

The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

I have translated the Lifeboat Foundation's "Nanoshield" (http://www.scribd.com/doc/12113758/Nano-Shield) into Russian, and I have some thoughts about it:

1) An effective means of defense against ecophagy would be to turn all the matter on Earth into nanorobots in advance, just as every human body is composed of living cells (although this does not preclude the emergence of cancer cells). The visible world would not change: all objects would consist of nano-cells with sufficient immune potential to resist almost any foreseeable ecophagy (except purely informational attacks, like computer viruses). Even within each living cell there would be a small nanobot controlling it. Maybe the world already consists of nanobots.
2) The authors of the project suggest that an ecophagic attack would consist of two phases: reproduction and destruction. However, the creators of ecophagy could use three phases. The first phase would be a quiet distribution across the Earth's surface, below the surface, and in the water and air. In this phase the nanorobots would multiply at a slow rate and, most importantly, try to keep the maximum distance from one another, so that their concentration everywhere on Earth would end up at about one unit per cubic meter (which makes them undetectable). Only then would they start to proliferate intensely, simultaneously creating non-replicating nanorobot soldiers to attack the defensive systems. They would first have to suppress the protection systems, the way AIDS does, or the way a modern computer virus switches off the antivirus; the creators of future ecophagy will understand this. Since the second phase of rapid growth would begin everywhere on the Earth's surface at once, it would be impossible to apply tools of destruction such as nuclear strikes or aimed beams, as this would mean the death of the planet in any case, and there simply would not be enough bombs in store.
3) The authors overestimate the reliability of protection systems. Any system has a control center, which is a weak spot. The authors implicitly assume that any person may, with some probability, suddenly become a terrorist willing to destroy the world (and although the probability is very small, the large number of people living on Earth makes it meaningful). But because such a system would be managed by people, those people may also want to destroy the world: Nanoshield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of Nanoshield protection will make it 100% safe from hackers, but no known computer system is 100% safe; all major computer systems have been broken by hackers, including Windows and the iPod.
4) Nanoshield could develop something like an autoimmune reaction. The authors' idea that it is possible to achieve 100% reliability by increasing the number of control systems is very superficial: the more complex a system is, the more difficult it is to calculate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory.
5) Each cubic meter of oceanic water contains 77 million living beings (in the northern Atlantic, according to the book "Zoology of Invertebrates"). Hostile ecophages could easily camouflage themselves as natural living beings, and vice versa; the ability of natural living beings to reproduce, move, and emit heat will significantly hamper the detection of ecophages by creating a high level of false alarms. Moreover, ecophages may at some stage of their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from a normal cell.
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control: for example, one can turn off the power, get direct access to memory, boot from other media, and deliver an antivirus almost instantly to any computer. Nevertheless, a significant portion of computers have been infected with viruses, and many users are resigned to the presence of some malware on their machines as long as it does not slow their work too much.
7) Compare: Stanislaw Lem wrote a story, "Darkness and Mold", whose main plot concerns ecophages.
8) The problem of Nanoshield must be analyzed dynamically in time: the technical perfection of Nanoshield must stay ahead of the technical perfection of nanoreplicators at every given moment. From this perspective the whole concept seems very vulnerable, because creating an effective global Nanoshield would require many years of nanotechnological development, both constructive and political, while creating primitive ecophages capable, nevertheless, of completely destroying the biosphere would require much less effort. Compare: creating a global missile defense system (ABM, which still does not exist) is much more complex technologically and politically than creating intercontinental nuclear missiles.
9) We should be aware that in the future there will be no principal difference between computer viruses, biological viruses, and nanorobots: all of them are information, given the availability of "fabs" that can transfer information from one carrier to another. Living cells could construct nanorobots, and vice versa; spreading over computer networks, computer viruses could capture bioprinters or nanofabs and force them to produce dangerous bio-organisms or nanorobots (or malware could be integrated into existing computer programs, nanorobots, or the DNA of artificial organisms). These nanorobots could then connect to computer networks (including the network controlling Nanoshield) and send their code in electronic form. In addition to these three forms of virus (nanotechnological, biological, and computer), other forms are possible, for example cognitive ones: a virus transformed into a set of ideas in the human brain that pushes a person to write computer viruses and nanobots. The idea of "hacking" is already such a meme.
10) It must be noted that in the future artificial intelligence will be much more accessible, and thus viruses will be much more intelligent than today's computer viruses. The same applies to nanorobots: they will have a certain understanding of reality and the ability to quickly rebuild themselves, even to invent innovative designs and adapt to new environments. An essential question of ecophagy is whether individual nanorobots are independent of each other, like bacterial cells, or will act as a unified army with a single command and communication system. In the latter case it would be possible to intercept the command of the hostile army of ecophages.
11) Anything suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in a nanowar.
12) Nanoshield is possible only as a global organization. If any part of the Earth is not covered by it, Nanoshield will be useless, because nanorobots there would multiply in such quantities that it would be impossible to confront them. It is also an effective weapon against people and organizations, so it should appear only after the full and final political unification of the globe. The latter may result either from a world war for the unification of the planet or from humanity merging in the face of terrible catastrophes, such as an outbreak of ecophagy. In either case, the appearance of Nanoshield must be preceded by some accident, which means a great chance of the loss of humanity.
13) The discovery of "cold fusion" or other non-conventional energy sources would make a much more rapid spread of ecophagy possible, as the ecophages would be able to live in the bowels of the Earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophagy can produce nano-soldiers that attack and kill all life (such ecophagy could become a global tool of blackmail); it has been said that a few kilograms of nano-soldiers could be enough to destroy all the people on Earth. Some kinds of ecophagy could, in an early phase, disperse throughout the world, multiplying and moving very slowly and quietly, then produce a number of nano-soldiers to attack humans and defensive systems, and only then begin to multiply intensively in all areas of the globe. But a human stuffed with nano-medicine could resist the attack of a nano-soldier, since medical nanorobots would be able to neutralize poisons and repair torn arteries. In that case a small nanorobot would have to attack primarily informationally, rather than through a large release of energy.
15) Does information transparency mean that everyone can access the code of a dangerous computer virus or the description of a nanorobot-ecophage? A world where viruses and knowledge of mass destruction could be instantly disseminated through the tools of information transparency can hardly be secure. We need to control not only nanorobots but, primarily, the persons or other entities who might release ecophages. The smaller the number of such people (for example, scientist-nanotechnologists), the easier it would be to control them. Conversely, the diffusion of this knowledge among billions of people would make the emergence of nano-hackers inevitable.
16) The allegation that the number of creators of defenses against ecophagy will exceed the number of creators of ecophagy by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see the converse: the number of virus writers exceeds the number of firms and projects working on anti-virus protection by many orders of magnitude, and moreover most anti-virus systems cannot work together, as they block each other. Terrorists could masquerade as people opposing ecophagy and try to deploy their own system for combating it, containing a backdoor that allows it to be suddenly reprogrammed for a hostile goal.
17) The text implicitly suggests that Nanoshield precedes the invention of self-improving AI of superhuman level. However, from other forecasts we know that this event is very likely, and most likely to occur simultaneously with the flourishing of advanced nanotechnology. Thus it is not clear in what timeframe the Nanoshield project would exist. A developed artificial intelligence would be able to create a better Nanoshield and Infoshield, and the means to overcome any human-built shields.
18) We should be aware of the equivalence of nanorobots and nanofactories: the first can create the second, and vice versa. This erases the border between replicating and non-replicating nanomachines, because a device not initially intended to replicate itself could nevertheless construct a nanorobot, or reprogram itself into a nanorobot capable of replication.

Referring to the seizure of more than 400 fake routers so far, Melissa E. Hathaway, head of cyber security in the Office of the Director of National Intelligence, says: “Counterfeit products have been linked to the crash of mission-critical networks, and may also contain hidden ‘back doors’ enabling network security to be bypassed and sensitive data accessed [by hackers, thieves, and spies].” She declines to elaborate. In a 50-page presentation for industry audiences, the FBI concurs that the routers could allow Chinese operatives to “gain access to otherwise secure systems” (page 38).

Read the entire report in Business Week. See a TV news report about the problem on YouTube.

Here I would like to offer readers a quotation from my book "Structure of the Global Catastrophe" (http://www.scribd.com/doc/7529531/-), in which I discuss the problems of preventing catastrophes.

Refuges and bunkers

Various kinds of refuges and bunkers can increase the chances of mankind's survival in the case of a global catastrophe, but the situation with them is not simple. Separate independent refuges can exist for decades, but the more independent and long-lasting they are, the more effort is needed to prepare them in advance. Refuges should give mankind the ability to reproduce itself further. Hence they should contain not only enough people capable of reproduction, but also a stock of technologies that will allow them to survive and breed in the territory they plan to settle after leaving the refuge. The more polluted this territory is, the higher the level of technology required for reliable survival.
A very big bunker could continue developing technologies within itself even after the catastrophe. However, in that case it would be vulnerable to the same risks as the whole terrestrial civilization: internal terrorists, AI, nanorobots, leaks, etc. If a bunker is not capable of continuing technological development, it is more likely doomed to degradation.
Further, a bunker can be either "civilizational", preserving the majority of the cultural and technological achievements of civilization, or "specific", preserving only human life. For "long" bunkers (prepared for long-term stays), the problems of raising and educating children, and the risks of degradation, will arise. A bunker can either live off resources stockpiled before the catastrophe or engage in its own production; in the latter case it will simply be an underground civilization on an infected planet.
The more a bunker is built on modern technologies and is autonomous culturally and technically, the more people should live in it (though in the future this will change: a bunker based on advanced nanotechnology could even be deserted, holding only frozen human embryos). To provide simple reproduction by training people in the basic human trades, thousands of people are required. These people would have to be selected and placed in the bunker before the final catastrophe, preferably on a permanent basis. However, it is improbable that a thousand intellectually and physically excellent people would want to sit in a bunker "just in case". They could instead staff the bunker in two or three shifts and receive a salary for it. (Russia is now beginning the "Mars 500" experiment, in which six humans will be fully autonomous in water, food, and air for 500 days; this is probably the best result we now have. In the early 1990s the USA ran the "Biosphere 2" project, in which people were to live for two years on full self-sufficiency under a dome in the desert. The project ended in partial failure, as the oxygen level in the system began to fall because of the unforeseen reproduction of microorganisms and insects.) As an additional risk for bunkers we should note the psychology of small groups closed in one space, widely known from Antarctic expeditions: namely, the growth of animosity, fraught with destructive actions that reduce the survival rate.
A bunker can be either unique or one of many. In the first case it is vulnerable to various catastrophes; in the second, struggle between different bunkers for the resources remaining outside is possible, or a continuation of the war if the catastrophe resulted from war.
A bunker will most likely be underground, at sea, or in space; a space bunker could itself be buried under the surface of an asteroid or the Moon. For a space bunker it will be more difficult to use the remaining resources on Earth. A bunker can be completely isolated, or it can allow "excursions" into the external hostile environment.
A nuclear submarine can serve as a model of the sea bunker, possessing high endurance, autonomy, maneuverability, and resistance to negative influences. Besides, it can easily be cooled by the ocean (the problem of cooling closed underground bunkers is not simple) and can extract water, oxygen, and even food from it. Ready boats and technical solutions already exist, and a submarine is capable of withstanding shock and radiation. However, the autonomous cruising endurance of modern submarines is at best one year, and they have no room for storing large stocks.
The modern space station, the ISS, could independently support the life of several humans for approximately a year, though there are problems of autonomous landing and adaptation. It is not clear whether a dangerous agent capable of penetrating every crack on Earth could dissipate in so short a time.
There is a difference between gas and bio refuges, which can be on the surface but are divided into many sections to maintain a quarantine regime, and refuges intended as shelter from an even slightly intelligent opponent (including other people who did not manage to get a place in a refuge). In the case of a biodanger, an island with rigid quarantine can serve as a refuge if the illness is not transferred by air.
A bunker can possess various vulnerabilities. For example, in the case of a biological threat, an insignificant penetration is enough to destroy it. Only a hi-tech bunker can be completely autonomous. A bunker needs energy and oxygen. A nuclear reactor can supply energy, but modern machines can hardly remain durable for more than 30 to 50 years. A bunker cannot be universal: it must assume protection against certain kinds of threats known in advance, radiological, biological, and so on.
The more reinforced a bunker is, the smaller the number of bunkers mankind can prepare in advance, and the more difficult it is to hide such a bunker. If, after a certain catastrophe, only a limited number of bunkers remain and their sites are known, a secondary nuclear war could terminate mankind with a countable number of strikes on known places.
The larger the bunker, the fewer such bunkers can be constructed. However, any bunker is vulnerable to accidental destruction or contamination. Therefore a limited number of bunkers, with a certain probability of contamination, unequivocally defines the maximum survival time of mankind. If bunkers are connected among themselves by trade and other material exchange, contamination between them is more probable; if they are not connected, they will degrade faster. The more powerful and more expensive a bunker is, the more difficult it is to build it imperceptibly to a probable opponent, and the easier it becomes as a target for attack. The cheaper the bunker, the less durable it is.
Casual shelters are also possible: people who have escaped into the subway, mines, or submarines. They will suffer from the absence of central authority and from the struggle for resources. If the resources in one bunker are exhausted, its people may make armed attempts to break into a neighboring bunker. People who escaped casually (or under the threat of the coming catastrophe) may also attack those who locked themselves in a bunker.
Bunkers will suffer from the necessity of exchanging heat, energy, water, and air with the external world. The more autonomous a bunker is, the less time it can exist in full isolation. Bunkers deep in the Earth will suffer from overheating: any nuclear reactors and other complex machines will demand external cooling, cooling by external water will unmask them, and it is impossible to have energy sources that are loss-free in the form of heat while the depths of the earth always maintain high temperatures. The growth of temperature with depth limits the possible depth of bunkers. (The geothermal gradient averages about 30°C per kilometer. This means that bunkers at depths of more than one kilometer are impossible, or demand huge cooling installations on the surface, as in the gold mines of South Africa. Deeper bunkers may be possible in the ice of Antarctica.)
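The one-kilometer limit in parentheses follows directly from the gradient. A quick check of the arithmetic, where the surface temperature and the tolerable limit are assumed values chosen only to illustrate:

```python
def max_bunker_depth_km(surface_c=15.0, gradient_c_per_km=30.0, limit_c=45.0):
    """Depth at which ambient rock temperature reaches the tolerable limit,
    assuming a linear geothermal gradient (illustrative numbers only)."""
    return (limit_c - surface_c) / gradient_c_per_km

print(max_bunker_depth_km())  # 1.0 km, consistent with the text's estimate
```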
The more durable, universal, and effective a bunker should be, the earlier it is necessary to start building it. But in that case it is difficult to foresee future risks. For example, in the 1930s many anti-gas bomb shelters were constructed in Russia, which proved useless and vulnerable to bombardment by heavy demolition bombs.
The efficiency of the bunker a civilization can create corresponds to the technological level of development of that civilization. But this means the civilization also possesses corresponding means of destruction, so an especially powerful bunker is necessary. The more autonomous and perfect a bunker is (for example, equipped with AI, nanorobots, and biotechnologies), the more easily it can eventually do without people, giving rise to a purely computer civilization.
People from different bunkers will compete over who reaches the surface first and who, accordingly, will own it; this will tempt them to go out onto still-infected sites of the Earth.
Automatic robotic bunkers are also possible, in which frozen human embryos are stored in a certain artificial uterus and, after hundreds or thousands of years, begin to be grown. (The technology of embryo cryonics already exists, and work on an artificial uterus is forbidden for bioethical reasons, but in principle such a device is possible.) Such installations, with embryos, could be sent on voyages to other planets. However, if such bunkers are possible, the Earth will hardly remain empty; most likely it will be populated by robots. Besides, if a human child brought up by wolves considers itself a wolf, whom will a human brought up by robots consider itself?
So the idea of surviving in bunkers contains many pitfalls that reduce its usefulness and probability of success. Long-term bunkers take many years to build, and during that time they can become obsolete, since the situation will change and it is not known what to prepare for. It is probable that a number of powerful bunkers were built during the Cold War. At the limit of present technical capability is a bunker with roughly 30 years of autonomy, but building one would take a long time — a decade — and demand billions of dollars of investment.
A separate category is information bunkers, intended to convey our knowledge, technologies, and achievements to any surviving descendants. For example, a stock of seed and grain samples has been created for this purpose on Spitsbergen, Norway (the Doomsday Vault). Preserving the genetic diversity of humanity by means of frozen sperm is another option. Durable digital media are also being discussed and implemented — for example, the Long Now Foundation's discs, etched with text that can be read through a magnifier. Such knowledge could be crucial for not repeating our errors.

November 14, 2008
Computer History Museum, Mountain View, CA

http://ieet.org/index.php/IEET/eventinfo/ieet20081114/

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the pro-active steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume edited by Nick Bostrom and Milan Cirkovic, and contributed to by some of the faculty for this seminar.

This seminar will precede the futurist mega-gathering Convergence 08, November 15–16 at the same venue, which is co-sponsored by the IEET, Humanity Plus (World Transhumanist Association), the Singularity Institute for Artificial Intelligence, the Immortality Institute, the Foresight Institute, the Long Now Foundation, the Methuselah Foundation, the Millennium Project, the Reason Foundation, and the Acceleration Studies Foundation.

SEMINAR FACULTY

  • Nick Bostrom Ph.D., Director, Future of Humanity Institute, Oxford University
  • Jamais Cascio, research affiliate, Institute for the Future
  • James J. Hughes Ph.D., Exec. Director, Institute for Ethics and Emerging Technologies
  • Mike Treder, Executive Director, Center for Responsible Nanotechnology
  • Eliezer Yudkowsky, Research Associate, Singularity Institute for Artificial Intelligence
  • William Potter Ph.D., Director, James Martin Center for Nonproliferation Studies

REGISTRATION:
Before Nov 1: $100
After Nov 1 and at the door: $150

The UK’s Guardian today published details of a report produced by Britain’s Security Service (MI5) entitled ‘Understanding radicalization and violent extremism in the UK’. The report comes from MI5’s internal behavioral analysis unit and contains some interesting and surprising conclusions. The Guardian covers many of these in depth (so there is no need to repeat them here), but one point worth highlighting is the report’s claim that religion was not a contributory factor in the radicalization of the home-grown terrorist threat the UK faces. In fact, the report goes on to state that a strong religious faith protects individuals from the effects of extremism. This viewpoint is gathering strength and coincides with an article by Martin Amis in the Wall Street Journal, which also argues that ‘terrorism’s new structure’ is about the quest for fame and the thirst for power, with religion simply acting as a “means of mobilization”.

All of this also tends to agree with Philip Bobbitt’s assertion in ‘Terror and Consent’ that al-Qaeda is simply version 1.0 of a new type of terrorism for the 21st century. This type of terrorism is attuned to the advantages and pressures of a market-based world and acts more like a Silicon Valley start-up than like the Red Brigades: flexible, fast-moving, and wired, taking advantage of globalization to pursue a violent agenda.

All of this raises the question: what next? If al-Qaeda is version 1.0, what is 2.0? That is hard to discern, but two near-certain trends that will shape humanity over the next 20 years — urbanization and virtualization — point to some interesting potential opponents who are operating today. The road to mass urbanization is currently being highlighted by the 192021 project (19 cities, 20 million people in the 21st century), which among other things points to slum areas as the raw material from which the cities of the 21st century will grow. Slum areas from Delhi to Sao Paulo are today being exploited by Nigerian drug organizations able to recruit the local population to build their own cities within cities. This kind of highly profitable criminal activity, in areas beyond the vision of government, is a disturbing incubator.

Increased global virtualization complements urbanization as well as standing alone. Virtual environments provide a useful platform for any kind of real-life extremist (as is now widely accepted), but it is the formation of groups within virtual spaces that then spill out into real space that could become a significant feature of the 21st-century security picture. This is already happening with ‘Project Chanology’, a group formed virtually by elements of the Anonymous movement in order to disrupt the Church of Scientology. While Project Chanology (see the WhyWeProtest website) began as a series of cyber actions directed at Scientology’s website, it now organizes legal protests outside Scientology buildings: a shift from the virtual to the real. A more sinister variant is the alleged activity of the Patriotic Nigras, a group dedicated to the disruption of Second Life, which has reportedly taken up the tactic of ‘swatting’ — the misdirection of armed police officers to a victim’s home address. A disturbing spill-over into real space. Whatever pattern future terrorist movements follow, there are signs that religion will play a peripheral rather than a central role.

Originally posted on the Counterterrorism blog.

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting line up of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or you just want to learn more, please contact me at [email protected].

When I read about the “Aurora Generator Test” video that was leaked to the media, I wondered why it was leaked now and who benefits. Like many of you, I question the reasons behind any leak to the media from an “unnamed source” inside the US federal government. Hopefully we’ll all benefit from this particular leak.

Then I thought back to a conversation I had at a trade show booth several years ago with a fellow from the power generation industry. He was very worried about the security ramifications of a hardware refresh of the SCADA systems his utility was using to control its power generation equipment: the legacy UNIX-based SCADA systems were going to be replaced by Windows-based systems. He was even more worried that the “air gaps” that have historically physically separated SCADA control networks from power companies’ regular data networks might be removed to cut costs.
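That worry about vanished air gaps is, at least crudely, testable: if a host on the ordinary business network can open a TCP connection to a control-system device, the networks are not truly separated. A minimal sketch — the address and the Modbus/TCP port 502 below are illustrative assumptions, not details from any real utility:

```python
import socket

def is_reachable(host: str, port: int = 502, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Run from the corporate network against a control-system address;
    a successful connection means the "air gap" is not really a gap.
    Port 502 is the standard Modbus/TCP port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 192.0.2.x is a documentation-only range, standing in for a PLC address.
    scada_host = "192.0.2.10"
    print("reachable" if is_reachable(scada_host) else "blocked or absent")
```

A real segmentation audit would of course scan whole address ranges and many ports, but even this single probe makes the point: an air gap you can connect across is just a network.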

Thankfully, on July 19, 2007 the Federal Energy Regulatory Commission proposed to the North American Electric Reliability Corporation a set of new, and much overdue, cyber security standards that will, once adopted and enforced, do a lot to make an attacker’s job harder. Thank God the people who operate the most critical part of our national infrastructure have noticed the obvious.

Hopefully a little sunlight will help accelerate the process of reducing the attack surface of North America’s power grid.

After all, the march to the Singularity will go a lot slower without a reliable power grid.

Matt McGuirl, CISSP