
Within the next few years, robots will move from the battlefield and the factory into our streets, offices, and homes. What impact will this transformative technology have on personal privacy? I begin to answer this question in a chapter on robots and privacy in the forthcoming book, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge: MIT Press).

I argue that robots will implicate privacy in at least three ways. First, they will vastly increase our capacity for surveillance. Robots can go places humans cannot go, see things humans cannot see. Recent developments include everything from remote-controlled insects to robots that can soften their bodies to squeeze through small openings.

Second, robots may introduce new points of access to historically private spaces such as the home. At least one study has shown that several of today’s commercially available robots can be remotely hacked, granting the attacker access to video and audio of the home. With sufficient legal process, governments will also be able to access robots connected to the Internet.

There are clearly ways to mitigate these implications. Strict policies could rein in police use of robots for surveillance, for instance; consumer protection laws could require adequate security. But there is a third way robots implicate privacy, related to their social meaning, that is not as readily addressed.

Study after study has shown that we are hardwired to react to anthropomorphic technology such as robots as though a person were actually present. Reports have emerged of soldiers risking their lives on the battlefield to save a robot under enemy fire. No less than a person’s presence, therefore, the presence of a robot can interrupt solitude—a key value privacy protects. Moreover, the way we interact with these machines will matter as never before. No one much cares about the uses to which we put our car or washing machine. But the record of our interactions with a social machine might contain information that would make a psychotherapist jealous.

My chapter discusses each of these dimensions—surveillance, access, and social meaning—in detail. Yet it only begins a conversation. Robots hold enormous promise and we should encourage their development and adoption. Privacy must be on our minds as we do.

Posted by Dr. Denise L. Herzing and Dr. Lori Marino, Human-Nonhuman Relationship Board

Over the millennia humans and the rest of nature have coexisted in various relationships. However, the intimate and interdependent nature of our relationship with other beings on the planet has recently been brought to light by the oil spill in the Gulf of Mexico. This ongoing environmental disaster is a prime example of “profit over principle” regarding non-human life. This spill threatens not only the reproductive viability of all flora and fauna in the affected ecosystems but also complex and sensitive non-human cultures like those we now recognize in dolphins and whales.

Although science has, for decades, documented the links and interdependence of ecosystems and species, the ethical dilemma now facing humans is at a critical level. For too long we have failed to recognize the true cost of our lifestyles and our prioritizing of profit over the health of the planet and the nonhuman beings we share it with. If ever there were a time for a wake-up call to humanity and a call to action, it is now. If humanity is to survive, we need to make an urgent and long-term commitment to the health of the planet. The oceans, our food sources and the very oxygen we breathe may depend on our choices in the next 10 years.

And humanity’s survival is inextricably linked to that of the other beings we share this planet with. We need a new ethic.

Many oceanographers and marine biologists have, for a decade, sent out the message that the oceans are in trouble. Human impacts of over-fishing, pollution, and habitat destruction are threatening the very cycles of our existence. In the recent catastrophe in the Gulf, one corporation’s neglectful oversight and push for profit has set the stage for a century of cleanup and impact, the implications of which we can only begin to imagine.

Current reported estimates put the number of stranded dolphins at fifty-five. However, these are only the dolphins visibly stranded on beaches. Recent aerial footage by John Wathen, posted on YouTube, shows a much greater and more serious threat. Offshore, in the “no fly zone,” hundreds of dolphins and whales have been observed in the oil slick: some floating belly up and dead, others struggling to breathe in the toxic fumes, still others exhibiting “drunken dolphin syndrome,” characterized by floating in an almost stupefied state on the surface of the water. These highly visible effects are just the tip of the iceberg in terms of the spill’s impact on the long-term health and viability of the Gulf’s dolphin and whale populations, not to mention the suffering incurred by each individual dolphin as he or she tries to cope with this crisis.

Known direct and indirect effects of oil spills on dolphins and whales depend on the species but include toxicity that can cause organ dysfunction and neurological impairment, damaged airways and lungs, gastrointestinal ulceration and hemorrhaging, eye and skin lesions, decreased body mass due to limited prey, and the pervasive long-term behavioral, immunological, and metabolic impacts of stress. Recent reports substantiate that many dolphins and whales in the Gulf are undergoing tremendous stress, shock and suffering from many of the above effects. The impact on newborns and young calves is clearly devastating.

After the Exxon Valdez spill in Prince William Sound in 1989, two pods of orcas (killer whales) were tracked. It was found that one third of the whales in one pod and 40 percent of the whales in the other pod had disappeared, with one pod never recovering its numbers. There is still some debate about the number of missing whales directly impacted by the oil, though it is fair to say that losses of this magnitude are uncommon and do serious damage to orca societies.

Yes, orca societies. Years of field research have led a growing number of scientists to conclude that many dolphin and whale species, including sperm whales, humpback whales, orcas, and bottlenose dolphins, possess sophisticated cultures, that is, learned behavioral traditions passed on from one generation to the next. These cultures are not only unique to each group but are critically important for survival. Therefore, environmental catastrophes such as the Gulf oil spill not only result in individual suffering and loss of life but also contribute to the permanent destruction of entire oceanic cultures. These complex learned traditions cannot be replicated after they are gone, and this makes them invaluable.

On December 10, 1948 the General Assembly of the United Nations adopted and proclaimed the Universal Declaration of Human Rights, which acknowledges basic rights to life, liberty, and freedom of cultural expression. We recognize these foundational rights for humans as we are sentient, complex beings. It is abundantly clear that our actions have violated these same rights for other sentient, complex and cultural beings in the oceans – the dolphins and whales. We should use this tragedy as an opportunity to formally recognize societal and legal rights for them so that their lives and their unique cultures are better protected in the future.

Recently, a meeting of scientists, philosophers, legal experts and dolphin and whale advocates in Helsinki, Finland, drafted a Declaration of Rights for Cetaceans, a global call for basic rights for dolphins and whales. You can read more about this effort and become a signatory here: http://cetaceanconservation.com.au/cetaceanrights/. Given the destruction of dolphin and whale lives and cultures caused by the ongoing environmental disaster in the Gulf, we think this is one of the ways we can commit ourselves to working towards a future that will be a lifeboat for humans, dolphins and whales, and the rest of nature.

Perhaps you think I’m crazy or naive to pose this question. But more and more over the past few months, I’ve begun to wonder whether this idea may not be too far off the mark.

Not because of some half-baked theory about a global conspiracy or anything of the sort but simply based upon the behavior of many multinational corporations recently and the effects this behavior is having upon people everywhere.

Again, you may disagree, but my perspective on these financial giants is that they are essentially predatory in nature and that their prey is any dollar in commerce they can possibly absorb. The problem is that, for anyone in the modern or even quasi-modern world, money is nearly as essential as plasma when it comes to our well-being.

It has been clearly demonstrated again and again — all over the world — that when a population becomes sufficiently destitute that the survival of the individual is actually threatened, violence inevitably occurs. On a large enough scale this sort of violence can erupt into civil war, and wars, as we all know too well, can spread like a virus across borders, even oceans.

Until fairly recently, corporations were not big enough, powerful enough, or sufficiently meshed with our government to push the US population to the point of violence. Perhaps we’re not there yet, but between the bank bailout, the housing crisis, the bailouts of the automakers, the subsidies to the big oil companies, and ten thousand other government gifts coming straight from the taxpayer, I fear we are getting ever closer to the brink.

Who knows — it might just take one little thing — like that new one-dollar charge many stores have suddenly begun instituting for any purchase using an ATM or credit card — to push us over the edge.

The last time I got hit with one of these dollar charges, I thought about the ostensible reason for it: the credit card company is now charging the merchant more per transaction, so the merchant is passing that cost on to you. However, this isn’t the whole story. The merchant is actually charging you more than the transaction costs him, and even if this is a violation of either the law or the terms-of-service agreement between the card company and the merchant, the credit card company looks the other way, because the merchant’s surcharge inflates the transaction total and thus increases the company’s own profits even further.
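
To make the arithmetic concrete, here is a minimal sketch in Python. The fee schedule (2% plus 30 cents per transaction) and the purchase amount are assumptions for illustration only, not figures reported anywhere in this piece:

    # Illustrative arithmetic only: the fee rates and the $1.00 surcharge are
    # assumed values, not figures from this article.
    def card_company_fee(amount, pct_fee=0.02, flat_fee=0.30):
        """Fee the card company collects on a transaction of this size."""
        return amount * pct_fee + flat_fee

    purchase = 5.00
    surcharge = 1.00  # the flat charge some merchants now add for card use

    fee_without = card_company_fee(purchase)           # merchant absorbs the fee
    fee_with = card_company_fee(purchase + surcharge)  # surcharge passed to you

    extra_fee = fee_with - fee_without        # merchant's true added cost: $0.02
    merchant_keeps = surcharge - extra_fee    # merchant pockets $0.98

    print(f"card company's extra take: ${extra_fee:.2f}")
    print(f"merchant's cut of the $1 charge: ${merchant_keeps:.2f}")

Under these assumed numbers the merchant’s real added cost is about two cents, so nearly all of the dollar is markup, and the card company’s fee grows too; both sides come out ahead.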

Death by big blows or a thousand cuts — the question is: will we be forced to do something about it before the big corporations eat us alive?


Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, increasing concern has arisen with the topic of “friendly AI,” coupled with the idea that we should do something about this now, not after a potentially deadly situation has begun to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long-lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.
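
As a rough illustration of how such a panel of case models might be represented, here is a minimal sketch in Python; the fields and the coherence scores are my own assumptions for illustration, not anything specified above:

    # A sketch of "ruling ideas" reified as a panel of case models. The
    # RulingIdea fields and the coherence scoring are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class RulingIdea:
        name: str          # e.g. "minimize pollution"
        exemplars: list    # concrete cases that transmit the ideal
        coherence: float   # fit with the agent's other beliefs, 0..1

    def most_coherent(panel):
        """Pick the idea a coherence-seeking intellect would favor."""
        return max(panel, key=lambda idea: idea.coherence)

    panel = [
        RulingIdea("respect other life forms", ["habitat preservation"], 0.9),
        RulingIdea("maximize short-term gain", ["strip mining"], 0.4),
    ]
    print(most_coherent(panel).name)  # -> respect other life forms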

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave in accordance with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
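
A minimal sketch of that calculus, with the payoffs and detection probability assumed purely for illustration:

    # Expected-value comparison of faithfulness vs. betrayal. All numbers
    # here are assumptions for illustration, not derived from the text.
    def betrayal_ev(gain, penalty, p_detect):
        """Expected payoff of betrayal: gain if undetected, penalty if caught."""
        return (1 - p_detect) * gain - p_detect * penalty

    faithful_payoff = 1.0                                  # steady cooperative benefit
    ev = betrayal_ev(gain=3.0, penalty=5.0, p_detect=0.1)  # 0.9*3 - 0.1*5 = 2.2

    print("betray" if ev > faithful_payoff else "stay faithful")  # -> betray

When the probability of detection is low, the expected value of betrayal exceeds the steady payoff of faithfulness, which is precisely the worry raised above.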

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Otherwise we create a “non-complementary” situation, in which what is true for one party, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. To how wide a circle does this kindness obligation extend, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which have to be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics have to speak out against operation of the LHC.

The submission includes assessments from experts in fields markedly missing from the physicist-only LSAG safety report — risk assessment, law, ethics and statistics. Further weight is added by the fact that these experts hold university posts, at Griffith University, the University of North Dakota and Oxford University respectively. In particular, the critics argue that CERN’s official safety report lacks independence — all its authors have a prior interest in the LHC running — and that it was written by physicists alone, when modern risk-assessment guidelines recommend involving risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment and additional astrophysical experiments – Earth based and in the atmosphere – for a better empirical verification of the alleged comparability of particle collisions under the extreme artificial conditions of the LHC experiment and relatively rare natural high energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”

Even under the greatly improved safety conditions proposed above, big jumps in energy — an increase by a factor of three over present records is currently planned — should be avoided on principle unless the results of previous runs are carefully analyzed before each increase.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the described studies) was submitted by several critical groups and is supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-to-cern-council-and-member-states-on-lhc-risks_lhc-kritik-et-al_march-17-2010.pdf

The answer received so far does not address these arguments and studies, but merely repeats that, in the operators’ view, everything appears sufficient, as endorsed by a Nobel Prize winner in physics. The LHC restart, with record collisions at three times the previous energy, is presently scheduled for March 30, 2010.

An official, detailed and readily understandable paper and correspondence, with many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-revision-of-lhc-risks_concerned-int.pdf

More info:
http://lhc-concern.info/

Abstract:

President Obama disbanded the President’s Council on Bioethics after it questioned his policy on embryonic stem cell research. White House press officer Reid Cherlin said that this was because the Council favored discussion over developing a shared consensus. This column lists a number of problems with Obama’s decision, and with his position on the most controversial bioethical issue of our time.

Bioethics and the End of Discussion

In early June, President Obama disbanded the President’s Council on Bioethics. According to White House press officer Reid Cherlin, this was because the Council was designed by the Bush administration to be “a philosophically leaning advisory group” that favored discussion over developing a shared consensus. http://www.nytimes.com/2009/06/18/us/politics/18ethics.html?_r=2

Shared consensus? Like the shared consensus about the Mexico City policy, government funding of embryonic stem cell research for new lines, or taxpayer-funded abortions? All this despite the fact that 51% of Americans consider themselves pro-life? By allowing publicly funded embryonic stem cell research only on existing lines, President Bush made a decision that nobody was happy with, but at least it was an honest compromise, and given the principle of double effect, an ethically acceptable one.

President Obama will appoint a new bioethics commission, one with a new mandate and that “offers practical policy options,” Mr. Cherlin said.

Practical policy options? Like the ones likely to be given by Obama’s new authoritative committee to expediently promote the license to kill the most innocent and vulnerable? But that is only the start. As the baby boomers bankrupt Social Security, there will be a strong temptation to expand Obama’s mandate to include the aging “useless mouths”. Oregon and the Netherlands have already shown the way—after all, a suicide pill is much cheaper than palliative care, and it’s much more cost-effective to kill patients rather than care for them. (http://www.euthanasia.com/argumentsagainsteuthanasia.html)

Evan Rosa details many problems with Obama’s decision to disband the Council (http://www.cbc-network.org/research_display.php?id=388), but there are additional disturbing implications:

First, democracies are absolutely dependent on discussion. Dictators have always suppressed free discussion on “sensitive” subjects because it is the nature of evil to fear criticism. This has been true here in the United States, too—in the years leading up to the Civil War, Southern senators and representatives tried to squelch all discussion on slavery. Maybe their consciences bothered them.

Second, no matter how well-meaning the participants may be, consensus between metaphysically opposed parties is impossible in some matters (such as the humanity of a baby a few months before he or she is born, the existence of God, consequentialist vs. deontological reasoning, etc.). The only way to get “consensus” in such situations is by exercising the monopoly of force owned by the government.

Third, stopping government-sponsored discussion on bioethics sets a dangerous precedent for the ethics surrounding nanotechnology. There are numerous ethical issues that nanotechnology is raising, and will continue to raise, that desperately require significant amounts of detailed discussion and deep thinking.

Tyrants begin by marginalizing anyone who disagrees with them, calling them hate-mongering obstructionists (or worse). In addition, they will use governmental power to subdue any who dare oppose their policies.

The details of the dismissal of the Council clearly show this tendency, though the Council members are not acting very subdued. As one of them supposedly put it, “Instead of meeting at seminars, now we’ll be meeting on Facebook.”

On March 9, Obama removed restrictions on federal funding for research on embryonic stem cell lines derived by means that destroy human embryos.

On March 25, ten out of the eighteen members of the Council questioned Obama’s policy (http://www.thehastingscenter.org/Bioethicsforum/Post.aspx?id=3298).

In the second week of June, Obama fired them all.

Could it be that Obama doesn’t want discussion? We can see what happens if someone gives him advice that he doesn’t want.

Oprah Winfrey’s favorite physician, Dr. Mehmet Oz, told her and Michael J. Fox that “the stem cell debate is dead” because “the problem with embryonic stem cells is that [they are]… very hard to control, and they can become cancerous” (http://www.oprah.com/media/20090319-tows-dr-oz-brain). Besides, induced pluripotent cells can become embryonic, thereby negating the very difficult necessity of cloning.

So “harvesting” embryonic stem cells is not only ethically problematic (i.e. wrong), but it is also scientifically untenable. Obama supports it anyway.

Maybe he could fire Oprah.

Tihamer Toth-Fejel, MS
General Dynamics Advanced Information Systems
Michigan Research and Development Center

About

DIYbio is an organization that aims to help make biology a worthwhile pursuit for citizen scientists, amateur biologists, and DIY biological engineers who value openness and safety. This will require mechanisms for amateurs to increase their knowledge and skills, access to a community of experts, the development of a code of ethics, responsible oversight, and leadership on issues that are unique to doing biology outside of traditional professional settings.


Get Involved

You can read about current events and developments in the DIYbio community by reading or subscribing to the blog.

Get in contact or get involved through discussions on our mailing list, or by attending or hosting a local DIYbio meetup.

The mailing list is the best way to find out what’s happening with DIYbio right now. There is also a low-traffic announce list.

Find out about our featured projects, including our plans for public wetlabs, global FlashLab experiments, and our innovation of next-gen lab equipment on the Projects page.

Ugolog Creates Surveillance Website To Watch Anyone, Anywhere

Written on April 28, 2009 by Keith Kleiner


What if people all over the world randomly decided to set up motion-detection webcams and then send the feeds to a single website that would centralize the video data for anyone to search, view, and manipulate? Hot on the heels of our story yesterday about the implications of cameras recording everything in our lives comes a website called Ugolog that does exactly this. The concept is both spooky and captivating all at once. The privacy implications are just out of control, opening the door to all sorts of immoral and illegal invasions of people’s privacy. On the other hand, the power and usefulness of such a network is extremely compelling.

When you go to the Ugolog website you are immediately impressed with the simplicity of the site (I sure hope they keep it this way!). No advertisements, no stupid gimmicks, no complicated interface. The site offers a bare-bones yet elegant design that allows you to do one thing quickly and easily: set up a motion-detecting webcam and send the feed to Ugolog. No software is required, only a web browser and a properly configured camera. Don’t know how to set up the camera? No problem! The site has tutorials that tell you everything you need to know. Once Ugolog has a feed from one or more of your cameras, the data will be available for you and anyone else in the world to view along with all of the other feeds on the site.

Photo from Ugolog “how to build a spy camera” manual

No big deal, many will say! It’s just like Justin.tv — the website that already carries thousands of live video feeds from all over the world, boasting more than 80,000 simultaneous viewers earlier today. Yet if you think about this a bit more, you will see that there is indeed a difference between Ugolog and Justin.tv. The difference is their focus — the type of content that the two sites will offer.

Justin.tv offers all sorts of video feeds, including news, sports, random idiots doing stupid random things, and pretty much anything else you can imagine. This is a useful and powerful model, yet Justin.tv’s focus on serving up all kinds of video leaves it open to attack by more narrowly focused sites. Ugolog focuses only on surveillance video. By targeting this specific category of video the site just might be able to carve out a unique niche in the online video space that can really gain some traction. Justin.tv could of course create a category on its site called “surveillance”, but a category on Justin.tv devoted to surveillance might have difficulty competing with Ugolog’s website, community, and employees devoted completely to surveillance.

Highlighting the specialization available on the site, Ugolog founder Alexander Uslontsev says, “Compared to sites like Justin.tv and Ustream.com, that work with webcam only, Ugolog works with webcams AND ‘professional’ security cameras that can upload pictures via FTP or HTTP. In this case Ugolog acts only as ‘dropbox’ for images and expects all motion detection and scheduling to be done in camera.”
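
For readers curious what that camera-side workflow might look like, here is a minimal sketch in Python using OpenCV frame differencing and an HTTP upload. The endpoint URL and sensitivity threshold are placeholders; Ugolog’s actual upload API is not documented here:

    # Camera-side motion detection and snapshot upload, as a sketch only.
    # UPLOAD_URL is a hypothetical placeholder, not Ugolog's real endpoint.
    import cv2
    import requests

    UPLOAD_URL = "https://example.com/upload"
    THRESHOLD = 500_000  # motion sensitivity; tune per camera (assumed value)

    cap = cv2.VideoCapture(0)  # first attached webcam
    ok, frame = cap.read()
    if not ok:
        raise SystemExit("no camera found")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Frame differencing: a large total pixel change means something moved.
        if cv2.absdiff(gray, prev_gray).sum() > THRESHOLD:
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                requests.post(UPLOAD_URL,
                              files={"snapshot": ("snap.jpg", jpeg.tobytes(),
                                                  "image/jpeg")})
        prev_gray = gray

    cap.release()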

Ugolog is in beta now and has only recently launched, but the site could easily take off like a rocket in a short amount of time. The idea is powerful. The site is easy, simple, and free. Add this all up and you have a solid recipe for explosive growth in users and content.

Success is not guaranteed, however. Explosive growth can be its own curse, being extremely difficult and expensive to keep up with. Video is especially resource-hungry and may keep the folks over at Ugolog (and their wallets) quite overwhelmed.

Another potential stumbling block is the intense legal scrutiny that the site will certainly encounter. We can envision massive feeds of video that invade privacy and break the law showing up on the Ugolog website, creating a virtual feast for lawyers everywhere. One way around this legal mess is probably to allow comprehensive controls over who can see what. Indeed, this appears to be the case at the moment, as most (all?) feeds seem to be currently viewable only by the owner. Yet clearly in the future it will take only the click of a single checkbox to “open a feed” to the public.

Focusing on the positive side for a moment, there are several interesting applications that can come from a site like Ugolog. One such application would be the fulfillment of truly legitimate surveillance needs. Ugolog allows individuals to quickly setup a powerful surveillance system for their own homes. Taking this a step further, perhaps a neighborhood would setup its own surveillance network to increase safety and monitor for theft and other crimes. Consider also more academic applications, such as researchers setting up cameras to monitor glacier growth, animal species patterns, and so on.

Of course the negative and destructive potential of surveillance a la Ugolog is hard to deny. Yet whether we like it or not, ubiquitous video is here to stay. We are increasingly likely to fall under the surveillance of one or more cameras multiple times throughout the day. Ugolog may come and go, but the trend cannot be stopped. Fight the trend if you want, but I for one intend to embrace it!

(Crossposted on the blog of Starship Reckless)

Working feverishly on the bench, I’ve had little time to closely track the ongoing spat between Dawkins and Nisbet. Others have dissected this conflict and its ramifications in great detail. What I want to discuss is whether scientists can or should represent their fields to non-scientists.

There is more than a dollop of truth in the Hollywood cliché of the tongue-tied scientist. Nevertheless, scientists can explain at least their own domain of expertise just fine, even become major popular voices (Sagan, Hawking, Gould — and, yes, Dawkins; all white Anglo men, granted, but at least it means they have fewer gatekeepers questioning their legitimacy). Most scientists don’t speak up because they’re clocking infernally long hours doing first-hand science and/or training successors, rather than trying to become middle(wo)men for their disciplines.


Experimental biologists, in particular, face unique challenges: not only are they hobbled by ever-decreasing funds for basic research while still expected to deliver as before; they are also beset by anti-evolutionists, the last niche that science deniers can occupy without being classed with geocentrists, flat-earthers and exorcists. Additionally, they are faced with the complexity (both intrinsic and social) of the phenomenon they’re trying to understand, whose subtleties preclude catchy soundbites and get-famous-quick schemes.

Last but not least, biologists have to contend with self-anointed experts, from physicists to science fiction writers to software engineers to MBAs, who believe they know more about the field than its practitioners. As a result, they have largely left the public face of their science to others, in part because its benefits — the quadrupling of the human lifespan from antibiotics and vaccines, to give just one example — are so obvious as to make advertisement seem embarrassing overkill.

As a working biologist, who must constantly “prove” the value of my work to credentialed peers as well as laypeople in order to keep doing basic research on dementia, I’m sick of accommodationists and appeasers. Gould, despite his erudition and eloquence, did a huge amount of damage when he proposed his non-overlapping magisteria. I’m tired of self-anointed flatulists — pardon me, futurists — who waft forth on biological topics they know little about, claiming that smatterings gleaned largely from the Internet make them understand the big picture (much sexier than those plodding, narrow-minded, boring experts!). I’m sick and tired of being told that I should leave the defense and promulgation of scientific values to “communications experts” who use the platform for their own aggrandizement.

Nor are non-scientists served well by condescending pseudo-interpretations that treat them like ignorant, stupid children. People need to view the issues in all their complexity, because complex problems require nuanced solutions, long-term effort and incorporation of new knowledge. Considering that the outcomes of such discussions have concrete repercussions on the long-term viability prospects of our species and our planet, I staunchly believe that accommodationism and silence on the part of scientists is little short of immoral.

Unlike astronomy and physics, biology has been reluctant to present simplified versions of itself. Although ours is a relatively young science whose predictions are less derived from general principles, our direct and indirect impact exceeds that of all others. Therefore, we must have articulate spokespeople, rather than delegate discussion of our work to journalists or politicians, even if they’re well-intentioned and well-informed.

Image: Prometheus, black-figure Spartan vase ~500 BCE.


New Scientist — March 10, 2009, by A. C. Grayling

IN THIS age of super-rapid technological advance, we do well to obey the Boy Scout injunction: “Be prepared”. That requires nimbleness of mind, given that the ever accelerating power of computers is being applied across such a wide range of applications, making it hard to keep track of everything that is happening. The danger is that we only wake up to the need for forethought when in the midst of a storm created by innovations that have already overtaken us.

We are on the brink, and perhaps to some degree already over the edge, in one hugely important area: robotics. Robot sentries patrol the borders of South Korea and Israel. Remote-controlled aircraft mount missile attacks on enemy positions. Other military robots are already in service, and not just for defusing bombs or detecting landmines: a coming generation of autonomous combat robots capable of deep penetration into enemy territory raises questions about whether they will be able to discriminate between soldiers and innocent civilians. Police forces are looking to acquire miniature Taser-firing robot helicopters. In South Korea and Japan the development of robots for feeding and bathing the elderly and children is already advanced. Even in a robot-backward country like the UK, some vacuum cleaners autonomously sense their way around furniture. A driverless car has already negotiated its way through Los Angeles traffic.

In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?

Most thinking about the implications of robotics tends to take sci-fi forms: robots enslave humankind, or beautifully sculpted humanoid machines have sex with their owners and then post-coitally tidy the room and make coffee. But the real concern lies in the areas to which the money already flows: the military and the police.

A confused controversy arose in early 2008 over the deployment in Iraq of three SWORDS armed robotic vehicles carrying M249 machine guns. The manufacturer of these vehicles said the robots were never used in combat and that they were involved in no “uncommanded or unexpected movements”. Rumours nevertheless abounded about the reason why funding for the SWORDS programme abruptly stopped. This case prompts one to prick up one’s ears.

Media stories about Predator drones mounting missile attacks in Afghanistan and Pakistan are now commonplace, and there are at least another dozen military robot projects in development. What are the rules governing their deployment? How reliable are they? One sees their advantages: they keep friendly troops out of harm’s way, and can often fight more effectively than human combatants. But what are the limits, especially when these machines become autonomous?

The civil liberties implications of robot devices capable of surveillance involving listening and photographing, conducting searches, entering premises through chimneys or pipes, and overpowering suspects are obvious. Such devices are already on the way. Even more frighteningly obvious is the threat posed by military or police-type robots in the hands of criminals and terrorists.


There needs to be a considered debate about the rules and requirements governing all forms of robot devices, not a panic reaction when matters have gone too far. That is how bad law is made — and on this issue time is running out.

A. C. Grayling is a philosopher at Birkbeck, University of London