
By — Wired

Illustration: dzima1/Getty

When Google chief financial officer Patrick Pichette said the tech giant might bring 10 gigabits per second internet connections to American homes, it seemed like science fiction. That’s about 1,000 times faster than today’s home connections. But for NASA, it’s downright slow.

While the rest of us send data across the public internet, the space agency uses a shadow network called ESnet, short for Energy Sciences Network, a set of private pipes that has demonstrated cross-country data transfers of 91 gigabits per second, the fastest of its type ever reported.
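To put the speeds quoted above side by side, here is a back-of-the-envelope sketch. The dataset size is a hypothetical example, and these are raw line rates; real transfers lose some throughput to protocol overhead.

```python
# Back-of-the-envelope comparison of the speeds quoted above. Raw line
# rates only; real transfers lose some throughput to protocol overhead.

def transfer_seconds(size_bytes: float, rate_gbps: float) -> float:
    """Seconds needed to move size_bytes at rate_gbps gigabits per second."""
    return size_bytes * 8 / (rate_gbps * 1e9)

dataset = 1e12  # a hypothetical 1 TB science dataset

print(f"10 Mbps home link: {transfer_seconds(dataset, 0.01) / 3600:.0f} hours")
print(f"10 Gbps (Google):  {transfer_seconds(dataset, 10):.0f} seconds")
print(f"91 Gbps (ESnet):   {transfer_seconds(dataset, 91):.0f} seconds")
```

At 91 Gbps the same terabyte that ties up a home link for most of a day moves in under a minute and a half.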

Read more

Kurzweil AI

University of Washington engineers have designed a clever new communication system called Wi-Fi backscatter that uses ambient radio frequency signals as a power source for battery-free devices (such as temperature sensors or wearable technology) and also reuses the existing Wi-Fi infrastructure to provide Internet connectivity for these devices.

“If Internet of Things devices are going to take off, we must provide connectivity to the potentially billions of battery-free devices that will be embedded in everyday objects,” said Shyam Gollakota, a UW assistant professor of computer science and engineering.
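The core trick, as described above, is that the tag never generates a signal of its own: it toggles between reflecting and absorbing ambient Wi-Fi, and a nearby receiver reads those toggles as small swings in signal strength. The sketch below simulates that idea; all of the numbers (baseline RSSI, reflection bump, noise) are invented for illustration, not measurements from the UW system.

```python
import random

# Toy simulation of the backscatter idea: a battery-free tag encodes bits
# by either reflecting or absorbing ambient Wi-Fi, which a nearby receiver
# sees as small swings in received signal strength (RSSI). All numbers
# here are invented for illustration.

BASELINE_RSSI = -40.0   # ambient Wi-Fi level at the receiver, dBm (assumed)
REFLECT_DELTA = 0.75    # extra signal when the tag reflects (assumed)
NOISE = 0.1             # receiver noise, well below the detection margin

def tag_transmit(bits):
    """The tag toggles its antenna: reflect for 1, absorb for 0."""
    return [BASELINE_RSSI + (REFLECT_DELTA if b else 0.0)
            + random.uniform(-NOISE, NOISE)
            for b in bits]

def receiver_decode(samples):
    """The receiver thresholds RSSI midway between the two levels."""
    threshold = BASELINE_RSSI + REFLECT_DELTA / 2
    return [1 if s > threshold else 0 for s in samples]

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert receiver_decode(tag_transmit(message)) == message
```

Because the tag only switches a reflector on and off, its power budget is orders of magnitude below that of an active radio, which is what makes battery-free operation plausible.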

Read more

By Michael Harris — Wired


Recently, my two-year-old nephew Benjamin came across a copy of Vanity Fair abandoned on the floor. His eyes scanned the glossy cover, which shone less fiercely than the iPad he is used to but had a faint luster of its own. I watched his pudgy thumb and index finger pinch together and spread apart on Bradley Cooper’s smiling mug. At last, Benjamin looked over at me, flummoxed and frustrated, as though to say, “This thing’s broken.”

Search YouTube for “baby” and “iPad” and you’ll find clips featuring one-year-olds attempting to manipulate magazine pages and television screens as though they were touch-sensitive displays. These children are one step away from assuming that such technology is a natural, spontaneous part of the material world. They’ll grow up thinking about the internet with the same nonchalance that I hold toward my toaster and teakettle. I can resist all I like, but for Benjamin’s generation resistance is moot. The revolution is already complete.

Read more

Julian Assange’s 2014 book When Google Met WikiLeaks consists of essays authored by Assange and, more significantly, the transcript of a discussion between Assange and Google’s Eric Schmidt and Jared Cohen.
As should be of greatest interest to technology enthusiasts, we revisit some of the uplifting ideas from Assange’s philosophy that I picked out from among the otherwise dystopian high-tech future predicted in Cypherpunks (2012). Assange sees the Internet as “transitioning from an apathetic communications medium into a demos – a people” defined by shared culture, values and aspirations (p. 10). This idea, in particular, I can identify with.
Assange’s description of how digital communication is “non-linear” and compromises traditional power relations is excellent. He notes that relations defined by physical resources and technology (unlike information), however, continue to be static (p. 67). I highlight this as important for the following reason. It profoundly strengthens the hypothesis that state power will also eventually recede and collapse in the physical world, with the spread of personal factories and personal enhancement technologies (analogous to personal computers) like 3-d printers and synthetic life-forms, as explained in my own techno-liberation thesis and in the work of theorists like Yannick Rumpala.
When Google Met WikiLeaks tells, better than any other text, the story of the clash of philosophies between Google and WikiLeaks – despite Google’s Eric Schmidt assuring Assange that he is “sympathetic to you, obviously”. Specifically, Assange draws our attention to the worryingly close relationship between Google and the militarized US police state in the post-9/11 era. Fittingly, large portions of the book (p. 10–16, 205–220) are devoted to giving Assange’s account of the now exposed world-molesting US regime’s war on WikiLeaks and its cowardly attempts to stifle transparency and accountability.
The publication of When Google Met WikiLeaks is really a reaction to Google chairman Eric Schmidt’s 2013 book The New Digital Age, co-authored with Google Ideas director Jared Cohen. Unfortunately, I have not studied that book, although I intend to review it in due course as a follow-on to this one. It is safe to say that Assange’s own review in the New York Times in 2013 was crushing enough. However, nothing could be more devastating to its pro-US thesis than the revelations of widespread illegal domestic spying exposed by Edward Snowden, which shook the US and the entire world shortly after The New Digital Age was released.
Assange’s review of The New Digital Age is reprinted in his book (p. 53–60). In it, he describes how Schmidt and Cohen are in fact little better than State Department cronies (p. 22–25, 32, 37–42), who first met in Iraq and were “excited that consumer technology was transforming a society flattened by United States military occupation”. In turn, Assange’s review flattens both of these apologists and their feeble pretense to be liberating the world, tearing their book apart as a “love song” to a regime, which deliberately ignores the regime’s own disgraceful record of human rights abuses and tries to conflate US aggression with free market forces (p. 201–203).
Cohen and Schmidt, Assange tells us, are hypocrites, feigning concerns about authoritarian abuses that they secretly knew to be happening in their own country with Google’s full knowledge and collaboration, yet did nothing about (p. 58, 203). Assange describes the book, authored by Google’s best, as a shoddily researched, sycophantic dance of affection for US foreign policy, mocking the parade of praise it received from some of the greatest villains and war criminals still at large today, from Madeleine Albright to Tony Blair. The authors, Assange claims, are hardly sympathetic to the democratic internet, as they “insinuate that politically motivated direct action on the internet lies on the terrorist spectrum” (p. 200).
As with Cypherpunks, most of Assange’s book consists of a transcript based on a recording that can be found at WikiLeaks, and in drafting this review I listened to the recording rather than reading the transcript in the book. The conversation moves in what I thought to be three stages, the first addressing how WikiLeaks operates and the kind of politically beneficial journalism promoted by WikiLeaks. The second stage of the conversation addresses the good that WikiLeaks believes it has achieved politically, with Assange claiming credit for a series of events that led to the Arab Spring and key government resignations.
When we get to the third stage of the conversation, something of a clash becomes evident between the Google chairman and the WikiLeaks editor-in-chief, as Schmidt and Cohen begin to posit hypothetical scenarios in which WikiLeaks could potentially cause harm. The disagreement evident in this part of the discussion is echoed in Schmidt and Cohen’s book: they allege that “Assange, specifically” (or any other editor) lacks sufficient moral authority to decide what to publish. Instead, we find special pleading from Schmidt and Cohen for the state: while regime control over information in other countries is bad, US regime control over information is good (p. 196).
According to the special pleading of Google’s top executives, only one regime – the US government and its secret military courts – has sufficient moral authority to make decisions about whether a disclosure is harmful or not. Assange points out that Google’s brightest seem eager to avoid explaining why this one regime should have such privilege, and others should not. He writes that Schmidt and Cohen “will tell you that open-mindedness is a virtue, but all perspectives that challenge the exceptionalist drive at the heart of American foreign policy will remain invisible to them” (p. 35).
Assange makes a compelling argument that Google is not immune to the coercive power of the state in which it operates. We need to stop mindlessly chanting “Google is different. Google is visionary. Google is the future. Google is more than just a company. Google gives back to the community. Google is a force for good” (p. 36). It’s time to tell it how it is, and Assange knows just how to say it.
Google is becoming a force for bad, and is little different from any other massive corporation led by ageing cronies of the narrow-minded state that has perpetrated the worst outrages against the open and democratic internet. Google “Ideas” are myopic, close-minded, and nationalist (p. 26), and the corporate-state cronies who think them up have no intention to reduce the number of murdered journalists, torture chambers and rape rooms in the world or criticize the regime under which they live. Google’s politics are about keeping things exactly as they are, and there is nothing progressive about that vision.
To conclude with what was perhaps the strongest point in the book, Assange quotes NYT columnist Tom Friedman, who warned as early as 1999 that Silicon Valley is led less by the mercurial “hidden hand” of the market than by the “hidden fist” of the US state. Assange argues, further, that the close relations between Silicon Valley and the regime in Washington indicate Silicon Valley is now like a “velvet glove” on the “hidden fist” of the regime (p. 43). Similarly, Assange warns those of us of a libertarian persuasion that the danger posed by the state has two horns – one government, the other corporate – and that limiting our attacks to one of them means getting gored on the other. Despite its positive public image, Google’s (and possibly also Facebook’s) ties with the US state for the purpose of monitoring the US public deserve a strong public backlash.

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn (the model for Stanley Kubrick’s Dr Strangelove) routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase that the British philosopher Dylan Evans has coined for people’s demonstrated capacity to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition or have acquired their values before this transition would draw a similar conclusion.

As the old social bonds unravel, philosopher and member of the Lifeboat Foundation’s advisory board Professor Steve Fuller asks: can we balance free expression against security?

justice

Justice has been always about modes of interconnectivity. Retributive justice – ‘eye for an eye’ stuff – recalls an age when kinship was how we related to each other. In the modern era, courtesy of the nation-state, bonds have been forged in terms of common laws, common language, common education, common roads, etc. The internet, understood as a global information and communication infrastructure, is both enhancing and replacing these bonds, resulting in new senses of what counts as ‘mine’, ‘yours’, ‘theirs’ and ‘ours’ – the building blocks of a just society…

Read the full article at IAI.TV


It’s impossible to overstate how much the Internet matters. It has forever altered how we share information and store it for safekeeping, how we communicate with political leaders, how we document atrocities and hold wrongdoers accountable, how we consume entertainment and create it, even how we meet others and maintain relationships. Our society is strengthened and made more democratic by the open access the Internet enables. But the Internet as we know it is at risk from a variety of threats ranging from cybercrime to its very infrastructure, which wasn’t built to withstand the complications our dependence upon it causes.

We asked some of the Net’s biggest stakeholders and thought leaders to lay out ways we can maintain the Internet as a home for innovation, community, and freely exchanged information. We are excited to present you with these six takes on what could go wrong—and how to bring us back from the brink.

Read more

Aalborg University

A new study uses a four-minute mobile video as an example. The method used by the Danish and US researchers resulted in the video being downloaded five times faster than with state-of-the-art technology. The video also streamed without interruptions; in comparison, the original video got stuck 13 times along the way.

“This has the potential to change the entire market. In experiments with our network coding of Internet traffic, equipment manufacturers experienced speeds that are five to ten times faster than usual. And this technology can be used in satellite communication, mobile communication and regular Internet communication from computers,” says Frank Fitzek, professor in the Department of Electronic Systems and one of the pioneers in the development of network coding.
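The idea behind network coding is that the sender transmits random combinations of the original packets rather than the packets themselves, so a receiver can reconstruct the data from any sufficiently large set of combinations instead of waiting for retransmissions of specific lost packets. The sketch below is a toy version: real deployments code over larger finite fields and real payloads, while here GF(2) (plain XOR) and small integers keep the example short.

```python
import random

# Toy random linear network coding over GF(2). The sender streams random
# XOR combinations of k original packets; a receiver recovers them from
# ANY k linearly independent combinations, which is what makes the scheme
# robust to packet loss.

def encode(packets):
    """Infinite stream of (coefficient_bitmask, coded_payload) pairs."""
    k = len(packets)
    while True:
        coeffs = random.randrange(1, 1 << k)    # random nonzero combination
        payload = 0
        for i in range(k):
            if coeffs >> i & 1:
                payload ^= packets[i]
        yield coeffs, payload

def decode(stream, k):
    """Collect k independent coded packets, then solve over GF(2)."""
    basis = {}                                  # pivot bit -> (coeffs, payload)
    for coeffs, payload in stream:
        while coeffs:
            pivot = coeffs & -coeffs            # lowest set bit
            if pivot not in basis:
                basis[pivot] = (coeffs, payload)
                break
            pc, pp = basis[pivot]
            coeffs ^= pc                        # cancel that pivot
            payload ^= pp
        if len(basis) == k:
            break
    # Back-substitution: clear each pivot from every other row, so each
    # row ends up holding exactly one original packet.
    for pivot in sorted(basis, reverse=True):
        pc, pp = basis[pivot]
        for q in list(basis):
            if q != pivot and basis[q][0] & pivot:
                qc, qp = basis[q]
                basis[q] = (qc ^ pc, qp ^ pp)
    return [basis[1 << i][1] for i in range(k)]

random.seed(2)
originals = [0xDEAD, 0xBEEF, 0xCAFE, 0x1234]
assert decode(encode(originals), len(originals)) == originals
```

Because any k independent combinations suffice, it does not matter which particular transmissions were lost along the way, and intermediate nodes can even re-combine packets in flight without understanding their contents.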

Read more

Julia Angwin — Nation of Change
A new, extremely persistent type of online tracking is shadowing visitors to thousands of top websites, from WhiteHouse.gov to YouPorn.com.

First documented in a forthcoming paper by researchers at Princeton University and KU Leuven University in Belgium, this type of tracking, called canvas fingerprinting, works by instructing the visitor’s Web browser to draw a hidden image. Because each computer draws the image slightly differently, the images can be used to assign each user’s device a number that uniquely identifies it.
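The mechanism described above can be modeled in a few lines: identical drawing commands yield subtly different pixels on different machines (fonts, GPU, antialiasing), and hashing those pixels produces a stable per-device identifier. In the sketch below, the “renderer” is a stand-in for a real browser canvas, and the device-quirk parameter is invented purely for illustration.

```python
import hashlib

# Toy model of canvas fingerprinting: hash the pixels a device renders
# for a fixed hidden image. The "renderer" is a stand-in for a browser
# canvas; the quirk parameter is an invented proxy for per-device
# differences in fonts, GPU, and antialiasing.

def render(text: str, device_quirk: int) -> bytes:
    """Pretend canvas renderer whose pixel bytes depend on device quirks."""
    return bytes((ord(c) + device_quirk) % 256 for c in text)

def fingerprint(pixels: bytes) -> str:
    """Trackers hash the rendered pixels into a compact identifier."""
    return hashlib.sha256(pixels).hexdigest()[:16]

PROBE = "Cwm fjordbank glyphs vext quiz"   # pangram-style hidden test string

device_a = fingerprint(render(PROBE, device_quirk=0))
device_b = fingerprint(render(PROBE, device_quirk=3))

assert device_a != device_b                        # devices distinguishable
assert device_a == fingerprint(render(PROBE, 0))   # but each ID is stable
```

Unlike a cookie, this identifier is recomputed from the device itself on every visit, which is why clearing browser storage does not shake the tracker off.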

Read more

By — gizmag
The Philips LED lighting can be controlled and monitored using the CityTouch control panel
LED lighting offers a host of benefits for cities, such as reduced energy usage and costs. For Buenos Aires, which is in the process of having its lighting infrastructure upgraded, one of the benefits is the increased level of control it provides. Gizmag took a look at the technology being used.

It was announced towards the end of last year that Philips had been selected to replace 91,000 street lights across Buenos Aires with LED lighting. That’s more than 70 percent of the city’s lighting. Philips says that it is the biggest city deployment of its kind. A total of 28,000 lights have now been replaced and are already being controlled remotely.

Read more