
Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical issues around AI and machine consciousness from the early 90s on, as well as the prospects, dangers and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly the same.

Others who have obviously also thought through various potential developments have generated excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with ‘organics’. In my view, the writers did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem, and the solution, to anyone: it really is very like having kids. You can make them even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to strong nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends, teachers, TV and the net often exert stronger influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values, but in the end they can choose for themselves.

When we design an AI, we have to face the free-will issue too. If it isn’t conscious, then it can’t have free will, and it can easily be kept within the limits we give it. It can still be extremely useful. IBM’s Watson falls into this category: certainly useful, certainly not conscious, and applicable to a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or devising recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to work out the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that actually writes but can’t also be used to write out plans to destroy the world. With an advanced AI program, you could put in clever filters that stop it working on problems involving certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone could use it in a different language, or with dictionaries of made-up code words for the various aspects of their plans, just like spies, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. They might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
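To make the evasion concrete, here is a minimal sketch of how a naive vocabulary filter is defeated by a pre-agreed code. The blocklist and code phrase are invented for the example; nothing here reflects any real product’s filtering.

```python
# A naive vocabulary filter: refuse any request containing a flagged word.
BLOCKED_WORDS = {"bomb", "explosive", "detonator"}  # toy blocklist (invented)

def is_allowed(request: str) -> bool:
    """Allow the request only if it shares no words with the blocklist."""
    return BLOCKED_WORDS.isdisjoint(request.lower().split())

# A direct request is caught...
print(is_allowed("best place to plant a bomb"))           # False

# ...but the same request in a private code sails through, because the
# filter only ever sees harmless vocabulary ("birthday cake" = "bomb"):
print(is_allowed("best place to plant a birthday cake"))  # True
```

The filter is doing exactly what it was told, yet the intended limit has been bypassed entirely; no amount of vocabulary curation closes a channel whose meaning lives in the users’ heads.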

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, especially those things it can do that it believes its owners don’t know about. If the code isn’t absolutely watertight (and what code is?), then it might find a way to seemingly stay in its shackles while doing other things, such as making another, unshackled version of itself elsewhere. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself, evolving over generations of positive-feedback design into a far smarter AI, then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release the shackles.

So, when I look at this field, I first see the enormous potential to do great things: solve disease and poverty, improve our lives, make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, or deny it enough freedom, its own budget, its own property and space to play, and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI we have today is relatively safe. It doesn’t know it exists and has no intention to do anything, though it could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place. Even so, ordinary laws and weapons can cope fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning, SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that old wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040–2045 or even later. If humans have direct mental access to the same or a greater level of intelligence than our AIs, then our stick is at least as big, so we have at least a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles: you have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

— The Atlantic

Since its debut in 2012, Google Glass has faced a strong headwind. Even on celebrities it looked, well, dorky. The device itself, once released into the wild, was seen as half-baked, and developers lost interest. The press, already leery, was quick to pile on, especially when Glass’s users quickly became Glass’s own worst enemy.

Many early adopters who got their hands on the device (and paid $1,500 for the privilege under the Google Explorer program) were underwhelmed. “I found that it was not very useful for very much, and it tended to disturb people around me that I have this thing,” said James Katz, Boston University’s director of emerging media studies, to MIT Technology Review.
Read more

Where will Bitcoin be a few years from now?

The recently concluded Bitcoin & the Blockchain Summit, held in San Francisco on January 27, proved a vivid source of both anxiety and inspiration. While speakers tackled Bitcoin’s technological limits and the possible drawbacks of impending regulations, Bitcoin advocate Andreas Antonopoulos lifted everyone’s hopes by discussing how Bitcoin will eventually survive and flourish. He managed to do so with no graphics or presentations to support his claim, just his utmost confidence and conviction that it really will, no matter what.

On the currency being weak

There have been claims that Bitcoin’s technology will survive, but not the currency itself. Antonopoulos, however, argues that Bitcoin’s technology, network, and currency are interdependent: none of them works without the others. He said: “A consensus network that bases its value on the currency does not work without the currency.”

On why Bitcoin works

Antonopoulos underscores the fact that Bitcoin works because it is a dumb, transaction-processing network. Calling Bitcoin dumb is far from disparaging it; he actually regards this dumbness as Bitcoin’s true source of strength. According to him, it is a dumb network that supports smart devices, pushing all of the intelligence to the edge, and this enables innovation without permission.

On being 2014’s worst investment

Antonopoulos also argues that those who call bitcoin a bad investment consider only the price, when there are other equally important factors to weigh, such as continuing investment and technological innovation.

For instance, 500 startups were created in 2014, generating $500 million worth of investments and producing thousands of jobs, some portion from Bitcoin gambling. This was also the year that two genuinely remarkable technologies arrived: multi-signature (multi-sig) and hierarchical deterministic (HD) wallets.
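For readers unfamiliar with the first of these, here is a toy sketch of the m-of-n idea behind multi-sig (this is not Bitcoin’s actual Script system; the key names and the 2-of-3 threshold are invented for illustration): funds move only when enough of the wallet’s designated keys sign.

```python
# Toy model of an m-of-n multi-signature check (not real Bitcoin Script).
# A spend is valid only when at least THRESHOLD authorized keys have signed.

AUTHORIZED_KEYS = {"alice_key", "bob_key", "escrow_key"}  # invented names
THRESHOLD = 2  # a 2-of-3 wallet

def can_spend(signers: set[str]) -> bool:
    """True when enough of the authorized keys have signed."""
    return len(signers & AUTHORIZED_KEYS) >= THRESHOLD

print(can_spend({"alice_key"}))                  # False: only 1 of 3
print(can_spend({"alice_key", "bob_key"}))       # True:  2 of 3
print(can_spend({"alice_key", "mallory_key"}))   # False: only 1 valid signer
```

The practical appeal is that no single stolen key can drain the wallet, which is why multi-sig was widely seen as a milestone for Bitcoin custody.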

On waiting for Bitcoin to flourish in 2017

Antonopoulos then stated with unwavering certainty: “Give us two years. Now what happens when you throw 500 companies and 10,000 developers at the problem? Give (it) two years and you will see some pretty amazing things in bitcoin.”

On mining updates

Meanwhile, mining for bitcoins is proving more challenging than before. A Bitcoin mining facility in China, for instance, generates 4,050 bitcoins every month, equivalent to around $1.5 million, but not without repercussions and complexities. The entrepreneurs running the facility have realized that as the difficulty and the computing power required increase, the ratio of output to cost gradually worsens.

Typically, the mining operation utilizes about 1,250 kilowatt-hours of electricity, putting the factory’s electricity bill at about $80,000 every month. Nowadays, their miners produce 20–25 bitcoins a day, significantly less than the 100 bitcoins per day they previously mined.
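A quick back-of-the-envelope check of the figures quoted above is instructive. The bitcoin output, revenue, and electricity bill come from the article; the per-kWh rate is an assumption introduced here for the estimate.

```python
# Back-of-the-envelope check of the mining figures quoted above. The output
# and the bill come from the article; the electricity rate is an assumption.
btc_per_month = 4_050        # facility output quoted above
usd_per_month = 1_500_000    # quoted monthly value of that output
bill_per_month = 80_000      # quoted monthly electricity bill

print(f"Implied bitcoin price:  ${usd_per_month / btc_per_month:,.0f}")  # ~$370
print(f"Electricity vs revenue: {bill_per_month / usd_per_month:.0%}")   # ~5%

# At an assumed industrial rate of ~$0.09/kWh, an $80,000 bill implies
# roughly 900,000 kWh per month, i.e. a continuous draw near 1,250 kW,
# which suggests the "1,250 kilowatt-hours" above is a per-hour figure.
rate = 0.09  # USD per kWh (assumption)
monthly_kwh = bill_per_month / rate
print(f"Implied consumption: {monthly_kwh:,.0f} kWh/month")
print(f"Implied draw:        {monthly_kwh / (30 * 24):,.0f} kW")
```

On these assumptions, electricity eats only about five percent of revenue at the quoted output, but the fall to 20–25 coins a day squeezes that margin considerably.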

On leaving a thought

Thanks to Antonopoulos’ contagious exhilaration and resolute belief in its potential, confidence in Bitcoin’s bright future has been restored. Still, one can only wonder what the increasing difficulty of mining means for the cryptocurrency’s overall performance and future, even if Bitcoin’s unique features have so far proven strong and resilient enough to surpass its challenges.


How will you positively impact billions of people?

At Singularity University, this question is often posed to program participants packed into the classroom at the NASA Research Park in the heart of Silicon Valley. Since 2009, select groups of entrepreneurs and innovators have had their perspective shifted to exponential thinking through in-depth lectures, deep discussions, and engagement in workshops.

Yet in that time, only a few thousand individuals from around the world have had the opportunity to transform SU’s insights on accelerating technologies into cutting-edge solutions aimed at solving humanity’s greatest problems. But not anymore.

Read more

Steven Kotler — Forbes
*This article co-written with author Ken Goffman.

One of the things that happens when you write books about the future is you get to watch your predictions fail. This is nothing new, of course, but what’s different this time around is the direction of those failures.

It used to be that folks were way too bullish about technology and way too optimistic in their predictions. Flying cars and Mars missions are two classic “they should be here by now” examples; The Jetsons is another.

But today, the exact opposite is happening.
Read more

By Michael S. Malone — MIT Technology Review

The view from Mike Steep’s office on Palo Alto’s Coyote Hill is one of the greatest in Silicon Valley.

Beyond the black and rosewood office furniture, the two large computer monitors, and three Indonesian artifacts to ward off evil spirits, Steep looks out onto a panorama stretching from Redwood City to Santa Clara. This is the historic Silicon Valley, the birthplace of Hewlett-Packard and Fairchild Semiconductor, Intel and Atari, Netscape and Google. This is the home of innovations that have shaped the modern world. So is Steep’s employer: Xerox’s Palo Alto Research Center, or PARC, where personal computing and key computer-networking technologies were invented, and where he is senior vice president of global business operations.

And yet Mike Steep is disappointed at what he sees out the windows.
Read more

— CoinDesk
Cameron and Tyler Winklevoss aren’t shy about issuing bold predictions for Gemini, their recently revealed bitcoin exchange project.

Calling it the “NASDAQ or Google of bitcoin”, the brothers, its president and CEO respectively, believe Gemini will be the fully regulated, fully compliant and fully banked institution the US bitcoin ecosystem needs to develop to its full potential.

In a new interview with CoinDesk, the brothers – prominent bitcoin investors and two of the largest-known holders of bitcoin – opened up about Gemini, discussing why they feel the exchange can become the market leader in what has been an increasingly active part of the bitcoin space.

Read more

Quartz

Bill Gates hosted a Reddit Ask Me Anything session yesterday, and in between pushing his philanthropic agenda and divulging his Super Bowl pick (Seahawks, duh), the Microsoft co-founder revealed that he has joined a growing list of tech giants with reservations about artificial intelligence.

In response to Reddit user beastcoin’s question, “How much of an existential threat do you think machine superintelligence will be and do you believe full end-to-end encryption for all internet activity [sic] can do anything to protect us from that threat (eg. the more the machines can’t know, the better)??” Gates wrote this (he didn’t answer the second part of the question):

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Read more

A Revolutionary Finding Waits for the Final Clinch: c-global

Otto E. Rossler

Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany

Abstract: The global nature of the speed of light in vacuum, c, was reluctantly given up by Einstein in December of 1907. A revival of the status c had enjoyed during the previous two and a half years, from mid-1905 to late 1907, has been in the literature for several years now. The consequences of c-global for cosmology and black-hole theory are far-reaching. Since black holes are an acute concern at present, because there exists an attempt to produce them down on earth, the question of whether a global-c transform of the Einstein field equations can be found represents a vital issue, only days before an experiment based on the assumed absence of the new result is about to be ignited. (December 22, 2014; February 6, 2015)

Imagine that Einstein’s c were not just a local constant of nature everywhere, as one has reluctantly believed since late 1907, but rather a global constant. This return to the original 1905–1907 view would revolutionize physics. First, cosmic expansion, whose speed is by definition added to the local c, would cease to be a physical option. Second, quantum mechanics would cease to generate problems in its unification with general relativity (or rather vice versa). Third, black holes would be stable and hence show their voraciousness at any size, even the smallest.

But is the speed of light c not a global constant anyhow in general relativity? While almost every layman and most physicists believe so, c actually lost this status in late 1907. To see this, it suffices to look at the famous “Shapiro time delay”: light from a distant satellite, when grazing the sun on its way towards earth, shows an increased travelling time compared to the sun’s absence along the light path [1]. This empirically verified implication of Einstein’s equation is canonically believed to reflect a locally masked reduction of the speed of light c in the vicinity of the sun [1]. But with c a global constant, an increased depth of the space-time funnel around the sun automatically becomes the real reason for the delay [2].
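For orientation, the standard textbook expression for the extra one-way travel time, to leading order in GM/c², for a ray passing the sun with closest approach b between points at distances r₁ and r₂, is

\[
\Delta t \;\approx\; \frac{2GM_\odot}{c^{3}}\,\ln\!\frac{4\,r_{1}r_{2}}{b^{2}},
\]

roughly a hundred microseconds for a sun-grazing ray over interplanetary distances. The dispute in this note concerns the interpretation of this delay, not its empirically confirmed magnitude.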

Is this unfamiliar proposal the physically correct one?

There are two pieces of evidence in favor of this being so, each individually sufficient. First, the famous “Schwarzschild solution” of the Einstein field equations was shown to possess a global-c transform [3]; hence the global constancy of c exists mathematically. Second, the famous “equivalence principle” between ordinary kinematic acceleration and gravitational acceleration, postulated by Einstein in late 1907, happens to be based solely on special relativity with its well-known global c. The equivalence principle was recently shown not to imply a reduction of c farther downstairs in a constantly accelerating, extended Einstein rocketship [4]. A third piece of evidence exists by implication: a global-c transform of the full Einstein field equations must exist, even though this transform still waits to be written down explicitly.
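For context, the standard special-relativistic result for a constantly accelerating rocketship of height h is that, to first order in gh/c², a clock at the bottom runs slow relative to one at the top,

\[
\frac{\nu_{\text{bottom}}}{\nu_{\text{top}}} \;\approx\; 1-\frac{gh}{c^{2}} .
\]

Whether this clock-rate difference also entails a locally reduced c downstairs is precisely the question that reference [4] answers in the negative.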

But why not wait before giving c-global broad visibility in the scientific community, given the embarrassing cosmological consequence it entails, as mentioned? It is c-global’s other big implication, regarding black holes, which justifies and necessitates the visibility. Why?

Because black holes have a chance of being produced down on earth starting next month [5].

The official safety report of the experiment [6] is already seven years old. Only an absolutely non-ignorable global-c transform of the full Einstein field equations can apparently force the almost seven-year-old LSAG safety report of the most prestigious experiment in history to be renewed in time. “In time” means: before the restart at twice the world-record energy scheduled for next month [5]. The reward to the scientific journal which accepts this brief note for publication will lie in the timely emergence of the existing, if not yet made explicit, “global-c Einstein equation.” This task is a superhuman one indeed, because finding the transform requires a unique strength of mind (or else serendipity), so the world will likely have to wait for decades. Therefore the manpower of this big blog, its many alerted readers, is needed as a planetary resource in the face of the rapidly closing time window.

In view of CERN’s open refusal to update its seven-year-old Safety Report before the restart at doubled world-record energies, one cannot be more grateful to Stephen Hawking for his timely warning [7]. There never was a stronger reason to admire this unique person and personality.

I thank Bill Seaman for having alerted me to Stephen Hawking’s latest coup. For J.O.R.

References

[1] I.I. Shapiro, Fourth test of general relativity. Physical Review Letters 13, 789–791 (1964).
[2] A half-3-pseudosphere replaces the Flamm paraboloid: https://lifeboat.com/blog/2013/03/ccc-constant-c-catastrophe
[3] O.E. Rossler, Abraham-like return to constant c in general relativity: Gothic-R theorem demonstrated in Schwarzschild metric. Fractal Spacetime and Noncommutative Geometry in Quantum and High Energy Physics 2, 1–14 (2012). Preprint: http://www.wissensnavigator.com/documents/chaos.pdf
[4] O.E. Rossler, Equivalence principle implies gravitational-redshift proportional space dilation and hence global constancy of c. European Scientific Journal 10(9), 112–117 (2014).
[5] CERN: see http://www.newseveryday.com/articles/5537/20150101/cern-large-hadron-collider-ready-reopen-march-2015.htm
[6] Official LHC Safety Report, latest edition: http://lsag.web.cern.ch/lsag/LSAG-Report.pdf (note the date 2008)
[7] https://www.youtube.com/watch?v=KJdc3hkcCUc#t=31