
At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid — to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might predict if not prevent any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to halt artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation — rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
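
To make the structure of that reasoning concrete, here is a minimal sketch in Python. The probability and payoff figures are invented placeholders, not estimates anyone has defended; the point is only that a tiny probability multiplied by an enormous loss can dominate the calculation.

```python
# Toy expected-value comparison in the spirit of Pascal's wager applied to
# existential risk. All numbers are illustrative assumptions, nothing more.

p_catastrophe = 1e-6           # assumed (very low) probability of the event
loss_if_catastrophe = -1e12    # assumed (enormous) disvalue if it occurs
cost_of_precaution = -1e4      # assumed fixed cost of acting now

ev_do_nothing = p_catastrophe * loss_if_catastrophe   # -1,000,000
ev_precaution = cost_of_precaution                    # -10,000 (event averted)

print(f"EV(do nothing) = {ev_do_nothing:,.0f}")
print(f"EV(precaution) = {ev_precaution:,.0f}")
# Precaution 'wins' even though the event is very unlikely -- which is exactly
# the structure of the argument the essay goes on to question.
```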

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from ‘the worst case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need now to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of the US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.
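
The resilience claim behind that design decision is easy to demonstrate. Below is a small sketch in plain Python, with toy five-node topologies standing in for the centralized and distributed designs; it is an illustration of the principle, not a model of the actual ARPANET or telephone network.

```python
# Why a distributed network survives node loss better than a hub-based one.
# Toy topologies: a star (everything routes through hub A) versus a mesh
# with redundant links. We knock out node A and test connectivity.

def still_connected(nodes, edges, removed):
    """True if the surviving nodes form a single connected component."""
    alive = nodes - {removed}
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [next(iter(alive))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == alive

nodes = {"A", "B", "C", "D", "E"}
star = {("A", "B"), ("A", "C"), ("A", "D"), ("A", "E")}
mesh = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "A"), ("B", "D")}

print(still_connected(nodes, star, removed="A"))  # False: losing the hub partitions it
print(still_connected(nodes, mesh, removed="A"))  # True: traffic routes around the loss
```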

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving and often surpassing the predecessors’ level of development. The post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis the pre-catastrophic humanity.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those never happen. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disasters strike, which are assumed to be unpredictable.
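
The trade-off can be made vivid with a toy simulation. In the hedged sketch below, every number (failure rates, repair times) is invented purely for illustration: the ‘reliable’ artefact shrugs off anticipated stress but suffers a long outage when a surprise hits, while the ‘maintainable’ one fails more often but is quickly repaired.

```python
# Monte Carlo sketch of the 'reliable' (overdesigned) vs 'maintainable'
# (underdesigned) trade-off under unpredictable disasters. Invented numbers.

import random

random.seed(42)

def total_downtime(strategy, days=10_000):
    down = 0
    for _ in range(days):
        stress = random.random()               # routine, anticipated stress
        surprise = random.random() < 0.01      # rare, unanticipated disaster
        if strategy == "reliable":
            if surprise:
                down += 30    # overdesigned artefacts are slow to repair
        else:  # maintainable
            if surprise or stress > 0.9:
                down += 1     # fails more often, but replacement is quick
    return down

print("reliable:    ", total_downtime("reliable"), "days down")
print("maintainable:", total_downtime("maintainable"), "days down")
# Under these assumptions the maintainable design loses less time overall --
# one way of cashing out why it copes better with unknown unknowns.
```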

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones — that we couldn’t cope in case a very unlikely, very negative scenario comes to pass. Recalling US Defence Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which while benign in themselves may produce malign consequences — call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: What would be potentially lost in the various scenarios which would be vital to sustain the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don’t come to pass, nevertheless they will make our normal lives better – as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.


“Modern life relies on satellite systems but they are alarmingly vulnerable to attack as they orbit the Earth. Patricia Lewis explains why defending them from hostile forces is now a primary concern for states”

Read more

I see articles and reports like the following about militaries actually considering fully autonomous missiles, drones with missiles, etc. I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work in the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets of their own choosing, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
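
The “in the loop” arrangement Scharre describes is, in software terms, a human gate on an otherwise autonomous decision cycle. The sketch below is a deliberately abstract illustration of that pattern only; the names, thresholds, and confirmation mechanism are all invented and correspond to no real weapons system.

```python
# Abstract human-in-the-loop gating: the autonomous component may only
# recommend; nothing happens without explicit human consent, and a kill
# switch can halt the whole loop. Purely illustrative names and values.

def control_loop(recommendations, human_approves, kill_switch_engaged):
    for rec in recommendations:
        if kill_switch_engaged():
            print("Kill switch engaged; loop halted.")
            return
        if rec["confidence"] < 0.95:
            continue                      # machine defers on low confidence
        if human_approves(rec):           # the gate: a human stays in the loop
            print(f"Action authorized: {rec['target']}")
        else:
            print(f"Action vetoed: {rec['target']}")

# Example run with stand-in callbacks (a real system would block on a human):
control_loop(
    recommendations=[{"target": "T1", "confidence": 0.99},
                     {"target": "T2", "confidence": 0.42}],
    human_approves=lambda rec: False,     # the human vetoes everything here
    kill_switch_engaged=lambda: False,
)
```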

Read more

I agree 100% with this report by a former Pentagon official on AI systems involving missiles.


A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that such weapons could be uncontrollable in real-world environments where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries.

In recent years, low-cost sensors and new artificial intelligence technologies have made it increasingly practical to design weapons systems that make killing decisions without human intervention. The specter of so-called killer robots has touched off an international protest movement and a debate within the United Nations about limiting the development and deployment of such systems.

The new report was written by Paul Scharre, who directs a program on the future of warfare at the Center for a New American Security, a policy research group in Washington, D.C. From 2008 to 2013, Mr. Scharre worked in the office of the Secretary of Defense, where he helped establish United States policy on unmanned and autonomous weapons. He was one of the authors of a 2012 Defense Department directive that set military policy on the use of such systems.

Read more

Around the world, the animals that pollinate our food crops — more than 20,000 species of bees, butterflies, bats and many others — are the subject of growing attention. An increasing number of pollinator species are thought to be in decline, threatened by a variety of mostly human pressures, and their struggles could pose significant risks for global food security and public health.

Until now, most assessments of pollinator health have been conducted on a regional basis, focusing on certain countries or parts of the world. But this week, a United Nations organization has released the first-ever global assessment of pollinators, highlighting their importance for worldwide food and nutrition, describing the threats they currently face and outlining strategies to protect them.

The report, which was released Friday by the U.N.’s Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), has been in the works since the summer of 2014. The research team consisted of more than 70 experts, who drew on the most up-to-date global pollinator science, as well as local and indigenous knowledge, to complete the assessment.

Read more

I love it — India gets it. You have to make sure that your IT foundation is solid before unleashing things like AI. Connected AI requires a solid and secured infrastructure foundation first. For customers to buy into the cloud and the whole IoT and connected-AI set of products and services, they must feel that they can trust you fully.


By Jayadev Parida

Take stock of the past, analyse the present and frame a strategy for the future. In recent years, India’s approach to cyber security has experienced a shift from style to substance. Prime Minister Modi’s foreign policy has made various strong interventions on cyber security matters. Those interventions now need to be materialised to advance India’s interests. Presumably, the Prime Minister’s Office (PMO) is likely to invest both political capital and energy to pursue a cautious cyber-strategy. A dedicated division for cyber security in the Indian Ministry of External Affairs (MEA) is a value addition to that. In 2015, the Minister of Communications and Information Technology, in a written reply to the Lok Sabha, stated that the government had allocated Rs 755 crore to combat cyber security threats over a period of five years. But this financial outlay is negligible, given that the threat is huge and unpredictable.

“Cheer up, the worst is yet to come!” These famous words have long been attributed to the noted American author Mark Twain. The sentence is a stark reminder of India’s dawdling approach to new threats. India’s cyber sleuths may be holding their nerve for the worst while framing a robust apparatus to secure the cyber ecosystem. Google Trends data for 2015 showed that Islamic State (IS) was a buzzword in India, while terrorism continued to be an area of concern. Interest in IS across Indian cities has nonetheless been increasing significantly over time.

Read more


“You hear over and over and over again, from the pro-backdoor camp, that we need to strike a balance, we need to find a compromise,” says Cardozo. “That doesn’t work. Math doesn’t work like that. Computer security doesn’t work like that … It’s kind of like climate change. There are entrenched political interests on one side of a ‘debate,’ and on the other side is the unanimous scientific and technical community.”

Read more

I wish the CA AG a lot of luck; however, her approach is very questionable when you think about downstream access and data-feed scenarios. For example, a business in Boston, MA has an agreement with a cloud hosting company in CA, pulls in data from Italy, Germany, etc., and offers a service, hosted in CA, to all of its users and partners in the US and Europe.

How is the CA AG going to impose a policy on Boston? It can’t; in fact, the business in Boston will simply change providers and choose one in another state that will not impact its costs and business.

BTW — I didn’t even mention the recent announcement from China about deploying a fully quantum-“secured” infrastructure. If this is true, everyone is exposed, and there is no way companies can be held accountable, because the US doesn’t have access to the more advanced quantum infrastructure technology.

https://lnkd.in/b9xXVAN


Feb. 17 — California Attorney General Kamala Harris (D) has released the state’s data breach report, laying out the legal and ethical responsibilities of businesses to keep information safe and, perhaps most importantly, outlining what the state believes is “reasonable security” that companies must employ to avoid possible enforcement actions.

Under the state’s information security statute, businesses must use “reasonable security procedures and practices” that “protect personal information from unauthorized access, destruction, use, modification, or disclosure,” the report said.

Under the guidelines in the report released Feb. 16, failing to implement all 20 of the Center for Internet Security’s Critical Security Controls that apply to an organization’s environment constitutes a lack of reasonable security. The controls define a minimum level of information security all organizations that collect or maintain personal information should meet.
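
Read literally, the report’s guideline reduces to a simple conjunctive test: reasonable security requires every applicable control, not most of them. Here is a small illustrative sketch; the control names are abbreviated stand-ins, not the official CIS titles.

```python
# The report's test, read literally: failing ANY applicable CIS control
# constitutes a lack of reasonable security. Abbreviated stand-in names.

APPLICABLE_CONTROLS = {
    "hardware asset inventory",
    "software asset inventory",
    "secure configurations",
    "continuous vulnerability assessment",
    # ... plus the remaining controls that apply to the environment
}

def has_reasonable_security(implemented):
    """True only if every applicable control is implemented."""
    return APPLICABLE_CONTROLS <= set(implemented)

print(has_reasonable_security(["hardware asset inventory"]))  # False
print(has_reasonable_security(APPLICABLE_CONTROLS))           # True
```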

Read more

Here is a question that keeps me up at night…

Is the San Bernardino iPhone just locked or is it properly encrypted?

Isn’t full encryption beyond the reach of forensic investigators? So we come to the real question: If critical data on the San Bernardino iPhone is properly encrypted, and if the Islamic terrorist who shot innocent Americans used a good password, then what is it that the FBI thinks that Apple can do to help crack this phone? Doesn’t good encryption thwart forensic analysis, even by the FBI and the maker of the phone?

In the case of Syed Rizwan Farook’s iPhone, the FBI doesn’t know if the shooter used a long and sufficiently unobvious password. They plan to try a rapid-fire dictionary attack and other predictive algorithms to deduce the password. But the content of the iPhone is protected by a closely coupled hardware feature that will disable the phone and even erase memory, if it detects multiple attempts with the wrong password. The FBI wants Apple to help them defeat this hardware sentry, so that they can launch a brute force hack—trying thousands of passwords each second. Without Apple’s help, the crack detection hardware could automatically erase incriminating evidence, leaving investigators in the dark.
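
Some back-of-the-envelope arithmetic shows why the password’s length and obviousness matter so much here. The guess rate below is taken from the “thousands of passwords each second” figure above; everything else is an assumption for illustration.

```python
# Worst-case time to exhaust a passcode space by brute force, assuming the
# lockout hardware has been defeated. The rate is an illustrative assumption.

guesses_per_second = 5_000

for digits in (4, 6):
    keyspace = 10 ** digits
    seconds = keyspace / guesses_per_second
    print(f"{digits}-digit PIN: {keyspace:,} combinations, "
          f"worst case {seconds:,.0f} s")

# A modest alphanumeric password changes the picture entirely:
keyspace = 62 ** 8                     # 8 characters from [a-zA-Z0-9]
years = keyspace / guesses_per_second / (3600 * 24 * 365)
print(f"8-char alphanumeric: ~{keyspace:.2e} combinations, ~{years:,.0f} years")
```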

Mitch Vogel is an Apple expert. As both a former police officer and someone who has worked with Apple, he succinctly explains the current standoff between FBI investigators and Apple.


The iPhone that the FBI has is locked with a passcode and encrypted. It can only be decrypted with the unique code. Not even Apple has that code or can decrypt it. Unlike what you see in the movies, it’s not possible for a really skilled hacker to say “It’s impossible” and then break through it with enough motivation. Encryption really is that secure and it’s really impossible to break without the passcode.
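
For readers wondering what “can only be decrypted with the unique code” means in practice: the decryption key is derived from the passcode, entangled on real devices with a hardware-fused key that cannot be extracted. The sketch below illustrates the general technique with PBKDF2 from Python’s standard library; it is not Apple’s actual key-derivation scheme, and the device key shown is a made-up stand-in.

```python
# Illustrative key derivation: the key comes from the passcode plus a
# device-unique secret (modeled as a constant). Without the passcode there
# is no key, so there is nothing to decrypt -- only guessing remains.

import hashlib

DEVICE_KEY = b"hardware-fused-unique-id"   # stand-in for the device's UID key

def derive_key(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_KEY, 100_000)

k1, k2 = derive_key("1234"), derive_key("1235")
print(k1.hex()[:16], k2.hex()[:16])        # near-identical passcodes, unrelated keys
print(k1 == derive_key("1234"))            # True: only the right passcode works
```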

What the FBI wants to do is brute force the passcode by trying every possible combination until they guess the right one. However, to prevent malicious people from using this exact technique, there is a security feature that erases the iPhone after 10 attempts or locks it for incrementally increasing time periods with each attempt. There is no way for the FBI (or Apple) to know if the feature that erases the iPhone after 10 tries is enabled or not, so they don’t even want to try and risk it.
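
A toy model makes the dilemma concrete. The sketch below simulates the two defenses just described: an escalating delay between attempts and a wipe after ten failures. The delay schedule is invented for illustration; the real schedule is Apple’s, not this one.

```python
# Why the FBI won't just start guessing: a brute force dies on attempt 10.

def try_passcodes(guesses, correct, wipe_after_ten=True):
    delay = 0.0
    for attempt, guess in enumerate(guesses, start=1):
        if guess == correct:
            return f"unlocked on attempt {attempt} after {delay:.0f}s of lockouts"
        if wipe_after_ten and attempt >= 10:
            return "device wiped after 10 failures: evidence gone"
        delay += min(60 * attempt, 3600)   # assumed incrementally growing lockout

# A naive sweep of all four-digit PINs never gets past the tenth guess:
print(try_passcodes((f"{i:04d}" for i in range(10_000)), correct="9999"))
```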

So the FBI wants Apple to remove that restriction. That is reasonable. They should, if it is possible to do so without undue burden. The FBI should hand over the iPhone to Apple and Apple should help them to crack it.

However, this isn’t what the court order is asking Apple to do. The FBI wants Apple to create software that disables this security feature on any iPhone and give it to them. Even if it’s possible for this software to exist, it’s not right for the FBI to have it in its possession. They should have to file a court order every single time they use it. The FBI is definitely using this situation as an opportunity to create a precedent and give itself carte blanche to get into any iPhone without due process.

So the answer to your question is that yes, it is that secure, and yes, it’s a ploy by the FBI. Whether it’s actually possible for Apple to help is one question and whether they should is another. Either way, the FBI should not have that software.