
Danaher’s Instruments of Change — If you feel that your industry, which has always been on a slow and stable growth curve, is now under greater pressure to change, you’re not alone. Recent indicators show that with the latest changes in technology and in consumers (millennials are now the largest consumer group), industries are being shaken up and pushed to perform at levels never seen before; companies that fail to adapt will cease to be relevant.


Doing well by doing good is now expected of businesses, and moral leadership is at a premium for CEOs. For today’s companies to maintain their license to operate, they need to take into account a range of elements in their decision making: managing their supply chains, applying new ways of measuring their business performance that include indicators for social as well as commercial returns, and controlling the full life cycle of their products’ usage and disposal. This new reality is demonstrated by the launch last September of the Sustainable Development Goals (SDGs), which call on businesses to address sustainability challenges such as poverty, gender equality, and climate change in new and creative ways. The new expectations for business are also at the heart of the Change the World list, launched by Fortune magazine in August 2015, which is designed to identify and celebrate companies that have made significant progress in addressing major social problems as part of their core business strategy.

Technology and millennials seem to be driving much of this change. Socially conscious customers and idealistic employees are applauding companies’ ability to do good as part of their profit-making strategy. With social media capable of reaching millions instantly, companies want to be on the right side of capitalism’s power. This is good news for society. Corporate venturing activities are emerging, and companies are increasingly leveraging people, ideas, technology, and business assets to achieve social and environmental priorities together with financial profit. These new venturing strategies are focusing more and more on areas where new partnerships and investments can lead to positive outcomes for all: the shareholders, the workers, the environment, and the local community.

This is especially true in the technology sector. More than 25% of the Change the World companies listed by Fortune are tech companies, and four are in the top ten: Vodafone, Google, Cisco Systems, and Facebook. Facebook’s billionaire co-founder and CEO, Mark Zuckerberg, and his wife have helped propel the technology sector into the spotlight as a shining beacon of how to do good and do well. Zuckerberg and Priscilla Chan pledged on December 1, 2015, to give 99 percent of their Facebook shares to charity. Those shares are currently valued at $40 billion to $45 billion, which makes this an enormously large gift. The donations will initially be focused on personalized learning, curing disease, connecting people, and building strong communities.

Read more

Davos: The True Fear Around Robots — Autonomous weapons, which are currently being developed by the US, UK, China, Israel, South Korea and Russia, will be capable of identifying targets, adjusting their behavior in response to that target, and ultimately firing — all without human intervention.


The issue of ‘killer robots’ one day posing a threat to humans has been discussed at the annual World Economic Forum meeting in Davos, Switzerland.

The discussion took place on 21 January during a panel organised by the Campaign to Stop Killer Robots (CSKR) and Time magazine, which asked the question: “What if robots go to war?”

Participants in the discussion included former UN disarmament chief Angela Kane, BAE Systems chair Sir Roger Carr, artificial intelligence (AI) expert Stuart Russell and robot ethics expert Alan Winfield.

Read more

DoD spending $12 to $15 billion of its FY17 budget on small bets that include next-gen tech improvements — WOW. Given DARPA’s new Neural Engineering System Design (NESD) program, we may finally see a brain-machine interface (BMI) soldier in the future.


The Defense Department will invest the $12 billion to $15 billion from its Fiscal Year 2017 budget slotted for developing a Third Offset Strategy on several relatively small bets, hoping to produce game-changing technology, the vice chairman of the Joint Chiefs of Staff said.

Read more

God does not exist. However, let’s grant for a moment that God is real. Religious texts and practices show that God is wicked, cruel, and immoral, and totally unworthy of affection by moral human beings.

For the sake of brevity, we’ll exclusively consider the God of the New Testament, and ignore the God of the Old Testament, the Koran, and other books. This God is often portrayed as hip, cool, and loving. If we dig deeper into some of the basic tenets of Christianity held by mainstream Protestant, Catholic, and Orthodox churches, we’ll see that it’s an elaborate smoke screen. The God of the New Testament is a beast.

The Problem of Evil

Christian churches usually portray God as all powerful, all knowing, and all benevolent. Long ago, freethinkers discovered the silver bullet to prove that God is not a moral agent. The problem of evil, in its simplest form, goes like this, “If God is good, why does he let evil exist in the world?”
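For readers who want the deductive skeleton made explicit, here is a compact propositional rendering of that simplest form (the symbols G and E are labels introduced here purely for illustration, not part of the original argument's wording):

```latex
% The logical problem of evil, in propositional form.
% G: an all-powerful, all-knowing, all-benevolent God exists.
% E: evil exists in the world.
\begin{align*}
  P_1 &:\; G \rightarrow \neg E && \text{(such a God would prevent evil)} \\
  P_2 &:\; E                    && \text{(evil is plainly observed)} \\
      &\;\therefore\; \neg G    && \text{(by modus tollens)}
\end{align*}
```

Theistic responses typically attack premise P1, arguing that a good God could have reasons (free will, greater goods) to permit evil, which is why the debate centers on that conditional rather than on the logic itself.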

Read more

AI can easily replace much of back-office operations, and some front-office work, over time. As a result, there will be a need for a massive social-support and displacement program, run jointly by governments and companies, to re-school and re-tool workers and to support them and their families financially until they can be retrained for one of the existing jobs or for one of the new careers that AI creates. A social obligation will be placed back on companies at a scale we have never seen before. With power and wealth truly comes a level of moral responsibility imposed by society.


Tradeshift CloudScan uses machine learning to create automatic mappings from image files and PDFs into a structured format such as UBL.
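As a rough illustration of what such a pipeline does, here is a minimal sketch. This is not Tradeshift’s actual code: it assumes the OCR text has already been extracted from the image or PDF, and simple regular expressions stand in for the learned field-mapping model; every name below (FIELD_PATTERNS, extract_fields, to_ubl_like_xml) is hypothetical.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical stand-in for a learned field extractor: in a real system a
# trained model would score candidate tokens; here regexes play that role.
FIELD_PATTERNS = {
    "ID": re.compile(r"Invoice\s*(?:No\.?|#)\s*(\S+)", re.I),
    "IssueDate": re.compile(r"Date[:\s]+(\d{4}-\d{2}-\d{2})"),
    "PayableAmount": re.compile(r"Total[:\s]+\$?([\d,]+\.\d{2})"),
}

def extract_fields(ocr_text: str) -> dict:
    """Map raw OCR text to named invoice fields."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            fields[name] = match.group(1)
    return fields

def to_ubl_like_xml(fields: dict) -> str:
    """Emit a simplified, UBL-flavoured XML invoice (not schema-valid UBL)."""
    invoice = ET.Element("Invoice")
    for name, value in fields.items():
        ET.SubElement(invoice, name).text = value
    return ET.tostring(invoice, encoding="unicode")

if __name__ == "__main__":
    sample = "Invoice No. INV-1042\nDate: 2016-01-15\nTotal: $1,250.00"
    print(to_ubl_like_xml(extract_fields(sample)))
```

A production system would replace the regexes with a model trained on labeled documents and emit fully schema-valid UBL, but the overall shape (extract text, map fields, serialize to a structured format) is the same.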

Read more

Although the recent article and announcement covering California scientist Josiah Zayner’s new $120 do-it-yourself gene-editing kit sent shock waves across the industry and again raised the question “how do we best put controls in place to ensure ethics and prevent disaster or a crisis?”, this genie truly is out of the bottle. Because Josiah created the kit easily in his own kitchen, it can be replicated by many others in their own homes. What we have to decide is how best to mitigate its impact. Black markets and collectors of exotic animals around the world will pay handsomely for this capability, raising the stakes that ever more bizarre creatures (deadly and non-deadly) will be created for their profit and amusement.


BURLINGAME, Calif. — On the kitchen table of his cramped apartment, Josiah Zayner is performing the feat that is transforming biology. In tiny vials, he’s cutting, pasting and stirring genes, as simply as mixing a vodka tonic. Next, he slides his new hybrid creations, living in petri dishes, onto a refrigerator shelf next to the vegetables. And he’s packaging and selling his DIY gene-editing technique for $120 so that everyone else can do it, too.

Read more

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI eschews the need for financial gains, allowing it to place itself on sky-high moral ground.

By not having to answer to industry or academia, OpenAI hopes to focus not just on developing digital intelligence, but also to guide research along an ethical route that, according to their inaugural blog post, “benefits humanity as a whole.”

Read more

The US Army’s report visualizes augmented soldiers and killer robots.


The US Army’s recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” describes a number of future war scenarios that raise vexing ethical dilemmas. Among the many tactical developments envisioned by the authors, a group of experts brought together by the US Army Research laboratory, three stand out as both plausible and fraught with moral challenges: augmented humans, directed-energy weapons, and autonomous killer robots. The first two technologies affect humans directly, and therefore present both military and medical ethical challenges. The third development, robots, would replace humans, and thus poses hard questions about implementing the law of war without any attending sense of justice.

Augmented humans. Drugs, brain-machine interfaces, neural prostheses, and genetic engineering are all technologies that may be used in the next few decades to enhance the fighting capability of soldiers, keep them alert, help them survive longer on less food, alleviate pain, and sharpen and strengthen their cognitive and physical capabilities. All raise serious ethical and bioethical difficulties.

Drugs and prosthetics are medical interventions. Their purpose is to save lives, alleviate suffering, or improve quality of life. When used for enhancement, however, they are no longer therapeutic. Soldiers designated for enhancement would not be sick. Rather, commanders would seek to improve a soldier’s war-fighting capabilities while reducing risk to life and limb. This raises several related questions.

Read more

Despite more than a thousand artificial-intelligence researchers signing an open letter this summer in an effort to ban autonomous weapons, Business Insider reports that China and Russia are in the process of creating self-sufficient killer robots, which in turn is putting pressure on the Pentagon to keep up.

“We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield,” U.S. Deputy Secretary of Defense Robert Work said during a national security forum on Monday.

Work added, “[Gerasimov] said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.’”

Read more

In the various incarnations of Douglas Adams’ Hitchhiker’s Guide to the Galaxy, a sentient robot named Marvin the Paranoid Android serves on the starship Heart of Gold. Because he is never assigned tasks that challenge his massive intellect, Marvin is horribly depressed, always quite bored, and a burden to the humans and aliens around him. But he does write nice lullabies.

While Marvin is a fictional robot, scholar and author David Gunkel predicts that sentient robots will soon be a fact of life and that mankind needs to start thinking about how we’ll treat such machines, now and in the future.

For Gunkel, the question is about moral standing and how we decide whether something does or does not have it. As an example, Gunkel notes that our children have moral standing, while a rock or a smartphone may not. From there, he said, the question becomes: where and how do we draw the line to decide who is inside and who is outside the moral community?

“Traditionally, the qualities for moral standing are things like rationality, sentience (and) the ability to use languages. Every entity that has these properties generally falls into the community of moral subjects,” Gunkel said. “The problem, over time, is that these properties have changed. They have not been consistent.”

To illustrate, Gunkel cited Greco-Roman times, when land-owning males were allowed to exclude their wives and children from moral consideration and basically treat them as property. As we’ve grown more enlightened in recent times, Gunkel points to the animal rights movement which, he said, has lowered the bar for inclusion in moral standing, based on the questions of “Do they suffer?” and “Can they feel?” The properties that are qualifying properties are no longer as high in the hierarchy as they once were, he said.

While the properties approach has worked well for about 2,000 years, Gunkel noted that it has generated more questions that need to be answered. On the ontological level, those questions include, “How do we know which properties qualify? How do we know when we’ve set the bar too low or too high? Which properties count the most?” and, more importantly, “Who gets to decide?”

“Moral philosophy has been a struggle over (these) questions for 2,000 years and, up to this point, we don’t seem to have gotten it right,” Gunkel said. “We seem to have gotten it wrong more often than we have gotten it right… making exclusions that, later on, are seen as being somehow problematic and dangerous.”

Beyond the ontological issues, Gunkel notes there are epistemological questions to be addressed as well. Even if we were to settle on a set of qualifying properties, those properties are generally internal states, such as consciousness or sentience; they are not something we can observe directly, because they occur inside the cranium or inside the entity, he said. What we have to do is look at external evidence and ask, “How do I know that another entity is a thinking, feeling thing like I assume myself to be?”

To answer that question, Gunkel noted, the best we can do is base our judgments on behavior. The question then becomes: if you create a machine that is able to simulate pain, as we’ve been able to do, do you assume the robot can feel pain? Citing Daniel Dennett’s essay “Why You Can’t Make a Computer That Feels Pain,” Gunkel said the reason we can’t build a computer that feels pain isn’t that we can’t engineer a mechanism; it’s that we don’t know what pain is.

“We don’t know how to make pain computable. It’s not because we can’t do it computationally, but because we don’t even know what we’re trying to compute,” he said. “We have assumptions and think we know what it is and experience it, but the actual thing we call ‘pain’ is a conjecture. It’s always a projection we make based on external behaviors. How do we get legitimate understanding of what pain is? We’re still reading signs.”
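Dennett’s point can be made concrete with a deliberately trivial sketch (the class, threshold, and responses below are hypothetical, invented purely for illustration): a few lines of code fully specify pain behavior, yet nothing in them settles whether anything is felt.

```python
class PainSimulator:
    """A toy 'pain behavior' module.

    Illustrative only: the *behavior* of pain is easy to compute,
    but nothing here settles whether anything is actually felt.
    """

    def __init__(self, threshold: float = 0.7):
        # Arbitrary damage level above which the robot acts "hurt".
        self.threshold = threshold

    def react(self, damage_signal: float) -> str:
        # The external behavior is fully specified by this branch;
        # the inner experience, if any, is not specified anywhere.
        if damage_signal >= self.threshold:
            return "withdraw limb, vocalize distress"
        return "no visible reaction"


robot = PainSimulator()
print(robot.react(0.9))  # pain behavior -- but is anything in pain?
```

The external behavior is computable; the thing Gunkel calls a conjecture, that something is actually in pain, remains a projection we make from signs like these.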

According to Gunkel, the approaching challenge in our everyday lives is, “How do we decide if they’re worthy of moral consideration?” The answer is crucial, because as we engineer and build these devices, we must still decide how to treat each one as an entity. This concept was captured in a PBS Idea Channel video episode based on this idea and on Gunkel’s book, The Machine Question.

To address that issue, Gunkel said society should consider the ethical outcomes of the artificial intelligence we create at the design stage. Citing autonomous weapons as an example, he said the question is not whether we should use such a weapon, but whether we should design these things at all.

“After these things are created, what do we do with them, how do we situate them in our world? How do we relate to them once they are in our homes and in our workplace? When the machine is there in your sphere of existence, what do we do in response to it?” Gunkel said. “We don’t have answers to that yet, but I think we need to start asking those questions in an effort to begin thinking about what is the social status and standing of these non-human entities that will be part of our world living with us in various ways.”

As he looks to the future, Gunkel predicts law and policy will have a major effect on how artificial intelligence is regarded in society. Citing decisions stating that corporations are “people,” he noted that the same types of precedents could carry over to designed systems that are autonomous.

“I think the legal aspect of this is really important, because I think we’re making decisions now, well in advance of these kind of machines being in our world, setting a precedent for the receptivity to the legal and moral standing of these other kind of entities,” Gunkel said.

“I don’t think this will just be the purview of a few philosophers who study robotics. Engineers are going to be talking about it. AI scientists have got to be talking about it. Computer scientists have got to be talking about it. It’s got to be a fully interdisciplinary conversation and it’s got to roll out on that kind of scale.”