
An interesting question to ask.


The battle between the FBI and Apple over the unlocking of a terrorist’s iPhone will likely require Congress to create new legislation. That’s because there really aren’t any existing laws that encompass technologies such as these. The battle is between security and privacy, with Silicon Valley fighting for privacy. The debates in Congress will be ugly, uninformed, and emotional. Lawmakers won’t know which side to pick and will flip-flop between what lobbyists ask for and the public’s fear du jour. And because there is no consensus on what is right or wrong, any decision they make today will likely be changed tomorrow.

This is a prelude to things to come, not only with encryption technologies, but with everything from artificial intelligence to drones, robotics, and synthetic biology. Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just the lawmakers who are not well informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but as they learn more, they too will change their views.

Imagine if there was a terror attack in Silicon Valley — at the headquarters of Facebook or Apple. Do you think that Tim Cook or Mark Zuckerberg would continue to put privacy ahead of national security?

Read more

In SELF/LESS, a dying old man (Academy Award winner Ben Kingsley) transfers his consciousness to the body of a healthy young man (Ryan Reynolds). If you’re into immortality, that’s pretty good product packaging, no?

But this thought-provoking psychological thriller also raises fundamental ethical questions about extending life beyond its natural boundaries. Exploring the moral and ethical issues that surround mortality has long been a defining characteristic of many notable stories within the sci-fi genre. In fact, Mary Shelley’s age-old novel Frankenstein, while having little to no direct plot overlap with SELF/LESS, is considered by many to be among the first examples of the science fiction genre.

Screenwriters and brothers David and Alex Pastor show the timelessness of society’s fascination with immortality. However, their exploration reflects a rapidly growing departure from the tale’s origins in traditional science fiction. This shift can be defined, at the most basic level, as the genre losing its implied fictitious base. Sure, we have yet to clone dinosaurs, but many core elements of beloved past sci-fi films are well within our reach, if not already part of our present, everyday lives. From Luke Skywalker’s prosthetic hand in Star Wars Episode V: The Empire Strikes Back (1980) to the Sentinels of The Matrix (1999) to Will Smith’s bionic arm in I, Robot, elements of our past science fiction films help define our current reality.

Read more

I see articles and reports like the following about militaries actually considering fully autonomous missiles, drones with missiles, and so on. I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the Office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets on their own, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
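The semi-autonomous vs. fully autonomous distinction above can be made concrete with a toy sketch (purely illustrative, not any real weapons or control system): a semi-autonomous controller proposes actions but requires explicit human sign-off before acting, so an operator can refuse at every step.

```python
# Toy illustration of "human in the loop" control. The function names and
# structure are hypothetical, invented for this example.

def semi_autonomous_step(proposed_action, human_approves):
    """Execute an action only if a human controller signs off on it."""
    if human_approves(proposed_action):
        return f"executed: {proposed_action}"
    return "aborted by human controller"

# The human refusal path is the "kill switch":
print(semi_autonomous_step("engage target", lambda action: False))
# -> aborted by human controller
```

A fully autonomous loop would simply drop the approval check, which is exactly where Scharre locates the added risk: nothing stands between a design failure (or a spoofed input) and the action being taken.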

Read more

I am not an astronomer or astrophysicist. I have never worked for NASA or JPL. But during my graduate year at Cornell University, I was short on cross-discipline credits, and so I signed up for Carl Sagan’s popular introductory course, Astronomy 101. I was also an amateur photographer, occasionally freelancing for local media—and so the photos shown here are my own.

Sagan-1
Carl Sagan is aware of my camera as he talks to a student in the front row of Uris Hall

By the end of the ’70s, Sagan’s star was high and continuing to rise. He was a staple on The Tonight Show with Johnny Carson, producer and host of the PBS TV series Cosmos, and he had just written Dragons of Eden, which won him a Pulitzer Prize. He also wrote Contact, which became a blockbuster movie starring Jodie Foster.

Sagan died in 1996, after three bone marrow transplants to compensate for an inability to produce blood cells. Two years earlier, Sagan wrote a book and narrated a film based on a photo taken from space.

PaleBlueDot-1

Pale Blue Dot is a photograph of Earth taken in February 1990, by Voyager 1 from a distance of 3.7 billion miles (40 times the distance between earth and the sun). At Sagan’s request (and with some risk to the ongoing scientific mission), the space probe was turned around to take this last photo of Earth. In the photo, Earth is less than a pixel in size. Just a tiny dot against the vastness of space, it appears to be suspended in bands of sunlight scattered by the camera lens.
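As a quick sanity check on the figures above (taking roughly 93 million miles as the average Earth–Sun distance, i.e. one astronomical unit), 3.7 billion miles does indeed work out to about 40 AU:

```python
# Verify the distance figure: express 3.7 billion miles in astronomical
# units, where 1 AU is approximately 93 million miles (Earth-Sun distance).
voyager_distance_miles = 3.7e9
au_in_miles = 93e6

distance_in_au = voyager_distance_miles / au_in_miles
print(round(distance_in_au, 1))  # 39.8, i.e. roughly 40 AU
```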

Four years later, Sagan wrote a book and narrated the short film, Pale Blue Dot, based on the landmark 1990 photograph. He makes a compelling case for reconciliation among humans and for a shared commitment to care for our environment. In just 3½ minutes, he unites humanity, appealing to everyone with a conscience. [Full text]

—Which brings us to a question: How are we doing? Are we getting along now? Are we treating the planet as a shared life-support system, rather than a dumping ground?

Sagan points out that hate and misunderstanding play into so many human interactions. He points to a deteriorating environment and reminds us that we cannot escape war and pollution by resettling somewhere else. Most importantly, he forces us to face the fragility of our habitat and the need to protect it. He drives home this point not just by explaining it, but by framing it as an urgent choice between life and death.

It has been 22 years since Sagan wrote and produced Pale Blue Dot. What has changed? Change is all around us, and yet not much has changed. To sort it all out, let’s break it down into technology, our survivable timeline and sociology.

Technology & Cosmology

  • Since Carl Sagan’s death, we have witnessed the first direct evidence of exoplanets. Several hundred have been observed and we will likely find many hundreds more each year. Some of these are in the habitable zone of their star.
  • Sagan died about 25 years after the last Apollo Moon mission. It is now 45 years since those missions, and humans are still locked into low Earth orbit. We have sent a few probes to the distant planets and beyond, but the political will and resources to conduct planetary exploration—or even return to the Moon—are weak.
  • A few private companies are launching humans, satellites or cargo into space (SpaceX, Virgin Galactic, Blue Origin). Dozens of other private ventures have not yet achieved manned flight or an orbital rendezvous, but it seems likely that some of these projects will succeed. Liftoff is becoming commonplace—but almost all of these launches are focused on TV, communications, monitoring our environment or monitoring our enemies. The space program no longer produces the regular breakthroughs and commercial spin-offs that it did throughout the ’70s and ’80s.
Sagan explains the Drake Equation.

Survivable Timeline

  • Like most scientists, Carl Sagan was deeply concerned about pollution, nuclear proliferation, loss of bio-diversity, war and global warming. In fact, the debate over global warming was just beginning to heat up in Sagan’s last years. Today, there is no debate over global warming. All credible scientists understand that the earth is choking, and that our activities are contributing to our own demise.
  • In most regions, air pollution is slightly less of a concern than it was in the 1970s, but ground pollution, water pollution, and radiation contamination are all more evident.
  • Most alarmingly, we humans are even more pitched in posturing and in killing our neighbors than ever before. We fight over land, religion, water, oil, and human rights. We especially fight in the name of our Gods, in the name of national exceptionalism and in the name of protecting our right to consume disposable luxury gadgets, transient thrills and family vacations—as if we were prisoners consuming our last meal.

We have an insatiable appetite for raw materials, open spaces, cars and luxury. Yet no one seems to be doing the math. As the vast populations of China and India finally come to the dinner table (2 billion humans), it is clear that they have the wealth to match our gluttony. From where will the land, water, and materials come? And what happens to the environment then? In Beijing, the sky is never blue. Every TV screen is covered in a thick film of dust. On many days, commuters wear filter masks. There is no grass in the parks and no birds in the sky. Something is very wrong. With apologies for a mixed metaphor, the canary is already dead while the jester continues to dance.

Carl Sagan’s wife designed the artwork for this plaque, which is bolted onto the first man-made object to leave our solar system.

Sociology: Man’s Inhumanity to Man

  • Sagan observed that our leaders are passionate about conquering each other, spilling blood over frequent misunderstandings, giving in to imagined self-importance. None of this has changed.
  • Regarding our ability to get off of this planet, Sagan said, “Visit? Perhaps… Settle? Not yet.” We still do not possess the technology or resources to settle even a single astronaut away from our fragile home planet. We won’t have both the technology and the will to do so for at least 75 years—and then, only for a tiny community of scientists or explorers. That falls centuries shy of resettling a population.
  • Hate, zealotry, intolerance and religious fervor are more toxic than ever before.
  • Today, the earth has a bigger population. Hate and misunderstanding have spread like cancer. Weapons of mass destruction have escaped the restraint of governments, oversight and safety mechanisms. They are now in the hands of intolerant and radical organizations that believe in martyrdom and lack any desire to coexist within a global community.

Sagan-quote

  • Nations, organizations and some individuals possess the technology to kill a million people or more. Without even targeting civilians, a dozen nations can lay waste to the global environment in weeks.

Is it time to revisit Pale Blue Dot? Is it still relevant? The need to teach and heed Carl Sagan’s words has never been more urgent than now.


Postscript:

Carl Sagan probably didn’t like me. When I was his student, I was a jerk.

Sagan was already a TV personality and author when I took Astronomy 101 in 1977. Occasionally, he discussed material from the pages of his just-released Dragons of Eden, or slipped a photo of himself with Johnny Carson into a slide presentation. He was clearly a star attraction during parents’ weekend before classes started.

Indeed, he often used the phrase “Billions and Billions” even before it became his trademark. Although he seemed mildly amused that people noticed his enunciation and emphasis, he explained that he thought it was a less distracting alternative to the phrase “That’s billions with a ‘B’ ” when conveying appreciation for the vast scope of creation.

Around the time that Sagan was my professor, he appeared on the cover of Newsweek magazine. Like a lunkhead, I wrote to Newsweek, claiming that his adulation as a scientist was misplaced and that he was nothing more than a PR huckster for NASA and JPL in the vein of Isaac Asimov. I acknowledged his gift for popularizing science, but argued that he didn’t have the brains to contribute in any tangible way.

I was wrong, of course. Even in the role of education champion, I failed to appreciate the very powerful and important role that he played in influencing an entire generation of scientists, including Neil deGrasse Tyson. Although Newsweek did not publish my letter to the editor, someone on staff sent it to Professor Sagan! When the teaching assistant, a close friend of Sagan, showed me my letter, I was mortified.

Incidentally, I always sat in the front row of the big Uris lecture hall. As a student photographer, I took many photos, which show up on various university web sites from time to time. In the top photo, Professor Sagan is crouching down and clasping hands as he addresses the student seated next to me.


“Online abuse can be cruel – but for some tech companies it is an existential threat. Can giants such as Facebook use behavioural psychology and persuasive design to tame the trolls?”

Read more

The bottom line is that robots are machines; and like any other machine, a robot system can (with the right expertise) be reprogrammed. A robot connected to the net poses a risk for as long as hackers pose a risk in the current cyber environment. Again, I encourage government, tech companies, and businesses to work together in addressing the immediate challenges around cybersecurity.

And there will also need to be some way to track robots and deactivate them remotely, especially once the public (including criminals) is allowed to buy them.


“We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended goal”.

There’s no manual for being a good human, but greeting strangers as you walk by in the morning, saying thank you and opening doors for people are probably among the top things we know we should do, even if we sometimes forget.

The Quixote technique is best for robots that have a limited objective but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.

Read more


“Researchers at the Georgia Institute of Technology say that while there may not be one specific manual, robots might benefit by reading stories and books about successful ways to act in society.”

Read more

Again, I see too many gaps that will need to be addressed before AI can eliminate 70% of today’s jobs. Below are the top five gaps that I have seen so far with AI taking over many government, business, and corporate positions.

1) Emotion/Empathy Gap — AI has not been designed with the sophistication to provide the kind of personable care you see from caregivers, medical specialists, etc.
2) Demographic Gap — until a broader mix of the population is engaged in AI’s design and development, AI will not meet the needs for critical-mass adoption; only a subset of the population will find that it serves most of their needs.
3) Ethics & Moral Code Gap — AI still cannot understand ethics and empathy at the full cognitive level that is required.
4) Trust & Compliance Gap — companies need to feel that their IP and privacy are protected; until this is addressed, AI will not be able to replace an entire back-office and front-office set of operations.
5) Security & Safety Gap — more safeguards are needed around AI to deal with hackers, to ensure that information managed by AI is safe, and to ensure public safety from any AI that becomes disruptive or is hijacked to injure the public, or worse.

Until these gaps are addressed, it will be very hard to eliminate many of today’s government and office/business positions. The greater job loss will be in lower-skill areas such as standard landscaping, some housekeeping, some of the less personable store-clerk roles, some help desk/call center operations, and some light admin roles.


The U.S. economy added 2.7 million jobs in 2015, capping the best two-year stretch of employment growth since the late ’90s and pushing the unemployment rate down to five percent.

But to listen to the doomsayers, it’s just a matter of time before the rapid advance of technology makes most of today’s workers obsolete – with ever-smarter machines replacing teachers, drivers, travel agents, interpreters and a slew of other occupations.

Read more

DARPA’s efforts to teach AI “Empathy & Ethics”


The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12–17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
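The idea of learning acceptable sequences of events from stories can be sketched in a few lines. This is a deliberately simplified, hypothetical toy (the story events and helper names are invented for illustration, and this is not the actual Quixote system): collect the event transitions that appear in example stories, then score an agent’s plan by how many of its steps match story-derived transitions.

```python
# Toy sketch of story-derived "value alignment": plans that follow event
# orderings seen in socially acceptable stories score higher than plans
# that skip socially expected steps.

STORIES = [
    ["enter pharmacy", "wait in line", "pay for medicine", "leave"],
    ["enter pharmacy", "pay for medicine", "leave"],
]

def acceptable_transitions(stories):
    """Collect event pairs (a -> b) observed in the example stories."""
    pairs = set()
    for story in stories:
        pairs.update(zip(story, story[1:]))
    return pairs

def score_plan(plan, pairs):
    """Count plan steps that match a transition seen in the stories."""
    return sum((a, b) in pairs for a, b in zip(plan, plan[1:]))

pairs = acceptable_transitions(STORIES)
print(score_plan(["enter pharmacy", "pay for medicine", "leave"], pairs))  # 2
print(score_plan(["enter pharmacy", "leave"], pairs))  # 0
```

Here the plan that pays before leaving scores higher than the one that just grabs the medicine and leaves, which is the flavor of reinforcement signal the quote describes: socially expected steps are rewarded even when a shorter path would achieve the literal goal.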

Read more

The late Supreme Court Justice Potter Stewart once said, “Ethics is knowing the difference between what you have a right to do and what is right to do.”

As artificial intelligence (AI) systems become more and more advanced, can the same statement apply to computers?

According to many technology moguls and policymakers, the answer is this: We’re not quite there yet.

Read more