
For millennia, our planet has sustained a robust ecosystem, healing from each deforestation, algae bloom, pollution or imbalance caused by natural events. Before the arrival of an industrialized, destructive and dominant global species, it could pretty much deal with anything short of a major meteor impact. In the big picture, even these cataclysmic events haven’t destroyed the environment—they just changed the course of evolution and rearranged the alpha animal.

But with industrialization, the race for personal wealth, nations fighting nations, and modern comforts, we have recognized that our planet is not invincible. This is why Lifeboat Foundation exists. We are all about recognizing the limits to growth and protecting our fragile environment.

Check out this April news article on the US president’s forthcoming appointment of Jim Bridenstine, a vocal climate denier, as head of NASA. NASA is one of the biggest agencies on earth. Despite a lack of training or experience—without literacy in science, technology or astrophysics—he was handed an enormous responsibility, a staff of 17,000 and a budget of $19 billion.

In 2013, Bridenstine criticized former president Obama for wasting taxpayer money on climate research, and claimed that global temperatures stopped rising 15 years ago.

The Vox headline reads “Next NASA administrator is a Republican congressman with no background in science”. The article points out that Jim Bridenstine’s confirmation has been controversial — even among members of his own party.

Sometimes, flip-flopping is a good thing

In less than one month, Jim Bridenstine has changed—he has changed a lot!

After less than a month as head of NASA, he is convinced that climate change is real, that human activity is a significant cause and that it presents an existential threat. He has changed from a climate denier to a passionate advocate for doing whatever is needed to reverse our impact and protect the environment.

What changed?

Bridenstine acknowledges that he was a denier, but feels that the evidence and the science are overwhelming and convincing—even after just a few weeks’ exposure to world-class scientists and engineers.

For anyone who still claims that there is no global warming or that the evidence is ‘iffy’, it is worth noting that Bridenstine was a hand-picked goon. His appointment was recommended by right-wing conservatives and rubber-stamped by the current administration. He was a denier—but had a sufficiently open mind to listen to experts and review the evidence.

Do you suppose that the US president is listening? Do you suppose that he will grasp the most important issues of this century? What about other world leaders, legislative bodies and rock stars? Will they use their powers or influence to do the right thing? For the sake of our existence, let us hope they follow the lead of Jim Bridenstine, former climate denier!


Philip Raymond co-chairs CRYPSA, hosts the New York Bitcoin Event and is keynote speaker at Cryptocurrency Conferences. He sits on the New Money Systems board of Lifeboat Foundation. Book a presentation or consulting engagement.

I don’t really care about the competition, but this horse race means AI hitting the 100 IQ level at or before 2029 should probably happen.


The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, Canada, Japan, Singapore, China, the UAE, Finland, Denmark, France, the UK, the EU Commission, South Korea, and India have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure.

This article summarizes the key policies and goals of each national strategy. It also highlights relevant policies and initiatives that the countries have announced since the release of their initial strategies.

I plan to continuously update this article as new strategies and initiatives are announced. If a country or policy is missing (or if something in the summary is incorrect), please leave a comment and I will update the article as soon as possible.

Read more

Is interstellar travel one of the most moral projects? “one of the most moral projects might be to prepare for interstellar travel. After all, if the Earth becomes uninhabitable—whether in 200 years or in 200,000 years—the only known civilization in the history of the solar system will suddenly go extinct. But if the human species has already spread to other planets, we will escape this permanent eradication, thus saving millions—possibly trillions—of lives that can come into existence after the demise of our first planet.”


The Red Planet is a freezing, faraway, uninhabitable desert. But protecting the human species from the end of life on Earth could save trillions of lives.

Read more

In terms of moral, social, and philosophical uprightness, isn’t it striking to have the technology to provide a free education to all the world’s people (i.e. the Internet and cheap computers) and not do it? Isn’t it classist and backward to have the ability to teach the world yet still deny millions of people that opportunity due to location and finances? Isn’t that immoral? Isn’t it patently unjust? Should it not be a universal human goal to enable everyone to learn whatever they want, as much as they want, whenever they want, entirely for free if our technology permits it? These questions become particularly deep if we consider teaching, learning, and education to be sacred enterprises.


When we as a global community confront the truly difficult question of what is really worth devoting our limited time and resources to in an era marked by global catastrophe, I always find my mind returning to what the Internet hasn’t really been used for yet, and what it was rumored from its inception to ultimately provide: an utterly and entirely free education for all the world’s people.

In regard to such a concept, Bill Gates said in 2010:

“On the web for free you’ll be able to find the best lectures in the world […] It will be better than any single university […] No matter how you came about your knowledge, you should get credit for it. Whether it’s an MIT degree or if you got everything you know from lectures on the web, there needs to be a way to highlight that.”

Read more

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence as we do of any other technology, because it is far too easy for unintended consequences to hurt the people the system wasn’t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the “Three Laws of Robotics” because he wanted to imagine what might happen if they were contravened.
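
Since the write-ups stay at the conceptual level, a minimal sketch may help make the “biased data in, biased judgments out” mechanism concrete. Everything below is invented for illustration (the captions, the “calm”/“violent” labels, and the choice of a simple bag-of-words Naive Bayes model); it is not MIT’s actual Norman pipeline, which captioned images with a deep network. The point is only that two copies of the same model, trained on differently skewed samples of the same data, judge the same neutral caption differently.

```python
# Hypothetical illustration of data bias; not MIT's Norman pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

captions = [
    "a vase of flowers on a table",      # calm
    "a person waters a garden",          # calm
    "children play in a park",           # calm
    "a man is shot in broad daylight",   # violent
    "a body lies in the street",         # violent
    "a car crashes into a wall",         # violent
]
labels = ["calm", "calm", "calm", "violent", "violent", "violent"]

def train(texts, y):
    """Fit a bag-of-words Naive Bayes classifier on the given captions."""
    vectorizer = CountVectorizer().fit(texts)
    model = MultinomialNB().fit(vectorizer.transform(texts), y)
    return vectorizer, model

# "Norman-style" training data: mostly violent captions, one calm caption.
skewed_idx = [0, 3, 4, 5]
vec_skewed, clf_skewed = train(
    [captions[i] for i in skewed_idx], [labels[i] for i in skewed_idx]
)

# Control: the identical model trained on the full, balanced set.
vec_balanced, clf_balanced = train(captions, labels)

# The same neutral caption is judged differently by the two models.
test = ["a person stands near a wall"]
print("skewed model  :", clf_skewed.predict(vec_skewed.transform(test))[0])      # violent
print("balanced model:", clf_balanced.predict(vec_balanced.transform(test))[0])  # calm
```

Nothing about the model changed between the two runs; only the sample of the world it was shown. That is the whole argument in miniature.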

Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” But the field still hasn’t undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested and a host of them resigned from the company because of its involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.

Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.

Read more

A journalist, a soup exec, and an imam walk into a room. There’s no joke here. It’s just another day at CrisprCon.

On Monday and Tuesday, hundreds of scientists, industry folk, and public health officials from all over the world filled the amphitheater at the Boston World Trade Center to reckon with the power of biology’s favorite new DNA-tinkering tool: Crispr. The topics were thorny—from the ethics of self-experimenting biohackers to the feasibility of pan-global governance structures. And more than once you could feel the air rush right out of the room. But that was kind of the point. CrisprCon is designed to make people uncomfortable.

“I’m going to talk about the monkey in the room,” said Antonio Cosme, an urban farmer and community organizer in Detroit, who appeared on a panel about equitable access to gene-editing technologies at the second annual conference devoted to Crispr’s big ethical questions. He was referring to the results of an audience poll that had appeared moments before in a word cloud behind him, with one word bigger than all the others: “eugenics.”

Read more

Google will not renew its Pentagon contract to develop AI for recognising people in drone videos, after 4,000 employees signed an open letter saying that Google’s involvement ran against the company’s “moral and ethical responsibility”.


Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires.

Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business.

Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.

Read more

Check out the internal Google film, “The Selfish Ledger”. It probably was never meant to slip onto a public web server, and so I have embedded a backup copy below. Ping me if it disappears and I will locate a permanent URL.

This 8½-minute video is a lot deeper—and possibly more insidious—than it appears. Nick Foster may be the Anti-Christ, or perhaps the most brilliant sociologist of modern times. It depends on your vantage point, and your belief in the potential of user controls and cat-in-bag containment.

He talks of a species propelling itself toward “desirable goals” by cataloging, data mining, and analyzing the past behavior of peers and ancestors—and then using that data to improve each user’s future experience, and perhaps even that of their future generations. But, is he referring to shared goals across cultures, sexes and incomes? Who controls the algorithms and the goal filters?! Is Google the judge, arbiter and God?

Consider these quotes from the video. Do they disturb you? The last one sends a chill down my spine. But I may be overreacting to what is simply an unexplored frontier, the next generation of AI. I cannot readily determine whether it ushers in an era of good or bad:

  • “Behavioral sequencing” (a phrase used throughout the video)
  • Viewing human behavior through a Lamarckian lens
  • An individual is just a carrier for the gene. The gene seeks to improve itself and not its host
  • And [at 7:25]: “The mass multigenerational examination of actions and results could introduce a model of behavioral sequencing.”

There’s that odd term again: behavioral sequencing. It suggests that we are mice and that Google can help us to act in unison toward society’s ideal goals.

Today, Fortune Magazine described it this way: “Total and absolute data collection could be used to shape the decisions you make … The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society’s apparent benefit.”

The statements could apply just as easily to the NSA as they do to Google. At least we are entering into a bargain with Google. We hand them data and they hand us numerous benefits (the same benefits that many users often overlook). Yet, clearly, this is heavy-duty stuff—especially for the company that knows everything about everyone. Watch it a second time. Think carefully about the power that Google wields.

Don’t get me wrong. I may be in the minority, but I generally trust Google. I recognize that I am raw material and not a client. I accept the tradeoff that I make when I use Gmail, web search, navigate to a destination or share documents. I benefit from this bargain as Google matches my behavior with improved filtering of marketing directed at me.

But, in the back of my mind, I hope for the day that Google implements Blind Signaling and Response, so that my data can only be used in ways that were disclosed to me—and that strengthen and defend that bargain, without subjecting my behavior, relationships and predilections to hacking, misuse, or accidental disclosure.


Philip Raymond sits on Lifeboat’s New Money Systems board. He co-chairs CRYPSA, hosts the Bitcoin Event, publishes Wild Duck and is keynote speaker at global Cryptocurrency Conferences. Book a presentation or consulting engagement.

Credit for snagging this video: Vlad Savov @ TheVerge

Stem cell technology has advanced so much that scientists can grow miniature versions of human brains — called organoids, or mini-brains if you want to be cute about it — in the lab, but medical ethicists are concerned about recent developments in this field involving the growth of these tiny brains in other animals. Those concerns are bound to become more serious after the annual meeting of the Society for Neuroscience starting November 11 in Washington, D.C., where two teams of scientists plan to present previously unpublished research on the unexpected interaction between human mini-brains and their rat and mouse hosts.

In the new papers, according to STAT, scientists will report that the organoids survived for extended periods of time — two months in one case — and even connected to lab animals’ circulatory and nervous systems, transferring blood and nerve signals between the host animal and the implanted human cells. This is an unprecedented advancement for mini-brain research.

“We are entering totally new ground here,” Christof Koch, president of the Allen Institute for Brain Science in Seattle, told STAT. “The science is advancing so rapidly, the ethics can’t keep up.”

Read more

This week RT en Español aired a half-hour show on life extension and #transhumanism on TV to millions of its #Spanish viewers. My #ImmortalityBus and work were covered. Various Lifeboat Foundation members appear in this video. Give it a watch:


Longevity, immortality… topics that have never left anyone indifferent. Now some scientists assert that immortality is technically achievable in the near future. But at the same time, questions of a moral and even philosophical nature arise: what would achieving immortality mean for each of us? Moreover, in a consumerist society of transnational corporations like ours, it sounds unconvincing that immortality could ever become accessible to everyone.


Read more