
Happy Easter…and a reality check: https://motherboard.vice.com/en_us/article/where-were-going-we-dont-need-popes #transhumanism #reason


Modern values, transhumanist technology, and the embrace of reason are making many Catholic rules and rituals absurd.

Everywhere I look, Pope Francis, the 266th pope of the Catholic Church, seems to be in the news, and he is being positively portrayed as a genuinely progressive leader. Frankly, this baffles me. Few major religions have as backward a philosophical and moral platform as Catholicism, so no leader of the Church could actually be genuinely progressive. Yet no one seems to pay attention to this; no one seems to be discussing that Catholicism remains highly oppressive.

To even discuss how many archaic positions the Pope and Catholicism support would take volumes. But the one that irks me the most is that Pope Francis and his church are still broadly against condoms and contraceptives. Putting aside that this view is terribly anti-environmental, with over 175 million Catholics in Africa, this position may also be contributing to AIDS deaths there.

While former Pope Benedict XVI did say in late 2010 that condoms could be used in some cases to prevent disease, anything less than 100 percent endorsement of them seems malicious and criminal, which is something I’ve argued before.

Read more

Some weird religious stories w/ transhumanism. Expect the conflict between religion and transhumanism to get worse, as closed-minded conservative viewpoints get challenged by radical science and a future with no need for an afterlife: http://barbwire.com/2017/04/06/cybernetic-messiah-transhumanism-artificial-intelligence/ & http://www.livebytheword.blog/google-directors-push-for-computers-inside-human-brains-is-anti-christ-human-rights-abuse-theologians-explain/ & http://ctktexas.com/pastoral-backstory-march-30th-2017/


By J. Davila Ashcroft

The recent film Ghost in the Shell is a science fiction tale about a young girl (known as Major) used as the subject of a transhumanist/artificial-intelligence experiment that turns her into a weapon. At first, she complies, thinking the company behind the experiment saved her life after her family died. The truth, however, is that the company took her forcefully while she was a runaway. Major finds out that the company has done the same to others, and this knowledge causes her to turn on it. Throughout the story the viewer is confronted with the existential questions behind such an experiment: Major struggles with the trauma of not feeling things like the warmth of human skin and the sensations of touch and taste, and she feels less than human, though she is told many times she is better than human. While this is obviously a science fiction story, what might come as a surprise to some is that the subject matter of the film is not just fiction. Transhumanism and artificial intelligence on the level explored in this film are all too real, and seem to be only a few years away.

Recently it was reported that Elon Musk of SpaceX fame had a rather disturbing meeting with Demis Hassabis. Hassabis is the man in charge of DeepMind, a Google project with far-reaching plans akin to the Ghost in the Shell story, dedicated to exploring and developing all the possible uses of artificial intelligence. Musk stated during this meeting that the colonization of Mars is important because Hassabis' work will make Earth too dangerous for humans. By way of demonstrating how dangerous the goals of DeepMind are, one of its business partners, Shane Legg, is reported to have stated, “I think human extinction will probably occur, and this technology will play a part in it.” Legg likely understands what critics of artificial intelligence have been saying for years: such technology has an almost certain probability of becoming “self-aware”, that is, of becoming aware of its own existence and abilities and developing distinct opinions and protocols that override those of its creators. If artificial intelligence does become sentient, that would mean, for advocates of A.I., that we would then owe such machines moral consideration. They, however, would owe humanity no such consideration if they perceived us as a danger to their existence, since we could simply disconnect them. In that scenario we would be an existential threat, and what do you think would come of that? Thus Legg's statement carries an important message.

Already, so-called “deep learning” machines are capable of figuring out solutions that weren't programmed into them, and they actually teach themselves to improve. For example, AlphaGo, an artificial intelligence designed to play the game Go, developed its own strategies for winning, strategies its creators cannot explain and are at a loss to understand.
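The self-teaching idea can be illustrated with a toy example. The sketch below uses plain tabular Q-learning (not AlphaGo's actual architecture, which combined deep neural networks with Monte Carlo tree search) to show an agent discovering a strategy that was never written into its code: purely from a reward signal, it learns to walk right along a five-cell corridor.

```python
import random

# Minimal illustrative sketch: a tabular Q-learning agent learns, by
# trial and error alone, which moves reach the goal in a 5-cell corridor.
# No strategy is programmed in; the policy emerges from the reward signal.

N_STATES = 5          # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reaching the last cell ends the episode."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known move, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        # standard Q-learning update toward reward plus discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned policy moves right from every cell, though "move right"
# was never written anywhere in the code above.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The corridor, the reward placement, and the learning-rate values here are all invented for the illustration; the point is only that the winning behaviour is discovered rather than specified, which is the property the AlphaGo example describes at far greater scale.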

Transhumanist Philosophy

The fact is many of us have been physically altered in some way. Some of the most common examples are LASIK surgery, hip and knee replacements, and heart valve replacements, and nearly everyone has had vaccines that enhance our normal physical ability to resist certain illnesses and diseases. The question is, how far is too far? How “enhanced” is too enhanced?

Read more

The fast-advancing fields of neuroscience and computer science are on a collision course. David Cox, Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard, explains how his lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.

http://www.weforum.org/

Read more

“So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously — even if the possibility seems transparently absurd at the time.”

David Pearce


The Hedonistic Imperative Documentary

Read more

Algorithms with learning abilities collect personal data that are then used without users’ consent and even without their knowledge; autonomous weapons are under discussion in the United Nations; robots simulating emotions are deployed with vulnerable people; research projects are funded to develop humanoid robots; and artificial intelligence-based systems are used to evaluate people. One can consider these examples of AI and autonomous systems (AS) great achievements or claim that they are endangering human freedom and dignity.

To fully benefit from their potential, we need to make sure that these technologies are aligned with our moral values and ethical principles. AI and AS have to behave in ways that benefit people beyond reaching functional goals and addressing technical problems. This will build the elevated level of trust in technology that is needed for fruitful, pervasive use of AI/AS in our daily lives.

Read more

The first of my major #Libertarian policy articles for my California gubernatorial run, which broadens the foundational “non-aggression principle” to so-called negative natural phenomena. “In my opinion, and to most #transhumanist libertarians, death and aging are enemies of the people and of liberty (perhaps the greatest ones), similar to foreign invaders running up our shores.” A coordinated defense against them is philosophically warranted.


Many societies and social movements operate under a foundational philosophy that can often be summed up in a few words. The most famous, in much of the Western world, is the Golden Rule: Do unto others as you would have them do unto you. In libertarianism, the backbone of the political philosophy is the non-aggression principle (NAP). It holds that it’s immoral for anyone to use force against another person or their property except in cases of self-defense.

A challenge has recently been posed to the non-aggression principle. The thorny question libertarian transhumanists are increasingly asking in the 21st century is: Are so-called natural acts or occurrences immoral if they cause people to suffer? After all, taken to a logical philosophical extreme, cancer, aging, and giant asteroids arbitrarily crashing into the planet are all aggressive, forceful acts that harm the lives of humans.

Traditional libertarians throw these issues aside, citing natural phenomena as incapable of moral force. This thinking is supported by most people in Western culture, many of whom are religious and fundamentally believe only God is aware and in total control of the universe. However, transhumanists, many of whom are secular like myself, don’t care about religious metaphysics and whether the universe is moral. (It might be, with or without an almighty God.) What transhumanists really care about are ways for our parents to age less, ways to make sure our kids don’t die from leukemia, and ways to save the thousands of species that vanish from Earth every year due to rising temperatures and other human-induced forces.

An impasse has developed among philosophers, and questions once thought absurd now carry the cold weight of reality. For example, automation, robots, and software may challenge, if not obliterate, capitalism as we know it before the 21st century is out. Should libertarians stand against this and develop tenets and safeguards to protect their livelihoods? I have argued yes: a universal basic income of some sort, guaranteeing a suitable livelihood, is in philosophical line with the non-aggression principle.

Read more

Is the risk of cultural stagnation a valid objection to rejuvenation therapies? You guessed it—nope.


This objection can be discussed from both a moral and a practical point of view. This article discusses the matter from a moral standpoint, and concludes it is a morally unacceptable objection. (Bummer, now I’ve spoiled it all for you.)

However, even if the objection can be dismissed on moral grounds, one may still argue that, hey, it may be immoral to let old people die to avoid cultural and social stagnation, but it’s still necessary.

One could argue that. But one would be wrong.

Read more

Want a career in AI and robotics? One of the best ways to enrich your knowledge about the sector is to follow these AI influencers.

March of the Machines

The world of artificial intelligence (AI) and robotics has never been more exciting. With questions around the ethics of AI and the ever-developing robotics sector, there are so many options for someone who wants a career in AI.

Read more

A few ideas on self-awareness and self-aware AIs.


I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is just a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same to a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need to have its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids that are self-aware and have emotions, exactly like people, with all the advantages of being a machine, such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder if it is actually feasible or sensible.

Don’t worry—I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stand on the matter. I’m not making a moral argument either: Assuming you can build an android that has the entire spectrum of human emotions, this is morally speaking no different from having a child. You don’t (and can’t) ask the child beforehand if it wants to be born, or if it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, certainly not to your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

Read more