
“De-Extinction” Biotechnology & Conservation Biology — Ben Novak, Lead Scientist, Revive & Restore


Ben Novak is Lead Scientist at Revive & Restore (https://reviverestore.org/), a California-based non-profit that works to bring biotechnology to conservation biology, with the mission to enhance biodiversity through the genetic rescue of endangered and extinct animals (https://reviverestore.org/what-we-do/ted-talk/).

Ben collaboratively pioneers new tools for genetic rescue and de-extinction, helps shape the genetic rescue efforts of Revive & Restore, and leads its flagship project, The Great Passenger Pigeon Comeback, working with collaborators and partners to restore the ecology of the Passenger Pigeon to the eastern North American forests. Ben uses his training in ecology and ancient-DNA lab work to contribute, hands-on, to the sequencing of the extinct Passenger Pigeon genome and to study important aspects of its natural history (https://www.youtube.com/watch?v=pK2UlLsHkus&t=1s).

Ben’s mission in leading the Great Passenger Pigeon Comeback is to set the standard for de-extinction protocols and considerations in the lab and field. His 2018 review article, “De-extinction,” in the journal Genes helped to define this new term. More recently, his treatment, “Building Ethical De-Extinction Programs—Considerations of Animal Welfare in Genetic Rescue,” was published in December 2019 in The Routledge Handbook of Animal Ethics, 1st Edition.

Ben’s work at Revive & Restore also includes extensive education and outreach, the co-convening of seminal workshops, and helping to develop the Avian and Black-footed Ferret Genetic Rescue programs included in the Revive & Restore Catalyst Science Fund.

Ben graduated from Montana State University studying Ecology and Evolution. He later trained in Paleogenomics at the McMaster University Ancient DNA Centre in Ontario. This is where he began his study of passenger pigeon DNA, which then contributed to his Master’s thesis in Ecology and Evolutionary Biology at the University of California Santa Cruz. This work also formed the foundational science for de-extinction.

Ben also worked at the Australian Animal Health Laboratory–CSIRO (Commonwealth Scientific and Industrial Research Organisation) to advance genetic engineering protocols for the pigeon.

Thankfully, there is a growing effort toward AI For Good.

This latest mantra entails ways to try to ensure that advances in AI are applied for the overall betterment of mankind. These are assuredly laudable endeavors, and it is crucial that the technology underlying AI is aimed and deployed in an appropriate and positive fashion (for my coverage of the burgeoning realm of AI Ethics, see the link here).

Unfortunately, whether we like it or not, there is the ugly side of the coin too, namely the despicable AI For Bad.

Anders Sandberg, University of Oxford.

One of the deepest realizations of the scientific understanding of the world that emerged in the 18th and 19th centuries is that the world is changing, that it has been radically different in the past, that it can be radically different in the future, and that such changes could spell the end of humanity as we know it. An added twist arrived in the 20th century: we could ourselves be the cause of our demise. In the late 20th century, an interdisciplinary field studying global catastrophic and existential risks emerged, driven by philosophical concern about the moral weight of such risks and the realization that many such risks show important commonalities that may allow us as a species to mitigate them. For example, much of the total harm from nuclear wars, supervolcanic eruptions, meteor impacts and some biological risks comes from global agricultural collapse. This talk is an overview of the world of low-probability, high-impact risks and their overlap with questions of complexity in the systems generating or responding to them. Understanding their complex dynamics may be a way of mitigating them and ensuring a happier future.

Follow us on social media:
https://twitter.com/sfiscience.
https://instagram.com/sfiscience.
https://facebook.com/santafeinstitute.
https://facebook.com/groups/santafeinstitute.
https://linkedin.com/company/santafeinstitute.

https://complexity.simplecast.com.
https://aliencrashsite.org

This post is a collaboration with Dr. Augustine Fou, a seasoned digital marketer who helps marketers audit their campaigns for ad fraud and provides alternative performance optimization solutions, and Jodi Masters-Gonzales, Research Director at Beacon Trust Network and a doctoral student in Pepperdine University’s Global Leadership and Change program, where her research sits at the intersection of data privacy & ethics, public policy, and the digital economy.

The ad industry has gone through a massive transformation since the advent of digital. This is a multi-billion-dollar industry that started out as a way for businesses to bring more market visibility to products and services more effectively, while evolving features that would allow advertisers to garner valuable insights about their customers and prospects. Fast-forward 20 years, and the promise of better ad performance and delivery of the right customers has also created and enabled a rampant environment of massive data sharing, more invasive personal targeting, and higher incidences of consumer manipulation than ever before. It has evolved over time, under the noses of business and industry, with benefits realized by a relative few. How did we get here? More importantly, can we curb the path of a burgeoning industry to truly protect people’s data rights?

There was a time when advertising inventory was finite. Long before digital, buying impressions was primarily done through offline publications, television and radio. Premium slots commanded higher CPM (cost per thousand) rates to obtain the most coveted consumer attention. The big advertisers with the deepest pockets largely benefitted from this space by commanding the largest reach.
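To make the CPM arithmetic above concrete, here is a minimal sketch in Python; the impression counts and rates are hypothetical, purely for illustration and not taken from the post:

def campaign_cost(impressions: int, cpm_rate: float) -> float:
    # CPM is the price paid per 1,000 impressions, so total cost scales with impressions / 1000.
    return impressions / 1000 * cpm_rate

# Hypothetical comparison: a premium slot at a $40 CPM versus a standard slot at an $8 CPM,
# each delivering 2 million impressions.
print(campaign_cost(2_000_000, 40.0))  # 80000.0 -- premium slot
print(campaign_cost(2_000_000, 8.0))   # 16000.0 -- standard slot

The point of the comparison is simply that, in the pre-digital market described here, the most coveted reach carried a proportionally higher price, which favored the advertisers with the deepest pockets.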

Many people reject scientific expertise and prefer ideology to facts. Lee McIntyre argues that anyone can and should fight back against science deniers.
Watch the Q&A: https://youtu.be/2jTiXCLzMv4
Lee’s book “How to Talk to a Science Denier” is out now: https://geni.us/leemcintyre.

“Climate change is a hoax—and so is coronavirus.” “Vaccines are bad for you.” Many people may believe such statements, but how can scientists and informed citizens convince these ‘science deniers’ that their beliefs are mistaken?

Join Lee McIntyre as he draws on his own experience, including a visit to a Flat Earth convention as well as academic research, to explain the common themes of science denialism.

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and an Instructor in Ethics at Harvard Extension School. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan (Ann Arbor). He has taught philosophy at Colgate University (where he won the Fraternity and Sorority Faculty Award for Excellence in Teaching Philosophy), Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston.

This talk was recorded on 24 August 2021.


A very special thank you to our Patreon supporters who help make these videos happen, especially:
Abdelkhalek Ayad, Martin Paull, Anthony Powers, Ben Wynne-Simmons, Ivo Danihelka, Hamza, Paulina Barren, Metzger, Kevin Winoto, Jonathan Killin, János Fekete, Mehdi Razavi, Mark Barden, Taylor Hornby, Rasiel Suarez, Stephan Giersche, William ‘Billy’ Robillard, Scott Edwardsen, Jeffrey Schweitzer, Gou Ranon, Christina Baum, Frances Dunne, jonas.app, Tim Karr, Adam Leos, Michelle J. Zamarron, Andrew Downing, Fairleigh McGill, Alan Latteri, David Crowner, Matt Townsend, Anonymous, Robert Reinecke, Paul Brown, Lasse T. Stendan, David Schick, Joe Godenzi, Dave Ostler, Osian Gwyn Williams, David Lindo, Roger Baker, Greg Nagel, and Rebecca Pan.

Subscribe for regular science videos: http://bit.ly/RiSubscRibe.
The Ri is on Patreon: https://www.patreon.com/TheRoyalInstitution.
and Twitter: http://twitter.com/ri_science.
and Facebook: http://www.facebook.com/royalinstitution.
and Tumblr: http://ri-science.tumblr.com/
Our editorial policy: http://www.rigb.org/home/editorial-policy.
Subscribe for the latest science videos: http://bit.ly/RiNewsletter.

Acclaimed Harvard professor and entrepreneur Dr. David Sinclair believes that we will see human life expectancy increase to at least 100 years within this century. A world in which humans live significantly longer will have a major impact on economies, policies, healthcare, education, ethics, and more. Sinclair joined Bridgewater Portfolio Strategist Atul Lele to discuss the science and societal, political, systemic and ethical implications of humans living significantly longer lives.

Recorded: Aug 30 2021

0:00 – 19:20 The Science of Slowing Aging and Increasing Life Expectancy
19:20 – 30:40 What Increasing Life Expectancy Means for Individuals
30:40 – 44:18 The Impact on Pension, Healthcare and Education Systems
44:18 – 51:24 The Economic Benefits of Longer Life Expectancy

Human Factors, Ethical Artificial Intelligence, and Healthy Aging — Dr. Arathi Sethumadhavan, Head of User Research, AI, Ethics & Society, Microsoft Cloud+AI.


Dr. Arathi Sethumadhavan is Head of User Research for AI, Ethics & Society at Microsoft’s Cloud+AI organization, where she works at the intersection of user research, ethics, and product experience.

In her current role, Dr. Sethumadhavan is focused on the Microsoft AI ethical principles (privacy and consent, fairness, inclusion, accountability, and transparency) as they relate to various Microsoft AI experiences.

Dr. Sethumadhavan is a seasoned research leader with two decades of experience studying human-technology interaction. Over the course of her career, she has led user research for several novel and complex applications (e.g., Microsoft’s custom neural voice and facial recognition), as well as at Medtronic, where she provided human factors leadership for multiple products in the Cardiac Rhythm and Heart Failure portfolio, including the world’s smallest pacemaker. She has also spent several years investigating the implications of automation for air traffic controller performance and situation awareness.

Dr. Sethumadhavan is also a Fellow at the World Economic Forum, where she is working on unlocking opportunities for positive impact with AI to address the needs of the aging population.

Dr. Sethumadhavan has published ~60 articles on topics ranging from patient safety to affective computing and human-robot interaction, has delivered ~80 lectures, has been cited by the American Psychological Association and The Economist, and has worn many hats along the way, including research leader, strategist, author, mentor, editor, keynote speaker, and sometimes adjunct professor.

Dr. Sethumadhavan’s book, “Design for Health: Applications of Human Factors”, was published in 2020.

Dr. Sethumadhavan has a PhD in Experimental Psychology (with a specialization in human factors and ergonomics) from Texas Tech University and an undergraduate degree in Computer Science from the University of Calicut.