Instead of just getting rid of the regressive element from the Heritage Foundation, Google just cancelled the whole thing.
The controversial panel lasted just a little over a week.
Google recently appointed an external ethics council to deal with tricky issues in artificial intelligence. The group is meant to help the company appease critics while still pursuing lucrative cloud computing deals.
Less than a week in, the council is already falling apart, a development that may jeopardize Google’s chances of winning more military cloud-computing contracts.
On Saturday, Alessandro Acquisti, a behavioral economist and privacy researcher, said he won’t be serving on the council. “While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work,” Acquisti said on Twitter. He didn’t respond to a request for comment.
Geoffrey Rockwell and Bettina Berendt’s (2017) article calls for ethical consideration around big data and digital archives, asking us to reconsider whether open access is always a good. In outlining how digital archives and algorithms structure potential relationships with those whose testimony has been digitized, Rockwell and Berendt highlight how data practices change the relationship between researcher and researched. They make a provocative and important argument: datafication and open access should, in certain cases, be resisted. They champion the careful curation of data rather than its large-scale collection, pointing to the ways in which these data are used to construct knowledge about research subjects and fundamentally limit their agency by controlling the narratives told about them. Rockwell and Berendt, drawing on Aboriginal Knowledge (AK) frameworks, amongst others, argue that some knowledge is simply not meant to be openly shared: information is not an inherent good, and access to information must instead be earned. This approach was prompted, in part, by their own work scraping #gamergate Twitter feeds and the ways in which these data could be used to speak for others without their consent.
From our vantage point, Rockwell and Berendt’s renewed call for an ethics of datafication is a timely one, as we are mired in media reports of social media surveillance and electoral tampering on one side. Thanks, Facebook. On the other side, academics fight for the right to collect and access big data in order to reveal how gender and racial discrimination are embedded in the algorithms that structure everything from online real estate listings, to loan interest rates, to job postings (American Civil Liberties Union 2018). As surveillance studies scholars, we deeply appreciate how Rockwell and Berendt take a novel approach: they turn to a discussion of Freedom of Information (FOI), Freedom of Expression (FOE), Free and Open Source software, and Access to Information. In doing so, they unpack assumptions commonly held by librarians, digital humanists, and academics in general, to show that accumulation and datafication are not an inherent good.
Well, Wesley J Smith just did another hit piece against Transhumanism. https://www.nationalreview.com/corner/transhumanism-the-lazy-way-to-human-improvement/
It’s full of his usual horrible attempts to justify his intelligent design roots while trying to tell people he doesn’t have any religious reasons for it. But then again, what can you expect from the National Review.
Sometimes you have to laugh. In “Transhumanism and the Death of Human Exceptionalism,” published in Aero, Peter Clarke quotes criticism I leveled against transhumanism in a piece I wrote entitled “The Transhumanist Bill of Wrongs.” From my piece:
Transhumanism would shatter human exceptionalism. The moral philosophy of the West holds that each human being is possessed of natural rights that adhere solely and merely because we are human. But transhumanists yearn to remake humanity in their own image—including as cyborgs, group personalities residing in the Internet Cloud, or AI-controlled machines.
That requires denying that natural man is exceptional in order to justify our substantial deconstruction and redesign. Thus, rather than view human beings as exclusive rights-bearers, the [Transhumanist Bill of Rights] would grant rights to all “sentient entities,” a category that includes both the biological and the mechanical.
When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place.
We founded Clarifai 4 Good, through which we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took its social responsibility seriously.
I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war.
The researchers found that people have a moral preference for supporting good causes and against supporting harmful ones. However, if the monetary incentive is strong enough, people will at some point switch to selfish behavior. When the authors reduced the excitability of the rTPJ using electromagnetic stimulation, the participants’ moral behavior remained more stable.
“If we don’t let the brain deliberate on conflicting moral and monetary values, people are more likely to stick to their moral convictions and aren’t swayed, even by high financial incentives,” explains Christian Ruff. According to the neuroeconomist, this is a remarkable finding, since: “In principle, it’s also conceivable that people are intuitively guided by financial interests and only take the altruistic path as a result of their deliberations.”
Our actions are guided by moral values. However, monetary incentives can get in the way of our good intentions. Neuroeconomists at the University of Zurich have now investigated in which area of the brain conflicts between moral and material motives are resolved. Their findings reveal that our actions are more social when these deliberations are inhibited.
When donating money to a charity or doing volunteer work, we put someone else’s needs before our own and forgo our own material interests in favor of moral values. Studies have described this behavior as reflecting either a personal predisposition for altruism, an instrument for personal reputation management, or a mental trade-off of the pros and cons associated with different actions.
Impact of electromagnetic stimulation on donating behavior
A research team led by UZH professor Christian Ruff from the Zurich Center for Neuroeconomics has now investigated the neurobiological origins of unselfish behavior. The researchers focused on the right temporoparietal junction (rTPJ), an area of the brain believed to play a crucial role in social decision-making. To understand the exact function of the rTPJ, they designed an experimental setup in which participants had to decide whether, and how much, they wanted to donate to various organizations. Through electromagnetic stimulation of the rTPJ, the researchers were then able to determine which of the three types of considerations — predisposed altruism, reputation management, or trading off moral and material values — are processed in this area of the brain.