
Bitnation is growing up.


đŸ”„ đŸ”„ đŸ”„ NEW RELEASE: #BITNATION JURISDICTION v. 1.4.0 for Android and iOS đŸ€© đŸ„ł đŸ„°

The 1.4.0 release has been a crazy road! After the 1.3.4 release, we felt the app somehow did not say “I’m a virtual nation” or “I’m a blockchain jurisdiction”; instead, it looked more like a confused web3 app that didn’t really know its purpose.

Hence we went back to the drawing board to put the governance functions at the very center of the user experience. The result is three main bottom-menu categories: TOWNHALL, NATIONS and the brand-new GOVMARKET. All other functions moved to a new side menu, as sketched below.
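For anyone curious how a layout like that is typically wired up, here is a minimal, purely illustrative sketch in TypeScript using React Navigation. This is not Bitnation’s actual code, and the side-menu entries (Wallet, Settings) are hypothetical placeholders; only TOWNHALL, NATIONS and GOVMARKET come from the release notes above.

```tsx
// Illustrative sketch only: hypothetical screens and a React Navigation setup,
// not the Bitnation Jurisdiction app's real implementation.
import * as React from 'react';
import { Text } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createBottomTabNavigator } from '@react-navigation/bottom-tabs';
import { createDrawerNavigator } from '@react-navigation/drawer';

// Placeholder screens for the three governance-centred categories.
const TownhallScreen = () => <Text>Townhall</Text>;
const NationsScreen = () => <Text>Nations</Text>;
const GovMarketScreen = () => <Text>GovMarket</Text>;
// Hypothetical examples of "all other functions" moved to the side menu.
const WalletScreen = () => <Text>Wallet</Text>;
const SettingsScreen = () => <Text>Settings</Text>;

const Tab = createBottomTabNavigator();
const Drawer = createDrawerNavigator();

// Bottom tabs carry the three main governance functions.
const GovernanceTabs = () => (
  <Tab.Navigator>
    <Tab.Screen name="TOWNHALL" component={TownhallScreen} />
    <Tab.Screen name="NATIONS" component={NationsScreen} />
    <Tab.Screen name="GOVMARKET" component={GovMarketScreen} />
  </Tab.Navigator>
);

// The drawer (side menu) wraps the tabs and holds everything else.
export default function App() {
  return (
    <NavigationContainer>
      <Drawer.Navigator>
        <Drawer.Screen name="Home" component={GovernanceTabs} />
        <Drawer.Screen name="Wallet" component={WalletScreen} />
        <Drawer.Screen name="Settings" component={SettingsScreen} />
      </Drawer.Navigator>
    </NavigationContainer>
  );
}
```

The point of the layout is simply that the drawer wraps the tab navigator, so the three governance categories stay one tap away while everything else lives in the side menu.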

Read more

Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.


AI technologies should be accountable, explainable, and unbiased, says EU.

Read more

James Hughes: “Great convo with Yuval Harari, touching on algorithmic governance, the perils of being a big thinker when democracy is under attack, the need for transnational governance, the threats of automation to the developing world, the practical details of UBI, and a lot more.”


In this episode of the Waking Up podcast, Sam Harris speaks with Yuval Noah Harari about his new book 21 Lessons for the 21st Century. They discuss the importance of meditation for his intellectual life, the primacy of stories, the need to revise our fundamental assumptions about human civilization, the threats to liberal democracy, a world without work, universal basic income, the virtues of nationalism, the implications of AI and automation, and other topics.

Yuval Noah Harari has a PhD in History from the University of Oxford and lectures at the Hebrew University of Jerusalem, specializing in world history. His books have been translated into 50+ languages, with 12+ million copies sold worldwide. Sapiens: A Brief History of Humankind looked deep into our past, Homo Deus: A Brief History of Tomorrow considered far-future scenarios, and 21 Lessons for the 21st Century focuses on the biggest questions of the present moment.

Twitter: @harari_yuval

Want to support the Waking Up podcast?

Please visit: http://www.samharris.org/support

Get Sam’s email newsletter: https://www.samharris.org/email_signup

Read more

Already, 1,000 smart city pilots are under construction or in their final urban planning stages across the globe, driving forward countless visions of the future.

As data becomes the gold of the 21st century, centralized databases and hyper-connected infrastructures will enable everything from sentient cities that respond to data inputs in real time to smart public services that revolutionize modern governance.

Connecting countless industries—real estate, energy, sensors and networks, and transportation, among others—tomorrow’s cities pose no end of creative possibilities and stand to completely transform the human experience.

Read more

CERN has revealed plans for a gigantic successor to the giant atom smasher LHC, the biggest machine ever built. Particle physicists will never stop asking for ever larger big bang machines. But where are the limits for ordinary society in terms of costs and existential risks?

CERN boffins are already conducting a mega-experiment at the LHC, a 27 km circular particle collider, at a cost of several billion euros, to study the conditions of matter as they existed fractions of a second after the big bang and to find the smallest particle possible – though the question is how they could ever know they had found it. Now they profess to be a little upset because they have not found any particles beyond the Standard Model, that is, anything they would not expect. To change that, particle physicists would like to build an even larger “Future Circular Collider” (FCC) near Geneva, where CERN enjoys extraterritorial status, with a ring of 100 km – for about 24 billion euros.

Experts point out that this research could be as limitless as the universe itself. The UK’s former Chief Scientific Advisor, Prof Sir David King, told the BBC: “We have to draw a line somewhere otherwise we end up with a collider that is so large that it goes around the equator. And if it doesn’t end there perhaps there will be a request for one that goes to the Moon and back.”

“There is always going to be more deep physics to be conducted with larger and larger colliders. My question is to what extent will the knowledge that we already have be extended to benefit humanity?”

There have been broad discussions about whether high-energy particle experiments could sooner or later pose an existential risk, for example by producing micro black holes (mBH) or strange matter (strangelets) that could convert ordinary matter into strange matter and, once stable – theoretically at a mass of around 1,000 protons – could eventually start a runaway chain reaction.

CERN has argued that micro black holes could possibly be produced, but that they would not be stable and would evaporate immediately due to “Hawking radiation”, a theoretical process that has never been observed.

Furthermore, CERN argues that similar high-energy particle collisions occur naturally in the universe and in the Earth’s atmosphere, so they could not be dangerous. However, such natural high-energy collisions are rare and have only been measured rather indirectly. Basically, nature does not set up LHC experiments: the density of such artificial particle collisions, for example, never occurs in Earth’s atmosphere. And even if the cosmic-ray argument were legitimate, CERN produces as many high-energy collisions in a narrow artificial space as occur naturally in the atmosphere over more than a hundred thousand years. Physicists look quite puzzled when they recalculate it.

Others argue that a particle collider ring would have to be bigger than the Earth to be dangerous.

A study on “Methodological Challenges for Risks with Low Probabilities and High Stakes” was provided by Lifeboat member Prof Raffaela Hillerbrand et al. Prof Eric Johnson submitted a paper discussing legal difficulties (lawsuits were either unsuccessful or not accepted) as well as the problem of groupthink within scientific communities. Further important contributions to the existential-risk debate came from risk-assessment experts Wolfgang Kromp and Mark Leggett, and from R. Plaga, Eric Penrose, Walter Wagner, Otto Roessler, James Blodgett, Tom Kerwick and many more.

Since these discussions can become very sophisticated, there is also a more general approach (see video): according to present research, there are around 10 billion Earth-like planets in our galaxy, the Milky Way, alone. Intelligent life might send radio waves, because they are extremely long-lasting, yet we have not received any (the “Fermi paradox”). Theory postulates that there could be a “great filter”, something that wipes out intelligent civilizations at a rather early stage of their technical development. Let that sink in.

All technical civilizations would start to build particle smashers to find out how the universe works, to get as close as possible to the big bang and to hunt for the smallest particle with bigger and bigger machines. But maybe there is a very unexpected effect lurking at a certain threshold that nobody would ever think of and that theory does not provide. Indeed, this could be a logical candidate for the “great filter”, an explanation for the Fermi paradox. If it were, a disastrous big bang machine might not be that big at all – because if civilizations had to construct a collider of epic dimensions, a lack of resources would have stopped most of them first.

Finally, the CERN member states will have to decide on the budget and the future course.

The political question behind is: How far are the ordinary citizens paying for that willing to go?

LHC-Critique / LHC-Kritik

Network to discuss the risks at experimental subnuclear particle accelerators

www.lhc-concern.info

LHC-Critique[at]gmx.com

https://www.facebook.com/LHC-Critique-LHC-Kritik-128633813877959/

Particle collider safety newsgroup at Facebook:

https://www.facebook.com/groups/particle.collider/

https://www.facebook.com/groups/LHC.Critique/

Humanity is under threat. At least according to Sir Martin Rees, one of Britain’s most esteemed astronomers.


These two kinds of technologies enable just a few people to have a hugely wide-ranging and maybe even global cascading effect. This leads to big problems of governance because you’d like to regulate the use of these things, but enforcing regulations worldwide is very, very difficult. Think how hopeless it is to enforce the drug laws globally or the tax laws globally. To actually ensure that no one misuses these new technologies is just as difficult. I worry that we are going to have to minimize this risk by actions which lead to a great tension between privacy, liberty and security.

Do you see ways that we can use and develop these technologies in a responsible way?

We’ve got to try. We can’t put the genie back in the bottle. We’ve just got to make sure that we can derive benefits and minimize risks. When I say we have a bumpy ride, I think it is hard to imagine that there won’t be occasions when there are quite serious disruptions caused by either error or by design using these new powerful technologies.

Read more

I’m speaking next Friday evening, Dec 14, at 6 PM, then doing a panel, at NodeSF in San Francisco. Hosted by the Institute for Competitive Governance and the Startup Societies Foundation, the event will discuss innovative approaches for new cities and societies. Join us in building the future! https://www.eventbrite.com/e/future-cities-distributed-societies-governance-software-tickets-52947221565 #transhumanism


What are the cities of the future going to look like? Are they going to be sovereign states? Will people have decentralized governments? What is the future of law like?

Read more

Over two decades, I’ve worked with many collaborators studying infrastructure commons and knowledge commons. We developed the Governing Knowledge Commons framework, adapting Ostrom’s empirical approach to the special characteristics of knowledge resources. Understanding how communities share and develop knowledge is crucial in today’s “information society.” And, of course, sharing and developing knowledge is critical to successful governance of natural resources, especially on a global scale. Using the GKC framework, we’ve made substantial progress toward an empirical picture of knowledge-related commons governance. But it’s not nearly enough. Here’s why.


A classic study of the perils of resource sharing, with implications for how we deal with climate, has been updated.

Read more

Ha, which would be the bigger challenge? đŸ€”


The growing mistrust and hostility towards global institutions must be overcome if the world is to successfully tackle the environmental challenges it faces, the head of the University of Sussex’s global sustainability research centre has warned.

Professor Joseph Alcamo, Director of the Sussex Sustainability Research Programme (SSRP), said high-quality research and closer engagement with citizens around the world were needed to overcome the growing zeitgeist that viewed organisations such as the UN as meddling, amid a geopolitical backdrop of cancelled treaties, neglected obligations and frozen negotiations.

Delivering his keynote speech at the 2018 Utrecht Conference on Earth System Governance this morning, Prof Alcamo said: “To many people, earth system governance is not beautiful, it is worrisome; it means loss of control over their lives, and this mistrust is a big part of the national retrenchment going on.”

Read more