
Consciousness remains scientifically elusive because it comprises layers upon layers of non-material emergence. Reverse-engineering our thinking should therefore proceed in terms of networks, modules, algorithms, and second-order emergence, that is, meta-algorithms, or groups of modules. Neuronal circuits correlate with “immaterial” cognitive modules, and these cognitive algorithms, when activated, produce meta-algorithmic conscious awareness and phenomenal experience: at least two layers of emergence on top of “physical” neurons. Furthermore, according to the Holographic Principle, consciousness represents certain transcendent aspects of projective ontology.

#CyberneticTheoryofMind


There’s no shortage of workable theories of consciousness and its origins, each with its own merits and perspective. In the book we discuss the most relevant of them alongside my own Cybernetic Theory of Mind, which I’m currently developing. Interestingly, these leading theories, if metaphysically extended, largely lend support to Cyberneticism and Digital Pantheism, which may come into scientific vogue with the cyberhumanity of the future.

According to the Interface Theory of Perception developed by Donald Hoffman and the Biocentric theory of consciousness developed by Robert Lanza, any universe is essentially non-existent without a conscious observer. In both theories, conscious minds are required as primary building blocks for any universe to arise from the probabilistic domain into existence. But biological minds reveal only a sliver of the space of possible minds. Building on the tenets of Biocentrism, Cyberneticism goes further and includes all other possible conscious observers, such as artificially intelligent self-aware entities. Perhaps the extended theory could be dubbed ‘Noocentrism’.

Existence boils down to experience. No matter what ontological level a conscious entity finds itself at, it will be smack in the middle, between its transcendental realm and lower levels of organization. This is why I prefer the terms ‘Experiential Realism’ and ‘Pantheism’ over ‘Panentheism’, which some have suggested in regard to my philosophy.


Imagine operating a computer by moving your hands in the air, as Tony Stark does in “Iron Man.” Or using a smartphone to magnify an object, as Harrison Ford’s character does in “Blade Runner.” Or a next-generation video meeting where augmented-reality glasses make it possible to view 3D avatars. Or a generation of autonomous vehicles capable of driving safely in city traffic.

These advances and a host of others on the horizon could happen because of metamaterials, which make it possible to control beams of light with the same ease that computer chips control electricity.

WASHINGTON — The Department of Defense wants to see a prototype that can ensure spectrum is available whenever it’s needed for aerial combat training, according to an April 26 request from the National Spectrum Consortium.

The effort, focused specifically on the Operational Spectrum Comprehension, Analytics, and Response (OSCAR) project, is part of a larger portfolio in the Spectrum Access Research & Development Program run by the DoD’s research and engineering office. That program hopes to develop near-real-time spectrum management technologies that leverage machine learning and artificial intelligence to allocate spectrum assignments more efficiently and dynamically, based on operational planning or operational outcomes, a release said.

“I think of this set of projects as a toolset that’s really the beginning of starting to move toward pushing those fundamental technologies into more direct operational application,” Maren Leed, executive director of the National Spectrum Consortium, told C4ISRNET. It’s “starting to bridge from just sharing with commercial into capabilities that are going to enable warfighting much more directly.”
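
The release does not describe OSCAR’s algorithms, but the underlying scheduling problem, granting a limited pool of frequencies to competing training missions without conflicts, is easy to make concrete. Below is a minimal greedy channel-assignment sketch in Python; the function name, request format, and mission labels are all hypothetical illustrations, not anything specified by the consortium.

```python
# Toy illustration of dynamic spectrum assignment (hypothetical; not OSCAR's
# actual algorithm): greedily grant each training mission the first channel
# that is free for its requested time window.

def assign_channels(requests, channels):
    """requests: list of (mission, start, end); channels: list of channel ids.
    Returns {mission: channel} with no overlapping bookings on any channel."""
    schedule = {ch: [] for ch in channels}  # channel -> booked (start, end) windows
    assignments = {}
    for mission, start, end in sorted(requests, key=lambda r: r[1]):
        for ch in channels:
            # A channel fits if none of its bookings overlap the requested window.
            if all(end <= s or start >= e for s, e in schedule[ch]):
                schedule[ch].append((start, end))
                assignments[mission] = ch
                break
    return assignments

# Example: three sorties competing for two channels.
reqs = [("RedFlag-1", 900, 1030), ("RedFlag-2", 1000, 1100), ("RedFlag-3", 1045, 1200)]
print(assign_channels(reqs, ["CH-A", "CH-B"]))
# RedFlag-1 -> CH-A; RedFlag-2 overlaps it, so CH-B; RedFlag-3 fits back on CH-A
```

A real system would fold in the ML-driven prediction and operational feedback the program describes; the greedy loop only shows the shape of the allocation step.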

The Autonomous Weeder, developed by Carbon Robotics, uses a combination of artificial intelligence (AI), robotics, and laser technology to drive safely and effectively through crop fields, identifying, targeting, and eliminating weeds.

Unlike other weeding technologies, the robot utilises high-power lasers to eradicate weeds with thermal energy, without disturbing the soil. This could allow farmers to use fewer herbicides, while reducing labour costs and improving the reliability and predictability of crop yields.

“AI and deep learning technology are creating efficiencies across a variety of industries and we’re excited to apply it to agriculture,” said Paul Mikesell, CEO and founder of Carbon Robotics. “Farmers, and others in the global food supply chain, are innovating now more than ever to keep the world fed. Our goal is to create tools that address their most challenging problems, including weed management and elimination.”

A person can weed about one acre of crops a day. This smart robot can weed 20.


Carbon Robotics has unveiled the third generation of its Autonomous Weeder, a smart farming robot that identifies weeds and then destroys them with high-power lasers.

The weedkiller challenge: Weeds compete with plants for space, sunlight, and soil nutrients. They can also make it easier for insect pests to harm crops, so weed control is a top concern for farmers.

Chemical herbicides can kill the pesky plants, but they can also contaminate water and affect soil health. Weeds can be pulled out by hand, but it’s unpleasant work, and labor shortages are already a huge problem in the agriculture industry.
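
Carbon Robotics has not published its detection stack, but the perception loop of any laser weeder follows a familiar pattern: classify plants in each camera frame, then hand high-confidence weed detections to the targeting system. The sketch below assumes a hypothetical pretrained detector (here a stub class stands in for it) and made-up names throughout; it illustrates the general approach, not the company’s code.

```python
# Sketch of a laser weeder's perception step (hypothetical; Carbon Robotics'
# actual pipeline is not public). A real system would run a trained vision
# model on each camera frame; a stub detector stands in for it here.

CONFIDENCE_THRESHOLD = 0.9  # only target high-confidence weed detections

class StubDetector:
    """Placeholder for a trained weed/crop detector."""
    def detect(self, frame):
        # A real model would compute detections from the frame pixels.
        return [("weed", 0.97, 412, 88),   # (label, confidence, x, y)
                ("crop", 0.99, 130, 301),
                ("weed", 0.55, 640, 212)]  # too uncertain to fire on

def targets_from_frame(frame, model):
    """Return pixel coordinates of weeds to hand off to laser targeting."""
    return [(x, y) for label, conf, x, y in model.detect(frame)
            if label == "weed" and conf >= CONFIDENCE_THRESHOLD]

print(targets_from_frame(frame=None, model=StubDetector()))  # [(412, 88)]
```

The confidence threshold is the safety lever: it trades a few missed weeds for near-zero risk of lasing a crop plant.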

Artificial intelligence is helping humans make new kinds of art. It is more likely to emerge as a collaborator than a competitor for those working in creative industries. Film supported by Mishcon de Reya.


Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.

The problem is that existing computer technology looks very different from biological information-processing systems and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, those signals come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.

Now, though, researchers at NIST think they’ve found a way to combine existing technologies to mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match but surpass the physical limits of biological systems.
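
To make the digital-versus-analog contrast concrete, here is a minimal simulation of a leaky integrate-and-fire neuron, the textbook spiking model that neuromorphic hardware typically emulates: membrane voltage continuously integrates input current, leaks back toward rest, and emits a discrete spike when it crosses a threshold. This is a generic illustration with arbitrary parameters, not NIST’s superconducting design.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: analog integration between
# discrete spike events. (Generic textbook model, not NIST's hardware.)

dt, tau = 1e-3, 20e-3        # time step (s), membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0  # spike threshold and post-spike reset voltage
v, spikes = 0.0, []

input_current = np.full(1000, 1.2)  # constant drive for 1 s of simulated time
for t, i_in in enumerate(input_current):
    v += dt / tau * (-v + i_in)     # leaky integration of the input current
    if v >= v_thresh:               # threshold crossing -> emit a spike
        spikes.append(t * dt)
        v = v_reset                 # reset membrane voltage after spiking

print(f"{len(spikes)} spikes; first at t = {spikes[0]:.3f} s")
```

Note how information lives in spike timing rather than in clocked binary words; that is the gap neuromorphic hardware tries to close.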

The world’s biggest AI chip just doubled its specs—without adding an inch.

The Cerebras Systems Wafer Scale Engine is about the size of a big dinner plate. All that surface area enables a lot more of everything, from processors to memory. The first WSE chip, released in 2019, had an incredible 1.2 trillion transistors and 400,000 processing cores. Its successor doubles everything, except its physical size.

The WSE-2 crams 2.6 trillion transistors and 850,000 cores onto the same dinner plate. Its on-chip memory has increased from 18 gigabytes to 40 gigabytes, and the rate at which it shuttles information to and from that memory has gone from 9 petabytes per second to 20 petabytes per second.
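
A quick back-of-the-envelope check of those figures bears out the “doubles everything” framing; every metric scales by roughly 2.1 to 2.2 times:

```python
# Generational jump from WSE to WSE-2, using the figures quoted above.
specs = {
    "transistors (trillions)":    (1.2, 2.6),
    "cores (thousands)":          (400, 850),
    "on-chip memory (GB)":        (18, 40),
    "memory bandwidth (PB/s)":    (9, 20),
}
for name, (wse1, wse2) in specs.items():
    print(f"{name}: {wse1} -> {wse2}  ({wse2 / wse1:.2f}x)")
# Each line prints a ratio between 2.12x and 2.22x.
```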