
Your Survival Depends On All Of Us — Support Open Sourcing Collective Superintelligence

The point of the summit is simple: Artificial Superintelligence (ASI) is coming eventually. Groups of organizations are already discussing the existential risk that ASI poses to humanity, and even if we only develop an AGI, that AGI will still create ASI, and we lose control at some point. Supporting the open sourcing of collective superintelligent systems is our only hope for keeping up, and it moves us forward before other technologies outpace our ability to do so.

Please support our summit and help decide how to open source a version of the mASI (mediated Artificial Superintelligence) system, and how to create a community-driven effort to make these systems better and better. Attendance helps raise enough money to cover the costs of support services, cloud infrastructure, and the digital resources needed to get this open-source project up, covering publishing and support costs while also making people aware of it.

Papers and formal thinking are also badly needed. This particular field of collective intelligence is poorly represented in scientific papers, and we hope this project can bring more prominence to the possibility of helping humanity become more than what we are: strong enough to contain AGI while we ourselves become smarter and, for those who want it, move to full digitization of humanity. Then we can contain ASI safely and embrace the singularity.

Please help save yourself and humanity by supporting the Collective Superintelligence Summit. Sign up and attend here:


This is the early bird sign-up for the virtual summit, held June 4th from 6 a.m. to 4 p.m. PST via Zoom and YouTube. Speaker sessions, panels, and workshops will take place in Zoom, with streaming via YouTube.

Who is Running the Summit:

This summit is run in conjunction with the BICA (Biologically Inspired Cognitive Architectures) Society, the AGI Laboratory, and The Foundation.

HENDERSON, Nev.—Artificial Intelligence Technology Solutions, Inc. (OTCPK: AITX) today announced that its wholly-owned subsidiary Robotic Assistance Devices (RAD) has entered into an agreement with EAGL Technology, Inc. to offer EAGL’s Gunshot Detection System (GDS) in all present and foreseeable future RAD devices.

“We have been receiving repeated requests that gunshot detection capabilities be built into RAD devices from industries as varied as transit operators, retail property managers, and law enforcement. Integrating EAGL’s technology into RAD’s autonomous response solutions should be well received by all of the markets we serve.”

EAGL Technology was established in 2015 after acquiring gunshot ballistic science developed by the Department of Energy (DOE) Pacific Northwest National Laboratory (PNNL). EAGL has advanced this technology by creating a state-of-the-art security system. The EAGL product offering utilizes the company’s patented FireFly® Ballistic Sensor technology, which RAD will offer as an integrated option on all mobile and stationary security solutions. EAGL clients include Honeywell, Johnson Controls, Siemens, and many more.

Dedicated to those who argue that life extension is bad because it will create overpopulation problems. In addition to the fact that birth rates are already falling dangerously in some developed countries, this is only one example of the changes that will likely take place well before life extension could create such a problem, if it ever does.


Plenty, an ag-tech startup in San Francisco co-founded by Nate Storey, has been able to increase its productivity and production quality by using artificial intelligence and a new farming strategy. The company’s farms take up only 2 acres yet produce 720 acres’ worth of fruit and vegetables. In addition to this impressive food production, the company also manages production with robots and artificial intelligence.

The company says their farm produces about 400 times more food per acre than a traditional farm. It uses robots and AI to monitor water consumption, light, and the ambient temperature where the plants grow. Over time, the AI learns how to grow crops faster and with better quality.
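Plenty has not published its control software, but the monitor-and-adjust cycle described above can be pictured as a simple feedback loop. The Python sketch below is a hypothetical illustration only: the sensor fields, setpoint values, and the `adjust` helper are all assumptions, not anything from Plenty's actual system.

```python
# Hypothetical sketch of the monitor-and-adjust loop described above.
# Sensor names, setpoints, and corrections are illustrative assumptions,
# not Plenty's actual software.

from dataclasses import dataclass

@dataclass
class Reading:
    water_lph: float   # water consumption, liters per hour
    light_lux: float   # light intensity
    temp_c: float      # ambient temperature, Celsius

# Illustrative setpoints a grow-room controller might target.
TARGETS = Reading(water_lph=4.0, light_lux=20000.0, temp_c=22.0)

def adjust(current: Reading, target: Reading) -> dict:
    """Return simple corrections that nudge each variable toward its target."""
    return {
        "water_valve": target.water_lph - current.water_lph,
        "lamp_power": target.light_lux - current.light_lux,
        "hvac": target.temp_c - current.temp_c,
    }

if __name__ == "__main__":
    now = Reading(water_lph=5.0, light_lux=18500.0, temp_c=23.5)
    print(adjust(now, TARGETS))
    # {'water_valve': -1.0, 'lamp_power': 1500.0, 'hvac': -1.5}
```

In a real system the corrections would drive actuators, and a learned model would tune the setpoints over time; the point here is just the shape of the loop.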

While this is great for food quality, it also helps conserve resources. The water is recycled and evaporated water is recaptured, so there is virtually no waste. The startup estimates that this smart farm is so efficient that it produces better fruits and vegetables using 95% less water and 99% less land than normal farming operations.
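Those figures can be cross-checked with nothing more than the arithmetic below. Using only the numbers quoted in the article (2 acres yielding 720 acres' worth of produce, 95% less water, 99% less land), the implied per-acre yield multiplier works out to 360x, in the same ballpark as the roughly 400x the company quotes.

```python
# Back-of-the-envelope check using only the figures quoted in the article.

farm_acres = 2            # Plenty's indoor farm footprint
equivalent_acres = 720    # conventional acreage it claims to replace

yield_multiplier = equivalent_acres / farm_acres
print(f"Yield per acre: {yield_multiplier:.0f}x a traditional farm")  # 360x

# The stated resource savings, expressed as fractions of conventional use:
water_used = 1 - 0.95     # 95% less water -> 5% of normal
land_used = 1 - 0.99      # 99% less land  -> 1% of normal
print(f"Water used: {water_used:.0%} of a conventional farm")  # 5%
print(f"Land used:  {land_used:.0%} of a conventional farm")   # 1%
```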

Summary: Combining neuroimaging data with artificial intelligence technology, researchers have identified a complex network within the brain that comprehends the meaning of spoken sentences.

Source: University of Rochester Medical Center.

Have you ever wondered why you are able to hear a sentence and understand its meaning – given that the same words in a different order would have an entirely different meaning?

Researchers from the Graduate School of Engineering and Symbiotic Intelligent Systems Research Center at Osaka University used motion capture cameras to compare the expressions of android and human faces. They found that the mechanical facial movements of the robots, especially in the upper regions, did not fully reproduce the curved flow lines seen in the faces of actual people. This research may lead to more lifelike and expressive artificial faces.

The field of robotics has advanced a great deal in recent decades. However, while current androids can appear very humanlike at first, their active facial expressions are still unnatural and unsettling to people. The exact reasons for this effect have been difficult to pinpoint. Now, a research team at Osaka University has used motion capture technology to monitor the facial expressions of five android faces and compared the results with actual human facial expressions. This was accomplished with six infrared cameras that monitored reflection markers at 120 frames per second and allowed the motions to be represented as three-dimensional displacement vectors.

“Advanced artificial systems can be difficult to design because the numerous components interact with each other. The surface of an android face can experience deformations that are hard to control,” study first author Hisashi Ishihara says. These deformations can be due to interactions between components such as the soft skin sheet and the skull-shaped structure, as well as the mechanical actuators.
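The article does not include the team's processing code, but the core quantity it describes, three-dimensional displacement vectors computed from tracked reflection markers, is straightforward to sketch. Everything below (the array shapes, the made-up marker data, and the convention of measuring displacement against a neutral first frame) is an assumption for illustration, not the Osaka group's pipeline:

```python
# Hypothetical sketch: computing 3D displacement vectors for facial markers,
# in the spirit of the motion-capture analysis described above.
# Shapes, data, and the neutral-frame convention are assumptions.

import numpy as np

FPS = 120  # capture rate reported in the article

# positions: (frames, markers, 3) array of marker coordinates in millimeters.
# Here: 3 frames x 2 markers of made-up data.
positions = np.array([
    [[0.0, 0.0, 0.0], [10.0, 5.0, 0.0]],   # frame 0: neutral expression
    [[0.5, 0.2, 0.1], [10.4, 5.6, 0.2]],   # frame 1
    [[1.0, 0.5, 0.2], [10.9, 6.1, 0.3]],   # frame 2
])

# Displacement of every marker in every frame relative to the neutral frame.
displacements = positions - positions[0]            # (frames, markers, 3)

# How far each marker has moved per frame (Euclidean norm of each vector).
magnitudes = np.linalg.norm(displacements, axis=2)  # (frames, markers)

print(magnitudes)
# Frame-to-frame velocity could then be estimated as
# np.diff(positions, axis=0) * FPS  (millimeters per second).
```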

1.2 billion pixel panorama of Mars by Curiosity rover at Sol 3060 (March 15, 2021)

🎬 360VR video 8K: 🔎 360VR photo 85K: http://bit.ly/sol3060

NASA’s Mars Exploration Program Source images credit: NASA / JPL-Caltech / MSSS Stitching and retouching: Andrew Bodrov / 360pano.eu.

Music in video Song: Gates Of Orion Artist: Dreamstate Logic (http://www.dreamstatelogic.com)

#Mars360 #Video360 #360VR #Mars #Sol3060 #Gigapixel


NASA’s Mars Curiosity Rover Martian Solar Day 3060: The Vastness of Time.

1.2 billion pixel panorama of Mars http://bit.ly/sol3060

NASA’s Curiosity rover captured a high-resolution panorama of the Martian surface between Sol 3057 (Mar. 12, 2021) and Sol 3062 (Mar. 17, 2021). A version without the rover contains 136 images from the 34-millimeter Mast Camera; a version with the rover contains 260 images from the 100-millimeter telephoto Mast Camera. Together, the two versions comprise 396 images that were carefully stitched.

Human minds don’t easily comprehend the vast eons of time that separate us from the places we explore in space with robots like Curiosity. Our minds are designed to think in terms of hours, days, seasons, and years, extending up to the duration of our lifetimes and perhaps those of a few generations before us. When we explore Mars, we’re roving over rocks that formed billions of years ago, many of which have been exposed on the surface for at least tens or hundreds of millions of years. It’s a gap of time that we can understand numerically, but there’s no way to have an innate feel for the incredible ancientness of the planet and Gale Crater.

Today, Curiosity is continuing our drill campaign at Nontron and preparing SAM to study the sample later this week. While that’s ongoing, Mastcam will take a sure-to-be-spectacular 360° mosaic and ChemCam will study the Mont Mercou cliff in front of us (as seen in this Navcam image), including a target called “Font de Gaume.” Font de Gaume cave in France is home to stunning paleolithic cave art of bison, reindeer, and other Ice Age wildlife painted 19,000–27,000 years ago. Even that length of time, at least 15,000 years before the pyramids were built in Egypt, is barely 0.0005% of the time back to when Gale Crater formed on Mars.

Scott Guzewich.
Atmospheric Scientist at NASA’s Goddard Space Flight Center.
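As a quick sanity check on that 0.0005% figure: assuming Gale Crater formed roughly 3.8 billion years ago (a common estimate; the post above does not give an exact age), the arithmetic comes out as stated.

```python
# Quick check of the timescale comparison above. The crater's age is an
# assumption (commonly estimated at roughly 3.5-3.8 billion years);
# the cave-art age comes from the text.

cave_art_years = 19_000       # lower bound for the Font de Gaume paintings
gale_crater_years = 3.8e9     # assumed formation age of Gale Crater

fraction = cave_art_years / gale_crater_years
print(f"{fraction:.6%}")      # ~0.000500%
```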

NASA’s Mars Exploration Program.
Source images credit: NASA / JPL-Caltech / MSSS
Stitching and retouching: Andrew Bodrov / 360pano.eu (http://bit.ly/sol3060)