
On June 25, 2021, NASA published a detailed description of upcoming missions for the Ingenuity Mars Helicopter, including a second software update to address a high-definition imaging issue. Ingenuity's team determined that capturing color images may have been inducing the imaging-pipeline glitch that caused the instability during the Flight 6 anomaly. The second software update is intended to restore Ingenuity's ability to take 13-megapixel photos on Mars without flight anomalies, ahead of the upcoming ninth flight. (Ingenuity's first bug, a watchdog-timer issue, was also solved with a software update.) Last week the helicopter completed its eighth flight, traveling about 160 meters south, while Perseverance moved toward its new location, Séítah. The black-and-white images come directly from Ingenuity's onboard camera. The helicopter flew for 77.4 seconds, reached a maximum horizontal speed of 4 meters per second, and held an altitude of 10 meters. Ingenuity continues to do amazing work operating autonomously on Mars.

Credit: nasa.gov, NASA/JPL-Caltech, NASA/JPL-Caltech/ASU

Link to Ingenuity’s 9th flight preparation with 2nd software update: https://mars.nasa.gov/technology/helicopter/status/308/flight-8-success-software-updates-and-next-steps/

#mars #ingenuity #helicopter

Researchers and entrepreneurs are starting to ponder how AI could create versions of people after their deaths—not only as static replicas but as evolving digital entities that may steer companies or influence world events.


Experts are exploring ways artificial intelligence might confer a kind of digital immortality, preserving the personalities of the departed in virtual form and then allowing them to evolve.

This video was made possible by NordPass. Sign up with this link and get 70% off your premium subscription + 1 month for free! https://nordpass.com/futurology.

Visit Our Parent Company EarthOne For Sustainable Living Made Simple ➤
https://earthone.io/

The story of humanity is one of progress: from humanity's origins with slow, disjointed progress, to the agricultural revolution with linear progress, and on to the industrial revolution with exponential, almost unfathomable progress.

This accelerating rate of progress is due to the compounding effect of technology, in which each advance enables countless more, from 3D printing, autonomous vehicles, blockchain, batteries, remote surgeries, virtual and augmented reality, and robotics – the list goes on and on. These technologies in turn will lead to sweeping changes in society, from energy generation and monetary systems to space colonization, automation and much more!

This trajectory of progress is now leading us into a time period that is, “characterized by a fusion of technologies that is blurring the lines between the physical, digital and biological spheres”, called by many the technological revolution or the 4th industrial revolution — in which everything will change, from the underlying structure and fundamental institutions of society to how we live our day-to-day lives.

00:00 Intro
01:30 Pillar 1 – Computing
05:02 Pillar 2 – Global Connectivity
08:08 Pillar 3 – Big Data
09:48 Pillar 4 – AI
11:55 The Technological Revolution

Thank you to the members who supported this video ➤

This is only the Beginning.


Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.

“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn and his colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.

“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, Anton Zeilinger of the University of Vienna and their colleagues have refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.
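As an aside for readers who want a feel for what such a discovery loop involves, below is a minimal, hypothetical Python sketch of a search over a toolbox of optical building blocks. The element names and the scoring function are placeholders of my own, not MELVIN's actual representation, which simulates real photonic setups and targets specific entangled states.

```python
# Hypothetical sketch of a MELVIN-style search: randomly compose optical
# "building blocks" into candidate experiments and keep the ones whose
# (simulated) outcome scores highly. The toolbox and the scoring function
# below are illustrative placeholders, not the real implementation.
import random

TOOLBOX = ["beam_splitter", "mirror", "dove_prism", "hologram", "detector"]

def score(setup):
    # Placeholder objective: a real system would simulate the optical setup
    # and measure how close its output is to a target entangled state.
    return random.random()

def search(n_candidates=10_000, max_len=6, threshold=0.999):
    interesting = []
    for _ in range(n_candidates):
        setup = [random.choice(TOOLBOX) for _ in range(random.randint(2, max_len))]
        if score(setup) > threshold:
            interesting.append(setup)  # flag for a human to interpret
    return interesting

if __name__ == "__main__":
    for setup in search():
        print(" -> ".join(setup))
```

As Krenn's story shows, the hard part is not the search itself but interpreting why a flagged setup works.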

Below is my Answer.

“There is a big confluence between AI and social media. It is a two-way relationship: AI not only affects social media; social media also plays a great role in the development of AI.

The way AI is developed is through data, large data (big data), and one of the easiest ways to generate and source data at this scale is from the content and interactions on social media.

Most social media platforms operate at scale, so for issues such as monitoring or censorship of what is being posted, the administrators of these platforms have to use automation and AI for management and policing.

AI algorithms such as sentiment analysis and recommendation engines (used by Facebook and YouTube to recommend posts based on the AI's understanding of what you will like) are very much an integral part of any social platform's architecture.

AI is integral to how and when adverts are delivered to you on social media. AI shapes the engagement levels on your posts and ensures that people who are most likely interested in the topics or communities you belong to get recommended to you as connections; this is because engagement is a key goal for every social media platform.

So as you can see, AI plays a very critical role in social media. But beyond this, it is also important to mention that not all of the effects of AI on social media are positive. For example, recommendation engines ensure a never-ending supply of recommended content that can keep you engrossed in social media, using your time in an unproductive way.

AI being used as an automated censorship tool on social media platforms can also mean that humans may no longer have complete control over what free speech should mean or how dissenting views can be morally managed.

Most AI algorithms used for social media are optimized for profit: they benefit the advertisers, get users dopamine-addicted, and are biased toward amplifying fringe or contrarian views on their platforms.

Overall, the role of AI in social media cannot be diminished. It will only grow deeper and more integral as we continue to generate more data on these platforms. AI models such as GPT-3 are beneficiaries of the rich training data that social media provides.”

Footnote: We need to learn more about how to make AI an agent for good (AI4Good), so that it helps us make social media, and our lives in general, more meaningful.
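To make the sentiment-analysis and recommendation-engine points in the answer above concrete, here is a deliberately tiny, illustrative Python sketch; the keyword lists and bag-of-words similarity are toy stand-ins, not how Facebook or YouTube actually build these systems.

```python
# Toy illustrations of two AI building blocks mentioned above:
# a keyword-based sentiment score and a similarity-based recommender.
from collections import Counter
from math import sqrt

POSITIVE = {"great", "love", "amazing"}
NEGATIVE = {"bad", "hate", "awful"}

def sentiment(post: str) -> int:
    # Count positive vs. negative keywords; production systems use trained models.
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(history, candidates, top_k=3):
    # Rank candidate posts by similarity to what the user engaged with before.
    profile = Counter(" ".join(history).lower().split())
    scored = [(cosine(profile, Counter(c.lower().split())), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]

print(sentiment("I love this amazing post"))                        # 2
print(recommend(["mars rover news"],
                ["mars helicopter flight", "cat video"], top_k=1))  # ['mars helicopter flight']
```

Real platforms replace the keyword lists with trained language models and the bag-of-words profile with learned embeddings, but the basic pattern of scoring content against a user profile is the same.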

#Iconickelx.

#artificialintelligence #SocialMedia

After the program was first revealed in 2019, then-Assistant Secretary of the Air Force for Acquisition, Technology and Logistics Will Roper stated that he wanted to see operational demonstrations within two years. The latest test flight of the Skyborg-equipped Avenger shows the service has clearly hit that benchmark.

The General Atomics Avenger was used in experiments with another autonomy system in 2020, developed as part of the Defense Advanced Research Projects Agency’s (DARPA) Collaborative Operations in Denied Environment (CODE) program that sought to develop drones that could demonstrate “collaborative autonomy,” or the ability to work cooperatively.

Over the past few decades, roboticists have created increasingly advanced and sophisticated robotic systems. While some of these systems are highly efficient and have achieved remarkable results, they still perform far more poorly than humans on several tasks, including those that involve grasping and manipulating objects.

Researchers from Guangdong University of Technology, Politecnico di Milano, the University of Sussex and the Bristol Robotics Laboratory (BRL) at the University of the West of England have recently developed a model that could help to improve robotic manipulation. This model, presented in a paper published in IEEE Transactions on Industrial Informatics, draws inspiration from how humans adapt their manipulation strategies based on the task they are trying to complete.

“Humans have the remarkable ability to deal with and complete dynamic tasks, such as curving, cutting and assembly, optimally and compliantly,” Professor Chenguang Yang, the paper's corresponding author, who is based at BRL, told TechXplore. “Although these tasks are easy for humans, they are quite challenging for robots to perform, even advanced ones.”