
WASHINGTON, December 24, 2021 – Former Secretary of State Henry Kissinger says that further use of artificial intelligence will call into question what it means to be human, and that the technology cannot solve all those problems humans fail to address on their own.

Kissinger spoke at a Council on Foreign Relations event highlighting his new book “The Age of AI: And Our Human Future” on Monday along with co-author and former Google CEO Eric Schmidt in a conversation moderated by PBS NewsHour anchor Judy Woodruff.

Throughout the event, Schmidt remarked on questions about AI that remain unanswered despite the technology's widespread use.

Real-time heart rate detection using Eulerian magnification: YOLOR is used for head detection, which feeds into an Eulerian magnification algorithm developed by Rohin Tangirala. Thanks to Dragos Stan for assistance with this demo and code.
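
The pipeline is straightforward to sketch: a detector supplies a head bounding box, and the pulse is read out of tiny periodic color changes in that region. Below is a minimal illustrative sketch of the signal-extraction step in Python, assuming frames were already captured (e.g., with cv2.VideoCapture) and a head ROI is available from a detector such as YOLOR; the function names, frame rate, and band limits are assumptions, not the demo's actual code.

```python
# Minimal sketch of remote pulse extraction, assuming a head ROI is
# already available (the demo uses YOLOR for that detection step).
# Names, frame rate, and band limits are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0                   # assumed camera frame rate
LOW_HZ, HIGH_HZ = 0.8, 3.0   # plausible pulse band: 48-180 BPM

def bandpass(signal, fs, low, high, order=3):
    """Butterworth band-pass filter restricted to the pulse band."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

def estimate_bpm(frames, roi):
    """Average the green channel over the head ROI in each BGR frame,
    band-pass filter the trace, and read the dominant frequency as the
    heart rate. Use ~10 s of video for a stable spectrum."""
    x, y, w, h = roi
    trace = np.array([f[y:y+h, x:x+w, 1].mean() for f in frames])
    trace = bandpass(trace - trace.mean(), FPS, LOW_HZ, HIGH_HZ)
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / FPS)
    band = (freqs >= LOW_HZ) & (freqs <= HIGH_HZ)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

Full Eulerian video magnification additionally amplifies the filtered variation and adds it back into the frames to make the pulse visible; the sketch above only recovers the frequency.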

⭐️Code+Dataset — https://lnkd.in/deRj6SPf.


Part human, part robot, all business.

This new wearable robotic suit can boost human strength, and it is powered by artificial intelligence — taking human augmentation to new levels.

The robot: German Bionic just announced an exoskeleton called the Cray X with a plethora of features, including assisted walking, waterproofing, and an updated energy management system.

Israel is nearing the halfway mark of its national drone initiative, an ambitious pilot program seeking to test and prepare operational capacities for UAV use in daily life and business, and to place participating companies at the forefront of the rapidly approaching aerial-services market.

According to an official statement, the purpose of the trials is to “integrate the use of drones in routine activities such as transportation of basic products, first aid; (and) deploying a drone attached to a vehicle for real-time monitoring of traffic movement with AI-based elements that can provide forecasts, and much more.”

Lockheed Martin’s new hypersonic plane is expected to travel at Mach 6.
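
For scale, here is a quick back-of-the-envelope conversion, assuming the sea-level speed of sound of roughly 343 m/s (the true figure is lower at cruise altitude, so these numbers are indicative only):

```python
# Back-of-the-envelope conversion of Mach 6 to ground speed, using the
# approximate sea-level speed of sound (~343 m/s).
SPEED_OF_SOUND_MS = 343.0
speed_ms = 6 * SPEED_OF_SOUND_MS        # ~2,058 m/s
print(f"{speed_ms * 3.6:,.0f} km/h")    # ~7,409 km/h
print(f"{speed_ms * 2.23694:,.0f} mph") # ~4,604 mph
```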

The Lockheed Martin SR-72, rumored to be the world’s fastest plane, is expected to make a test flight in 2025, more than a decade after it was first proposed in 2013.

The SR-72 will be the successor to the SR-71 Blackbird, the fastest manned aircraft, which set speed records in 1974 and was retired by the U.S. Air Force in 1998.

The SR-72, or “Son of Blackbird,” is envisioned as an unmanned, hypersonic, reusable reconnaissance, surveillance, and strike aircraft. Its strike role comes to the fore because it will reportedly carry Lockheed Martin’s novel High-Speed Strike Weapon (HSSW). The aircraft’s combat capabilities would let it hit targets in environments deemed too risky for slower, manned aircraft.

Because the technology needed to build the aircraft was beyond reach when the project was announced in 2013, it had to wait several years.


Intech Company is the ultimate source of the latest AI news. It checks trusted websites and collects the best pieces of AI information.

4D printing works much like 3D printing; the only difference is that the printing material allows the object to change shape based on environmental factors.

In this case, the bots’ hydrogel material allows them to morph into different shapes when they encounter a change in pH levels — and cancer cells, as it happens, are usually more acidic than normal cells.

The microrobots were then placed in an iron oxide solution to make them magnetic.

This combination of shape-shifting and magnetism means the bots could become assassins for cancer, destroying tumors without the usual collateral damage to the rest of the body.
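
The behavior described above reduces to a simple control rule: steer magnetically, and let the acidity of the tumor microenvironment trigger the shape change and drug release. The sketch below is purely illustrative; the class, threshold, and method names are hypothetical and not taken from the study.

```python
# Purely illustrative sketch of the described behavior: an external
# magnetic field steers the bot, and the hydrogel's pH response
# triggers the shape change and payload release near acidic tumor
# tissue. The class and threshold are hypothetical, not from the study.
from dataclasses import dataclass

TUMOR_PH_THRESHOLD = 6.8  # tumors are more acidic than healthy tissue (~7.4)

@dataclass
class Microbot:
    morphed: bool = False
    payload_released: bool = False

    def step(self, local_ph: float) -> None:
        # Steering is external (magnets); the material itself senses pH.
        if local_ph < TUMOR_PH_THRESHOLD and not self.morphed:
            self.morphed = True           # hydrogel changes shape
            self.payload_released = True  # drug delivered on site

bot = Microbot()
bot.step(local_ph=6.5)                    # acidic reading near a tumor
print(bot.morphed, bot.payload_released)  # True True
```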


A school of fish-y microbots could one day swim through your veins and deliver medicine to precise locations in your body — and cancer patients may be the first people to benefit from this revolution in nanotechnology.

How it works: Scientists recently printed teeny tiny microbots in the shape of different animals, like fish, crabs, and even butterflies. But the coolest thing about these bots is that they don’t stay in one shape: because they are 4D-printed, they can morph into different shapes.

PARIS, Dec. 23, 2021 – LightOn announces the integration of one of its photonic co-processors into the Jean Zay supercomputer, one of the Top500 most powerful computers in the world. Under a pilot program with GENCI and IDRIS, the insertion of a cutting-edge analog photonic accelerator into a high-performance computer (HPC) represents a technological breakthrough and a world premiere. The LightOn photonic co-processor will be available to selected users of the Jean Zay research community over the next few months.

LightOn’s Optical Processing Unit (OPU) uses photonics to speed up randomized algorithms at very large scale while working in tandem with standard silicon CPUs and NVIDIA’s latest A100 GPUs. The technology aims to reduce overall computing time and power consumption in an area deemed “essential to the future of computational science and AI for Science” by a 2021 U.S. Department of Energy report on “Randomized Algorithms for Scientific Computing.”
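
LightOn’s published work describes the OPU’s core primitive as a very large fixed random projection performed optically, on the order of y = |Rx|² with R a random matrix realized physically by light scattering. A simple numpy simulation of that primitive is shown below for intuition; the class and method names are illustrative, not LightOn’s SDK.

```python
# Numpy simulation of the OPU's core primitive as described in
# LightOn's publications: a very large fixed random projection,
# roughly y = |Rx|^2. On the real device R is realized physically by
# light scattering and never stored; here it is materialized in RAM.
import numpy as np

rng = np.random.default_rng(0)

class SimulatedOPU:
    def __init__(self, n_features: int, n_components: int):
        self.R = (rng.standard_normal((n_components, n_features))
                  + 1j * rng.standard_normal((n_components, n_features)))

    def transform(self, x: np.ndarray) -> np.ndarray:
        return np.abs(self.R @ x) ** 2  # intensity seen by the camera

opu = SimulatedOPU(n_features=1_000, n_components=10_000)
x = rng.standard_normal(1_000)
y = opu.transform(x)  # random features for use in randomized algorithms
```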

INRIA (France’s Institute for Research in Computer Science and Automation) researcher Dr. Antoine Liutkus provided additional context on the integration of LightOn’s co-processor in the Jean Zay supercomputer: “Our research is focused today on the question of large-scale learning. Integrating an OPU in one of the most powerful nodes of Jean Zay will give us the keys to carry out this research, and will allow us to go beyond a simple ‘proof of concept.’”

Agility Robotics’ Cassie just became the first bipedal robot to complete an outdoor 5K run — and it did so untethered and on a single charge.

The challenge: To create robots that can seamlessly integrate into our world, it makes sense to design those robots to walk like we do. That should make it easier for them to navigate our homes and workplaces.

But bipedal robots are inherently less balanced than bots with three or more legs, so creating one that can stably walk, let alone run or climb stairs, has been a major challenge — but AI is helping researchers solve it.

KEAR (Knowledgeable External Attention for commonsense Reasoning), along with recent milestones in computer vision and neural text-to-speech, is part of a larger Azure AI mission to provide relevant, meaningful AI solutions and services that work better for people because they better capture how people learn and work, with improved vision, knowledge understanding, and speech capabilities. At the center of these efforts is XYZ-code, a joint representation of three cognitive attributes: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). For more information about these efforts, read the XYZ-code blog post.

Last month, our Azure Cognitive Services team, comprising researchers and engineers with expertise in AI, achieved a groundbreaking milestone by advancing commonsense language understanding. When given a question that requires drawing on prior knowledge and five answer choices, our latest model, KEAR (Knowledgeable External Attention for commonsense Reasoning), performs better than people answering the same question, calculated as the majority vote among five individuals. KEAR reaches an accuracy of 89.4 percent on the CommonsenseQA leaderboard, compared with 88.9 percent human accuracy. While the CommonsenseQA benchmark is in English, we follow a similar technique for multilingual commonsense reasoning and topped the X-CSR leaderboard.

Although recent large deep learning models trained with big data have made significant breakthroughs in natural language understanding, they still struggle with commonsense knowledge about the world, information that we, as people, have gathered in our day-to-day lives over time. Commonsense knowledge is often absent from task input but is crucial for language understanding. For example, take the question “What is a treat that your dog will enjoy?” To select an answer from the choices salad, petted, affection, bone, and lots of attention, we need to know that dogs generally enjoy food such as bones for a treat. Thus, the best answer would be “bone.” Without this external knowledge, even large-scale models may generate incorrect answers. For example, the DeBERTa language model selects “lots of attention,” which is not as good an answer as “bone.”
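
In spirit, external attention amounts to letting the model attend over retrieved knowledge text alongside the question and each candidate answer. The sketch below shows that input format with a generic Hugging Face classifier; the model name, untrained scoring head, and one-line retrieval stub are placeholders, not KEAR’s actual pipeline, which retrieves from sources such as ConceptNet, a dictionary, and training data and uses far larger DeBERTa models.

```python
# Stripped-down sketch of external-attention-style multiple-choice QA:
# retrieved knowledge text is concatenated with the question and each
# candidate so ordinary self-attention can attend across all three.
# Model, scoring head, and retrieval stub are placeholders, not KEAR.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=1)

def retrieve_knowledge(question: str, choice: str) -> str:
    # Placeholder: a fixed fact stands in for real knowledge retrieval.
    return "dogs enjoy food such as bones as a treat"

def answer(question: str, choices: list[str]) -> str:
    scores = []
    for choice in choices:
        text = f"{question} {choice} {retrieve_knowledge(question, choice)}"
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            scores.append(model(**inputs).logits[0, 0].item())
    return choices[scores.index(max(scores))]  # highest-scoring candidate

print(answer("What is a treat that your dog will enjoy?",
             ["salad", "petted", "affection", "bone", "lots of attention"]))
```

With an untrained scoring head the printed choice is arbitrary; the point of the sketch is only how knowledge enters the input so the model can attend to it.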

The contemporaneous development in recent years of deep neural networks, hardware accelerators with large memory capacity, and massive training datasets has advanced the state of the art in fields such as computer vision and natural language processing. Today’s deep learning (DL) systems, however, remain prone to issues such as poor robustness, an inability to adapt to novel task settings, and rigid, inflexible configuration assumptions. This has led researchers to explore incorporating ideas from the collective intelligence observed in complex systems into DL methods, to produce models that are more robust and adaptable and make less rigid environmental assumptions.

In the new paper Collective Intelligence for Deep Learning: A Survey of Recent Developments, a Google Brain research team surveys historical and recent neural network research on complex systems and the incorporation of collective intelligence principles to advance the capabilities of deep neural networks.

Collective intelligence can manifest in complex systems as self-organization, emergent behaviours, swarm optimization, and cellular systems; and such self-organizing behaviours can also naturally arise in artificial neural networks. The paper identifies and explores four DL areas that show close connections with collective intelligence: image processing, deep reinforcement learning, multi-agent learning, and meta-learning.
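
Of these motifs, cellular systems are the simplest to make concrete: global structure emerges from purely local update rules. Conway’s Game of Life, sketched below in numpy, is a classic non-neural example included only as intuition; the survey’s focus is on neural variants such as neural cellular automata.

```python
# Conway's Game of Life: every cell updates from its 8 neighbors only,
# yet gliders and oscillators emerge globally. A classic non-neural
# illustration of local rules producing collective behavior.
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # Count each cell's 8 neighbors with wraparound (toroidal) edges.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Live cells with 2-3 neighbors survive; dead cells with exactly 3 are born.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
for _ in range(100):
    grid = life_step(grid)  # structure emerges from local interactions
```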