
The challenges of making AI work at the edge—that is, making it reliable enough to do its job and then justifying the additional complexity and expense of putting it in our devices—are monumental. Existing AI can be inflexible, easily fooled, unreliable and biased. In the cloud, it can be trained on the fly to get better—think about how Alexa improves over time. When it’s in a device, it must come pre-trained, and be updated periodically. Yet the improvements in chip technology in recent years have made it possible for real breakthroughs in how we experience AI, and the commercial demand for this sort of functionality is high.


AI is moving from data centers to devices, making everything from phones to tractors faster and more private. These newfound smarts also come with pitfalls.

Satellite imagery is becoming ubiquitous. Research has demonstrated that artificial intelligence applied to satellite imagery holds promise for automated detection of war-related building destruction. While these results are promising, real-world monitoring applications require high precision, especially when destruction is sparse and detecting destroyed buildings amounts to looking for a needle in a haystack. We demonstrate that exploiting the persistent nature of building destruction can substantially improve the training of automated destruction monitoring. We also propose an additional machine-learning stage that leverages images of surrounding areas and multiple successive images of the same area, which further improves detection significantly. This makes real-world applications feasible, which we illustrate in the context of the Syrian civil war.

Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes such data generally scarce, incomplete, and potentially biased. This lack of reliable data imposes severe limitations on media reporting, humanitarian relief efforts, human-rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method for measuring destruction in high-resolution satellite images using deep-learning techniques combined with label augmentation and spatial and temporal smoothing, which exploit the underlying spatial and temporal structure of destruction. As a proof of concept, we apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. Our approach makes it possible to generate destruction data with unprecedented scope, resolution, and frequency, and it takes advantage of the ever-higher frequency at which satellite imagery becomes available.
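To make the smoothing idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of how persistence can be exploited: a classifier scores each building patch on each image date, scores are averaged over nearby buildings because destruction clusters in space, and each building's score is then prevented from ever decreasing over time because a destroyed building stays destroyed. All shapes, distances, and thresholds below are illustrative assumptions.

import numpy as np

def spatial_smooth(scores, coords, radius=100.0):
    # Average each building's score with its neighbors within `radius` meters.
    smoothed = np.empty_like(scores)
    for i, (x, y) in enumerate(coords):
        dist = np.hypot(coords[:, 0] - x, coords[:, 1] - y)
        smoothed[i] = scores[dist <= radius].mean()
    return smoothed

def temporal_smooth(score_series):
    # Enforce persistence: a building's destruction score never decreases
    # across successive image dates (running maximum over time).
    return np.maximum.accumulate(score_series, axis=1)

# scores[i, t]: raw classifier score for building i on image date t (made-up numbers)
scores = np.array([[0.10, 0.80, 0.30],   # noisy dip after likely destruction
                   [0.05, 0.10, 0.15]])
coords = np.array([[0.0, 0.0], [200.0, 0.0]])  # building locations in meters

per_date = np.stack([spatial_smooth(scores[:, t], coords)
                     for t in range(scores.shape[1])], axis=1)
destroyed = temporal_smooth(per_date) > 0.5   # illustrative decision threshold
print(destroyed)

Run on these toy numbers, the first building's noisy dip on the third date is overridden by the running maximum, so it stays flagged as destroyed from the second date onward, which is exactly the effect the temporal smoothing is meant to achieve.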

The “technology intelligence engine” uses A.I. to sift through hundreds of millions of documents online, then uses all that information to spot trends.


Build back better

Tarraf was fed up with incorrect predictions. He wanted a more data-driven approach to forecasting that could help investors, governments, pundits, and anyone else get a more accurate picture of the shape of tech-yet-to-come. Not only could this help his firm make money, but it could also, he suggested, illuminate some of the blind spots that lead people to biased forecasts.

Tarraf’s technology intelligence engine uses natural language processing (NLP) to sift through hundreds of millions of documents (ranging from academic papers and research grants to startup funding details, social media posts, and news stories) in dozens of different languages. The futurist and science fiction writer William Gibson famously observed that the future is already here; it's just not evenly distributed. In other words, tomorrow's technology has already been invented, but right now it is hidden away in research labs, patent applications, and myriad other silos around the world. The technology intelligence engine seeks to unearth and aggregate those scattered developments.
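As a purely illustrative sketch (not the engine described here), one simple way such a system can surface emerging technologies is to count mentions of candidate terms in timestamped documents and flag terms whose mention rate is accelerating. The term list, toy corpus, and scoring rule below are all assumptions made for illustration.

from collections import Counter, defaultdict

def trend_scores(documents, terms):
    # documents: list of (year, text) pairs; returns term -> late/early mention ratio.
    counts = defaultdict(Counter)            # year -> Counter of term mentions
    for year, text in documents:
        lowered = text.lower()
        for term in terms:
            counts[year][term] += lowered.count(term)
    years = sorted(counts)
    mid = len(years) // 2
    early, late = years[:mid], years[mid:]
    scores = {}
    for term in terms:
        before = sum(counts[y][term] for y in early) + 1   # +1 smoothing
        after = sum(counts[y][term] for y in late) + 1
        scores[term] = after / before
    return scores

docs = [
    (2018, "A paper on perovskite solar cells."),
    (2019, "Grant awarded for perovskite tandem cells."),
    (2021, "Startup funding round for perovskite modules; early quantum sensing demo."),
    (2022, "News: perovskite pilot production line; quantum sensing field trial."),
]
print(trend_scores(docs, ["perovskite", "quantum sensing"]))

A production system would use multilingual NLP, embeddings, and far richer signals than raw term counts, but the core pattern is the same: measure how attention to a topic changes over time across many sources.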

“It would be difficult to introduce a single thing and it causes crime to go down,” one expert said.


“Are we seeing dramatic changes since we deployed the robot in January?” said Lerner, the Westland spokesperson. “No. But I do believe it is a great tool to keep a community as large as this, to keep it safer, to keep it controlled.”

For its part, Knightscope maintains on its website that the robots “predict and prevent crime,” without much evidence that they do so. Experts say this is a bold claim.

“It would be difficult to introduce a single thing and it causes crime to go down,” said Ryan Calo, a law professor at the University of Washington, comparing the Knightscope robots to a “roving scarecrow.”

NASA’s Perseverance rover captured a historic group selfie with the Ingenuity Mars Helicopter on April 6, 2021. But how was the selfie taken? Vandi Verma, Perseverance’s chief engineer for robotic operations at NASA’s Jet Propulsion Laboratory in Southern California, breaks down the process in this video.

Video taken by Perseverance’s navigation cameras shows the rover’s robotic arm twisting and maneuvering to take the 62 images that compose the selfie. The rover’s entry, descent, and landing microphone captured the sound of the arm’s motors whirring during the process.
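For a rough sense of the compositing step, here is an illustrative sketch (not JPL's actual processing pipeline): the selfie is a mosaic assembled from many overlapping frames taken by the arm-mounted camera, and a generic panorama stitcher shows the basic idea. The directory and file names are placeholders.

import glob
import cv2

# Load the individual overlapping frames (placeholder paths).
frames = [cv2.imread(path) for path in sorted(glob.glob("selfie_frames/*.png"))]

# Scan mode suits mosaics shot by sweeping a camera across a scene.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("selfie_mosaic.png", mosaic)
else:
    print("Stitching failed with status code", status)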

Selfies allow engineers to check wear and tear on the rover over time.

For more information on Perseverance, visit https://mars.nasa.gov/perseverance.

Credit: NASA/JPL-Caltech/MSSS

Uh oh.


The middle and working classes have seen a steady decline in their fortunes. The offshoring of jobs, the hollowing out of the manufacturing sector, the pivot toward a service economy, and the weakening of unions have all been blamed for the challenges faced by a majority of Americans.

There is a compelling alternative explanation. According to a new academic research study, automation technology has been the primary driver of U.S. income inequality over the past 40 years. The report, published by the National Bureau of Economic Research, claims that 50% to 70% of the changes in U.S. wages since 1980 can be attributed to wage declines among blue-collar workers who were replaced or degraded by automation.

Artificial intelligence, robotics and other sophisticated new technologies have opened a wide chasm in wealth and income inequality, and the divide looks likely to widen. For now, college-educated, white-collar professionals have largely been spared the fate of workers without degrees. People with a postgraduate degree saw their salaries rise, while “low-education workers declined significantly.” According to the study, “The real earnings of men without a high-school degree are now 15% lower than they were in 1980.”

Great new episode with former Fermilab physicist Gerald Jackson, who chats about antimatter propulsion and the politics of advanced propulsion research. This one is out a bit later in the week than normal, but please listen. Good stuff.


Guest Gerald Jackson, former Fermilab physicist and advanced propulsion entrepreneur, chats about his plans for an antimatter-propulsion interstellar robotic probe. First stop would be Proxima Centauri. In a wide-ranging interview, Jackson talks about the politics and pitfalls of advanced propulsion research. Too many people seem to think antimatter is still science fiction. It’s not. It’s as real as the chair you’re sitting on.

The artificial intelligence revolution is just getting started, but it is already transforming conflict. Militaries from the superpowers down to tiny states are seizing on autonomous weapons as essential to surviving the wars of the future. This mounting arms-race dynamic could lead the world to dangerous places, with algorithms interacting so fast that they slip beyond human control. The result could be uncontrolled escalation, even wars that erupt without any human input at all.

DW maps out the future of autonomous warfare, based on conflicts we have already seen – and predictions from experts of what will come next.

For more on the role of technology in future wars, check out the extended version of this video – which includes a blow-by-blow scenario of a cyber attack against nuclear weapons command and control systems: https://youtu.be/TmlBkW6ANsQ


Still the comic relief until about December 31, 2024. By 2035 they could be curing everything; the early stages of that are already underway.


Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study, said the team is actively working on robots that can help provide health care services while maximizing the safety of both patients and the health care workforce.

After the Covid-19 pandemic began last year, Traverso and his colleagues worked to reduce interactions between patients and health care workers. As part of that effort, they collaborated with Boston Dynamics to create mobile robots that could interact with patients waiting in the emergency department.

But how would patients respond to the robots? To answer that question, the researchers at MIT and Brigham and Women’s Hospital, working with the market research company YouGov, conducted a large-scale nationwide online survey of about 1,000 people. The questions asked how acceptable it would be for robots in health care to perform tasks such as taking nasal swabs, inserting a catheter, and turning a patient over in bed.