
Drones of all sizes are being used by environmental advocates to monitor deforestation, by conservationists to track poachers, and by journalists and activists to document large protests. As a political sociologist who studies social movements and drones, I document a wide range of nonviolent and pro-social drone uses in my new book, “The Good Drone.” I show that these efforts have the potential to democratize surveillance.

But when the Department of Homeland Security redirects large, fixed-wing drones from the U.S.-Mexico border to monitor protests, and when towns experiment with using drones to test people for fevers, it’s time to think about how many eyes are in the sky and how to avoid unwanted aerial surveillance. One way that’s within reach of nearly everyone is learning how to simply disappear from view.

Bitcoin hardware wallet maker Ledger revealed today that its e-commerce database was hacked last month, leaking 1 million emails and some personal documents. No user funds were affected by the breach.

Ledger said the attack targeted only its marketing and e-commerce database, meaning the hackers were unable to access users’ recovery phrases or private keys. All financial information—such as payment information, passwords, and funds—was similarly unaffected. The breach was unrelated to Ledger’s hardware wallets or its Ledger Live security product, the company added.

“Solely contact and order details were involved. This is mostly the email address of approximately [1 million] of our customers. Further to the investigation, we have also been able to establish that a subset of them was also exposed: first and last name, postal address, phone number, and product(s) ordered,” said Ledger in its announcement.

Thorcon will provide technical support to the ministry’s research and development (R&D) body to develop “a small-scale TMSR reactor under 50 megawatts (MW),” the company wrote in a statement on Friday, Jul. 24.

“[This will] strengthen national security in the outermost, frontier and least developed regions,” reads the company’s statement.

In a separate statement on Jul. 22, the Defense Ministry said the deal would help it accomplish its 2020–2024 strategic plan but did not mention a planned capacity.

The U.S. intelligence community (IC) on Thursday rolled out an “ethics guide” and framework for how intelligence agencies can responsibly develop and use artificial intelligence (AI) technologies.

Among the key ethical requirements were shoring up security, respecting human dignity by complying with existing civil rights and privacy laws, rooting out bias to ensure AI use is “objective and equitable,” and ensuring human judgment is incorporated into AI development and use.

The IC wrote in the framework, which digs into the details of the ethics guide, that it was intended to ensure that use of AI technologies matches “the Intelligence Community’s unique mission purposes, authorities, and responsibilities for collecting and using data and AI outputs.”



In October 2019, the Johns Hopkins Center for Health Security hosted a pandemic tabletop exercise called Event 201 in partnership with the World Economic Forum and the Bill & Melinda Gates Foundation. Recently, the Center for Health Security has received questions about whether that pandemic exercise predicted the current novel coronavirus outbreak in China. To be clear, the Center for Health Security and partners did not make a prediction during our tabletop exercise. For the scenario, we modeled a fictional coronavirus pandemic, but we explicitly stated that it was not a prediction. Instead, the exercise served to highlight preparedness and response challenges that would likely arise in a very severe pandemic. We are not now predicting that the nCoV-2019 outbreak will kill 65 million people. Although our tabletop exercise included a mock novel coronavirus, the inputs we used for modeling the potential impact of that fictional virus are not similar to nCoV-2019.

Amid ever-increasing demands for privacy and security for highly sensitive data stored in the cloud, Google Cloud announced this week the creation of Confidential Computing.

Google said the technology, which will come to a number of products in the coming months, allows users to keep data encrypted not only as it is stored or sent to the cloud, but while it is being worked on as well.

Confidential Computing keeps data encrypted as it’s being “used, indexed, queried, or trained on” in memory and “elsewhere outside the central processing unit,” Google said in a statement about the new technology.

Do you agree with these predictions?


The first few months of 2020 have radically reshaped the way we work and how the world gets things done. While robotaxis and self-driving freight trucks are not yet in wide use, the Covid-19 pandemic has hastened the introduction of artificial intelligence across all industries. Whether through outbreak tracing or contactless customer payment interactions, the impact has been immediate, and it also provides a window into what’s to come. The second annual Forbes AI 50, which highlights the most promising U.S.-based artificial intelligence companies, features a group of founders who are already pondering what their space will look like in the future, though all agree that Covid-19 has permanently accelerated or altered the spread of AI.

“We have seen two years of digital transformation in the course of the last two months,” Abnormal Security CEO Evan Reiser told Forbes in May. As more parts of a company are forced to move online, Reiser expects to see AI being put to use to help businesses analyze the newly available data or to increase efficiency.

With artificial intelligence becoming ubiquitous in our daily lives, DeepMap CEO James Wu believes people will abandon the common misconception that AI is a threat to humanity. “We will see a shift in public sentiment from ‘AI is dangerous’ to ‘AI makes the world safer,’” he says. “AI will become associated with safety while human contact will become associated with danger.”

Dr. Ben Goertzel, CEO and founder of the SingularityNET Foundation, is particularly visible and vocal in his thoughts on artificial intelligence, AGI, and where research and industry stand with regard to AGI. One of the world’s foremost experts in artificial general intelligence, Dr. Goertzel spoke at the (virtual) OpenCogCon event this week. He has decades of experience applying AI to practical problems in areas ranging from natural language processing and data mining to robotics, video gaming, national security, and bioinformatics.

Are we at a turning point in AGI?

Dr. Goertzel believes that we are now at a turning point in the history of AI. Over the next few years, he expects the balance of activity in AI research to shift from highly specialized narrow AIs toward AGIs. Deep neural nets have achieved amazing things, but he argues that the paradigm will run out of steam fairly soon; rather than this causing another “AI winter” or a shift in focus to some other kind of narrow AI, he thinks it will trigger the AGI revolution.


Scott Salandy-Defour used to make frequent stops at a battery manufacturer in southern China for his Hong Kong-based energy startup. The appeal of Hong Kong, he said, is its proximity to the plentiful electronics suppliers in the Pearl River Delta, as well as the city’s amenities for foreign entrepreneurs, be it a well-established financial and legal system or a culture blending East and West.

“It’s got the best of both worlds,” Salandy-Defour told TechCrunch. “But it’s not going to be the same.”