In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group.
Dr. Oliver Harrison MD, MPH, CEO, Telefonica Innovation Alpha — IdeaXme — Ira Pastor
Environmentalism and climate change are increasingly being pushed on us everywhere, and I wanted to write the transhumanist and life-extension counterargument: why I prefer new technology over nature and sustainability. Here’s my new article:
On a warming planet bearing the scars of significant environmental destruction, you’d think one of the 21st century’s most notable emerging social groups—transhumanists—would be concerned. Many are not. Transhumanists first and foremost want to live indefinitely, and they are outraged that their bodies age and are destined to die. They blame their biological nature and dream of a day when DNA is replaced with silicon and data.
Their enmity toward biology goes further than their own bodies. They see Mother Earth as a hostile space where every living creature—be it a tree, insect, mammal, or virus—is out for itself. Everything is part of the food chain, subject to natural law: in most cases, consumption by violent death. Life is vicious. It makes me think of pet dogs and cats, and how it’s reported they sometimes start eating their owner after the owner has died.
Many transhumanists want to change all this. They want to rid their worlds of biology. They favor concrete, steel, and code. Where once biological evolution was necessary to create primates and then modern humans, conscious and directed evolution has replaced it. Planet Earth doesn’t need iniquitous natural selection. It needs premeditated moral algorithms, conceived by logic, that do the most good for the largest number of people. This is something an AI will probably be better at than humans in less than two decades’ time.
Ironically, the fight against the makings of this utopia is a coup a half-century in the making. Starting with the well-intentioned people at Greenpeace in the 1970s, but recently overtaken by enviro-socialists who often seem to want to control every aspect of our lives, environmentalism has come to dominate political and philosophical discourse at the most powerful levels of society. Green believers want you to think humans are destroying our only home, Planet Earth—and that this terrible action of ours is the most important issue of our time. They have sounded a call to “save the Earth” by trying to stamp out capitalism and dramatically downsize our carbon footprint.
Tohoku University researchers have developed an algorithm that enhances the ability of a Canadian-designed quantum computer to more efficiently find the best solution for complicated problems, according to a study published in the journal Scientific Reports.
Quantum computing takes advantage of the ability of subatomic particles to exist in more than one state at the same time. It is expected to take modern-day computing to the next level by enabling the processing of more information in less time.
The D-Wave quantum annealer, developed by a Canadian company that claims it sells the world’s first commercially available quantum computers, employs the concepts of quantum physics to solve ‘combinatorial optimization problems.’ A typical example of this sort of problem asks the question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the original city?” Businesses and industries face a large range of similarly complex problems in which they want to find the optimal solution among many possible ones using the least amount of resources.
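For a handful of cities, the route-finding problem in the quoted example can still be solved by brute force, which makes the combinatorial blow-up easy to see: with n cities there are (n − 1)! round trips to check. A minimal sketch in Python — the city names and distances here are invented for illustration:

```python
# Brute-force travelling-salesman search: try every ordering of the
# cities and keep the cheapest round trip. Feasible only for tiny n;
# quantum annealers target exactly this kind of combinatorial blow-up.
from itertools import permutations

# Hypothetical symmetric distance table between four cities.
distances = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def dist(a, b):
    return distances.get((a, b)) or distances[(b, a)]

def shortest_tour(cities):
    start, rest = cities[0], tuple(cities[1:])
    best_tour, best_len = None, float("inf")
    for perm in permutations(rest):          # (n-1)! orderings
        tour = (start,) + perm + (start,)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

tour, length = shortest_tour(["A", "B", "C", "D"])
print(tour, length)   # -> ('A', 'B', 'D', 'C', 'A') 80
```

Already at 20 cities the 19! ≈ 1.2 × 10¹⁷ orderings put this approach out of reach, which is why businesses look for better-than-exhaustive solvers.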
In the 2018 movie Avengers: Infinity War, a scene featured Dr. Strange looking into 14 million possible futures to search for a single timeline in which the heroes would be victorious. Perhaps he would have had an easier time with help from a quantum computer. A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) and Griffith University in Australia have constructed a prototype quantum device that can generate all possible futures in a simultaneous quantum superposition.
“When we think about the future, we are confronted by a vast array of possibilities,” explains Assistant Professor Mile Gu of NTU Singapore, who led development of the quantum algorithm that underpins the prototype. “These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe.” What he and his research group realised, however, was that a quantum computer can examine all possible futures by placing them in a quantum superposition – similar to Schrödinger’s famous cat, which is simultaneously alive and dead.
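Gu’s arithmetic is easy to check: with two choices per minute, there are 2^n possible futures after n minutes. A quick sanity check in Python, using the article’s own thresholds of 14 million futures and roughly 10^80 atoms in the observable universe:

```python
# Exponential growth of possible futures: 2 choices per minute -> 2**n
# futures after n minutes.
def possible_futures(minutes: int) -> int:
    return 2 ** minutes

# 24 minutes already exceeds 14 million ("less than half an hour").
print(possible_futures(24))                     # 16777216

# One day (1440 minutes) dwarfs the ~10**80 atoms in the universe.
print(possible_futures(24 * 60) > 10 ** 80)     # True
```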
To realise this scheme, they joined forces with the experimental group led by Professor Geoff Pryde at Griffith University. Together, the team implemented a specially devised photonic quantum information processor in which the potential future outcomes of a decision process are represented by the locations of photons – quantum particles of light. They then demonstrated that the state of the quantum device was a superposition of multiple potential futures, weighted by their probability of occurrence.
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren’t meant to be — or interfere with — policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and figure out how to best implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake news-generating algorithms, it’s likely more governments will take a stand on the ethical concerns AI brings to the table.
The EU wants AI that’s fair and accountable, respects human autonomy and prevents harm.
Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision that the software makes.

Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.

Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.

Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.

Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”

Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
AI technologies should be accountable, explainable, and unbiased, says EU.
With new advances in technology, it all comes down to simple factoring. Classical factoring methods are outdated: some problems that would take on the order of 80 billion years to solve classically could, with new technologies such as the D-Wave Two, be done in about two seconds. Shor’s algorithm also shows that we could break essentially any such encryption — we would simply need hardware powerful enough to run it and code strong enough to match.
RSA is the standard cryptographic algorithm on the Internet. The method is publicly known but extremely hard to crack. It uses two keys. The public key is openly distributed, and the client uses it to encrypt a random session key. Anyone who intercepts the encrypted key must use the second key, the private key, to decrypt it; otherwise it is just garbage. Once the session key is decrypted, the server uses it to encrypt and decrypt further messages with a faster symmetric algorithm. So, as long as we keep the private key safe, the communication remains secure.
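The session-key exchange described above can be sketched end to end with a deliberately tiny RSA keypair. The primes, session key, and message here are all invented, and the “symmetric cipher” is a repeating-XOR stand-in; real systems use 2048-bit keys and a vetted library, never hand-rolled code like this:

```python
# Toy RSA session-key handshake (illustration only -- insecure key sizes).

p, q = 61, 53                        # server's secret primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

# Client: pick a session key and wrap it with the server's public key.
session_key = 42
wrapped = pow(session_key, e, n)

# Server: unwrap the session key with the private key.
recovered = pow(wrapped, d, n)
assert recovered == session_key

# Both sides now share the key and switch to a faster symmetric scheme
# (here a trivial XOR stand-in, NOT a real cipher).
def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"hello", session_key)
print(xor_cipher(ciphertext, session_key))   # b'hello'
```

An eavesdropper who sees only `n`, `e`, and `wrapped` must factor `n` to recover `d` — trivial for 3233, infeasible for a 2048-bit modulus.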
RSA encryption is based on a simple idea: prime factorization. Multiplying two prime numbers is pretty simple, but it is hard to factorize its result. For example, what are the factors for 507,906,452,803? Answer: 566,557 × 896,479.
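The asymmetry is easy to demonstrate: the multiplication in the worked example is a single cheap operation, while recovering the factors requires a search whose cost grows with the size of the number. A minimal check by trial division:

```python
# Multiplying the two primes is instant; finding them again is a search.
import math

n = 507_906_452_803
assert 566_557 * 896_479 == n        # forward direction: one operation

def smallest_factor(n: int) -> int:
    """Trial division up to sqrt(n): cost grows exponentially
    in the bit length of n, which is what RSA relies on."""
    for f in range(2, math.isqrt(n) + 1):
        if n % f == 0:
            return f
    return n                          # n is prime

f = smallest_factor(n)
print(f, n // f)                      # the two factors from the text
```

For this 39-bit number the loop finishes in well under a second; for a 2048-bit modulus, trial division (and every known classical algorithm) is hopeless.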
Based on this asymmetry in complexity, we can distribute a public key based on the product of two prime numbers to encrypt a message. Without knowing the prime factors, the message cannot be decrypted back to its original content. In 2014, WraithX used a budget of $7,600 on Amazon EC2, plus his or her own resources, to factorize a 696-bit number. A 1024-bit key could be broken with a sizeable budget within months or a year. This is devastating, because SSL certificates holding the public key last for 28 months. Fortunately, the complexity of the prime factorization problem grows exponentially with the key length, so we are reasonably safe now that we have already switched to 2048-bit keys.
Next month, however, a team of MIT researchers will be presenting a so-called “Proxyless neural architecture search” algorithm that can speed up the AI-optimized AI design process by 240 times or more. That would put faster and more accurate AI within practical reach for a broad class of image recognition algorithms and other related applications.
“There are all kinds of tradeoffs between model size, inference latency, accuracy, and model capacity,” says Song Han, assistant professor of electrical engineering and computer science at MIT. Han adds that:
“[These] all add up to a giant design space. Previously people had designed neural networks based on heuristics. Neural architecture search tried to free this labor intensive, human heuristic-based exploration [by turning it] into a learning-based, AI-based design space exploration. Just like AI can [learn to] play a Go game, AI can [learn how to] design a neural network.”
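As a loose illustration of the design-space exploration Han describes, one can enumerate a grid of candidate architectures and score each under a trade-off between capacity and latency. Everything below is invented for illustration — the search dimensions, the scoring function, and its constants are assumptions, and real NAS systems (such as ProxylessNAS) learn which candidates to try rather than scoring them exhaustively:

```python
# Naive design-space exploration: score every (depth, width) candidate
# under an invented capacity-vs-latency trade-off and keep the best.
import itertools

depths = [4, 8, 16]        # hypothetical layer counts
widths = [64, 128, 256]    # hypothetical channels per layer

def score(depth: int, width: int) -> float:
    """Invented proxy objective: reward model capacity,
    penalise an assumed linear latency cost."""
    capacity = depth * width
    latency = 0.01 * capacity        # pretend milliseconds
    return capacity ** 0.5 - latency

best = max(itertools.product(depths, widths), key=lambda c: score(*c))
print(best)   # (8, 256)
```

With 3 × 3 candidates, exhaustive scoring is trivial; real search spaces have billions of configurations, which is why learning-based exploration — and speedups like the 240× claimed above — matter.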