
Stem cell biologist Hiromitsu Nakauchi has been waiting for this moment for more than a decade.

After years of planning, the persistent researcher has at last received approval from a government willing to back one of the most controversial lines of research there is: human-animal embryo experiments.

While many countries around the world have restricted, defunded or outright banned these ethically fraught practices, Japan has now officially lifted the lid on this proverbial Pandora’s box. Earlier this year, the country made it legal not only to transplant hybrid embryos into surrogate animals but also to bring them to term.

In the 2015 film “Chappie”, set in the near future, automated robots make up a mechanised police force. An encounter between two rival criminal gangs severely damages one of the law-enforcement robots (Agent 22). Its creator, Deon, recommends dismantling and recycling the damaged police droid, but criminals kidnap Deon and force him to upload a human-like consciousness into the damaged robot so they can train it to rob banks. Chappie thus becomes the first robot with a human mind, able to think and feel like a person. Later in the film, when Deon is dying, it is Chappie’s turn to upload Deon’s consciousness into a spare robot through a neural helmet. Similarly, in “Avatar”, a 2009 Hollywood science-fiction film, a character named Grace connects with Eywa, the collective consciousness of the planet, in an attempt to transfer her mind to her avatar body, while another character, Jake, transfers his mind to his avatar body, leaving his human body lifeless.

Mind uploading is a process by which we relocate the mind, an assemblage of the memories, personality, and attributes of a specific individual, from its original biological brain to an artificial computational substrate. Mind uploading is a central conceptual feature of many science fiction novels and films, and it has been explored outside fiction as well: Robin Hanson’s 2016 nonfiction book “The Age of Em: Work, Love and Life when Robots Rule the Earth” explores the implications of a future world in which researchers have learned to copy humans onto computers, creating “ems,” or emulated people, who quickly come to outnumber the real ones.

After breaking all the records related to training computer vision models, NVIDIA now claims that its AI platform can train a natural language neural network model on one of the largest datasets in record time. It also claims that the inference time is just 2 milliseconds, which translates to an extremely fast response from a model participating in a conversation with a user.
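As a rough illustration of the kind of measurement behind such latency claims, the sketch below (unrelated to NVIDIA's actual benchmark code) times single-query inference for a small pretrained language model using the Hugging Face transformers library; the model name, the 100-run loop and the CPU-only setup are assumptions made purely for illustration.

import time
import torch
from transformers import AutoModel, AutoTokenizer

# Load a small general-purpose language model; any checkpoint would do here.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.eval()

inputs = tokenizer("How long does a single inference pass take?", return_tensors="pt")

with torch.no_grad():
    model(**inputs)  # warm-up run so one-time setup cost is not counted
    start = time.perf_counter()
    for _ in range(100):
        model(**inputs)
    elapsed_ms = (time.perf_counter() - start) / 100 * 1000

print(f"Mean CPU inference latency: {elapsed_ms:.1f} ms per query")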

After computer vision, natural language processing is one of the top applications of AI. From Siri to Alexa to Cortana to Google Assistant, these conversational user experiences are all powered by AI.

Advances in AI research are putting the power of language understanding and conversational interfaces into the hands of developers. Data scientists and developers can now build custom AI models that work much like Alexa and Siri but for specialized, highly customized industry use cases in the healthcare or legal verticals. This enables doctors and lawyers to interact with expert agents that understand the terminology and the context of the conversation. This new user experience is going to be part of the next generation of line-of-business applications.
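As a simplified sketch of what a domain-specialised assistant component might look like, the snippet below uses the Hugging Face transformers library to run extractive question answering over a short clinical note; the checkpoint name and the example text are illustrative assumptions, and a real healthcare or legal deployment would rely on a model further trained on in-domain data.

from transformers import pipeline

# A general-purpose extractive question-answering model; a vertical-specific
# assistant would swap in a checkpoint adapted to clinical or legal text.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "The patient was prescribed 5 mg of amlodipine daily for hypertension "
    "and advised to return for a follow-up blood-pressure check in four weeks."
)

result = qa(question="What dosage of amlodipine was prescribed?", context=context)
print(result["answer"], f"(confidence: {result['score']:.2f})")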

If you want a vision of the future, imagine a thousand bots screaming from a human face – forever (apologies to George Orwell). As U.S. policymakers remain indecisive over how to prevent a repeat of the 2016 election interference, the threat is looming ever more ominous on the horizon. The public has unfortunately settled on the term “bots” to describe the social media manipulation activities of foreign actors, invoking an image of neat rows of metal automatons hunched over keyboards, when in reality live humans are methodically at work. While the 2016 election mythologized the power of these influence-actors, such work is slow, costly, and labor-intensive. Humans must manually create and manage accounts, hand-write posts and comments, and spend countless hours reading content online to signal-boost particular narratives. However, recent advances in artificial intelligence (AI) may soon enable the automation of much of this work, massively amplifying the disruptive potential of online influence operations.

This emerging threat draws its power from vulnerabilities in our society: an unaware public, an underprepared legal system, and social media companies not sufficiently concerned with their exploitability by malign actors. Addressing these vulnerabilities requires immediate attention from lawmakers to inform the public, address legal blind spots, and hold social media companies to account.

But that always looked like a tall order when faced with stiff competition from tech giants like Google, IBM, and Amazon, all happy to pour billions into AI research. Faced with that reality, OpenAI has undergone a significant metamorphosis in the last couple of years.

Musk stepped away last year, citing both conflicts of interest, with his electric car company Tesla investing in self-driving technology, and disagreements over the direction of the organization. Earlier this year a for-profit arm was also spun off to enable OpenAI to raise investment in its effort to keep up.

A byzantine legal structure will supposedly bind the new company to the original mission of the nonprofit. OpenAI LP is controlled by OpenAI’s board and obligated to advance the nonprofit’s charter. Returns for investors are also capped at 100 times their stake, with any additional value going to the nonprofit; for instance, a $10 million stake could return at most $1 billion before the cap applies. That, however, is a highly ambitious target that would need to be hit before any limits on profiteering kicked in.

Is this new law anti-Kemetic and anti-pagan, as it implies there is only one “God”? And why should atheists put up with this public brainwashing?


RAPID CITY, S.D. (AP) – When students return to public schools across South Dakota this fall, they should expect to see a new message on display: “In God We Trust.”

A new state law that took effect this month requires all public schools in the state’s 149 districts to paint, stencil or otherwise prominently display the national motto.

The South Dakota lawmakers who proposed the law said the requirement was meant to inspire patriotism in the state’s public schools. Displays must be at least 12-by-12 inches and must be approved by the school’s principal, according to the law.

Carl Malamud is on a crusade to liberate information locked up behind paywalls — and his campaigns have scored many victories. He has spent decades publishing copyrighted legal documents, from building codes to court records, and then arguing that such texts represent public-domain law that ought to be available to any citizen online. Sometimes, he has won those arguments in court. Now, the 60-year-old American technologist is turning his sights on a new objective: freeing paywalled scientific literature. And he thinks he has a legal way to do it.


A giant data store quietly being built in India could free vast swathes of science for computer analysis — but is it legal?