A realistic article on AI — especially the danger of AI being manipulated by others for their own gain, which I have also identified as one of the real risks with AI.


Artificial intelligence (AI), once the seeming red-headed stepchild of the scientific community, has come a long way in the past two decades. Most of us have reconciled with the fact that we can’t live without our smartphones and Siri, and AI’s seemingly omnipotent nature has infiltrated the nearest and farthest corners of our lives, from robo-advisors on Wall Street and crime-spotting security cameras, to big data analysis by Google’s BigQuery and Watson’s entry into diagnostics in the medical field.

In many unforeseen ways, AI is helping to improve and make our lives more efficient, though the reverse degeneration of human economic and cultural structures is also a potential reality. The Future of Life Institute’s tagline sums it up in succinct fashion: “Technology is giving life the potential to flourish like never before…or to self-destruct.” Humans are the creators, but will we always have control of our revolutionary inventions?

To much of the general public, AI is AI is AI, but this is only part truth. Today, there are two primary strands of AI development — ANI (Artificial Narrow Intelligence) and AGI (Artificial General Intelligence). ANI is often termed “weak AI” and is “the expert” of the pair, using its intelligence to perform specific functions. Most of the technology with which we surround ourselves (including Siri) falls into the ANI bucket. AGI is the next generation of ANI, and it’s the type of AI behind dreams of building a machine that achieves human levels of consciousness.

Read more

Driverless cars, like the one Google launched in 2012, are touted for their potential energy savings, but engineers say we should consider the possibility that the technology will intensify car use.

If people can work, relax, and even hold meetings in their cars, they may drive more.

Read more

Driven by a need for convenience, an IT specialist from Sweden just opened the country’s first unstaffed store, which uses an app for access and scanning technology to make purchases.

After dropping what turned out to be his last jar of baby food on the floor, Robert Ilijason, who was then home alone with his son, had no choice but to drive in search of an open supermarket to buy a new one.

This was no easy task, as shops close early in many rural areas, leaving individuals with nowhere to go to get any last minute necessities late at night.

Read more

In SELF/LESS, a dying old man (Academy Award winner Ben Kingsley) transfers his consciousness to the body of a healthy young man (Ryan Reynolds). If you’re into immortality, that’s pretty good product packaging, no?

But this thought-provoking psychological thriller also raises fundamental and felicitous ethical questions about extending life beyond its natural boundaries. Exploring the moral and ethical issues that surround mortality has long been a defining characteristic of many notable stories within the sci-fi genre. In fact, Mary Shelley’s age-old novel Frankenstein, while having little to no direct plot overlap [with SELF/LESS], is considered by many to be among the first examples of the science fiction genre.

Screenwriters and brothers David and Alex Pastor show the timelessness of society’s fascination with immortality. However, their exploration reflects a rapidly growing deviation from the tale’s origins in traditional science fiction. This shift can be defined, on the most basic level, as the genre losing its implied fictitious base. Sure, we have yet to clone dinosaurs, but many core elements of beloved past sci-fi films are growing well within our reach, if not already part of our present, everyday lives. From Luke Skywalker’s prosthetic hand in Star Wars Episode V: The Empire Strikes Back (1980) to the Sentinels of The Matrix (1999) to Will Smith’s bionic arm in I, Robot, elements of our past science fiction films help define our current reality.

Read more

“Notice for all Mathematicians” — Are you a mathematician who loves complex algorithms? If you do, IARPA wants to speak with you.


Last month, the intelligence community’s research arm requested information about training resources that could help artificially intelligent systems get smarter.

It’s more than an effort to build new, more sophisticated algorithms. The Intelligence Advanced Research Projects Activity could actually save money by refining existing algorithms that were previously discarded, subjecting them to more rigorous training.

Nextgov spoke with Jacob Vogelstein, a program manager at IARPA who specializes in applied neuroscience, about the program. This conversation has been edited for length and clarity.

Read more

I hear this author; however, can it pass military basic training/boot camp? I think not.


Back when Alphabet was known as Google, the company bought Boston Dynamics, makers of the amazingly advanced robot named Atlas. At the time, Google promised that Boston Dynamics would stop taking military contracts, as it often did. But here’s the open secret about Atlas: She can enlist in the US military anytime she wants.

Technology transfer is a two-way street. Traditionally we think of technology being transferred from the public to the private sector, with the internet as just one example. The US government invests in and develops all kinds of important technologies for war and espionage, and many of those technologies eventually make their way to American consumers in one way or another. When the government does so consciously with both military and civilian capabilities in mind, it’s called dual-use tech.

But just because a company might not actively pursue military contracts doesn’t mean that the US military can’t buy (and perhaps improve upon) technology being developed by private companies. The defense community sees this as more crucial than ever, as government spending on research and development has plummeted. About one-third of R&D was being done by private industry in the US after World War II, and two-thirds was done by the US government. Today it’s the inverse.

Read more

I see articles and reports like the following about the military actually considering fully autonomous missiles, drones with missiles, etc. I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets on their own, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.

Read more

The interesting thing about the articles I have seen on robots taking jobs is that it has happened only in Asia and, in certain situations, in the UK. I believe that companies across the US see some of the existing hacking risks (especially since the US has the highest incidence of hacking among countries), and this keeps them from simply replacing their employees with connected autonomous robots. I am also not sure that robotics is at a level of sophistication that most consumers want to spend a lot of money on at the moment.

The bottom line is that until hacking is drastically reduced (if not finally eliminated), autonomous AI like connected robots and humanoids will have a hard time being adopted by the collective mass of the US population.


In the future the global employment market will rely heavily on robots, artificial intelligence, and all sorts of automation.

In fact, technology is so crucial going forward that in January the World Economic Forum predicted that, in less than five years, more than five million human jobs will be replaced by automation, AI, and robots.

Just this week, a new report showed that nearly a third of retail jobs in the UK could disappear by 2025, with many workers replaced by technology in some way or another.

Read more