
Aside from staying alive and healthy, the biggest concern most people have during the pandemic is the future of their jobs. Unemployment in the U.S. has skyrocketed, from 5.8 million in February 2020 to 16.3 million in July 2020, according to the U.S. Bureau of Labor Statistics. But it’s not only the lost jobs that are reshaping work in the wake of COVID-19; the nature of many of the remaining jobs has changed, as remote work becomes the norm. And in the midst of it all, automation has become a potential threat to some workers and a salvation to others. In this issue, we examine this tension and explore the good, bad, and unknown of how automation could affect jobs in the immediate and near future.

Prevailing wisdom says that the wave of new AI-powered automation will follow the same pattern as other technological leaps: it will kill off some jobs but create new (and potentially better) ones. But it’s unclear whether that will hold true this time around. Complicating matters, at a time when workplace safety means limiting the spread of a deadly virus, automation can play a role in reducing the number of people working shoulder-to-shoulder — keeping workers safe, but also eliminating jobs.

Even as automation creates exciting new opportunities, it’s important to bear in mind that those opportunities will not be distributed equally. Some jobs are more vulnerable to automation than others, and uneven access to reskilling and other crucial factors will mean that some workers will be left behind.

Virtual assistants and robots are becoming increasingly sophisticated, interactive, and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses; they should also mimic humans in the way they speak.

Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how virtual assistants and robots communicate with humans by generating gestures to accompany their speech. Their paper, pre-published on arXiv and set to be presented at the European Conference on Computer Vision (ECCV) 2020, introduces Mix-StAGE, a new model that can produce different styles of co-speech gestures that best match the voice of a speaker and what he/she is saying.

“Imagine a situation where you are communicating with a friend in a virtual world through a headset,” Chaitanya Ahuja, one of the researchers who carried out the study, told TechXplore. “The headset is only able to hear your voice, but not able to see your hand gestures. The goal of our model is to predict the hand gestures accompanying the speech.”
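To make the idea concrete, here is a minimal, purely illustrative sketch of a speech-to-gesture model in PyTorch. It is not the actual Mix-StAGE architecture (which also models per-speaker gesture style and uses a more sophisticated training setup); every module choice, name, and dimension below is an assumption for illustration.

```python
# Illustrative sketch only: a bare-bones speech-to-gesture regressor.
# The real Mix-StAGE model is more elaborate; all shapes and names
# here are assumptions.
import torch
import torch.nn as nn

class SpeechToGesture(nn.Module):
    def __init__(self, n_mels=64, hidden=256, n_joints=21):
        super().__init__()
        # Encode a mel-spectrogram of the speech into a hidden sequence.
        self.audio_encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Decode each hidden state into 2D joint positions for that frame.
        self.pose_decoder = nn.Linear(hidden, n_joints * 2)

    def forward(self, mel):                 # mel: (batch, frames, n_mels)
        h, _ = self.audio_encoder(mel)
        poses = self.pose_decoder(h)        # (batch, frames, n_joints * 2)
        return poses.view(mel.size(0), mel.size(1), -1, 2)

model = SpeechToGesture()
mel = torch.randn(1, 120, 64)   # ~a few seconds of audio features (assumed)
gestures = model(mel)           # predicted pose sequence
print(gestures.shape)           # torch.Size([1, 120, 21, 2])
```

In a full system, the predicted pose sequence would be rendered onto an avatar, and training would compare predictions against gestures extracted from video of real speakers.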

A team of researchers at Stanford University has created an artificial intelligence-based player called the Vid2Player that is capable of generating startlingly realistic tennis matches—featuring real professional players. They have written a paper describing their work and have uploaded it to the arXiv preprint server. They have also uploaded a YouTube video demonstrating their player.

Video game companies have put a lot of time and effort into making their games look realistic, but thus far, have found it tough going when depicting human beings. In this new effort, the researchers have taken a different approach to the task—instead of trying to create human-looking characters from scratch, they use sprites, which are characters based on video footage of real people. The sprites are then pushed into action by a computer using AI software to mimic the ways a human being moves while playing tennis. The researchers trained their AI system using video of real tennis professionals performing; the footage also provided imagery for the creation of sprites. The result is an interactive player that depicts real professional tennis players such as Roger Federer, Serena Williams, Novak Djokovic and Rafael Nadal in action. Perhaps most importantly, the simulated gameplay is virtually indistinguishable from a televised match.

The Vid2Player is capable of replaying actual matches, but because it is interactive, a user can change the course of the match as it unfolds. Users can change how a player reacts when a ball comes over the net, for example, or how a player plays in general. They can decide which part of the opposite side of the court to aim for, or whether to hit backhand or forehand. They can also slightly alter the course of a real match by allowing a shot that in reality was out of bounds to land magically inside the line. The system also allows for players from different eras to compete. The AI software adjusts for lighting and clothing (if video is used from multiple matches). Because AI software is used to teach the sprites how to play, the actions of the sprites actually mimic the most likely actions of the real player.
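As a rough illustration of the core idea — sprites driven by a player's learned tendencies unless the user intervenes — here is a toy sketch. This is not the Stanford Vid2Player code; the shot categories and probabilities are invented for illustration.

```python
# Illustrative sketch only: sprite behavior sampled from (hypothetical)
# per-player shot statistics, with an optional user override.
import random

# Assumed shot-placement tendencies learned from match footage
# (the numbers here are made up for illustration).
SHOT_MODEL = {
    "Federer":  {"down_the_line": 0.35, "cross_court": 0.50, "body": 0.15},
    "Williams": {"down_the_line": 0.45, "cross_court": 0.40, "body": 0.15},
}

def choose_shot(player, user_target=None):
    """Pick a shot placement: honor the user's override if given,
    otherwise sample from the player's learned tendencies."""
    if user_target is not None:
        return user_target
    dist = SHOT_MODEL[player]
    targets, weights = zip(*dist.items())
    return random.choices(targets, weights=weights, k=1)[0]

print(choose_shot("Federer"))                       # mimics the real player
print(choose_shot("Williams", user_target="body"))  # user-directed shot
```

A renderer would then select and blend sprite frames matching the chosen stroke, which is how the system can keep the on-screen motion looking like the real player even when the user changes the course of the match.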

OpenAI’s new language generator #GPT-3 is shockingly good—and completely mindless: https://bit.ly/3kphfsX

By Will Douglas Heaven, MIT Technology Review

#AI #MachineLearning #NeuralNetworks #DeepLearning


“Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco–based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI.

OpenAI first described GPT-3 in a research paper published in May. But last week it began drip-feeding the software to selected people who requested access to a private beta. For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud.
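For context, here is roughly what calling GPT-3 looked like for beta participants, using the `openai` Python client OpenAI provided at the time; the prompt and sampling parameters below are illustrative, not from the article.

```python
# Minimal sketch of querying GPT-3 through OpenAI's cloud API during
# the 2020 private beta. Prompt and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # issued to beta participants

response = openai.Completion.create(
    engine="davinci",            # the largest GPT-3 model in the beta
    prompt="Playing with GPT-3 feels like",
    max_tokens=40,
    temperature=0.7,
)
print(response.choices[0].text)
```

The planned commercial product works the same way: the model stays on OpenAI's servers, and businesses pay for access to completions over the cloud rather than running the model themselves.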

A major breakthrough in materials research may allow the human brain to link with artificial intelligence, it was announced at an American Chemical Society Fall 2020 event on Monday.

The discovery has led to a new polymer that allows electronics to be integrated into the brain, after challenges with substances such as gold, steel and silicon resulted in scarring of organic tissue.

Scarring due to previously used materials can block electrical signals transmitted from computers to the brain, but University of Delaware researchers developed new types of polymers aimed at overcoming the risks.

As we wind up our discussion of the Space Race and touch on the strategies China is employing to stay on top of space and tech, we delve into the meaty topic of next-generation artificial intelligence, including GPT-3, OpenAI, and CommaAI, and how they are making strides in automation, machine learning, translation, and self-driving cars. It’s a brave new world, and we discuss some of the pitfalls this emerging range of systems can bring along with its many benefits.

A temperature of 130 degrees Fahrenheit (54.4 degrees Celsius) recorded in California’s Death Valley on Sunday by the US National Weather Service could be the hottest ever measured with modern instruments, officials say.

The reading was registered at 3:41 pm at the Furnace Creek Visitor Center in the Death Valley national park by an automated observation system—an electronic thermometer encased inside a box in the shade.

In 1913, a weather station half an hour’s walk away recorded what officially remains the world record of 134 degrees Fahrenheit (56.7 degrees Celsius). But its validity has been disputed because a superheated sandstorm at the time may have skewed the reading.
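Both Celsius figures quoted above follow from the standard conversion formula, C = (F − 32) × 5/9; a quick check:

```python
# Verify the quoted conversions with C = (F - 32) * 5/9.
def f_to_c(f):
    return (f - 32) * 5 / 9

print(round(f_to_c(130), 1))  # 54.4  (Sunday's Death Valley reading)
print(round(f_to_c(134), 1))  # 56.7  (the disputed 1913 world record)
```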