
AI squad mates. Called this a few years ago. It’s too annoying getting strangers to join up on some online task for a game.


Who wouldn’t want an A.I. to sit there and play backseat gamer? That’s exactly what looks to be happening thanks to a recently revealed Sony patent. The patent is for an automated Artificial Intelligence (A.I.) control mode specifically designed to perform certain tasks, including playing a game while the player is away.

In the patent, as spotted by SegmentNext, it’s detailed that this A.I. will involve assigning a default gameplay profile to the user. This profile will include a compendium of information detailing the player’s gaming habits, play styles, and decision-making processes while sitting down for a new adventure. This knowledge can then be harnessed to simulate the player’s gaming habits, even when said gamer is away from their platform of choice.

“The method includes monitoring a plurality of game plays of the user playing a plurality of gaming applications,” reads the patent itself. “The method includes generating a user gameplay profile of the user by adjusting the default gameplay style based on the plurality of game plays, wherein the user gameplay profile includes a user gameplay style customized to the user. The method includes controlling an instance of a first gaming application based on the user gameplay style of the user gameplay profile.”
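To make the patent’s language a little more concrete, here is a minimal sketch, in Python, of the loop it describes: monitor play sessions, adjust a default gameplay profile toward the observed style, then use that profile to control a game instance on the player’s behalf. Everything here is a hypothetical illustration; the class names, the three style dimensions, and the running-average update are not taken from Sony’s filing.

```python
# Hypothetical sketch of the workflow the patent describes; none of these
# names or numbers come from the actual filing.
from dataclasses import dataclass, field

@dataclass
class GameplayStyle:
    aggression: float = 0.5   # 0 = avoids combat, 1 = rushes every encounter
    caution: float = 0.5      # 0 = reckless, 1 = avoids all risk
    exploration: float = 0.5  # 0 = beelines to the objective, 1 = explores everything

@dataclass
class UserGameplayProfile:
    style: GameplayStyle = field(default_factory=GameplayStyle)  # starts as the default profile
    sessions_observed: int = 0

    def update(self, observed: GameplayStyle) -> None:
        """Adjust the default style toward behavior seen in a monitored play session."""
        n = self.sessions_observed
        s = self.style
        s.aggression = (s.aggression * n + observed.aggression) / (n + 1)
        s.caution = (s.caution * n + observed.caution) / (n + 1)
        s.exploration = (s.exploration * n + observed.exploration) / (n + 1)
        self.sessions_observed = n + 1

def control_game_instance(profile: UserGameplayProfile) -> str:
    """Pick an action for the absent player based on the learned gameplay style."""
    s = profile.style
    if s.aggression > 0.6:
        return "engage_enemy"
    if s.exploration > 0.6:
        return "explore_side_area"
    return "advance_to_objective"

# Two monitored sessions nudge the default profile toward the player's habits,
# after which the A.I. controller acts in character while the player is away.
profile = UserGameplayProfile()
profile.update(GameplayStyle(aggression=0.9, caution=0.2, exploration=0.4))
profile.update(GameplayStyle(aggression=0.8, caution=0.3, exploration=0.5))
print(control_game_instance(profile))  # -> engage_enemy
```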

Around a century ago, when film stock and photography were still in their early days, they faced a number of challenges in capturing the essence of an image. In addition to being limited to black and white, photographic and film methods also struggled to capture various other parts of the color spectrum, leaving many images of famous figures looking different from how they actually appeared.

Now, a new AI imaging technique uses color to restyle old photographs in a way that could almost pass for modern-day photography. This colorization method mitigates the main shortcomings of cameras and lenses from that era, namely the orthochromatic nature of those tools, which incorporated all detected light into the image without discrimination. The inclusion of all of this light produced photos that appeared grainy and noisy, making renowned figures such as U.S. President Abraham Lincoln look far older and more wrinkled than they really were.

These days, especially with the aid of computer graphics, more advanced photographic techniques take advantage of the fact that light tends to penetrate the surface of human skin and illuminate the flesh from underneath. This illumination helps eliminate the extra noise and wrinkle marks that marred many images from the early 1900s.

Words categorize the semantic fields they refer to in ways that maximize communication accuracy while minimizing complexity. Recent studies have shown that human languages are optimally balanced between accuracy and complexity. For example, many languages have a word that denotes the color red, but no language has individual words to distinguish ten different shades of the color: these additional words would complicate the vocabulary while rarely being useful for precise communication.

A study published on 23 March in the journal Proceedings of the National Academy of Sciences analyzed how artificial neural networks spontaneously develop systems for naming colors. The study was led by Marco Baroni, ICREA research professor at the UPF Department of Translation and Language Sciences (DTCL), and conducted with members of Facebook AI Research (France).

For this study, the researchers built two artificial neural networks and trained them with two generic deep learning methods. As Baroni explains: “We made the networks play a color-naming game in which they had to communicate about color chips from a continuous color space. We did not limit the ‘language’ they could use; however, once they learned to play the game successfully, we observed the color-naming terms these artificial neural networks had developed spontaneously.”
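The study itself trained deep neural networks; the minimal sketch below, in plain Python with no learning at all, only illustrates the structure of such a color-naming game, assuming a one-dimensional hue in [0, 1) stands in for the continuous color chips and a fixed equal-width partition stands in for the naming system the networks develop. Running it also shows the accuracy/complexity tension described above: going from 2 to 4 words improves communication accuracy a lot, while going from 8 to 16 buys very little.

```python
# Toy stand-in for a color-naming game: a "speaker" names a hue with one of a
# small set of words, and a "listener" must pick the named chip out of two
# candidates. This is an illustrative sketch, not the study's neural-network setup.
import random

def speaker(hue: float, num_words: int) -> int:
    """Name a hue in [0, 1) with one of num_words equal-width color terms."""
    return min(int(hue * num_words), num_words - 1)

def listener(word: int, num_words: int, candidates: list[float]) -> float:
    """Guess the candidate chip closest to the prototype hue of the received word."""
    prototype = (word + 0.5) / num_words
    return min(candidates, key=lambda hue: abs(hue - prototype))

def play(num_words: int, rounds: int = 20_000) -> float:
    """Fraction of rounds in which the listener identifies the target chip."""
    successes = 0
    for _ in range(rounds):
        target, distractor = random.random(), random.random()
        word = speaker(target, num_words)
        successes += listener(word, num_words, [target, distractor]) == target
    return successes / rounds

random.seed(0)
for vocab_size in (2, 4, 8, 16):
    print(f"{vocab_size:2d} color words -> communication accuracy {play(vocab_size):.2f}")
```

Larger vocabularies keep improving accuracy, but with rapidly diminishing returns, which echoes the accuracy/complexity balance described above.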

Take my micro-transaction.


We may be on track to our own version of the Oasis after an announcement yesterday from Epic Games that it has raised $1 billion to put towards building “the metaverse.”

Epic Games is the creator of the hugely popular Fortnite, and its Unreal Engine powers many other titles, including Godfall. An eye-popping demo released last May shows off Unreal Engine 5, the next generation of Epic’s software for making video games, interactive experiences, and augmented and virtual reality apps, set to be released later this year. The graphics are so advanced that the demo doesn’t look terribly different from a really high-quality video camera following someone around in real life, except it’s even cooler. In February Epic unveiled its MetaHuman Creator, an app that creates highly realistic “digital humans” in a fraction of the time it used to take.

So what’s “the metaverse,” anyway? The term was coined in 1992, when Neal Stephenson published his hit sci-fi novel Snow Crash, in which the protagonist moves between a virtual world and the real world while fighting a computer virus. In the context of Epic Games’ announcement, the metaverse will be not just a virtual world, but the virtual world: a digitized version of life where anyone can exist as an avatar or digital human and interact with others. It will be active even when people aren’t logged into it, and it will link all previously existing virtual worlds, like an internet for virtual reality.

Writer-director Neil Burger is well known for his provocative cinematic projects, most notably 2006’s period-set magician movie “The Illusionist,” 2011’s psychological thriller “Limitless,” and a trio of “Divergent” films adapted from author Veronica Roth’s young adult sci-fi novels.

Now Burger has his eyes fixed on the stars with his new science fiction adventure flick, “Voyagers,” which revolves around the perils aboard a generation spaceship carrying 30 home-grown candidates on a one-way mission to settle an exoplanet 86 years’ travel from Earth.

Elon Musk finally got to show off his monkey.

Neuralink, a company founded by Musk that is developing artificial-intelligence-powered microchips to go in people’s brains, released a video Thursday appearing to show a macaque using the tech to play video games, including “Pong.”

Musk has boasted about Neuralink’s tests on primates before, but this is the first time the company has put one on display. During a presentation in 2019, Musk said the company had enabled a monkey to “control a computer with its brain.”

Microsoft won a nearly $22 billion contract to supply U.S. Army combat troops with its augmented reality headsets.

Microsoft and the Army separately announced the deal Wednesday.

The technology is based on Microsoft’s HoloLens headsets, which were originally intended for the video game and entertainment industries.

With powerful engines, near-photorealistic graphics, and the ability to build incredible, immersive worlds, it’s hard to imagine what the next big technological advance in gaming might be.

Based on a recent tweet by Neuralink co-founder and President Max Hodak, the term “video game” might not even apply. In it, he hinted (vaguely, to be fair) that whatever forms of entertainment get programmed into neural implants and brain-computer interfaces will represent a paradigm shift that moves beyond the current terminology.

“We’re gonna need a better term than ‘video game’ once we start programming for more of the sensorium,” Hodak tweeted.

Flexible electrodes, electronic components that conduct electricity, are of key importance for the development of numerous wearable technologies, including smartwatches, fitness trackers and health monitoring devices. Ideally, electrodes inside wearable devices should retain their electrical conductance when they are stretched or deformed.

Many flexible electrodes developed so far are made of metal films placed on elastic substrates. While some of these electrodes are flexible and conduct electricity well, the metal films sometimes fracture, which can result in a sudden loss of electrical connection.

Researchers at the University of Illinois at Urbana-Champaign have recently introduced a new design that could enable the development of strain-resilient flexible electrodes that conduct electricity well even when they are stretched or deformed. This design, outlined in a paper published in Nature Electronics, involves the introduction of a thin, two-dimensional (2-D) interlayer that reduces the risk of fractures and retains the electrical connections of the metal films.