
Facebook has just announced it’s going to hire 10,000 people in Europe to develop the “metaverse”.

This is a concept which is being talked up by some as the future of the internet. But what exactly is it?

**What is the metaverse?**
To the outsider, it may look like a souped-up version of Virtual Reality (VR) — but some people think the metaverse could be the future of the internet.

In fact, the belief is that it could be to VR what the modern smartphone is to the first clunky mobile phones of the 1980s.

Instead of being on a computer, in the metaverse you might use a headset to enter a virtual world connecting all sorts of digital environments.

Unlike current VR, which is mostly used for gaming, this virtual world could be used for practically anything — work, play, concerts, cinema trips — or just hanging out.

Most people envision that you would have a 3D avatar — a representation of yourself — as you use it.

But because it’s still just an idea, there’s no single agreed definition of the metaverse.


The truth is these systems aren’t masters of language. They’re nothing more than mindless “stochastic parrots.” They don’t understand a thing about what they say, and that makes them dangerous. They tend to “amplify biases and other issues in the training data” and regurgitate what they’ve read before, but that doesn’t stop people from ascribing intentionality to their outputs. GPT-3 should be recognized for what it is: a dumb — even if potent — language generator, not a machine so close to us in humanness that we should call it “self-aware.”

On the other hand, we should ponder whether OpenAI’s intentions are honest and whether it has too much control over GPT-3. Should any company have absolute authority over an AI that could be used for so much good — or so much evil? What happens if it decides to walk back its initial promises and put GPT-3 at the service of its shareholders?


Not even our imagination will manage to keep up with technology’s pace.

Martha called his name again, “Ash!” But he wasn’t listening, as always, his eyes fixed on the screen while he uploaded a smiley picture of a younger self. Martha joined him in the living room and pointed to his phone. “You keep vanishing. Down there.” Although annoying, Ash’s addiction didn’t prevent the loving young couple from living an otherwise happy life.

The sun was already out, hidden behind the soft morning clouds when Ash came down the stairs the next day. “Hey, get dressed! Van’s got to be back by two.” They had an appointment but Martha’s new job couldn’t wait. After a playfully reluctant goodbye, he left and she began to draw on her virtual easel.

Most of us control light all the time without even thinking about it, usually in mundane ways: we don a pair of sunglasses, put on sunscreen, and close—or open—our window blinds.

But the control of light can also come in high-tech forms. The screen of the computer, tablet, or phone on which you are reading this is one example. Another is telecommunications, which controls light to create signals that carry data along optical fibers.

Scientists also use high-tech methods to control light in the laboratory, and now, thanks to a new breakthrough that uses a specialized material only three atoms thick, they can control light more precisely than ever before.

When’s the last time you chirped, “Hey Google” (or Siri for that matter), and asked your phone for a recommendation for good sushi in the area, or perhaps asked what time sunset would be? Most folks these days perform these tasks on a regular basis on their phones, but you may not have realized there were multiple AI (Artificial Intelligence) engines involved in quickly delivering the results for your request.

In these examples, AI neural network models handled the natural language recognition, inferred what you were looking for, and then delivered relevant search results from internet databases around the globe, ranking the most appropriate ones based on your location and a number of other factors. These are just a couple of examples but, in short, AI or machine learning processing is a big part of the smartphone experience these days, from recommendation engines to translation, computational photography and more.
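As a rough illustration of that kind of pipeline, here is a toy Python sketch: a recognized query is mapped to an intent, and candidate results are then re-ranked by proximity to the user. Everything in it (the keyword matching, the places, the 0.5-point-per-kilometer penalty) is a hypothetical stand-in; real assistants run large neural models for both steps.

```python
import math
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    rating: float  # 0-5 stars
    lat: float
    lon: float

def infer_intent(query: str) -> str:
    # Stand-in for a neural intent classifier: simple keyword matching.
    q = query.lower()
    if "sushi" in q:
        return "restaurant_search"
    if "sunset" in q:
        return "sun_times"
    return "web_search"

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Haversine great-circle distance between two points on Earth.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def rank(places: list[Place], user_lat: float, user_lon: float) -> list[Place]:
    # Blend quality with proximity: subtract a (hypothetical) penalty per km.
    def score(p: Place) -> float:
        return p.rating - 0.5 * distance_km(user_lat, user_lon, p.lat, p.lon)
    return sorted(places, key=score, reverse=True)

places = [
    Place("Sushi A", 4.8, 40.7580, -73.9855),
    Place("Sushi B", 4.2, 40.7484, -73.9857),
    Place("Sushi C", 4.9, 40.6892, -74.0445),
]
print(infer_intent("good sushi in the area"))  # restaurant_search
for p in rank(places, 40.7484, -73.9857):
    print(p.name)
```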

As such, benchmarking tools aimed at measuring mobile platform AI performance are becoming more prevalent. MLPerf is one such tool that nicely covers the gamut of AI workloads, and today Qualcomm is highlighting some fairly impressive results in a recent major update to the MLCommons database. MLCommons is an open consortium of chip manufacturers and OEMs with founding members including Intel, NVIDIA, Arm, AMD, Google, Qualcomm and many others. The consortium’s MLPerf benchmark measures AI workloads like image classification, natural language processing and object detection. Today Qualcomm has tabulated benchmark results from its Snapdragon 888+ Mobile Platform (a slightly goosed-up version of its Snapdragon 888) against a myriad of competing mobile chipsets from Samsung and MediaTek, and even Intel’s 11th Gen Core series laptop chips.
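For a flavor of what such a benchmark actually measures, here is a minimal Python sketch of an inference latency and throughput loop. It is only illustrative: MLPerf’s real harness (LoadGen) is far more rigorous about load generation and percentile accounting, and the tiny matrix-multiply “model” below merely stands in for a genuine image-classification network.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
FEATURES = 4096  # hypothetical flattened input size
W1 = rng.standard_normal((FEATURES, 256)).astype(np.float32)
W2 = rng.standard_normal((256, 1000)).astype(np.float32)

def model(batch: np.ndarray) -> np.ndarray:
    # Stand-in inference: two dense layers with a ReLU between them.
    return np.maximum(batch @ W1, 0.0) @ W2

def benchmark(batch_size: int = 8, iters: int = 200) -> None:
    batch = rng.standard_normal((batch_size, FEATURES)).astype(np.float32)
    model(batch)  # warm-up run, excluded from timing
    latencies = []
    for _ in range(iters):
        t0 = time.perf_counter()
        model(batch)
        latencies.append(time.perf_counter() - t0)
    lat_ms = np.array(latencies) * 1000.0
    print(f"median {np.median(lat_ms):.2f} ms | "
          f"p99 {np.percentile(lat_ms, 99):.2f} ms | "
          f"{batch_size / lat_ms.mean() * 1000.0:.0f} samples/s")

benchmark()
```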

We are living in a time when we can see what needs to be done, but the industrial legacy of the last century still holds enormous power, politically and in the media, and has vast sums at its disposal, because its investors have too much to lose to walk away. So they throw good money after bad in a desperate attempt to save their stranded assets.

Well, the next decade will bring new technologies that rupture the business models of the old guard, tipping the balance against their huge economies of scale and quickly dissolving their advantage before consigning them to history. These new ways of doing things will be better for us and the environment, and cheaper than ever before. Just look at how the internet and the smartphone destroyed everything from cameras to video shops to taxis and the very high street itself.

The rest is not far behind and it all holds the opportunity to mend the damage we have done.

If you want to know more about what lies ahead, check out this video.


It might sound more like science fiction, but we are approaching an era in which everything will be fundamentally disrupted, from the energy that fuels our modern lifestyles to the food on our plates, from transportation to medicine to production. The changes the smartphone forced on everything it touched, from phones to video cameras to personal music players and information portals, are set to happen to everything else. And if you want to know more about how autonomous vehicles could change the world, check this out. https://youtu.be/uFRSf_vD-nw

Field doctors still diagnose burns by sight, smell and touch. A smart bandage and a smartphone camera may be all we need to change that — and prevent serious and lasting complications.

This article was produced for AMEDD by Scientific American Custom Media, a division separate from the magazine’s board of editors.

AI startups can rake in investment by hiding how their systems are powered by humans. But such secrecy can be exploitative.

The nifty app CamFind has come a long way with its artificial intelligence. It uses image recognition to identify an object when you point your smartphone camera at it. But back in 2015 its algorithms were less advanced: the app mostly used contract workers in the Philippines to quickly type what they saw through a user’s phone camera, CamFind’s co-founder confirmed to me recently. You wouldn’t have guessed that from a press release it put out that year, which touted industry-leading “deep learning technology” but didn’t mention any human labelers.

The practice of hiding human input in AI systems remains an open secret among those who work in machine learning and AI. A 2019 analysis of European tech startups by London-based MMC Ventures even found that 40% of purported AI startups showed no evidence of actually using artificial intelligence in their products.

Have you ever wondered how much water is needed to charge an iPhone? Probably not, because it takes electricity to charge a phone, not water. But if you had a hydraulic generator, you could generate some electricity using only your garden hose. That is precisely what is done in a video by the YouTube channel The Action Lab.

The owner of the channel, James Orgill, demonstrates the power output of his setup and shows how the voltage rises as he increases the water flow. The power that comes straight out of the generator is AC, so he connects a full bridge rectifier to the output to convert it to DC. By adjusting the flow, he caps the generated voltage at 12 V to keep the iPhone from frying.

But if you ever decide to try this at home, you should probably buy a voltage regulator, just to be safe. He then charges his phone to figure out how much water it would take to reach a full charge, and calculates that he would need 528 gallons (2,400 liters) of it! If you want to see the demonstration, make sure you watch the video above.
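For a rough sense of where a figure like that comes from, here is a back-of-the-envelope Python sketch. The battery energy, charging power, and flow rate are assumptions chosen for illustration, not measurements from the video; they happen to reproduce the article’s pairing of 2,400 liters with 528 gallons, which lines up if those are imperial gallons.

```python
# Back-of-the-envelope check of the "528 gallons (2,400 liters)" figure.
# Battery energy, charging power, and flow rate below are illustrative
# assumptions, not measurements taken from the video.

BATTERY_WH = 12.4           # assumed iPhone battery energy (~3,200 mAh at 3.85 V)
CHARGE_W = 3.1              # assumed net charging power from the generator
FLOW_L_PER_MIN = 10.0       # assumed garden-hose flow rate
L_PER_IMPERIAL_GAL = 4.546  # the 528/2,400 pairing matches imperial gallons

hours = BATTERY_WH / CHARGE_W           # time to reach a full charge
liters = FLOW_L_PER_MIN * 60.0 * hours  # water that flowed in that time
gallons = liters / L_PER_IMPERIAL_GAL

print(f"{hours:.1f} h to charge, {liters:,.0f} L (~{gallons:,.0f} imperial gallons)")
```

With these numbers the script prints roughly 4 hours, 2,400 L and about 528 imperial gallons; change any of the assumed constants and the total scales accordingly.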

Not all who wander are lost – but sometimes their cell phone reception is. That might change soon if a plan to extend basic cell phone coverage to all parts of the globe comes to fruition. Lynk has already proven it can use a typical smartphone to bounce a standard SMS text message off a low-Earth-orbit satellite, and they don’t plan to stop there.

Formerly known as Ubiquitilink, Lynk was founded a few years ago by Nanoracks founder Charles Miller and his partners, and came out of “stealth mode” as a start-up in 2019. In 2020, they used a satellite to send an SMS message from a typical smartphone, without requiring the fancy GPS locators and antennas that specially made satellite phones need.

The company continued its success this week by demonstrating a “two-way” link using its newly launched fifth satellite, called “Shannon.” They’ve also proven it works across multiple phones in numerous areas, including the UK, America, and the Bahamas.