A team of French researchers has collaborated to create an advanced robotic “third arm”.
From an AI chef to a Snoop Dogg video ad, to digital people, here are 3 futuristic ways AI is taking brand voice to the next level.
With fresh perspectives and passions, Millennial and Generation Z entrepreneurs are at the forefront of AI innovation. They are unlocking new ways to handle even mundane tasks and will forever change how we work and see the world.
Conversation and human language are particularly challenging areas for computers, since words and communication are not precise. Learn more about the conversational pattern of AI.
Canada is marching forward with its international partners to establish a permanent research installation near the Moon, the Lunar Gateway.
As it did for the Shuttle and Station programs before, the Canadian Space Agency, via a partnership with MacDonald, Dettwiler, and Associates, Inc., will build the next-generation robotic system: Canadarm3.
This smart robotic system will be Canada’s contribution to the US-led Lunar Gateway station for the Artemis Program, the next major international collaboration in human space exploration and a cornerstone of Canada’s space strategy.
Last year’s Netflix movie The Great Hack detailed the dark side of data collection, centered on the 2016 Cambridge Analytica scandal. The movie describes how “psychometric profiles” exist for you, me, and all of our friends. The data collected from our use of digital services can be packaged in a way that gives companies insight into our habits, preferences, and even our personalities. With this information, they can do anything from showing us an ad for a pair of shoes we’ll probably like to trying to change our minds about which candidate to vote for in an election.
With so much of our data already out there, plus the fact that most of us will likely keep using the free apps we’ve enjoyed for years, could it be too late to try to fundamentally change the way this model works?
Maybe not. Think of it this way: we have a long, increasingly automated and digitized future ahead of us, and data is only going to become more important, valuable, and powerful with time. There’s a line (which some would say we’ve already crossed) beyond which the amount of data companies have access to and the way they can manipulate it for their benefit will become eerie and even dystopian.
Ohio Supercomputer Center Researchers Analyse Twitter Posts Revealing Polarization in Congress on COVID-19
June 25, 2020 — The rapid politicization of the COVID-19 pandemic can be seen in messages members of the U.S. Congress sent about the issue on the social media site Twitter, a new analysis found.
Using artificial intelligence and resources from the Ohio Supercomputer Center, researchers conducted an analysis that covered all 30,887 tweets that members sent about COVID-19 from the first one on Jan. 17 through March 31.
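The article does not describe how the tweets were analysed, but a common approach in studies of this kind is to extract text features and train a classifier to predict the author’s party, using out-of-sample accuracy as a rough proxy for how polarized the messaging is. The sketch below is a minimal, hypothetical illustration of that idea in Python with scikit-learn; the placeholder tweets and the TF-IDF-plus-logistic-regression pipeline are assumptions for illustration, not the researchers’ actual method.

```python
# Minimal illustrative sketch (not the researchers' pipeline): train a model
# to predict a tweet author's party from the text and treat out-of-sample
# accuracy as a rough proxy for how linguistically separable the parties are.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder tweets; the real corpus would be the 30,887
# COVID-19 tweets from members of Congress, labelled with the author's party.
tweets = [
    ("We must reopen the economy and get Americans back to work", "R"),
    ("Listen to the scientists and expand testing immediately", "D"),
    ("This shutdown is hurting small businesses in my district", "R"),
    ("Frontline workers need protective equipment and paid leave", "D"),
    ("Cutting red tape so businesses can reopen safely", "R"),
    ("We need a national testing strategy, not mixed messages", "D"),
    ("Lower taxes will speed up the economic recovery", "R"),
    ("Expand unemployment insurance for workers hit by the virus", "D"),
    ("Protect our freedoms while we fight this virus", "R"),
    ("Fund hospitals and testing before reopening anything", "D"),
]
texts = [text for text, _ in tweets]
parties = [party for _, party in tweets]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram and bigram features
    LogisticRegression(max_iter=1000),
)

# Accuracy well above chance (0.5) would suggest the two parties use
# systematically different language about COVID-19, i.e. polarization.
scores = cross_val_score(model, texts, parties, cv=3)
print(f"Mean party-prediction accuracy: {scores.mean():.2f}")
```

On a corpus this small the numbers are meaningless; the point is only to show how classification accuracy can serve as a simple polarization signal once the full labelled tweet set is available.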
Happy birthday to the world’s most important entrepreneur, Olorogun Elon Musk. We at the Ogba Educational Clinic and Artificial Intelligence Hub celebrate you and wish to immortalize you by setting up a club named after you (The Elon Musk Club). This is in line with our vision to create young Elons from Africa who will eventually outdo you.
For years, Brent Hecht, an associate professor at Northwestern University who studies AI ethics, felt like a voice crying in the wilderness. When he entered the field in 2008, “I recall just agonizing about how to get people to understand and be interested and get a sense of how powerful some of the risks [of AI research] could be,” he says.
To be sure, Hecht wasn’t—and isn’t—the only academic studying the societal impacts of AI. But the group is small. “In terms of responsible AI, it is a sideshow for most institutions,” Hecht says. But in the past few years, that has begun to change. The urgency of AI’s ethical reckoning has only increased since Minneapolis police killed George Floyd, shining a light on AI’s role in discriminatory police surveillance.
This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.