Day 6 at the Artificial Intelligence Hub robotics boot camp: the kids continued the programming class using Python. There was an online training session with Camp Peavy, who showed the kids robots he has built and shared articles on how to build them. It was an awesome experience. It is our vision to domesticate Artificial Intelligence in Africa, and we won't stop until we get there. #TakeOver.
Over the last few years, creating fake videos that swap the face of one person onto another using artificial intelligence and machine learning has become a bit of a hobby for a number of enthusiasts online, with the results of these “deepfakes” getting better and better. Today, a new one applies that tech to Star Trek.
Deep Spocks
YouTuber Jarkan has released a number of deepfake videos featuring different actors swapped into iconic film scenes. Today's release takes Leonard Nimoy's younger Spock from the original Star Trek and swaps him in for Zachary Quinto's Spock in J.J. Abrams' 2009 film Star Trek, in the scene where the younger Spock meets his older self, also played by Nimoy. Swapping Nimoy in for Quinto, or even for Ethan Peck in Discovery, has been done before, but this new deepfake delivers more impressive results.
What do a frying pan, an LED light, and the most cutting-edge camouflage in the world have in common? Well, that largely depends on who you ask. Most people would struggle to find the link, but for University of Michigan chemical engineers Sharon Glotzer and Michael Engel there is a substantial connection, indeed one that has flipped the world of materials science on its head since its discovery over 30 years ago.
The magic ingredient common to all three items is the quasiperiodic crystal, the “impossible” atomic arrangement discovered by Dan Shechtman in 1982. Basically, a quasicrystal is a crystalline structure that gives up the periodicity of a normal crystal (its translational symmetry, the ability to shift the crystal by one unit cell without changing the pattern) in favor of an ordered yet aperiodic arrangement. This means that a quasicrystalline pattern will fill all available space, but in such a way that its atomic arrangement never repeats. Glotzer and Engel recently managed to simulate the most complex quasicrystal ever, a discovery that may revolutionize the field of crystallography by opening the door to a whole host of applications that were previously inconceivable outside of science fiction, like making yourself invisible or shape-shifting robots.
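The order-without-repetition idea is easier to see in one dimension. As a purely illustrative aside (not from the article), the sketch below builds the Fibonacci word, a classic 1D analogue of a quasicrystal: it is generated by a simple deterministic rule, so it is perfectly ordered, yet no periodic shift of the sequence ever lines up with itself.

```python
# Toy illustration: a 1D "quasicrystal" built from the Fibonacci substitution
# rule A -> AB, B -> A. The result is ordered but never repeats periodically,
# which is the defining property described above for quasicrystalline patterns.

def fibonacci_word(generations: int) -> str:
    """Apply the substitution A -> AB, B -> A the given number of times."""
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if letter == "A" else "A" for letter in word)
    return word

if __name__ == "__main__":
    word = fibonacci_word(10)  # 144 symbols
    print(word[:60])
    # Quick check that no short period repeats: compare the word with shifted copies.
    for period in range(1, 20):
        if all(word[i] == word[i + period] for i in range(len(word) - period)):
            print(f"period {period} repeats")
            break
    else:
        print("no period up to 19 repeats -- ordered, yet aperiodic")
```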
Rice University’s Early Bird couldn’t care less about the worm; it’s looking to cut megatons of greenhouse gas emissions.
Early Bird is an energy-efficient method for training deep neural networks (DNNs), the form of artificial intelligence (AI) behind self-driving cars, intelligent assistants, facial recognition and dozens more high-tech applications.
Researchers from Rice and Texas A&M University unveiled Early Bird April 29 in a spotlight paper at ICLR 2020, the International Conference on Learning Representations. A study by lead authors Haoran You and Chaojian Li of Rice’s Efficient and Intelligent Computing (EIC) Lab showed Early Bird could use 10.7 times less energy to train a DNN to the same level of accuracy or better than typical training. EIC Lab director Yingyan Lin led the research along with Rice’s Richard Baraniuk and Texas A&M’s Zhangyang Wang.
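The blurb above doesn't describe how Early Bird achieves those savings. As a rough, hedged illustration of one way an "identify the important subnetwork early, then train only it" strategy could be detected, here is a sketch; the batch-norm-based mask, the Hamming-distance check, and the threshold are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch (not the paper's algorithm): watch a channel-pruning mask
# between epochs, and treat the moment it stops changing as the signal that a
# small subnetwork has emerged, so full-size training can stop early.

import torch
import torch.nn as nn

def channel_mask(model: nn.Module, keep_ratio: float = 0.5) -> torch.Tensor:
    """Binary mask over BatchNorm scale factors: keep the largest keep_ratio of them."""
    scales = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    k = max(1, int(len(scales) * keep_ratio))
    threshold = scales.sort(descending=True).values[k - 1]
    return (scales >= threshold).float()

def mask_has_stabilized(prev_mask, curr_mask, tol: float = 0.01) -> bool:
    """True once the fraction of mask entries that changed (Hamming distance) is tiny."""
    if prev_mask is None:
        return False
    changed = (prev_mask != curr_mask).float().mean().item()
    return changed < tol

# Usage idea: after each training epoch, compute curr = channel_mask(model);
# once mask_has_stabilized(prev, curr) is True, prune the low-scale channels
# and spend the remaining training budget on the smaller network only.
```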
Eight years ago a machine learning algorithm learned to identify a cat, and it stunned the world. A few years later AI could accurately translate languages and take down world champion Go players. Now, machine learning has begun to excel at complex multiplayer video games like StarCraft and Dota 2 and subtle games like poker. AI, it would appear, is improving fast.
But how fast is fast, and what’s driving the pace? While better computer chips are key, AI research organization OpenAI thinks we should measure the pace of improvement of the actual machine learning algorithms too.
In a blog post and accompanying paper, published on the arXiv (an open repository for pre-print, or not-yet-peer-reviewed, studies), OpenAI’s Danny Hernandez and Tom Brown say they’ve begun tracking a new measure of machine learning efficiency (that is, doing more with less). Using this measure, they show AI has been getting more efficient at a wicked pace.
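To make "doing more with less" concrete, one simple way to express such an efficiency measure (an illustration only, not the paper's exact methodology) is to track the training compute needed to reach a fixed accuracy target over time and report each year's figure as a multiple of a baseline year.

```python
# Illustrative sketch: express algorithmic efficiency as "how much less compute
# is needed to hit the same target than in the baseline year". The compute
# figures below are made-up placeholders, not numbers from the OpenAI paper.

from dataclasses import dataclass

@dataclass
class Result:
    year: int
    petaflops_days: float  # training compute needed to reach a fixed accuracy target

def efficiency_gains(results: list[Result]) -> dict[int, float]:
    """Return each year's gain as a multiple of the earliest year's compute."""
    baseline = min(results, key=lambda r: r.year)
    return {r.year: baseline.petaflops_days / r.petaflops_days for r in results}

if __name__ == "__main__":
    hypothetical = [Result(2012, 100.0), Result(2015, 25.0), Result(2019, 2.5)]
    for year, gain in efficiency_gains(hypothetical).items():
        print(f"{year}: reaches the target with {gain:.0f}x less compute than the 2012 baseline")
```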
We’re getting closer to technologies that let us exist forever in some way — using data to power our existence in VR, robots, chatbots and holograms. Should we do it?
The DroneGun Tactical, by Australia-based company DroneShield, is like something out of a video game. The rifle-shaped, high-powered antenna “blasts” drones out of the sky with radio-frequency waves.
DroneShield designed the technology to thwart unmanned aerial vehicles (UAVs) with explosives or weapons strapped to them. It works by blocking video transmission and GPS information, making it nearly impossible for the drone’s pilot to regain control.
“Most modern drones are equipped with a protocol that they come back to their operator when the radio frequency signal is jammed and land when radio frequency and GPS are both jammed,” company spokesman Oleg Vornik told the Daily Mail.
Just to solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers, consuming as much energy as three nuclear plants produce in an hour.
A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs.
“Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today,” said Shriram Ramanathan, a professor of materials engineering at Purdue University.
Does artificial intelligence jeopardize employment for humans? What will people do when smart robots join the workforce? AI already plays a role in many of our jobs, and if you have ever searched for information online, you have interacted with an AI. If we extrapolate the evolution of search, we can imagine that AIs will soon become even better at helping us find solutions that have worked in the past and remember which approaches have failed. In this way, working with AIs can be like having a really smart colleague or an expert old-timer on our team. These AI coworkers can also help us experiment with new approaches, because AIs can be creative as well. Their creativity is unlike human creativity, and that uniqueness is precisely what makes it valuable. AIs also make valuable team members by taking on the rote tasks that humans find boring. The share of work that AIs perform is likely to shift over time, but I cannot think of a single job or occupation that will not benefit from collaborating with and delegating to AIs. If we reframe our fears about robots taking human jobs, if we can make use of the AI over our shoulder, if we can see AIs as team members, we will find that the future of work holds opportunities for all of us.
This video on “The Future of Employment with AI” was commissioned by China Mobile as part of an online course. It is one of 36 lecture videos. A version with Chinese subtitles is available at Citic Migu: http://citic.cmread.com/zxHtml/listenBook/listenDetail/listenDetail.html?bookId=10000019313&isShare=1&channel=1
A transcript of the lecture in English is available here: https://drive.google.com/file/d/16dYZ4Vwm796ScRQ0lHrEauwC4M3-Y3gd/view?usp=sharing