
Lean, mean bricklaying machine.


Construction is difficult to automate because of the complex, highly customized work it requires.

But a company called Construction Robotics has developed a one-of-a-kind robot that can lay bricks six times faster than a human.

LONDON — Artificial intelligence researchers don’t like it when you ask them to name the top AI labs in the world, possibly because it’s so hard to answer.

There are some obvious contenders when it comes to commercial AI labs. U.S. Big Tech — Google, Facebook, Amazon, Apple and Microsoft — have all set up dedicated AI labs over the last decade. There’s also DeepMind, which is owned by Google parent company Alphabet, and OpenAI, which counts Elon Musk as a founding investor.


DeepMind, OpenAI, and Facebook AI Research are fighting it out to be the top AI research lab in the world.

Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang-up is what’s going on in the robot’s head,” she adds.

Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a Ph.D. from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called robomorphic computing, uses a robot’s physical layout and intended applications to generate a customized computer that minimizes the robot’s response time.
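The speed-up comes from specialization: a robot’s kinematic structure never changes at run time, so many entries of its dynamics matrices are zero by construction, and a computer designed around that structure can skip them entirely. As a rough software analogy (a minimal sketch, not the CSAIL group’s actual method; the joint count and chain structure are invented), the snippet below precomputes a fixed sparsity pattern for a hypothetical six-joint arm and multiplies only the entries that pattern allows:

```python
# Illustrative sketch (not the authors' implementation): robomorphic
# computing exploits the fact that a robot's morphology is fixed, so
# many entries of its dynamics matrices are zero by design. Custom
# hardware can skip those entries; here we mimic that in software
# with a sparsity pattern precomputed once, "at design time."

import numpy as np

# Hypothetical 6-joint serial arm: joint i only couples to joints
# earlier on the same chain, giving a fixed lower-triangular pattern.
N_JOINTS = 6
pattern = np.tril(np.ones((N_JOINTS, N_JOINTS), dtype=bool))  # assumed structure

# Enumerate the allowed nonzero entries once, up front.
nonzero = [(i, j) for i in range(N_JOINTS) for j in range(N_JOINTS) if pattern[i, j]]

def structured_matvec(M, v):
    """Multiply using only entries the robot's morphology allows to be nonzero."""
    out = np.zeros(len(v))
    for i, j in nonzero:
        out[i] += M[i, j] * v[j]
    return out

M = np.where(pattern, np.random.rand(N_JOINTS, N_JOINTS), 0.0)  # sample matrix
v = np.random.rand(N_JOINTS)
assert np.allclose(structured_matvec(M, v), M @ v)  # same result, fewer multiplies
```

In robomorphic computing this kind of structural knowledge is baked into the hardware itself rather than into a software loop, which is where the latency savings would come from.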

A new method to reason about uncertainty might help artificial intelligence find safer options faster, for example in self-driving cars, according to a new study to be presented at the AAAI conference by researchers at Radboud University, the University of Texas at Austin, the University of California, Berkeley, and the Eindhoven University of Technology.

The researchers have defined a new approach to so-called uncertain partially observable Markov decision processes, or uPOMDPs. In layman’s terms, these are models of the real world that estimate the probability of events. A car, for example, will face many unknown situations when it starts driving. To validate the safety of self-driving cars, extensive calculations are run to analyze how the AI would approach various situations. The researchers argue that with their new approach these modeling exercises become far more realistic, allowing the AI to make better, safer decisions more quickly.
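To make the idea concrete, the heart of any POMDP is a belief update: the agent never observes the true state directly, so it maintains a probability distribution over states and revises it after each action and observation. The sketch below shows that standard update for a toy two-state driving scenario; all transition and sensor probabilities are invented for illustration, and a true uPOMDP would additionally treat those probabilities themselves as uncertain (for example, known only up to intervals), which this sketch does not capture:

```python
# Minimal sketch of a POMDP belief update (a standard Bayes filter step).
# The numbers are made up for illustration.

import numpy as np

states = ["clear", "obstacle"]          # hidden world states
T = np.array([[0.9, 0.1],               # P(s' | s) for the action "drive"
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],               # P(obs | s'): rows = s', cols = obs
              [0.3, 0.7]])              # the sensor is noisy, hence uncertainty

def belief_update(belief, obs_idx):
    """Predict with the transition model, then weight by observation likelihood."""
    predicted = belief @ T                 # distribution after acting
    updated = predicted * O[:, obs_idx]    # correction from the observation
    return updated / updated.sum()         # renormalize to a distribution

b = np.array([0.5, 0.5])                   # start maximally uncertain
b = belief_update(b, obs_idx=1)            # sensor reports "something ahead"
print(dict(zip(states, b.round(3))))
```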

Author and entrepreneur Jeff Wald discusses his book “The End of Jobs: The Rise of On-Demand Workers and The Agile Corporation,” on the latest Seeking Delphi™ podcast. The conclusions may not be what you anticipate from the title…

https://www.youtube.com/watch?v=P9DHdbXcoyM


“There’s a lot of automation that can happen that isn’t a replacement of humans but of mind-numbing behavior.” –Stewart Butterfield

“Automation is going to cause unemployment, and we better prepare for it.” –Mark Cuban

In an early standup routine, Woody Allen once joked that when his father came home to announce that his job on an assembly line had been replaced by a 50-dollar part, what was really disturbing was that his mother immediately ran out and bought one of those parts. As funny as that may be, the potential loss of millions of jobs to automation is no joking matter. Such fears abound as automation, robotics and artificial intelligence continue to invade the world of work. But the scenarios for the future of human employment may be far more nuanced than you might expect.

In this episode of Seeking Delphi™, entrepreneur and author Jeff Wald discusses his view of the future of work, as outlined in his book The End of Jobs: The Rise of On-Demand Workers and the Agile Corporation. You can subscribe to Seeking Delphi™ on Apple Podcasts, PlayerFM, MyTuner, Listen Notes, and YouTube. You can also follow us on Twitter @Seeking_Delphi and Facebook.

It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it’s on the verge of being created.

Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.

Alongside news of AI besting humans at games such as chess, Go and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. “The question about whether superintelligence could be controlled if created is quite old,” says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. “It goes back at least to Asimov’s First Law of Robotics, in the 1940s.”

Scientists at the University of Southampton and University of Edinburgh have developed a flexible underwater robot that can propel itself through water in the same style as nature’s most efficient swimmer—the Aurelia aurita jellyfish.

The findings, published in Science Robotics, demonstrate that the new underwater robot can swim as quickly and efficiently as the squid and jellyfish which inspired its design, potentially unlocking new possibilities for underwater exploration with its lightweight design and soft exterior.

Co-author Dr. Francesco Giorgio-Serchi, Lecturer and Chancellor’s Fellow, at the School of Engineering, University of Edinburgh, said: “The fascination for organisms such as squid, jellyfish and octopuses has been growing enormously because they are quite unique in that their lack of supportive skeletal structure does not prevent them from outstanding feats of swimming.”

Humans are able to find objects in their surroundings and detect some of their properties simply by touching them. While this skill is particularly valuable for blind individuals, it can also help people with no visual impairments to complete simple tasks, such as locating and grabbing an object inside a bag or pocket.

Researchers at Massachusetts Institute of Technology (MIT) have recently carried out a study aimed at replicating this human capability in robots, allowing them to understand where objects are located simply by touching them. Their paper, pre-published on arXiv, highlights the advantages of developing robots that can interact with their surrounding environment through touch rather than merely through vision and audio processing.

“The goal of our work was to demonstrate that with high-resolution tactile sensing it is possible to accurately localize known objects even from the first contact,” Maria Bauza, one of the researchers who carried out the study, told TechXplore. “Our approach makes an important leap compared to previous works on tactile localization, as we do not rely on any other external sensing modality (like vision) or previously collected tactile data related to the manipulated objects. Instead, our technique, which was trained directly in simulation, can localize known objects from the first touch, which is paramount in real robotic applications where real data collection is expensive or simply unfeasible.”
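As a loose illustration of the general recipe (not the MIT system itself, which trains learned models on simulated tactile data), one can imagine rendering simulated imprints of a known object at many candidate poses offline and then, at the first real touch, picking the pose whose simulated imprint best matches the sensed one. Everything below (the pose grid, the 8×8 “tactile images,” the random stand-in for a contact simulator) is hypothetical:

```python
# Hedged illustration of first-touch localization for a known object:
# precompute simulated tactile imprints for candidate poses, then match
# a real touch against them. Random arrays stand in for a real contact
# simulator and a real tactile sensor.

import numpy as np

rng = np.random.default_rng(0)

N_POSES = 500
poses = rng.uniform(-0.05, 0.05, size=(N_POSES, 3))   # hypothetical (x, y, theta)
simulated_imprints = rng.random((N_POSES, 8, 8))      # stand-in for simulated touches

def localize(real_imprint):
    """Return the candidate pose whose simulated imprint best matches the touch."""
    errors = np.linalg.norm(
        simulated_imprints.reshape(N_POSES, -1) - real_imprint.ravel(), axis=1
    )
    best = int(np.argmin(errors))
    return poses[best], errors[best]

touch = simulated_imprints[42] + 0.01 * rng.standard_normal((8, 8))  # noisy reading
pose, err = localize(touch)
print("estimated pose:", pose.round(4), "match error:", round(float(err), 4))
```

A nearest-neighbor match like this is only a stand-in for the learned, simulation-trained models the quote describes, but it conveys why no prior real-world tactile data is needed: all the pose hypotheses are generated synthetically before the robot ever touches the object.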