A team of researchers from the Harbin Institute of Technology along with partners at the First Affiliated Hospital of Harbin Medical University, both in China, has developed a tiny robot that can ferry cancer drugs through the blood-brain barrier (BBB) without setting off an immune reaction. In their paper published in the journal Science Robotics, the group describes their robot and tests with mice. Junsun Hwang and Hongsoo Choi, with the Daegu Gyeongbuk Institute of Science and Technology in Korea, have published a Focus piece in the same journal issue on the work done by the team in China.

For many years, medical scientists have sought ways to deliver drugs to the brain to treat health conditions such as brain cancers. Because the brain is protected by the skull, it is extremely difficult to inject them directly. Researchers have also been stymied in their efforts by the BBB—a filtering mechanism in the capillaries that supply blood to the brain and that blocks foreign substances from entering. Thus, simply injecting drugs into the bloodstream is not an option. In this new effort, the researchers used a defense cell type that naturally passes through the BBB to carry drugs to the brain.

To build their tiny robots, the researchers exposed groups of white blood cells called neutrophils to tiny bits of magnetic nanogel particles coated with fragments of E. coli material. Upon exposure, the neutrophils naturally engulfed the particles, mistaking them for E. coli bacteria. The microrobots were then injected into the bloodstream of a test mouse with a cancerous tumor. The team then applied a magnetic field to direct the robots through the BBB, where they were not attacked because the immune system identified them as normal neutrophils, and into the brain and the tumor. Once there, the robots released their cancer-fighting drugs.

Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent.


Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.

He notes that the imitation of human thinking is not the sole goal of machine learning—the engineering field that underlies recent progress in AI—or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

“People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don’t have that, but people are talking as if we do.”

Colonel Mark M. Zais, chief data scientist at United States Special Operations Command (USSOCOM), stresses the importance of AI-related education in the DOD. In his first-place Strategy Article from the 2020 CJCS Strategic Essay Competition, he says, “Without that education, we face a world where senior leaders use AI-enabled technologies to make decisions related to national security without a full grasp of the tools that they—and our adversaries—possess.”


With the release of its first artificial intelligence (AI) strategy in 2019, the Department of Defense (DOD) formalized the increased use of AI technology throughout the military, challenging senior leaders to create “organizational AI strategies” and “make related resource allocation decisions.”1 Unfortunately, most senior leaders currently have limited familiarity with AI, having developed their skills in tactical counterinsurgency environments, which reward strength (physical and mental), perseverance, and diligence. Some defense scholars have advocated a smarter military, emphasizing intellectual human capital and arguing that cognitive ability will determine success in strategy development, statesmanship, and decisionmaking.2 AI might complement that ability but cannot be a substitute for it. Military leaders must leverage AI to help them adapt and be curious. As innovative technologies with AI applications increasingly become integral to DOD modernization and near-peer competition, senior leaders’ knowledge of AI is critical for shaping and applying our AI strategy and creating properly calibrated expectations.

War is about decisionmaking, and AI enables the technology that will transform how humans and machines make those decisions.3 Successful use of this general-purpose technology will require senior leaders who truly understand its capabilities and can demystify the hyperbole.4 Within current AI strategy development and application, many practitioners have a palpable sense of dread as we crest the waves of a second AI hype cycle, seemingly captained by novices of the AI seas.5 In-house technical experts find it difficult to manage expectations and influence priorities, clouded by buzzwords and stifled by ambitions for “quick wins.” The importance of AI-related education increases with AI aspirations and the illusion of progress. Without that education, we face a world where senior leaders use AI-enabled technologies to make decisions related to national security without a full grasp of the tools that they—and our adversaries—possess. This would be equivalent to a combat arms officer making strategic military landpower decisions without the foundations of military education in maneuver warfare and practical experience.

Strategic decisionmaking in a transformative digital environment requires comparably transformative leadership. Modernization of the military workforce should parallel modernization of equipment and technology. In the short term, senior leaders require executive AI education that equips them with enough knowledge to distill problems that need AI solutions and that provides informed guidance for customized solutions. With the ability to trust internal expertise, the military can avoid overreliance on consultants and vendors, following Niccolò Machiavelli’s warning against dependence on auxiliary troops.6 In the long term, military education should give the same attention to AI that is provided to traditional subjects such as maneuver warfare and counterinsurgency operations. Each steppingstone of military education should incorporate subjects from the strategic domain, including maneuver warfare, information warfare, and artificial intelligence.

Pugs, Ferraris, mountains, brunches, beaches, and babies — Instagram is full of them. In fact, it’s become one of the largest image databases on the planet over the last decade and the company’s owner, Facebook, is using this treasure trove to teach machines what’s in a photo.

Facebook announced on Thursday that it had built an artificial intelligence program that can “see” what it is looking at. It did this by training the program on over 1 billion public images from Instagram.

The “computer vision” program, nicknamed SEER, outperformed existing AI models in an object recognition test, Facebook said.

In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.

The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a Ph.D. student at Georgia Tech.

The researchers let the cell clusters assemble in the right proportions and then used micro-manipulation tools to move or eliminate cells — essentially poking and carving them into shapes like those recommended by the algorithm. The resulting cell clusters showed the predicted ability to move over a surface in a nonrandom way.

The team dubbed these structures xenobots. While the prefix was derived from the Latin name of the African clawed frogs (Xenopus laevis) that supplied the cells, it also seemed fitting because of its relation to xenos, the ancient Greek for “strange.” These were indeed strange living robots: tiny masterpieces of cell craft fashioned by human design. And they hinted at how cells might be persuaded to develop new collective goals and assume shapes totally unlike those that normally develop from an embryo.

But that only scratched the surface of the problem for Levin, who wanted to know what might happen if embryonic frog cells were “liberated” from the constraints of both an embryonic body and researchers’ manipulations. “If we give them the opportunity to re-envision multicellularity,” Levin said, then his question was, “What is it that they will build?”

AI plays an important role across our apps — from enabling AR effects, to helping keep bad content off our platforms and better supporting our communities through our COVID-19 Community Help hub. As AI-powered services become more present in everyday life, it’s becoming even more important to understand how AI systems may affect people around the world and how we can strive to ensure the best possible outcomes for everyone.

Several years ago, we created an interdisciplinary Responsible AI (RAI) team to help advance the emerging field of Responsible AI and spread the impact of such work throughout Facebook. The Fairness team is part of RAI, and works with product teams across the company to foster informed, context-specific decisions about how to measure and define fairness in AI-powered products.

In the future, Tesla’s Autopilot and Full Self-Driving suite are expected to handle challenging circumstances on the road with ease. These include inner-city driving, with factors such as pedestrians walking about, motorcyclists weaving around cars, and other potential edge cases. When Autopilot is able to handle these cases confidently, the company could roll out ambitious projects such as Elon Musk’s Robotaxi Network.

Tesla’s FSD Beta, at least based on videos of the system in action, seems to be designed for maximum safety. Members of the first batch of testers for the FSD Beta have shared clips of the advanced driver-assist system handling even challenging inner-city streets in places such as San Francisco with caution. But even these difficult roads pale in comparison to the traffic situation in other parts of the world.

In Southeast Asian countries such as Vietnam, for example, traffic tends to be very challenging, to the point where even experienced human drivers could experience anxiety when navigating through inner-city roads. The same is true for other countries like India or the Philippines, where road rules are loosely followed. In places such as these, Autopilot still has some ways to go, as seen in a recently shared video from a Tesla Model X owner.

Summary: The BrainGate brain-machine interface is able to transmit signals at single-neuron resolution with full broadband fidelity, without physically tethering the user to a decoding system.

Source: Brown University.

Brain-computer interfaces (BCIs) are an emerging assistive technology, enabling people with paralysis to type on computer screens or manipulate robotic prostheses just by thinking about moving their own bodies. For years, investigational BCIs used in clinical trials have required cables to connect the sensing array in the brain to computers that decode the signals and use them to drive external devices.

In 2020, TSMC spent a record $18 billion on building new factories for its chips. TSMC has just announced that it will spend $100 billion on new factories over the next three years. This will radically change the chip landscape. Many other companies, including Samsung and Intel, are increasing their spending as well.

Of course, at some point there will be a chip glut again, but this greatly increased chip capacity will change the world we live in. It will also bring AGI (artificial general intelligence) that much closer to reality, since all this money gives companies an incentive to fund R&D on smaller transistors and other improvements.
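As a quick back-of-the-envelope check on the scale of that ramp, using only the figures quoted above (the $18 billion 2020 capex and the $100 billion three-year plan), a short sketch:

```python
# Rough scale of the capacity ramp described in the article.
spend_2020 = 18e9        # TSMC's record 2020 factory spending, in dollars
planned_total = 100e9    # announced spending over the next three years
years = 3

avg_per_year = planned_total / years      # average annual spend going forward
increase = avg_per_year / spend_2020      # multiple of the 2020 level

print(f"Average planned spend: ${avg_per_year / 1e9:.1f}B per year")
print(f"Roughly {increase:.1f}x the record 2020 level")
```

In other words, the announced plan averages out to well over $30 billion per year, nearly double the record set in 2020, which gives a sense of why the move is expected to reshape the chip landscape.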