
China is deploying robots and drones to remotely disinfect hospitals, deliver food and enforce quarantine restrictions as part of the effort to fight coronavirus.

Chinese state media has reported that drones and robots are being used by the government to cut the risk of person-to-person transmission of the disease.

Some 780 million people in China are under some form of residential lockdown. Wuhan, the city where the viral outbreak began, has been sealed off from the outside world for weeks.

With some reports predicting the precision agriculture market will reach $12.9 billion by 2027, there is an increasing need to develop sophisticated data-analysis solutions that can guide management decisions in real time. A new study from an interdisciplinary research group at the University of Illinois offers a promising approach to efficiently and accurately process precision ag data.

The majority of these AI ethics reports have focused on outlining high-level principles that should guide those building these systems. Whether by chance or by design, the principles they have coalesced around closely resemble those at the heart of medical ethics. But writing in Nature Machine Intelligence, Brent Mittelstadt from the University of Oxford points out that AI development is a very different beast to medicine, and a simple copy and paste won’t work.

The four core principles of medical ethics are respect for autonomy (patients should have control over how they are treated), beneficence (doctors should act in the best interest of patients), non-maleficence (doctors should avoid causing harm) and justice (healthcare resources should be distributed fairly).

The more than 80 AI ethics reports published are far from homogeneous, but similar themes of respect, autonomy, fairness, and prevention of harm run through most. And these seem like reasonable principles to apply to the development of AI. The problem, says Mittelstadt, is that while principles are an effective tool in the context of a discipline like medicine, they simply don’t make sense for AI.

Robotic spacecraft will be able to communicate with the dish using radio waves and lasers.

Surrounded by California desert, NASA officials broke ground Tuesday, Feb. 11, on a new antenna for communicating with the agency’s farthest-flung robotic spacecraft. Part of the Deep Space Network (DSN), the 112-foot-wide (34-meter-wide) antenna dish being built represents a future in which more missions will require advanced technology, such as lasers capable of transmitting vast amounts of data from astronauts on the Martian surface. As part of its Artemis program, NASA will send the first woman and next man to the Moon by 2024, applying lessons learned there to send astronauts to Mars.

Using massive antenna dishes, the agency talks to more than 30 deep space missions on any given day, including many international missions. As more missions have launched and with more in the works, NASA is looking to strengthen the network. When completed in 2½ years, the new dish will be christened Deep Space Station-23 (DSS-23), bringing the DSN’s number of operational antennas to 13.
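The announcement itself stops at mission counts and dish sizes, but the underlying physics is simple to sketch: a dish's gain grows with the square of its diameter divided by the wavelength, which is why big apertures and ever-shorter wavelengths (ultimately lasers) support higher data rates. The snippet below estimates the gain of a 34-meter reflector at two radio bands commonly used for deep-space links; the frequencies, efficiency figure, and gain formula are standard antenna-theory assumptions, not details from NASA's announcement.

```python
import math

C = 299_792_458.0  # speed of light, m/s
DISH_DIAMETER_M = 34.0  # DSS-23 dish size reported in the article

def parabolic_gain_db(diameter_m: float, wavelength_m: float, efficiency: float = 0.6) -> float:
    """Approximate gain of a parabolic dish: G = eta * (pi * D / lambda)^2."""
    gain_linear = efficiency * (math.pi * diameter_m / wavelength_m) ** 2
    return 10 * math.log10(gain_linear)

# Example carrier frequencies (assumptions, not from the article):
# X-band and Ka-band are commonly used for deep-space radio links.
for label, freq_hz in [("X-band (8.4 GHz)", 8.4e9), ("Ka-band (32 GHz)", 32e9)]:
    wavelength_m = C / freq_hz
    print(f"{label}: ~{parabolic_gain_db(DISH_DIAMETER_M, wavelength_m):.1f} dBi gain")
```

The roughly 12 dB jump from X-band to Ka-band for the same dish illustrates why newer antennas emphasize higher frequencies, and laser links push the same trend much further.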

Engineers from Johns Hopkins have studied how snakes move to inform the design of a nimble new robot. It is hoped that the development could lead to search-and-rescue bots able to tackle all kinds of obstacles with ease.

“We look to these creepy creatures for movement inspiration because they’re already so adept at stably scaling obstacles in their day-to-day lives,” said senior author on the study, Chen Li. “Hopefully our robot can learn how to bob and weave across surfaces just like snakes.”

Observing how a variable kingsnake climbed steps of varying heights and surfaces, the researchers noted that the snake combined lateral undulation with cantilevering. When faced with a step, the reptile seemed to partition its body into three sections: the front and rear both moved back and forth while the middle section remained stiff.
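The excerpt describes the behavior but not the team's actual controller, so the following is only a toy sketch of the three-section idea: a chain of joints whose front and rear thirds run a traveling lateral wave while the middle third is held rigid to cantilever over a step. The joint count, amplitude, and frequencies here are made up for illustration.

```python
import math

def three_section_gait(num_joints: int, t: float,
                       amplitude_rad: float = 0.5,
                       spatial_freq: float = 2 * math.pi,
                       temporal_freq: float = 1.0) -> list:
    """Toy joint-angle pattern inspired by the observation above:
    the front and rear thirds undulate laterally while the middle
    third is held stiff to cantilever over a step."""
    angles = []
    third = num_joints // 3
    for i in range(num_joints):
        if third <= i < num_joints - third:
            angles.append(0.0)  # middle section: held rigid
        else:
            # lateral undulation: traveling sine wave along the body
            phase = spatial_freq * i / num_joints - 2 * math.pi * temporal_freq * t
            angles.append(amplitude_rad * math.sin(phase))
    return angles

# Example: a hypothetical 12-joint robot sampled at t = 0.25 s
print([round(a, 2) for a in three_section_gait(12, 0.25)])
```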

Researchers at Keio University and the National Institute of Information and Communications Technology (NICT) in Japan have recently introduced a new design for a terahertz wave radar based on a technique known as leaky-wave coherence tomography. Their paper, published in Nature Electronics, could help to solve some of the limitations of existing wave radar.

The use of radar, particularly millimeter-wave radar, has increased significantly over the past few years, especially in the development of smart and self-driving vehicles. The distance and angular resolutions of radar are typically limited by their bandwidth and wavelength, respectively.

Terahertz waves, which have higher frequencies and shorter wavelengths than millimeter waves, allow for the development of radar systems with a smaller footprint and higher resolution. As wavelengths become shorter, however, the attenuation resulting from wave diffraction rapidly increases.
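The trade-off described above follows from two standard relations: range resolution is roughly c/2B (set by bandwidth B), and beamwidth is roughly λ/D (set by wavelength λ and aperture D). The sketch below puts illustrative numbers on this for an assumed 77 GHz millimeter-wave radar and an assumed 300 GHz terahertz radar; none of the parameters come from the Keio/NICT paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Radar range resolution: delta_R = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

def beamwidth_rad(frequency_hz: float, aperture_m: float) -> float:
    """Diffraction-limited beamwidth: theta ~ lambda / D."""
    return (C / frequency_hz) / aperture_m

# Assumed example systems (not from the paper): a 77 GHz automotive
# millimeter-wave radar with 4 GHz of bandwidth, and a 300 GHz
# terahertz radar with 40 GHz of bandwidth, both with 5 cm apertures.
for label, freq_hz, bw_hz in [("77 GHz mmW", 77e9, 4e9),
                              ("300 GHz THz", 300e9, 40e9)]:
    dr_cm = range_resolution_m(bw_hz) * 100
    theta_deg = math.degrees(beamwidth_rad(freq_hz, 0.05))
    print(f"{label}: range resolution ~{dr_cm:.1f} cm, beamwidth ~{theta_deg:.1f} deg")
```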

By definition, posthumanism (I choose to call it ‘cyberhumanism’) is set to replace transhumanism at center stage circa 2035. By then, mind uploading could become a reality: gradual neuronal replacement, rapid advancements in Strong AI, massively parallel computing, and nanotechnology would allow us to directly connect our brains to the Cloud-based infrastructure of the Global Brain. Via interaction with our AI assistants, the GB will know us better than we know ourselves in all respects, so mind transfer, or rather “mind migration,” for billions of enhanced humans would be seamless sometime by mid-century.

I hear this mantra over and over again — we don’t know what consciousness is. Clearly, there’s no consensus here, but in the context of the topic discussed, I would summarize my views as follows: Consciousness is non-local and quantum computational by nature. There’s only one Universal Consciousness. We individualize our conscious awareness through the filter of our nervous system, our “local” mind, our very inner subjectivity; but consciousness itself, the self in a big sense, our “core” self, is universal, and knowing it through experience has been called enlightenment, illumination, awakening, or transcendence through the ages.

Any container with a sufficiently integrated network of information patterns, with a certain optimal complexity, especially complex dynamical systems with biological or artificial brains (say, the coming AGIs), could be filled with consciousness at large in order to host an individual “reality cell,” “unit,” or “node” of consciousness. This kind of individuated unit of consciousness is always endowed with free will within the constraints of the applicable set of rules (“physical laws”), influenced by the larger consciousness system dynamics. Isn’t it too naïve to presume that Universal Consciousness would instantiate phenomenality only in the form of “bio”-logical avatars?

I am not naive — I’ve worked as an aerospace engineer for 35 years — I realize that PR can differ from reality. However, this indication gives me some hope:

The draft recommendations emphasized human control of AI systems: “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems,” it reads.

This is far from a ban on killer robots. However, given how many advances are being overturned in the US federal government (for example, the US will now use landmines after more than 30 years of not employing them in war), it is somewhat encouraging.

As always, the devil is in the details.


Sources say the list will closely follow an October report from a defense advisory board.

The Defense Department will soon adopt a detailed set of rules to govern how it develops and uses artificial intelligence, officials familiar with the matter told Defense One.

A draft of the rules was released by the Defense Innovation Board, or DIB, in October as “Recommendations on the Ethical Use of Artificial Intelligence.” Sources indicated that the Department’s policy will follow the draft closely.

A BONKERS Russian billionaire claims he’ll make you immortal by 2045.

Internet businessman Dmitry Itskov, 38, is bankrolling a far-fetched plan to upload people’s personalities to artificial brains.

These “brains” can then be jammed into robots or holograms, allowing us to live on forever as artificial versions of ourselves, Dmitry claims.