
And they say computers can’t create art.


In 1642, famous Dutch painter Rembrandt van Rijn completed a large painting called Militia Company of District II under the Command of Captain Frans Banninck Cocq — today, the painting is commonly referred to as The Night Watch. It was the height of the Dutch Golden Age, and The Night Watch brilliantly showcased that.

The painting measured 363 cm × 437 cm (11.91 ft × 14.34 ft) — so big that the figures in it were almost life-sized, but that’s only the start of what makes it so special. Rembrandt made dramatic use of light and shadow and also created the perception of motion in what would normally be a stationary military group portrait. Unfortunately, though, the painting was trimmed in 1715 to fit between two doors at Amsterdam City Hall.

For over 300 years, the painting has been missing 60 cm (2 ft) from the left, 22 cm from the top, 12 cm from the bottom, and 7 cm from the right. Now, computer software has restored the missing parts.

## FUTURE TENSE (RN ABC, AUDIO, 29 MIN) • JUN 27, 2021

# Some foresight about the future of foresight

*Trying to predict the future is a timeless and time-consuming pursuit.*

Artificial Intelligence is increasingly being enlisted to the cause, but so too are “super-forecasters” — a new coterie of individuals with remarkable predictive powers.

But what are their limits and what does their rise say about the still popular notion of collective intelligence — the wisdom of the crowd?

Future Tense looks at the changing role of humans in forecasting.

GUESTS

Associate Professor Oguz A. Acar — City, University of London

Dr Steven Rieber — Program Manager, Intelligence Advanced Research Projects Activity (IARPA), US

Professor Michael Horowitz — Director, Perry World House, University of Pennsylvania

Bruce Muirhead — CEO, Mindhive

Camilla Grindheim Larsen — Researcher and Consultant, Bergen Public Library (Norway)

Duration: 29min 6sec.

Broadcast: Sun 27 Jun 2021, 12:30pm.

SEE ALSO

Mindhive

https://www.web.mindhive.org

Thanks to Folkstone Design Inc. & Zoomers of the Sunshine Coast, BC

#Forecasting #AI #Humans #MindHive


Researchers at Google Brain announced a deep-learning computer vision (CV) model containing two billion parameters. The model was trained on three billion images and achieved 90.45% top-1 accuracy on ImageNet, setting a new state-of-the-art record.

The team described the model and experiments in a paper published on arXiv. The model, dubbed ViT-G/14, is based on Google’s recent work on Vision Transformers (ViT). ViT-G/14 outperformed previous state-of-the-art solutions on several benchmarks, including ImageNet, ImageNet-v2, and VTAB-1k. On the few-shot image recognition task, the accuracy improvement was more than five percentage points. The researchers also trained several smaller versions of the model to investigate a scaling law for the architecture, noting that the performance follows a power-law function, similar to Transformer models used for natural language processing (NLP) tasks.

First described by Google researchers in 2017, the Transformer architecture has become the leading design for NLP deep-learning models, with OpenAI’s GPT-3 being one of the most famous. Last year, OpenAI published a paper describing scaling laws for these models. By training many similar models of different sizes and varying the amount of training data and computing power, OpenAI determined a power-law function for estimating a model’s accuracy. In addition, OpenAI found that not only do large models perform better, they are also more compute-efficient.
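For readers who want the gist of such a scaling law in code, the sketch below fits a saturating power law to hypothetical (compute, error) pairs. The functional form and every number here are illustrative assumptions, not values from the Google or OpenAI papers.

```python
# Illustrative only: fit a saturating power law, error = a * C^(-b) + floor,
# to made-up (relative compute, top-1 error) measurements. None of these
# numbers come from the ViT-G/14 or OpenAI papers.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(c, a, b, floor):
    """Error falls as a power of compute, saturating at an irreducible floor."""
    return a * c ** (-b) + floor

compute = np.array([1.0, 10.0, 100.0, 1_000.0, 10_000.0])  # relative units
error = np.array([0.40, 0.30, 0.22, 0.17, 0.14])           # hypothetical

(a, b, floor), _ = curve_fit(scaling_law, compute, error, p0=(0.3, 0.2, 0.05))
print(f"error ≈ {a:.2f} * compute^(-{b:.2f}) + {floor:.2f}")
```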

It’s hard to see more than a handful of stars from Princeton University, because the lights from New York City, Princeton, and Philadelphia prevent our sky from ever getting pitch black, but stargazers who get into more rural areas can see hundreds of naked-eye stars — and a few smudgy objects, too.

The biggest smudge is the Milky Way itself, the billions of stars that make up our spiral galaxy, which we see edge-on. The smaller smudges don’t mean that you need glasses, but that you’re seeing tightly packed groups of stars. One of the best-known of these “clouds” or “clusters” — groups of stars that travel together — is the Pleiades, also known as the Seven Sisters. Clusters are stellar nurseries where thousands of stars are born from clouds of gas and dust and then disperse across the Milky Way.

For centuries, scientists have speculated about whether these clusters always form tight clumps like the Pleiades, spread over only a few dozen light-years.

A team of researchers working at Johannes Kepler University has developed an autonomous drone with a new type of technology to improve search-and-rescue efforts. In their paper published in the journal Science Robotics, the group describes their drone modifications. Andreas Birk with Jacobs University Bremen has published a Focus piece in the same journal issue outlining the work by the team in Austria.

Finding people lost (or hiding) in a forest is difficult because of the tree cover. Observers in planes and helicopters have difficulty seeing through the canopy to the ground below, where people might be walking or even lying down. The same problem exists for thermal applications: heat sensors cannot pick up readings adequately through the canopy. Efforts have been made to add drones to search-and-rescue operations, but they suffer from the same problems, because they are remotely controlled by pilots who use them to search the ground below. In this new effort, the researchers added technology that both helps see through the tree canopy and highlights people who might be under it.

The new technology is based on what the researchers describe as an airborne optical sectioning (AOS) algorithm: it uses the power of a computer to defocus occluding objects such as the tops of trees. The second part of the new device uses thermal imaging to highlight the heat emitted by a warm body. A machine-learning application then determines whether the heat signals come from humans, animals, or other sources. The new hardware was then affixed to a standard autonomous drone. The computer in the drone uses both locational positioning to determine where to search and cues from the AOS and thermal sensors. If a possible match is made, the drone automatically moves closer to the target to get a better look. If its sensors indicate a match, it signals the research team, giving them the coordinates. In testing the newly outfitted drone across 17 field experiments, the researchers found it was able to locate 38 of 42 people hidden below tree canopies.
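For intuition, the core of airborne optical sectioning can be sketched as integral imaging: register many aerial images so the ground plane lines up, then average them, so occluders above that plane smear out. The toy Python below assumes the per-image ground-plane shifts are already known from the drone’s pose; it is a sketch of the idea, not the team’s implementation.

```python
# Toy sketch of the AOS integral-imaging idea (not the team's code).
import numpy as np

def aos_integral(images, ground_shifts):
    """Shift each aerial image so the ground plane aligns, then average.

    Objects at the focal (ground) plane reinforce each other and stay
    sharp; occluders such as tree canopies, seen with different parallax
    from each viewpoint, blur out.
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (dy, dx) in zip(images, ground_shifts):
        acc += np.roll(img.astype(np.float64), shift=(dy, dx), axis=(0, 1))
    return acc / len(images)

def warm_spots(thermal_integral, threshold=35.0):
    """Flag pixels whose integrated thermal reading exceeds a threshold;
    a downstream classifier would decide human vs. animal vs. other."""
    return np.argwhere(thermal_integral > threshold)
```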

1. Bigelow must be pissed.

2. A giant ring like that would make a hell of a spacecraft to get around the solar system.

The Orbital Assembly Corporation, a space construction firm run by NASA veterans, announced in a press statement today, June 24, that it has successfully demonstrated its technology for developing the world’s first space hotel.

The company carried out the demonstration during the official opening of its Fontana, California, facility, which will serve as its main headquarters as it aims to make luxury space holidays a reality before 2030.

Large-scale space constructions built by semi-autonomous robots

Orbital Assembly, which bills itself as the “first large-scale space construction company,” is developing semi-autonomous robot builders that will eventually be sent to space to build large-scale structures, such as its planned Earth-orbiting space hotel.

Over the past few decades, roboticists and computer scientists have developed artificial systems that replicate biological functions and human abilities in increasingly realistic ways. This includes artificial intelligence systems, as well as sensors that can capture various types of sensory data.

When trying to understand properties of objects and how to grasp them or handle them, humans often rely on their sense of touch. Artificial sensing systems that replicate human touch can thus be of great value, as they could enable the development of better performing and more responsive robots or prosthetic limbs.

Researchers at Sungkyunkwan University and Hanyang University in South Korea have recently created an artificial tactile sensing system that mimics the way in which humans recognize objects in their surroundings via their sense of touch. This system, presented in a paper published in Nature Electronics, uses sensors to capture data associated with the tactile properties of objects.
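The excerpt does not spell out the recognition pipeline itself, so the snippet below is only a generic illustration of the idea: learn a mapping from raw tactile readings to object identity. The synthetic data, the 64-taxel layout, and the random-forest model are all assumptions, not the paper’s method.

```python
# Generic illustration only: classify objects from (synthetic) tactile frames.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_taxels = 600, 64              # pretend: 64 pressure points/touch
y = rng.integers(0, 3, size=n_samples)     # three stand-in object classes
X = rng.normal(size=(n_samples, n_taxels)) # fake tactile frames...
X += y[:, None] * 0.5                      # ...nudged so classes are separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```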

Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them—and for the first time, the technology works regardless of seasonal changes to that terrain.

Details about the process were published on June 23 in the journal Science Robotics.

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
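As a rough sketch of that comparison step, classical VTRN can be posed as template matching: slide the onboard camera view over a georeferenced satellite map and take the best correlation. The snippet below shows that brittle baseline (the one seasonal change defeats, and which the Caltech work improves on with a learned representation); the file names are placeholders.

```python
# Classical VTRN baseline: normalized cross-correlation template matching.
# File names are placeholders; this is not the Caltech method itself.
import cv2

satellite = cv2.imread("satellite_map.png", cv2.IMREAD_GRAYSCALE)  # big map
onboard = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)      # small view

scores = cv2.matchTemplate(satellite, onboard, cv2.TM_CCOEFF_NORMED)
_, best, _, (x, y) = cv2.minMaxLoc(scores)
print(f"best match at map pixel ({x}, {y}), correlation {best:.2f}")
```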

Harness the power of AI to quickly turn simple brushstrokes into realistic landscape images for backgrounds, concept exploration, or creative inspiration. 🖌️

The NVIDIA Canvas app lets you create as quickly as you can imagine.

NVIDIA GPUs accelerate your work with incredible boosts in performance. Less time staring at pinwheels of death means bigger workloads, more features, and faster creation than ever. Welcome to NVIDIA Studio, and to your new, more creative process. RTX Studio laptops and desktops are purpose-built for creators, providing the best performance for video editing, 3D animation, graphic design, and photography.

For more information about NVIDIA Studio, visit: https://www.nvidia.com/studio

CONNECT WITH US ON SOCIAL
Instagram: https://www.instagram.com/NVIDIACreators
Twitter: https://twitter.com/NVIDIACreators
Facebook: https://www.facebook.com/NVIDIACreators