
A high-power laser, an optimized optical pathway, patented adaptive-resolution technology, and smart laser-scanning algorithms have enabled UpNano, a Vienna-based high-tech company, to deliver high-resolution 3D printing as never seen before.

“Parts with nano- and microscale features can now be printed across 12 orders of magnitude—within times never achieved previously. This has been accomplished by UpNano, a spin-out of the TU Wien, which developed a high-end two-photon polymerization (2PP) 3D-printing system that can produce polymeric parts with a volume ranging from 10⁰ to 10¹² cubic micrometers. At the same time, the printer allows for nano- and microscale resolution,” the company said in a statement.

Recently the company demonstrated this remarkable capability by printing four models of the Eiffel Tower ranging from 200 micrometers to 4 centimeters—with perfect representation of all minuscule structures—within 30 to 540 minutes. With this, 2PP 3D printing is ready for applications in R&D and industry that until now seemed impossible.

Teeny-tiny living robots made their world debut earlier this year. These microscopic organisms are composed entirely of frog stem cells, and, thanks to a special computer algorithm, they can take on different shapes and perform simple functions: crawling, traveling in circles, moving small objects — or even joining with other organic bots to collectively perform tasks.


The world’s first living robots may one day clean up our oceans.

Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more. Will GPT-3 or an equivalent be used to deepfake human understanding?
02:40 What’s missing in AI atm? Unified coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand — what’s missing?
08:35 Symbol grounding — does GPT-3 have it?
09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
11:13 GPT-3 temperature parameter. Strange output?
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can’t write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data — video, audio, text etc
26:00 GPT-3 a universal chat-bot — conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civilization
42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience — it can’t plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters?
47:24 GPT3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google GShard with 600 billion input parameters — Amazon may be doing something similar — future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world — no reason why GPT-3 can’t be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation — Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can’t describe a consistent reality without contradictions
1:06:04 Stevan Harnad’s understanding of computation
1:08:32 Causation / answering ‘why’ questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain — would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
1:19:56 Software/OS as spirit — spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models — parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features — predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 ‘Category’ is a useful concept — gradients are often hard to compute — so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self preservation, dunbar numbers
1:44:10 Is g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If response is an intelligible & testable implementation plan then it passes?
1:49:18 The term ‘general intelligence’ inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
1:52:15 How we perceive color — natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting currently untestable theories/ideas (that may be testable by science once we develop the precise enough instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level of the universe?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction.
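The temperature parameter discussed at 11:13 has a concrete mechanical meaning: it rescales the model's output logits before sampling. A minimal sketch with made-up token scores (illustrative only, not GPT-3's actual vocabulary or sampling code):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Dividing logits by T reshapes the distribution: T -> 0
    # approaches greedy argmax; large T flattens toward uniform.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]                        # made-up token scores
_, cold = sample_with_temperature(logits, 0.1, rng)
_, hot = sample_with_temperature(logits, 10.0, rng)
# cold concentrates almost all mass on the top token;
# hot spreads mass across tokens, which is where the
# "strange output" at high temperature comes from.
```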

Singapore-based blockchain data firm CyberVein has become one of 12 firms participating in the construction of China’s Hainan Wenchang International Aerospace City. Construction commenced last month, with the site previously hosting a satellite launch center. Described as “China’s first aerospace cultural and tourism city,” it will be a hub for the development of aerospace products and support services intended for use in Chinese spacecraft and satellite launch missions. The 12-million-square-meter facility will host the country’s first aerospace super-computing center, and will focus on developing 40 technological areas including big data, satellite remote sensing and high precision positioning technology. CyberVein will work alongside major Chinese firms, including Fortune 500 companies Huawei and Kingsoft Cloud, and will leverage its blockchain, artificial intelligence and big data technologies to support the development of the city’s Smart Brain Planning and Design Institute.


Blockchain firm CyberVein is partnering with the Chinese government to build a blockchain-powered governance system for its aerospace ‘smart city.’


Like a comic book come to life, researchers at Stanford University have developed a kind of X-ray vision—only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.

“A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”

This technique complements other vision systems that can see through barriers at the microscopic scale—for applications in medicine—because it’s more focused on large-scale situations, such as navigating self-driving cars in fog or heavy rain and satellite imaging of the surface of Earth and other planets through hazy atmospheres.

A pair of Danish computer scientists have solved a longstanding mathematics puzzle that had lain dormant for decades, with no substantial progress made on it since the 1990s.

The abstract problem in question is part of what’s called graph theory, and specifically concerns the challenge of finding an algorithm to resolve the planarity of a dynamic graph. That might sound a bit daunting, so if your graph theory is a little rusty, there’s a much more fun and accessible way of thinking about the same inherent ideas.

A puzzle called the three utilities problem was published as far back as 1913 – although the mathematical concepts behind it can probably be traced back much further.
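The puzzle's impossibility follows from Euler's formula: in a simple, connected bipartite planar graph every face is bounded by at least four edges, which forces e ≤ 2v − 4. A quick check (the function name is ours, for illustration):

```python
def bipartite_planarity_bound(v, e):
    # Necessary condition for a simple, connected bipartite planar
    # graph: bipartite graphs have no odd cycles, so every face has
    # at least 4 edges; combining 4f <= 2e with Euler's formula
    # v - e + f = 2 yields e <= 2v - 4.
    return e <= 2 * v - 4

# Three houses, three utilities: the complete bipartite graph K3,3
# with 6 vertices and 9 edges. Since 9 > 2*6 - 4 = 8, no
# crossing-free drawing exists -- the puzzle is unsolvable in the plane.
print(bipartite_planarity_bound(6, 9))   # False
# A 4-cycle (two houses, two utilities) passes the bound and is planar.
print(bipartite_planarity_bound(4, 4))   # True
```

The dynamic-graph problem the Danish researchers tackled asks how to maintain this kind of planarity answer efficiently as edges are inserted and deleted, rather than recomputing it from scratch.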

How do you calculate the coordinated movements of two robot arms so they can accurately guide a highly flexible tool? ETH researchers have integrated all aspects of the optimisation calculations into an algorithm. A hot-wire cutter will be used, among other things, to develop building blocks for a mortar-free structure.

A newborn moves its arms and hands largely in an undirected and random manner. It has to learn how to coordinate them step by step. Years of practice are required to master the finely balanced movements of a violinist or calligrapher. It is therefore no surprise that the advanced calculations for the optimal movement of two robot arms to guide a tool precisely involve extremely challenging optimisation tasks. The complexity also increases greatly when the tool itself is not rigid, but flexible in all directions and bends differently depending on its position and movement.

Simon Dünser from Stelian Coros’ research group at the Institute for Intelligent Interactive Systems has worked with other researchers to develop a hot-wire cutter robot with a wire that bends flexibly as it works. This allows it to create much more complex shapes in significantly fewer cuts than previous systems, in which the electrically heatable wire is rigid and is thus only able to cut ruled surfaces—surfaces containing a straight line through every point—from fusible plastics.
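The underlying structure of such problems—minimizing a smoothness cost while satisfying task constraints—can be sketched with a toy gradient-descent path optimizer. This is entirely illustrative, not the ETH algorithm, which must handle two full arms and the physics of the deforming wire:

```python
import numpy as np

def optimize_path(waypoints, n=20, w_smooth=1.0, w_track=10.0,
                  steps=500, lr=0.01):
    """Gradient descent on: w_smooth * sum ||acceleration||^2
    + w_track * sum ||path[idx] - waypoints||^2."""
    t = np.linspace(0, 1, n)[:, None]
    # initial guess: straight line between first and last waypoint
    path = (1 - t) * waypoints[0] + t * waypoints[-1]
    idx = np.linspace(0, n - 1, len(waypoints)).astype(int)
    for _ in range(steps):
        grad = np.zeros_like(path)
        # smoothness term: penalize discrete acceleration
        acc = path[:-2] - 2 * path[1:-1] + path[2:]
        grad[:-2] += 2 * w_smooth * acc
        grad[1:-1] += -4 * w_smooth * acc
        grad[2:] += 2 * w_smooth * acc
        # tracking term: pull designated samples toward the waypoints
        grad[idx] += 2 * w_track * (path[idx] - waypoints)
        path -= lr * grad
    return path

waypoints = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
path = optimize_path(waypoints)
# The path bends smoothly toward the middle waypoint instead of
# making a sharp corner: the cost trades tracking for smoothness,
# the same trade-off a physical tool path must make.
```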

An alert pops up in your email: The latest spacecraft observations are ready. You now have 24 hours to scour 84 hours’ worth of data, selecting the most promising split-second moments you can find. The data points you choose, depending on how you rank them, will download from the spacecraft in the highest possible resolution; researchers may spend months analyzing them. Everything else will be overwritten like it was never collected at all.

These are the stakes facing the Scientist in the Loop, one of the most important roles on the Magnetospheric Multiscale, or MMS, mission team. Seventy-three volunteers share the responsibility, working weeklong shifts at a time to ensure the very best data makes it to the ground. It takes a keen and meticulous eye, which is why it’s always been left to a carefully-trained human – at least until now.

A paper published recently in Frontiers in Astronomy and Space Sciences describes the first artificial intelligence algorithm to lend the Scientist in the Loop a (virtual) hand.
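The selection task itself has a simple shape: score candidate burst segments, then keep the highest-ranked ones that fit the downlink budget. A hypothetical sketch (the names, sizes and scores are ours, not the MMS pipeline):

```python
def select_segments(segments, budget_mb):
    """Greedy triage: keep the highest-scoring data segments that
    still fit in the downlink budget; everything else is overwritten
    on board. `segments` is a list of (segment_id, size_mb, score)."""
    ranked = sorted(segments, key=lambda s: s[2], reverse=True)
    kept, used = [], 0.0
    for seg_id, size_mb, score in ranked:
        if used + size_mb <= budget_mb:
            kept.append(seg_id)
            used += size_mb
    return kept

# Four candidate split-second intervals from a (made-up) observation window.
segments = [("a", 10, 0.90), ("b", 20, 0.80),
            ("c", 15, 0.95), ("d", 30, 0.40)]
print(select_segments(segments, budget_mb=45))   # ['c', 'a', 'b']
```

The hard part, for human and algorithm alike, is producing the score: deciding which split-second moments are scientifically most promising.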

Artificial Intelligence research is making big strides. But in practice?

There are several buckets you can use to categorize AI, one of which is the BS bucket. Within, you’ll find simple statistical algorithms people have been using forever. But there’s another bucket of things that actually weren’t possible a decade ago.

“The vast majority of businesses are still in the early phases of collecting and using data. Most companies looking for data scientists are looking for people to collect, manage, and calculate basic statistics over normal business processes.”


Today we launch our Register Debates in which we spar over hot topics and YOU decide which side is right – by reader vote.