
The wave function represents the quantum state of an atom, including the positions and motions of its nucleus and electrons. For decades, researchers have struggled to determine the exact wave function of even an ordinary molecular system, with its nuclear positions fixed and its electrons in motion. Pinning down the wave function has proven difficult even with help from the Schrödinger equation.

Previous research in this field applied quantum Monte Carlo (QMC) methods with the Slater-Jastrow ansatz, which takes a linear combination of Slater determinants and adds a multiplicative Jastrow factor to capture close-range electron correlations.
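In standard notation (not quoted from the paper itself), the Slater-Jastrow ansatz can be written as a Jastrow factor multiplying a linear combination of Slater determinants of single-electron orbitals:

```latex
% Slater-Jastrow ansatz: e^{J} is the symmetric Jastrow factor capturing
% electron-electron correlations; D_k are Slater determinants built from
% spin-up and spin-down single-electron orbitals.
\psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
  = e^{J(\mathbf{r}_1,\dots,\mathbf{r}_N)}
    \sum_{k} c_k \, D_k^{\uparrow}\!\left(\mathbf{r}^{\uparrow}\right)
                 D_k^{\downarrow}\!\left(\mathbf{r}^{\downarrow}\right)
```

The determinants enforce the antisymmetry required of fermionic wave functions, while the Jastrow factor adds correlation without breaking that antisymmetry.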

Now, a group of DeepMind researchers has taken QMC to a higher level with the Fermionic Neural Network, or FermiNet, a neural network with more flexibility and higher accuracy. FermiNet takes the electron information of a molecule or chemical system as input and outputs its estimated wave function, which can then be used to determine the energy states of that system.

To evaluate the performance of robotics algorithms and controllers, researchers typically use software simulations or real physical robots. While these may appear as two distinct evaluation strategies, there is a whole other range of possibilities that combine elements of both.

In a recent study, researchers at Texas A&M University and the University of South Carolina have set out to examine evaluation and execution scenarios that lie at an intersection between simulations and real implementations. Their investigation, outlined in a paper pre-published on arXiv, specifically focuses on instances in which real robots perceive the world via their sensors, where the environment they sense could be seen as a mere illusion.

“We consider problems in which robots conspire to present a view of the world that differs from reality,” Dylan Shell and Jason O’Kane, the researchers who carried out the study, wrote in their paper. “The inquiry is motivated by the problem of validating robot behavior physically despite there being a discrepancy between the robots we have at hand and those we wish to study, or the environment for testing that is available versus that which is desired, or other potential mismatches in this vein.”

Jeff Bezos’ Blue Origin space venture and Elon Musk’s SpaceX are often at odds, but there’s at least one place where those two space-industry rivals are on the same page: the newly unveiled Space Talent job database.

The search engine for careers in the space industry is a project of Space Angels, a nationwide network designed to link angel investors with space entrepreneurs.

“If you’ve ever considered working in space, this jobs board has 3,000 reasons to make the leap,” Space Angels CEO Chad Anderson said in a tweet.

Ira Pastor, ideaXme longevity and aging ambassador and Founder of Bioquark, interviews Robin Farmanfarmaian, medical futurist, bestselling author, professional speaker, and CEO and Co-Founder of ArO.

Ira Pastor Comments:

In 2019, we are spending over $7 trillion around the globe on healthcare. $1 trillion goes to pharmaceutical products, $350 billion to medical devices, $200 billion to new life sciences R&D, and on and on.

We tend to forget how much consolidation has occurred in these different healthcare segments. The world’s 10 largest pharmaceutical companies control 60% of that trillion-dollar market. The top 8 insurance companies in the U.S. control over 50% of all individual patient coverage. In 43 countries, which account for three-quarters of the world’s population, patients get only 5 to 10 minutes of appointment time with their primary care physicians. As patients, we know what it’s like to feel somewhat separated and insignificant in this system.

We usually talk on this show about the future, and the amazing technologies and products coming down the pipeline for more dramatic things, such as complex regeneration, disease reversion, radical life extension and so forth, but it’s equally important to speak on how we as individuals and patients can put ourselves back in the driver’s seat and not just be an afterthought in the equation.

Today’s guest, who knows a lot about this topic, is Ms. Robin Farmanfarmaian. Robin is a medical futurist, entrepreneur, bestselling author, and professional speaker. She focuses on the future of integrated medicine, the changing role of patients in healthcare decision-making, and how technology will change the way we experience and interact with medical facilities and physicians.

The code used below is on GitHub.

In this project, we’ll be solving a problem familiar to any physics undergrad — using the Schrödinger equation to find the quantum ground state of a particle in a 1-dimensional box with a potential. However, we’re going to tackle this old standby with a new method: deep learning. Specifically, we’ll use the TensorFlow package to set up a neural network and then train it on random potential functions and their numerically calculated solutions.
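As a rough sketch of the "numerically calculated solutions" step (this is an illustrative reconstruction, not the article's actual GitHub code), the ground state of a particle in a 1-D box with potential V(x) can be found by diagonalizing a finite-difference Hamiltonian:

```python
import numpy as np

# Illustrative sketch: solve the time-independent Schrodinger equation for a
# particle in a 1-D box [0, 1] with potential V(x), using the standard
# three-point finite-difference Hamiltonian. Units: hbar = m = 1.
n = 200                              # interior grid points
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

V = 100.0 * (x - 0.5) ** 2           # example potential (a harmonic bump)

# Kinetic term -(1/2) d^2/dx^2 discretized: +1/dx^2 on the diagonal,
# -0.5/dx^2 on the off-diagonals; hard-wall boundaries are implicit.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The lowest eigenpair is the ground-state energy and wave function.
energies, states = np.linalg.eigh(H)
E0 = energies[0]
psi0 = states[:, 0]
psi0 /= np.sqrt(np.sum(psi0**2) * dx)   # normalize: integral |psi|^2 dx = 1
```

Pairs of random `V` arrays and their computed `psi0` (or `E0`) would then serve as the training data for the neural network.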

Why reinvent the wheel (ground state)? Sure, it’s fun to see a new tool added to the physics problem-solving toolkit, and I needed the practice with TensorFlow. But there’s a far more compelling answer. We know basically everything there is to know about this topic already. The neural network, however, doesn’t know any physics. Crudely speaking, it just finds patterns. Suppose we examine the relative strength of connections between input neurons and output. The structure therein could give us some insight into how the universe “thinks” about this problem. Later, we can apply deep learning to a physics problem where the underlying theory is unknown. By looking at the innards of that neural network, we might learn something new about fundamental physical principles that would otherwise remain obscured from our view. Therein lies the true power of this approach: peering into the mind of the universe itself.

Much like US corporations do now.


Debates about rights are frequently framed around the concept of legal personhood. Personhood is granted not just to human beings but also to some non-human entities, such as corporations or governments. Legal entities, aka legal persons, are granted certain privileges and responsibilities by the jurisdictions in which they are recognized, and many such rights are not available to non-person agents. Attempting to secure legal personhood is often seen as a potential pathway to get certain rights and protections for animals [1], fetuses [2], trees and rivers [3], and artificially intelligent (AI) agents [4].

It is commonly believed that a new law or judicial ruling is necessary to grant personhood to a new type of entity. But recent legal literature [5–8] suggests that loopholes in the current law may permit legal personhood to be granted to AI/software without the need to change the law or persuade a court.

For example, L. M. LoPucki [6] points out, citing Shawn Bayern’s work on conferring legal personhood on AI [7, 8], “Professor Shawn Bayern demonstrated that anyone can confer legal personhood on an autonomous computer algorithm merely by putting it in control of a limited liability company (LLC). The algorithm can exercise the rights of the entity, making them effectively rights of the algorithm. The rights of such an algorithmic entity (AE) would include the rights to privacy, to own property, to enter into contracts, to be represented by counsel, to be free from unreasonable search and seizure, to equal protection of the laws, to speak freely, and perhaps even to spend money on political campaigns. Once an algorithm had such rights, Bayern observed, it would also have the power to confer equivalent rights on other algorithms by forming additional entities and putting those algorithms in control of them.” [6] (See Note 1.)

Two University of Hawaii at Manoa researchers have identified and corrected a subtle error that was made when applying Einstein’s equations to model the growth of the universe.

Physicists usually assume that a cosmologically large system, such as the universe, is insensitive to details of the small systems contained within it. Kevin Croker, a postdoctoral research fellow in the Department of Physics and Astronomy, and Joel Weiner, a faculty member in the Department of Mathematics, have shown that this assumption can fail for the compact objects that remain after the collapse and explosion of very large stars.

“For 80 years, we’ve generally operated under the assumption that the universe, in broad strokes, was not affected by the particular details of any small region,” said Croker. “It is now clear that general relativity can observably connect collapsed stars—regions the size of Honolulu—to the behavior of the universe as a whole, over a thousand billion billion times larger.”

Computer vision is one of the most popular applications of artificial intelligence. Image classification, object detection and object segmentation are some of the use cases of computer vision-based AI. These techniques are used in a variety of consumer and industrial scenarios. From face recognition-based user authentication to inventory tracking in warehouses to vehicle detection on roads, computer vision is becoming an integral part of next-generation applications.

Computer vision uses advanced neural networks and deep learning algorithms such as Convolutional Neural Networks (CNN), Single Shot Multibox Detector (SSD) and Generative Adversarial Networks (GAN). Applying these algorithms requires a thorough understanding of neural network architecture, advanced mathematics and image processing techniques. For the average ML developer, CNNs remain a complex branch of AI.
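To give a feel for why CNNs are nonetheless approachable at their core, the fundamental operation of a convolutional layer is just a small kernel slid across an image. A minimal NumPy sketch (illustrative only; real frameworks like TensorFlow implement this far more efficiently):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise multiply the kernel with each image patch and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-style vertical-edge detector applied to a toy image whose left
# half is dark (0) and right half is bright (1).
image = np.zeros((5, 5))
image[:, 2:] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
edges = conv2d(image, kernel)   # strong response where brightness changes
```

A trained CNN simply learns many such kernels from data instead of hand-designing them, then stacks these layers with nonlinearities and pooling.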

Apart from the knowledge and understanding of algorithms, CNNs demand high-end, expensive infrastructure for training the models, which is out of reach for most developers.