
1. AI-optimized manufacturing

Paper and pencil tracking, luck, significant global travel and opaque supply chains are part of today’s status quo, resulting in large amounts of wasted energy, materials and time. Accelerated in part by the long-term shutdown of international and regional travel by COVID-19, companies that design and build products will rapidly adopt cloud-based technologies to aggregate, intelligently transform, and contextually present product and process data from manufacturing lines throughout their supply chains. By 2025, this ubiquitous stream of data and the intelligent algorithms crunching it will enable manufacturing lines to continuously optimize towards higher levels of output and product quality – reducing overall waste in manufacturing by up to 50%. As a result, we will enjoy higher quality products, produced faster, at lower cost to our pocketbooks and the environment.

Anna-Katrina Shedletsky, CEO and Founder of Instrumental.

Pagaya, an AI-driven institutional asset manager that focuses on fixed income and consumer credit markets, today announced it raised $102 million in equity financing. CEO Gal Krubiner said the infusion will enable Pagaya to grow its data science team, accelerate R&D, and continue its pursuit of new asset classes including real estate, auto loans, mortgages, and corporate credit.

Pagaya applies machine intelligence to securitization — the conversion of an asset (usually a loan) into marketable securities (e.g., mortgage-backed securities) that are sold to other investors — and loan collateralization. It eschews the traditional method of securitizing pools of previously assembled asset-backed securities (ABS) for a more bespoke approach, employing algorithms to compile discretionary funds for institutional investors such as pension funds, insurance companies, and banks. Pagaya selects and buys individual loans by analyzing emerging alternative asset classes, after which it assesses their risk and draws on “millions” of signals to predict their returns.

Pagaya’s data scientists can build algorithms to track specific activities, such as auto loans made to residents of particular cities or even individual neighborhoods. The company is limited only by the amount of data publicly available; on average, Pagaya looks at decades of information on borrowers and evaluates thousands of variables.
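
To make this concrete, here is a minimal, hypothetical sketch of loan-level scoring: a model trained on borrower features estimates each loan’s default probability, which is then folded into an expected-return figure. The features, synthetic data, and model choice are illustrative assumptions, not Pagaya’s actual system.

```python
# Hypothetical loan-scoring sketch: estimate default risk from borrower
# features, then turn it into an expected return. Synthetic data only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

# Stand-ins for the "thousands of variables" a real model would use.
X = np.column_stack([
    rng.normal(680, 50, n),      # credit score
    rng.uniform(0.05, 0.45, n),  # debt-to-income ratio
    rng.uniform(0.04, 0.25, n),  # interest rate on the loan
    rng.integers(12, 84, n),     # term in months
])
# Synthetic default labels loosely tied to credit score and debt burden.
p_default = 1 / (1 + np.exp(0.03 * (X[:, 0] - 600) - 5 * X[:, 1]))
y = rng.random(n) < p_default

model = GradientBoostingClassifier().fit(X, y)

def expected_return(loan):
    """Crude expected return: earn the coupon if repaid, lose everything if not."""
    p = model.predict_proba(loan.reshape(1, -1))[0, 1]
    return (1 - p) * loan[2] + p * (-1.0)

candidate = np.array([700.0, 0.20, 0.12, 36.0])
print(f"P(default) ~ {model.predict_proba(candidate.reshape(1, -1))[0, 1]:.1%}")
print(f"Expected return ~ {expected_return(candidate):.1%}")
```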

Suppose, for instance, that a neural network has labeled the image of a skin mole as cancerous. Is that because it found malignant patterns in the mole, or because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers?

Researchers have developed a variety of interpretability techniques that help investigate the decisions made by machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches applications of artificial intelligence in medical imaging.
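
One widely used family of such techniques is occlusion sensitivity: slide a masking patch over the input image and measure how much the model’s “cancerous” score changes, since the regions whose masking moves the score the most are the regions the model relied on. The sketch below is a minimal illustration with a stand-in classifier; a real study would plug in the trained dermatology network.

```python
# Minimal occlusion-sensitivity sketch. The classifier here is a stand-in
# that scores darker central pixels as "malignant"; a real analysis would
# call the trained model instead.
import numpy as np

def predict_malignant(image: np.ndarray) -> float:
    """Stand-in classifier: darker centre -> higher 'malignant' score."""
    h, w = image.shape
    centre = image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    return float(1.0 - centre.mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Score drop caused by greying out each patch of the image."""
    base = predict_malignant(image)
    heat = np.zeros_like(image)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y : y + patch, x : x + patch] = 0.5  # grey patch
            heat[y : y + patch, x : x + patch] = base - predict_malignant(occluded)
    return heat

image = np.random.default_rng(1).random((64, 64))  # fake greyscale mole image
heat = occlusion_map(image)
print("Most influential pixel (row, col):",
      np.unravel_index(np.argmax(heat), heat.shape))
```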

Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions by themselves, as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently posted on the arXiv preprint server, expands on this idea.

Scientists have managed another breakthrough: a quantum computer that can run Shor’s famously difficult factoring algorithm. The device is just five atoms in size, but the researchers claim it will be easy to scale up.
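
For context, the quantum hardware is only needed for the order-finding step of Shor’s algorithm; everything around it is classical number theory. The sketch below replaces that quantum step with a brute-force loop so the full recipe can be traced for a small number such as 15. It is a pedagogical stand-in, not a simulation of the five-atom device.

```python
# Classical scaffolding of Shor's algorithm, with the quantum order-finding
# step replaced by brute force. Illustrative only, for small N.
from math import gcd
from random import randrange

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n). This is the step a quantum
    computer speeds up with the quantum Fourier transform."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int) -> int:
    """Return a non-trivial factor of an odd composite n."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d                # lucky guess already shares a factor
        r = find_order(a, n)
        if r % 2 == 1:
            continue                # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                # trivial square root of 1; try again
        return gcd(y + 1, n)        # guaranteed non-trivial factor

print(shor_factor(15))  # prints 3 or 5
```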

The data came from Common Crawl, a non-profit that scans the open web every month, downloads content from billions of HTML pages, and then makes it available in a special format for large-scale data mining. In 2017, the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011 and has petabytes of data in over 40 different languages. The OpenAI team applied some filtering techniques to improve the overall quality of the data and also added curated datasets such as Wikipedia.
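
OpenAI’s exact filtering rules are not spelled out here, but quality filters for web-crawl text typically combine deduplication with simple heuristics. The sketch below shows hypothetical rules of that kind; the thresholds are invented for illustration.

```python
# Illustrative web-crawl text filter: drop exact duplicates, then keep only
# documents that pass a few crude quality heuristics. Thresholds are made up.
import hashlib

def looks_clean(text: str) -> bool:
    words = text.split()
    if len(words) < 50:                                        # too short to be prose
        return False
    if sum(len(w) for w in words) / len(words) > 12:           # likely markup junk
        return False
    if text.count("<") + text.count("{") > len(words) // 10:   # leftover HTML/code
        return False
    return True

def deduplicate(docs):
    seen, kept = set(), []
    for doc in docs:
        fingerprint = hashlib.md5(doc.strip().lower().encode()).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            kept.append(doc)
    return kept

raw_docs = ["some page text " * 20, "some page text " * 20, "<div></div>"]
clean_docs = [d for d in deduplicate(raw_docs) if looks_clean(d)]
print(len(clean_docs))  # 1: the duplicate and the HTML fragment are dropped
```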

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
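
That weighting step is the scaled dot-product attention introduced in the 2017 transformer paper, and it fits in a few lines of NumPy. In the sketch below, the four-word “sentence” and the projection matrices are random stand-ins for learned values.

```python
# Scaled dot-product attention for a toy 4-word sentence: every word attends
# to every other word at once, and the softmaxed scores become the weights of
# a weighted average over the words' value vectors.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                  # 4 words, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))  # toy word embeddings

# Learned projections in a real model; random stand-ins here.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)              # relevance of each word to each word
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax -> attention scores

output = weights @ V        # each word becomes a mix of all the words
print(weights.round(2))     # each row sums to 1: one weighting per word
```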

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.

The cosmos contains a Higgs field, similar to an electric field, generated by Higgs bosons in the vacuum. Particles interact with the field to gain energy and, through Albert Einstein’s iconic equation E = mc², mass. The Standard Model of particle physics, although successful at describing elementary particles and their interactions at low energies, does not include a viable candidate for the hotly debated dark-matter particle. The only possible candidates within the model, neutrinos, do not have the right properties to explain the observed dark matter.

“One particularly interesting possibility is that these long-lived dark particles are coupled to the Higgs boson in some fashion—that the Higgs is actually a portal to the dark world. We know for sure there’s a dark world, and there’s more energy in it than there is in ours. It’s possible that the Higgs could actually decay into these long-lived particles,” said LianTao Wang, a University of Chicago physicist, in 2019. He was referring to the last holdout particle in physicists’ grand theory of how the universe works: discovered at the LHC in 2012, the Higgs boson filled the final gap in the Standard Model of fundamental particles and forces. Since then, the Standard Model has stood up to every test, yielding no hints of new physics.

The dark world makes up more than 95 percent of the universe, but scientists only know it exists from its effects, “like a poltergeist you can only see when it pushes something off a shelf.” We know dark matter is there because, like the poltergeist, we can see its gravity at work, keeping galaxies from flying apart.

A black hole, at least in our current understanding, is characterized by having “no hair”: it is so simple that it can be completely described by just three parameters, its mass, its spin, and its electric charge. Even though it may have formed out of a complex mix of matter and energy, all other details are lost when the black hole forms. Its powerful gravitational field creates a surrounding surface, a “horizon,” and anything that crosses that horizon (even light) cannot escape. Hence the black hole appears black, and any details about the infalling material are also lost, reduced to those three knowable parameters.

Astronomers are able to measure the masses of black holes in a relatively straightforward way: by watching how matter (including other black holes) moves in their vicinity under the influence of the gravitational field. The charges of black holes are thought to be insignificant, since positive and negative infalling charges are typically comparable in number. The spins of black holes are more difficult to determine, and the two main methods both rely on interpreting the X-ray emission from the hot inner edge of the accretion disk around the black hole. One method models the shape of the X-ray continuum and relies on good estimates of the mass, distance, and viewing angle. The other models the X-ray spectrum, including atomic emission lines that are often seen in reflection from the hot gas, and does not depend on knowing as many other parameters. The two methods have in general yielded comparable results.

CfA astronomer James Steiner and his colleagues reanalyzed seven sets of spectra, obtained by the Rossi X-ray Timing Explorer, of an outburst from a stellar-mass black hole in our galaxy called 4U1543-47. Previous attempts to estimate the spin of the object using the continuum method produced disagreements between papers that were considerably larger than the formal uncertainties (the papers assumed a mass of 9.4 solar masses and a distance of 24.7 thousand light-years). Using careful refitting of the spectra and updated modeling algorithms, the scientists report a spin intermediate between the previous values, moderate in magnitude, and established at the 90% confidence level. Since only a few dozen well-confirmed black hole spins have been measured to date, the new result is an important addition.

Artificial intelligence (AI) is a broad field made up of many disciplines, such as robotics and machine learning. The aim of AI is to create machines capable of performing tasks and cognitive functions that are otherwise only within the scope of human intelligence. To get there, machines must be able to learn these capabilities automatically instead of having each of them explicitly programmed end-to-end.

Another task for AI is writing programs. Technology of this kind was developed by Microsoft in conjunction with Cambridge University: a program, called DeepCoder, that is able to create other programs by borrowing code. The software takes developers’ requirements into account and finds suitable code fragments in a large database. You can see the scientists’ work here.
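
DeepCoder itself pairs a learned ranking of code fragments with a search over their combinations. The toy sketch below keeps only the search half: it assembles a program from a small library of known fragments until the result matches the given input/output examples. The fragment library and examples are invented for illustration.

```python
# Toy program synthesis in the spirit of DeepCoder: search over compositions
# of known code fragments until one reproduces the input/output examples.
# The real system also uses a neural network to rank fragments; omitted here.
from itertools import product

FRAGMENTS = {
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "double":   lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(examples, max_depth=3):
    """Return the first sequence of fragment names mapping every input to its output."""
    for depth in range(1, max_depth + 1):
        for names in product(FRAGMENTS, repeat=depth):
            def run(xs, names=names):
                for name in names:
                    xs = FRAGMENTS[name](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return names
    return None

examples = [([3, -1, 2], [4, 6]), ([0, 5, -2], [0, 10])]
print(synthesize(examples))  # -> ('sort', 'double', 'drop_neg')
```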

“The potential for automating the writing of software code is just incredible. It means a huge reduction in the effort required to develop code. Such a system will be much more productive than any human. In addition, you can create systems that were previously impossible to build”,