
Pagaya, an AI-driven institutional asset manager that focuses on fixed income and consumer credit markets, today announced it raised $102 million in equity financing. CEO Gal Krubiner said the infusion will enable Pagaya to grow its data science team, accelerate R&D, and continue its pursuit of new asset classes including real estate, auto loans, mortgages, and corporate credit.

Pagaya applies machine intelligence to securitization — the conversion of an asset (usually a loan) into marketable securities (e.g., mortgage-backed securities) that are sold to other investors — and loan collateralization. It eschews the traditional method of securitizing pools of previously assembled asset-backed securities (ABS) for a more bespoke approach, employing algorithms to compile discretionary funds for institutional investors such as pension funds, insurance companies, and banks. Pagaya selects and buys individual loans by analyzing emerging alternative asset classes, after which it assesses their risk and draws on “millions” of signals to predict their returns.

Pagaya’s data scientists can build algorithms to track specific activities, such as auto loans made to residents of particular cities or even individual neighborhoods. The company is limited only by the amount of publicly available data; on average, Pagaya examines decades of information on borrowers and evaluates thousands of variables.
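The selection idea described above can be sketched as a toy scoring exercise. This is a hypothetical illustration, not Pagaya’s actual model: the loan features, default probabilities, and recovery rate below are invented, and a real system would derive them from the thousands of variables the company evaluates.

```python
# Hypothetical sketch: score each loan's expected return from its interest
# rate and estimated default probability, then rank loans for selection.
def expected_return(interest_rate, default_prob, recovery_rate=0.4):
    # Return if repaid, weighted against the expected loss on default.
    return (1 - default_prob) * interest_rate - default_prob * (1 - recovery_rate)

loans = [
    {"id": "A", "rate": 0.12, "p_default": 0.05},
    {"id": "B", "rate": 0.08, "p_default": 0.01},
    {"id": "C", "rate": 0.20, "p_default": 0.25},
]

# Rank loans from highest to lowest expected return.
ranked = sorted(loans, key=lambda l: expected_return(l["rate"], l["p_default"]),
                reverse=True)
print([l["id"] for l in ranked])  # ['A', 'B', 'C']
```

The point of the sketch is that a high headline rate (loan C) can still be the worst pick once default risk is priced in.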

Now that the world is in the thick of the coronavirus pandemic, governments are quickly deploying their own cocktails of tracking methods. These include device-based contact tracing, wearables, thermal scanning, drones, and facial recognition technology. It’s important to understand how those tools and technologies work and how governments are using them to track not just the spread of the coronavirus, but the movements of their citizens.

Contact tracing is one of the fastest-growing means of viral tracking. Although the term entered the common lexicon with the novel coronavirus, it’s not a new practice. The Centers for Disease Control and Prevention (CDC) says contact tracing is “a core disease control measure employed by local and state health department personnel for decades.”

Traditionally, contact tracing involves a trained public health professional interviewing an ill patient about everyone they’ve been in contact with and then contacting those people to provide education and support, all without revealing the identity of the original patient. But in a global pandemic, that careful manual method cannot keep pace, so a more automated system is needed.
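One common automated approach, used in systems such as the Apple/Google exposure notification framework, has phones broadcast short-lived random tokens and record tokens heard nearby; matching happens locally, without revealing identities. The sketch below is a simplified illustration of that matching step, with invented function names and no cryptographic key derivation:

```python
import secrets

# Hypothetical sketch: each device broadcasts short-lived random tokens and
# logs tokens it hears from nearby devices.
def generate_tokens(n):
    return [secrets.token_hex(8) for _ in range(n)]

def exposure_detected(heard_tokens, published_tokens):
    # When a user tests positive, their broadcast tokens are published;
    # any overlap with locally heard tokens indicates possible exposure.
    return not set(heard_tokens).isdisjoint(published_tokens)

alice_broadcast = generate_tokens(5)              # tokens Alice's phone sent
bob_heard = [alice_broadcast[2], "deadbeef"]      # Bob's phone heard one
print(exposure_detected(bob_heard, alice_broadcast))  # True
```

Because only random tokens are exchanged and compared, neither party learns who the other is, mirroring the privacy goal of traditional manual tracing.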

Tesla Inc. Chief Executive Officer Elon Musk called Amazon.com CEO Jeff Bezos “a copy cat” on Twitter after the online retailer announced it is acquiring self-driving startup Zoox Inc.

“.@JeffBezos is a copy 🐈 haha” https://twitter.com/ft/status/1276401808068526080 — Elon Musk (@elonmusk), June 26, 2020

It’s not the first time Musk, who also serves as CEO of SpaceX, has taken jabs at Bezos. Earlier this month, Musk made headlines when he said it was “time to break up Amazon” in a tweet. He also called Bezos, who runs rocket-launch startup Blue Origin LLC, a copy cat in April 2019 after hearing of plans for a satellite-based internet service to rival his own company’s.

Quantum computers have the potential to revolutionise the way we solve hard computing problems, from creating advanced artificial intelligence to simulating chemical reactions in order to create the next generation of materials or drugs. But actually building such machines is very difficult because they involve exotic components and have to be kept in highly controlled environments. And the machines built so far still cannot outperform traditional computers.

But with a team of researchers from the UK and France, we have demonstrated that it may well be possible to build a quantum computer from conventional silicon-based electronic components. This could pave the way for large-scale manufacturing of quantum computers much sooner than might otherwise be possible.

The theoretical superior power of quantum computers derives from the laws of nanoscale or “quantum” physics. Unlike conventional computers, which store information in binary bits that can be either “0” or “1”, quantum computers use quantum bits (or qubits) that could be in a combination of “0” and “1” at the same time. This is because quantum physics allows particles to be in different states or places simultaneously.
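The superposition described above can be made concrete with a few lines of arithmetic. A single-qubit state is just a pair of complex amplitudes for “0” and “1”, and measurement probabilities are the squared magnitudes of those amplitudes. This is a minimal numerical sketch, not a quantum simulation framework:

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) for |0> and |1>.
# Measurement yields 0 with probability |a|^2 and 1 with probability |b|^2.
def measure_probabilities(state):
    a, b = state
    norm = abs(a) ** 2 + abs(b) ** 2  # normalize for safety
    return abs(a) ** 2 / norm, abs(b) ** 2 / norm

# An equal superposition: the qubit is in a combination of "0" and "1"
# until measured, when each outcome appears with probability 1/2.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))
p0, p1 = measure_probabilities(plus)
print(p0, p1)  # ≈ 0.5 each
```

A classical bit would force `(1, 0)` or `(0, 1)`; the continuum of amplitudes in between is what qubits add.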

The nascent autonomous-vehicle industry is being reshaped by consolidation. Amazon, which committed to buying 100,000 Rivian electric vehicles, announced today that it is buying Zoox, the self-driving car tech start-up, for $1 billion. Ford and Volkswagen made multi-billion dollar investments in Argo. General Motors purchased Cruise Automation in 2016, while Hyundai is working with tier-one supplier Aptiv to deploy a robotaxi service in multiple global markets.

The tie-up between Waymo and Volvo (whose three brands are all aggressively pursuing electric vehicles) could reshape the competitive landscape, although it’s too early to tell.

Google started its self-driving program more than a decade ago but paused the development of its own vehicle in 2016. A tight partnership between Waymo and Volvo to develop ground-up cars, if that’s what materializes, could put those plans back on track – this time with an established auto manufacturer known for high-quality production and safety.

Smart phone apps provide nearly instantaneous navigation on Earth; the Deep Space Atomic Clock could do the same for future robotic and human explorers.

As the time when NASA will begin sending humans back to the Moon draws closer, crewed trips to Mars are an enticing next step. But future space explorers will need new tools when traveling to such distant destinations. The Deep Space Atomic Clock mission is testing a new navigation technology that could be used by both human and robotic explorers making their way around the Red Planet and other deep space destinations.

In less than a year of operations, the mission has passed its primary goal to become one of the most stable clocks to ever fly in space; it is now at least 10 times more stable than atomic clocks flown on GPS satellites. In order to keep testing the system, NASA has extended the mission through August 2021. The team will use the additional mission time to continue to improve the clock’s stability, with a goal of becoming 50 times more stable than GPS atomic clocks.

The audio of the talks and panel from the Future Day Melbourne 2020 / Machine Understanding event:

Kevin Korb — https://archive.org/searchresults.php
John Wilkins — https://archive.org/details/john-wilkins-humans-as-machines (John, sorry about the audio — also, do you have the slides for this?)
Hugo de Garis — https://archive.org/details/hugo-de-garis-future-day-2020
Panel — https://archive.org/…/future-day-panel-kevin-korb-hugo-de-g…

The video will be uploaded at a later date.


There is much public concern nowadays about when an AGI (Artificial General Intelligence) might appear and what it might go and do. The expert community is less concerned, because they know we’re still a long way off. More fundamentally, though, we’re a long way off even from an API (Artificial Primitive Intelligence). In fact, we have no idea what an API might look like. AI took off without ever reflecting seriously on what intelligence, whether natural or artificial, really is. So it has been streaking along in myriad directions without any goal in sight.



Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.

For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.

Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.