
In this DNA factory, organism engineers are using robots and automation to build completely new forms of life.
»Subscribe to Seeker! http://bit.ly/subscribeseeker.
»Watch more Focal Point | https://bit.ly/2M3gmbK

Ginkgo Bioworks, a Boston company specializing in “engineering custom organisms,” aims to reinvent manufacturing, agriculture, biodesign, and more.

Biologists, software engineers, and automated robots are working side by side to accelerate the speed of nature by taking synthetic DNA, remixing it, and programming microbes, turning custom organisms into mini-factories that could one day pump out new foods, fuels, and medicines.

While this research could have many positive and exciting outcomes, like engineering gut bacteria to produce drugs inside the human body on demand or building self-fertilizing plants, the risk still exists that a synthetic DNA sequence could encode a pathogenic function.

That’s why Ginkgo Bioworks is developing screening software, something like malware detection for DNA, to help stomp out the global threat of biological weapons and keep synthetic biology from being used for evil.
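Ginkgo hasn’t published the internals of that screening software, so the following is only a hypothetical sketch of the general idea behind DNA biosecurity screening: compare an ordered sequence against a watchlist of sequences of concern and flag long shared stretches. The function names, window length, and sequences below are made up for illustration.

```python
# Purely hypothetical sketch of DNA-order screening: flag an ordered sequence
# if it shares long exact windows (k-mers) with a watchlist of "sequences of
# concern". Real biosecurity screening against curated databases is far more
# sophisticated; this only illustrates the basic idea.

K = 30  # window length; real screens use longer and fuzzier comparisons

def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k windows of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen(order: str, watchlist: list[str]) -> bool:
    """Return True if the ordered sequence overlaps any watchlist entry."""
    order_kmers = kmers(order)
    return any(order_kmers & kmers(entry) for entry in watchlist)

# Toy example with made-up sequences (not real pathogen DNA).
watchlist = ["ATG" + "ACGT" * 20]
order = "GGGG" + "ACGT" * 15 + "TTTT"
print(screen(order, watchlist))  # True: shares a 30-base window with the watchlist
```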

Learn more about synthetic DNA and this biological assembly line on this episode of Focal Point.

NASA’s Curiosity rover has marked the 10th anniversary of its launch to Mars by sending back a spectacular ‘picture postcard’ from the Red Planet.

The robotic explorer snapped two black-and-white images of the Martian landscape, which were then combined and had colour added to produce the remarkable composite.

Curiosity, which launched to the Red Planet almost exactly 10 years ago on November 26, 2011, took the pictures from its most recent perch on the side of Mars’ Mount Sharp.
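As a rough illustration of that kind of compositing (combining two black-and-white exposures and adding colour), and not NASA’s actual processing pipeline, here is a short NumPy/Pillow sketch that tints two grayscale frames with cool and warm colours and blends them into one image. The file names and tint values are placeholders.

```python
# Rough, hypothetical illustration of turning two black-and-white frames into
# one colourised composite. This is not NASA's pipeline; file names and tint
# colours are placeholders.
import numpy as np
from PIL import Image

frame_a = np.asarray(Image.open("frame_a_bw.png").convert("L"), dtype=np.float32) / 255
frame_b = np.asarray(Image.open("frame_b_bw.png").convert("L"), dtype=np.float32) / 255

cool = np.array([0.55, 0.70, 1.00])   # bluish tint for one frame
warm = np.array([1.00, 0.65, 0.35])   # orange tint for the other

# Tint each grayscale frame, then blend them 50/50 into a single colour image.
composite = 0.5 * frame_a[..., None] * cool + 0.5 * frame_b[..., None] * warm
Image.fromarray((np.clip(composite, 0, 1) * 255).astype(np.uint8)).save("postcard.png")
```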

In recent years, many frameworks and guidelines have been created that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction. But it’s also critical to look beyond technical solutions when addressing issues of bias or inclusivity. Biases can enter at the level of who frames the objectives and balances the priorities.

In a recent paper, we argue that inclusivity and diversity also need to be at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.

Tutel is a high-performance Mixture-of-Experts (MoE) library developed by Microsoft researchers to aid in the development of large-scale DNN (deep neural network) models. It is highly optimized for the new Azure NDm A100 v4 series, and its diverse, flexible algorithmic support for MoE lets developers across AI domains execute MoE more easily and efficiently. For a single MoE layer, Tutel achieves an 8.49x speedup on one NDm A100 v4 node with 8 GPUs and a 2.75x speedup on 64 NDm A100 v4 nodes with 512 A100 GPUs, compared to state-of-the-art MoE implementations such as Meta’s Facebook AI Research Sequence-to-Sequence Toolkit (fairseq) in PyTorch.

Thanks to its optimization of all-to-all communication, Tutel also delivers a more than 40% end-to-end speedup for Meta’s 1.1 trillion-parameter MoE language model on 64 NDm A100 v4 nodes. Running on the Azure NDm A100 v4 cluster, Tutel offers broad compatibility and comprehensive capabilities to ensure outstanding performance. Tutel is free, open-source software and has been integrated into fairseq.

Tutel complements existing high-level MoE solutions such as fairseq and FastMoE by focusing on optimizations of MoE-specific computation and all-to-all communication, along with other diverse and flexible algorithmic MoE support. Tutel features a straightforward user interface that makes it simple to combine with other MoE solutions. Developers can also use the Tutel interface to incorporate standalone MoE layers into their own DNN models from the ground up, taking advantage of its highly optimized, state-of-the-art MoE features right away.
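Tutel’s own interface is documented in its repository and in fairseq; as a rough illustration of what a single MoE layer actually computes (a gate routes each token to its top-k expert feed-forward networks, and the experts’ outputs are combined with the gate weights), here is a minimal plain-PyTorch sketch. The class and parameter names are illustrative only and are not Tutel’s API.

```python
# Minimal, illustrative sketch of a single Mixture-of-Experts (MoE) layer in
# plain PyTorch. This is NOT Tutel's API; it only shows the computation an MoE
# layer performs: a gate routes each token to its top-k experts, and the
# experts' outputs are combined using the gate weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoELayer(nn.Module):
    def __init__(self, model_dim=512, hidden_dim=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(model_dim, num_experts)          # token -> expert scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(model_dim, hidden_dim),
                          nn.ReLU(),
                          nn.Linear(hidden_dim, model_dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                      # x: (tokens, model_dim)
        scores = F.softmax(self.gate(x), dim=-1)
        topk_w, topk_idx = scores.topk(self.top_k, dim=-1)     # per-token routing
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, k] == e                     # tokens routed to expert e
                if mask.any():
                    out[mask] += topk_w[mask, k:k + 1] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)                  # 16 tokens, model_dim=512
print(SimpleMoELayer()(tokens).shape)          # torch.Size([16, 512])
```

In a real multi-GPU MoE system such as Tutel, the expensive step this single-device sketch leaves out is dispatching tokens between devices, which is exactly the all-to-all communication the library optimizes.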

“The Singularity” is a term coined by John von Neumann, a major figure in the history of computer science. The concept refers to a hypothetical time when computers become more intelligent than humans and can improve themselves without our input. Imagine a runaway reaction in which artificial intelligence is able to improve itself. This improved self is able to further improve itself. With each improvement, the rate at which…

Since artificial intelligence pioneer Marvin Minsky patented the principle of confocal microscopy in 1957, it has become the workhorse standard in life science laboratories worldwide, thanks to its superior contrast over traditional wide-field microscopy. Yet confocal microscopes aren’t perfect. They boost resolution by imaging just one in-focus point at a time, so scanning an entire, delicate biological sample can take quite a while, exposing it to light doses that can be toxic.

To push confocal imaging to an unprecedented level of performance, a collaboration at the Marine Biological Laboratory (MBL) has invented a “kitchen sink” confocal platform that borrows solutions from other high-powered imaging systems, adds a unifying thread of “Deep Learning” artificial intelligence algorithms, and successfully improves the confocal’s volumetric resolution by more than 10-fold while simultaneously reducing phototoxicity. Their report on the technology, called “Multiview Confocal Super-Resolution Microscopy,” is published online this week in Nature.

“Many labs have confocals, and if they can eke more performance out of them using these artificial intelligence algorithms, then they don’t have to invest in a whole new microscope. To me, that’s one of the best and most exciting reasons to adopt these AI methods,” said senior author and MBL Fellow Hari Shroff of the National Institute of Biomedical Imaging and Bioengineering.
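The team’s actual networks and training procedure are described in the Nature paper; the sketch below is only a generic, hypothetical illustration of the inference step that deep-learning image-restoration methods share: running a trained 3D network over a confocal z-stack, tile by tile. The model file, expected input layout, and tile size are all placeholder assumptions.

```python
# Generic, hypothetical sketch of deep-learning image restoration at inference
# time: push a raw confocal z-stack through a trained 3D network to get a
# denoised / resolution-enhanced volume. The model file and tile size are
# placeholders; this is not the MBL team's released code.
import torch

model = torch.jit.load("restoration_model.pt").eval()   # placeholder trained 3D CNN

@torch.no_grad()
def restore_volume(stack: torch.Tensor, tile: int = 64) -> torch.Tensor:
    """stack: (Z, Y, X) raw volume; returns a restored volume of the same shape."""
    stack = stack.float()
    out = torch.zeros_like(stack)
    for z0 in range(0, stack.shape[0], tile):
        chunk = stack[z0:z0 + tile]               # process the volume in z-tiles
        pred = model(chunk[None, None])           # assumes (batch, channel, Z, Y, X) input
        out[z0:z0 + tile] = pred[0, 0]
    return out
```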

Mapping the human connectome.


Join this channel to get access to perks:
https://www.youtube.com/channel/UCDukC60SYLlPwdU9CWPGx9Q/join.

Neura Pod is a series covering topics related to Neuralink, Inc. Topics such as brain-machine interfaces, brain injuries, and artificial intelligence will be explored. Host Ryan Tanaka synthesizes information and opinions, and conducts interviews, to make it easy to learn about Neuralink and its future.

Most people aren’t aware of what the company does, or how it does it. If you know other people who are curious about what Neuralink is doing, this is a nice summary episode to share. Tesla, SpaceX, and the Boring Company are going to have to get used to their newest sibling. Neuralink is going to change how humans think, act, learn, and share information.

Neura Pod:
- Twitter: https://twitter.com/NeuraPod.
- Patreon: https://www.patreon.com/neurapod.
- Medium: https://neurapod.medium.com/
- Spotify: https://open.spotify.com/show/2hqdVrReOGD6SZQ4uKuz7c.
- Instagram: https://www.instagram.com/NeuraPodcast.
- Facebook: https://www.facebook.com/NeuraPod.
- Tiktok: https://www.tiktok.com/@neurapod.

Opinions are my own. Neura Pod receives no compensation from Neuralink and has no formal affiliations with the company. I own Tesla stock and/or derivatives.

Edited by: Omar Olivares.

Artificial neural networks are famously inspired by their biological counterparts. Yet compared to human brains, these algorithms are highly simplified, even “cartoonish.”
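To make that simplification concrete: a standard artificial neuron collapses a biological cell’s dendritic trees, spike timing, and neurotransmitter dynamics into a weighted sum passed through a fixed nonlinearity, as in this minimal sketch (the numbers are arbitrary).

```python
# A single artificial "neuron": the entire biological cell is reduced to a
# weighted sum of its inputs plus a bias, passed through a fixed nonlinearity.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    return float(np.maximum(0.0, inputs @ weights + bias))   # ReLU activation

x = np.array([0.2, -1.3, 0.7])    # "presynaptic" activity (arbitrary values)
w = np.array([0.5, 0.1, -0.4])    # learned synaptic weights
print(neuron(x, w, bias=0.05))    # a single number out: no spikes, no dynamics
```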

Can they teach us anything about how the brain works?

For a panel at the Society for Neuroscience annual meeting this month, the answer is yes. Deep learning wasn’t meant to model the brain. In fact, it contains elements that are biologically improbable, if not utterly impossible. But that’s not the point, argues the panel. By studying how deep learning algorithms perform, we can distill high-level theories for the brain’s processes—inspirations to be further tested in the lab.