Shrinking massive neural networks used to model language

Deep learning neural networks can be massive, demanding major computing power. In a test of the “lottery ticket hypothesis,” MIT researchers have found leaner, more efficient subnetworks hidden within BERT models. The discovery could make natural language processing more accessible.
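To make the idea concrete, here is a minimal sketch of the prune-rewind-retrain loop at the heart of the lottery ticket hypothesis, written in PyTorch with torch.nn.utils.prune. The tiny model, 50% sparsity level, and synthetic training data are illustrative assumptions for brevity, not the researchers' actual BERT setup.

```python
# A minimal sketch of lottery-ticket-style magnitude pruning.
# The model, data, and sparsity level here are illustrative, not
# the MIT researchers' BERT experiments.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())  # save the "ticket" initialization

def train(model, steps=100):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(steps):
        x = torch.randn(32, 128)              # synthetic stand-in data
        y = torch.randint(0, 2, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

train(model)  # 1. train the dense network

# 2. prune the 50% smallest-magnitude weights in each Linear layer
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# 3. rewind the surviving weights (and biases) to their initial values;
#    the pruning masks keep the removed connections at zero
with torch.no_grad():
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            module.weight_orig.copy_(init_state[f"{name}.weight"])
            module.bias.copy_(init_state[f"{name}.bias"])

train(model)  # 4. retrain the sparse subnetwork
```

In the full procedure, this prune-rewind-retrain cycle repeats until the target sparsity is reached; a subnetwork that matches the dense model's accuracy is the "winning ticket."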
