
Extracting Insight from the Data Deluge Is a Hard-to-Do Must-Do


A mantra of these data-rife times is that the vast and growing volumes of diverse data, such as sensor feeds, economic indicators, and scientific and environmental measurements, contain dots of significance that could tell important stories, if only those dots could be identified and connected in meaningful ways. Mastering that exercise of data synthesis and interpretation ought to open new and faster routes to identifying threats, tracking disease outbreaks, and otherwise answering questions and solving problems that were previously intractable.
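The "connecting the dots" idea can be made concrete as a graph problem: treat each event as a node, link events that share an attribute, and read off the connected clusters as candidate stories. The Python sketch below is purely illustrative and is not DARPA's or any performer's design; it uses the open-source networkx library, and the event records and every name in them are invented for this example.

```python
# Illustrative only: "dots" (events) become graph nodes, shared attribute
# values (a location, an account, a pathogen signature) become edges, and
# connected components are the candidate "stories". All data is made up.
from collections import defaultdict

import networkx as nx  # any graph library would serve the same purpose

# Toy event records: each dot carries the attributes it was observed with.
events = {
    "sensor_hit_17":   {"place": "port_A", "signature": "S1"},
    "shipment_0042":   {"place": "port_A", "account": "acct_9"},
    "transaction_88":  {"account": "acct_9"},
    "clinic_report_3": {"signature": "S1", "strain": "H3N2"},
    "clinic_report_9": {"strain": "H7N9"},  # an unrelated dot
}

# Link any two events that share an attribute value.
G = nx.Graph()
G.add_nodes_from(events)
by_value = defaultdict(list)
for event, attrs in events.items():
    for value in attrs.values():
        by_value[value].append(event)
for value, linked in by_value.items():
    for a, b in zip(linked, linked[1:]):
        G.add_edge(a, b, shared=value)

# Each connected component is a chain of dots joined by shared facts.
for component in nx.connected_components(G):
    print(sorted(component))
```

Even at toy scale, the pattern-matching and pointer-chasing in an analysis like this is irregular and memory-bound rather than compute-bound, which is the kind of workload the paragraph below says today's hardware handles poorly.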

Now for a reality check. “Today’s hardware is ill-suited to handle such data challenges, and these challenges are only going to get harder as the amount of data continues to grow exponentially,” said Trung Tran, a program manager in DARPA’s Microsystems Technology Office (MTO). To take on that technology shortfall, MTO last summer unveiled its Hierarchical Identify Verify Exploit (HIVE) program, which has now signed on five performers to carry out HIVE’s mandate: to develop a powerful new data-handling and computing platform specialized for analyzing and interpreting huge amounts of data with unprecedented deftness. “It will be a privilege to work with this innovative team of performers to develop a new category of server processors specifically designed to handle the data workloads of today and tomorrow,” said Tran, who is overseeing HIVE.

The quintet of performers includes a mix of large commercial electronics firms, a national laboratory, a university, and a veteran defense-industry company: Intel Corporation (Santa Clara, California), Qualcomm Intelligent Solutions (San Diego, California), Pacific Northwest National Laboratory (Richland, Washington), Georgia Tech (Atlanta, Georgia), and Northrop Grumman (Falls Church, Virginia).

