Researchers have developed a new artificial intelligence (AI)-based technique that can detect low blood-sugar levels from raw ECG signals gathered by wearable sensors, without any finger-prick test. Current methods of measuring glucose require needles and repeated finger-pricks throughout the day. Finger-pricks can often be painful, deterring patient compliance.
The new technique, developed by researchers at the University of Warwick, works with 82 per cent reliability and could remove the need for invasive finger-prick testing, which is especially welcome news for children who are afraid of needles.
“Our innovation consisted in using AI for automatic detecting hypoglycaemia via few ECG beats. This is relevant because ECG can be detected in any circumstance, including sleeping,” said Dr Leandro Pecchia of the University of Warwick’s School of Engineering. The work is published in the Springer Nature journal Scientific Reports.
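For readers who want to see the shape of such a detector, here is a minimal sketch of a beat-level classifier in the same spirit: a small one-dimensional convolutional network mapping a fixed-length window of raw ECG samples to a hypoglycaemia score. The architecture, window length, and layer sizes are illustrative assumptions, not details taken from the Warwick paper.

```python
# Hypothetical sketch only: a per-beat binary classifier for
# hypoglycaemia from raw ECG. All sizes are assumptions.
import torch
import torch.nn as nn

class BeatClassifier(nn.Module):
    def __init__(self, beat_len=300):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # one feature vector per beat
        )
        self.head = nn.Linear(32, 2)           # normal vs. hypoglycaemic

    def forward(self, x):                      # x: (batch, 1, beat_len)
        return self.head(self.features(x).squeeze(-1))

model = BeatClassifier()
beats = torch.randn(8, 1, 300)                # 8 simulated single-beat windows
logits = model(beats)                         # per-beat hypoglycaemia scores
```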
We’re at a fascinating point in the discourse around artificial intelligence (AI) and all things “smart”. At one level, we may be reaching “peak hype”, with breathless claims and counter-claims about the potential societal impacts of disruptive technologies. Everywhere we look, there’s earnest discussion of AI and its exponentially advancing sisters – blockchain, sensors, the Internet of Things (IoT), big data, cloud computing, 3D / 4D printing, and hyperconnectivity. At another level, for many, it is worrying to hear politicians and business leaders talking with confidence about the transformative potential and societal benefits of these technologies in applications ranging from smart homes and cities to intelligent energy and transport infrastructures.
Why the concern? Well, these same leaders seem helpless to deal with any kind of adverse weather incident, ground 70,000 passengers worldwide with no communication because someone flicked the wrong switch, and rush between Brexit crisis meetings while pretending they have a coherent strategy. Hence, there’s growing concern that we’ll see genuine stupidity in the choices made about how we deploy ever more powerful smart technologies across our infrastructure for society’s benefit. So, what intelligent choices could ensure that intelligent tools genuinely serve humanity’s best future interests?
Firstly, we are becoming a society of connected things with appalling connectivity. Literally every street lamp, road sign, car component, object we own, and item of clothing we wear could be carrying a sensor in the next five to ten years. With a trillion-plus connected objects throwing off a continuous stream of information, we are talking about a shift from big to humungous data. The challenge is how we’ll transport all that information. For Britain to realise its smart-nation goals and attract the industries of tomorrow in the post-Brexit world, it seems imperative that we have broadband speeds that put us amongst the five fastest nations on the planet. This doesn’t appear to be part of the current plan.
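A back-of-envelope calculation makes the scale concrete; the per-device data rate below is a pure assumption, chosen only to show the order of magnitude involved.

```python
# Rough arithmetic: aggregate traffic from a trillion chatty sensors.
devices = 1e12           # "a trillion plus connected objects"
bytes_per_second = 100   # assumed modest telemetry stream per device
aggregate = devices * bytes_per_second
print(f"{aggregate / 1e12:.0f} TB/s aggregate")  # -> 100 TB/s
```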
The second issue is governance of smart infrastructure. If we want to be driverless pioneers, then we need to lead on thinking around the ethical frameworks that govern autonomous vehicle decision making. This means defining clear rules around liability, and around the choice of who to hit in an accident. Facial recognition technology allows identification of most potential victims, and vehicles could calculate instantly our current and potential societal contribution. The information is available; what will we choose to do with it? Similarly, when smart traffic infrastructures know who is driving, and drones can allow individualised navigation, how will we use that information in traffic management choices? In a traffic jam, who will be allowed onto the hard shoulder? Will we prioritise doctors on emergency calls, executives of major employers, or school teachers educating our young?
At the physical level, globally we see experiments with innovations such as solar roadways and self-monitoring, self-repairing surfaces. We can of course wait until these technologies are proven, commercialised, and expensive. Or we can recognise the market opportunity of piloting such innovations, accelerate the development of the ventures that are commercialising them, deliver genuinely smarter infrastructure in advance of many competitor nations, and create leadership opportunities in these new global markets.
The final issue I’d like to highlight is that of speed. Global construction firms are delivering 57-storey buildings in 19 days and completing roadways in China and Dubai at three to four times the speed of the UK. The capabilities exist, and the potential for exponential cost and time savings is evident. We can continue to find genuinely stupid reasons not to innovate, or we can give ourselves permission to experiment with these new techniques. Again, the result would be enhanced infrastructure provision to UK society whilst at the same time creating globally exportable capabilities.
As we look to the future, it will become increasingly apparent that the payoff from smart infrastructure will be even more dependent on the intelligence of our decision making than on the applications and technologies we deploy.
ABOUT THE AUTHOR
Rohit Talwar is a global futurist, award-winning keynote speaker, author, and the CEO of Fast Future. His prime focus is on helping clients understand and shape the emerging future by putting people at the center of the agenda. Rohit is the co-author of Designing Your Future, lead editor and a contributing author for The Future of Business, and editor of Technology vs. Humanity. He is a co-editor and contributor for the recently published Beyond Genuine Stupidity – Ensuring AI Serves Humanity and The Future Reinvented – Reimagining Life, Society, and Business, and two forthcoming books — Unleashing Human Potential – The Future of AI in Business, and 50:50 – Scenarios for the Next 50 Years.
Biological organisms have certain useful attributes that synthetic robots do not, such as the abilities to heal, adapt to new situations, and reproduce. Yet molding biological tissues into robots or tools has been exceptionally difficult to do: Experimental techniques, such as altering a genome to make a microbe perform a specific task, are hard to control and not scalable.
Now, a team of scientists at the University of Vermont and Tufts University in Massachusetts has used a supercomputer to design novel lifeforms with specific functions, then built those organisms out of frog cells.
The new, AI-designed biological bots crawl around a petri dish and heal themselves. Surprisingly, the biobots also spontaneously self-organize and clear their dish of small trash pellets.
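The designs came out of an evolutionary search run in simulation. The sketch below illustrates that general loop under stated assumptions: the grid encoding, the mutation step, and the toy fitness function are stand-ins for the team's physics-based simulator, not their actual pipeline.

```python
# Hedged sketch of an evolutionary design loop: candidate bodies are
# grids of passive (0) vs. contractile (1) cells, scored by a stand-in
# fitness function, and the best designs seed the next generation.
import random

GRID = 8  # design space: GRID x GRID cell layout

def random_design():
    return [[random.choice((0, 1)) for _ in range(GRID)] for _ in range(GRID)]

def mutate(design):
    child = [row[:] for row in design]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] ^= 1  # flip one cell between passive and contractile
    return child

def fitness(design):
    # Placeholder: the real work scores locomotion in a physics
    # simulator; here we merely reward a balanced mix of cell types.
    active = sum(map(sum, design))
    return -abs(active - GRID * GRID // 2)

population = [random_design() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```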
As the U.S. Army increasingly uses facial and object recognition to train artificially intelligent systems to identify threats, the need to protect its systems from cyberattacks becomes essential.
An Army project conducted by researchers at Duke University, led by electrical and computer engineering faculty members Dr. Helen Li and Dr. Yiran Chen, has made significant progress toward mitigating these types of attacks. Two members of the Duke team, Yukun Yang and Ximing Qiao, recently took first prize in the Defense category of the CSAW ’19 HackML competition.
“Object recognition is a key component of future intelligent systems, and the Army must safeguard these systems from cyberattacks,” said MaryAnne Fields, program manager for intelligent systems at the Army Research Office. “This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the object recognition system is subtly altered to give incorrect answers. Safeguarding object recognition systems will ensure that future Soldiers will have confidence in the intelligent systems they use.”
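To see what such a backdoor looks like, here is a minimal sketch of the data-poisoning step these defenses must catch: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped, so the trained model behaves normally except when the trigger appears. The trigger shape, poisoning rate, and array sizes are illustrative assumptions, not details of the Duke team's work.

```python
# Illustrative backdoor ("trojan") poisoning of an image training set.
import numpy as np

def poison(images, labels, target_class, rate=0.05, patch=3):
    """Stamp a white square trigger on `rate` of the images and
    relabel them as `target_class`."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0   # bottom-right trigger patch
    labels[idx] = target_class
    return images, labels

# e.g. 1,000 grayscale 28x28 images across 10 classes
x = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned = poison(x, y, target_class=0)
```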
The information-processing capabilities of the brain are often reported to reside in the trillions of connections that wire its neurons together. But over the past few decades, mounting research has quietly shifted some of the attention to individual neurons, which seem to shoulder far more computational responsibility than was once imagined.
The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.
“I believe that we’re just scratching the surface of what these neurons are really doing,” said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month.
The discovery underscores a growing need for studies of the nervous system to consider the implications of individual neurons as extensive information processors. “Brains may be far more complicated than we think,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania, who did not participate in the recent work. It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.
The Limitations of Dumb Neurons
In the 1940s and ’50s, a picture began to dominate neuroscience: that of the “dumb” neuron, a simple integrator, a point in a network that merely summed up its inputs. Branched extensions of the cell, called dendrites, would receive thousands of signals from neighboring neurons — some excitatory, some inhibitory. In the body of the neuron, all those signals would be weighted and tallied, and if the total exceeded some threshold, the neuron fired a series of electrical pulses (action potentials) that directed the stimulation of adjacent neurons.
At around the same time, researchers realized that a single neuron could also function as a logic gate, akin to those in digital circuits (although it still isn’t clear how much the brain really computes this way when processing information). A neuron was effectively an AND gate, for instance, if it fired only after receiving some sufficient number of inputs.
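A short sketch makes both points concrete: a single weighted-sum-and-threshold unit happily implements AND and OR, but no choice of weights and threshold yields XOR, since no single line separates the inputs that should fire from those that shouldn't. The weights below are one illustrative choice.

```python
# A "dumb" point neuron as a threshold gate, and why XOR defeats it.
def neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda a, b: neuron((a, b), (1, 1), threshold=2)
OR  = lambda a, b: neuron((a, b), (1, 1), threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))

# XOR must fire for (0,1) and (1,0) but not (0,0) or (1,1).  No line
# w1*a + w2*b = t separates those cases, so no single weighted-sum
# unit computes XOR; classically it takes a two-layer network.
```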
The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks.
Ferroelectric Semiconductor Junctions (FSJs) in Neuromorphic Chips
Engineers at Purdue University and at Georgia Tech have constructed the first devices from a new kind of two-dimensional material that combines memory-retaining properties and semiconductor properties. The engineers used a newly discovered ferroelectric semiconductor, alpha indium selenide, in two applications: as the basis of a type of transistor that stores memory as the amount of amplification it produces; and in a two-terminal device that could act as a component in future brain-inspired computers. The latter device was unveiled last month at the IEEE International Electron Devices Meeting in San Francisco.
Ferroelectric materials become polarized in an electric field and retain that polarization even after the field has been removed. Ferroelectric RAM cells in commercial memory chips use the former ability to store data in a capacitor-like structure. Recently, researchers have been trying to coax more tricks from these ferroelectric materials by bringing them into the transistor structure itself or by building other types of devices from them.
In particular, they’ve been embedding ferroelectric materials into a transistor’s gate dielectric, the thin layer that separates the electrode responsible for turning the transistor on and off from the channel through which current flows. Researchers have also been seeking a ferroelectric equivalent of memristors, or resistive RAM: two-terminal devices that store data as resistance. Such devices, called ferroelectric tunnel junctions, are particularly attractive because they could be made into a very dense memory configuration called a crossbar array. Many researchers working on neuromorphic and low-power AI chips use memristors to act as the neural synapses in their networks. But so far, building working ferroelectric tunnel junction memories has proved difficult.
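The appeal of the crossbar layout is worth spelling out. With synaptic weights stored as device conductances, applying the input voltages to the rows performs an entire vector-matrix multiplication in one analog step, by Ohm's and Kirchhoff's laws. A minimal numerical sketch (sizes and values are arbitrary):

```python
# Crossbar as analog matrix multiplier: column currents I = V @ G.
import numpy as np

rows, cols = 4, 3                              # 4 inputs, 3 output neurons
G = np.random.uniform(0, 1e-6, (rows, cols))   # conductances (siemens)
V = np.array([0.2, 0.0, 0.5, 0.1])             # input voltages (volts)

I = V @ G   # each column current sums V_i * G_ij: a synaptic weighted sum
print(I)    # amperes flowing out of each column
```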
A lack of tools to precisely control gene expression has limited our ability to evaluate relationships between expression levels and phenotypes. Here, we describe an approach to titrate expression of human genes using CRISPR interference and a series of single-guide RNAs (sgRNAs) with systematically modulated activities. We used large-scale measurements across multiple cell models to characterize activities of sgRNAs containing mismatches to their target sites and derived rules governing mismatched sgRNA activity using deep learning. These rules enabled us to synthesize a compact sgRNA library to titrate expression of ~2,400 genes essential for robust cell growth and to construct an in silico sgRNA library spanning the human genome. Staging cells along a continuum of gene expression levels combined with single-cell RNA-seq readout revealed sharp transitions in cellular behaviors at gene-specific expression thresholds. Our work provides a general tool to control gene expression, with applications ranging from tuning biochemical pathways to identifying suppressors for diseases of dysregulated gene expression.
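As a hedged sketch of the library-design idea, the snippet below picks one guide per target activity level from a pool of mismatched variants. The random activity scores stand in for the deep-learning-derived predictions, and the helper name is hypothetical.

```python
# Stand-in sketch: choose a compact series of mismatched sgRNAs whose
# predicted activities tile a range of knockdown levels for one gene.
import random

def design_series(guides_with_activity, targets=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Pick the guide whose predicted activity is closest to each target."""
    return [min(guides_with_activity, key=lambda ga: abs(ga[1] - t))
            for t in targets]

# 30 mismatched variants with stand-in predicted activities in [0, 1]
candidates = [(f"sgRNA_v{i}", random.random()) for i in range(30)]
print(design_series(candidates))
```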