Upper row: Associate American Corner librarian Donna Lyn G. Labangon, Space Apps global leader Dr. Paula S. Bontempi, former DICT Usec. Monchito B. Ibrahim, Animo Labs executive director Mr. Federico C. Gonzalez, DOST-PCIEERD deputy executive director Engr. Raul C. Sabularse, PLDT Enterprise Core Business Solutions vice president and head Joseph Ian G. Gendrano, lead organizer Michael Lance M. Domagas, and Animo Labs program manager Junnell E. Guia. Lower row: Dominic Vincent D. Ligot, Frances Claire Tayco, Mark Toledo, and Jansen Dumaliang Lopez of the Aedes project.

MANILA, Philippines — A dengue case forecasting system using space data, built by Philippine developers, won the 2019 National Aeronautics and Space Administration (NASA) International Space Apps Challenge. Out of more than 29,000 participants across 71 countries, the solution was named one of six global winners, taking the award for best use of data: the solution that best makes space data accessible or leverages it for a unique application.

Dengue fever is a viral, infectious tropical disease spread primarily by female Aedes aegypti mosquitoes. With the World Health Organization reporting 271,480 cases and 1,107 deaths from January 1 to August 31, 2019, Dominic Vincent D. Ligot, Mark Toledo, Frances Claire Tayco, and Jansen Dumaliang Lopez of CirroLytix developed a model that forecasts dengue cases from climate and digital data and pinpoints possible hotspots from satellite data.

Sentinel-2 Copernicus and Landsat 8 satellite data are used to reveal potential dengue hotspots.

By correlating imagery from the Copernicus Sentinel-2 and Landsat 8 satellites, climate data from the Philippine Atmospheric, Geophysical and Astronomical Services Administration of the Department of Science and Technology (DOST-PAGASA), and Google search trends, the system displays potential dengue hotspots in a web interface.

Using satellite spectral bands such as green, red, and near-infrared (NIR), indices such as the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) and the Normalized Difference Vegetation Index (NDVI) are calculated to identify areas with green vegetation, while the Normalized Difference Water Index (NDWI) identifies areas with water. Combining these indices reveals potential areas of stagnant water capable of serving as mosquito breeding grounds; these are extracted as coordinates through QGIS, a free and open-source cross-platform desktop geographic information system.
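
As a rough illustration, the following minimal Python sketch combines NDVI and NDWI to flag candidate stagnant-water pixels. The band file names, the use of the rasterio library, and the thresholds are illustrative assumptions, not the AEDES team's actual pipeline.

```python
# Hedged sketch: combine NDVI and NDWI to flag candidate stagnant-water
# pixels. File names and thresholds are illustrative assumptions.
import numpy as np
import rasterio

def read_band(path: str) -> np.ndarray:
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

green = read_band("B03_green.tif")  # hypothetical per-band exports
red = read_band("B04_red.tif")
nir = read_band("B08_nir.tif")

eps = 1e-6  # guard against division by zero
ndvi = (nir - red) / (nir + red + eps)      # vegetation index
ndwi = (green - nir) / (green + nir + eps)  # water index (McFeeters)

# Pixels showing a water signal but little healthy vegetation are
# candidates for stagnant water; both cutoffs are assumptions.
candidates = (ndwi > 0.0) & (ndvi < 0.2)
print(f"{candidates.sum()} candidate pixels flagged")
```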

https://www.youtube.com/watch?v=uzpI775XoY0

Check out the website here: http://aedesproject.org/

Winners visit the Philippine Earth Data Resource and Observation (PEDRO) Center at the DOST-Advanced Science and Technology Institute in Diliman, Quezon City with Dr. Joel Joseph S. Marciano, Jr.

“AEDES aims to improve public health response against dengue fever in the Philippines by pinpointing possible hotspots using Earth observations,” Dr. Argyro Kavvada of NASA Earth Science and Booz Allen Hamilton explained.

Engr. Raul C. Sabularse, deputy executive director of the DOST-Philippine Council for Industry, Energy and Emerging Technology Research and Development (DOST-PCIEERD), said that the winning solution “benefits the community especially those countries suffering from malaria and dengue, just like the Philippines. I think it has a global impact. This is the new science to know the potential areas where dengue might occur. It is a good app.”

“It is very relevant to the Philippines and other countries which usually having problems with dengue. The team was able to show that it’s not really difficult to have all the data you need and integrate all of them and make them accessible to everyone for them to be able to use it. It’s a working model,” according to Monchito B. Ibrahim, industry development committee chairman of the Analytics Association of the Philippines and former undersecretary of the Department of Information and Communications Technology.

Biological oceanographer Dr. Paula S. Bontempi, acting deputy director of the Earth Science Mission, NASA’s Science Mission Directorate and the current leader of the Space Apps global organizing team

The leader of the Space Apps global organizing team, Dr. Paula S. Bontempi, acting deputy director of the Earth Science Mission in NASA’s Science Mission Directorate, remembers the winning team’s pitch from when she led the hackathon in Manila. “They were terrific. Well deserved!” she said.

“I am very happy we landed in the winning circle. This would be a big help particularly in addressing our health-related problems. One of the Sustainable Development Goals (SDGs) is on Good Health and Well-Being, and the problem they are trying to address is analysis related to dengue,” said Science and Technology Secretary Fortunato T. de la Peña. Rex Lor from the United Nations Development Programme (UNDP) in the Philippines explained that the winning solution showcases the “pivotal role of cutting-edge digital technologies in the creation of strategies for sustainable development in the face of evolving development issues.”

U.S. Public Affairs counselor Philip W. Roskamp and PLDT Enterprise Core Business Solutions vice president and head Joseph Ian G. Gendrano congratulate the next group of Pinoy winners.

Sec. de la Peña is also very happy about this second victory for the Philippines in NASA’s global competition. The first winning solution, ISDApp, uses “data analysis, particularly NASA data, to be able to help our fishermen make decisions on when is the best time to catch fish.” It is currently being incubated by Animo Labs, the technology business incubator and Fab Lab of De La Salle University, in partnership with DOST-PCIEERD. Project AEDES will likewise be incubated by Animo Labs.

University president Br. Raymundo B. Suplido FSC hopes that NASA Space Apps would “encourage our young Filipino researchers and scientists to create ideas and startups based on space science and technology, and pave the way for the promotion and awareness of the programs of our own Philippine space agency.”

Philippine vice president Leni Robredo recognized Space Apps as a platform “where some of our country’s brightest minds can collaborate in finding and creating solutions to our most pressing problems, not just in space, but more importantly here on Earth.”

“Space Apps is a community of scientists and engineers, artists and hackers coming together to address key issues here on Earth. At the heart of Space Apps are data that come to us from spacecraft flying around Earth, looking at our world,” explained Dr. Thomas Zurbuchen, NASA associate administrator for science.

“Personally, I’m more interested in supporting the startups that are coming out of the Space Apps Challenge,” according to DOST-PCIEERD executive director Dr. Enrico C. Paringit.

In the Philippines, Space Apps is a NASA-led initiative organized in collaboration with De La Salle University, Animo Labs, DOST-PCIEERD, PLDT InnoLab, American Corner Manila, the U.S. Embassy, and software developer Michael Lance M. Domagas, and it celebrates Design Week Philippines with the Design Center of the Philippines of the Department of Trade and Industry. It is globally organized by Booz Allen Hamilton, Mindgrub, and SecondMuse.

Space Apps is a NASA incubator innovation program. The next hackathon will be on October 2–4, 2020.

#SpaceApps #SpaceAppsPH

Filipino developers gather together to address real-world problems on Earth and space using NASA’s free and open source data.

Artificial Intelligence (AI) is an emerging field of computer programming that is already changing the way we interact online and in real life, but the term ‘intelligence’ has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness as that’s the real driver behind intelligent decisions.

Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.

Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have is artificial serfs—computers with the ability to trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, but they’re blind to any real intelligence. What’s needed is artificial awareness.

Elon Musk has called AI the “biggest existential threat” facing humanity and likened it to “summoning a demon,”[1] while Stephen Hawking thought it would be the “worst event” in the history of civilization and could “end with humans being replaced.”[2] Although this sounds alarmist, like something from a science fiction movie, both concerns are founded on a well-established scientific premise found in biology—the principle of competitive exclusion.[3]

Competitive exclusion describes a natural phenomenon first outlined by Charles Darwin in On the Origin of Species. In short, when two species compete for the same resources, one will invariably win over the other, driving it to extinction. Forget about meteorites killing the dinosaurs or supervolcanoes wiping out life; this principle describes how the vast majority of species have gone extinct over the past 3.8 billion years![4] Put simply, someone better came along, and that’s what Elon Musk and Stephen Hawking are concerned about.

When it comes to artificial intelligence, there’s no doubt computers have the potential to outpace humanity. Already, their ability to remember vast amounts of information with absolute fidelity eclipses our own. Computers regularly beat grandmasters at competitive strategy games such as chess, but can they really think? The answer is no, and this is a significant problem for AI researchers. The inability to think and reason properly leaves AI susceptible to manipulation. What we have today is dumb AI.

Rather than fearing some all-knowing malignant AI overlord, the threat we face comes from dumb AI as it’s already been used to manipulate elections, swaying public opinion by targeting individuals to distort their decisions. Instead of ‘the rise of the machines,’ we’re seeing the rise of artificial serfs willing to do their master’s bidding without question.

Russian President Vladimir Putin understands this better than most, and said, “Whoever becomes the leader in this sphere will become the ruler of the world,”[5] while Elon Musk commented that competition between nations to create artificial intelligence could lead to World War III.[6]

The problem is we’ve developed artificial stupidity. Our best AI lacks actual intelligence. The most complex machine learning algorithm we’ve developed has no conscious awareness of what it’s doing.

For all of the wonderful advances made by Tesla, its in-car Autopilot drove into the back of a bright red fire truck because it wasn’t programmed to recognize that specific object, and this highlights the problem with AI and machine learning: there’s no actual awareness of what’s being done or why.[7] What we need is artificial consciousness, not intelligence. A computer CPU with 18 cores, capable of processing 36 independent threads at 4 gigahertz and handling hundreds of millions of commands per second, doesn’t need more speed; it needs to understand the ramifications of what it’s doing.[8]

In the US, courts regularly use COMPAS, a complex computer algorithm that uses artificial intelligence to determine sentencing guidelines. Although it’s designed to reduce the judicial workload, COMPAS has been shown to be ineffective, proving no more accurate than random, untrained people at predicting the likelihood of someone reoffending.[9] At one point, its predictions of violent recidivism were only 20% accurate.[10] This highlights a perception bias with AI: complex technology is inherently trusted, and yet in this circumstance, tossing a coin would have been an improvement!

Dumb AI is a serious problem with serious consequences for humanity.

What’s the solution? Artificial consciousness.

It’s not enough for a computer system to be intelligent or even self-aware; psychopaths are self-aware. Computers need to be aware of others, and they need to understand cause and effect as it relates not just to humanity but to life in general, if they are to make truly intelligent decisions.

All of human progress can be traced back to one simple trait: curiosity, the ability to ask, “Why?” This one simple concept has led us not only to an understanding of physics and chemistry, but to the development of ethics and morals. We’ve not only asked why the sky is blue, but also why we are treated the way we are, and the answers to those questions have shaped civilization.

COMPAS needs to ask why it arrives at a certain conclusion about an individual. Rather than simply crunching probabilities that may or may not be accurate, it needs to understand the implications of freeing an individual weighed against the adversity of incarceration. Spitting out a number is not good enough.

In the same way, Tesla’s Autopilot needs to understand the implications of driving into a stationary fire truck at 65 mph, for the occupants of the vehicle, the fire crew, and the emergency they’re attending. These are concepts we intuitively grasp when we encounter such a situation. Having a computer manage the physics of the equation is not enough without it also understanding the moral component.

The advent of true artificial intelligence, one that has artificial consciousness, need not be the end-game for humanity. Just as humanity developed civilization and enlightenment, so too will AIs become our partners in life if they are built to be aware of morals and ethics.

Artificial intelligence needs culture as much as logic, ethics as much as equations, morals and not just machine learning. How ironic that the real danger of AI comes down to how much conscious awareness we’re prepared to give it. As long as AI remains our slave, we’re in danger.

tl;dr — Computers should value more than ones and zeroes.

About the author

Peter Cawdron is a senior web application developer for JDS Australia working with machine learning algorithms. He is the author of several science fiction novels, including RETROGRADE and REENTRY, which examine the emergence of artificial intelligence.

[1] Elon Musk at MIT Aeronautics and Astronautics department’s Centennial Symposium

[2] Stephen Hawking on Artificial Intelligence

[3] The principle of competitive exclusion is also called Gause’s Law, although it was first described by Charles Darwin.

[4] Peer-reviewed research paper on the natural causes of extinction

[5] Vladimir Putin in a televised address to the Russian people

[6] Elon Musk tweeting that competition to develop AI could lead to war

[7] Tesla car crashes into a stationary fire engine

[8] Fastest CPUs

[9] Recidivism predictions no better than random strangers

[10] Violent recidivism predictions only 20% accurate


What is the ultimate goal of Artificial General Intelligence?

In this video series, the Galactic Public Archives takes bite-sized looks at a variety of terms, technologies, and ideas that are likely to be prominent in the future. Terms are regularly changing and being redefined with the passing of time. With constant breakthroughs and the development of new technology and other resources, we seek to define what these things are and how they will impact our future.


In a previous essay, I suggested how we might do better with the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or providing a capacity to learn some set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, we can act with modest confidence after working to discover goals, developing an understanding of our discovery processes that lets us assert an equilibrium between the risk of doing something wrong and the cost of working to uncover more stakeholders and their goals. This approach promotes moderation, since undiscovered goals may contradict any particular action. In short, we’d like a superintelligence that applies the non-parametric intuition: we can’t know all the factors, but we can partially discover them with well-motivated trade-offs.

However, I’ve come to the perspective that the non-parametric intuition, while correct, can on its own be cripplingly misguided. Unfortunately, going through a discovery-rich design process doesn’t guarantee an appropriate outcome. It is possible for all of the apparently relevant sources to fail to reflect significant consequences.

How could one possibly do better than accepting this limitation, that relevant information is sometimes not present in all apparently relevant information sources? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that “staying grounded” in these conditions is one way to know that important design information is missing and seek it out. The Onion article “Man’s Garbage To Have Much More Significant Effect On Planet Than He Will” is one example of a common failure at living in a grounded way.

In other words, “staying grounded” means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. There are some goals that are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for those systems that sustain us and creatures like us. A functioning participation in these systems at a basic level means we should aim to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Suppose that there were a superintelligence whose individual agents have a capacity, compared to ours, such that we are to them as mice are to us. What might we reasonably hope for from the agents of such an intelligence? My hope is that these agents are ecologists who wish for us to flourish in our natural lifeways. This does not mean that they leave us all to our own preserves, though hopefully they will see the advantage of having some unaltered wilderness in which to observe how we choose to live when left to our own devices. Instead, we can be participants in patterned arrangements aimed at satisfying our needs in return for our engaged participation in larger systems of resource management. By this standard, our human systems might be found wanting by many living creatures today.

Given this, a productive approach to developing superintelligence would be concerned not only with its technical creation, but also with being in a position to demonstrate how all can flourish through good stewardship, setting a proper example for when these systems emerge and are trying to understand what goals should be like. We would also want the facts of its and our material conditions to be readily apparent, so that it doesn’t start from a disconnected and disembodied basis.

Overall, this means that in addition to the capacity to discover more goals, it would be instructive to supply this superintelligence with a schema for describing the relationships and conditions under which current participants flourish, as well as the goal of promoting such flourishing whenever the means are clear and circumstances indicate that such flourishing will not emerge of its own accord. This kind of information technology for ecological engineering might also be useful for our own purposes.

What will a superintelligence take as its flourishing? It is hard to say. However, hopefully it will find sustaining, extending, and promoting the flourishing of the ecology that allowed its emergence to be an inspiring, challenging, and creative goal.

I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.

The question about superintelligence I wish to address is the “paperclip universe” problem. Suppose that an industrial program, given the goal of maximizing the number of paperclips, is also equipped with a general intelligence program so as to tackle this objective in the most creative ways, as well as internet connectivity and text-processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not take its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.

This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that doesn’t admit some kind of loophole or paradox which, if pursued with mechanical single-mindedness, is either similarly narrowly destructive or self-defeating.

Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we’ve described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. If done well, we would have a system that couples this initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with an understanding that those goals are only appropriate to the initial list, and that finding new potential means requires developing a richer understanding of potential ends.
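
To make this concrete, here is a toy sketch of a goal set that is explicitly marked as never complete, so that any planner consuming it must treat undiscovered goals as a live possibility. The class, the method names, and the irreversibility cutoff are my own illustration, not an established design.

```python
from dataclasses import dataclass, field

@dataclass
class OpenGoalSet:
    """A goal list that admits its own incompleteness."""
    goals: dict[str, float] = field(default_factory=dict)  # goal name -> weight
    complete: bool = False  # by construction, this never becomes True

    def add_goal(self, name: str, weight: float) -> None:
        # Record a newly discovered goal; the set stays marked incomplete.
        self.goals[name] = weight

    def permits_confident_action(self, irreversibility: float) -> bool:
        # Hedge: the less reversible an action, the more discovery it
        # demands first. The 0.5 cutoff is an arbitrary illustration.
        return self.complete or irreversibility < 0.5

goals = OpenGoalSet()
goals.add_goal("maximize_paperclips", 1.0)
print(goals.permits_confident_action(irreversibility=0.1))  # True: low stakes
print(goals.permits_confident_action(irreversibility=0.9))  # False: discover more
```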

How can this work? It’s easy to imagine such an algorithmic admission leading to paralysis, whether from finding contradictory objectives that apparently admit no solution, or from an analysis paralysis that demands proof of no undiscovered goals before proceeding. Alternatively, if stated incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as the system proceeds single-mindedly to consume resources. Clearly, a satisfactory superintelligence would need to reason appropriately about the goal discovery process.

There is a profession that has figured out a heuristic form of reasoning about goal discovery processes: design. Designers coined the phrase “the fuzzy front end” for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch back and forth rapidly between candidate solutions and analyses of those designs’ potential impacts, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to reveal all of the “obvious ideas” before undertaking a deeper analysis. Seasoned designers develop a sense of when stakeholders are holding back and need to be prompted, or when equivocating stakeholders should be encouraged to move on. Designers will interleave a series of prototypes, experiential exercises, and pilot runs into their work to make sure that interventions really behave the way their analysis seems to indicate.

These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. Nonparametric does not mean that there are no parameters, but rather that the parameters are not given in advance, and that inferring the existence of further parameters is part of the task. Suppose that you were to move to a new town and ask around about the best restaurant. The first answer would certainly be new, but as you asked more people, new answers would arrive more and more rarely. The likelihood of each answer would also begin to converge. In some cases the answers will be concentrated on a few restaurants, and in other cases they will be more dispersed. Either way, once we have an idea of how concentrated the answers are, we might see that a particular stretch without new answers could just be bad luck, and that we should pursue further inquiry.
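
For a feel of the underlying model, here is a small simulation of the restaurant survey using a Chinese restaurant process, a standard construction in nonparametric Bayesian inference. The population size and concentration parameter are invented for illustration.

```python
import random

def crp_survey(n_people: int, alpha: float, seed: int = 0):
    """Simulate asking n_people for the best restaurant in town.

    alpha is the concentration parameter: higher values keep answers
    dispersed; lower values concentrate them on a few restaurants.
    """
    rng = random.Random(seed)
    counts = []   # counts[i] = how many people named restaurant i
    novelty = []  # True where an answer had never been heard before
    for n in range(n_people):
        # A brand-new answer arrives with probability alpha / (n + alpha),
        # so novelty is certain at first and increasingly rare later.
        if rng.random() < alpha / (n + alpha):
            counts.append(1)
            novelty.append(True)
        else:
            # Otherwise repeat an existing answer, proportional to popularity.
            r = rng.random() * n
            cumulative = 0
            for i, c in enumerate(counts):
                cumulative += c
                if r < cumulative:
                    counts[i] += 1
                    break
            novelty.append(False)
    return counts, novelty

counts, novelty = crp_survey(200, alpha=3.0)
print("distinct answers:", len(counts))
print("new answers among the first 20 asked:", sum(novelty[:20]))
print("new answers among the last 20 asked:", sum(novelty[-20:]))
```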

Asking why provides a list of critical features that can be used to direct different inquiries that fill out the picture. What’s the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers, has the best value for money, is the tastiest, or has the friendliest service? Designers discover aspects of their goals in an open-ended way that lets discovery proceed in quick cycles of learning, taking on different aspects of the problem in turn. This behavior would suit an active learning formulation of relational nonparametric inference very well.

There is a point at which information-gathering activities are less helpful than attending to the feedback from activities that act more directly on existing goals. This happens when there is a cost/risk equilibrium between the cost of further discovery activities and the risk of making an intervention on incomplete information. In many circumstances, the line between information gathering and direct intervention is fuzzier, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far.
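
A stylized version of that equilibrium is a simple stopping rule: keep discovering while the expected loss averted by one more inquiry exceeds its cost. All of the quantities below are invented for illustration; estimating them well is the hard part the essay describes.

```python
def keep_discovering(p_new_goal: float, loss_if_missed: float,
                     cost_per_inquiry: float) -> bool:
    """Continue discovery while its expected value exceeds its cost."""
    return p_new_goal * loss_if_missed > cost_per_inquiry

# Early on: new goals are likely, so inquiry pays for itself.
print(keep_discovering(p_new_goal=0.30, loss_if_missed=100.0,
                       cost_per_inquiry=5.0))   # True: keep asking
# Later: novelty is rare, and the same inquiry no longer pays.
print(keep_discovering(p_new_goal=0.02, loss_if_missed=100.0,
                       cost_per_inquiry=5.0))   # False: act on known goals
```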

From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of the solution from the perspective of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or to whether the discovery process is at equilibrium. This mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than treating the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.

Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided unless supplemented with other ideas. To consider discovery analytically is not to discount the power of knowing about the unknown, but it doesn’t intrinsically value non-contingent truths. In my next essay, I will take on this topic.

For a more detailed explanation and an example of how to extend engineering design assessment to include nonparametric criteria, see The Methodological Unboundedness of Limited Discovery Processes. Form Academisk, 7:4.


“Instead, Descartes relies on 4 petabytes of satellite imaging data and a machine learning algorithm to figure out how healthy the corn crop is from space.”
