
Architecture has evolved into much more than design realized in concrete and modern building materials. It has been transformed into a discipline that helps humanity pursue sustainability in all its forms.

eVolo Magazine has organized another round of its Skyscraper Competition in 2017 to honor visionaries who try to realize a future that benefits humanity and the one Earth we all need to cherish and sustain.

A team of aspiring architects from Spain (Arturo Emilio Garrido Ontiveros, Andrés Pastrana Bonillo, Judit Pinach Martí, and Alex Tintea) proposes a hybrid solution to ensure humanity’s survival in the early days of Mars colonization. The skyscraper design is both clever and beautiful, combining existing technologies with many practical ideas to open up and terraform more red soil as our understanding of the planet grows. It is a genesis for Mars and a revival of form following function.

Read more

Audio engineering can make computerized customer support lines seem friendlier and more helpful.

Say you’re on the phone with a company and the automated virtual assistant needs a few seconds to “look up” your information. And then you hear it. The sound is unmistakable. It’s familiar. It’s the clickity-clack of a keyboard. You know it’s just a sound effect, but unlike hold music or a stream of company information, it’s not annoying. In fact, it’s kind of comforting.

Michael Norton and Ryan Buell of the Harvard Business School studied this idea: that customers appreciate knowing that work is being done on their behalf, even when the only “person” “working” is an algorithm. They call it the labor illusion.

Read more

“This handbook will:

  • help architects better understand their role and how to prepare for and respond to disasters
  • prepare AIA Component staff to engage and coordinate their architect members and provide community discourse and assistance
  • explain how built environment professionals can work with architects and the community on disaster response and preparedness efforts
  • inform municipal governments of the unique ways architects assist the public and their clients in mitigating, responding to and recovering from disasters”

Read more

I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.

The question about superintelligence I wish to address is the “paperclip universe” problem. Suppose that an industrial program, given the goal of maximizing the number of paperclips, is also equipped with a general intelligence program so that it can tackle this objective in the most creative ways, as well as internet connectivity and text-processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not take its current resources as appropriate constraints, but instead becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.

This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about what those goals are. However, we might find it difficult to formulate a set of goals that doesn’t admit some loophole or paradox which, if pursued with mechanical single-mindedness, is either similarly destructive or self-defeating.

Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we’ve described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. Done well, we would have a system that couples its initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with the understanding that those goals are appropriate only to that initial list, and that finding new potential means requires developing a richer understanding of potential ends.

How can this work? It’s easy to imagine such an algorithmic admission leading to paralysis, whether from finding contradictory objectives that apparently admit no solution, or from an analysis paralysis that demands proof that no goals remain undiscovered before proceeding. Alternatively, if stated incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as the program single-mindedly consumes resources. Clearly, a satisfactory superintelligence would need to reason appropriately about the goal-discovery process.

There is a profession that has figured out a heuristic form of reasoning about goal-discovery processes: designers. Designers coined the phrase “the fuzzy front end” for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch rapidly back and forth between candidate solutions and analysis of those designs’ potential impacts, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to surface all of the “obvious ideas” before undertaking a deeper analysis. Seasoned designers develop a sense of when stakeholders are holding back and need to be prompted, and when equivocating stakeholders should be encouraged to move on. Designers interleave prototypes, experiential exercises, and pilot runs into their work to make sure that interventions really behave the way their analysis seems to indicate.

These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. Nonparametric does not mean that there are no parameters; rather, the parameters are not given in advance, and inferring that there are further parameters is part of the task. Suppose you were to move to a new town and ask around about the best restaurant. The first answer would certainly be new, but as you asked more people, new answers would arrive more and more rarely. The likelihood of each answer would also begin to converge. In some cases the answers will be concentrated on a few restaurants, and in other cases they will be more dispersed. Either way, once we have an idea of how concentrated the answers are, we might recognize that a particular stretch without new answers is just bad luck and that further inquiry is worth pursuing.
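This restaurant story is, in effect, the Chinese restaurant process, a staple of nonparametric Bayesian modeling. The sketch below (a toy simulation of my own; the concentration parameter alpha and all of the numbers are illustrative assumptions, not anything from the essay) shows how new answers keep arriving, but at a predictably slowing rate:

```python
import random

def ask_townspeople(n_queries, alpha=2.0, seed=0):
    """Toy Chinese-restaurant-process simulation of asking around town.

    Each query names a previously unheard-of restaurant with probability
    alpha / (i + alpha); otherwise it repeats an earlier answer with
    probability proportional to that answer's popularity so far.
    """
    rng = random.Random(seed)
    counts = []  # counts[k] = how many times restaurant k has been named
    for i in range(n_queries):
        if rng.random() < alpha / (i + alpha):
            counts.append(1)  # a brand-new answer
        else:
            r = rng.random() * i  # pick an earlier answer by popularity
            for k, c in enumerate(counts):
                r -= c
                if r < 0:
                    counts[k] += 1
                    break
    return counts

counts = ask_townspeople(200)
n = sum(counts)
print(f"{len(counts)} distinct restaurants named in {n} queries")
# With alpha = 2.0, the chance the very next answer is new never hits zero:
print(f"P(next answer is new) = {2.0 / (n + 2.0):.3f}")
```

The payoff of the model is that last line: an estimate of the concentration tells you how unlucky a dry spell of no new answers would have to be before further inquiry stops being worthwhile.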

Asking why an answer is best yields a list of critical features that can direct different inquiries to fill out the picture. What’s the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers, has the best value for money, is the tastiest, has the friendliest service? Designers discover aspects of their goals in an open-ended way that lets discovery proceed in quick cycles of learning, taking on different aspects of the problem in turn. This behavior maps well onto an active learning formulation of relational nonparametric inference.
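As a toy sketch of that active-learning loop (my own construction, not the relational formulation the essay gestures at; the aspect names and the rough alpha estimate are assumptions made for illustration), one could track the answers gathered about each aspect and probe next wherever a new discovery is most likely:

```python
import math

def estimate_alpha(answers):
    """Rough estimate of CRP concentration: with n answers and k distinct
    ones, the expected k grows like alpha * log(n), so use k / log(n)."""
    n, k = len(answers), len(set(answers))
    if n < 2:
        return 1.0  # no information yet; fall back to a default prior
    return max(k / math.log(n), 1e-3)

def next_aspect_to_probe(history):
    """Pick the facet of the problem whose next question is most likely
    to turn up something new: p_new = alpha / (n + alpha) under a CRP."""
    def p_new(answers):
        a = estimate_alpha(answers)
        return a / (len(answers) + a)
    return max(history, key=lambda aspect: p_new(history[aspect]))

history = {
    "cuisine":  ["mexican", "thai", "mexican", "italian"],
    "service":  ["friendly", "friendly", "friendly"],
    "sourcing": ["local farms"],
}
print(next_aspect_to_probe(history))  # -> "sourcing", the least-explored facet
```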

There is a point at which information-gathering activities yield less than attending to feedback from activities that act more directly on existing goals. This happens at the equilibrium between the cost of further discovery and the risk of intervening on incomplete information. In many circumstances the line between information gathering and direct intervention is fuzzier than this, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far.
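A back-of-envelope version of that equilibrium (all quantities invented for illustration): keep paying for discovery while the expected loss avoided by surfacing an unknown goal still exceeds the cost of one more inquiry.

```python
def keep_exploring(n_answers, alpha, query_cost, miss_risk):
    """Under a CRP model, p_new = alpha / (n + alpha) is the chance the
    next inquiry reveals a goal we don't yet know about. Explore while
    the expected loss avoided still exceeds the cost of asking."""
    p_new = alpha / (n_answers + alpha)
    return p_new * miss_risk > query_cost

# Say each inquiry costs 50, and intervening while some goal is still
# unknown carries an expected loss of 10,000:
alpha, cost, risk = 2.0, 50.0, 10_000.0
n = 0
while keep_exploring(n, alpha, cost, risk):
    n += 1
print(f"cost/risk equilibrium reached after {n} inquiries")  # -> 398
```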

From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of a solution in light of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or to whether the discovery process is at equilibrium. The mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than treating the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.

Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided unless supplemented with other ideas. To consider discovery analytically is not to discount the power of knowing about the unknown, but it doesn’t intrinsically value non-contingent truths. In my next essay, I will take on this topic.

For a more detailed explanation and an example of how to extend engineering design assessment to include nonparametric criteria, see The Methodological Unboundedness of Limited Discovery Processes. Form Academisk, 7:4.

Deep learning owes its rising popularity to its vast applications across an increasing number of fields. From healthcare to finance, automation to e-commerce, the RE•WORK Deep Learning Summit (27–28 April) will showcase the deep learning landscape and its impact on business and society.

Of notable interest is speaker Jeffrey De Fauw, Research Engineer at DeepMind. Prior to joining DeepMind, De Fauw developed a deep learning model to detect Diabetic Retinopathy (DR) in fundus images, which he will be presenting at the Summit. DR is a leading cause of blindness in the developed world and diagnosing it is a time-consuming process. De Fauw’s model was designed to reduce diagnostics time and to accurately identify patients at risk, to help them receive treatment as early as possible.

Joining De Fauw will be Brian Cheung, a PhD student at UC Berkeley currently working at Google Brain. At the event, he will explain how neural network models are able to extract relevant features from data with minimal feature engineering. Applied to the study of physiology, his research uses a retinal lattice model to examine retinal images.

Read more

For the first time ever, a single flexible fiber no bigger than a human hair has successfully delivered a combination of optical, electrical, and chemical signals back and forth into the brain, putting into practice an idea first proposed two years ago. With some tweaking to further improve its biocompatibility, the new approach could provide a dramatically improved way to learn about the functions and interconnections of different brain regions.

The new fibers were developed through a collaboration among material scientists, chemists, biologists, and other specialists. The results are reported in the journal Nature Neuroscience, in a paper by Seongjun Park, an MIT graduate student; Polina Anikeeva, the Class of 1942 Career Development Professor in the Department of Materials Science and Engineering; Yoel Fink, a professor in the departments of Materials Science and Engineering, and Electrical Engineering and Computer Science; Gloria Choi, the Samuel A. Goldblith Career Development Professor in the Department of Brain and Cognitive Sciences, and 10 others at MIT and elsewhere.

The fibers are designed to mimic the softness and flexibility of brain tissue. This could make it possible to leave implants in place and have them retain their functions over much longer periods than is currently possible with typical stiff, metallic fibers, thus enabling much more extensive data collection. For example, in tests with lab mice, the researchers were able to inject viral vectors that carried genes called opsins, which sensitize neurons to light, through one of two fluid channels in the fiber. They waited for the opsins to take effect, then sent a pulse of light through the optical waveguide in the center, and recorded the resulting neuronal activity, using six electrodes to pinpoint specific reactions. All of this was done through a single flexible fiber just 200 micrometers across — comparable to the width of a human hair.

Read more

Artificial intelligence has reached peak hype. News outlets report that companies have replaced workers with IBM Watson and that algorithms are beating doctors at diagnoses. New AI startups pop up every day, claiming to solve all your personal and business problems with machine learning.

Ordinary objects like juicers and Wi-Fi routers suddenly advertise themselves as “powered by AI.” Not only can smart standing desks remember your height settings, they can also order you lunch.

Much of the AI hubbub is generated by reporters who’ve never trained a neural network, and by startups hoping to be acqui-hired for engineering talent despite not having solved any real business problems. No wonder there are so many misconceptions about what AI can and cannot do.

Read more

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

Read more

Back in August 2014, researchers at Michigan State University created a fully transparent solar concentrator, which could turn any window or sheet of glass (like your smartphone’s screen) into a photovoltaic solar cell. Unlike other “transparent” solar cells that we’ve reported on in the past, this one really is transparent, as you can see in the photos throughout this story. According to Richard Lunt, who led the research at the time, the team was confident the transparent solar panels could be deployed efficiently in a wide range of settings, including “tall buildings with lots of windows or any kind of mobile device that demands high aesthetic quality like a phone or e-reader.”

Now Ubiquitous Energy, an MIT startup we first reported on in 2013, is getting closer to bringing its transparent solar panels to market. Lunt cofounded the company and remains an assistant professor of chemical engineering and materials science at Michigan State University. Essentially, instead of shrinking the components, they’re changing the way the cell absorbs light: it selectively harvests the part of the solar spectrum we can’t see, while letting ordinary visible light pass through.

Scientifically, a transparent solar panel is something of an oxymoron. Solar cells, specifically the photovoltaic kind, make energy by absorbing photons (sunlight) and converting them into electrons (electricity). If a material is transparent, by definition all of the light passes through it to strike the back of your eye. This is why previous transparent solar cells have actually been only partially transparent, and, to add insult to injury, they usually cast a colorful shadow.

Read more