
Watch the newest video from Big Think: https://bigth.ink/NewVideo.
Join Big Think+ for exclusive videos: https://bigthink.com/plus/

What does it mean when someone calls you smart or intelligent? According to developmental psychologist Howard Gardner, it could mean one of eight things. In this video interview, Dr. Gardner addresses his eight classifications for intelligence: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, naturalistic, interpersonal, and intrapersonal.

HOWARD GARDNER: Howard Gardner is a developmental psychologist and the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education. He holds positions as Adjunct Professor of Psychology at Harvard University and Senior Director of Harvard Project Zero. Among numerous honors, Gardner received a MacArthur Prize Fellowship in 1981. In 1990, he was the first American to receive the University of Louisville’s Grawemeyer Award in Education, and in 2000 he received a Fellowship from the John Simon Guggenheim Memorial Foundation. In 2005 and again in 2008 he was selected by Foreign Policy and Prospect magazines as one of the 100 most influential public intellectuals in the world. He has received honorary degrees from twenty-two colleges and universities, including institutions in Ireland, Italy, Israel, and Chile.

The author of over twenty books translated into twenty-seven languages, and of several hundred articles, Gardner is best known in educational circles for his theory of multiple intelligences, a critique of the notion that there exists but a single human intelligence that can be assessed by standard psychometric instruments. During the past twenty-five years, he and colleagues at Project Zero have been working on the design of performance-based assessments, education for understanding, and the use of multiple intelligences to achieve more personalized curriculum, instruction, and assessment. In the mid-1990s, Gardner and his colleagues launched The GoodWork Project. “GoodWork” is work that is excellent in quality, personally engaging, and exhibits a sense of responsibility with respect to implications and applications. Researchers have examined how individuals who wish to carry out good work succeed in doing so during a time when conditions are changing very quickly, market forces are very powerful, and our sense of time and space is being radically altered by technologies such as the web. Gardner and colleagues have also studied curricula.

Among his books are The Disciplined Mind: Beyond Facts and Standardized Tests, The K-12 Education that Every Child Deserves (Penguin Putnam, 2000), Intelligence Reframed (Basic Books, 2000), Good Work: When Excellence and Ethics Meet (Basic Books, 2001), Changing Minds: The Art and Science of Changing Our Own and Other People’s Minds (Harvard Business School Press, 2004), and Making Good: How Young People Cope with Moral Dilemmas at Work (Harvard University Press, 2004; with Wendy Fischman, Becca Solomon, and Deborah Greenspan). These books are available through the Project Zero eBookstore. Currently Gardner continues to direct the GoodWork Project, which is concentrating on issues of ethics with secondary and college students. In addition, he co-directs the GoodPlay and Trust projects; a major current interest is the way in which ethics are being affected by the new digital media. In 2006 Gardner published Multiple Intelligences: New Horizons, The Development and Education of the Mind, and Howard Gardner Under Fire. In Howard Gardner Under Fire, Gardner’s work is examined critically; the book includes a lengthy autobiography and a complete bibliography. In the spring of 2007, Five Minds for the Future was published by Harvard Business School Press. Responsibility at Work, which Gardner edited, was published in the summer of 2007.

TRANSCRIPT: Howard Gardner: Currently I think there are eight intelligences that I’m very confident about and a few more that I’ve been thinking about. I’ll share that with our audience. The first two intelligences are the ones which IQ tests and other kinds of standardized tests valorize, and as long as we know there are only two out of eight, it’s perfectly fine to look at them. Linguistic intelligence is how well you’re able to use language. It’s a kind of skill that poets have, other kinds of writers; journalists tend to have linguistic intelligence, orators. The second intelligence is logical-mathematical intelligence. As the name implies, logicians, mathematicians… Read the full transcript at https://bigthink.com/videos/howard-gardner-on-the-eight-intelligences

Recent projects used machine learning to resurrect paintings by Klimt and Rembrandt. They raise questions about what computers can understand about art.

In 1945, fire claimed three of Gustav Klimt’s most controversial paintings. Commissioned in 1894 for the University of Vienna, “the Faculty Paintings”—as they became known—were unlike any of the Austrian symbolist’s previous work. As soon as he presented them, critics were in an uproar over their dramatic departure from the aesthetics of the time. Professors at the university rejected them immediately, and Klimt withdrew from the project. Soon thereafter, the works found their way into other collections. During World War II, they were placed in a castle north of Vienna for safekeeping, but the castle burned down, and the paintings presumably went with it. All that remains today are some black-and-white photographs and writings from the time. Yet I am staring right at them.

Well, not the paintings themselves. Franz Smola, a Klimt expert, and Emil Wallner, a machine learning researcher, spent six months combining their expertise to revive Klimt’s lost work. It’s been a laborious process, one that started with those black-and-white photos and then incorporated artificial intelligence and scores of intel about the painter’s art, in an attempt to recreate what those lost paintings might have looked like. The results are what Smola and Wallner are showing me—and even they are taken aback by the captivating technicolor images the AI produced.

Let’s make one thing clear: No one is saying this AI is bringing back Klimt’s original works. “It’s not a process of recreating the actual colors, it is re-colorizing the photographs,” Smola is quick to note. “The medium of photography is already an abstraction from the real works.” What machine learning is doing is providing a glimpse of something that was believed to be lost for decades.
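For readers curious how such learned re-colorization is usually framed, here is a minimal sketch of the standard setup: the black-and-white photograph supplies the lightness (L) channel of a Lab-space image, and a network predicts only the missing chroma (a, b) channels, which are then recombined with the original lightness. The architecture and data below are assumptions for illustration, not the pipeline Smola and Wallner actually built.

```python
# Minimal sketch of learned re-colorization: predict chroma from lightness.
# This is a generic illustration, not the Klimt project's actual model.
import torch
import torch.nn as nn

class Colorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),  # a, b channels in [-1, 1]
        )

    def forward(self, lightness):           # lightness: (N, 1, H, W)
        return self.net(lightness)          # chroma:    (N, 2, H, W)

model = Colorizer()
gray_photo = torch.rand(1, 1, 256, 256)     # stand-in for a scanned B&W photograph
chroma = model(gray_photo)                  # predicted a, b channels
lab_image = torch.cat([gray_photo, chroma], dim=1)  # recombine L with predicted a, b
print(lab_image.shape)                      # torch.Size([1, 3, 256, 256])
```

In practice such a network would be trained on color images whose chroma channels were stripped out, which is what lets it hallucinate plausible colors for photographs that were never in color to begin with.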

The plant-based antiviral agent thapsigargin (TG), derived from a group of poisonous plants known as ‘deadly carrots’, appears to be effective against all variants of SARS-CoV-2 in the lab – and that includes the quick-spreading Delta variant.

A previous study published in February demonstrated that TG can be effective against a host of viruses. Now, this latest work by the same research team confirms that the antiviral also isn’t being outflanked as SARS-CoV-2 evolves. With the emergence of new variants an ongoing possibility, the continued efficacy of TG is an encouraging sign.

In tests on cell cultures in the lab, doses of TG delivered either before infection or during active infection were shown to block and inhibit SARS-CoV-2 variants, triggering a broad and powerful protective response.

Solid-state nuclear magnetic resonance (NMR) spectroscopy—a technique that measures the frequencies emitted by the nuclei of some atoms exposed to radio waves in a strong magnetic field—can be used to determine chemical and 3D structures as well as the dynamics of molecules and materials.

A necessary initial step in the analysis is the so-called chemical shift assignment: each peak in the NMR spectrum must be assigned to a given atom in the molecule or material under investigation. This can be a particularly complicated task. Assigning chemical shifts experimentally is challenging and generally requires time-consuming multi-dimensional correlation experiments. Assignment by comparison to statistical analyses of experimental chemical shift databases would be an alternative solution, but no such database exists for molecular solids.

A team of researchers including EPFL professors Lyndon Emsley, head of the Laboratory of Magnetic Resonance, and Michele Ceriotti, head of the Laboratory of Computational Science and Modeling, together with Ph.D. student Manuel Cordova, decided to tackle this problem by developing a method for assigning NMR spectra of organic crystals probabilistically, directly from their 2D chemical structures.
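As a rough illustration of what probabilistic assignment means in practice, the sketch below scores each experimental peak against each atom's predicted shift with a Gaussian likelihood and then picks the globally most probable one-to-one matching. The shift values, uncertainties, and the use of the Hungarian algorithm here are illustrative assumptions, not the EPFL team's actual method or code.

```python
# Hedged sketch of probabilistic chemical-shift assignment (illustrative only).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import norm

# Predicted 13C shifts (ppm) and uncertainties for three carbon atoms
# (values invented for illustration).
predicted = np.array([172.0, 130.5, 55.0])
sigma = np.array([2.0, 3.0, 1.5])

# Observed peak positions (ppm) from the experimental spectrum.
observed = np.array([54.2, 171.1, 128.9])

# Negative log-likelihood of assigning peak j to atom i under a Gaussian model.
cost = -norm.logpdf(observed[None, :], loc=predicted[:, None], scale=sigma[:, None])

# Most probable one-to-one assignment via the Hungarian algorithm.
atom_idx, peak_idx = linear_sum_assignment(cost)
for a, p in zip(atom_idx, peak_idx):
    print(f"atom {a}: peak at {observed[p]:.1f} ppm "
          f"(predicted {predicted[a]:.1f} ± {sigma[a]:.1f} ppm)")
```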

NVIDIA recently rolled out a demo of GAUGAN 2, an artificial intelligence-based text-to-image creation tool. GAUGAN 2 takes keywords and phrases you type in as input, and then generates unique images based on them.

In NVIDIA’s demo video, a user inputs “mountains by a lake” and GAUGAN 2 spits out a beautiful alpine landscape with a small lake in the foreground. We tried using GAUGAN 2 and, in practice, things aren’t as smooth as the demo implies. Certain keywords resulted in bizarre, terrifying results. GAUGAN 2 used this author’s name, for instance, to output an image of what looked like fungi on legs, walking down a street.

GAUGAN 2 is early in development at this point, and has likely been trained on only a rather limited data set. Regardless, when it works, it offers a breathtaking snapshot of how AI technology could transform asset creation in movies and games in the years to come, with unique photorealistic landscapes and objects generated from just a few words of user input.
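GAUGAN 2 itself is only accessible through NVIDIA's hosted web demo, so the snippet below illustrates the same text-to-image workflow with the open-source Hugging Face diffusers library instead; the model identifier, hardware assumption, and prompt are illustrative stand-ins and are not part of NVIDIA's tool.

```python
# Hedged sketch of a text-to-image call using Hugging Face diffusers as a
# stand-in (GAUGAN 2 is not publicly available as a library). Assumes the
# torch and diffusers packages are installed and a CUDA GPU is present.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID, not NVIDIA's model
    torch_dtype=torch.float16,
).to("cuda")

# The prompt plays the same role as the keywords typed into GAUGAN 2's demo.
image = pipe("mountains by a lake").images[0]
image.save("mountains_by_a_lake.png")
```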

(CNN) — A British man has become the first patient in the world to be fitted with a 3D printed eye, according to Moorfields Eye Hospital in London.

Steve Verze, who is 47 and an engineer from Hackney, east London, was given the left eye on Thursday and first tried it for size earlier this month.

Moorfields Eye Hospital said in a press release Thursday that the prosthetic is the first fully digital prosthetic eye created for a patient.

A new Artificial Intelligence model manages to do complex physics simulations in real time using only a fraction of the power that a traditionally computed simulation would use. These simulations could soon be used for things like biotechnology, gaming, weather prediction, and more. Two Minute Papers has done several videos on this area before, but this is a more complex AI with a wider range of applications.
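The general idea behind such models is a learned surrogate: a network is trained to map the current state of a system to its next state and is then rolled forward in place of the expensive numerical solver. The toy sketch below shows that pattern on a damped oscillator; it is a minimal illustration of the concept under assumed sizes and parameters, not the model discussed in the video.

```python
# Toy sketch of a learned physics surrogate (illustrative assumptions only).
import torch
import torch.nn as nn

def true_step(state, dt=0.01, k=1.0, c=0.1):
    """Reference integrator for a damped oscillator; state = (position, velocity)."""
    pos, vel = state[..., 0], state[..., 1]
    acc = -k * pos - c * vel
    return torch.stack([pos + dt * vel, vel + dt * acc], dim=-1)

# Build (state, next_state) training pairs from the reference simulator.
states = torch.rand(4096, 2) * 2 - 1
targets = true_step(states)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for epoch in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(states), targets)
    loss.backward()
    opt.step()

# Roll the cheap surrogate forward instead of the full simulator.
state = torch.tensor([[1.0, 0.0]])
for _ in range(100):
    state = surrogate(state)
print("state after 100 surrogate steps:", state)
```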

TIMESTAMPS:
00:00 The Future of Advanced Physics Simulations.
01:57 How this new approach to AI works.
04:03 Are medical simulations a possibility?
06:02 Last Words.

#ai #physics #simulation

A new and revolutionary approach to building Artificial Intelligence models has shown promise of enabling almost any device, regardless of how powerful it is, to run enormous, intelligent AI models in a way similar to how the human brain operates. This is done in part with new and improved neuromorphic computing hardware, which is modelled after real brains. We may soon see AI beating humans at many different general tasks, like an Artificial General Intelligence.
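For context on what "modelled after real brains" usually means in neuromorphic hardware, the sketch below simulates a single leaky integrate-and-fire neuron, the textbook building block such chips emulate. The parameter values are assumptions for illustration and the snippet has no connection to the specific system covered in the video.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a generic textbook model.
import numpy as np

dt = 1.0          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)

v = v_rest
spikes = []
for t in range(200):
    i_input = 20.0 if 50 <= t < 150 else 0.0   # injected current (arbitrary units)
    # Leaky integration: the membrane decays toward rest and charges with input.
    v += dt / tau * (v_rest - v + i_input)
    if v >= v_thresh:        # threshold crossing emits a spike...
        spikes.append(t)
        v = v_reset          # ...and the membrane potential is reset.

print(f"{len(spikes)} spikes, first few at times (ms): {spikes[:5]}")
```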

TIMESTAMPS:
00:00 The Impossibility of Human AI
01:54 A new Approach is in town.
04:33 Other approaches to AI
06:44 Is this the Future of Artificial Intelligence?
09:43 Last Words.

#ai #agi #neuralcomputing

Welcome to AIP.
- The main focus of this channel is to publicize and promote existing SoTA AI research works presented at top conferences, removing barriers for people to access cutting-edge AI research.
- All videos are either taken from the public internet or are Creative Commons licensed, and can be accessed via the link provided in the description.
- To avoid conflicts of interest with ongoing conferences, all videos are published at least 1 week after the main events. A takedown can be requested via email if a video infringes your rights.
- If you would like your presentation to be published on AIP, feel free to drop us an email.
- AI conferences covered include: NeurIPS (NIPS), AAAI, ICLR, ICML, ACL, NAACL, EMNLP, IJCAI

If you would like to support the channel, please join the membership:
https://www.youtube.com/c/AIPursuit/join.

Donation:
Paypal ⇢ https://paypal.me/tayhengee.
Patreon ⇢ https://www.patreon.com/hengee.
Donate any cryptocurrency on BEP20 (BTC, ETH, USDT, BNB, Doge, Shiba): 0x0712795299bf00eee99f13b4cda0e19dc656bf2c.
BTC ⇢ 1BwE1gufcE5t1Xh4w3wQhGgcJuCTb7AGj3
ETH ⇢ 0x0712795299bf00eee99f13b4cda0e19dc656bf2c.
Doge ⇢ DL57g3Qym7XJkRUz5VTU97nvV3XuvvKqMX
USDT (TRC20) ⇢ THV9dCnGfWtGeAiZEBZVWHw8JGdGCWC4Sh.

The videos are reposted for educational purposes and to encourage involvement in the field of AI research.

A good GitHub repo on self-supervised learning: https://github.com/jason718/awesome-self-supervised-learning#machine-learning