
How could AI disrupt the music and commercial media industries?


Artificial intelligence may be set to disrupt the world of live music. Using data-driven algorithms, AI could calculate when and where artists should play, as well as streamline the currently deeply flawed means through which fans discover concerts happening in their area.

____________________________________________

Guest Post by Cortney Harding on Medium

A few weeks ago, I posited that Artificial Intelligence could disrupt “background music”. While it wouldn’t replace pop stars (no robot could ever do what Beyonce did at the Super Bowl), it could replace the music we hear in ads, in stores, and while we’re doing other tasks. And while people will still flock to see live rock stars play in venues and arenas for years to come, AI will also have a huge impact on how we get to those shows, and how those shows are booked.

Read more

We don’t live in a world that’s pinning the survival of humanity on Matthew McConaughey’s shoulders, but if it turns out the plot of the 2014 film Interstellar is true, then we live in a world with at least five dimensions. And that would mean that a ring-shaped black hole would, as scientists recently demonstrated, “break down” Einstein’s general theory of relativity. (And to think, the man was just coming off a phenomenal week.)

In a study published in Physical Review Letters, researchers from the UK simulated a black hole shaped like a thin ring (a form first posited by theoretical physicists in 2002) in a “5-D” universe. In this universe, the black hole would bulge strangely, with stringy connections that become thinner as time passes. Eventually, those strings pinch off, like budding bacteria or drops of water from a stream, and form miniature black holes of their own.

This is wicked weird stuff, but we haven’t even touched on the most bizarre part. A black hole like this leads to what physicists call a “naked singularity,” where the equations that support general relativity — a foundational block of modern physics — stop making sense.

Read more

(Phys.org)—Researchers have designed and implemented an algorithm that solves computing problems using a strategy inspired by the way that an amoeba branches out to obtain resources. The new algorithm, called AmoebaSAT, can solve the satisfiability (SAT) problem—a difficult optimization problem with many practical applications—using orders of magnitude fewer steps than the number of steps required by one of the fastest conventional algorithms.
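
The excerpt doesn’t spell out AmoebaSAT’s update rules, but a minimal sketch of the SAT problem it targets helps make the claim concrete. The Python below uses a made-up three-variable formula and a naive brute-force search; it illustrates what is being solved, not the amoeba-inspired heuristic.

# A tiny illustration of the SAT problem AmoebaSAT targets.
# This is NOT the amoeba-inspired algorithm; it is a naive brute-force
# check over a made-up CNF formula with three variables.
from itertools import product

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
# Each clause lists (variable index, is_negated) pairs.
clauses = [
    [(0, False), (1, True)],
    [(1, False), (2, False)],
    [(0, True), (2, True)],
]

def satisfies(assignment, clauses):
    # A clause is satisfied if at least one of its literals is true.
    return all(
        any(assignment[var] != neg for var, neg in clause)
        for clause in clauses
    )

# Brute force visits up to 2^n assignments, which is exactly the exponential
# blow-up that heuristics like AmoebaSAT aim to sidestep for large n.
for assignment in product([False, True], repeat=3):
    if satisfies(assignment, clauses):
        print("satisfying assignment:", assignment)
        break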

The researchers predict that the amoeba-inspired algorithm may offer several benefits, such as high efficiency, miniaturization, and low energy consumption, that could lead to a new computing paradigm for nanoscale high-speed computing.

Led by Masashi Aono, Associate Principal Investigator at the Earth-Life Science Institute, Tokyo Institute of Technology, and at PRESTO, Japan Science and Technology Agency, the researchers have published a paper on the amoeba-inspired system in a recent issue of Nanotechnology.

Read more

Actors and actresses will never have to worry about reading through pages of scripts to decide whether or not a role is worth their time; AI will do the work for them.


A version of this story first appeared in the Feb. 26 issue of The Hollywood Reporter magazine.

During his 12 years in UTA’s story department, Scott Foster estimates he read about 5,500 screenplays. “Even if it was the worst script ever, I had to read it cover to cover,” he says. So when Foster left the agency in 2013, he teamed with Portland, Ore.-based techie Brian Austin to create ScriptHop, an artificial intelligence system that manages the volume of screenplays that every agency and studio houses. “When I took over [at UTA], we were managing hundreds of thousands of scripts on a Word document,” says Foster, who also worked at Endeavor and Handprint before UTA. “The program began to eat itself and become corrupt because there was too much information to handle.” ScriptHop can read a script and do a complete character breakdown in four seconds, versus the roughly four man-hours required of a human reader. The tool, which launches Feb. 16, is free, and it is a sample of the overall platform coming later in 2016 that will recommend screenplays as well as store and manage a company’s library for a subscription fee of $29.99 a month per user.

As for how exactly it works, Austin is staying mum. “There’s a lot of sauce in the secret sauce,” he says. Foster and Austin aren’t the first to create AI to analyze scripts. ScriptBook launched in 2015 as an algorithmic assessment to determine a script’s box-office potential. By contrast, ScriptHop is more akin to a Dewey Decimal System for film and TV. Say a manager needs to find a project for a 29-year-old male client who is 5 feet tall; ScriptHop will spit out the options quickly. “If you’re an agent looking for roles for minority clients, it’s hugely helpful,” says Foster. There’s also an emotional response dynamic (i.e., Oscar bait) that charts a character’s cathartic peaks and valleys as well as screen time and shooting days. So Meryl Streep can instantly find the best way to spend a one-month window between studio gigs. Either way, it appears that A.I. script reading is the future. The only question is what would ScriptHop make of Ex Machina’s Ava? “That would be an interesting character breakdown,” jokes Foster.
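
ScriptHop’s internals are secret, but the retrieval step Foster describes, matching clients to roles by attributes such as gender, age and schedule, can be pictured with a small hypothetical sketch. The Python below invents its own character-breakdown fields and sample data; it is only an illustration of attribute-based filtering, not the product’s actual data model or ranking.

# Hypothetical sketch of attribute-based role matching, loosely inspired by
# the ScriptHop description above. The data model and fields are invented;
# the real product's breakdowns and ranking are not public.
from dataclasses import dataclass

@dataclass
class CharacterBreakdown:
    script: str
    name: str
    gender: str
    age_min: int
    age_max: int
    screen_time_minutes: int
    shooting_days: int

breakdowns = [
    CharacterBreakdown("Project A", "Lead", "male", 25, 32, 70, 30),
    CharacterBreakdown("Project B", "Brother", "male", 27, 35, 20, 8),
    CharacterBreakdown("Project C", "Detective", "female", 40, 55, 55, 22),
]

def find_roles(breakdowns, gender, age, max_shooting_days=None):
    """Return roles compatible with a client's gender, age, and schedule."""
    hits = [
        b for b in breakdowns
        if b.gender == gender
        and b.age_min <= age <= b.age_max
        and (max_shooting_days is None or b.shooting_days <= max_shooting_days)
    ]
    # Favour larger parts first, as an agent presumably would.
    return sorted(hits, key=lambda b: b.screen_time_minutes, reverse=True)

# e.g. a 29-year-old male client with roughly a one-month shooting window:
for role in find_roles(breakdowns, gender="male", age=29, max_shooting_days=30):
    print(role.script, role.name)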

Read more

Neural networks have become enormously successful – but we often don’t know how or why they work. Now, computer scientists are starting to peer inside their artificial minds.

A PENNY for ’em? Knowing what someone is thinking is crucial for understanding their behaviour. It’s the same with artificial intelligences. A new technique for taking snapshots of neural networks as they crunch through a problem will help us fathom how they work, leading to AIs that work better – and are more trustworthy.

In the last few years, deep-learning algorithms built on neural networks – multiple layers of interconnected artificial neurons – have driven breakthroughs in many areas of artificial intelligence, including natural language processing, image recognition, medical diagnoses and beating a professional human player at the game Go.

The trouble is that we don’t always know how they do it. A deep-learning system is a black box, says Nir Ben Zrihem at the Israel Institute of Technology in Haifa. “If it works, great. If it doesn’t, you’re screwed.”

Neural networks are more than the sum of their parts. They are built from many very simple components – the artificial neurons. “You can’t point to a specific area in the network and say all of the intelligence resides there,” says Zrihem. But the complexity of the connections means that it can be impossible to retrace the steps a deep-learning algorithm took to reach a given result. In such cases, the machine acts as an oracle and its results are taken on trust.

To address this, Zrihem and his colleagues created images of deep learning in action. The technique, they say, is like an fMRI for computers, capturing an algorithm’s activity as it works through a problem. The images allow the researchers to track different stages of the neural network’s progress, including dead ends.
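
Neither the technique’s details nor its code appear in this excerpt, so here is only a generic Python sketch of the underlying idea: record a network’s hidden-layer activations while it processes many inputs, then project those high-dimensional snapshots down to two dimensions so a human can look for structure. The toy random network and the use of t-SNE are assumptions made for illustration, not the researchers’ method.

# Illustrative sketch: snapshot a (toy) network's hidden activations while it
# processes many inputs, then squash them to 2-D for plotting. The network and
# the choice of t-SNE are assumptions; this is not the researchers' tool.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# A minimal one-hidden-layer network with random weights, standing in for a
# trained deep-learning model.
W1 = rng.normal(size=(32, 10))   # input -> hidden
W2 = rng.normal(size=(2, 32))    # hidden -> output

def forward(x):
    hidden = np.tanh(W1 @ x)     # the "snapshot" we care about
    output = W2 @ hidden
    return hidden, output

# Run the network on many inputs and keep the hidden-layer snapshots.
snapshots = []
for _ in range(300):
    x = rng.normal(size=10)
    hidden, _ = forward(x)
    snapshots.append(hidden)

# Project the high-dimensional snapshots to 2-D; clusters and trajectories in
# this map are what make the "fMRI for computers" picture readable.
embedding = TSNE(n_components=2, perplexity=30, init="random").fit_transform(
    np.array(snapshots)
)
print(embedding.shape)  # (300, 2)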

Read more

I must admit that this will be hard to do. Sure, I can code anything to come across as responding to and interacting with questions, topics, and so on. Granted, logical and pragmatic decision-making is based on the facts and information people have at a given point in time, but being human isn’t only a matter of algorithms and prescripted data; it also includes being spontaneous and, at times, thinking emotionally. Robots without the ability to be spontaneous and to think emotionally will not be human, and they will lack the connection that humans need.


Some people worry that someday a robot – or a collective of robots – will turn on humans and physically hurt or plot against us.

The question, they say, is how robots can be taught morality.

There’s no user manual for good behavior. Or is there?

Read more

This is one that truly depends on the targeted audience. I still believe that the first solely owned and operated female robotics company will make billions.


Beyond correct pronunciation, there is the even larger challenge of correctly placing human qualities like inflection and emotion into speech. Linguists call this “prosody,” the ability to add correct stress, intonation or sentiment to spoken language.

Today, even with all the progress, it is not possible to completely represent rich emotions in human speech via artificial intelligence. The first experimental-research results — gained from employing machine-learning algorithms and huge databases of human emotions embedded in speech — are just becoming available to speech scientists.

Synthesised speech is created in a variety of ways. The highest-quality techniques for natural-sounding speech begin with a human voice that is used to generate a database of parts and even subparts of speech spoken in many different ways. A human voice actor may spend from 10 hours to hundreds of hours, if not more, recording for each database.
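
As a very rough sketch of that concatenative approach: recorded units are stored in a database keyed by what they say and how they say it, then stitched together at synthesis time. In the Python below, the “recordings” are placeholder numpy arrays and the lookup scheme is invented, so treat it purely as an illustration of the pipeline, not of any production system.

# Toy sketch of concatenative synthesis: look up pre-recorded units and join
# them. Real systems store thousands of much smaller units per voice, with
# many prosodic variants; the "recordings" here are placeholder arrays.
import numpy as np

SAMPLE_RATE = 16000

def fake_recording(seconds):
    # Stand-in for a clip pulled from the voice actor's recording database.
    return np.zeros(int(SAMPLE_RATE * seconds), dtype=np.float32)

# Hypothetical unit database: one entry per (word, delivery) pair.
unit_db = {
    ("hello", "neutral"): fake_recording(0.50),
    ("hello", "excited"): fake_recording(0.45),
    ("world", "neutral"): fake_recording(0.60),
}

def synthesize(words, emotion="neutral"):
    """Concatenate the best-matching recorded unit for each word."""
    clips = []
    for word in words:
        clip = unit_db.get((word, emotion))
        if clip is None:
            # Fall back to a neutral reading when no emotional variant exists;
            # this gap is roughly where the "prosody" problem shows up.
            clip = unit_db.get((word, "neutral"))
        if clip is None:
            raise KeyError(f"no recording available for {word!r}")
        clips.append(clip)
    return np.concatenate(clips)

audio = synthesize(["hello", "world"], emotion="excited")
print(len(audio) / SAMPLE_RATE, "seconds of placeholder audio")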

Read more

GPS is an utterly pervasive and wonderful technology, but it’s increasingly not accurate enough for modern demands. Now a team of researchers can make it accurate right down to an inch.

Regular GPS registers your location and velocity by measuring the time it takes to receive signals from four or more of the satellites that the military sent into space. Alone, it can tell you where you are to within 30 feet. More recently, a technique called Differential GPS (DGPS) improved on that resolution by adding ground-based reference stations—increasing accuracy to within 3 feet.

Now, a team from the University of California, Riverside, has developed a technique that augments the regular GPS data with on-board inertial measurements from a sensor. Actually, that's been tried before, but in the past it required large computers to combine the two data streams, rendering it ineffective for use in cars or mobile devices. Instead, what the University of California team has done is create a set of new algorithms which, it claims, reduce the complexity of the calculation by several orders of magnitude.
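
The Riverside team’s algorithms aren’t described in this excerpt. The classic textbook way to blend a noisy absolute fix (GPS) with smooth relative measurements (inertial sensing) is a Kalman-style filter, so here is a generic one-dimensional Python sketch of that idea; it is explicitly not the university’s method, and every number in it is made up.

# Generic 1-D sketch of GPS/inertial fusion with a Kalman filter.
# This illustrates the idea of combining the two data streams; it is not
# the University of California, Riverside algorithm.
import numpy as np

dt = 0.1                           # seconds between updates
x = np.array([0.0, 0.0])           # state: [position (m), velocity (m/s)]
P = np.eye(2)                      # state covariance

F = np.array([[1.0, dt],           # constant-velocity motion model
              [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])    # how measured acceleration enters the state
Q = np.eye(2) * 0.01               # process noise
H = np.array([[1.0, 0.0]])         # GPS observes position only
R = np.array([[9.0]])              # GPS noise: std dev of about 3 m

def step(x, P, accel_meas, gps_pos):
    # Predict using the inertial (accelerometer) measurement...
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q
    # ...then correct with the GPS fix.
    y = gps_pos - H @ x                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One fused update: accelerating at 1 m/s^2, GPS says we're at 0.2 m.
x, P = step(x, P, accel_meas=1.0, gps_pos=np.array([0.2]))
print("fused position estimate:", x[0])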

Read more

What you’re looking at is the first direct observation of an atom’s electron orbital: an atom’s actual wave function! To capture the image, researchers utilized a new quantum microscope — an incredible new device that literally allows scientists to gaze into the quantum realm.

An orbital structure is the space in an atom that’s occupied by an electron. But when describing these super-microscopic properties of matter, scientists have had to rely on wave functions — a mathematical way of describing the fuzzy quantum states of particles, namely how they behave in both space and time. Typically, quantum physicists use formulas like the Schrödinger equation to describe these states, often coming up with complex numbers and fancy graphs.
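
For reference, the time-dependent Schrödinger equation mentioned above can be written in standard textbook LaTeX notation as

    i\hbar \frac{\partial}{\partial t} \psi(\mathbf{r}, t) = \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}, t) \right] \psi(\mathbf{r}, t)

where \psi is the wave function and V is the potential the electron sits in. The squared magnitude |\psi|^2 gives the probability of finding the electron at a given place and time, and that probability cloud is the “fuzzy” quantity the quantum microscope makes visible.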

Up until this point, scientists have never been able to actually observe the wave function. Trying to catch a glimpse of an atom’s exact position or the momentum of its lone electron has been like trying to catch a swarm of flies with one hand; direct observations have this nasty way of disrupting quantum coherence. What’s been required to capture a full quantum state is a tool that can statistically average many measurements over time.

Read more

Here is a concept to think about when we’re 20 or 30 years into the future: imagine a world where humans and all living things in it are truly Singular, and the new AI and humanoid robots are alive and well. Will AI (including robots) ever need therapy? Will AI ever get stressed out or have panic attacks? Will any humans know what AI is thinking once we give AI more independence?

I ask these questions because, as we enhance and evolve AI to interpret and process emotions and feelings and to interact like humans, will AI fully experience the struggles of everyday life the way some humans do? And when an AI needs counseling or therapy, will it go to another AI or see a human therapist?

As we evolve AI, we must look at the full, longer-term picture around it, including how human we really wish to make AI.


Two actors pose for stock footage that can be used in political ads. Karen O’Connell, left, and Leslie Luxemburg pretend to chat over coffee. In a political ad, this clip could be used to illustrate any number of topics. (Marvin Joseph/The Washington Post)

Two weeks ago, the Internet Archive started its new Political TV Ad Archive, which monitors television stations in 20 markets in eight U.S. states to compile a list of 2016 primary-election advertisements and uses audio fingerprinting algorithms to automatically flag each airing of those spots.
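
The article doesn’t say which fingerprinting system the archive uses, so the Python below is only a generic sketch of the spectral-peak idea behind most audio fingerprinting: hash the loudest frequencies of each short frame, then compare the sets of hashes from two recordings. All parameters and the hashing scheme are arbitrary illustrations, not the archive’s actual system.

# Generic sketch of spectral-peak audio fingerprinting (the broad idea behind
# flagging repeat airings of a known ad).
import numpy as np

def fingerprint(samples, frame=1024, hop=512):
    """Hash the loudest frequency bin of each frame into a set of landmarks."""
    landmarks = set()
    for start in range(0, len(samples) - frame, hop):
        window = samples[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(window))
        peak_bin = int(np.argmax(spectrum))
        frame_index = start // hop
        # Pair the peak with a coarse time offset so the print is order-aware.
        landmarks.add((peak_bin, frame_index % 64))
    return landmarks

def similarity(fp_a, fp_b):
    """Fraction of shared landmarks; near 1.0 suggests the same recording."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / min(len(fp_a), len(fp_b))

rate = 16000
t = np.arange(rate * 2) / rate                      # two seconds of audio
ad = np.sin(2 * np.pi * 440 * t)                    # stand-in for a known ad
rebroadcast = ad + 0.01 * np.random.randn(len(ad))  # same ad, slight noise

print(similarity(fingerprint(ad), fingerprint(rebroadcast)))  # close to 1.0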

[Who are all those smiling people in campaign advertisements?]

The project will create a public database of political television ads in the 2016 race, showing where they are running and who is paying for them. Since the end of November, the archive has identified 267 distinct ads that, if broadcast end to end, would total 196 minutes. They have aired a collective 72,807 times on the stations it is monitoring.

Read more