
A team of researchers from Ulsan National Institute of Science and Technology and Dong-A University, both in South Korea, has developed an artificial skin that can detect both pressure and heat simultaneously, with a high degree of sensitivity. In their paper published in the journal Science Advances, the team describes how they created the skin, what they found in testing it, and the other kinds of stimuli it can sense.

Many scientists around the world are working to develop artificial skin, both to benefit robots and to help people who have lost skin sensation or limbs. Such efforts have led to a wide variety of artificial skin types, but until now, none of them has been able to sense both pressure and heat with high sensitivity at the same time.

The new artificial skin is a sandwich of materials: at the top is a micro-patterned layer meant to mimic the human fingerprint (it can sense texture), and beneath it sit dome-shaped sensors. The sensors compress to different degrees when the skin is exposed to different amounts of pressure. The compression also causes a small electrical charge to move through the skin, as do heat and sound, and that charge is transmitted to the sensors; the more pressure, heat, or sound exerted, the more charge there is. Using a computer to measure the charge allows the degree of sensation “felt” to be gauged. The ability to sense sound, the team notes, was a bit of a surprise; additional testing showed that the artificial skin was actually better at picking up sound than an iPhone microphone.
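The charge-to-pressure relationship the article describes can be sketched as a simple linear calibration. Everything below, including the function name, the sensitivity constant, and the units, is a hypothetical illustration, not the researchers' actual readout method:

```python
# Hypothetical sketch: the article says more pressure produces more charge,
# and a computer measuring the charge can gauge the sensation "felt."
# Assuming a roughly linear response, a calibration could look like this.
# The sensitivity constant below is invented for illustration.

def charge_to_pressure(charge_nc, sensitivity_nc_per_kpa=0.5):
    """Estimate applied pressure (kPa) from measured charge (nC),
    assuming a linear charge-to-pressure response."""
    if charge_nc < 0:
        raise ValueError("charge must be non-negative")
    return charge_nc / sensitivity_nc_per_kpa

# A firmer press generates more charge, hence a larger pressure estimate.
light_touch = charge_to_pressure(1.0)   # 2.0 kPa
firm_press = charge_to_pressure(5.0)    # 10.0 kPa
```

The same linear-readout idea would apply to the heat and sound channels, each with its own calibration constant.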

Read more

Post-Human is a sci-fi proof-of-concept short based on the bestselling series of novels by me, David Simpson. Amazingly, filmed over just three hours by a crew of three, the short depicts the opening of Post-Human, drawing back the curtain on the Post-Human world and letting viewers see the world and characters they’ve previously only been able to imagine. You’ll get a taste of a world where everyone is immortal, everyone has an onboard mental “mind’s eye” computer, nanotechnology can make your every dream a reality, and, thanks to the magnetic targeted-fusion implants every post-human has, everyone can fly (and yep, there’s flying in this short!). But there’s a dark side to this brave new world, including the fact that every post-human is monitored from the inside out, and the one artificial superintelligence running the show might be about to make its first big mistake.

The entire crew was only three people, including me, and I was behind the camera at all times. The talent is Madison Smith as James Keats and Bridget Graham as his wife, Katherine. Because of the expense of the spectacular location, the entire short had to be filmed in three hours, so we had to be lean and fast. What a rush! (Pun intended.)

The concept was to try to replicate what a full-length feature would look and feel like by adapting the opening of Post-Human, right up to what would be the opening credits. Of course, as I was producing the movie myself, we only had a micro-budget. But after researching the indie films here on Vimeo over the last year, I became convinced that we could create a reasonable facsimile of a big-budget production and hopefully introduce this world to many more people who aren’t necessarily aficionados of sci-fi exclusively on the Kindle. While the series has been downloaded over a million times since 2012, I’ve always intended for it to be adapted for film, and I’m excited to have, in some small measure, finally succeeded.

Read more

Researchers from the University of South Florida College of Engineering have proposed a new form of computing that uses circular nanomagnets to solve quadratic optimization problems orders of magnitude faster than a conventional computer can.

A wide range of application domains could potentially be accelerated through this research, from finding patterns in social media to error-correcting codes, Big Data, and the biosciences.

In an article published in the current issue of Nature Nanotechnology, “Non-Boolean computing with nanomagnets for computer vision applications,” authors Sanjukta Bhanja, D.K. Karunaratne, Ravi Panchumarthy, Srinath Rajaram, and Sudeep Sarkar discuss how their work harnessed the energy-minimizing nature of nanomagnetic systems to solve the computationally expensive quadratic optimization problems that arise in computer vision applications.
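The core idea, mapping a quadratic optimization problem onto a physical system that naturally settles into its minimum-energy state, can be mimicked in software with a toy Ising-style model. The couplings and the greedy relaxation below are invented for illustration and are not the authors' nanomagnet implementation:

```python
# Toy sketch of energy-minimizing computation. Each "magnet" is a spin
# s_i in {-1, +1}; couplings J[i][j] encode the quadratic problem; the
# system relaxes toward a state of low quadratic energy
#     E(s) = -sum_{i<j} J[i][j] * s[i] * s[j].
# The coupling values and relaxation scheme are made up for demonstration.

def energy(spins, J):
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def relax(spins, J, sweeps=100):
    """Greedy single-spin-flip descent: keep any flip that lowers the
    energy, loosely analogous to magnets settling toward a ground state."""
    n = len(spins)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            e_before = energy(spins, J)
            spins[i] = -spins[i]            # trial flip
            if energy(spins, J) < e_before:
                changed = True              # keep the flip
            else:
                spins[i] = -spins[i]        # revert
        if not changed:
            break                           # local minimum reached
    return spins

# With all-positive ("ferromagnetic") couplings, relaxation aligns the spins.
J = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
ground = relax([1, -1, 1], J)   # -> [1, 1, 1], energy -3
```

In the hardware version, of course, no explicit descent loop is needed: the physics of the coupled nanomagnets performs the minimization directly, which is the source of the claimed speedup.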

Read more

Allow me to introduce you to someone who has the potential to be very important in the future of Bitcoin. His name is Balaji Srinivasan, and he is the chairman and co-founder of 21 Inc. What is 21 Inc.? It is the Bitcoin startup that secured the most venture capital of any Bitcoin company in history: $116 million. What do they need $116 million in venture capital for? They are investing in “future proprietary products designed to drive mainstream adoption of Bitcoin.” With that in mind, the research of 21 Inc. has highlighted some interesting Bitcoin factoids. One, regarding how big Bitcoin has become in the computing world, Srinivasan released at the second annual Bitcoin Job Fair, held last weekend in Sunnyvale, California.

Honestly, I looked online to find out what petahash and gigahash rates were, and that is one long rabbit hole, so I’ll leave the technical ramble to techies like Mr. Srinivasan. He makes the comparison to Google based on the fair assumption that they run about 1e7 servers, with roughly 10 Xeons per server and 1e7 H/s per Xeon, which works out to 1 PH/s. One petahash equals 1,000,000 gigahashes, or 1,000 terahashes. Bitcoin reached 1 PH/s of computing power on September 15th, 2013. It is now normally working at over 350 PH/s, or over 350,000,000 GH/s.

“All of Google today would represent less than 1% of all of mining (Bitcoin operations worldwide). The sheer degree of what is happening in (Bitcoin) mining is not being appreciated by the press,” said Balaji Srinivasan at the Bitcoin Job Fair. “If we assume there are 10 million Google servers, and each of these servers is running, you can multiply that through and get one petahash. If they turned off all of their data centers and pointed them at Bitcoin (mining network), they would be less than 1% of the network.”
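Srinivasan's arithmetic can be checked in a few lines. The server count, Xeons per server, and per-Xeon hash rate are his stated assumptions, not measured figures:

```python
# Back-of-the-envelope check of Srinivasan's Google-vs-Bitcoin comparison.
# All three inputs are his assumptions as quoted in the article.

H_PER_S_PER_XEON = 1e7       # assumed hashes/second per Xeon
XEONS_PER_SERVER = 10        # assumed Xeons per server
GOOGLE_SERVERS = 1e7         # assumed total Google servers

google_hashrate = GOOGLE_SERVERS * XEONS_PER_SERVER * H_PER_S_PER_XEON

PETAHASH = 1e15              # 1 PH/s = 1e15 H/s = 1,000,000 GH/s
network_hashrate = 350 * PETAHASH   # ~350 PH/s network, per the article

print(google_hashrate / PETAHASH)          # 1.0 -> Google at roughly 1 PH/s
print(google_hashrate / network_hashrate)  # ~0.0029 -> under 1% of the network
```

Under those assumptions, all of Google comes to about 0.3% of the Bitcoin network's hashing power, consistent with the "less than 1%" claim.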

Read more

To many people, the introduction of the first Macintosh computer and its graphical user interface in 1984 is viewed as the dawn of creative computing. But if you ask Dr. Nick Montfort, a poet, computer scientist, and assistant professor of Digital Media at MIT, he’ll offer a different direction and definition for creative computing and its origins.

Defining Creative

Creative Computing was the name of a computer magazine that ran from 1974 through 1985. “Even before micro-computing there was already this magazine extolling the capabilities of the computer to teach, to help people learn, help people explore and help them do different types of creative work, in literature, the arts, music and so on,” Montfort said.

“It was a time when people had a lot of hope that computing would enable people personally as artists and creators to do work. It was actually a different time than we’re in now. There are a few people working in those areas, but it’s not as widespread as hoped in the late ’70s or early ’80s.”

These days, Montfort notes that many people use the term “artificial intelligence” interchangeably with creative computing. While there are some parallels, Montfort said what is classically called AI isn’t the same as computational creativity. The difference, he says, is in the results.

“A lot of the ways in which AI is understood is the ability to achieve a particular known objective,” Montfort said. “In computational creativity, you’re trying to develop a system that will surprise you. If it does something you already knew about then, by definition, it’s not creative.”

Given that, Montfort quickly pointed out that creative computing can still come from known objectives.

“A lot of good creative computer work comes from doing things we already know computers can do well,” he said. “As a simple example, the difference between a computer as a producer of poetic language and person as a producer of poetic language is, the computer can just do it forever. The computer can just keep reproducing and, (with) that capability to bring it together with images to produce a visual display, now you’re able to do something new. There’s no technical accomplishment, but it’s beautiful nonetheless.”
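Montfort's observation, that a computer can simply keep producing poetic language forever, can be illustrated with a toy generator. The templates and word lists below are invented for illustration and bear no relation to Montfort's own work:

```python
import itertools
import random

# Toy sketch: a trivial template-based generator can emit poetic lines
# endlessly, which is exactly the capability Montfort describes. The
# vocabulary is made up; real generative poetry is far more sophisticated.

ADJECTIVES = ["silent", "electric", "hollow", "radiant"]
NOUNS = ["harbor", "circuit", "winter", "archive"]
VERBS = ["dissolves", "remembers", "flickers", "waits"]

def lines(seed=0):
    """Yield an endless stream of short poetic lines."""
    rng = random.Random(seed)
    while True:
        yield f"the {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} {rng.choice(VERBS)}"

# Unlike a human poet, the generator never tires: take as many lines as
# you like from the infinite stream.
poem = list(itertools.islice(lines(), 4))
```

Pairing such a stream with generated imagery on a display gives the kind of endlessly running visual-poetic work Montfort alludes to.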

Models of Creativity

As a poet himself, another area of creative computing that Montfort keeps an eye on is the study of models of creativity used to imitate human creativity. While the goal may be to replicate human creativity, Montfort has a greater appreciation for the end results that don’t necessarily appear human-like.

“Even if you’re using a model of human creativity the way it’s done in computational creativity, you don’t have to try to make something human-like, (even though) some people will try to make human-like poetry,” Montfort said. “I’d much rather have a system that is doing something radically different than human artistic practice and making these bizarre combinations than just seeing the results of imitative work.”

To further illustrate his point, Montfort cited a recent computer-generated novel contest that yielded some extraordinary, and unusual, results. Those novels were nothing close to what a human might have written, he said, but, depending on the eye of the beholder, they at least bode well for the future.

“A lot of the future of creative computing is individual engagement with creative types of programs,” Montfort said. “That’s not just using drawing programs or other facilities to do work or using prepackaged apps that might assist creatively in the process of composition or creation, but it’s actually going and having people work to code themselves, which they can do with existing programs, modifying them, learning about code and developing their abilities in very informal ways.”

That future of creative computing lies not in industrial creativity or video games, but rather a sharing of information and revisioning of ideas in the multiple hands and minds of connected programmers, Montfort believes.

“One doesn’t have to get a computer science degree or even take a formal class. I think the perspective of free software and open source is very important to the future of creative programming,” Montfort said. “…If people take an academic project and provide their work as free software, that’s great for all sorts of reasons. It allows people to replicate your results, it allows people to build on your research, but also, people might take the work that you’ve done and inflect it in different types of artistic and creative ways.”