Some tasty news lately for aspiring cyborgs. Let’s take a look:
- From BBC News – Monkeys’ brain waves offer paraplegics hope:
…researchers trained the monkeys, Mango and Tangerine, to play a video game using a joystick to move the virtual arm and capture three identical targets. Each target was associated with a different vibration of the joystick.
Multiple electrodes were implanted in the brains of the monkeys and connected to the computer screen. The joystick was removed and motor signals from the monkey’s brains then controlled the arm.
At the same time, signals from the virtual fingers as they touched the targets were transmitted directly back into the brain.
The monkeys had to search for a target with a specific texture to gain a reward of fruit juice. It only took four attempts for one of the monkeys to figure out how to make the system work.
According to Prof Nicolelis, the system has now been developed so the monkeys can control the arm wirelessly.
“We have an interface for 600 channels of brain signal transmission, so we can transmit 600 channels of brain activity wirelessly as if you had 600 cell phones broadcasting this activity.
“For patients this will be very important because there will be no cables whatsoever connecting the patient to any equipment.”
The scientists say that this work represents a major step on the road to developing robotic exoskeletons – wearable technology that would allow patients afflicted by paralysis to regain some movement.
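For the curious, the brain-machine-brain loop described above boils down to two streams: motor intent decoded out of the recorded channels, and a touch signal fed back in when the virtual hand finds a target. Here’s a minimal Python sketch of that structure; the linear decoder, the synthetic spike counts, the target positions, and the texture frequencies are all invented stand-ins, not the actual Duke system.

```python
import numpy as np

# Toy sketch of the brain-machine-brain loop described above (not the real
# system): decode motor intent from many channels, move a virtual hand, and
# return a "texture" signal whenever the hand touches a target.

N_CHANNELS = 600          # the wireless channel count Nicolelis mentions
rng = np.random.default_rng(0)

# Hypothetical linear decoder mapping firing rates to (vx, vy) hand velocity.
decoder_weights = rng.normal(scale=0.01, size=(2, N_CHANNELS))

# Hypothetical targets: visually identical, distinguished only by "texture".
targets = {
    "A": {"pos": np.array([0.30, 0.70]), "texture_hz": 200.0},
    "B": {"pos": np.array([0.80, 0.20]), "texture_hz": 50.0},
    "C": {"pos": np.array([0.10, 0.10]), "texture_hz": 120.0},
}

hand = np.array([0.5, 0.5])
for step in range(1000):
    rates = rng.poisson(5.0, size=N_CHANNELS)   # stand-in for recorded spike counts
    velocity = decoder_weights @ rates          # outbound stream: motor decoding
    hand = np.clip(hand + 0.01 * velocity, 0.0, 1.0)

    for name, target in targets.items():
        if np.linalg.norm(hand - target["pos"]) < 0.05:
            # Inbound stream: in the real system this is delivered as
            # intracortical microstimulation; here we just report it.
            print(f"step {step}: touching {name}, feedback at {target['texture_hz']} Hz")
```

With purely synthetic spike counts the hand just wanders, of course; the point is the shape of the loop, decoding out and feeding back in on every iteration.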
- From Engadget – Cyberdyne HAL robotic arm hands-on:
…if all goes well, we may well see a brand new full-body suit at CES 2012 in January, so stay tuned.
- From Gizmodo – Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment:
…according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—”this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.”
Indeed, it’s mindblowing. I’m simultaneously excited and terrified. This is how it works:
They used three different subjects for the experiments—incidentally, they were part of the research team, because the work requires being inside a functional Magnetic Resonance Imaging system for hours at a time. The subjects were exposed to two different groups of Hollywood movie trailers as the fMRI system recorded the blood flow through their brains’ visual cortex.
The readings were fed into a computer program in which they were divided into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.
After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of potential brain activity for each clip. From all these videos, the software picked the one hundred clips whose predicted brain activity was most similar to the activity evoked by the clips the subjects actually watched, combining them into one final movie. Although the resulting video is low resolution and blurry, it clearly matches the actual clips watched by the subjects.
Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video are its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what most closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.
Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.
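To make the “palette” idea concrete, here is a minimal sketch of the matching-and-blending step, assuming an encoding model has already produced a predicted voxel response for every clip in the library. The function name, array shapes, and top-100 weighting scheme are assumptions for illustration, not the Gallant lab’s actual pipeline.

```python
import numpy as np

def reconstruct_frame(observed_voxels, library_responses, library_frames, top_k=100):
    """Pick the library clips whose predicted voxel activity best matches the
    observed activity, and blend their frames into one reconstruction.

    observed_voxels:   (n_voxels,) fMRI response while the subject watched a clip
    library_responses: (n_clips, n_voxels) predicted responses for each library clip
    library_frames:    (n_clips, H, W) one representative frame per library clip
    """
    # Correlation between the observed response and each predicted response.
    obs = (observed_voxels - observed_voxels.mean()) / observed_voxels.std()
    lib = (library_responses - library_responses.mean(axis=1, keepdims=True)) \
          / library_responses.std(axis=1, keepdims=True)
    similarity = lib @ obs / obs.size

    # The "painter's palette": keep the top_k most similar clips...
    best = np.argsort(similarity)[-top_k:]
    weights = similarity[best].clip(min=0)
    weights = weights / weights.sum() if weights.sum() > 0 else np.full(top_k, 1.0 / top_k)

    # ...and blend their frames into one blurry composite image.
    return np.tensordot(weights, library_frames[best], axes=1)

# Tiny synthetic demo with made-up shapes.
rng = np.random.default_rng(1)
library_responses = rng.normal(size=(5000, 300))   # 5,000 library clips, 300 voxels
library_frames = rng.random((5000, 32, 32))        # one 32x32 frame per clip
observed = library_responses[42] + rng.normal(scale=0.5, size=300)
frame = reconstruct_frame(observed, library_responses, library_frames)
print(frame.shape)  # (32, 32)
```

The blurriness of the published reconstructions falls out of this design: averaging a hundred loosely matching clips can only ever give you a soft composite, never a sharp copy of what the subject saw.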
- Let’s not forget our second selves. From WIRED – Clive Thompson on Memory Engineering:
Right now, of course, our digital lives are so bloated they’re basically imponderable. Many of us generate massive amounts of personal data every day — phonecam pictures, text messages, status updates, and so on. By default, all of us are becoming lifeloggers. But we almost never go back and look at this stuff, because it’s too hard to parse.
Memory engineers are solving that problem by creating services that reformat that data in witty, often artistic ways. 4SquareAnd7YearsAgo was coinvented this past winter by New York programmer Jonathan Wegener, who had a clever intuition: One year is a potent anniversary that makes us care about a specific moment in our past. After developing the Foursquare service, his team went on to craft PastPosts, which does the same thing with Facebook activity, and it has amassed tens of thousands of users in just a few months.
“There are so many trails we leave through the world,” Wegener says. “I wanted to make them interesting to you again.”
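The anniversary trick at the heart of 4SquareAnd7YearsAgo is simple enough to sketch. The check-in records and field names below are invented for illustration; a real service would pull this history from the Foursquare or Facebook APIs rather than a local list.

```python
from datetime import datetime, timedelta

# Hypothetical check-in history; a real memory-engineering service would fetch
# this from the user's account instead of hard-coding it.
checkins = [
    {"venue": "Shake Shack", "when": datetime(2010, 9, 27, 13, 5)},
    {"venue": "MoMA",        "when": datetime(2010, 9, 28, 18, 40)},
    {"venue": "JFK Airport", "when": datetime(2011, 3, 2, 7, 15)},
]

def one_year_ago(records, today=None):
    """Return the records that fall on this calendar day exactly one year ago."""
    today = today or datetime.now()
    target = (today - timedelta(days=365)).date()
    return [r for r in records if r["when"].date() == target]

for r in one_year_ago(checkins, today=datetime(2011, 9, 28)):
    print(f"One year ago today you were at {r['venue']}")
```

The hard part of these services isn’t the date arithmetic, it’s the curation: picking which of yesteryear’s trails are worth resurfacing so the digest feels like a memory rather than a log dump.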
Lastly, some older things that slipped through the cracks:
- From io9 – A gallery of biotech devices that could give you superpowers right now
- A quick tutorial on how to extract serial data from the $80 Mattel Mindflex (mindflexgames.com)
- From MIT’s Technology Review – Tattoo Tracks Sodium and Glucose via an iPhone:
The tattoo developed by Clark’s team contains 120-nanometer-wide polymer nanodroplets consisting of a fluorescent dye, specialized sensor molecules designed to bind to specific chemicals, and a charge-neutralizing molecule.
Once in the skin, the sensor molecules attract their target because they have the opposite charge. Once the target chemical is taken up, the sensor is forced to release ions in order to maintain an overall neutral charge, and this changes the fluorescence of the tattoo when it is hit by light. The more target molecules there are in the patient’s body, the more the molecules will bind to the sensors, and the more the fluorescence changes.
The original reader was a large boxlike device. One of Clark’s graduate students, Matt Dubach, improved upon that by making a modified iPhone case that allows any iPhone to read the tattoos.
Here’s how it works: a case that slips over the iPhone contains a nine-volt battery, a filter that fits over the iPhone’s camera, and an array of three LEDs that produce light in the visible part of the spectrum. This light causes the tattoos to fluoresce. A light-filtering lens is then placed over the iPhone’s camera. This filters out the light released by the LEDs, but not the light emitted by the tattoo. The device is pressed to the skin to prevent outside light from interfering.
Dubach and Clark hope to create an iPhone app that would easily measure and record sodium levels. At the moment, the iPhone simply takes images of the fluorescence, which the researchers then export to a computer for analysis. They also hope to get the reader to draw power from the iPhone itself, rather than from a battery.
Clark is working to expand her technology from glucose and sodium to include a wide range of potential targets. “Let’s say you have medication with a very narrow therapeutic range,” she says. Today, “you have to try it [a dosage] and see what happens.” She says her nanosensors, in contrast, could let people monitor the level of a given drug in their blood in real time, allowing for much more accurate dosing.
The researchers hope to soon be able to measure dissolved gases, such as nitrogen and oxygen, in the blood as a way of checking respiration and lung function. The more things they can track, the more applications will emerge, says Clark.
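As a rough illustration of the analysis step Dubach and Clark describe (read the fluorescence off a camera frame, then map it to a concentration), here is a short sketch. The region of interest, the green-channel choice, and the linear calibration are assumptions made for illustration, not the group’s published method.

```python
import numpy as np

def tattoo_concentration(image, roi, calibration):
    """Estimate analyte concentration from a fluorescence image of the tattoo.

    image:       (H, W, 3) RGB array captured through the emission filter
    roi:         (row_slice, col_slice) region of the frame covering the tattoo
    calibration: (intensity0, slope) from bench measurements at known concentrations
    """
    # Mean fluorescence intensity over the tattoo region (green channel here,
    # purely as an assumption about the dye's emission band).
    patch = image[roi[0], roi[1], 1].astype(float)
    intensity = patch.mean()

    # Assumed linear calibration: concentration grows with the change in
    # fluorescence relative to the zero-analyte baseline.
    intensity0, slope = calibration
    return max(0.0, (intensity - intensity0) / slope)

# Synthetic demo: a 480x640 frame with a brighter patch where the tattoo sits.
frame = np.full((480, 640, 3), 40, dtype=np.uint8)
frame[200:280, 300:380, 1] = 150
roi = (slice(200, 280), slice(300, 380))
print(tattoo_concentration(frame, roi, calibration=(40.0, 0.5)))  # arbitrary units
```

Once that mapping lives in an app instead of on a lab computer, the iPhone case stops being a camera accessory and becomes the monitor Clark is describing: point, shoot, and read your sodium level in real time.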