Cyborg News Special – 08-10-11

Posted by on October 8th, 2011

Some tasty news lately for aspirant cyborgs. Let’s take a look:

  • From BBC News – Monkeys’ brain waves offer paraplegics hope:


    …researchers trained the monkeys, Mango and Tangerine, to play a video game using a joystick to move the virtual arm and capture three identical targets. Each target was associated with a different vibration of the joystick.

    Multiple electrodes were implanted in the brains of the monkeys and connected to the computer screen. The joystick was removed and motor signals from the monkeys’ brains then controlled the arm.

    At the same time, signals from the virtual fingers as they touched the targets were transmitted directly back into the brain.

    The monkeys had to search for a target with a specific texture to gain a reward of fruit juice. It only took four attempts for one of the monkeys to figure out how to make the system work.

    According to Prof Nicolelis, the system has now been developed so the monkeys can control the arm wirelessly.

    “We have an interface for 600 channels of brain signal transmission, so we can transmit 600 channels of brain activity wirelessly as if you had 600 cell phones broadcasting this activity.

    “For patients this will be very important because there will be no cables whatsoever connecting the patient to any equipment.”

    The scientists say that this work represents a major step on the road to developing robotic exoskeletons – wearable technology that would allow patients afflicted by paralysis to regain some movement.
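
    The article doesn't spell out the decoding step, but at its core a brain-machine interface like this maps many channels of recorded neural activity onto movement commands, while touch signals travel the other way as stimulation patterns. Below is a minimal, hypothetical Python sketch of one control step; the linear decoder weights, the simulated firing rates and the texture patterns are invented for illustration and are not Nicolelis's actual method.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical linear decoder: 600 channels of neural activity -> a 2-D velocity
        # command for the virtual arm. A real interface fits these weights from training
        # data; here they are random stand-ins.
        n_channels = 600
        decoder_weights = rng.normal(scale=0.01, size=(2, n_channels))

        def decode_velocity(firing_rates):
            """Map one time-bin of multi-channel firing rates to an (x, y) velocity."""
            return decoder_weights @ firing_rates

        def texture_feedback(target_texture):
            """Pick a stimulation pattern to deliver when the virtual fingers touch a
            target; the pattern IDs are invented stand-ins for per-texture signals."""
            patterns = {"smooth": 0, "ribbed": 1, "coarse": 2}
            return patterns[target_texture]

        # One simulated control step: read a bin of activity, move the arm, feel the target.
        rates = rng.poisson(lam=5.0, size=n_channels).astype(float)
        vx, vy = decode_velocity(rates)
        print(f"arm velocity command: ({vx:+.2f}, {vy:+.2f})")
        print("stimulation pattern on contact:", texture_feedback("ribbed"))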

  • From Engadget – Cyberdyne HAL robotic arm hands-on:
    [Embedded video]

    …if all goes well, we may well see a brand new full-body suit at CES 2012 in January, so stay tuned.

  • From Gizmodo – Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment:
    [Embedded video]

    …according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—”this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.”

    Indeed, it’s mindblowing. I’m simultaneously excited and terrified. This is how it works:

    They used three different subjects for the experiments—incidentally, they were part of the research team because it requires being inside a functional Magnetic Resonance Imaging system for hours at a time. The subjects were exposed to two different groups of Hollywood movie trailers as the fMRI system recorded the blood flow through their brains’ visual cortex.

    The readings were fed into a computer program in which they were divided into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.

    After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of potential brain activity for each clip. From all these videos, the software picked the one hundred clips whose predicted brain activity was most similar to the activity recorded while the subject watched, and combined them into one final movie. Although the resulting video is low resolution and blurry, it clearly matches the actual clips watched by the subjects.

    Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what most closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.

    Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.
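
    A rough, hypothetical Python sketch of the matching-and-blending step described above: score every library clip by how similar its predicted voxel activity is to the activity recorded from the subject, keep the hundred closest matches, and average their frames into one blurry reconstruction. The correlation score, the weighting and all array shapes are illustrative assumptions, not the published decoder.

        import numpy as np

        def reconstruct_frame(recorded_voxels, library_voxels, library_frames, top_k=100):
            """Blend the frames of the library clips whose predicted voxel activity best
            matches the recorded activity into one reconstructed image.

            recorded_voxels : (n_voxels,) activity measured while the subject watched
            library_voxels  : (n_clips, n_voxels) predicted activity for each library clip
            library_frames  : (n_clips, height, width) one representative frame per clip
            """
            # Correlation between the recorded pattern and each clip's predicted pattern.
            rec = recorded_voxels - recorded_voxels.mean()
            lib = library_voxels - library_voxels.mean(axis=1, keepdims=True)
            scores = (lib @ rec) / (np.linalg.norm(lib, axis=1) * np.linalg.norm(rec) + 1e-12)

            # Keep the top-k most similar clips and blend them, weighted by similarity.
            best = np.argsort(scores)[-top_k:]
            weights = np.clip(scores[best], 0, None)
            weights = weights / (weights.sum() + 1e-12)
            return np.tensordot(weights, library_frames[best], axes=1)

        # Toy usage with random numbers standing in for real fMRI readings and clips.
        rng = np.random.default_rng(0)
        recorded = rng.normal(size=2000)              # one recorded voxel pattern
        library_v = rng.normal(size=(5000, 2000))     # predicted patterns for 5000 clips
        library_f = rng.random(size=(5000, 64, 64))   # one frame per clip
        print(reconstruct_frame(recorded, library_v, library_f).shape)  # (64, 64), blurry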

  • Let’s not forget our second selves. From WIRED – Clive Thompson on Memory Engineering:

    Right now, of course, our digital lives are so bloated they’re basically imponderable. Many of us generate massive amounts of personal data every day — phonecam pictures, text messages, status updates, and so on. By default, all of us are becoming lifeloggers. But we almost never go back and look at this stuff, because it’s too hard to parse.

    Memory engineers are solving that problem by creating services that reformat that data in witty, often artistic ways. 4SquareAnd7YearsAgo was coinvented this past winter by New York programmer Jonathan Wegener, who had a clever intuition: One year is a potent anniversary that makes us care about a specific moment in our past. After developing the Foursquare service, his team went on to craft PastPosts, which does the same thing with Facebook activity, and it has amassed tens of thousands of users in just a few months.

    “There are so many trails we leave through the world,” Wegener says. “I wanted to make them interesting to you again.”
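
    Mechanically, a service like 4SquareAnd7YearsAgo just resurfaces entries from your own activity log whose timestamps fall roughly one year in the past. A hypothetical Python sketch of that lookup over a local export of check-ins; the data layout and field names are invented and are not Foursquare's or Facebook's API.

        from datetime import datetime, timedelta

        # Hypothetical local export of check-ins: (timestamp, venue) pairs.
        checkins = [
            (datetime(2010, 10, 8, 19, 30), "Corner Bistro"),
            (datetime(2010, 10, 8, 22, 10), "Film Forum"),
            (datetime(2011, 3, 2, 9, 0), "JFK Terminal 5"),
        ]

        def one_year_ago_today(log, now, window_hours=24):
            """Return entries logged within half a window of exactly one year before `now`."""
            target = now - timedelta(days=365)
            half = timedelta(hours=window_hours / 2)
            return [(ts, venue) for ts, venue in log if abs(ts - target) <= half]

        for ts, venue in one_year_ago_today(checkins, now=datetime(2011, 10, 8, 20, 0)):
            print(f"One year ago today you were at {venue} ({ts:%H:%M}).")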

Lastly, some older things that slipped through the cracks:

  • From io9 – A gallery of biotech devices that could give you superpowers right now
  • http://www.vimeo.com/10184668

    A quick tutorial on how to extract serial data from the $80 Mattel Mindflex (mindflexgames.com)

  • From MIT’s Technology Review – Tattoo Tracks Sodium and Glucose via an iPhone:

    The tattoo developed by Clark’s team contains 120-nanometer-wide polymer nanodroplets consisting of a fluorescent dye, specialized sensor molecules designed to bind to specific chemicals, and a charge-neutralizing molecule.

    Once in the skin, the sensor molecules attract their target because they have the opposite charge. Once the target chemical is taken up, the sensor is forced to release ions in order to maintain an overall neutral charge, and this changes the fluorescence of the tattoo when it is hit by light. The more target molecules there are in the patient’s body, the more the molecules will bind to the sensors, and the more the fluorescence changes.

    The original reader was a large boxlike device. One of Clark’s graduate students, Matt Dubach, improved upon that by making a modified iPhone case that allows any iPhone to read the tattoos.

    Here’s how it works: a case that slips over the iPhone contains a nine-volt battery, an array of three LEDs that produce light in the visible part of the spectrum, and a light-filtering lens that fits over the iPhone’s camera. The LED light causes the tattoos to fluoresce; the lens filters out the light released by the LEDs, but not the light emitted by the tattoo. The device is pressed to the skin to prevent outside light from interfering.

    Dubach and Clark hope to create an iPhone app that would easily measure and record sodium levels. At the moment, the iPhone simply takes images of the fluorescence, which the researchers then export to a computer for analysis. They also hope to get the reader to draw power from the iPhone itself, rather than from a battery.

    Clark is working to expand her technology from glucose and sodium to include a wide range of potential targets. “Let’s say you have medication with a very narrow therapeutic range,” she says. Today, “you have to try it [a dosage] and see what happens.” She says her nanosensors, in contrast, could let people monitor the level of a given drug in their blood in real time, allowing for much more accurate dosing.

    The researchers hope to soon be able to measure dissolved gases, such as nitrogen and oxygen, in the blood as a way of checking respiration and lung function. The more things they can track, the more applications will emerge, says Clark.
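
    The quantitative step described above, more target chemical means a larger fluorescence change, comes down to measuring an intensity in the camera image and mapping it through a calibration curve. A hypothetical Python sketch of that off-phone analysis, assuming a grayscale image and a simple linear calibration; the region of interest and the calibration constants are invented for illustration.

        import numpy as np

        def fluorescence_intensity(image, roi):
            """Mean pixel intensity inside the tattoo region of interest.

            image : 2-D array (grayscale photo taken through the filter lens)
            roi   : (row_start, row_end, col_start, col_end)
            """
            r0, r1, c0, c1 = roi
            return float(image[r0:r1, c0:c1].mean())

        def sodium_mmol_per_l(intensity, baseline=40.0, slope=0.9):
            """Map intensity to a sodium concentration via a linear calibration.
            `baseline` and `slope` are made-up values; a real reader would be
            calibrated against samples of known concentration."""
            return max(0.0, (intensity - baseline) / slope)

        # Toy usage: a synthetic 480x640 image with a brighter patch where the tattoo sits.
        img = np.full((480, 640), 35.0)
        img[200:280, 300:380] = 160.0
        level = sodium_mmol_per_l(fluorescence_intensity(img, (200, 280, 300, 380)))
        print(f"Estimated sodium: {level:.0f} mmol/L")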


LIFT 11: Radical transparency and opaque algorithms

Posted by on February 8th, 2011

The LIFT 11 conference just concluded in Geneva, Switzerland. I’ve picked the two most interesting talks to post here, but there are many others, of course, so please feel free to post your favourites in the comments.

Hasan Elahi: Giving away your privacy to escape the US terrorist watch list

Hasan will tell us his incredible story: he was suspected of terrorism by the FBI by mistake, and ended up living totally in public to protect himself from surveillance. His talk will show how forfeiting your privacy can in fact become a new form of protection of your identity.

[Embedded video: liftconference on livestream.com]

Hasan concludes his talk by saying that if we all did what he does the intelligence community would be overwhelmed with information. Wrong; the NSA and others like it already do this. How? Algorithms running on incredibly powerful computer systems. Arguably a new lifeform, perhaps evolving to become the dominant one, if we believe the Singularitarians. Or is that already the case and we just haven’t realised it yet?

Kevin Slavin: Those algorithms that govern our lives

Digital technologies and on-line platforms are essential to the way we work and live. Interestingly, they are defined by algorithms which are not neutral. Kevin will discuss how they define new social norms and how our culture is affected by the possibilities embedded in the software we use.

[Embedded video: liftconference on livestream.com]

Incidental Media vs Invasive Advertising

Posted by on February 1st, 2011

The information we’re leaving and using on the internet is leaking out into the world, and there are many ways it can do so.

A subtle approach – making the information an ambient part of the background, popping up on anything from digital clocks to the ticker on a TV screen to a receipt – is explored in this collaboration between Dentsu London and BERG:

http://www.vimeo.com/16423199

These devices already exist, such as this mirror that displays your latest SMS messages as you approach it.

As we saw a while ago, the type of personalized advertising shown in the mall scene in Minority Report is just part of the biometric scanning system already being prototyped in Leon, Mexico. Good for the marketers and authorities who like to track people, but more than a little annoying to have personalized advertising constantly calling out to you.

How not to do it? Try this invasive, attention-seeking stunt from Alfa Romeo in a Dutch shopping centre:

[Embedded video]

The war for your attention is only going to get worse, unless we’re able to take control of how it’s used.


CNN video interview with Wafaa Bilal, of the Third I project

Posted by on December 6th, 2010

I know Kevin posted about this last month, but I just found this video interview by CNN and… well, you’ve got to see it. (Just try and self-filter out the CNN lady.)


SMS Skyscrapers

Posted by on November 25th, 2010

Created by Aaron Koblin, using the SMS traffic generated on New Year’s Eve:

Via NextNature.


Q Sensor – new wrist device to monitor stress

Posted by on October 28th, 2010

As reader Tzagash Shal-Goram said when sending this in: file this one under “shriekyware”. I have to agree.

Developed to help caregivers monitor the moods of autistic children, the device has plenty of other obvious uses – from personal alarms to livebloggin’ a night out.

More details from Technology Review:

[The] device developed by Affectiva, based in Waltham, Massachusetts, detects and records physiological signs of stress and excitement by measuring slight electrical changes in the skin. While researchers, doctors, and psychologists have long used this measurement – called skin conductance – in the lab or clinical setting, Affectiva’s Q Sensor is worn on a wristband and lets people keep track of stress during everyday activities. The Q Sensor stores or transmits a wearer’s stress levels throughout the day.

When a person – autistic or not – experiences stress or enters “fight or flight” mode, moisture collects under the skin (often leading to sweating) as a sympathetic nervous system response. This rising moisture makes the skin more electrically conductive. Skin conductance sensors send a tiny electrical pulse to one point of the skin and measure the strength of that signal at another point on the skin to detect its conductivity.
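
In signal terms, skin conductance is just the measured current divided by the applied voltage (G = I / V, typically reported in microsiemens), and the responses a device like the Q Sensor looks for show up as rapid rises in that signal. A hypothetical Python sketch of how such a trace might be processed; the sampling rate and threshold are invented, and this is not Affectiva's algorithm.

    def conductance_microsiemens(current_ua, voltage_v):
        """Skin conductance G = I / V; with I in microamps and V in volts, G is in microsiemens."""
        return current_ua / voltage_v

    def flag_stress_events(samples, sample_rate_hz=4, rise_threshold_us=0.05):
        """Return the times (in seconds) where conductance rises faster than the
        threshold per sample -- a crude stand-in for skin-conductance-response detection."""
        events = []
        for i in range(1, len(samples)):
            if samples[i] - samples[i - 1] > rise_threshold_us:
                events.append(i / sample_rate_hz)
        return events

    # Toy trace in microsiemens: calm baseline, then a sharp rise around the 2-second mark.
    trace = [2.00, 2.01, 2.00, 2.02, 2.01, 2.02, 2.03, 2.02, 2.15, 2.40, 2.55, 2.60]
    print(flag_stress_events(trace))  # [2.0, 2.25, 2.5]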

More still in this video from Technology Review.


Dutch researchers create new Body Area Network (BAN) for telehealth monitoring

Posted by on October 9th, 2010

From New Scientist:

Dutch research organisation IMEC, based in Eindhoven, this week demonstrated a new type of wireless body area network (BAN). Dubbed the Human++ BAN platform, the system converts IMEC’s ultra-low-power electrocardiogram sensors into wireless nodes in a short-range network, transmitting physiological data to a hub – the patient’s cellphone. From there, the readings can be forwarded to doctors via a Wi-Fi or 3G connection. They can also be displayed on the phone or sound an alarm when things are about to go wrong, giving patients like me a chance to try to slow our heart rates and avoid an unnecessary shock.

Julien Penders, who developed the system, says it can also work with other low-power medical sensors, such as electroencephalograms (EEGs) to monitor neurological conditions or electromyograms to detect neuromuscular diseases. Besides helping those already diagnosed with chronic conditions, BANs could be used by people at risk of developing medical problems – the so-called “worried well” – or by fitness enthusiasts and athletes who want to keep tabs on their physiological processes during training.

IMEC’s technology is not the first BAN, but integrates better than earlier versions with the gadgets that many people carry around with them. IMEC has created a dongle that plugs into the standard SD memory card interface of a cellphone to stream data from the sensors in real time and allow the phone to reconfigure the sampling frequency of sensors on the fly. The associated software runs on Google’s Android cellphone operating system.
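
New Scientist doesn't describe the hub software itself, but its job as described above, collect samples from body-worn nodes, raise alarms, forward readings onward and retune sampling rates on the fly, can be roughed out as follows. Every class and method name in this Python sketch is hypothetical.

    import random
    import time

    class EcgNode:
        """Stand-in for a body-worn ECG node; real nodes stream over the BAN radio."""
        def __init__(self, sample_rate_hz=200):
            self.sample_rate_hz = sample_rate_hz

        def read_heart_rate(self):
            return random.randint(55, 110)  # fake beats-per-minute reading

    class PhoneHub:
        """Stand-in for the cellphone hub: shows readings, raises alarms and
        reconfigures the node's sampling rate on the fly."""
        def __init__(self, node, alarm_bpm=100):
            self.node = node
            self.alarm_bpm = alarm_bpm

        def poll_once(self):
            bpm = self.node.read_heart_rate()
            if bpm >= self.alarm_bpm:
                print(f"ALARM: heart rate {bpm} bpm -- notifying patient and doctor")
                self.node.sample_rate_hz = 400   # sample faster while things look worrying
            else:
                print(f"OK: heart rate {bpm} bpm (sampling at {self.node.sample_rate_hz} Hz)")
                self.node.sample_rate_hz = 200
            # A real hub would also forward the reading to the doctor over Wi-Fi or 3G here.

    hub = PhoneHub(EcgNode())
    for _ in range(5):
        hub.poll_once()
        time.sleep(0.1)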


Interview with Kevin Warwick on Motherboard.tv

Posted by on August 13th, 2010

We kind of like Kevin Warwick a lot here, and for good reason: like Tony Stark, he is using himself as a test pilot for the future.

So, of course, we must post this interview with him over on Motherboard.tv:

Most interesting is him further confirming that neuro-streaming will be the Next Big Thing in lifelogging.

via Gizmodo


Watch a NY Times journalist try to interview a robot

Posted by on July 17th, 2010

Time for a chuckle, as we watch this stumbling interview between a NY Times journalist and the robot “Bina48”, a creation of Terasem:

[Embedded video]

As is made clear, Bina48 isn’t actually a robot, but rather an imperfect digital emulation of a person based on an incomplete ‘upload’ – which would be a far more interesting thing to explore than “LOL, robot speak funny”. Better luck next time, NY Times!


Amber Case: Cyborg Anthropologist

Posted by on March 20th, 2010

What exactly is a cyborg anthropologist? 

Let Amber herself tell you, in this video from late last year on ‘prosthetic culture’:

[Embedded video]

Like to know more?  Our friends over at Technoccult just did a great interview with her.

Thanks for the YouTube link Vertigo Jones!


Facial recognition phone application

Posted by on March 3rd, 2010

From textually.org:
Swedish software developer The Astonishing Tribe is testing an iPhone application called Recognizr that will enable the user to find the names and numbers of complete strangers.

The user simply has to take a picture of a person and hit the ‘Recognize’ button.

The photo is then compared to shots on social networking sites including Facebook and Twitter before personal information, which can include phone numbers, addresses and email addresses, is sent to the user.

The app works on phones with a camera resolution of five or more megapixels.
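
The article doesn't describe The Astonishing Tribe's implementation, but apps like this generally reduce each face photo to a numeric "embedding" and look for the nearest stored profile. A generic, hypothetical Python sketch of that matching step; the embeddings, the distance threshold and the profile data are all invented.

    import numpy as np

    def best_match(query_embedding, profiles, max_distance=0.6):
        """Return the (name, contact) whose stored face embedding is closest to the
        query, or None if nothing is close enough to count as a match.

        profiles : list of (name, contact_info, embedding) tuples. A real app would
        compute embeddings with a face-recognition model; here they are random vectors.
        """
        best = None
        best_dist = max_distance
        for name, contact, emb in profiles:
            dist = float(np.linalg.norm(query_embedding - emb))
            if dist < best_dist:
                best, best_dist = (name, contact), dist
        return best

    # Toy usage with made-up 128-dimensional embeddings.
    rng = np.random.default_rng(1)
    alice = rng.normal(size=128)
    profiles = [("Alice", "alice@example.com", alice),
                ("Bob", "bob@example.com", rng.normal(size=128))]
    query = alice + rng.normal(scale=0.01, size=128)      # a new photo of Alice
    print(best_match(query, profiles, max_distance=1.0))  # ('Alice', 'alice@example.com')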

Via textually.org.


4Chan founder speaks to CNN

Posted by on February 23rd, 2010

Chris Poole, founder of 4Chan, did a short interview with CNN.

He has some very interesting things to say about online identity and lifestreaming and, well, truth:

He also spoke at the TED 2010 conference. Can’t wait to check that out when it goes online.


Motion-sensing phones that predict your every move

Posted by on December 13th, 2009

From NewScientist.com:

Could your cellphone learn to predict what you are going to do before you’ve even started doing it?

Communications engineer Arjen Peddemors thinks so, and along with colleagues at the Technical University of Delft in the Netherlands he has devised a system that learns users’ behaviour patterns to provide them with an enhanced cellphone service. It could, for example, prevent the phone starting large downloads such as music tracks or podcasts when your behaviour suggests you are about to go out of network range.

Such prediction has become possible because smartphones like the Nokia N97 and Apple iPhone contain accelerometers that sense motion. They are normally used to reorient images when the screen is flipped from vertical to horizontal, or by software that responds to a shake of the phone. But Peddemors realised that they also generate a data stream that reflects every move the phone’s owner makes.

Routine events such as going to work are likely always to involve similar sequences of actions: locking the front door, opening the garage, getting in the car, for instance. The Delft system uses telltale sequences and timings like this to create an electronic signature of particular events.

A neural network software app running on the phone is then trained to predict what happens next and act accordingly. So if your regular drive to work takes you through a particular phone cell, the “going to work” signature could trigger the software to negotiate with the cellphone network to ensure that the cell will have the 3G capacity to maintain your streaming music channel as you drive through it.
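
The article doesn't give details of Peddemors's model, but the core idea, learn which event tends to follow a given signature of recent events and act on the prediction, can be illustrated with a simple frequency model standing in for the neural network. The event names and history length in this Python sketch are invented.

    from collections import Counter, defaultdict

    class NextEventPredictor:
        """Learn which event most often follows each short sequence of recent events;
        a crude frequency model standing in for the neural network described above."""
        def __init__(self, history_len=2):
            self.history_len = history_len
            self.counts = defaultdict(Counter)

        def train(self, events):
            h = self.history_len
            for i in range(h, len(events)):
                self.counts[tuple(events[i - h:i])][events[i]] += 1

        def predict(self, recent_events):
            context = tuple(recent_events[-self.history_len:])
            if not self.counts[context]:
                return None
            return self.counts[context].most_common(1)[0][0]

    # A hypothetical morning routine, repeated over a week of accelerometer-derived events.
    day = ["lock_front_door", "open_garage", "get_in_car", "drive_to_work"]
    model = NextEventPredictor()
    model.train(day * 7)

    print(model.predict(["lock_front_door", "open_garage"]))  # -> 'get_in_car'
    # Having spotted the start of the "going to work" signature, the phone could now
    # hold off on large downloads or pre-negotiate 3G capacity along the route.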


First Wi-Fi pacemaker in U.S. gives patient freedom

Posted by on August 11th, 2009

After relying on a pacemaker for 20 years, Carol Kasyjanski has become the first American recipient of a wireless pacemaker that allows her doctor to monitor her health from afar — over the Internet.

When Kasyjanski heads to St. Francis Hospital in Roslyn, New York, for a routine check-up, about 90 percent of the work has already been done because her doctor logged into his computer and learned most of what he needed to know about his patient.

Three weeks ago Kasyjanski, 61, became the first person in the United States to be implanted with a pacemaker with a wireless home monitoring system that transmits critical information to her doctor via the Internet.

Kasyjanski, who has suffered from a severe heart condition for more than 20 years, says the device has given her renewed confidence and a new lease of life, because if her pacemaker were to malfunction or stop working, only immediate action would save her life.

“Years ago the problem was with my lead, it was nicked, and until I collapsed no one knew what the problem was, no tests would show what the problem was until I passed out,” she told Reuters Television.

Dr. Steven Greenberg, the director of St. Francis’ Arrhythmia and Pacemaker Center, said the new technology helps him better treat his patients and will likely become the new standard in pacemakers.

He said the server and the remote monitor communicate at least once a day to download all the relevant information and alert the doctor and patient if there is anything unusual.

“If there is anything abnormal, and we have a very intricate system set up, it will literally call the physician responsible at two in the morning if need be,” he said.

Link and words via reuters.com. Interesting that the article mentions nothing about any security measures in place.


Landlord sues tenant after tweet about moldy apartment

Posted by on July 28th, 2009

Food for thought on public tweets:

Those 140-character “microblog” posts to Twitter don’t constitute much more than links, dinner recipes, and bitching, right? Be careful with the bitching, though—a property management company in Chicago has filed a lawsuit against a tenant who tweeted an off-the-cuff comment about the company. The company, Horizon Group Management, says that the Twitter user in question sent the message maliciously, and is now asking for $50,000 in damages.

There are several reasons why this lawsuit is breaking new ground, not the least of which is its Twitter origin. There is much debate as to whether people’s Twitter streams are more like blogs—which are increasingly being held to the same legal standards as regular media when it comes to defamation—or a giant chat room, where most people presume “anything goes.” It may actually be somewhere in between, but the one problem with trying to hold tweets to a higher journalistic standard is the hard character limitation—it’s difficult to back up your comments within 140 characters (or even within several 140-character tweets), plus links to sources or pictures of evidence.

The other question is: did Horizon make any effort to sort out this issue with Bonnen before filing the lawsuit? It doesn’t seem so, given Bonnen’s immediate deletion of her Twitter account after the lawsuit was filed, but we admittedly don’t know the answer (and Horizon did not respond to our request for comment by publication time). The lawsuit makes no mention of the company making any effort to ensure that Bonnen’s apartment doesn’t have mold or to work with her to address her concerns.

Either way, the company has now managed to position itself as one that a lot of renters and prospective homeowners wouldn’t want to do business with, unlike those that monitor their reputations on Twitter to address customer service issues. Zipcar, Boingo, one of my local pizza places, and even Allstate and Comcast have all swooped in to help out Ars staffers in need after we have aired some complaints. Even if Bonnen really had no mold and Horizon was technically innocent, the bad PR from this move will surely do more damage than Bonnen’s message to 20 of her best Twitter friends.

From arstechnica.com.


Does Social Media Produce Groupthink?

Posted by on July 28th, 2009

From inventorspot.com, Ron Callari applies the eight signs of Janis’ “Groupthink” thesis to social media:

In the 1970s, Irving L. Janis’s book “Victims of Groupthink” described it as “a deterioration of mental efficiency, reality testing and moral judgment that results from in-group pressures.” In the Age of Social Media, where social networks like Twitter and Facebook have consumed our lives, has Digital Man evolved into the current version of “groupthink” or the herd mentality?

* Invulnerability. Members of the group are so overly optimistic that they are willing to take extraordinary risks and unwilling to heed signs of danger.
An example here might be the rallying cry we heard from the streets of Tehran, where access to the microblogging site Twitter was used to amplify the protesters’ message to the world. While on the one hand using Twitter as a communication tool was eye-opening, might it have created a false sense of security? As the West joined the Iranian protesters online, did we put people at risk? I myself was approached by several of my LinkedIn contacts asking me to remove the lists of Iranian Twitter account names from blog posts I had published.

* Rationale. They rationalize away negative feedback and warnings that might otherwise cause the group to change course.
Are we encouraging children to be intellectually curious or merely teaching them that every question has an instant and obvious answer? Does Google or Twitter Search make us less intellectually curious as we rely on their easily accessible database of knowledge?


Smart tags to reveal where our trash ends up

Posted by on July 16th, 2009

Ever wondered where your trash goes to die? New Scientist is collaborating with the Massachusetts Institute of Technology in a ground-breaking experiment to electronically tag and follow ordinary trash as it travels from ordinary garbage cans to landfills, recycling plants, and possibly some extraordinary destinations.

The team behind the experiment, MIT’s Senseable City lab, led by Carlo Ratti, have made a device that is about the size of a small matchbox and that works like a cell phone – without the phone bit. A SIM card inside the chip blips out its location every 15 minutes; the signal is picked up by local cell phone antennae and the chip’s location is relayed back to MIT.

Ratti’s team and New Scientist have already deployed a test run of 50 tracked items of trash ranging from paper cups to computers in Seattle. Several thousand more will be released in Seattle and New York garbage cans later this summer and we’ll chuck a batch into the London trash for good measure.

From newscientist.com.


Spacebook – an interactive, post-privacy house

Posted by on July 15th, 2009

MIT’s Spacebook project looks to be a very interesting exploration of post-privacy:

Spacebook is a project to design an interactive house whose walls gradually change in transparency with changes in local environmental conditions and the presence or absence of people inside and outside the space. The project uses a new type of glass that was recently patented at the SENSEable City Lab.
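
The project description doesn't detail the control logic, but walls whose transparency tracks environmental conditions and occupancy imply some mapping from sensed inputs to an opacity level. A purely hypothetical Python sketch of one such rule; every weight and threshold is invented.

    def wall_transparency(outdoor_light, people_inside, people_outside):
        """Return a transparency level between 0.0 (opaque) and 1.0 (clear).

        outdoor_light : hypothetical normalised sensor reading, 0.0 (dark) to 1.0 (bright)
        """
        transparency = 0.3 + 0.7 * outdoor_light            # admit more light on bright days
        if people_inside and people_outside:
            transparency *= 0.4                              # privacy from passers-by
        elif not people_inside:
            transparency = min(1.0, transparency + 0.2)      # empty room: nothing to hide
        return round(max(0.0, min(1.0, transparency)), 2)

    print(wall_transparency(outdoor_light=0.9, people_inside=2, people_outside=0))  # 0.93
    print(wall_transparency(outdoor_light=0.9, people_inside=2, people_outside=3))  # 0.37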

[Embedded video]

via Planet Damage


The Rise Of Homeless Internet Users

Posted by on June 1st, 2009

Anyone can be anyone on the internet, even if they don’t have a permanent roof over their head:

Cheap computers and free Internet access fuel the phenomenon. So does an increasingly computer-savvy population. Many job and housing applications must be submitted online. Some homeless advocates say the economic downturn is pushing more of the wired middle class on to the streets.

Link via disinfo.com, story from the Wall Street Journal online.


New ‘smart patch’ is a dieter’s best friend

Posted by on May 1st, 2009

Because friends always tell the truth, right?

From Technology Review:

The calorie monitor, which is being developed by biotech incubator PhiloMetron, uses a combination of sensors, electrodes, and accelerometers that – together with a unique algorithm – measure the number of calories eaten, the number of calories burned, and the net gain or loss over a 24-hour period. The patch sends this data via a Bluetooth wireless connection to a dieter’s cell phone, where an application tracks the totals and provides support. “You missed your goal for today, but you can make it up tomorrow by taking a 15-minute walk or having a salad for dinner,” it might suggest.

All this in something “no bigger than a large Band-Aid”.
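
The arithmetic behind that nudge is a simple daily energy balance: calories eaten minus calories burned, compared against a goal. A hypothetical Python sketch in the spirit of the suggestion quoted above; the goal, the walking burn rate and the wording are invented, not PhiloMetron's algorithm.

    def daily_summary(calories_eaten, calories_burned, goal_deficit=500):
        """Report the 24-hour net balance and, if the goal deficit was missed,
        suggest roughly how to make it up the next day."""
        net = calories_eaten - calories_burned
        deficit = -net
        if deficit >= goal_deficit:
            return f"Net {net:+d} kcal today -- goal met."
        shortfall = goal_deficit - deficit
        walk_minutes = round(shortfall / 5)   # assume ~5 kcal per minute of brisk walking
        return (f"Net {net:+d} kcal today -- you missed your goal by {shortfall} kcal. "
                f"Tomorrow, try a {walk_minutes}-minute walk or a lighter dinner.")

    print(daily_summary(calories_eaten=2200, calories_burned=2625))
    # Net -425 kcal today -- you missed your goal by 75 kcal.
    # Tomorrow, try a 15-minute walk or a lighter dinner.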