TRUE SKIN [short film]

Posted by on October 11th, 2012
http://www.vimeo.com/51138699

via Digitalyn


they call it “beaming”

Posted by on May 14th, 2012

From the BBC:

Beaming, of a kind, is no longer pure science fiction. It is the name of an international project funded by the European Commission to investigate how a person can visit a remote location via the internet and feel fully immersed in the new environment.

The visitor may be embodied as an avatar or a robot, interacting with real people.

Motion capture technology – such as the Microsoft Kinect sensor – robots, 3D glasses and special haptic suits with body sensors can all be used to create a rich, realistic experience that reproduces that holy grail – “presence”.

Project leader Mel Slater, professor of virtual environments at University College London (UCL), calls beaming augmented reality, rather than virtual reality. In beaming – unlike the virtual worlds of computer games and the Second Life website – the robot or avatar interacts with real people in a real place.

He and his team have beamed people from Barcelona to London, embodying them either as a robot, or as an avatar in a specially equipped “cave”. One avatar was able to rehearse a play with a real actor, the stage being represented by the cave’s walls – screens projecting 3D images.
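
Strip away the cave and the robot, and the core loop is simple: capture a pose locally, stream it, re-embody it remotely. Here is a minimal sketch of that loop in Python, with the capture source stubbed out; it illustrates the idea, and is not the BEAMING project’s actual stack:

# Minimal "beaming" loop: stream captured joint positions to a remote
# endpoint that drives an avatar or robot. Capture is stubbed out; the
# real project uses motion-capture rigs, HMDs and haptic suits.
import json
import socket
import time

REMOTE = ("127.0.0.1", 9000)  # stand-in address for the remote "cave"

def capture_pose():
    """Stub: return joint name -> (x, y, z) from a motion tracker."""
    return {"head": (0.0, 1.7, 0.0), "r_hand": (0.3, 1.2, 0.4)}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    # Timestamped JSON over UDP: for low-latency presence it is better
    # to drop a stale pose than to wait for a retransmission.
    packet = json.dumps({"t": time.time(), "pose": capture_pose()})
    sock.sendto(packet.encode(), REMOTE)
    time.sleep(1 / 60)  # ~60 pose updates per second

Everything that makes it feel like presence (the haptics, the cave, the robot body) lives at either end of that socket.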

…this also raises the possibility of new types of crime.

Could beaming increase the risk of sexual harassment or even virtual rape? That is one of many ethical questions that the beaming project is considering, along with the technical challenges.

Law researcher Ray Purdy says you might get a new type of cyber crime, where lovers have consensual sexual contact via beaming and a hacker hijacks the man’s avatar to have virtual sex with the woman.

It raises all sorts of problems that courts and lawmakers may need to resolve. How could a court prove that that amounted to molestation or rape? The human who hacks into an avatar could easily live in another country, under different laws.

The electronic evidence might be insufficient for prosecution. Crimes taking place remotely might sometimes leave digital trails, but they do not leave forensic evidence, which is often vital to secure rape convictions, Purdy says.

“Clearly, laws might have to adapt to the fact that certain crimes can be committed at a distance, via the use of beamed technologies,” he says.

Sexual penetration by a robot part is another possibility. Current law may not go far enough to cover that, Purdy says. And what if a robot injured you with an over-zealous handshake? Or if an avatar made a sexually explicit gesture amounting to sexual harassment?

He argues that using a robot maliciously would be similar in law to using a gun – responsibility lies with the controller. “While it is the gun that fires the bullet, it is the person in control of the gun that commits the act – not the gun itself.”

The Kinect technology, capturing an individual’s gestures, is potentially a powerful tool in the hands of an identity thief, argues Prof Jeremy Bailenson, founder of the Virtual Human Interaction Lab at Stanford University, California.

“A hacker can steal my very essence, really capture all of my nuances, then build a competing avatar, a copy of me,” he told the BBC. “The courts haven’t even begun to think about that.”

Prof Patrick Haggard, a neuroscientist at UCL who has been examining ethical issues thrown up by beaming, says there is a risk that such a virtual culture could reinforce body image prejudices.

But equally an avatar could form part of a therapy, he says, for example to show an obese person how he or she might look after losing weight.

As beaming develops, one of the biggest questions for philosophers may be defining where a person actually is – just as it is key for lawyers to determine in which jurisdiction an avatar’s crime is committed.

Even now people are often physically in one place but immersed in a virtual world online.

Avatars challenge the human bond between identity and a physical body.

“My body may be here in London but my life may be in a virtual apartment in New York,” says Haggard. “So where am I really?”

Click through for more, including a video demonstration of the tech.


the brain-controlled drones are here

Posted by on April 25th, 2012

Take the Emotiv EPOC neuroheadset, connect it to an AR Drone using the dark magix of computer science, and you get this:

[Embedded YouTube video]

thanks Justin Pickard!
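
For anyone wanting to reproduce the trick: the AR.Drone accepts plain-text AT commands over UDP, so the hack reduces to mapping the headset’s classifier output onto flight commands. A Python sketch with the EEG side stubbed out (no Emotiv SDK bindings are assumed here); this is an illustration, not the code from the video:

# Map a stubbed EEG "push" intensity onto AR.Drone flight commands.
# The drone listens for AT commands over UDP on port 5556.
import socket
import struct
import time

DRONE = ("192.168.1.1", 5556)  # the AR.Drone's default address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0

def f2i(x: float) -> int:
    """AT protocol quirk: floats are sent as their int32 bit pattern."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def send(cmd: str):
    global seq
    seq += 1
    sock.sendto(f"AT*{cmd.format(seq=seq)}\r".encode(), DRONE)

def mental_push() -> float:
    """Stub: 0..1 'push' intensity from the headset's classifier."""
    return 0.8

send("REF={seq},290718208")  # take off
for _ in range(200):
    pitch = -0.5 * mental_push()  # nose down = fly forward
    send(f"PCMD={{seq}},1,{f2i(0.0)},{f2i(pitch)},{f2i(0.0)},{f2i(0.0)}")
    time.sleep(0.03)  # ~30 commands/s keeps the control link alive
send("REF={seq},290717696")  # land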


Projected tiger has golden 2D run through Paris

Posted by on March 25th, 2012
http://www.vimeo.com/36338299


via Laughing Squid


In Defense of the Retail Simulacra

Posted by on December 10th, 2011

Recently, retail clothing chain H&M has caught a great deal of flack for using computer-generated bodies in their online catalog. And while there is something to be said for looking critically at the introduction of computer-generated “perfection” into an industry already psychotically obsessed with unattainable standards of physical beauty, Coilhouse’s Nadya Lev has some relevant re-contextualization to share:

Also, this foray into the uncanny valley brings us one step closer to the age of the idoru. With teenage pop idol Aimi Eguchi, whose face is a composite of six different singers, and vocaloids (singing synthesizers) such as pigtailed holographic superstar Hatsune Miku, we’re almost there — in The Future. And even though H&M’s online catalogue conforms to the same beauty standard as any other big fashion retailer, this technology actually has potential to subvert the paradigm altogether.

See the rest over at Coilhouse.

[See also: Building a Better Pop Star, and Building a Better Pop Star II]


Marco Tempest’s Open Source Techno Magic

Posted by on November 16th, 2011

“Using sleight-of-hand techniques and charming storytelling, techno-illusionist Marco Tempest brings a jaunty stick figure to life onstage at TEDGlobal.”


MSFT’s hand-held AR demo

Posted by on November 3rd, 2011

Now this is WAY better than MSFT’s CorporateFuturist video:

[Embedded YouTube video]

via @th0ma5 | zenbullets


FuturePresent News Special – 1-11-11

Posted by on November 1st, 2011

Here’s your menu for today’s FuturePresent news round-up:

  • MSFT’s “Productivity Vision 2011” video:
    [Embedded YouTube video]
    via GeekWire, who give this nice description:

    As the new video opens, special eyeglasses translate audio into English in real-time for a business traveler in Johannesburg. A thin screen on a car window highlights a passing building to show where her meeting will be the next day, based on information from her calendar. Office workers gesture effortlessly to control and reroute text and charts as the screens around them morph and pulse with new information.

    And on and on from there, making our modern-day digital breakthroughs seem like mere baby steps on the road to a far more spectacular future.

    I want my fucking spex now as much as the next cyberpunk, BUT… actual world problems solved here? ZERO. When the current estimate is that 80 million new jobs need to be created to replace the ones lost during this recent period of disaster capitalism, building a shinier operating system hardly seems likely to help.

  • In better cyberpunky news, from the very same Microsoft, there’s OMNITOUCH:
    [Embedded YouTube video]
    via Design Taxi, who give us this succinct description:

    OmniTouch is a depth-sensing projection system worn on the shoulder.

    With the system, hands, legs, arms, walls, books and tabletops become interactive touch-screen surfaces—without any need for calibration.

    If only they didn’t look so terrible. Get ya mod on there future-dwellers!

  • It may have over 5 million views, but let’s take a look at the QUANTUM LEVITATION video again:
    [Embedded YouTube video]
    via Gizmodo. Advances in basic science and engineering, now we’re talking!

  • If you like SCIENCE! you’ll love simulated pocket universes:

    Whether life exists elsewhere in our universe is a longstanding mystery. But for some scientists, there’s another interesting question: could there be life in a universe significantly different from our own?

    Some of these universes would collapse instants after forming; in others, the forces between particles would be so weak they could not give rise to atoms or molecules. However, if conditions were suitable, matter would coalesce into galaxies and planets, and if the right elements were present in those worlds, intelligent life could evolve.

    Some physicists have theorized that only universes in which the laws of physics are “just so” could support life, and that if things were even a little bit different from our world, intelligent life would be impossible. In that case, our physical laws might be explained “anthropically,” meaning that they are as they are because if they were otherwise, no one would be around to notice them.

    MIT physics professor Robert Jaffe and his collaborators felt that this proposed anthropic explanation should be subjected to more careful scrutiny, and decided to explore whether universes with different physical laws could support life. Unlike most other studies, in which varying only one constant usually produces an inhospitable universe, they examined more than one constant.

    In work recently featured in a cover story in Scientific American, Jaffe, former MIT postdoc Alejandro Jenkins, and recent MIT graduate Itamar Kimchi showed that universes quite different from ours still have elements similar to carbon, hydrogen, and oxygen, and could therefore evolve life forms quite similar to us. Even when the masses of the elementary particles called quarks are dramatically altered, life may find a way.

    “You could change them by significant amounts without eliminating the possibility of organic chemistry in the universe,” says Jenkins.

    Keep reading… And if that’s not heavy enough for you, how about a paper on the mass of the universe in a black hole? (via reddit)

  • From the macro to the micro – Scientists create computing building blocks from bacteria and DNA [PhysOrg]:

    The scientists constructed a type of logic gate called an “AND gate” from the bacterium Escherichia coli (E. coli), which is normally found in the lower intestine. The team altered the E. coli with modified DNA, which reprogrammed it to perform the same switching on and off process as its electronic equivalent when stimulated by chemicals.

    The researchers were also able to demonstrate that the biological logic gates could be connected together to form more complex components, in a similar way to how electronic components are combined. In another experiment, the researchers created a “NOT gate” and combined it with the AND gate to produce the more complex “NAND gate”.

    The next stage of the research will see the team trying to develop more complex circuitry that comprises multiple logic gates. One of the challenges faced by the team is finding a way to link multiple biological logic gates together, similar to the way in which electronic logic gates are linked, to enable complex processing to be carried out. (A toy sketch of the gate composition follows below.)
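
Here’s that composition as a toy model in Python, treating each gate as a threshold function on chemical signal levels. It’s illustrative only, nothing like the team’s actual wet-lab constructs:

# Toy model of composing logic gates, biological-style: each "gate"
# maps chemical input levels (normalised 0..1) to an output signal.
THRESHOLD = 0.5

def AND(a: float, b: float) -> float:
    """Output signal only when both inducer chemicals are present."""
    return 1.0 if a > THRESHOLD and b > THRESHOLD else 0.0

def NOT(a: float) -> float:
    """Repressor-style gate: an input switches the output off."""
    return 0.0 if a > THRESHOLD else 1.0

def NAND(a: float, b: float) -> float:
    """Wire NOT onto AND, as the researchers did with their E. coli gates."""
    return NOT(AND(a, b))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", NAND(a, b))  # only 1,1 gives 0.0

The point of the wet-lab version is the same as the silicon one: once you can compose NOT and AND, NAND follows, and NAND is enough to build everything else.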


Sony’s “SmartAR” Augmented Reality Tech Demo

Posted by on May 30th, 2011

[Embedded YouTube video]

Needless to say, the ability to photograph barcode-less items in the real world and get instant information on them could be huge, a sort of away-from-a-home-computer Google. What remains to be seen is if Sony can bring it to the masses in a palatable format and, of course, what Google will counteroffer if SmartAR takes off.

Video and words from core77.com.
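
Sony haven’t published SmartAR’s internals, but markerless recognition of this kind is commonly built on local feature matching. A rough Python sketch of the idea using OpenCV’s ORB features; the filenames are placeholders and the thresholds are assumptions, not Sony’s algorithm:

# Markerless object recognition in the SmartAR vein: match local
# features between a reference photo of an item and a camera frame.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ref = cv2.imread("product_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

_, des_ref = orb.detectAndCompute(ref, None)
_, des_frame = orb.detectAndCompute(frame, None)

matches = matcher.match(des_ref, des_frame)
good = [m for m in matches if m.distance < 40]  # tight Hamming cutoff

# Enough consistent matches -> treat the item as recognised and go
# fetch its info (the "away-from-a-home-computer Google" step).
print("recognised" if len(good) > 30 else "no match", len(good))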


Song of the Machine

Posted by on April 23rd, 2011
http://www.vimeo.com/22616192

Song of the Machine is my favourite kind of design fiction, combining multiple forms of extrapolation from the present into the future.

Unlike the implants and electrodes used to achieve bionic vision, this science modifies the human body genetically from within. First, a virus is used to infect the degenerate eye with a light-sensitive protein, altering the biological capabilities of the subject. Then, the new biological capabilities are augmented with wearable (opto)electronics, which, by mimicking the eye’s neural song, establish a direct optical link to the brain. It’s as if the virus gives the body ears to hear the song of the machine, allowing it to sing the world into being.

So we’ve got advances in genetic engineering combined with electronic ones to overcome a biological disability, continuing man’s progress and his ongoing co-evolution with the tools he creates. Except this marks a Rubicon Moment, the crossing of a threshold into a merger between man and his technology, and the result is something far more: a step toward the posthuman.

Get used to this. Better living through upgrades.

For more details see this article in the Guardian by the consultant to this project, Dr Patrick Degenaar, optogenetics researcher at Newcastle University and leader of the OptoNeuro project.
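
To make the wearable half of that system concrete: its job is to encode camera frames into patterns of light pulses that the virally sensitised retina can respond to. Here’s a Python sketch of one plausible encoding, mapping pixel brightness to pulse rate; the mapping is an illustrative assumption on my part, not the OptoNeuro project’s actual retinal model:

# Sketch of the wearable encoder's job: turn camera pixels into light
# pulse rates for an optogenetically sensitised retina.
import numpy as np

MAX_PULSE_HZ = 60.0  # assumed ceiling for stimulation pulse rate

def encode_frame(gray: np.ndarray) -> np.ndarray:
    """Map an 8-bit grayscale frame to per-pixel pulse frequencies."""
    # Retinal responses to luminance are roughly logarithmic, so
    # compress bright regions rather than scaling linearly.
    norm = np.log1p(gray.astype(np.float32)) / np.log1p(255.0)
    return norm * MAX_PULSE_HZ

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
pulses = encode_frame(frame)
print(pulses.min(), pulses.max())  # 0 .. ~60 Hz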


Suwappu: part-physical, part-digital toys

Posted by on April 5th, 2011

I’ll just let BERGLondon do most of the talking for this one:

Dentsu London are developing an original product called Suwappu. Suwappu are woodland creatures that swap pants, toys that come to life in augmented reality. BERG have been brought in as consultant inventors, and we’ve made this film. Have a look!

[Embedded YouTube video]

This is where it starts to get interesting:

We wanted to picture a toy world that was part-physical, part-digital and that acts as a platform for media. We imagine toys developing as connected products, pulling from and leaking into familiar media like Twitter and YouTube. Toys already have a long and tenuous relationship with media, as film or television tie-ins and merchandise. It hasn’t been an easy relationship. AR seems like a very apt way of giving cheap, small, non-interactive plastic objects an identity and set of behaviours in new and existing media worlds.

Then it gets really interesting, quoting directly from BERG’s Jack Schulze:

In the film, one of the characters makes a reference to dreams. I love the idea that the toys, in their physical form, dream their animated televised adventures in video. When they awake into their plastic prisons, they half remember the super-rendered full-motion freedoms and adventures from the world of TV.

For me, this marks an entry into the territory explored in the anime Dennō Coil. But it’s a little Tachikoma that I’d like to see running around my desk, giving me messages through AR magics.


The “Predator”, or how to build a camera that learns

Posted by on April 4th, 2011

Via a whole bunch of people, who are justifiably equal parts excited and terrified about what this might lead to:

[Embedded YouTube video]

My first question: how does it handle CV Dazzle? Find out yourself! More details, including the code itself, are available on developer Zdenek Kalal’s website.
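
For the impatient, a TLD implementation ships in OpenCV’s contrib trackers these days, so you can poke at the algorithm from Python. A minimal harness, assuming opencv-contrib-python with the legacy tracking module and an attached webcam:

# Track-Learn-Detect on a webcam feed: draw a box around a target,
# then watch the tracker follow it, learn its appearance, and
# re-detect it after occlusions.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame)  # drag a box, press Enter

tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)  # track + learn + re-detect
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("predator", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break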


Kinect + 3D goggles = “Holodeck”?

Posted by on March 29th, 2011

Today on yetAnotherAwesomeKinectDemo:

[Embedded YouTube video]

It’s easy to envisage future interactions with architects happening via something like this.

From The Future Digital Life, via Chris Arkenberg.


The Invisible Wi-Fi Landscape

Posted by on March 1st, 2011

Immaterials: Light painting WiFi from Timo on Vimeo.

This project explores the invisible terrain of WiFi networks in urban spaces by light painting signal strength in long-exposure photographs.

A four-metre long measuring rod with 80 points of light reveals cross-sections through WiFi networks using a photographic technique called light-painting.
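
The rod’s logic fits in a few lines: read the signal strength, light a proportional slice of the 80 LEDs, walk, repeat, and let the long exposure do the rest. A Python sketch with the RSSI reading supplied as input (on a real build it would come from the WiFi chipset) and assumed dBm calibration bounds:

# Map WiFi signal strength to a bar height on an 80-LED measuring rod.
NUM_LEDS = 80
RSSI_MIN, RSSI_MAX = -90, -30  # dBm range; an assumed calibration

def leds_for_rssi(rssi_dbm: float) -> int:
    """Convert a signal-strength reading to how many LEDs to light."""
    frac = (rssi_dbm - RSSI_MIN) / (RSSI_MAX - RSSI_MIN)
    return max(0, min(NUM_LEDS, round(frac * NUM_LEDS)))

for rssi in (-85, -60, -42):
    bar = leds_for_rssi(rssi)
    print(f"{rssi} dBm -> {'#' * (bar // 4)} ({bar}/80 LEDs)")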


We see things differently

Posted by on January 11th, 2011

We’re 11 days into 2011 and I’m watching the north of my country drown on live television, as the coverage switches between exhausted officials giving press conferences and reports straight from social media. In fact, they’re just sending viewers straight to #qldfloods. But, look.. SHINY!

Let’s face it, we’re going to need ever better methods to record disaster pr0n and navigate our way through it. OK, we don’t need them, but some kind of distraction is needed now and again. What have we got so far this year?

Augmented reality HUDs? Check. This was just released for skiers:

Introducing Transcend, Recon Instruments’ collaboration with Colorado’s Zeal Optics. Transcend is the world’s first pair of GPS-enabled goggles with a head-mounted display system.

Minimum interaction is required during use; sleek graphics and smart optics are completely unobtrusive for front and peripheral vision, making it the ultimate solution for use in fast-paced environments.

Transcend provides real-time feedback including speed, latitude/longitude, altitude, vertical distance travelled, total distance travelled, chrono/stopwatch mode, a run-counter, temperature and time. It is also the only pair of goggles in the world that boasts GPS capabilities, USB charging and data transfer, and free post-processing software all with a user-friendly, addictive interface.

Just like the dashboard of a sports car or the instruments of a fighter jet, Transcend’s display provides performance-enhancing data, but only when you choose to view it. Safe, smart, fun…all wrapped up in the hottest goggle frame of 2010/11.

Now, of course you ask, but how will I best show my friends a panoramic, interactive recording of that sick black run (or train for the next one)? Sony has just the thing:

Besides looking über futuristic, Sony’s “virtual 3D cinematic experience” head mounted display (aka ‘Headman’) sports some fairly impressive specs. The tiny OLED screens inside are at HD resolution (1280 x 720), and the headphones integrated into the sides of the goggles output high-quality simulated 5.1 channel surround sound.

OK, that’s just a prototype. But something like it will be coming soon, so leave some space for it in your underground bunker.

But m1k3y, you say.. “those are great and all, but WHERE’S MY CLATTER?!” Well, I saved the best for last:

In 2008, as a proof of concept, Babak Parviz at the University of Washington in Seattle created a prototype contact lens containing a single red LED. Using the same technology, he has now created a lens capable of monitoring glucose levels in people with diabetes.

It works because glucose levels in tear fluid correspond directly to those found in the blood, making continuous measurement possible without the need for thumb pricks, he says. Parviz’s design calls for the contact lens to send this information wirelessly to a portable device worn by diabetics, allowing them to manage their diet and medication more accurately.

Lenses that also contain arrays of tiny LEDs may allow this or other types of digital information to be displayed directly to the wearer through the lens. This kind of augmented reality has already taken off in cellphones, with countless software apps superimposing digital data onto images of our surroundings, effectively blending the physical and online worlds.

Making it work on a contact lens won’t be easy, but the technology has begun to take shape. Last September, Sensimed, a Swiss spin-off from the Swiss Federal Institute of Technology in Lausanne, launched the very first commercial smart contact lens, designed to improve treatment for people with glaucoma.

The disease puts pressure on the optic nerve through fluid build-up, and can irreversibly damage vision if not properly treated. Highly sensitive platinum strain gauges embedded in Sensimed’s Triggerfish lens record changes in the curvature of the cornea, which correspond directly to the pressure inside the eye, says CEO Jean-Marc Wismer. The lens transmits this information wirelessly at regular intervals to a portable recording device worn by the patient, he says.

Like an RFID tag or London’s Oyster travel cards, the lens gets its power from a nearby loop antenna – in this case taped to the patient’s face. The powered antenna transmits electricity to the contact lens, which is used to interrogate the sensors, process the signals and transmit the readings back.

Each disposable contact lens is designed to be worn just once for 24 hours, and the patient repeats the process once or twice a year. This allows researchers to look for peaks in eye pressure which vary from patient to patient during the course of a day. This information is then used to schedule the timings of medication.

Parviz, however, has taken a different approach. His glucose sensor uses sets of electrodes to run tiny currents through the tear fluid and measures them to detect very small quantities of dissolved sugar. These electrodes, along with a computer chip that contains a radio frequency antenna, are fabricated on a flat substrate made of polyethylene terephthalate (PET), a transparent polymer commonly found in plastic bottles. This is then moulded into the shape of a contact lens to fit the eye.
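
The readout maths behind that electrode trick is simple in outline: an amperometric sensor’s current rises roughly linearly with glucose concentration, so the chip’s job reduces to inverting a calibration curve. A sketch with made-up constants; these are illustrative assumptions, not Parviz’s published figures:

# Turn a tear-fluid electrode current into a blood glucose estimate.
# All constants below are illustrative assumptions.
SLOPE_NA_PER_MM = 12.0  # nanoamps per mmol/L of tear glucose (assumed)
OFFSET_NA = 1.5         # baseline electrode current, nanoamps (assumed)

def tear_glucose_mmol(current_na: float) -> float:
    """Invert the assumed linear current -> concentration calibration."""
    return max(0.0, (current_na - OFFSET_NA) / SLOPE_NA_PER_MM)

def blood_glucose_mmol(tear_mmol: float, ratio: float = 10.0) -> float:
    """Tear glucose tracks blood glucose at a much lower level; the
    scaling ratio here is assumed for illustration."""
    return tear_mmol * ratio

reading = tear_glucose_mmol(7.3)  # from a 7.3 nA measurement
print(round(blood_glucose_mmol(reading), 1), "mmol/L estimated")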

Parviz plans to use a higher-powered antenna to get a better range, allowing patients to carry a single external device in their breast pocket or on their belt. Preliminary tests show that his sensors can accurately detect even very low glucose levels. Parviz is due to present his results later this month at the IEEE MEMS 2011 conference in Cancún, Mexico.

“There’s still a lot more testing we have to do,” says Parviz. In the meantime, his lab has made progress with contact lens displays. They have developed both red and blue miniature LEDs – leaving only green for full colour – and have separately built lenses with 3D optics that resemble the head-up visors used to view movies in 3D.

Parviz has yet to combine both the optics and the LEDs in the same contact lens, but he is confident that even images so close to the eye can be brought into focus. “You won’t necessarily have to shift your focus to see the image generated by the contact lens,” says Parviz. It will just appear in front of you, he says. The LEDs will be arranged in a grid pattern, and should not interfere with normal vision when the display is off.

For Sensimed, the circuitry is entirely around the edge of the lens. However, both have yet to address the fact that wearing these lenses might make you look like the robots in the Terminator movies. False irises could eventually solve this problem, says Parviz. “But that’s not something at the top of our priority list,” he says.

So close… And Terminator eyes? That’s a feature, not a bug. YES PLEASE!


Building a Better Pop Star II

Posted by on December 28th, 2010

A little while ago, I was describing virtual pop star, Hatsune Miku, as a “prosthetic identity”.

Now, thanks to Kinect hackers in Japan, you could slide even further into her virtual identity if you wanted.


Latrama’s “Love & Projects” ships with augmented reality DJ app

Posted by on December 26th, 2010
http://www.vimeo.com/17056388

via Chris Arkenberg


Word Lens

Posted by on December 16th, 2010

How about an onboard, dynamic AR language translation app for your iPhone?

[Embedded YouTube video]

(Insert obligatory: it just needs Spex to be the perfect cyberpunk future-present app.)
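
The loop Word Lens runs (find the text, translate it, paint the translation back where the original words sat) is easy to sketch, even if doing it live and on-device as they do is the hard part. A rough Python version, using pytesseract for OCR and a toy two-word dictionary standing in for their onboard translation engine:

# Word Lens in miniature: OCR the frame, swap recognised words for
# translations, and redraw them over the originals.
import cv2
import pytesseract
from pytesseract import Output

TOY_DICT = {"salida": "exit", "peligro": "danger"}  # stand-in lexicon

frame = cv2.imread("sign.jpg")  # placeholder input image
data = pytesseract.image_to_data(frame, lang="spa", output_type=Output.DICT)

for i, word in enumerate(data["text"]):
    translated = TOY_DICT.get(word.strip().lower())
    if not translated:
        continue
    x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
    # Blank out the original word, then draw its translation in place.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), -1)
    cv2.putText(frame, translated, (x, y + h), cv2.FONT_HERSHEY_SIMPLEX,
                h / 30.0, (0, 0, 0), 2)

cv2.imwrite("sign_translated.jpg", frame)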


Building a Better Pop Star

Posted by on November 18th, 2010


Infinite Present. Zero History.

Posted by on September 30th, 2010

Comrade-in-arms, grinder, and occasional Science Fictional overlord M1k3y recently penned a very insightful, spoiler-laden and topical overview of William Gibson’s new novel ZERO HISTORY over at the Tech Gonzo Diary.

ATEMPORALITY! There, I said it again. It’s been an obsession of mine recently, and much of my excitement on the release of this book stemmed from videos of Bruce Sterling’s lectures on the subject, which he kept speaking of as a back’n’forth between him and Gibson as they fleshed out the idea: that Zero History would be the bible of Atemporality. That this would be the case was furthered by Twitter exchanges between the two, and duly hashtagged tweets by them on the subject.

So is Zero History a manifesto of Atemporality? A guidebook to a new understanding of progress, a new way of viewing the present, the defining of a new historical epoch?

[Via: The Tech Gonzo Diary]