You are currently browsing the archives for the augmented reality category.
From the BBC:
Beaming, of a kind, is no longer pure science fiction. It is the name of an international project funded by the European Commission to investigate how a person can visit a remote location via the internet and feel fully immersed in the new environment.
The visitor may be embodied as an avatar or a robot, interacting with real people.
Motion capture technology – such as Microsoft's Kinect sensor – robots, 3D glasses and special haptic suits with body sensors can all be used to create a rich, realistic experience that reproduces that holy grail – "presence".
Project leader Mel Slater, professor of virtual environments at University College London (UCL), calls beaming augmented reality, rather than virtual reality. In beaming – unlike the virtual worlds of computer games and the Second Life website – the robot or avatar interacts with real people in a real place.
He and his team have beamed people from Barcelona to London, embodying them either as a robot, or as an avatar in a specially equipped “cave”. One avatar was able to rehearse a play with a real actor, the stage being represented by the cave’s walls – screens projecting 3D images.
…this also raises the possibility of new types of crime.
Could beaming increase the risk of sexual harassment or even virtual rape? That is one of many ethical questions that the beaming project is considering, along with the technical challenges.
Law researcher Ray Purdy says you might get a new type of cyber crime, where lovers have consensual sexual contact via beaming and a hacker hijacks the man’s avatar to have virtual sex with the woman.
It raises all sorts of problems that courts and lawmakers may need to resolve. How could a court prove that such an act amounted to molestation or rape? The human who hacks into an avatar could easily live in another country, under different laws.
The electronic evidence might be insufficient for prosecution. Crimes taking place remotely might sometimes leave digital trails, but they do not leave forensic evidence, which is often vital to secure rape convictions, Purdy says.
“Clearly, laws might have to adapt to the fact that certain crimes can be committed at a distance, via the use of beamed technologies,” he says.
Sexual penetration by a robot part is another possibility. Current law may not go far enough to cover that, Purdy says. And what if a robot injured you with an over-zealous handshake? Or if an avatar made a sexually explicit gesture amounting to sexual harassment?
He argues that using a robot maliciously would be similar in law to using a gun – responsibility lies with the controller. “While it is the gun that fires the bullet, it is the person in control of the gun that commits the act – not the gun itself.”
The Kinect technology, capturing an individual’s gestures, is potentially a powerful tool in the hands of an identity thief, argues Prof Jeremy Bailenson, founder of the Virtual Human Interaction Lab at Stanford University, California.
“A hacker can steal my very essence, really capture all of my nuances, then build a competing avatar, a copy of me,” he told the BBC. “The courts haven’t even begun to think about that.”
Prof Patrick Haggard, a neuroscientist at UCL who has been examining ethical issues thrown up by beaming, says there is a risk that such a virtual culture could reinforce body image prejudices.
But equally an avatar could form part of a therapy, he says, for example to show an obese person how he or she might look after losing weight.
As beaming develops, one of the biggest questions for philosophers may be defining where a person actually is – just as it is key for lawyers to determine in which jurisdiction an avatar’s crime is committed.
Even now people are often physically in one place but immersed in a virtual world online.
Avatars challenge the human bond between identity and a physical body.
“My body may be here in London but my life may be in a virtual apartment in New York,” says Haggard. “So where am I really?”
Click through for more, including a video demonstration of the tech.
Recently, retail clothing chain H&M has caught a great deal of flak for using computer-generated bodies in their online catalog. And while there is something to be said for looking critically at the introduction of computer-generated "perfection" into an industry already psychotically obsessed with unattainable standards of physical beauty, Coilhouse's Nadya Lev has some relevant re-contextualization to share:
Also, this foray into the uncanny valley brings us one step closer to the age of the idoru. With teenage pop idol Aimi Eguchi, whose face is a composite of six different singers, and vocaloids (singing synthesizers) such as a certain pigtailed holographic superstar, we're almost there — in The Future. And even though H&M's online catalogue conforms to the same beauty standard as any other big fashion retailer, this technology actually has potential to subvert the paradigm altogether.
See the rest over at Coilhouse.
Here’s your menu for today’s FuturePresent news round-up:
As the new video opens, special eyeglasses translate audio into English in real-time for a business traveler in Johannesburg. A thin screen on a car window highlights a passing building to show where her meeting will be the next day, based on information from her calendar. Office workers gesture effortlessly to control and reroute text and charts as the screens around them morph and pulse with new information.
And on and on from there, making our modern-day digital breakthroughs seem like mere baby steps on the road to a far more spectacular future.
I want my fucking spex now as much as the next cyberpunk, BUT… actual world problems solved here? ZERO. When the current estimate is that 80 million new jobs need to be created to replace the ones lost during this recent period of disaster capitalism, building a shinier operating system hardly seems likely to help.
OmniTouch is a depth-sensing projection system worn on the shoulder.
With the system, hands, legs, arms, walls, books and tabletops become interactive touch-screen surfaces, without any need for calibration.
If only they didn’t look so terrible. Get ya mod on there future-dwellers!
Some of these universes would collapse instants after forming; in others, the forces between particles would be so weak they could not give rise to atoms or molecules. However, if conditions were suitable, matter would coalesce into galaxies and planets, and if the right elements were present in those worlds, intelligent life could evolve.
Some physicists have theorized that only universes in which the laws of physics are “just so” could support life, and that if things were even a little bit different from our world, intelligent life would be impossible. In that case, our physical laws might be explained “anthropically,” meaning that they are as they are because if they were otherwise, no one would be around to notice them.
MIT physics professor Robert Jaffe and his collaborators felt that this proposed anthropic explanation should be subjected to more careful scrutiny, so they decided to explore whether universes with different physical laws could support life. Unlike most other studies, in which varying just one constant usually produces an inhospitable universe, they varied more than one constant at a time.
Whether life exists elsewhere in our universe is a longstanding mystery. But for some scientists, there’s another interesting question: could there be life in a universe significantly different from our own?
In work recently featured in a cover story in Scientific American, Jaffe, former MIT postdoc Alejandro Jenkins, and recent MIT graduate Itamar Kimchi showed that universes quite different from ours still have elements similar to carbon, hydrogen, and oxygen, and could therefore evolve life forms quite similar to us. Even when the masses of the elementary particles are dramatically altered, life may find a way.
“You could change them by significant amounts without eliminating the possibility of organic chemistry in the universe,” says Jenkins.
The scientists constructed a type of logic gate called an "AND gate" from the bacterium Escherichia coli (E. coli), which is normally found in the lower intestine. The team altered the E. coli with modified DNA, reprogramming it to perform the same on-off switching as its electronic equivalent when stimulated by chemicals.
The researchers were also able to demonstrate that the biological logic gates could be connected together to form more complex components, much as electronic components are. In another experiment, the researchers created a "NOT gate" and combined it with the AND gate to produce the more complex "NAND gate".
The next stage of the research will see the team trying to develop more complex circuitry comprising multiple logic gates. One of the challenges faced by the team is finding a way to link multiple biological logic gates together, as electronic logic gates are linked, so that complex processing can be carried out.
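The gate composition the team demonstrated maps directly onto ordinary Boolean logic. As a minimal Python sketch (the function names are illustrative, not from the study), a NAND gate falls out of simply chaining the AND and NOT gates:

```python
def and_gate(a: bool, b: bool) -> bool:
    # High only when both chemical inputs are present,
    # mirroring the engineered E. coli AND gate.
    return a and b

def not_gate(a: bool) -> bool:
    # Inverts its input, like the engineered NOT gate.
    return not a

def nand_gate(a: bool, b: bool) -> bool:
    # Composing the two simpler gates yields NAND,
    # just as the researchers chained their biological gates.
    return not_gate(and_gate(a, b))

# Truth table for NAND: low only when both inputs are high.
for a in (False, True):
    for b in (False, True):
        print(a, b, nand_gate(a, b))
```

The point of the biological version is the same as the electronic one: once AND and NOT can be wired together, any more complex logic can in principle be built from them.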
Needless to say, the ability to photograph barcode-less items in the real world and get instant information on them could be huge, a sort of away-from-a-home-computer Google. What remains to be seen is if Sony can bring it to the masses in a palatable format and, of course, what Google will counteroffer if SmartAR takes off.
Video and words from core77.com.
Song of the Machine is my favourite kind of design fiction, combining multiple forms of extrapolation from the present into the future.
Unlike the implants and electrodes used to achieve bionic vision, this science modifies the human body genetically from within. First, a virus is used to infect the degenerate eye with a light-sensitive protein, altering the biological capabilities of the subject. Then, the new biological capabilities are augmented with wearable (opto)electronics, which, by mimicking the eye’s neural song, establish a direct optical link to the brain. It’s as if the virus gives the body ears to hear the song of the machine, allowing it to sing the world into being.
So we’ve got advances in genetic engineering combined with electronic ones to overcome a biological disability through continuing man’s progress, it’s ongoing co-evolution with the tools he creates. Except this marks a Rubicon Moment, the crossing of a threshold into a merger between man and his technology and the result is something far more, a step toward the posthuman.
Get used to this. Better living through upgrades.
I’ll just let BERGLondon do most of the talking for this one:
Dentsu London are developing an original product called Suwappu. Suwappu are woodland creatures that swap pants, toys that come to life in augmented reality. BERG have been brought in as consultant inventors, and we’ve made this film. Have a look!
This is where it starts to get interesting:
We wanted to picture a toy world that was part-physical, part-digital and that acts as a platform for media. We imagine toys developing as connected products, pulling from and leaking into familiar media like Twitter and YouTube. Toys already have a long and tenuous relationship with media, as film or television tie-ins and merchandise. It hasn't been an easy relationship. AR seems like a very apt way of giving cheap, small, non-interactive plastic objects an identity and set of behaviours in new and existing media worlds.
Then it gets really interesting, quoting directly from BERG’s Jack Schulze:
In the film, one of the characters makes a reference to dreams. I love the idea that the toys, in their physical form, dream their animated televised adventures in video. When they awake into their plastic prisons, they half-remember the super-rendered, full-motion freedoms and adventures from the world of TV.
This project explores the invisible terrain of WiFi networks in urban spaces by light painting signal strength in long-exposure photographs.
A four-metre long measuring rod with 80 points of light reveals cross-sections through WiFi networks using a photographic technique called light-painting.
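The mapping at the heart of the technique is simple: each signal-strength reading decides how many of the rod's lights to switch on. A hypothetical Python sketch (the dBm calibration range is assumed, not taken from the project's write-up):

```python
def leds_to_light(rssi_dbm: float, num_leds: int = 80,
                  floor: float = -90.0, ceiling: float = -30.0) -> int:
    """Map a WiFi signal-strength reading (in dBm) onto a bar of LEDs.

    The floor/ceiling values are illustrative; the rod's actual
    calibration is not described in the project documentation.
    """
    # Clamp the reading into the expected range.
    rssi_dbm = max(floor, min(ceiling, rssi_dbm))
    # Linear interpolation: the floor lights 0 LEDs, the ceiling all 80.
    fraction = (rssi_dbm - floor) / (ceiling - floor)
    return round(fraction * num_leds)

print(leds_to_light(-30.0))  # strongest signal: all 80 LEDs
print(leds_to_light(-90.0))  # weakest signal: none
```

Walk the rod through a space while a camera holds a long exposure, and those per-position bar heights accumulate into the glowing cross-sections in the photographs.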
We're 11 days into 2011 and I'm watching the north of my country drown on live television, as broadcasters switch between exhausted officials giving press conferences and reports pulled straight from social media. In fact, they're just sending viewers straight to #qldfloods. But, look… SHINY!
Let’s face it, we’re going to need ever better methods to record disaster pr0n and navigate our way through it. OK, we don’t need them, but some kind of distraction is needed now and again. What have we got so far this year?
Augmented reality HUDS? Check. This was just released for skiers:
Introducing Transcend, Recon Instruments’ collaboration with Colorado’s Zeal Optics. Transcend is the world’s first GPS-enabled goggles with a head-mounted display system.
Minimal interaction is required during use; the sleek graphics and smart optics are completely unobtrusive to front and peripheral vision, making it the ultimate solution for use in fast-paced environments.
Transcend provides real-time feedback including speed, latitude/longitude, altitude, vertical distance travelled, total distance travelled, chrono/stopwatch mode, a run-counter, temperature and time. It is also the only pair of goggles in the world that boasts GPS capabilities, USB charging and data transfer, and free post-processing software all with a user-friendly, addictive interface.
Just like the dashboard of a sports car or the instruments of a fighter jet, Transcend’s display provides performance-enhancing data, but only when you choose to view it. Safe, smart, fun…all wrapped up in the hottest goggle frame of 2010/11.
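Most of those readouts can be derived from a stream of GPS fixes. As a simplified, hypothetical sketch (real firmware would filter GPS noise before summing anything), vertical distance travelled is just the accumulated altitude drop between successive fixes:

```python
def vertical_descent(altitudes: list[float]) -> float:
    """Total metres descended over a run, from successive GPS altitude fixes.

    A deliberately naive sketch: only downhill deltas are summed, so a
    short uphill traverse mid-run adds nothing.
    """
    descent = 0.0
    for prev, curr in zip(altitudes, altitudes[1:]):
        if curr < prev:
            descent += prev - curr
    return descent

# A run dropping from 2400 m to 2000 m with a small uphill traverse:
# 100 m + 160 m + 150 m of descent = 410 m total.
print(vertical_descent([2400.0, 2300.0, 2310.0, 2150.0, 2000.0]))
```

Speed and total distance fall out of the same fix stream in much the same way, by differencing positions rather than altitudes.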
Now, of course you ask, but how will I best show my friends a panoramic, interactive recording of that sick black run (or train for the next one)? Sony has just the thing:
Besides looking über futuristic, Sony's "virtual 3D cinematic experience" head-mounted display (aka 'Headman') sports some fairly impressive specs. The tiny OLED screens inside are HD resolution (1280 x 720), and the headphones integrated into the sides of the goggles output high-quality simulated 5.1-channel surround sound.
OK, that’s just a prototype. But something like it will be coming soon, so leave some space for it in your underground bunker.
In 2008, as a proof of concept, Babak Parviz at the University of Washington in Seattle created a prototype contact lens containing a single red LED. Using the same technology, he has now created a lens capable of monitoring glucose levels in people with diabetes.
It works because glucose levels in tear fluid correspond directly to those found in the blood, making continuous measurement possible without the need for thumb pricks, he says. Parviz’s design calls for the contact lens to send this information wirelessly to a portable device worn by diabetics, allowing them to manage their diet and medication more accurately.
Lenses that also contain arrays of tiny LEDs may allow this or other types of digital information to be displayed directly to the wearer through the lens. This kind of augmented reality has already taken off in cellphones, with countless software apps superimposing digital data onto images of our surroundings, effectively blending the physical and online worlds.
Making it work on a contact lens won’t be easy, but the technology has begun to take shape. Last September, Sensimed, a Swiss spin-off from the Swiss Federal Institute of Technology in Lausanne, launched the very first commercial smart contact lens, designed to improve treatment for people with glaucoma.
The disease puts pressure on the optic nerve through fluid build-up, and can irreversibly damage vision if not properly treated. Highly sensitive platinum strain gauges embedded in Sensimed’s Triggerfish lens record changes in the curvature of the cornea, which correspond directly to the pressure inside the eye, says CEO Jean-Marc Wismer. The lens transmits this information wirelessly at regular intervals to a portable recording device worn by the patient, he says.
Like an RFID tag or London’s Oyster travel cards, the lens gets its power from a nearby loop antenna – in this case taped to the patient’s face. The powered antenna transmits electricity to the contact lens, which is used to interrogate the sensors, process the signals and transmit the readings back.
Each disposable contact lens is designed to be worn just once for 24 hours, and the patient repeats the process once or twice a year. This allows researchers to look for peaks in eye pressure which vary from patient to patient during the course of a day. This information is then used to schedule the timings of medication.
Parviz, however, has taken a different approach. His glucose sensor uses sets of electrodes to run tiny currents through the tear fluid and measures them to detect very small quantities of dissolved sugar. These electrodes, along with a computer chip that contains a radio frequency antenna, are fabricated on a flat substrate made of polyethylene terephthalate (PET), a transparent polymer commonly found in plastic bottles. This is then moulded into the shape of a contact lens to fit the eye.
Parviz plans to use a higher-powered antenna to get a better range, allowing patients to carry a single external device in their breast pocket or on their belt. Preliminary tests show that his sensors can accurately detect even very low glucose levels. Parviz is due to present his results later this month at the IEEE MEMS 2011 conference in Cancún, Mexico.
“There’s still a lot more testing we have to do,” says Parviz. In the meantime, his lab has made progress with contact lens displays. They have developed both red and blue miniature LEDs – leaving only green for full colour – and have separately built lenses with 3D optics that resemble the head-up visors used to view movies in 3D.
Parviz has yet to combine both the optics and the LEDs in the same contact lens, but he is confident that even images so close to the eye can be brought into focus. “You won’t necessarily have to shift your focus to see the image generated by the contact lens,” says Parviz. It will just appear in front of you, he says. The LEDs will be arranged in a grid pattern, and should not interfere with normal vision when the display is off.
For Sensimed, the circuitry sits entirely around the edge of the lens. However, both teams have yet to address the fact that wearing these lenses might make you look like the robots in the Terminator movies. False irises could eventually solve this problem, says Parviz. "But that's not something at the top of our priority list," he says.
So close… And Terminator eyes? That’s a feature, not a bug. YES PLEASE!
(Insert obligatory: just needs Spex to be perfect cyberpunk future present app).
Comrade-in-arms, grinder, and occasional Science Fictional overlord M1k3y recently penned a very insightful, spoiler-laden and topical overview of William Gibson’s new novel ZERO HISTORY over at the Tech Gonzo Diary.
ATEMPORALITY! There, I said it again. It's been an obsession of mine recently, and much of my excitement about the release of this book stemmed from videos of Bruce Sterling's lectures on the subject, which he kept describing as a back'n'forth between him and Gibson as they fleshed out the idea: that Zero History would be the bible of Atemporality. That impression was furthered by Twitter exchanges between the two, and by their hashtagged tweets on the subject.
So is Zero History a manifesto of Atemporality, a guidebook to a new understanding of progress, a new way of viewing the present, the defining of a new historical epoch?
[Via: The Tech Gonzo Diary]