DARPA’s SCENICC to make future soldiers omniscient

Posted by on December 23rd, 2010 in Head-Up Displays, military, surveillance

From WIRED’s Danger Room:

In a solicitation released today, Darpa, the Pentagon’s far-out research branch, unveiled the Soldier Centric Imaging via Computational Cameras effort, or SCENICC. Imagine a suite of cameras that digitally capture a kilometer-wide, 360-degree sphere, representing the image in 3-D (!) onto a wearable eyepiece.

You’d be able to literally see all around you, including behind yourself, and zooming in at will, creating a “stereoscopic/binocular system, simultaneously providing 10x zoom to both eyes.” And you would do this all hands-free, apparently by barking out or pre-programming a command (the solicitation leaves it up to a designer’s imagination) to adjust focus.

Then comes the Terminator-vision. Darpa wants the eyepiece to include “high-resolution computer-enhanced imagery as well as task-specific non-image data products such as mission data overlays, threat warnings/alerts, targeting assistance, etc.” Target identified: Sarah Connor… The “Full Sphere Awareness” tool will provide soldiers with “muzzle flash detection,” “projectile tracking” and “object recognition/labeling,” basically pointing key information out to them.

And an “integrated weapon sighting” function locks your gun on your target when acquired. That’s far beyond an app mounted on your rifle that keeps track of where your friendlies and enemies are.

The imaging wouldn’t just be limited to what any individual soldier sees. SCENICC envisions a “networked optical sensing capability” that fuses images taken from nodes worn by “collections of soldiers and/or unmanned vehicles.” The Warrior-Alpha drone overhead? Its full-motion video and still images would be sent into your eyepiece.

Keep reading…


3 Responses to “DARPA’s SCENICC to make future soldiers omniscient”

  1. Spider and Jeanne Robinson had a geek character in one of the Stardance sequels field testing his own beta model of that idea – in zero-G.

    Question is, would the 360-degree input drive you to Lovecraftian perspectives? (Pun intended, as ever.)

  2. I'm curious about how they'd qualify/train infantrymen to handle this amount of data. It sounds like what Apache pilots have to deal with, and I've heard that to even begin to qualify for Apache training you have to be able to read two books simultaneously while doing a bunch of other shit, because that's the amount of data you're expected to process while in the seat.

  3. I imagine this sort of tech will be limited to special forces (from both a training and a cost perspective). Super-soldiers of the future will be limited in number (à la Dune). I think a generation of soldiers from now, raised on similar technology in video games and media, will be able to adapt and integrate this tech into their battlefield operations more seamlessly.