Ben Goertzel on The Future of AGI

Posted on January 6th, 2012

Over on Formspring, as part of the Grinder’s Guide to the Next Five Minutes (this will be collated and continued again soon), I mentioned Ben Goertzel and the open-source Artificial General Intelligence project OpenCog.

Here’s the man himself explaining why:

[Embedded YouTube video]

Ericsson’s vision of the future-present smart home

Posted on April 8th, 2011
[Embedded YouTube video]

Of course, in the Philip K. Dick version of this scenario the devices would probably conspire against him.

via @bruces


thoughts on the Transcendent Man

Posted on April 7th, 2011

The Ray Kurzweil documentary, Transcendent Man, has been downloadable for a while now and is having selected screenings across the world. Prompted by Paul Raven’s review on Futurismic (and his appearance on the panel discussing the film in London; get down there, locals!), I decided to give it a watch.

Firstly, to be clear, I’m not a Believer in the Singularity, and I feel that it’s these Believers who contribute to the frequent descriptions of it as a techno-religious cult. Indeed, the almost immediate impression you got from this Kurzweil love-fest was: There is only one God (Technology) and Ray Kurzweil is its Prophet. There’s A LOT of time dedicated to what seems to be the creation/promotion of a Cult of Personality around Ray (but this may well be my own sensitivities speaking, since I’ve dedicated a lot of time lately to studying Mao Zedong) and far less given to those equally brilliant people, doing amazing work, with dissenting opinions (the clips with Kevin Warwick, always a favourite here, are particularly good). But far be it from me to mount an aggressive campaign picking apart his arguments. Starting a War is the furthest thing from my mind. Instead, let me gently point out a few of the flaws as I see them:

  1. Ray is focusing on Technological change to the complete exclusion of all other elements, be they Social, Political or Economic. I agree we’re in the midst of rapid change, but you must account for the intersection & feedback between all aspects of Human Society.
  2. Ray seems to have pinned his hopes on the eventual arrival of Friendly AIs (primarily to resurrect his dead father). While I can sympathize with this desire, I think it’s foolish to imagine that, should near-god-like AIs suddenly burst into existence, they will even remotely resemble Human Beings and have human concerns. This applies equally to the Doomsday scenarios, that The Machines Will Wipe Us Out! Why would they/it bother?? If anything, like in Vernor Vinge’s book Rainbows End, they might see us as a curiosity to be toyed with, but forget the Terminator-nightmare. That is just Humanity’s ego overstating its own anthropocentric importance.
  3. Finally, his whole theory is premised on the idea that all human biological evolution has ceased and that it is within the sphere of technology and culture that evolution now takes place. This idea has been popular for a while, but like all scientific ideas, it represents just the current understanding (with its implicit statement of the Myth of Nature, the idea that Humans have left the Animal Kingdom). Whereas current indications (see Are We Still Evolving?) hint that not only did we never stop evolving (because we are still Animals), but that, as a consequence of rapid change (and therefore rapidly changing selection criteria), humanity might Naturally be about to make a Great Leap Forward.

Nonetheless, for anyone interested in Technological Change, this documentary is well made and worth seeing, if only to focus your own viewpoints and sharpen your arguments.

Tangentially, I recently read Jonathan Hickman’s run on the Fantastic Four comic, and Kurzweil seems oddly similar to the depiction of Reed Richards in these (especially in issue #579). They’re both unquestionably brilliant men trying to fix the world and, in short: Solve Everything. However, intelligence and wisdom are two different things. (Weirdly enough, both had largely absent fathers too.) But enough conflating fact and fiction.

Fundamentally, my bugbear with Singularitarianism is this: it discourages engaging with the Present, thinking that we can just lie back and let Technology Fix Everything. It seems focused on watching for the arrival of the elements necessary to fulfill its predictions, rather than closely observing the present, extrapolating from emerging trends and continuously updating its future-world-view. What worries me is that people viewing this documentary will think that everything will be just fine and that they can safely adopt a passive role.

As I see it, the challenge we’re facing right now is making the Transition as gentle as possible. As I’ve said before, we’re already mid-Singularity, in that a one-way shift is happening. The world that lies on the other side will very much be the product of the choices we make right now, and those choices require us all to be engaged in making and shaping them; but I am absolutely down for Total Life Forever.


Watch a NY Times journalist try to interview a robot

Posted on July 17th, 2010

Time for a chuckle, as we watch this stumbling interview between a NY Times journalist and the robot “Bina48”, a creation of Terasem:

[Embedded YouTube video]

As is made clear, Bina48 isn’t actually a robot, but rather an imperfect digital emulation of a person, based on an incomplete ‘upload’. That would be a far more interesting thing to explore than “LOL, robot speak funny”. Better luck next time, NY Times!


IBM simulate feline cortex

Posted on November 18th, 2009

[Image ganked from those Happy Mutants at BoingBoing]

From Yahoo News:

this week researchers from IBM Corp. are reporting that they’ve simulated a cat’s cerebral cortex, the thinking part of the brain, using a massive supercomputer. The computer has 147,456 processors (most modern PCs have just one or two processors) and 144 terabytes of main memory — 100,000 times as much as your computer has.

The scientists had previously simulated 40 percent of a mouse’s brain in 2006, a rat’s full brain in 2007, and 1 percent of a human’s cerebral cortex this year, using progressively bigger supercomputers.

The latest feat, being presented at a supercomputing conference in Portland, Ore., doesn’t mean the computer thinks like a cat, or that it is the progenitor of a race of robo-cats.

The simulation, which runs 100 times slower than an actual cat’s brain, is more about watching how thoughts are formed in the brain and how the roughly 1 billion neurons and 10 trillion synapses in a cat’s brain work together.

The researchers created a program that told the supercomputer, which is in the Lawrence Livermore National Laboratory, to behave how a brain is believed to behave. The computer was shown images of corporate logos, including IBM’s, and scientists watched as different parts of the simulated brain worked together to figure out what the image was.

Dharmendra Modha, manager of cognitive computing for IBM Research and senior author of the paper, called it a “truly unprecedented scale of simulation.” Researchers at Stanford University and Lawrence Berkeley National Laboratory were also part of the project.

Modha says the research could lead to computers that rely less on “structured” data, such as the input 2 plus 2 equals 4, and can handle ambiguity better, like identifying the corporate logo even if the image is blurry. Or such computers could incorporate senses like sight, touch and hearing into the decisions they make.

One reason that development would be significant to IBM: The company is selling “smarter planet” services that use digital sensors to monitor things like weather and traffic and feed that data into computers that are asked to do something with the information, like predicting a tsunami or detecting freeway accidents. Other companies could use “cognitive computing” to make better sense of large volumes of information.
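
For a sense of what “simulating neurons and synapses” involves, here is a toy Python sketch of the basic loop: integrate inputs, fire when a threshold is crossed, deliver spikes across synapses. To be clear, this is nothing like IBM’s actual simulator (which runs at a billion neurons with far more biological detail); every number below is invented for illustration.

    # Toy sketch only: a handful of leaky integrate-and-fire neurons with
    # random synapses, nowhere near the scale or fidelity of IBM's simulation.
    import random

    N = 100              # neurons (the real simulation had ~1 billion)
    THRESHOLD = 1.0      # membrane potential needed to fire
    LEAK = 0.9           # fraction of potential retained each step

    # random sparse synapses: each neuron projects to 10 targets with small weights
    synapses = [[(random.randrange(N), random.uniform(0.05, 0.2))
                 for _ in range(10)] for _ in range(N)]
    potential = [0.0] * N

    for step in range(50):
        # external drive: a little random input current to every neuron
        for i in range(N):
            potential[i] = potential[i] * LEAK + random.uniform(0.0, 0.15)

        # neurons over threshold fire: reset them and deliver spikes to targets
        fired = [i for i in range(N) if potential[i] >= THRESHOLD]
        for i in fired:
            potential[i] = 0.0
            for target, weight in synapses[i]:
                potential[target] += weight

        print(f"step {step:2d}: {len(fired)} neurons fired")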

via Mark Pesce


Eclipse Phase

Posted on October 12th, 2009

I remember reading a scan of an old real print comic once.  The character in it was railing against the imaginary people of his imaginary world, taking them to task about their dissatisfaction with the future they lived in.  But it was really aimed at the stupid people who wanted their stupid little futures and who were too stupid to see that the future is now.  It’s always now.  Except it isn’t anymore.  The TITANs changed that.  The future is now yesterday, and last week, and ten years ago.

–ECLIPSE PHASE

In August of this year, I had the opportunity to interview Rob Boyle and Brian Cross – two of the minds behind the post-singularity, transhumanist horror Role-Playing Game ECLIPSE PHASE.  We covered a lot of topics — from details about the game and the game world to the singularity, technology’s influence on politics, reputation economies, anarcho-transhumanism and more.

(Also?  Creative uses for bacon in the dark post-singularity future.)

[Image: http://farm3.static.flickr.com/2611/4006243151_6283aea7aa_o.jpg]

You can listen to the interview (recorded August 7th, 2009 in a noisy bar during the GEN CON gaming convention in Indianapolis, Indiana) here:

[Embedded audio player, via Podbean.com]

(Or you can download it in a podcast format from here.)  As a minor warning, there are some setting spoilers in the interview.

ECLIPSE PHASE comes out this week in the US and elsewhere from bookstores and gaming retailers.  (Or in PDF format from Drive Thru RPG.)


Science recruits weak AIs to finish its homework

Posted on April 7th, 2009

From WIRED:

In just over a day, a powerful computer program accomplished a feat that took physicists centuries to complete: extrapolating the laws of motion from a pendulum’s swings.

Condensing rules from raw data has long been considered the province of human intuition, not machine intelligence. It could foreshadow an age in which scientists and programs work as equals to decipher datasets too complex for human analysis.

Initially, the equations generated by the program failed to explain the data, but some failures were slightly less wrong than others. Using a genetic algorithm, the program modified the most promising failures, tested them again, chose the best, and repeated the process until a set of equations evolved to describe the systems. Turns out, some of these equations were very familiar: the law of conservation of momentum, and Newton’s second law of motion.

Lipson likened the quest to a “detective story” — a hint of the changing role of researchers in hybridized computer-human science. Programs produce sets of equations — describing the role of rainfall on a desert plateau, or air pollution in triggering asthma, or multitasking on cognitive function. Researchers test the equations, determine whether they’re still incomplete or based on flawed data, use them to identify new questions, and apply them to messy reality.

It was always about co-evolution.
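
For the curious, the evolve-test-select loop WIRED describes maps onto a surprisingly small amount of code. Here is a toy Python sketch of the idea, evolving polynomial coefficients against fake data rather than full symbolic expressions; it is not the actual research software, just the shape of the algorithm:

    # Toy "evolve the least-wrong failures" loop: candidate laws are cubics
    # y = c0 + c1*x + c2*x^2 + c3*x^3, scored against data from a hidden target.
    import random

    def hidden_law(x):                     # the "law" we pretend not to know
        return 9.8 * x * x

    data = [(x / 10.0, hidden_law(x / 10.0)) for x in range(-20, 21)]

    def error(coeffs):
        # squared error of the candidate polynomial against the data
        return sum((y - sum(c * x ** i for i, c in enumerate(coeffs))) ** 2
                   for x, y in data)

    def mutate(coeffs):
        return [c + random.gauss(0, 0.3) for c in coeffs]

    # start from a population of random guesses
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]

    for generation in range(300):
        population.sort(key=error)             # test everything against the data
        survivors = population[:10]            # keep the least-wrong candidates
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]

    best = min(population, key=error)
    print("evolved law: y =",
          " + ".join(f"{c:.2f}*x^{i}" for i, c in enumerate(best)))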

thanks to bookhling for the tip-off!


CB2 – first true member of the robo-species?

Posted on April 7th, 2009

From PhysOrg:

Below the soft silicon skin of one of Japan’s most sophisticated robots, processors record and evaluate information. The 130-cm (four-foot, four-inch) humanoid is designed to learn just like a human infant.

The team is trying to teach the pint-sized android to think like a baby who evaluates its mother’s countless facial expressions and “clusters” them into basic categories, such as happiness and sadness.

Asada’s project brings together robotics engineers, brain specialists, psychologists and other experts, and is supported by the state-funded Japan Science and Technology Agency.

With 197 film-like pressure sensors under its light grey rubbery skin, CB2 can also recognise human touch, such as stroking of its head.

The robot can record emotional expressions using eye-cameras, then memorise and match them with physical sensations, and cluster them on its circuit boards, said Asada.

…In the two years since then, he said, CB2 has taught itself how to walk with the aid of a human and can now move its body through a room quite smoothly, using 51 “muscles” driven by air pressure.

In coming decades, Asada expects science will come up with a “robo species” that has learning abilities somewhere between those of a human and other primate species such as the chimpanzee.

Thousands of humanoids could be working alongside humans in a decade or so, if that is what society wants, said Fumio Miyazaki, engineering science professor at the Toyonaka Campus of Osaka University.

“Robots have hearts,” said Kokoro planning department manager Yuko Yokota.

“They don’t look human unless we put souls in them.

“When manufacturing a robot, there comes a moment when light flickers in its eyes. That’s when we know our work is done.”
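
That “clustering” of expressions into basic categories is, at its most stripped-down, something like k-means over whatever features the robot pulls from faces. A rough Python sketch of the idea follows; the two-dimensional “expression features” are invented and have nothing to do with CB2’s actual internals:

    # Minimal k-means sketch of "cluster expressions into basic categories".
    import random

    def kmeans(points, k, iterations=20):
        centroids = random.sample(points, k)
        for _ in range(iterations):
            # assign each point to its nearest centroid
            clusters = [[] for _ in range(k)]
            for px, py in points:
                nearest = min(range(k),
                              key=lambda i: (px - centroids[i][0]) ** 2
                                            + (py - centroids[i][1]) ** 2)
                clusters[nearest].append((px, py))
            # move each centroid to the mean of the points assigned to it
            for i, cluster in enumerate(clusters):
                if cluster:
                    centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                    sum(p[1] for p in cluster) / len(cluster))
        return centroids

    # invented "expression features": one fuzzy blob per underlying emotion
    happy = [(random.gauss(1.0, 0.1), random.gauss(1.0, 0.1)) for _ in range(30)]
    sad = [(random.gauss(-1.0, 0.1), random.gauss(-1.0, 0.1)) for _ in range(30)]

    print("cluster centres:", kmeans(happy + sad, k=2))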

thanks to lizbt for the tip-off!


Making a Cyborg Brain

Posted on January 6th, 2009

Forget the wires. Think carbon nanotubes!

Research shows that carbon nanotubes, which, like neurons, are highly electrically conductive, form extremely tight contacts with neuronal cell membranes. Unlike the metal electrodes that are currently used in research and clinical applications, the nanotubes can create shortcuts between the distal and proximal compartments of the neuron, resulting in enhanced neuronal excitability.

What could the tubing be used for? (In the real world, and not this author’s sci-fi-addled brain.)

“This result is extremely relevant for the emerging field of neuro-engineering and neuroprosthetics,” explains Giugliano, who hypothesizes that the nanotubes could be used as a new building block of novel “electrical bypass” systems for treating traumatic injury of the central nervous system. Carbon nano-electrodes could also be used to replace metal parts in clinical applications such as deep brain stimulation for the treatment of Parkinson’s disease or severe depression. And they show promise as a whole new class of “smart” materials for use in a wide range of potential neuroprosthetic applications.

Link via inventorspot.com.


Kevin Kelly on “Evidence of a Global SuperOrganism”

Posted on October 25th, 2008

Kevin Kelly has posted a fascinating essay on his blog, further exploring his idea that all the computers connected via the internet form a superorganism, the One Machine.

This megasupercomputer is the Cloud of all clouds, the largest possible inclusion of communicating chips. It is a vast machine of extraordinary dimensions. It is comprised of quadrillion chips, and consumes 5% of the planet’s electricity. It is not owned by any one corporation or nation (yet), nor is it really governed by humans at all. Several corporations run the larger sub clouds, and one of them, Google, dominates the user interface to the One Machine at the moment.

Manufactured intelligence is a new commodity in the world. Until now all useable intelligence came in the package of humans – and all their troubles. El Goog and the One Machine offer intelligence without human troubles. In the beginning this intelligence is transhuman rather than non-human intelligence. It is the smartness derived from the wisdom of human crowds, but as it continues to develop this smartness transcends a human type of thinking. Humans will eagerly pay for El Goog intelligence. It is a different kind of intelligence. It is not artificial – i.e. a mechanical — because it is extracted from billions of humans working within the One Machine. It is a hybrid intelligence, half humanity, half computer chip. Therefore it is probably more useful to us. We don’t know what the limits are to its value. How much would you pay for a portable genius who knew all there was known?

via MAKE


Daniel Suarez’s Long Now lecture on our “Bot-Mediated Reality”

Posted on August 23rd, 2008

Probably best to wait on reading this if you’ve had a big weekend, because this is the stuff to make the most sober très paranoid.

Is our robot overlord future already here? Daniel Suarez thinks so:

Forget about HAL-like robots enslaving humankind a few decades from now, the takeover is already underway. The agents of this unwelcome revolution aren’t strong AIs, but “bots” – autonomous programs that have insinuated themselves into the internet and thus into every corner of our lives. Apply for a mortgage lately? A bot determined your FICO score and thus whether you got the loan. Call 411? A bot gave you the number and connected the call. Highway-bots collect your tolls, read your license plate and report you if you have an outstanding violation.

Bots are proliferating because they are so very useful. Businesses rely on them to automate essential processes, and of course bots running on zombie computers are responsible for the tsunami of spam and malware plaguing Internet users worldwide. At current growth rates, bots will be the majority users of the Net by 2010.

Here’s the full lecture, a meaty hour of knowledge (plus Q&A), so grab a coffee and sit down with ears ready, or copy it to your mp3 player of choice. I really think this is one not to miss!

[Embedded audio: the full lecture]

All that being said, I’m not really sure I agree with the solution he proposes, to:

…build a new Internet hard-coded with democratic values. Start with an encrypted Darknet into which only verifiably human users can enter. Create augmented reality tools to identify bots in the physical world. Enlist the aid of a few tame bots to help forge a symbiotic relationship with narrow AI.

But there is definitely a lot to think and talk about before we lose the reins on our society altogether!


Image Metrics say they have leaped the Uncanny Valley with “Emily”

Posted on August 18th, 2008

From the Times Online:

[Image: uncanny valley graph]
Emily…was produced using a new modelling technology that enables the most minute details of a facial expression to be captured and recreated.

Researchers…started with a video of an employee talking. They then broke the facial movements down into dozens of smaller movements, each of which was given a ‘control system’.

The team at Image Metrics…then recreated the gestures, movement by movement, in a model. The aim was to overcome the traditional difficulties of animating a human face, for instance that the skin looks too shiny, or that the movements are too symmetrical.

“Ninety per cent of the work is convincing people that the eyes are real,” Mike Starkenburg, chief operating officer of Image Metrics, said.
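
Breaking a face down into dozens of smaller movements, each with its own control, is in spirit what blendshape-style rigs do: the final face is the neutral pose plus a weighted sum of per-movement offsets. Here is a tiny Python sketch of that general idea; the vertices and controls below are made up, and this is not Image Metrics’ actual technique:

    # A three-vertex "face" is enough to show the mechanics of weighted controls.
    neutral_face = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

    # each control is a per-vertex offset from the neutral pose (values invented)
    controls = {
        "smile":      [(0.02, 0.10), (-0.02, 0.10), (0.00, 0.00)],
        "brow_raise": [(0.00, 0.00), (0.00, 0.00), (0.00, 0.15)],
    }

    def pose_face(weights):
        """Blend the neutral face with weighted control offsets."""
        posed = list(neutral_face)
        for name, offsets in controls.items():
            w = weights.get(name, 0.0)
            posed = [(x + w * dx, y + w * dy)
                     for (x, y), (dx, dy) in zip(posed, offsets)]
        return posed

    print(pose_face({"smile": 0.8, "brow_raise": 0.3}))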

Watch the video and judge for yourself!

[Embedded YouTube video]

IBM’s PENSIEVE – Next-Gen searchable outboard memory

Posted on July 29th, 2008

This is the PENSIEVE user interface (click through for high-resolution):

[Image: PENSIEVE user interface]

This is IBM’s promo video for it:

[Embedded YouTube video]

This is ganked from PhysOrg:

“This is like having a personal assistant for your memory,” said Dr. Yaakov Navon, the lead researcher and image processing expert from IBM’s Haifa Research Lab. “Our daily routines are overflowing with situations where we gain new information through meetings, advertisements, conferences, events, surfing the web, or even window shopping. Instead of going home and using a general web search to find that information, PENSIEVE helps the brain recall those everyday things you might normally forget.”

…By simply typing the person’s name into PENSIEVE, you can recall when and where you met them, and any related information garnered at that time. You could even browse forwards or backwards in time to find out what events transpired before or after the initial meeting.

Another use of this technology is in reconstructing and sharing an experience or memory. If enough media-rich data was collected about a particular event, it can be used to build a more complex visual associative representation of the experience.

“This is where the real power of collaboration kicks in,” said Eran Belinsky, research team leader and a specialist in collaboration. “You can recall the name of the person you met right before you entered a meeting by traversing a timeline of your experiences, or share a business trip with colleagues by creating a mashup that shows a map with an animation of your trail and the pictures you took in every location.”
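
Under the hood, that kind of recall is basically an indexed, timestamped event store you can query by person and browse along a timeline. Here is a back-of-the-envelope Python sketch of what such a lifestream might look like; it is entirely hypothetical and borrows nothing from the real PENSIEVE:

    # Hypothetical searchable "lifestream": timestamped events tagged with
    # people and places, queryable by name and browsable along a timeline.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Event:
        when: datetime
        where: str
        people: list
        note: str

    class Lifestream:
        def __init__(self):
            self.events = []

        def record(self, event):
            self.events.append(event)
            self.events.sort(key=lambda e: e.when)

        def recall_person(self, name):
            """Every recorded encounter with a given person, oldest first."""
            return [e for e in self.events if name in e.people]

        def around(self, event, n=1):
            """Browse n events before and after a given one on the timeline."""
            i = self.events.index(event)
            return self.events[max(0, i - n): i + n + 1]

    stream = Lifestream()
    stream.record(Event(datetime(2008, 7, 28, 9, 0), "hotel lobby",
                        ["Alice"], "quick introduction over coffee"))
    stream.record(Event(datetime(2008, 7, 28, 10, 0), "conference room",
                        ["Alice", "Bob"], "product demo and follow-up questions"))

    first_meeting = stream.recall_person("Alice")[0]
    print(first_meeting.when, first_meeting.where, first_meeting.note)
    print([e.note for e in stream.around(first_meeting)])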

This is the corporate future and it is only just starting to get messy. Let us just say I would be very careful now about using any company property for personal reasons.

Obviously this is awesome technology for personal use, but I would want to be controlling the database. In a secure location. Police already take people’s mobile phones in the event of emergency or tragedy (according to CSI, at least). Would you want to hand over an indexed/tagged, searchable lifestream?

That being said, how rad would it be if it pulled in CCTV images of you walking around?

Philip K. Dick: becoming more a prophet of the modern condition every second.


Building Better Pattern Recognition?

Posted on July 12th, 2008

From BusinessWeek:

Through his development of the Palm (PALM) Pilot and Treo smartphone, Jeff Hawkins helped change the way people access information on computers. Now he may have come up with a way to alter how we comb through that data.

Hawkins’ software startup Numenta is trying to replicate the thinking patterns of the human brain in an effort to recognize subtle patterns in immense streams of data. Researchers, of course, say such tools could lead to advances in data-rich fields from drug discovery to law enforcement.


AIs are in our casinos, taking our money!

Posted on July 10th, 2008

Did you catch the news about the Second Man-Machine Poker Competition? Once more man has been defeated by the machine, this time in a game of Texas Hold ‘em!

So just how did this trumped-up piece of software so easily defeat its human creators?

“There are two really big changes in Polaris over last year,” said professor Michael Bowling, who supervised the graduate students who programmed Polaris. “First of all, our poker model is much expanded over last year – it’s much harder for humans to exploit weaknesses. And secondly, we have added an element of learning, where Polaris identifies which common poker strategy a human is using and switches its own strategy to counter. This complicated the human players’ ability to compare notes, since Polaris chose a different strategy to use against each of the humans it played,” Bowling said.

One algorithm, called counter-factual regret, monitored the outcome of hands lost by Polaris and what could have been done to change the outcome. Polaris could then watch for similar circumstances and adjust more effectively.

They’re LEARNING!
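
For anyone wondering what “counterfactual regret” means in practice, the stripped-down version is regret matching: after each round, tally how much better every alternative action would have done, then play high-regret actions more often. Here is a toy Python sketch of that trick on rock-paper-scissors against a biased opponent; Polaris applies a far more sophisticated version over enormous poker game trees:

    # Toy regret matching, not Polaris's counterfactual regret minimisation.
    import random

    ACTIONS = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def payoff(mine, theirs):
        if mine == theirs:
            return 0
        return 1 if BEATS[mine] == theirs else -1

    def strategy_from(regrets):
        positive = [max(r, 0.0) for r in regrets]
        total = sum(positive)
        return [p / total for p in positive] if total > 0 else [1 / 3] * 3

    regrets = [0.0, 0.0, 0.0]
    for _ in range(10000):
        strategy = strategy_from(regrets)
        mine = random.choices(ACTIONS, weights=strategy)[0]
        theirs = random.choices(ACTIONS, weights=[0.5, 0.3, 0.2])[0]  # biased foe
        got = payoff(mine, theirs)
        # regret: how much better each alternative would have done this round
        for i, alternative in enumerate(ACTIONS):
            regrets[i] += payoff(alternative, theirs) - got

    final = strategy_from(regrets)
    print({a: round(p, 2) for a, p in zip(ACTIONS, final)})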

Which brings us to this man, lured by the promise of ca$h, that the AIs might steal his tasty poker-skillz:

Freedom fighter or collaborator? You be the judge!

via Uncertain Times.