2011-05-09

Dr Denise Herzing preparing to make a breakthrough with a new system at sea

Learning to speak dolphin

(New Scientist)

A computer system that divers can wear may bridge the language barrier
between us and dolphins

A diver carrying a computer that tries to recognize dolphin sounds and
generate responses in real time will soon attempt to communicate with
wild dolphins off the coast of Florida. If the bid is successful, it
will be a big step towards two-way communication between humans and
dolphins.

Since the 1960s, captive dolphins have been communicating with humans
via pictures and sounds. In the 1990s, Louis Herman of the Kewalo Basin Marine
Mammal Laboratory in Honolulu, Hawaii, found that bottlenose dolphins
can keep track of over 100 different words. They can also respond
appropriately to commands in which the same words appear in a
different order, understanding the difference between "bring the
surfboard to the man" and "bring the man to the surfboard", for
example.

But communication in most of these early experiments was one-way, says
Denise Herzing, founder of the Wild Dolphin Project in Jupiter,
Florida. "They create a system and expect the dolphins to learn it,
and they do, but the dolphins are not empowered to use the system to
request things from the humans," she says.

Since 1998, Herzing and colleagues have been attempting two-way
communication with dolphins, first using rudimentary artificial
sounds, then by getting them to associate the sounds with four large
icons on an underwater "keyboard".

By pointing their bodies at the different symbols, the dolphins could
make requests - to play with a piece of seaweed or ride the bow wave
of the divers' boat, for example. The system managed to get the
dolphins' attention, Herzing says, but wasn't "dolphin-friendly"
enough to be successful.

Herzing is now collaborating with Thad Starner, an artificial
intelligence researcher at the Georgia Institute of Technology in
Atlanta, on a project named Cetacean Hearing and Telemetry (CHAT).
They want to work with dolphins to "co-create" a language built from
features of the sounds that wild dolphins naturally use to communicate.

Knowing what to listen for is a huge challenge. Dolphins can produce
sound at frequencies up to 200 kilohertz - around 10 times as high as
the highest pitch we can hear - and can also shift a signal's pitch or
stretch it out over a long period of time.

The animals can also project sound in different directions without
turning their heads, making it difficult to use visual cues alone to
identify which dolphin in a pod "said" what and to guess what a sound
might mean.

To record, interpret and respond to dolphin sounds, Starner and his
students are building a prototype device featuring a smartphone-sized
computer and two hydrophones capable of detecting the full range of
dolphin sounds.

A diver will carry the computer in a waterproof case worn across the
chest, and LEDs embedded around the diver's mask will light up to show
where a sound picked up by the hydrophones originates from. The diver
will also have a Twiddler - a handheld device that acts as a
combination of mouse and keyboard - for selecting what kind of sound
to make in response.
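The article does not say how the mask display will compute a sound's
direction, but a standard way to get a bearing from two spaced
hydrophones is to measure the time difference of arrival between them.
The Python sketch below illustrates that idea only; the spacing, sample
rate and sound speed are assumptions of mine, not specifications of the
CHAT hardware.

```python
# Illustrative sketch only: estimate the bearing of a sound from the time
# difference of arrival (TDOA) at two hydrophones via cross-correlation.
# Spacing, sample rate and sound speed are assumed values, not CHAT specs.
import numpy as np

SOUND_SPEED = 1500.0      # rough speed of sound in sea water, m/s
HYDROPHONE_SPACING = 0.5  # assumed distance between the hydrophones, m
SAMPLE_RATE = 400_000     # assumed; above twice the ~200 kHz dolphin range

def bearing_from_tdoa(left, right):
    """Angle of arrival in degrees (0 = straight ahead of the hydrophone pair)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # delay of left channel, in samples
    delay = lag / SAMPLE_RATE                  # delay in seconds
    # The path difference cannot exceed the hydrophone spacing; clip for safety.
    ratio = np.clip(delay * SOUND_SPEED / HYDROPHONE_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    click = rng.normal(size=2000)              # a broadband click-like burst
    shift = 40                                 # simulated extra delay on the left
    left = np.concatenate([np.zeros(shift), click])
    right = np.concatenate([click, np.zeros(shift)])
    print(f"estimated bearing: {bearing_from_tdoa(left, right):.1f} degrees")
```

A real system would have to update such an estimate continuously and map
the angle onto the ring of LEDs, but cross-correlating the two channels
is the core of most two-receiver direction finders.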

Herzing and Starner will start testing the system on wild Atlantic
spotted dolphins (Stenella frontalis) in the middle of this year. At
first, divers will play back one of eight "words" coined by the team
to mean "seaweed" or "bow wave ride", for example. The software will
listen to see if the dolphins mimic them. Once the system can
recognize these mimicked words, the idea is to use it to crack a much
harder problem: listening to natural dolphin sounds and pulling out
salient features that may be the "fundamental units" of dolphin
communication.
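The article does not describe how the software will decide that a
dolphin has mimicked one of the eight words. One simple way to frame
the task, sketched below purely as an illustration, is to reduce each
sound to a coarse spectral fingerprint and match an incoming call
against the stored fingerprints of the eight reference words; the
feature, similarity measure and threshold here are my assumptions, not
the CHAT system's.

```python
# Illustrative sketch only: match an incoming sound against a small set of
# reference "words" by comparing coarse spectral fingerprints. The feature,
# similarity measure and threshold are assumptions, not the CHAT system's.
import numpy as np

def fingerprint(sound, frame=256, n_bins=64):
    """Average spectral shape of a clip, normalized to unit length."""
    frames = [sound[i:i + frame] for i in range(0, len(sound) - frame + 1, frame)]
    spectra = np.abs(np.fft.rfft(frames, axis=1))[:, :n_bins]
    mean_spectrum = spectra.mean(axis=0)
    return mean_spectrum / (np.linalg.norm(mean_spectrum) + 1e-12)

def recognize_mimic(sound, reference_words, threshold=0.8):
    """Name of the closest reference word, or None if nothing matches well."""
    probe = fingerprint(sound)
    scores = {name: float(np.dot(probe, fingerprint(ref)))
              for name, ref in reference_words.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

A matcher this crude would struggle with the pitch shifts and stretched
signals mentioned above, which is part of why recognizing mimics in real
time, in open water, is a hard problem.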

The researchers don't know what these units might be. But the
algorithms they are using are designed to sift through any unfamiliar
data set and pick out interesting features (see "Pattern detector",
below). The software does this by assuming an average state for the
data
and labeling features that deviate from it. It then groups similar
types of deviations - distinct sets of clicks or whistles, say - and
continues to do so until it has extracted all potentially interesting
patterns.
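As a rough illustration of that procedure (and emphatically not the
team's actual code), the Python sketch below flags stretches of audio
whose energy strays far from an assumed average state and then clusters
the flagged stretches by their spectral shape; the window size,
threshold and clustering method are all assumptions.

```python
# Illustrative sketch only: flag segments that deviate from an assumed
# "average state" and group similar deviations, loosely following the
# description above. Window size, threshold and clustering are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def find_deviant_segments(signal, window=256, threshold=3.0):
    """Fixed-size windows whose energy strays far from the signal's average."""
    starts = range(0, len(signal) - window + 1, window)
    windows = [signal[i:i + window] for i in starts]
    energies = np.array([np.mean(w ** 2) for w in windows])
    mean, std = energies.mean(), energies.std()
    return [w for w, e in zip(windows, energies) if abs(e - mean) > threshold * std]

def group_deviations(segments, n_groups):
    """Cluster deviant segments by spectral shape (click-like vs whistle-like, say)."""
    features = np.array([np.abs(np.fft.rfft(s))[:64] for s in segments])
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(features)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.normal(0.0, 0.1, 100_000)                 # stand-in for hydrophone audio
    audio[20_000:20_512] += np.sin(0.5 * np.arange(512))  # injected whistle-like burst
    segments = find_deviant_segments(audio)
    print(f"{len(segments)} deviant segments found")
    if len(segments) >= 2:
        print(group_deviations(segments, n_groups=2))
```

Iterating that flag-and-group step over large recordings is what would,
in principle, surface candidate "fundamental units"; deciding what any
of them mean is a separate problem, as the researchers acknowledge.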

Once these units are identified, Herzing hopes to combine them to make
dolphin-like signals that the animals find more interesting than
human-coined "words". By associating behaviors and objects with these
sounds, she may be the first to decode the rudiments of dolphins'
natural language.

Justin Gregg of the Dolphin Communication Project, a non-profit
organization in Old Mystic, Connecticut, thinks that getting wild
dolphins to adopt and use artificial "words" could work, but is
skeptical that the team will find "fundamental units" of natural
dolphin communication.

Even if they do, deciphering their meanings and using them in the
correct context poses a daunting challenge. "Imagine if an alien
species landed on Earth wearing elaborate spacesuits and walked
through Manhattan speaking random lines from The Godfather to
passers-by," he says.

"We don't even know if dolphins have words," Herzing admits. But she
adds, "We could use their signals, if we knew them. We just don't." n
The software that Thad Starner is using to make sense of dolphin
sounds was originally designed by him and a former student, David
Minnen, to "discover" interesting features in any data set. After
analyzing a sign-language video, the software labeled 23 of 40 signs
used. It also identified when the person started and stopped signing,
or scratched their head.

The software has also identified gym routines - dumb-bell curls, for
example - by analyzing readings from accelerometers worn by the person
exercising, even though the software had not previously encountered
such data. However, Starner cautions that ascribing meaning to the
patterns the software picks out will still require human input.