The anticonversation

http://www.newscientist.com/

“how eerie it would be, yet also how peaceful – people all around having conversations on their mobile phones, but without uttering a sound.”

If the purpose of the artifacts described has anything to do with conversations, then I would say this is a case study of how a good idea can be ruined by misunderstanding a little detail. The idea of a collar that detects the words you pronounce by sensing the movement of the vocal cords sounds sci-fi and interesting, and it reminds me of the sonic weapons described in Dune (though I’d search for other uses, since I hate weapons of any kind). As the article tells us, the collar works out what you’re saying and then reproduces the words by means of a computer, so that “the receiver of the call would hear the speaker talking with an artificial voice”.

The point is, unless a human is talking to a computer, a human expects to hear human voices in a conversation. I know that Hawking speaks through a computer, but we have all come to accept that as his own voice. However, can you picture yourself talking on the phone with a computerized voice that is supposed to be your mother, your brother? I can’t. What about the battlefield? Will you pay the same attention to the shouts and calls of your comrades when they sound like the droids in Star Wars? (Very nice sounds, nevertheless.) Finally, what about the loud bar? Just imagine it: the whole symbiosis between bartender and customers reduced to a mere trade between computers. Will the computer reflect the drunkenness too?

In my opinion, when all you have for a conversation is voice, as on the phone, the nuances and subtleties of the other speaker’s voice are essential to set up an emotional context, and without that context there cannot be enough engagement for a satisfactory (say, comfortable) communication. In other words, my brother is not an answering machine (nor does his answering machine speak like a computer).

Anyway, outside conversational purposes the concept is still promising. For instance, in the field of electroacoustic music it would be interesting to have a singer sing along with a robotized doubler, which could then feed into real-time processing of both the organic and electronic voices. Or the device could be used as a sophisticated trigger, so that the performer could change parameters in real time just by whispering what to change and by how much. On the whole, it is as useful for human-computer communication as it is useless for human-human communication.
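To make that trigger idea a bit more concrete, here is a minimal sketch of the mapping I have in mind: whispered phrases like “reverb 0.6” get recognized by the collar and routed into a parameter table that a live-processing patch could read. Everything here is an assumption for illustration, including the parameter names and the 0 to 1 value range; the article’s device would simply stand in for wherever the phrases come from.

```python
# Hypothetical sketch: routing recognized subvocal commands into real-time
# parameters. Parameter names, value ranges, and the example phrases are
# assumptions; the collar's recognizer would supply the phrases in practice.

from typing import Dict

# Parameters the performer can steer by whispering, e.g. "reverb 0.6".
parameters: Dict[str, float] = {"reverb": 0.2, "delay": 0.0, "pitch_shift": 0.0}

def apply_command(text: str, params: Dict[str, float]) -> None:
    """Parse a recognized phrase like 'reverb 0.6' and update that parameter."""
    try:
        name, value = text.split()
        if name in params:
            # Clamp to a normalized 0..1 range before handing it to the patch.
            params[name] = max(0.0, min(1.0, float(value)))
    except ValueError:
        pass  # ignore phrases that are not a "name value" command

# Whatever the recognizer outputs would be fed through here.
for phrase in ["reverb 0.6", "delay 0.25", "hello world"]:
    apply_command(phrase, parameters)

print(parameters)  # {'reverb': 0.6, 'delay': 0.25, 'pitch_shift': 0.0}
```

In a real setup the updated values would be sent on to the synthesis or effects engine (for example over OSC or MIDI) rather than just printed, but the point is the same: the whisper only has to name a parameter and an amount, and the rest stays in the machine.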

Edit: I hope this isn’t misunderstood. My criticism is aimed at using electronic voices to replace organic ones in the contexts described in the article, not at their use as an aid for mute or speech-impaired people.
