The article explains, in part:
Electrodes are attached to the neck and face to detect the movements that occur as the person silently mouths words and phrases. Sensing the movement of the speech organs, rather than interpreting sound waves, allows the device to come as close as possible to real-time translation without the trouble of overlapping voices. The effect is "like watching a television programme that [has] been dubbed." Using this data, a computer can work out the sounds being formed and then build these sounds up into words.
The system is then able to translate the words into another language, which is then read out by a synthetic voice.
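So the chain goes roughly: muscle signals in, sounds worked out, sounds built into words, words translated, and a synthetic voice at the end. Here's a purely illustrative sketch of that pipeline in Python; every function and bit of data below is made up for the sake of the example, not the researchers' actual software:

```python
# Illustrative sketch of the silent-speech translation pipeline described
# in the article. Every stage is a hypothetical stand-in, not the real system.

from dataclasses import dataclass


@dataclass
class ElectrodeFrame:
    """One time-slice of muscle-activity readings from the neck/face electrodes."""
    channels: list[float]


def sounds_from_movements(frames: list[ElectrodeFrame]) -> list[str]:
    """Work out which speech sounds the mouthed movements correspond to.
    (Stand-in: a real system would run a trained classifier here.)"""
    return ["h", "e", "l", "ow"]


def words_from_sounds(sounds: list[str]) -> list[str]:
    """Build the recognised sounds up into words.
    (Stand-in: a real system would match sound sequences against a lexicon.)"""
    return ["hello"]


def translate(words: list[str], target_language: str) -> str:
    """Translate the recognised words into the target language.
    (Stand-in dictionary; a real system would use a translation engine.)"""
    tiny_dictionary = {("hello",): {"es": "hola", "de": "hallo"}}
    return tiny_dictionary.get(tuple(words), {}).get(target_language, " ".join(words))


def speak(text: str) -> None:
    """Read the translated text out with a synthetic voice (here: just print it)."""
    print(f"[synthetic voice] {text}")


if __name__ == "__main__":
    # Fake electrode data standing in for someone silently mouthing "hello".
    frames = [ElectrodeFrame(channels=[0.1, 0.4, 0.2]) for _ in range(10)]
    sounds = sounds_from_movements(frames)
    words = words_from_sounds(sounds)
    speak(translate(words, target_language="es"))
```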
This is a huge leap forward, but clearly they have a long way to go. On Star Trek, people's lips always match what you hear them saying, no matter what language it is being translated into.
But if they can perfect this device and add some kind of holographic lip-synching, we'll have Universal Translators before we know it.