One of the things that has troubled me almost from day one is this: how do you leave an appliance in situ, but make it smarter with the minimum of intervention?

Voice recognition needs tweaking in any circumstance. You have to adjust for background conditions, different types of speaker, different communications channels (perhaps one call is mobile and the next is a multi-channel call from a trading turret), and a host of other factors. One size does not fit all, and if you want to squeeze the best out of it, particularly from phone conversations, you need to do a lot of configuration.
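
To make that concrete, here is a rough sketch of the kind of per-channel settings involved. Everything in it (the ChannelProfile class, its fields and the example values) is illustrative only, not any real product's API.

```python
# Hypothetical sketch of the per-channel tuning that has to be set by hand.
# Names and fields are illustrative, not a description of an actual system.
from dataclasses import dataclass

@dataclass
class ChannelProfile:
    channel_id: str
    audio_source: str          # e.g. "mobile", "landline", "trading_turret"
    sample_rate_hz: int        # turret feeds and mobile calls often differ
    noise_profile: str         # background conditions: "trading_floor", "quiet_office"
    language_model: str        # vocabulary biased towards what this desk talks about
    speaker_adaptation: bool   # per-speaker acoustic adaptation on or off

# One profile per monitored channel; with thousands of channels this is the
# configuration burden described above.
profiles = [
    ChannelProfile("desk-101", "trading_turret", 16000, "trading_floor", "fx_trading", True),
    ChannelProfile("mob-4402", "mobile", 8000, "street", "general_business", False),
]
```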

But that configuration scales with the number of channels you are monitoring, and frankly, the task goes from being unwieldy to impossible very quickly.

A lot of our research recently has gone into this, and we think we’ve come a long way towards making the configuration happen automatically and adjust itself over time.  This is not just a matter of learning what people talk about (which you can do from e-mail and IM traffic as well as from the voice itself), but of performing, in effect, a self-diagnostic on the voice algorithm’s performance, both in general and on a speaker-specific basis.  I can’t go into detail at the moment, as we’re looking at the patentability of what we’ve done, but we are now in a position where we can leave a several-thousand-channel system on its own, and it will actually get better and better results over time.
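
To give a flavour of the general idea (and deliberately not of our actual method, for the reasons above), a self-tuning loop of this kind typically amounts to scoring the recogniser’s own output per channel and per speaker, then re-tuning wherever the score drifts. The sketch below is purely illustrative: the confidence-based scoring, the threshold and the retune_channel / retune_speaker callbacks are all assumptions, not a description of what we have built.

```python
# Illustrative only: a generic shape for a self-diagnostic / self-tuning pass.
from collections import defaultdict

def self_tuning_pass(results, retune_channel, retune_speaker, threshold=0.75):
    """results: iterable of (channel_id, speaker_id, confidence) tuples taken
    from the recogniser's output; retune_* are callbacks that adjust models."""
    channel_scores = defaultdict(list)
    speaker_scores = defaultdict(list)

    for channel_id, speaker_id, confidence in results:
        channel_scores[channel_id].append(confidence)
        speaker_scores[speaker_id].append(confidence)

    # General diagnostic: flag whole channels whose average confidence
    # has drifted below the threshold (noise, codec, line quality).
    for channel_id, scores in channel_scores.items():
        if sum(scores) / len(scores) < threshold:
            retune_channel(channel_id)

    # Speaker-specific diagnostic: adapt models for individual speakers
    # the recogniser is consistently struggling with.
    for speaker_id, scores in speaker_scores.items():
        if sum(scores) / len(scores) < threshold:
            retune_speaker(speaker_id)
```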

“Advanced Self-Adapting Learning Technology” – A/SALT: You heard it here first.
