Legacy MOVI User Community Forum (readonly) > MOVI Question & Answers

Training sentence sets

posted Aug 05, 2017 22:38:18 by Craig Morgan
Is it possible to train sentences using a pushbutton?
6 replies
GeraldFriedland said Aug 06, 2017 02:48:52
Probably. Can you be a bit more specific about what you are trying to do?

Thanks,
Gerald
Craig Morgan said Aug 06, 2017 16:39:42
For example, can a sentence set be created by pushing a button: have MOVI listen and then take that voice command as a new sentence set?
[Last edited Aug 06, 2017 16:52:25]
GeraldFriedland said Aug 09, 2017 23:56:17
Sentences can be trained based on a push of a button. But training does not imply listening to the new sentences.

The way to re-train MOVI in an Arduino script is to create a new MOVI object. Let me know if you need more details.

Gerald
Craig Morgan said Aug 10, 2017 15:24:04
Yes, if you could be so kind as to go into more detail; I am a little confused by "MOVI object". And can you please explain how MOVI can listen to new sentences?

Thanks in advance.
GeraldFriedland said Aug 11, 2017 03:35:49
Craig,

On the lowest level, MOVI is stateless. That means a call sign or new sentences can be trained at any time. After training, MOVI just sends events over the serial connection between the Arduino and the shield, notifying it that it has heard a sentence, a call sign, or certain words. You can observe all of this using the LowLevelInterface example that comes with the library, or just open the Serial Monitor.

Once you use MOVI's Arduino API (MOVIShield.h), things become a little more ordered. The API assumes that things happen in this sequence:
1) a new MOVI object is created:
MOVI recognizer(true);            // Get a MOVI object; true enables the serial monitor interface. On AVR architecture, rx and tx can be passed as parameters for alternate communication pins.

2) MOVI is initialized using:
recognizer.init();      // Initialize MOVI (waits for it to boot)

3) New callsign/sentences are added and trained using something like:
recognizer.addSentence("This is the first sentence"); // sentence one
recognizer.addSentence("This is the second sentence"); // sentence two
recognizer.addSentence("This is the third sentence"); // sentence three
recognizer.train();    

4) After train(), things are considered finished and everything is saved in internal buffers, so results can be returned by the poll() method, called in loop():
signed int res=recognizer.poll(); // Get result from MOVI, 0 denotes nothing happened, negative values denote events (see User Manual)


All of this can be examined in detail in: https://github.com/audeme/MOVIArduinoAPI/blob/master/MOVIShield.cpp
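Pulled together, steps 1-3 run once in setup() and step 4 runs repeatedly in loop(). A minimal sketch along those lines (the event handling in loop() is illustrative, not from this thread, and needs the shield attached to actually run):

```cpp
#include <MOVIShield.h>   // MOVI Arduino API

MOVI recognizer(true);    // 1) create the MOVI object (true: serial monitor on)

void setup() {
  recognizer.init();                                     // 2) wait for MOVI to boot
  recognizer.addSentence("This is the first sentence");  // 3) add sentences...
  recognizer.addSentence("This is the second sentence");
  recognizer.train();                                    //    ...and train
}

void loop() {
  signed int res = recognizer.poll();  // 4) 0 = nothing, negative = events
  if (res == 1) {                      // positive values number the matched sentence
    recognizer.say("You said the first sentence");
  }
}
```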

I am explaining this because there is no obligation to follow this sequence; it's just easier. If you want to train new sentences after things have already been running for a while, you could do a bunch of things. For example:

In step 1), create two recognizers but only construct the first one. Then, once you are ready to train your new sentences on the fly, you can destruct the first one with:
recognizer.~MOVI();


Once you construct the second one, you have a fresh new MOVI object to follow the sequence 1-4.

Hope that helped,
Gerald
Craig Morgan said Aug 13, 2017 13:44:07
Thank you Gerald, appreciate the feedback.