On the lowest level, MOVI is stateless. That means a call sign or new sentences can be trained at any time. After training, MOVI just sends events over the serial connection between the Arduino and the shield, notifying the Arduino that it has heard a sentence, a call sign, or certain words. You can observe all of this using the LowLevelInterface example that comes with the library, or by just opening the Serial Monitor.
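If you want to watch that raw event stream yourself, a minimal passthrough sketch can forward everything MOVI sends to the Serial Monitor. This is only a sketch of the idea: the pin numbers and baud rate below are assumptions, so check your board's wiring and the User Manual before relying on them.

```cpp
#include <SoftwareSerial.h>

// RX, TX -- assumed MOVI communication pins on an AVR board; adjust for your wiring
SoftwareSerial movi(10, 11);

void setup() {
  Serial.begin(9600);  // Serial Monitor
  movi.begin(9600);    // serial link to the shield (assumed baud rate)
}

void loop() {
  // Forward MOVI's events to the Serial Monitor, and typed commands back to MOVI
  if (movi.available())   Serial.write(movi.read());
  if (Serial.available()) movi.write(Serial.read());
}
```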
Once you use MOVI's Arduino API (MOVIShield.h), things become a little more ordered. The API assumes that things happen in the following sequence:
1) a new MOVI object is created:
MOVI recognizer(true); // Get a MOVI object, true enables serial monitor interface, rx and tx can be passed as parameters for alternate communication pins on AVR architecture
2) MOVI is initialized using:
recognizer.init(); // Initialize MOVI (waits for it to boot)
3) A new call sign and/or sentences are added and trained using something like:
recognizer.addSentence("This is the first sentence"); // sentence one
recognizer.addSentence("This is the second sentence"); // sentence two
recognizer.addSentence("This is the third sentence"); // sentence three
recognizer.train(); // Train the call sign and sentences (only retrains if they changed)
4) After train(), training is considered finished and everything is saved in internal buffers, so results can be returned by the poll() method, called in loop():
signed int res=recognizer.poll(); // Get result from MOVI: 0 denotes nothing happened, negative values denote events, positive values the number of the recognized sentence (see User Manual)
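Putting steps 1) through 4) together, a minimal sketch might look like the following. The sentence texts are the examples from above; reacting to `res == 1` is just an illustration of how the positive return value can be used.

```cpp
#include <MOVIShield.h>

MOVI recognizer(true);  // Step 1: create the MOVI object (serial monitor interface on)

void setup() {
  recognizer.init();  // Step 2: initialize MOVI and wait for it to boot
  recognizer.addSentence("This is the first sentence");   // Step 3: add sentences
  recognizer.addSentence("This is the second sentence");
  recognizer.addSentence("This is the third sentence");
  recognizer.train();  // train the call sign and sentences
}

void loop() {
  signed int res = recognizer.poll();  // Step 4: 0 = nothing, negative = event,
                                       // positive = number of recognized sentence
  if (res == 1) {
    // react to "This is the first sentence" here
  }
}
```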
I am explaining this because there is no obligation to follow this sequence; it is just easier. If you want to train new sentences after things have already been running for a while, you have several options. For example:
In step 1), create two recognizer objects but only construct the first one. Then, once you are ready to train your new sentences on the fly, you can deconstruct the first one.
Once you construct the second one, you have a fresh new MOVI object and can follow the sequence 1) through 4) again.
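One way to sketch this destroy-and-reconstruct pattern is to heap-allocate the recognizer, so it can be deleted and re-created at runtime. This is my assumption about how to manage the object lifetimes, not an official MOVI pattern, and the helper name `retrainOnTheFly` is hypothetical:

```cpp
#include <MOVIShield.h>

MOVI* recognizer = new MOVI(true);  // Step 1: construct the first recognizer

void setup() {
  recognizer->init();
  recognizer->addSentence("This is the first sentence");
  recognizer->train();
}

// Hypothetical helper: deconstruct the old object and start the sequence over
void retrainOnTheFly() {
  delete recognizer;                // deconstruct the first recognizer
  recognizer = new MOVI(true);      // a fresh new MOVI object: repeat steps 1-4
  recognizer->init();
  recognizer->addSentence("This is a brand new sentence");
  recognizer->train();
}

void loop() {
  signed int res = recognizer->poll();
  // ... react to events here, and call retrainOnTheFly() when needed ...
}
```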