One thing I've noticed is that `ada` (in particular) will often return an empty string for a completion. There are no errors; for whatever reason, the model simply has nothing further to say. For the prompt we use for monitoring, this happens more than half the time. If we bump the model up to `babbage` or `curie`, we don't run into this issue. It would be good to have a security policy that alerts when this happens in real production applications, so that we know when our chat bot returns an empty response. At the moment there is no easy way to catch this condition within the Mantium platform.
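In the meantime, the condition is easy to detect client-side. A minimal sketch, assuming the standard completions response shape (a `choices` list whose items carry a `text` field); `is_empty_completion` is a hypothetical helper, not part of any SDK:

```python
def is_empty_completion(response: dict) -> bool:
    """Return True if every choice in the completion is empty or whitespace-only.

    Also returns True when there are no choices at all, since that is
    equally "nothing to say" from the caller's point of view.
    """
    choices = response.get("choices", [])
    return all(not choice.get("text", "").strip() for choice in choices)


# Illustrative responses mimicking the completions API shape (not real API calls)
empty_response = {"choices": [{"text": ""}]}
whitespace_response = {"choices": [{"text": "  \n"}]}
normal_response = {"choices": [{"text": "Here is a helpful answer."}]}
```

A caller could wrap this in a retry loop, or log/alert whenever it fires, to approximate the alerting policy described above.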