Is there any way to return perplexity as a function of the number of iterations? That would let us optimise the number of iterations and avoid getting into burn-in periods in future runs.
One way I found to do that is to add a list attribute that is filled, iteration by iteration, with the perplexity for that iteration.
Perplexity is computed as exp(-(modelLogLikelihood() / totalTokens)).
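For illustration, here is a minimal sketch of the idea (the class and member names are placeholders, not the actual patch); it assumes the model exposes a log likelihood via modelLogLikelihood() and a token count totalTokens, as mentioned above:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: "PerplexityTracker" and its members are placeholder names.
// It assumes access to modelLogLikelihood() and totalTokens, as described above.
public class PerplexityTracker {

    // List attribute filled once per training iteration.
    private final List<Double> perplexityPerIteration = new ArrayList<>();

    // Would be called at the end of each training iteration.
    public void record(double modelLogLikelihood, long totalTokens) {
        double perplexity = Math.exp(-(modelLogLikelihood / totalTokens));
        perplexityPerIteration.add(perplexity);
    }

    public List<Double> getPerplexityPerIteration() {
        return perplexityPerIteration;
    }
}
```

The recorded values could then be plotted against the iteration number to decide how many iterations are actually needed.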
Any chance I can submit a pull request?
Thanks for looking into this! I'm not sure I understand. Is the idea to stop training when the model log likelihood stops improving? Burn-in usually refers to the early iterations, while log likelihood is still improving.