“You can’t handle the truth!”

There are some layers, such as the output layer, that likely require more training than others, so we should use different learning rates for these layers (a rough sketch of this idea appears at the end of the post). I trained the model until I reached about 43% accuracy.

Modeling Approach: Classifier Model

Now that we have fine-tuned the language model, we can further fine-tune it for our actual task: predicting whether a movie quote is memorable or not. This involves simply popping off the last layer and replacing it with a layer that has two outputs (memorable, non-memorable); this step is also sketched below. After training for a little while, we get to around 70% accuracy. Not too bad considering the dataset is quite small!

In an attempt to improve the accuracy of the model even further, I re-ran all of the training with the documents reversed; there are also publicly available pre-trained weights for a backwards language model. This ended up not helping here, but it is worth trying on your own datasets.

Conclusions

This analysis demonstrates that, controlling for the length of a quote and when it is presented, there is something about the content of certain quotes that makes them memorable. The next step may be to try to understand why some quotes are memorable and others are not (i.e., model interpretation); this was, in fact, the original goal of the creators of this dataset. Perhaps this classifier could be used by writers and journalists to check whether a quote will be memorable.

It is quite remarkable that we can achieve a reasonable level of accuracy with such a small dataset. This level of accuracy would have been very difficult to reach without transfer learning, and I am excited to see further use of transfer learning in NLP. There are many other small details involved in training this model; check out my GitHub to see the actual code, and please do not hesitate to ask if you have any questions!
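
For reference, here is a minimal PyTorch sketch of the discriminative learning rates idea mentioned above: earlier layers get small learning rates, while the new output layer gets the largest. The layer shapes and names are hypothetical stand-ins, not the actual language model used in this post (the real training code is in the GitHub repository).

```python
import torch
from torch import nn, optim

# Toy stand-in layers; the real model is a pretrained recurrent language model,
# and these sizes are purely illustrative.
embedding = nn.Embedding(30_000, 400)   # earliest layer: changed least
encoder = nn.Linear(400, 400)           # middle layers
head = nn.Linear(400, 2)                # new output layer: trained most

# Discriminative learning rates: one optimizer parameter group per layer group,
# with later (more task-specific) groups getting larger learning rates.
optimizer = optim.Adam([
    {"params": embedding.parameters(), "lr": 1e-4},
    {"params": encoder.parameters(),   "lr": 5e-4},
    {"params": head.parameters(),      "lr": 1e-3},
])
```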
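
And here is a similarly hedged sketch of the classifier step, swapping the language model's final layer for a two-output head (memorable vs. non-memorable). The encoder here is a dummy module standing in for the fine-tuned language model, and all names are hypothetical.

```python
import torch
from torch import nn

class QuoteClassifier(nn.Module):
    """Wraps a (hypothetical) pretrained encoder and adds a 2-way head."""

    def __init__(self, encoder: nn.Module, hidden_dim: int):
        super().__init__()
        self.encoder = encoder                # weights from the fine-tuned LM
        self.head = nn.Linear(hidden_dim, 2)  # memorable vs. non-memorable

    def forward(self, x):
        features = self.encoder(x)            # assumes a fixed-size representation
        return self.head(features)            # logits over the two classes

# Usage with a dummy encoder (the real one would come from the language model):
dummy_encoder = nn.Sequential(nn.Linear(400, 400), nn.ReLU())
model = QuoteClassifier(dummy_encoder, hidden_dim=400)
logits = model(torch.randn(8, 400))           # -> shape (8, 2)
```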
