Building Sentence Embeddings via Quick-Thoughts

Like skip-gram at the word level, quick-thoughts (Logeswaran et al., 2018) leverages a classifier to learn vectors.

Figure: (a) skip-thoughts, (b) quick-thoughts (Logeswaran et al., 2018)

Given a target sentence, quick-thoughts uses a negative sampling approach (Mikolov et al., 2013) to construct both a valid context sentence and non-context sentences for a binary classifier. By labelling the valid context sentence as the target (e.g. 1) and the non-context sentences as non-target (e.g. 0), it builds a classifier that figures out which sentences are related to the target sentence. The architecture is similar to skip-thoughts, which follows an encoder-decoder approach, except that quick-thoughts replaces the decoder (a language model) with a classifier. You may check out my skip-thoughts story (linked in the references) to understand that architecture. A minimal sketch of the quick-thoughts training objective follows.
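Below is a minimal sketch of that objective in PyTorch (the released implementation is in TensorFlow). The GRU encoders, the dot-product scorer, and the sizes VOCAB_SIZE, EMB_DIM, and HID_DIM are illustrative assumptions; the labels follow the framing above, 1 for the true context sentence and 0 for sampled non-context sentences.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, EMB_DIM, HID_DIM = 20000, 300, 600   # illustrative sizes

class SentenceEncoder(nn.Module):
    """Encode a padded batch of token ids into fixed-size sentence vectors."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM, padding_idx=0)
        self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        _, h = self.rnn(self.embed(tokens))     # h: (1, batch, HID_DIM)
        return h.squeeze(0)                     # (batch, HID_DIM)

# One encoder reads the input sentence, a second reads candidate contexts.
enc_f, enc_g = SentenceEncoder(), SentenceEncoder()

def quick_thoughts_loss(target, context, negatives):
    """Binary classification over dot-product scores: the true context
    sentence is labeled 1, sampled non-context sentences are labeled 0.

    target:    (batch, seq_len)     input sentences
    context:   (batch, seq_len)     true neighbouring sentences
    negatives: (batch, k, seq_len)  k sampled non-context sentences each
    """
    s = enc_f(target)                                    # (batch, d)
    pos = (s * enc_g(context)).sum(-1, keepdim=True)     # (batch, 1)
    b, k, seq_len = negatives.shape
    neg_vecs = enc_g(negatives.reshape(b * k, seq_len)).view(b, k, -1)
    neg = (s.unsqueeze(1) * neg_vecs).sum(-1)            # (batch, k)
    logits = torch.cat([pos, neg], dim=1)                # (batch, 1 + k)
    labels = torch.cat([torch.ones(b, 1), torch.zeros(b, k)], dim=1)
    return F.binary_cross_entropy_with_logits(logits, labels)
```

For what it's worth, the paper itself scores all other sentences in a minibatch with a softmax rather than explicit binary labels; the negative-sampling form above mirrors the description in this post.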
Experiments

Figure: Unsupervised representation learning comparison (Logeswaran et al., 2018)

From the results above, you can notice that QT (quick-thoughts) achieves quite good results across the different tasks. uni-QT and bi-QT denote the uni-directional and bi-directional RNN models, respectively.

Figure: Supervised representation learning comparison (Logeswaran et al., 2018)

In the supervised representation learning comparison, MC-QT (multi-channel quick-thoughts) wins on almost all tasks. MC-QT is defined as the concatenation of two bi-directional RNNs. The first bi-directional RNN uses pre-trained word vectors, namely GloVe. The other bi-directional RNN uses tunable word vectors, meaning they are trained from scratch. A minimal sketch of this two-channel encoder is shown below.
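To make the multi-channel idea concrete, here is a minimal PyTorch sketch, not the authors' implementation. The GRU cells, the hidden size, the frozen GloVe channel, and the glove_weights tensor (an assumed (vocab_size, 300) matrix of pre-trained vectors) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """One channel: a bi-directional GRU over a given embedding table."""
    def __init__(self, embedding, hid_dim=600):
        super().__init__()
        self.embed = embedding
        self.rnn = nn.GRU(embedding.embedding_dim, hid_dim,
                          batch_first=True, bidirectional=True)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        _, h = self.rnn(self.embed(tokens))         # h: (2, batch, hid_dim)
        return torch.cat([h[0], h[1]], dim=-1)      # (batch, 2 * hid_dim)

class MCQTEncoder(nn.Module):
    """MC-QT-style encoder: concatenation of two bi-directional RNN channels."""
    def __init__(self, glove_weights, vocab_size, emb_dim=300):
        super().__init__()
        # Channel 1: pre-trained GloVe vectors, kept frozen (an assumption).
        self.glove_channel = BiGRUEncoder(
            nn.Embedding.from_pretrained(glove_weights, freeze=True))
        # Channel 2: tunable word vectors, trained from scratch.
        self.tuned_channel = BiGRUEncoder(nn.Embedding(vocab_size, emb_dim))

    def forward(self, tokens):
        # The sentence embedding is the concatenation of both channels.
        return torch.cat([self.glove_channel(tokens),
                          self.tuned_channel(tokens)], dim=-1)

# Usage: vectors = MCQTEncoder(glove_weights, vocab_size=20000)(token_ids)
```

Each channel summarizes a sentence by concatenating the final hidden states of its forward and backward GRU; the MC-QT vector is then simply the concatenation of the two channels.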
Figure: Image caption retrieval (Logeswaran et al., 2018)

In the image-to-text and text-to-image downstream tasks, MC-QT also achieves good results.

Take Away

- Like skip-thoughts, quick-thoughts constructs sentence embeddings.
- Like skip-gram, quick-thoughts leverages a classifier to learn embeddings.
- MC-QT demonstrates the capability to learn sentence embeddings that transfer to multiple NLP downstream tasks.

About Me

I am a Data Scientist in the Bay Area, focusing on the state of the art in Data Science and Artificial Intelligence, especially NLP and platform-related topics. You can reach me via my Medium blog, LinkedIn, or GitHub.

Reference

- Logeswaran L., Lee H. (2018). An Efficient Framework for Learning Sentence Representations.
- Mikolov T., Sutskever I., Chen K., Corrado G., Dean J. (2013). Distributed Representations of Words and Phrases and their Compositionality.
- Quick-Thoughts in TensorFlow (original implementation)
- Skip-thoughts story
- Word embeddings story