NLP word2vec papers


Paper 1

[word2vec] Efficient Estimation of Word Representations in Vector Space

part1 Introduction
part2 Model Architectures
part3 New Log-linear Models
part4 Results
part5 Examples of the Learned Relationships (illustrated by the analogy sketch after this outline)
part6 Conclusion
part7 Follow-Up Work
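
As a concrete illustration of the learned relationships covered in part 5, the sketch below runs the analogy arithmetic vec("king") - vec("man") + vec("woman") and picks the nearest remaining word by cosine similarity. The 2-d vectors are hand-made for illustration only (one "gender" axis, one "royalty" axis); they are not vectors learned by word2vec.

```python
import numpy as np

# Hand-made 2-d "embeddings": axis 0 ~ gender, axis 1 ~ royalty (illustration only).
vectors = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by the nearest cosine neighbour of vec(b) - vec(a) + vec(c)."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):              # exclude the query words themselves
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman"))     # -> queen
```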

Paper 2

[word2vec] Distributed Representations of Words and Phrases and their Compositionality

part1 Introduction
part2 The Skip-gram Model (see the training sketch after this outline)
Hierarchical Softmax
Negative Sampling
Subsampling of Frequent Words
part3 Empirical Results
part4 Learning Phrases
Phrase Skip-Gram Results
part5 Additive Compositionality
part6 Comparison to Published Word Representations
part7 Conclusion
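
The core of part 2, skip-gram trained with negative sampling (SGNS), fits in a short sketch. The toy corpus, embedding size, uniform noise distribution, and fixed window below are simplifications chosen purely for illustration; the paper's implementation additionally uses subsampling of frequent words, a unigram^(3/4) noise distribution, and a dynamic window size.

```python
import numpy as np

rng = np.random.default_rng(0)

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 16                 # vocabulary size, embedding dimension
W_in = rng.normal(0, 0.1, (V, D))     # center-word ("input") vectors
W_out = rng.normal(0, 0.1, (V, D))    # context-word ("output") vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k = 0.05, 2, 5            # learning rate, context window, negatives per pair
for epoch in range(200):
    for pos, center in enumerate(corpus):
        c = w2i[center]
        lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            o = w2i[corpus[ctx_pos]]
            # k noise words drawn uniformly here (the paper draws from a unigram^0.75
            # distribution and avoids drawing the true context word).
            negs = rng.integers(0, V, size=k)
            v_c = W_in[c]
            # Gradients of -log sigma(u_o . v_c) - sum_n log sigma(-u_n . v_c)
            grad_pos = sigmoid(W_out[o] @ v_c) - 1.0   # push the positive pair toward 1
            grad_negs = sigmoid(W_out[negs] @ v_c)     # push the negative pairs toward 0
            dv_c = grad_pos * W_out[o] + grad_negs @ W_out[negs]
            W_out[o] -= lr * grad_pos * v_c
            np.subtract.at(W_out, negs, lr * np.outer(grad_negs, v_c))  # handles repeated negatives
            W_in[c] -= lr * dv_c
```

Each (center, context) pair thus needs only k + 1 sigmoid evaluations instead of a softmax over the whole vocabulary; hierarchical softmax, the alternative named in the outline, instead replaces the output layer with a binary-tree walk over the vocabulary and is not sketched here.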