Forked from svn2github/word2vec.
Original from word2vec, Mikolov et al. (2013b)
Tools for computing distributed representations of words
---------------------------------------------------------

We provide an implementation of the Continuous Bag-of-Words (CBOW) and the Skip-gram (SG) models, as well as several demo scripts.

Given a text corpus, the word2vec tool learns a vector for every word in the vocabulary using the Continuous Bag-of-Words or the Skip-gram neural network architecture. The user should specify the following:

- desired vector dimensionality
- the size of the context window for either the Skip-gram or the Continuous Bag-of-Words model
- training algorithm: hierarchical softmax and / or negative sampling
- threshold for downsampling the frequent words
- number of threads to use
- the format of the output word vector file (text or binary)

Usually, the other hyper-parameters, such as the learning rate, do not need to be tuned for different training sets. An example training command covering these options is shown at the end of this README.

The script demo-word.sh downloads a small (100MB) text corpus from the web and trains a small word vector model. After the training is finished, the user can interactively explore the similarity of the words.

More information about the scripts is provided at https://code.google.com/p/word2vec/

### Note:

- The word '</s>' in the text is treated as a stop word.

## Modification:

- Word counts are used later for sorting the vocabulary: sort by count; on a tie, the longer word comes first; on a further tie, compare the words themselves (a comparator sketch follows this list).
- If a stop word occurs, the adjacent words are not considered part of a phrase.
- Phrases are shown at the end of the output.
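The modified vocabulary sort can be expressed as a qsort comparator. The following is a minimal sketch assuming a vocab_word-style struct with a count field `cn` and a string field `word`; the names here are illustrative and may not match this fork's actual code:

```c
#include <string.h>

/* Illustrative stand-in for word2vec's vocab_word struct:
   cn is the occurrence count, word is the NUL-terminated string. */
struct vocab_word_sketch {
  long long cn;
  char *word;
};

/* qsort comparator for the modified ordering: higher count first;
   on equal counts, the longer word first; on equal lengths, compare
   the words with strcmp. */
static int VocabCompareSketch(const void *a, const void *b) {
  const struct vocab_word_sketch *wa = a;
  const struct vocab_word_sketch *wb = b;
  if (wa->cn != wb->cn) return (wb->cn > wa->cn) ? 1 : -1; /* by count, descending */
  size_t la = strlen(wa->word), lb = strlen(wb->word);
  if (la != lb) return (lb > la) ? 1 : -1;                 /* longer word first */
  return strcmp(wa->word, wb->word);                       /* lexicographic tie-break */
}
```

Such a comparator would be passed to qsort over the vocabulary array, e.g. `qsort(vocab, vocab_size, sizeof(struct vocab_word_sketch), VocabCompareSketch)`.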
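As a usage illustration, here is a minimal sketch of a training command wiring up the options listed above. The flag names follow the original word2vec tool; the corpus file `text8` and the output name `vectors.bin` are placeholders:

```shell
# Sketch of a typical training run (file names are placeholders):
#   -size           vector dimensionality
#   -window         context window size
#   -hs/-negative   hierarchical softmax and / or negative sampling
#   -sample         downsampling threshold for frequent words
#   -threads        number of training threads
#   -binary         output format (1 = binary, 0 = text)
./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 \
  -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 1 -iter 15
```

After training, `./distance vectors.bin` can be used to interactively query the nearest words, as demo-word.sh does.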