
Evaluate 3 different topic modeling algorithms #55

Closed
hajarzankadi opened this issue Feb 24, 2022 · 5 comments

@hajarzankadi commented Feb 24, 2022

  • OCTIS version:
  • Python version: 3.7
  • Operating System: Linux

Description

I am a PhD candidate and I need to evaluate the performance of three different topic modeling algorithms: LDA, LSI and BERTopic (LDA and LSI were trained using the Gensim package).
What relevance metrics should I use apart from the coherence score? I would like to include in my paper a table or graph that shows an evaluation in terms of the model's accuracy (coherence score) and the relevance of its topics (should I use the topic diversity metric?).
Thank you

@silviatti (Collaborator)

Hello,
it depends on what your objective is. Any evaluation metric focuses on a specific aspect of a topic model. OCTIS includes different categories of evaluation metrics (a short usage sketch follows this list):

  • topic coherence metrics (evaluating if the top-words of the topics make sense together)
  • topic significance metrics that consider the document-topic and word-topic distributions to discover high-quality and junk topics. You can find the reference paper here.
  • classification metrics (F1, accuracy, etc.), which use the document-topic distributions as features to train a classifier. These metrics require the documents to be labelled.
  • diversity metrics, which consider the top-words or the word-topic distribution and compute the distance between a topic and the others.
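
For instance, a minimal sketch of scoring a trained model on coherence and diversity might look like the following (assuming model_output is the dictionary returned by an OCTIS model's train_model and dataset is an OCTIS Dataset; the exact keyword arguments may vary slightly between OCTIS versions):

from octis.evaluation_metrics.coherence_metrics import Coherence
from octis.evaluation_metrics.diversity_metrics import TopicDiversity

# NPMI coherence of the top-10 words of each topic, computed over the corpus texts
npmi = Coherence(texts=dataset.get_corpus(), topk=10, measure='c_npmi')
coherence_score = npmi.score(model_output)

# fraction of unique words among the top-10 words across all topics
diversity = TopicDiversity(topk=10)
diversity_score = diversity.score(model_output)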

I am not sure whether BERTopic generates the document-topic and word-topic distributions (if it does not, you will not be able to compute the topic significance metrics). Maybe you'd like to consider Contextualized Topic Models (CTM), a topic model that uses pre-trained contextualized representations (as BERTopic does). CTM is part of OCTIS too.
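
If you go that route, a minimal sketch of training CTM in OCTIS could look like this (the dataset name and number of topics are just placeholders; check the CTM constructor in your OCTIS version for the full set of hyperparameters):

from octis.dataset.dataset import Dataset
from octis.models.CTM import CTM

# fetch one of the preprocessed datasets shipped with OCTIS
dataset = Dataset()
dataset.fetch_dataset("20NewsGroup")

# CTM combines bag-of-words input with pre-trained contextualized embeddings
model = CTM(num_topics=20)
model_output = model.train_model(dataset)  # dict with "topics", "topic-word-matrix", ...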

Let me know if you have further questions,

Silvia

@hajarzankadi (Author)

Hello Silvia,
Thank you for your feedback.
I trained an LDA model using Gensim, and I would like to evaluate it using the topic significance, topic coherence and topic diversity metrics.
For LDA, I generated the word-topic distribution using the following code:

# get the raw topic-word estimates (lambda matrix, shape: n_topics x vocab_size)
topics_terms = lda_model.state.get_lambda()

# convert the estimates to probabilities (each topic's row sums to 1)
topics_terms_proba = topics_terms / topics_terms.sum(axis=1)[:, None]

# reorder the vocabulary columns (fnames_argsort is an index array defined elsewhere)
topic_term_dist = topics_terms_proba[:, fnames_argsort]
topic_term_dist

the "topic_term_dist" contains the normalized distribution of word-topic:

[[1.1844748e-05 4.0855210e-02 1.1844748e-05 ... 1.1844748e-05
  1.1844748e-05 1.1844748e-05]
 [8.0169802e-06 8.0169802e-06 8.0169802e-06 ... 8.0169802e-06
  8.0169802e-06 8.0169802e-06]
 [7.5956509e-06 7.5956509e-06 7.5956509e-06 ... 7.5956509e-06
  7.5956509e-06 7.5956509e-06]
 ...
 [1.2837388e-05 1.2837388e-05 1.2837388e-05 ... 1.2837388e-05
  1.2837388e-05 1.2837388e-05]
 [8.9911064e-06 8.9911064e-06 8.9911064e-06 ... 8.9911064e-06
  8.9911064e-06 8.9911064e-06]
 [1.6502319e-05 1.6502319e-05 1.6502319e-05 ... 1.6502319e-05
  1.6502319e-05 1.6502319e-05]]
When I pass it to the diversity metric, I get the following error:

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

How can I resolve this error? Do you have any code you could point me to?
Thank you

@silviatti (Collaborator)

Which diversity metric are you using? Can you also show the snippet of code in which you call the metric?
In general, a metric in OCTIS expects to receive as input the output of a Model(). Any topic model in OCTIS returns a dictionary with up to 4 fields; depending on the metric, the right field is used to compute the score (see here for the details on model_output). So if you want to use a metric that computes diversity from the word-topic distribution, you would construct your model_output like this:

model_output = {"topic-word-matrix": topic_term_dist}

And then use it to compute the score of a metric. For example,

div = KLDivergence()
result = div.score(model_output)
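
For completeness, a runnable version of that snippet might look like this (assuming KLDivergence is exposed under octis.evaluation_metrics.diversity_metrics; the exact module path may differ across OCTIS versions):

from octis.evaluation_metrics.diversity_metrics import KLDivergence

# topic_term_dist: numpy array of shape (n_topics, vocab_size), each row summing to 1
model_output = {"topic-word-matrix": topic_term_dist}

div = KLDivergence()
print(div.score(model_output))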

Let me know if it works.

Silvia

@hajarzankadi (Author)

Yes, it works perfectly.
The link you recommended about the keys of model_output is also very useful.
I still have another question: if I want to apply classification metrics (precision and recall), you mentioned that my documents should be labeled. I have already mapped each document to its dominant topic; is that considered a labeled document? Correct me if I am wrong.
For the key *test-topic-document-matrix* in the model_output: is it the document-topic distribution on unseen documents?
Thank you again for your help.

@silviatti (Collaborator)

Hello, sorry for the late reply.

Mapping each document to a topic is indeed a strategy for labeling documents. In OCTIS we provide some already labeled corpora that you may want to have a look at, for example 20 Newsgroups and BBC News.
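
As a minimal sketch, fetching one of those labeled corpora could look like this ("20NewsGroup" is used here as an example name; check the OCTIS repository for the exact dataset identifiers):

from octis.dataset.dataset import Dataset

dataset = Dataset()
dataset.fetch_dataset("20NewsGroup")  # preprocessed, labeled corpus shipped with OCTIS

docs = dataset.get_corpus()    # preprocessed documents
labels = dataset.get_labels()  # one label per document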

And yes, test-topic-document-matrix represents the document-topic distribution for documents that are unseen, i.e. the documents of the testing dataset.
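
With a labeled dataset, a classification metric can then be computed from the document-topic matrices of both splits. A hedged sketch, assuming F1Score accepts the OCTIS dataset and an averaging mode in its constructor (check the exact signature in your installed version):

from octis.evaluation_metrics.classification_metrics import F1Score

# model_output must contain both "topic-document-matrix" and "test-topic-document-matrix"
f1 = F1Score(dataset=dataset, average='micro')
print(f1.score(model_output))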
