Tags: inflaton/ragas_extended
fix: answer_correctness embedding (explodinggradients#513)
fix: handle edge cases in prompt processing (explodinggradients#374)
Make test generator output compatible with evaluate (explodinggradients#302)
* Changed context from `str` to `List[str]` so that it is consistent with eval. The output of `TestDataset` can now be used for evaluation.
* Fixed a typo in `_generate_doc_nodes_map`.
* Changed the `TestDataset` class to reflect the changes in test set generation. Drawback: `episode_done` will be `True` in all cases, since the data is changed at the level above.
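To illustrate the shape this change produces, here is a minimal sketch of a dataset row in the format `evaluate` expects; the example values are invented for illustration and are not from the PR:

```python
from datasets import Dataset

# Each row's "contexts" is now a List[str] (one string per retrieved
# passage) rather than a single str, matching what evaluate() expects.
ds = Dataset.from_dict(
    {
        "question": ["What does ragas evaluate?"],
        "contexts": [["ragas scores RAG pipelines.",
                      "It offers reference-free metrics."]],  # List[str] per row
        "answer": ["It evaluates RAG pipelines."],
    }
)
```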
testset generation: bug fixes (explodinggradients#185)
Fixes:
- [x] issues with multi-context question generation
- [x] error in doc filtering
ZeroDivisionError in context_relevance (explodinggradients#154)
Changed `python3.9/site-packages/ragas/metrics/context_relevance.py`, line 162, in `_score_batch`.
From:
```python
score = min(len(indices) / len(context_sents), 1)
```
To:
```python
if len(context_sents) == 0:
    score = 0
else:
    score = min(len(indices) / len(context_sents), 1)
```
fixes: explodinggradients#153
Co-authored-by: devtribble <[email protected]>
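As a standalone illustration of the guard, here is a minimal sketch; the function name and signature are hypothetical, not the actual ragas internals:

```python
def guarded_score(indices: list, context_sents: list) -> float:
    # Return 0 for an empty context instead of dividing by zero
    # (the failure mode reported in explodinggradients#153).
    if len(context_sents) == 0:
        return 0.0
    return min(len(indices) / len(context_sents), 1.0)

assert guarded_score([], []) == 0.0          # previously raised ZeroDivisionError
assert guarded_score([0, 1], [0, 1]) == 1.0  # score is capped at 1
```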
Fix remap_column_names (explodinggradients#140)
When I try to do the following, I get an error:
```python
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import Faithfulness

ds = Dataset.from_dict(
    {
        "question": ["question"],
        "answer": ["answer"],
        "contexts": [["context"]],
    }
)

evaluate(dataset=ds, metrics=[Faithfulness(batch_size=1)])
```
```
KeyError: "Column ground_truths not in the dataset. Current columns in the dataset: ['question', 'answer', 'contexts']"
```
But `ground_truths` is not needed for `Faithfulness`. This PR fixes that.
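The idea behind the fix can be sketched as follows; the table and function below are hypothetical illustrations of validating only the columns a chosen metric needs, not the actual ragas implementation:

```python
# Hypothetical per-metric requirements; the real library derives these
# from each metric object rather than from a hard-coded table.
REQUIRED_COLUMNS = {
    "faithfulness": ("question", "answer", "contexts"),
    "context_relevance": ("question", "contexts"),
}

def check_columns(dataset_columns, metric_name):
    """Raise KeyError only for columns the selected metric actually needs."""
    for col in REQUIRED_COLUMNS[metric_name]:
        if col not in dataset_columns:
            raise KeyError(
                f"Column {col} not in the dataset. "
                f"Current columns in the dataset: {list(dataset_columns)}"
            )

# Faithfulness no longer trips over the missing ground_truths column:
check_columns(["question", "answer", "contexts"], "faithfulness")
```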