Right now the evaluation module uses ragas `evaluate` and always outputs the same four metrics: answer_relevancy, faithfulness, context_recall, and context_precision.
Answer relevancy and faithfulness are interesting metrics; however, they are better suited to scenarios where no ground-truth answer is available.
I think we need a simple score that compares the ground-truth answer against the generated answer as an alternative to those two. Since we do have the ground truth, it might be more powerful, and it should be quite a bit faster. That can also easily be done with ragas; we just need to add the option.
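As a rough sketch, ragas ships ground-truth-based metrics (answer_correctness and answer_similarity) that could back this option; exact metric names and the expected column layout vary a bit across ragas versions, so treat this as illustrative:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, answer_similarity

# Minimal dataset with a ground-truth column (recent ragas versions
# expect question/answer/contexts/ground_truth).
dataset = Dataset.from_dict({
    "question": ["What does the evaluation module output?"],
    "answer": ["Four ragas metrics by default."],
    "contexts": [["The evaluation module always runs four ragas metrics."]],
    "ground_truth": ["It always outputs four ragas metrics."],
})

# Score the generated answer directly against the ground truth,
# instead of running answer_relevancy/faithfulness.
result = evaluate(dataset, metrics=[answer_correctness, answer_similarity])
print(result)
```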
Finally, right now you are forced to rerun answer_evals even if you only want to benchmark the search itself, which I think we should also update.
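To illustrate the second point, here is a hypothetical wrapper (the `run_evaluation` name and `include_answer_evals` flag are made up for this sketch) that would let callers benchmark retrieval on its own:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)

def run_evaluation(dataset: Dataset, include_answer_evals: bool = True):
    # Retrieval-quality metrics are always computed.
    metrics = [context_precision, context_recall]
    if include_answer_evals:
        # Answer-level metrics, only when we also care about generation.
        metrics += [answer_relevancy, faithfulness]
    return evaluate(dataset, metrics=metrics)

# Benchmark only the search step:
# scores = run_evaluation(dataset, include_answer_evals=False)
```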