
More flexible evaluation options #36

@emilradix

Description


Right now the evaluation module uses ragas evaluate and always outputs the same four metrics: answer relevance, faithfulness, context_recall and context_precision.

Answer relevance and faithfulness are interesting metrics, but they work better in scenarios where no ground truth answer is available.

I think we need a simple score that compares the ground truth answer with the generated answer, as an alternative to those two. Since we have the ground truth it might be more powerful, and it should be quite a bit faster. Ragas can already do this; we just need to add the option.
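For reference, a minimal sketch of what that option could look like, assuming ragas ~0.1.x (the exact metric names such as answer_correctness / answer_similarity, and the ground_truth column name, differ between versions):

```python
# Sketch only: compare generated answers against the ground truth with ragas.
# Assumes ragas ~0.1.x; metric and column names may differ in other versions.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, answer_similarity

eval_data = Dataset.from_dict({
    "question": ["What metrics does the evaluation module report?"],
    "answer": ["It reports four ragas metrics by default."],
    "contexts": [["The evaluation module always runs ragas evaluate with four metrics."]],
    "ground_truth": ["Answer relevance, faithfulness, context_recall and context_precision."],
})

# Ground-truth-based scoring instead of answer_relevancy / faithfulness.
result = evaluate(eval_data, metrics=[answer_correctness, answer_similarity])
print(result)
```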

Finally, right now you are forced to rerun answer_evals even if you only want to benchmark the search itself; I think we should update that as well.
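A similarly hedged sketch of what a retrieval-only run could look like: context_precision and context_recall judge the retrieved contexts against the question and ground truth, so no generated answer (and no answer_evals rerun) should be needed. The helper name below is hypothetical:

```python
# Sketch only: benchmark retrieval on its own, skipping answer generation.
# Assumes ragas ~0.1.x; these metrics score retrieved contexts against the
# question / ground truth and should not require a generated answer column.
from ragas import evaluate
from ragas.metrics import context_precision, context_recall

def run_search_evals(eval_data):
    """Hypothetical helper: score only the search/retrieval step."""
    return evaluate(eval_data, metrics=[context_precision, context_recall])
```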

Metadata


Labels: enhancement (New feature or request)
