Leaderboard

This page tracks benchmark results for HybridRAG-Bench.

Summary

  • Metric columns can be adapted to match your final evaluation protocol.

  • Higher is better unless a column explicitly states lower-is-better.

  • Update tables by editing: docs/source/_static/leaderboard_arxiv_ai.csv, docs/source/_static/leaderboard_arxiv_cy.csv, docs/source/_static/leaderboard_arxiv_bio.csv.

Arxiv_AI

Rank  Date        Model          Method     Acc    Notes
1     2026-02-12  Llama-3.3-70B  HybridRAG  0.000  placeholder
2     2026-02-12  Llama-3.3-70B  KG-RAG     0.000  placeholder
3     2026-02-12  Llama-3.3-70B  RAG        0.000  placeholder
4     2026-02-12  Llama-3.3-70B  IO         0.000  placeholder

Arxiv_CY

Rank  Date        Model          Method     Acc    Notes
1     2026-02-12  Llama-3.3-70B  HybridRAG  0.000  placeholder
2     2026-02-12  Llama-3.3-70B  KG-RAG     0.000  placeholder
3     2026-02-12  Llama-3.3-70B  RAG        0.000  placeholder
4     2026-02-12  Llama-3.3-70B  IO         0.000  placeholder

Arxiv_BIO

Rank  Date        Model          Method     Acc    Notes
1     2026-02-12  Llama-3.3-70B  HybridRAG  0.000  placeholder
2     2026-02-12  Llama-3.3-70B  KG-RAG     0.000  placeholder
3     2026-02-12  Llama-3.3-70B  RAG        0.000  placeholder
4     2026-02-12  Llama-3.3-70B  IO         0.000  placeholder

Submission Format

Use this schema when adding new results:

  • Date: YYYY-MM-DD

  • Model: Model name and size

  • Method: IO, RAG, KG-RAG, HybridRAG, etc.

  • Acc: Primary accuracy metric

  • Notes: Optional details (e.g., retriever, hop count, or config)
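As a convenience, new rows can be appended to a leaderboard CSV programmatically rather than by hand. The sketch below is one way to do that with Python's standard csv module, assuming the CSV columns follow the submission schema above (Date, Model, Method, Acc, Notes); the path and function name are illustrative, not part of the benchmark, and Rank is assumed to be recomputed when the page is rebuilt.

```python
import csv
from pathlib import Path

# Hypothetical path; point this at the leaderboard CSV you are updating,
# e.g. docs/source/_static/leaderboard_arxiv_ai.csv
CSV_PATH = Path("docs/source/_static/leaderboard_arxiv_ai.csv")

def add_result(path, date, model, method, acc, notes=""):
    """Append one result row following the submission schema:
    Date (YYYY-MM-DD), Model, Method, Acc, Notes."""
    with Path(path).open("a", newline="") as f:
        writer = csv.writer(f)
        # Acc is formatted to three decimals to match the existing tables.
        writer.writerow([date, model, method, f"{acc:.3f}", notes])
```

For example, `add_result(CSV_PATH, "2026-02-12", "Llama-3.3-70B", "RAG", 0.0, "placeholder")` appends one row in the schema's column order.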