---

language:
- en
license: mit
task_categories:
- question-answering
- text-retrieval
- other
pretty_name: BioKGBench
size_categories:
- 10K<n<100K
annotations_creators:
- expert-generated
- machine-generated
task_ids:
- fact-checking
- closed-domain-qa
- fact-checking-retrieval
dataset_info:
  features:
    - name: kgcheck
      dtype: string
    - name: kgqa
      dtype: string
    - name: scv
      dtype: string
    - name: bioKG
      dtype: string
configs:
  - config_name: kgcheck
    data_files:
    - split: dev
      path: kgcheck/dev.json
    - split: test
      path: kgcheck/test.json
  - config_name: kgqa
    data_files:
    - split: dev
      path: kgqa/dev.json
    - split: test
      path: kgqa/test.json
  - config_name: scv-corpus
    data_files:
      - split: corpus
        path: scv/merged_corpus.jsonl
  - config_name: scv
    data_files:
      - split: dev
        path: scv/dev.jsonl
      - split: test
        path: scv/test.jsonl
  - config_name: biokg
    data_files:
    - split: datasets
      path: bioKG/datasets/*.tsv
    - split: ontologies
      path: bioKG/ontologies/*.tsv
tags:
  - agent
  - medical
arxiv: 2407.00466
---


# Agent4S-BioKG
A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science.
<p align="left">
<a href="https://github.com/westlake-autolab/Agent4S-BioKG/blob/main/LICENSE" alt="license">
    <img src="https://img.shields.io/badge/license-MIT-blue" /></a>

    

<a href="https://github.com/westlake-autolab/Agent4S-BioKG" alt="license">

    <img src="/assets/img/github-mark.png" /> Github </a>

</p>


## Introduction
Artificial intelligence for biomedical science, a.k.a. the AI Scientist, is attracting increasing attention; a common approach is to build a copilot agent driven by Large Language Models (LLMs).
However, such systems are currently evaluated either by direct Question Answering (QA) against the LLM itself or through biomedical experiments, and how to precisely benchmark biomedical agents from an AI Scientist perspective remains largely unexplored. To this end, we draw inspiration from one of the most important abilities of scientists, understanding the literature, and introduce `BioKGBench`.
In contrast to traditional evaluation benchmarks that focus only on factual QA, where LLMs are known to suffer from hallucination, we first disentangle **Understanding Literature** into two atomic abilities: i) **Understanding** the unstructured text of research papers by performing scientific claim verification, and ii) the ability to interact with structured knowledge through Knowledge-Graph Question Answering (KGQA) as a form of **Literature** grounding. We then formulate a novel agent task, dubbed KGCheck, which uses KGQA and domain-based Retrieval-Augmented Generation (RAG) to identify factual errors in existing large-scale knowledge graph databases. We collect over two thousand instances for the two atomic tasks and 225 high-quality annotated instances for the agent task. Surprisingly, we find that state-of-the-art agents, both general-purpose and biomedical, either fail or perform poorly on our benchmark. We then introduce a simple yet effective baseline, dubbed `BKGAgent`. On a widely used popular dataset, we discover over 90 factual errors, which demonstrates the effectiveness of our approach and yields substantial value for both the research community and practitioners in the biomedical domain.

## Overview
<details open>
<summary>Dataset (download from <a href="https://huggingface.co/datasets/AutoLab-Westlake/BioKGBench-Dataset">Hugging Face</a>)</summary>

* **bioKG**: The knowledge graph used in the dataset.
* **KGCheck**: Given a knowledge graph and a scientific claim, the agent needs to check whether the claim is supported by the knowledge graph. The agent can interact with the knowledge graph by asking questions and receiving answers.
  * **Dev**: 20 samples
  * **Test**: 205 samples
  * **Corpus**: 51 samples
* **KGQA**: Given a knowledge graph and a question, the agent needs to answer the question based on the knowledge graph.
  * **Dev**: 60 samples
  * **Test**: 638 samples
* **SCV**: Given a scientific claim and a research paper, the agent needs to check whether the claim is supported by the research paper.
  * **Dev**: 120 samples
  * **Test**: 1265 samples
  * **Corpus**: 5664 samples

</details>
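Each config declared in the metadata above can be loaded with the standard `datasets` library, and the bioKG knowledge graph ships as plain TSV tables that any TSV reader can open. Below is a minimal sketch; the config and split names come from this card's metadata, while the TSV filename is purely illustrative (substitute a real file from `bioKG/datasets/`):

```python
from datasets import load_dataset
import pandas as pd

# Load one of the task configs; "kgcheck" can be swapped for "kgqa",
# "scv", or "scv-corpus" (whose single split is named "corpus").
kgcheck = load_dataset("AutoLab-Westlake/BioKGBench-Dataset", "kgcheck")
print(kgcheck["dev"][0])  # one annotated KGCheck sample

# The bioKG knowledge graph is distributed as TSV files; read one table
# directly (illustrative filename -- pick an actual file from the repo).
edges = pd.read_csv("bioKG/datasets/proteins.tsv", sep="\t")
print(edges.head())
```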

## Citation

## Contact
To request new features, get help, or report bugs in `BioKGBench`, please open a [GitHub issue](https://github.com/westlake-autolab/Agent4S-BioKG/issues) or [pull request](https://github.com/westlake-autolab/Agent4S-BioKG/pulls) with the tag `new features`, `help wanted`, or `enhancement`. Feel free to contact us by email if you have any questions.

- Xinna Lin([email protected]), Westlake University
- Siqi Ma([email protected]), Westlake University
- Junjie Shan([email protected]), Westlake University
- Xiaojing Zhang([email protected]), Westlake University