ucllovelab committed on
Commit 8541b2e · verified · 1 Parent(s): 1e8078c

Update README.md

Files changed (1): README.md (+5 −1)
README.md CHANGED
@@ -6,4 +6,8 @@ colorTo: pink
 sdk: static
 pinned: false
 ---
-Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts. To evaluate this possibility, we created BrainBench, a forward-looking benchmark for predicting neuroscience results. We find that LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs were confident in their predictions, they were more likely to be correct, which presages a future where humans and LLMs team together to make discoveries. Our approach is not neuroscience-specific and is transferable to other knowledge-intensive endeavors.
+The scientific literature is exponentially increasing in size. One challenge for scientists is keeping abreast of developments. One solution is a human-machine teaming approach in which scientists interact with a vast knowledge base of the neuroscience literature, referred to as BrainGPT. BrainGPT is trained to capture data patterns in the neuroscience literature, taking advantage of recent machine learning advances in large language models.
+
+BrainGPT functions as a generative model of the scientific literature, allowing researchers to propose study designs as prompts for which BrainGPT would generate likely data patterns reflecting its current synthesis of the scientific literature. Modellers can use BrainGPT to assess their models against the field's general understanding of a domain (e.g., instant meta-analysis). BrainGPT could help identify anomalous findings, whether because they point to a breakthrough or contain an error.
+
+Importantly, BrainGPT does not summarize papers nor retrieve articles. In such cases, large language models often confabulate, which is potentially harmful. Instead, BrainGPT stitches together existing knowledge too vast for human comprehension to assist humans in expanding scientific frontiers.
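The "instant meta-analysis" workflow the updated README describes — posing a study design and asking which of two candidate results the model finds more plausible — can be sketched as a perplexity comparison under a causal language model. The sketch below is an assumption about how such a comparison would be wired up, not BrainGPT's actual API; the model checkpoint name and the `score_with_lm` helper are hypothetical, and the published BrainBench evaluation may differ in detail.

```python
# Hedged sketch: compare two candidate experimental outcomes for the same
# study design by scoring each full text under a causal LM and preferring
# the lower-perplexity (more model-likely) version. The checkpoint name in
# score_with_lm's docstring is an assumption, not a confirmed identifier.

import math


def perplexity_from_nll(nll_sum, n_tokens):
    """Convert a summed token negative log-likelihood to perplexity."""
    return math.exp(nll_sum / n_tokens)


def likelier_outcome(scored):
    """scored: {outcome_text: perplexity}; return the lower-perplexity one."""
    return min(scored, key=scored.get)


def score_with_lm(model, tokenizer, text):
    """Summed negative log-likelihood of `text` under a Hugging Face causal
    LM (e.g., a hypothetical BrainGPT checkpoint). Requires torch and
    transformers; kept out of the pure helpers above so they run standalone.
    """
    import torch

    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token NLL
    return loss.item() * ids.shape[1], ids.shape[1]


if __name__ == "__main__":
    # With a real model one would score both candidate abstracts; here we
    # illustrate the decision step with made-up perplexities.
    scored = {"outcome A": 14.2, "outcome B": 9.7}
    print(likelier_outcome(scored))
```

On BrainBench, this kind of likelihood comparison (rather than free-form generation) is what lets confidence be read off directly: the larger the perplexity gap between the two candidates, the more confident the model's choice.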