Commit d89b95a · Parent: ac1681f
Update README

README.md CHANGED
@@ -44,7 +44,7 @@ Scientific progress often depends on connecting ideas across papers, fields, and
 
 In 2024, Luo et al. published a landmark study in *Nature Human Behaviour* showing that **large language models (LLMs) can outperform human experts** in predicting the results of neuroscience experiments by integrating knowledge across the scientific literature. Their model, **BrainGPT**, demonstrated how tuning a general-purpose LLM (like Mistral-7B) on domain-specific data could synthesize insights that surpass human forecasting ability. Notably, the authors found that models as small as 7B parameters performed well – an insight that shaped the foundation of this project.
 
-Inspired by this work – and a YouTube breakdown by physicist and science communicator Sabine Hossenfelder – this project began as an attempt to explore similar methods of knowledge integration at the level of paper-pair relationships.
+Inspired by this work – and a YouTube breakdown by physicist and science communicator **Sabine Hossenfelder**, titled *["AIs Predict Research Results Without Doing Research"](https://www.youtube.com/watch?v=Qgrl3JSWWDE)* – this project began as an attempt to explore similar methods of knowledge integration at the level of paper-pair relationships. Her clear explanation and commentary sparked the idea to apply this paradigm not just to forecasting outcomes, but to identifying latent connections between published studies.
 
 Originally conceived as a perplexity-ranking experiment using LLMs directly (mirroring Luo et al.'s evaluation method), the project gradually evolved into what it is now – **Inkling**, a reasoning-aware embedding model fine-tuned on LLM-rated abstract pairings, built to help researchers uncover links that would be obvious – *if only someone had the time to read everything*.
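The perplexity-ranking idea mentioned above works by scoring candidate texts with a language model and preferring the one the model finds least surprising. A minimal sketch of that ranking step, with made-up per-token log-probabilities standing in for real model scores (the function names and numbers here are illustrative, not from Luo et al.'s code):

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp of the mean negative log-probability per token.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def rank_by_perplexity(candidates):
    # Lower perplexity = the model finds that text more plausible.
    return sorted(candidates, key=lambda c: perplexity(c["logprobs"]))

# Hypothetical log-probs an LLM might assign to two versions of an abstract:
# the original result vs. an altered (incorrect) result.
original = {"label": "original result", "logprobs": [-0.5, -0.8, -0.3, -0.6]}
altered  = {"label": "altered result",  "logprobs": [-1.9, -2.4, -1.1, -2.0]}

ranking = rank_by_perplexity([altered, original])
print([c["label"] for c in ranking])  # → ['original result', 'altered result']
```

In a real setup the log-probabilities would come from scoring each full abstract with an LLM; the ranking logic itself stays this simple.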