Ihor Stepanov

AI & ML interests

Text classification, computational biology, relation extraction, path reasoning

Recent Activity

upvoted a collection about 13 hours ago
GLiNER-Biomed
updated a collection about 15 hours ago
GLiNER-Biomed

Organizations

Knowledgator Engineering, Blog-explorers, GLiNER Community, eyva.ai

Posts 8

🚀 Reproducing DeepSeek R1 for Text-to-Graph Extraction

I've been working on replicating DeepSeek R1, focusing on zero-shot text-to-graph extraction, a challenging task where language models must extract entities and relations from text based on predefined types.
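To make the task concrete, here is a hypothetical illustration of the text-to-graph format (the example sentence, type names, and JSON schema are my own assumptions, not the project's exact schema): the model receives text plus fixed entity/relation type sets and must emit a graph using only those types.

```python
# Hypothetical text-to-graph example; schema and type names are
# illustrative assumptions, not the project's actual format.
import json

text = "Aspirin inhibits COX-1, reducing inflammation."
entity_types = ["Drug", "Protein", "Condition"]
relation_types = ["inhibits", "reduces"]

# One valid target graph for the sentence above:
graph = {
    "entities": [
        {"text": "Aspirin", "type": "Drug"},
        {"text": "COX-1", "type": "Protein"},
        {"text": "inflammation", "type": "Condition"},
    ],
    "relations": [
        {"head": "Aspirin", "tail": "COX-1", "type": "inhibits"},
        {"head": "Aspirin", "tail": "inflammation", "type": "reduces"},
    ],
}

# The zero-shot constraint: every predicted type must come from
# the predefined sets, which is exactly what models struggle with.
assert all(e["type"] in entity_types for e in graph["entities"])
assert all(r["type"] in relation_types for r in graph["relations"])
print(json.dumps(graph, indent=2))
```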

🧠 Key Insight:
Language models struggle when constrained by entity/relation types. Supervised training alone isn't enough, but reinforcement learning (RL), specifically Group Relative Policy Optimization (GRPO), shows promise.
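The core of GRPO can be sketched in a few lines: sample a group of completions per prompt, score each with reward functions, and normalize rewards within the group so no separate value network is needed. This is a minimal sketch of that normalization step only, not the project's training loop.

```python
# Minimal sketch of GRPO's group-relative advantage computation.
# Rewards are scored per completion; advantages are the rewards
# standardized within the sampled group.
def group_advantages(rewards: list[float]) -> list[float]:
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero for uniform groups
    return [(r - mean) / std for r in rewards]

# A group of 4 sampled completions with different reward scores:
print(group_advantages([1.0, 0.5, 0.5, 0.0]))
```

The standardized advantages then weight the policy-gradient update for each completion's tokens, pushing the model toward above-average completions in the group.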

💡 Why GRPO?
It trains the model to generate structured graphs, optimizing multiple reward functions (format, JSON validity, and extraction accuracy).
It allows the model to learn from both positive and hard negative examples dynamically.
RL can be fine-tuned to emphasize relation extraction improvements.
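The three reward signals mentioned above (format, JSON validity, extraction accuracy) might look roughly like this; the function names, tag convention, and scoring details are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketches of the three reward functions; tag names,
# key names, and partial-credit values are assumptions.
import json
import re

def format_reward(completion: str) -> float:
    """1.0 if the completion wraps its answer in the expected tags."""
    return 1.0 if re.search(r"<answer>.*</answer>", completion, re.S) else 0.0

def json_reward(completion: str) -> float:
    """Reward a parseable answer; full credit needs the expected keys."""
    m = re.search(r"<answer>(.*)</answer>", completion, re.S)
    if not m:
        return 0.0
    try:
        graph = json.loads(m.group(1))
    except json.JSONDecodeError:
        return 0.0
    return 1.0 if {"entities", "relations"} <= graph.keys() else 0.5

def f1_reward(predicted: set, gold: set) -> float:
    """Extraction accuracy as F1 over predicted vs. gold tuples."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

In practice these rewards would be combined (e.g. as a weighted sum) into the scalar score GRPO optimizes per sampled completion.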

📊 Early Results:
Even with limited training, F1 scores consistently improved, and we saw clear benefits from RL-based optimization. More training = better performance!

🔬 Next Steps:
We're scaling up experiments with larger models and high-quality data. Stay tuned for updates! Meanwhile, check out one of our experimental models here:
Ihor/Text2Graph-R1-Qwen2.5-0.5b

📔 Learn more details from the blog post: https://medium.com/p/d8b648d9f419

Feel free to share your thoughts and ask questions!

Articles 1


Replicating DeepSeek R1 for Information Extraction