Ontocord.AI committed on
Commit 7a9f14a · 1 Parent(s): bfb78a0

Update README.md

Files changed (1): README.md (+3 −0)
README.md CHANGED
@@ -32,4 +32,7 @@ If you are interested in contributing to this project, please reach out to us an
 
 Let's work together to create open-source models that benefit everyone! 🤝 #AI #MDEL #Supercomputers #Summit #OpenSource #Innovation #VolunteersNeeded #OpenScience #DemocratizeAI
 
+## Requirements for joining this HF repo: By joining this HF repo, you agree that you will not disclose any data we are gathering or ideas we present in our community channels until after a paper has been written.
+This protects the intellectual freedom of researchers and their right to publish and benefit from their work.
+
 ** Why did we change the term "Layer" to "Learning"? Because we are exploring, in addition to layerwise experts, also working with different adapters and architectures like Flamingo (https://arxiv.org/abs/2204.14198), EMU (https://arxiv.org/abs/2307.05222), and a novel multi-node architecture for training LoRAs we call lora-x, which will allow us to swap out different component experts to improve the performance of the model.