schuler committed on
Commit 3f1915e · verified · Parent: 9110504

Update README.md

Files changed (1): README.md (+6, -1)
README.md CHANGED
@@ -8,7 +8,12 @@ datasets:
 This repository contains experiment results for the [Saving 77% of the Parameters in Large Language Models Technical Report (PDF)](https://www.researchgate.net/publication/388835829_SAVING_77_OF_THE_PARAMETERS_IN_LARGE_LANGUAGE_MODELS_TECHNICAL_REPORT).
 
 ## Abstract
-This technical report demonstrates that large language models (LLMs) can maintain their learning capacity while reducing their non-embedding parameters by up to 77%. We achieve this by adapting a parameter reduction technique originally developed for computer vision, replacing dense layers with an optimized subnetwork that contains grouped pointwise convolutions. Using Microsoft's phi-3-mini-4k-instruct as our baseline, we show that our optimized model (kphi-3) achieves comparable validation loss while using only 15-23% of the original non-embedding parameters. All experiments were conducted on a single NVIDIA L4 GPU within a 3-day timeframe, supporting the democratization of AI research. Our findings suggest that current LLM architectures may be substantially overparameterized, opening possibilities for more efficient model training and deployment.
+This technical report demonstrates that large language models (LLMs) can maintain their learning capacity while reducing their non-embedding parameters by up to 77%.
+We achieve this by adapting a parameter reduction technique originally developed for computer vision, replacing dense layers with an optimized subnetwork that
+contains grouped pointwise convolutions. Using a 2-layer phi-3-mini-4k-instruct codebase from Microsoft as our baseline, we show that our optimized model (kphi-3)
+achieves comparable validation loss while using only 15-23% of the original non-embedding parameters. Each experiment was conducted on a single NVIDIA L4 GPU within
+a 3-day timeframe, supporting the democratization of AI research. Our findings suggest that current LLM architectures may be substantially overparameterized, opening
+possibilities for more efficient model training and deployment.
 
 ## Key Findings
 - Achieved 77% parameter reduction while maintaining model performance.
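The technique named in the abstract, replacing dense layers with grouped pointwise convolutions, can be illustrated with a small parameter-count comparison. The sketch below is not taken from the kphi-3 code; the layer widths (`d_model`, `d_ff`) and the group count are assumptions chosen only to show how grouping divides the weight count, not the report's actual configuration.

```python
# Minimal sketch (assumed sizes, not the kphi-3 configuration): compare a dense
# layer with a grouped pointwise (kernel size 1) convolution of the same width.
import torch
import torch.nn as nn

d_model, d_ff, groups = 3072, 8192, 8  # hypothetical dimensions and group count

dense = nn.Linear(d_model, d_ff)                                   # d_model * d_ff + d_ff parameters
grouped = nn.Conv1d(d_model, d_ff, kernel_size=1, groups=groups)   # (d_model / groups) * d_ff + d_ff parameters

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

print(f"dense:   {n_params(dense):,}")
print(f"grouped: {n_params(grouped):,}")
print(f"ratio:   {n_params(grouped) / n_params(dense):.2%}")

# A pointwise convolution treats the sequence dimension as "spatial", so inputs
# are shaped (batch, d_model, seq_len) rather than (batch, seq_len, d_model).
x = torch.randn(2, d_model, 16)
y = grouped(x)  # shape: (2, d_ff, 16)
```

With `groups = 8`, each output channel connects to only one eighth of the input channels, so the weight count shrinks by roughly that factor; the report's actual subnetwork layout and group sizes are described in the linked PDF.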