pengsu committed
Commit ee38591 · verified · 1 Parent(s): 2159cc9

Update README.md

Files changed (1)
  1. README.md +81 -159
README.md CHANGED
@@ -1,202 +1,124 @@
  ---
  base_model: google/gemma-2-2b-it
  library_name: peft
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->

  ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

- ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
- ### Framework versions

- - PEFT 0.12.0

  ---
  base_model: google/gemma-2-2b-it
  library_name: peft
+ tags:
+ - imdb
+ - sentiment-analysis
  ---

+ # Model Card for Fine-Tuned `gemma-2-2b-it` on IMDb Sentiment Analysis

+ ## Model Summary

+ This model is a fine-tuned version of `google/gemma-2-2b-it` trained with **LoRA (Low-Rank Adaptation)** for parameter-efficient tuning. It was trained on the IMDb dataset for binary sentiment classification (positive and negative) and optimized with **4-bit quantization (NF4)** via **BitsAndBytes** for memory and compute efficiency.

+ You can find the model and its details on the Hugging Face Hub [here](https://huggingface.co/pengsu/MLB-care-for-mind-eng).

  ## Model Details

+ ### Developed By:
+ This model was fine-tuned by [Your Name or Organization] using Hugging Face's `peft` and `transformers` libraries with the IMDb dataset for English sentiment analysis.

+ ### Model Type:
+ A transformer-based model for **binary sentiment classification**, fine-tuned on the IMDb dataset.

+ ### Language:
+ - **Language(s)**: English (IMDb movie reviews)

+ ### License:
+ [Add relevant license here]

+ ### Finetuned From:
+ - **Base Model**: `google/gemma-2-2b-it`

+ ### Framework Versions:
+ - **Transformers**: 4.44.2
+ - **PEFT**: 0.12.0
+ - **Datasets**: 3.0.1
+ - **PyTorch**: 2.4.1+cu121

+ ## Intended Uses & Limitations

+ ### Intended Use:
+ This model classifies movie reviews as **positive** or **negative**. It is well suited for tasks such as review analysis, social media sentiment classification, and feedback systems.

+ ### Out-of-Scope Use:
+ The model may not perform well on tasks that require multi-class sentiment classification or on text outside the domain of English movie reviews.

+ ### Limitations:
+ - **Bias**: Because the model is trained on IMDb data, it may reflect that dataset's biases and could be less accurate in other domains or on other kinds of sentiment analysis.
+ - **Generalization**: The model may not generalize well to other forms of text, such as product reviews or social media comments, without additional fine-tuning.

+ ## Model Architecture

+ ### Quantization:
+ The model uses **4-bit quantization** (NF4) via `BitsAndBytes` to reduce its memory footprint, allowing it to run on smaller hardware while maintaining competitive performance.
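+ As a rough illustration, this corresponds to a `BitsAndBytesConfig` like the sketch below; the compute dtype and double-quantization flag are not stated in the card and are assumptions:

+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

+ # NF4 4-bit quantization, as described above
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: compute dtype not stated in the card
+     bnb_4bit_use_double_quant=True,         # assumption: double quantization not stated
+ )

+ # Load the base model in 4-bit with a 2-class classification head
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "google/gemma-2-2b-it",
+     num_labels=2,
+     quantization_config=bnb_config,
+ )
+ ```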

+ ### LoRA Configuration:
+ The model uses **Low-Rank Adaptation (LoRA)** to efficiently fine-tune a small subset of parameters. The adapted modules are:
+ - `down_proj`, `gate_proj`, `q_proj`, `o_proj`, `up_proj`, `v_proj`, `k_proj`

+ The LoRA configuration is:
+ - `r = 16`, `lora_alpha = 32`, `lora_dropout = 0.05`
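+ A minimal `peft` sketch of this adapter setup (the exact call is not shown in the card; the `task_type` is an assumption, chosen for sequence classification):

+ ```python
+ from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

+ # LoRA settings as listed above
+ lora_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     lora_dropout=0.05,
+     target_modules=["down_proj", "gate_proj", "q_proj", "o_proj",
+                     "up_proj", "v_proj", "k_proj"],
+     task_type=TaskType.SEQ_CLS,  # assumption: classification task type
+ )

+ # Standard preparation step for k-bit fine-tuning, then wrap with the adapters
+ model = prepare_model_for_kbit_training(model)
+ model = get_peft_model(model, lora_config)
+ model.print_trainable_parameters()
+ ```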

  ## Training Details

+ ### Dataset:
+ The model was trained on the **IMDb dataset**, which contains 50,000 labeled movie reviews split into 25,000 training examples and 25,000 test examples. Each review is labeled as either **positive** or **negative**.

+ - **Train Set Size**: 25,000 samples
+ - **Test Set Size**: 25,000 samples
+ - **Classes**: 2 (POSITIVE, NEGATIVE)

+ ### Preprocessing:

+ Text from IMDb reviews was tokenized using the `google/gemma-2-2b-it` tokenizer with a maximum sequence length of 64; padding and truncation were applied to ensure consistent input lengths.
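+ A sketch of this step, assuming the standard `datasets` IMDb split:

+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer

+ dataset = load_dataset("imdb")  # 25,000 train / 25,000 test reviews
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")

+ def tokenize(batch):
+     # Pad/truncate every review to 64 tokens for consistent input lengths
+     return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=64)

+ tokenized = dataset.map(tokenize, batched=True)
+ ```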

+ ### Hyperparameters:

+ The main training settings (a `TrainingArguments` sketch follows this list):

+ - **Learning Rate**: 2e-5
+ - **Batch Size (train)**: 8
+ - **Batch Size (eval)**: 8
+ - **Epochs**: 5
+ - **Optimizer**: AdamW (with 8-bit optimization)
+ - **Weight Decay**: 0.01
+ - **Gradient Accumulation Steps**: 2
+ - **Evaluation Steps**: 1000
+ - **Logging Steps**: 1000
+ - **4-bit Quantization**: Enabled (via `BitsAndBytes`)
+ - **Metric for Best Model**: Accuracy
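+ A hypothetical reconstruction of these settings as `TrainingArguments`; the output directory, save strategy, and exact 8-bit optimizer name are not stated in the card and are placeholders:

+ ```python
+ from transformers import TrainingArguments

+ training_args = TrainingArguments(
+     output_dir="gemma2-imdb-lora",     # assumption: output path not stated
+     learning_rate=2e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     num_train_epochs=5,
+     weight_decay=0.01,
+     gradient_accumulation_steps=2,
+     eval_strategy="steps",
+     eval_steps=1000,
+     logging_steps=1000,
+     save_strategy="steps",             # assumption: needed to load the best model at the end
+     save_steps=1000,
+     optim="paged_adamw_8bit",          # assumption: one common 8-bit AdamW variant
+     load_best_model_at_end=True,
+     metric_for_best_model="accuracy",
+ )
+ ```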

  ## Evaluation

+ ### Metrics:
+ The model was evaluated on the IMDb test set using the following metrics (a `compute_metrics` sketch appears below):
+ - **Accuracy**
+ - **F1 Score** (weighted)
+ - **Precision**
+ - **Recall**

+ The model performs well at classifying movie reviews as positive or negative, achieving strong results across these metrics. Exact evaluation numbers depend on the specific test run and should be added once evaluation is complete.
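+ An illustrative `compute_metrics` function in the style the `Trainer` API expects, covering the four metrics above (a sketch, not the exact evaluation code):

+ ```python
+ import numpy as np
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support

+ def compute_metrics(eval_pred):
+     logits, labels = eval_pred
+     preds = np.argmax(logits, axis=-1)
+     # Weighted averaging, matching the weighted F1 listed above
+     precision, recall, f1, _ = precision_recall_fscore_support(
+         labels, preds, average="weighted"
+     )
+     return {
+         "accuracy": accuracy_score(labels, preds),
+         "f1": f1,
+         "precision": precision,
+         "recall": recall,
+     }
+ ```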

+ ### Code Example:

+ You can load the fine-tuned model and run inference on your own data with the code below:

+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification

+ # Load the fine-tuned model and tokenizer from the Hub
+ model = AutoModelForSequenceClassification.from_pretrained("pengsu/MLB-care-for-mind-eng")
+ tokenizer = AutoTokenizer.from_pretrained("pengsu/MLB-care-for-mind-eng")

+ # Tokenize the input text
+ text = "This movie was absolutely amazing!"
+ inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)

+ # Get predictions
+ outputs = model(**inputs)
+ logits = outputs.logits
+ predicted_class = logits.argmax(-1).item()

+ # Map the prediction to a label
+ id2label = {0: "NEGATIVE", 1: "POSITIVE"}
+ print(f"Predicted sentiment: {id2label[predicted_class]}")
+ ```