Dam Thanh
buithanhdam02
4 followers · 10 following
https://www.kaggle.com/damthanh
buithanhdam
dam-thanh-99a84b236
AI & ML interests
None yet
Recent Activity
reacted to ImranzamanML's post with 👍, 8 days ago:
Here is how we can calculate the size of any LLM model.

Each parameter in an LLM is typically stored as a floating-point number, and its size in bytes depends on the precision:
- 32-bit precision (FP32): each parameter takes 4 bytes.
- 16-bit precision (FP16): each parameter takes 2 bytes.

To calculate the total memory usage of the model:

Memory usage (in bytes) = No. of parameters × size of each parameter

For example, for a model with 1 billion parameters:

FP32: 1,000,000,000 × 4 bytes = 4,000,000,000 bytes ≈ 3.73 GB
FP16: 1,000,000,000 × 2 bytes = 2,000,000,000 bytes ≈ 1.86 GB
(converting with 1 GB = 1024³ bytes)

So, depending on whether you use 32-bit or 16-bit precision, a model with 1 billion parameters would use approximately 3.73 GB or 1.86 GB of memory, respectively.
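A minimal Python sketch of this formula (the function name, precision table, and example parameter count are illustrative, not taken from any particular library or model):

```python
# Sketch of: memory (bytes) = number of parameters x bytes per parameter.
BYTES_PER_PARAM = {
    "fp32": 4,  # 32-bit float: 4 bytes per parameter
    "fp16": 2,  # 16-bit float: 2 bytes per parameter
}

def model_memory_gb(num_params: int, precision: str = "fp32") -> float:
    """Approximate weight memory in GB, using 1 GB = 1024**3 bytes."""
    total_bytes = num_params * BYTES_PER_PARAM[precision]
    return total_bytes / 1024**3

one_billion = 1_000_000_000
print(f"FP32: {model_memory_gb(one_billion, 'fp32'):.2f} GB")  # ~3.73 GB
print(f"FP16: {model_memory_gb(one_billion, 'fp16'):.2f} GB")  # ~1.86 GB
```

Note that this counts only the weights; at inference or training time, activations, the KV cache, and optimizer state add to the total.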
updated a model 3 months ago: buithanhdam02/LSTM_Resnet50_Attention_HAR
liked a model 3 months ago: answerdotai/ModernBERT-base
Organizations
None yet
buithanhdam02's activity
liked 2 models 3 months ago:
answerdotai/ModernBERT-base • Fill-Mask • Updated Jan 15 • 3.44M downloads • 791 likes
Xkev/Llama-3.2V-11B-cot • Image-Text-to-Text • Updated Dec 16, 2024 • 5.18k downloads • 147 likes
liked a model 5 months ago:
5CD-AI/Vintern-3B-beta • Image-Text-to-Text • Updated Dec 6, 2024 • 731 downloads • 34 likes