wi-lab committed
Commit b12b73c · verified · 1 Parent(s): d79f32e

Update README.md

Files changed (1): README.md (+5 -7)
README.md CHANGED

@@ -50,7 +50,7 @@ For example, the following figure demonstrates the advantages of using **LWM-v1.
 
 ---
 
-## **🧩 Puzzle Pieces that Redefine LWM-v1.0**
+# **🧩 Puzzle Pieces that Redefine LWM-v1.0**
 
 #### **1️⃣ Breaking Barriers**
 🔓 No Channel Size Limitation
@@ -79,9 +79,7 @@ For example, the following figure demonstrates the advantages of using **LWM-v1.
 | Parameters | 600K | **2.5M** |
 | Sequence Length Support | 128 | **512** |
 
----
-
-## **Detailed Explanation of Changes in LWM-v1.1**
+# **Detailed Changes in LWM-v1.1**
 
 ### **No Channel Size Limitation**
 In **LWM-v1.0**, the model was pre-trained on a single (N, SC) = (32, 32) pair, which limited its generalization to other channel configurations. Wireless communication systems in the real world exhibit vast variability in the number of antennas (N) at base stations and subcarriers (SC). To address this limitation, **LWM-v1.1** was pre-trained on **20 distinct (N, SC) pairs**, ranging from smaller setups like (8, 32) to more complex setups like (128, 64). This variety enables the model to effectively handle diverse channel configurations and ensures robust generalization without overfitting to specific configurations.
@@ -342,7 +340,7 @@ The corresponding scripts for these processes can be found in the **`downstream.
 
 ---
 
-## **1. INFERENCE & DOWNSTREAM TASKS**
+# **1. INFERENCE & DOWNSTREAM TASKS**
 
 ### **Loading Required Packages and Modules**
 
@@ -640,7 +638,7 @@ chs = lwm_inference(
 
 ---
 
-## **2. PRE-TRAINING LWM-v1.1**
+# **2. PRE-TRAINING LWM-v1.1**
 
 This section details the process of pre-training the **LWM-v1.1** model, including data preparation, model initialization, and optimization settings. Each step has been carefully designed to enable the model to learn robust and general-purpose embeddings for wireless channel data.
 
@@ -866,7 +864,7 @@ pretrained_model = train_lwm(
 
 ---
 
-### **Explore the Interactive Demo**
+# **Explore the Interactive Demo**
 
 Experience **LWM** interactively via our Hugging Face Spaces demo:
 [**Try the Interactive Demo!**](https://huggingface.co/spaces/wi-lab/lwm-interactive-demo)
 