Update app.py
app.py CHANGED
@@ -368,7 +368,7 @@ def process_hdf5_file(uploaded_file, percentage):
         sys.stdout = capture  # Redirect print statements to capture
 
         try:
-            model_repo_url = "https://huggingface.co/
+            model_repo_url = "https://huggingface.co/wi-lab/lwm"
             model_repo_dir = "./LWM"
 
             # Step 1: Clone the repository if not already done
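The context line "Step 1: Clone the repository if not already done" clones model_repo_url into model_repo_dir. The app's actual cloning code falls outside this hunk, so the following is only a minimal sketch of how such a conditional clone could be done:

import os
import subprocess

model_repo_url = "https://huggingface.co/wi-lab/lwm"
model_repo_dir = "./LWM"

# Clone the model repository only if the local directory is not already present.
if not os.path.isdir(model_repo_dir):
    subprocess.run(["git", "clone", model_repo_url, model_repo_dir], check=True)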
@@ -491,7 +491,7 @@ with gr.Blocks(css="""
     gr.Markdown("""
     <div class="bold-highlight">
     🔍 Explore the pre-trained **LWM Model** here:
-    <a target="_blank" href="https://huggingface.co/
+    <a target="_blank" href="https://huggingface.co/wi-lab/lwm">https://huggingface.co/wi-lab/lwm</a>
     </div>
     """)
 
@@ -501,20 +501,20 @@ with gr.Blocks(css="""
 
     # Explanation section with creative spacing and minimal design
     gr.Markdown("""
-
-
-
-
-
-
-
-
-
+    <div style="background-color: #f0f0f0; padding: 15px; border-radius: 10px; color: #333;">
+    <h3 style="color: #0056b3;">📡 <b>Beam Prediction Task</b></h3>
+    <ul style="padding-left: 20px;">
+        <li><b>🎯 Goal</b>: Predict the strongest <b>mmWave beam</b> from a predefined codebook using Sub-6 GHz channels.</li>
+        <li><b>⚙️ Adjust Settings</b>: Use the sliders to control the training data percentage and task complexity (beam count) to explore model performance.</li>
+        <li><b>🧠 Inferences</b>:
+            <ul>
+                <li>🔍 First, the LWM model extracts features.</li>
+                <li>🤖 Then, the downstream residual 1D-CNN model (500K parameters) makes beam predictions.</li>
+            </ul>
+        </li>
+        <li><b>🗺️ Dataset</b>: A combination of six scenarios from the DeepMIMO dataset (excluded from LWM pre-training) highlights the model's strong generalization abilities.</li>
     </ul>
-    </
-    <li>🗺️ **Dataset**: A combination of six scenarios from the DeepMIMO dataset (excluded from LWM pre-training) highlights the model's strong generalization abilities.</li>
-    </ul>
-    </div>
+    </div>
     """)
     #gr.Markdown("""
     #<div class="explanation-box">
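The "Inferences" bullets added above describe a two-stage pipeline: the LWM model extracts features, then a downstream residual 1D-CNN predicts the beam. The sketch below only illustrates that shape of pipeline; ResidualBlock1D, BeamClassifier, and lwm_encoder are hypothetical names, and the layer sizes are illustrative rather than the app's actual 500K-parameter model.

import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    # One residual unit: two 1D convolutions with a skip connection.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class BeamClassifier(nn.Module):
    # Small residual 1D-CNN head mapping extracted features to beam logits.
    def __init__(self, feature_dim, num_beams):
        super().__init__()
        self.stem = nn.Conv1d(1, 64, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock1D(64), ResidualBlock1D(64))
        self.head = nn.Linear(64 * feature_dim, num_beams)

    def forward(self, features):            # features: (batch, feature_dim)
        x = self.stem(features.unsqueeze(1))
        x = self.blocks(x)
        return self.head(x.flatten(1))      # logits over the beam codebook

# Step 1: extract features (lwm_encoder is a placeholder for the pre-trained LWM model).
# features = lwm_encoder(sub6_channels)
# Step 2: predict the strongest beam with the downstream head.
# beam_logits = BeamClassifier(feature_dim=features.shape[1], num_beams=64)(features)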
@@ -541,21 +541,21 @@ with gr.Blocks(css="""
 
     # Explanation section with creative spacing
     gr.Markdown("""
-
-
-
-
-
-
-
-
-
+    <div style="background-color: #f0f0f0; padding: 15px; border-radius: 10px; color: #333;">
+    <h3 style="color: #0056b3;">📊 <b>LoS/NLoS Classification Task</b></h3>
+    <ul style="padding-left: 20px;">
+        <li><b>🎯 Goal</b>: Classify whether a channel is <b>LoS</b> (Line-of-Sight) or <b>NLoS</b> (Non-Line-of-Sight).</li>
+        <li><b>📁 Dataset</b>: Use the default dataset (a combination of six scenarios from the DeepMIMO dataset) or upload your own dataset in <b>h5py</b> format.</li>
+        <li><b>💡 Custom Dataset Requirements:</b>
+            <ul>
+                <li>🛠️ <b>channels</b> array: Shape (N,32,32)</li>
+                <li>🏷️ <b>labels</b> array: Binary LoS/NLoS values (1/0)</li>
+            </ul>
+        </li>
+        <li><b>📌 Tip</b>: You can find guidance on how to structure your dataset in the provided model repository.</li>
+        <li><b>💼 No Downstream Model</b>: Instead of a complex downstream model, we classify each sample based on its distance to the centroid of training samples from each class (LoS/NLoS).</li>
     </ul>
-    </
-    <li>📌 **Tip**: You can find guidance on how to structure your dataset in the provided model repository.</li>
-    <li>💼 **No Downstream Model**: Instead of a complex downstream model, we classify each sample based on its distance to the centroid of training samples from each class (LoS/NLoS).</li>
-    </ul>
-    </div>
+    </div>
     """)
     #gr.Markdown("""
    #<div class="explanation-box">
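The "Custom Dataset Requirements" bullet above expects an h5py file containing a "channels" array of shape (N, 32, 32) and a binary "labels" array. A minimal sketch of writing such a file follows; the file name and the random placeholder arrays are illustrative, not part of the app.

import h5py
import numpy as np

# Placeholder data: N channel matrices of shape (32, 32) and binary LoS/NLoS labels.
N = 100
channels = np.random.randn(N, 32, 32).astype(np.float32)   # replace with real channel samples
labels = np.random.randint(0, 2, size=N).astype(np.int64)   # 1 = LoS, 0 = NLoS

# Write the two datasets the app expects: "channels" with shape (N, 32, 32) and "labels" with 0/1 values.
with h5py.File("custom_dataset.h5", "w") as f:
    f.create_dataset("channels", data=channels)
    f.create_dataset("labels", data=labels)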
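The "No Downstream Model" bullet classifies each sample by its distance to the per-class centroid of training samples. A generic nearest-centroid sketch over extracted features, not the exact code in app.py:

import numpy as np

def nearest_centroid_predict(train_features, train_labels, test_features):
    # Compute one centroid per class (0 = NLoS, 1 = LoS) from the training features.
    centroids = np.stack([train_features[train_labels == c].mean(axis=0) for c in (0, 1)])
    # Assign each test sample to the class whose centroid is closest in Euclidean distance.
    distances = np.linalg.norm(test_features[:, None, :] - centroids[None, :, :], axis=-1)
    return distances.argmin(axis=1)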