# Getting Started with the WavePulse Radio Transcripts Dataset

This tutorial will help you get started with the WavePulse Radio Transcripts dataset from Hugging Face.

## Prerequisites

Before starting, make sure you have the required packages installed:

```bash
pip install datasets
pip install huggingface-hub
```

## Basic Setup

First, let's set up our environment with some helpful configurations:

```python
from datasets import load_dataset
import huggingface_hub

# Increase the download timeout (in seconds) for large files
huggingface_hub.constants.HF_HUB_DOWNLOAD_TIMEOUT = 60

# Set up a cache directory (optional)
cache_dir = "wavepulse_dataset"
```
## Loading Strategies

### 1. Loading a Specific State (Recommended for Beginners)
Instead of loading the entire dataset, start with one state:

```python
# Load data for just New York
ny_dataset = load_dataset(
    "nyu-dice-lab/wavepulse-radio-raw-transcripts",
    "NY",
    cache_dir=cache_dir,
)
```

### 2. Streaming Mode (Memory Efficient)
If you're working with limited RAM:

```python
# Stream the dataset instead of downloading it all at once
stream_dataset = load_dataset(
    "nyu-dice-lab/wavepulse-radio-raw-transcripts",
    streaming=True,
    cache_dir=cache_dir,
)

# Access data in a streaming fashion
for example in stream_dataset["train"].take(5):
    print(example["text"])
```

### 3. Loading a Small Sample
For testing or exploration:

```python
# Load just the first 1,000 examples
sample_dataset = load_dataset(
    "nyu-dice-lab/wavepulse-radio-raw-transcripts",
    split="train[:1000]",
    cache_dir=cache_dir,
)
```
## Common Tasks

The examples below assume you have loaded the data into `dataset`, e.g. `dataset = load_dataset("nyu-dice-lab/wavepulse-radio-raw-transcripts", cache_dir=cache_dir)`.

### 1. Filtering by Date Range

```python
# Filter for August 2024 (datetime values are ISO-format strings)
filtered_ds = dataset.filter(
    lambda x: "2024-08-01" <= x['datetime'] <= "2024-08-31"
)
```

### 2. Finding Specific Stations

```python
# Get unique stations
stations = set(dataset["train"]["station"])

# Filter for a specific station
station_ds = dataset.filter(lambda x: x['station'] == 'KENI')
```

### 3. Analyzing Transcripts

```python
# Get all segments from a specific transcript
transcript_ds = dataset["train"].filter(
    lambda x: x['transcript_id'] == 'AK_KAGV_2024_08_25_13_00'
)

# Sort segments by their index to maintain order
# (iterating a Dataset yields one dict per segment)
sorted_segments = sorted(transcript_ds, key=lambda x: x['segment_index'])
```
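Once the segments are sorted, they can be stitched back into a single transcript string. A minimal sketch, using a toy segment list; the `segment_index` and `text` field names follow the snippets above:

```python
def reconstruct_transcript(segments):
    """Join segment texts in segment_index order into one string."""
    ordered = sorted(segments, key=lambda s: s["segment_index"])
    return " ".join(s["text"].strip() for s in ordered)

# Toy segments, deliberately out of order
toy_segments = [
    {"segment_index": 1, "text": "to our morning show."},
    {"segment_index": 0, "text": "Welcome back"},
]
print(reconstruct_transcript(toy_segments))  # Welcome back to our morning show.
```

In practice you would pass `sorted_segments` (or the filtered `transcript_ds` directly, since the function sorts internally) instead of the toy list.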
## Best Practices

1. **Memory Management**:
   - Start with a single state or a small sample
   - Use streaming mode for large-scale processing
   - Clear the local cache when needed, e.g. `dataset.cleanup_cache_files()`

2. **Disk Space**:
   - Ensure at least 75-80 GB of free space for the full dataset
   - Use state-specific loading to reduce space requirements
   - Clean up the cache regularly

3. **Error Handling**:
   - Always include timeout configurations
   - Implement retry logic for large downloads
   - Handle connection errors gracefully
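The retry advice above can be sketched as a small wrapper with exponential backoff. This is one possible pattern, not part of the `datasets` API; `load_dataset` is just one callable you might wrap:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying on failure with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: propagate the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage (assumes the setup from "Basic Setup" above):
# dataset = with_retries(lambda: load_dataset(
#     "nyu-dice-lab/wavepulse-radio-raw-transcripts", "NY", cache_dir=cache_dir))
```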
## Example Use Cases

### 1. Basic Content Analysis

```python
# Count segments per station
from collections import Counter

station_counts = Counter(dataset["train"]["station"])
print("Most common stations:", station_counts.most_common(5))
```

### 2. Time-based Analysis

```python
# Get the distribution of segments across hours of the day
import datetime

hour_distribution = Counter(
    datetime.datetime.fromisoformat(dt).hour
    for dt in dataset["train"]["datetime"]
)
```

### 3. Speaker Analysis

```python
# Analyze speaker patterns in a transcript
def analyze_speakers(transcript_id):
    segments = dataset["train"].filter(
        lambda x: x['transcript_id'] == transcript_id
    )
    speakers = [seg['speaker'] for seg in segments]
    return Counter(speakers)
```
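A follow-on sketch in the same spirit: counting how often the speaker changes between consecutive segments of a transcript. The `speaker` and `segment_index` field names follow the snippet above, and the toy list stands in for real filtered segments:

```python
def count_speaker_turns(segments):
    """Return how many times the speaker changes across ordered segments."""
    ordered = sorted(segments, key=lambda s: s["segment_index"])
    speakers = [s["speaker"] for s in ordered]
    return sum(1 for a, b in zip(speakers, speakers[1:]) if a != b)

# Toy example: three segments, one change of speaker
toy = [
    {"segment_index": 0, "speaker": "SPEAKER_00"},
    {"segment_index": 1, "speaker": "SPEAKER_01"},
    {"segment_index": 2, "speaker": "SPEAKER_01"},
]
print(count_speaker_turns(toy))  # 1
```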
## Common Issues and Solutions

1. **Timeout Errors**:

   ```python
   # Increase the timeout duration (in seconds)
   huggingface_hub.constants.HF_HUB_DOWNLOAD_TIMEOUT = 120
   ```

2. **Memory Errors**:

   ```python
   # Process the data in batches instead of all at once
   # (process_batch stands in for your own batch-processing function)
   for batch in dataset["train"].iter(batch_size=1000):
       process_batch(batch)
   ```

3. **Disk Space Issues**:

   ```python
   # Check available space before downloading
   import shutil

   total, used, free = shutil.disk_usage("/")
   print(f"Free disk space: {free // (2**30)} GB")
   ```
174
+ ## Need Help?
175
+
176
+ - Dataset documentation: https://huggingface.co/datasets/nyu-dice-lab/wavepulse-radio-raw-transcripts
177
+ - Project website: https://wave-pulse.io
178
+ - Report issues: https://github.com/nyu-dice-lab/wavepulse/issues
179
+
180
+ Remember to cite the dataset in your work:
181
+
182
+ ```bibtex
183
+ @article{mittal2024wavepulse,
184
+ title={WavePulse: Real-time Content Analytics of Radio Livestreams},
185
+ author={Mittal, Govind and Gupta, Sarthak and Wagle, Shruti and Chopra, Chirag
186
+ and DeMattee, Anthony J and Memon, Nasir and Ahamad, Mustaque
187
+ and Hegde, Chinmay},
188
+ journal={arXiv preprint arXiv:2412.17998},
189
+ year={2024}
190
+ }
191
+ ```