# Getting Started with WavePulse Radio Transcripts Dataset

This tutorial shows how to get started with the WavePulse Radio Transcripts dataset from Hugging Face.

## Prerequisites

Before starting, make sure you have the required packages installed:

```bash
pip install datasets
pip install huggingface-hub
```

## Basic Setup

First, let's set up our environment with some helpful configurations:

```python
from datasets import load_dataset
import huggingface_hub

# Increase timeout for large downloads
huggingface_hub.constants.HF_HUB_DOWNLOAD_TIMEOUT = 60

# Set up cache directory (optional)
cache_dir = "wavepulse_dataset"
```
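
If you prefer, the same timeout can be set through the `HF_HUB_DOWNLOAD_TIMEOUT` environment variable, which `huggingface_hub` reads when it is imported:

```python
import os

# Must be set before huggingface_hub / datasets are imported
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"
```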

## Loading Strategies

### 1. Loading a Specific State (Recommended for Beginners)
Instead of loading the entire dataset, start with one state:

```python
# Load data for just New York
ny_dataset = load_dataset("nyu-dice-lab/wavepulse-radio-raw-transcripts", 
                         "NY",
                         cache_dir=cache_dir)
```
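
`load_dataset` returns a `DatasetDict` keyed by split, so it is worth a quick inspection before going further (the snippet below assumes a `train` split, as the rest of this tutorial does):

```python
# Splits, row counts, and column schema
print(ny_dataset)
print(ny_dataset["train"].features)

# Peek at the first record
print(ny_dataset["train"][0])
```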

### 2. Streaming Mode (Memory Efficient)
If you're working with limited RAM:

```python
# Stream the dataset
stream_dataset = load_dataset("nyu-dice-lab/wavepulse-radio-raw-transcripts",
                              streaming=True,
                              cache_dir=cache_dir)

# Access data in a streaming fashion
for example in stream_dataset["train"].take(5):
    print(example["text"])
```
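
Streaming datasets also support lazy `filter` and `map`, so you can narrow the stream without materializing it. A small sketch, reusing the `station` column featured in the tasks below:

```python
# Lazily keep only one station's segments; nothing is
# evaluated until you iterate
station_stream = stream_dataset["train"].filter(
    lambda x: x["station"] == "KENI"
)
for example in station_stream.take(3):
    print(example["text"])
```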

## Common Tasks

The examples below assume `dataset` is the `DatasetDict` you loaded above (for instance, `dataset = ny_dataset`).

### 1. Filtering by Date Range

```python
# Filter for August 2024 (string comparison works because the
# 'datetime' column is ISO formatted; the upper bound is exclusive
# so segments from late on Aug 31 are kept)
filtered_ds = dataset.filter(
    lambda x: "2024-08-01" <= x['datetime'] < "2024-09-01"
)
```
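
On millions of segments, a row-by-row lambda is slow. A batched filter implements the same date range but processes whole columns at a time:

```python
# Same filter, batched: the lambda receives a batch of columns
# and returns one boolean per row
filtered_ds = dataset.filter(
    lambda batch: ["2024-08-01" <= dt < "2024-09-01"
                   for dt in batch["datetime"]],
    batched=True,
)
```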

### 2. Finding Specific Stations

```python
# Get unique stations
stations = set(dataset["train"]["station"])

# Filter for a specific station
station_ds = dataset.filter(lambda x: x['station'] == 'KENI')
```
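
If you only need the distinct values of one column, `Dataset.unique` avoids building the full Python list first:

```python
# Distinct station identifiers for the train split
stations = dataset["train"].unique("station")
```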

### 3. Analyzing Transcripts

```python
# Get all segments from a specific transcript
transcript_ds = dataset["train"].filter(
    lambda x: x['transcript_id'] == 'AK_KAGV_2024_08_25_13_00'
)

# Sort segments by their index to maintain order
sorted_segments = transcript_ds.sort('segment_index')
```
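
With the segments ordered, you can stitch the transcript back together (assuming the `text` column used throughout this tutorial):

```python
# Reconstruct the full transcript as one string
full_text = " ".join(sorted_segments["text"])
print(full_text[:500])
```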

## Best Practices

1. **Memory Management**:
   - Start with a single state or small sample
   - Use streaming mode for large-scale processing
   - Clear a dataset's cache files when needed: `ny_dataset.cleanup_cache_files()`

2. **Disk Space**:
   - Ensure at least 75-80 GB of free disk space for the full dataset
   - Use state-specific loading to reduce space requirements
   - Clean up the cache regularly

3. **Error Handling**:
   - Always include timeout configurations
   - Implement retry logic for large downloads (see the sketch after this list)
   - Handle connection errors gracefully
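
A minimal retry sketch, assuming transient failures surface as `ConnectionError` (adjust the exception types to your environment); `load_with_retries` is a hypothetical helper, not part of the `datasets` API:

```python
import time

from datasets import load_dataset

def load_with_retries(*args, max_retries=3, **kwargs):
    """Call load_dataset, retrying with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return load_dataset(*args, **kwargs)
        except ConnectionError:  # assumed retryable error
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...

ny_dataset = load_with_retries(
    "nyu-dice-lab/wavepulse-radio-raw-transcripts", "NY"
)
```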

## Example Use Cases

### 1. Basic Content Analysis

```python
# Count segments per station
from collections import Counter

station_counts = Counter(dataset["train"]["station"])
print("Most common stations:", station_counts.most_common(5))
```

### 2. Time-based Analysis

```python
# Get distribution of segments across hours
import datetime

hour_distribution = Counter(
    datetime.datetime.fromisoformat(dt).hour 
    for dt in dataset["train"]["datetime"]
)
```
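
To eyeball the distribution, print the hours in order:

```python
for hour in sorted(hour_distribution):
    print(f"{hour:02d}:00  {hour_distribution[hour]} segments")
```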

### 3. Speaker Analysis

```python
# Analyze speaker patterns in a transcript
def analyze_speakers(transcript_id):
    segments = dataset["train"].filter(
        lambda x: x['transcript_id'] == transcript_id
    )
    return Counter(segments['speaker'])
```
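
For example, reusing the transcript id from earlier:

```python
print(analyze_speakers('AK_KAGV_2024_08_25_13_00'))
```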

## Common Issues and Solutions

1. **Timeout Errors**:
   ```python
   # Increase timeout duration
   huggingface_hub.constants.HF_HUB_DOWNLOAD_TIMEOUT = 120
   ```

2. **Memory Errors**:
   ```python
   # Iterate one split in batches instead of loading it all at once
   for batch in dataset["train"].iter(batch_size=1000):
       process_batch(batch)  # your own processing function
   ```

3. **Disk Space Issues**:
   ```python
   # Check available space before downloading
   import shutil
   total, used, free = shutil.disk_usage("/")
   print(f"Free disk space: {free // (2**30)} GB")
   ```
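
   You can also check how much space the Hugging Face cache itself occupies (`scan_cache_dir` ships with recent `huggingface_hub` versions):
   ```python
   from huggingface_hub import scan_cache_dir

   # Summarize everything stored in the local HF cache
   cache_info = scan_cache_dir()
   print(f"HF cache size: {cache_info.size_on_disk / 2**30:.1f} GB")
   ```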

## Need Help?

- Dataset documentation: https://huggingface.co/datasets/nyu-dice-lab/wavepulse-radio-raw-transcripts
- Project website: https://wave-pulse.io
- Report issues: https://github.com/nyu-dice-lab/wavepulse/issues

Remember to cite the dataset in your work:

```bibtex
@article{mittal2024wavepulse,
  title={WavePulse: Real-time Content Analytics of Radio Livestreams},
  author={Mittal, Govind and Gupta, Sarthak and Wagle, Shruti and Chopra, Chirag 
          and DeMattee, Anthony J and Memon, Nasir and Ahamad, Mustaque 
          and Hegde, Chinmay},
  journal={arXiv preprint arXiv:2412.17998},
  year={2024}
}
```