chenghao committed on
Commit
51ed362
·
verified ·
1 Parent(s): c2c1c7b

Upload README.md with huggingface_hub

Files changed (1): README.md +467 -253
README.md CHANGED
---
license: mit
dataset_info:
- config_name: 100_tos
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 5240826
    num_examples: 92
  download_size: 2497746
  dataset_size: 5240826
- config_name: 10_tos
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 1920213
    num_examples: 20
  download_size: 718890
  dataset_size: 1920213
- config_name: 142_tos
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 12968483
    num_examples: 140
  download_size: 4884205
  dataset_size: 12968483
- config_name: cuad
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 1180620
    num_examples: 28
  download_size: 484787
  dataset_size: 1180620
- config_name: memnet_tos
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 5607746
    num_examples: 100
  download_size: 2012157
  dataset_size: 5607746
- config_name: multilingual_unfair_clause
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 22775210
    num_examples: 200
  download_size: 9557263
  dataset_size: 22775210
- config_name: polisis
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 3137858
    num_examples: 4570
  - name: validation
    num_bytes: 802441
    num_examples: 1153
  - name: test
    num_bytes: 967678
    num_examples: 1446
  download_size: 1827549
  dataset_size: 4907977
- config_name: privacy_glue__piextract
  features:
  - name: document
    dtype: string
  splits:
  - name: validation
    num_bytes: 7106934
    num_examples: 4116
  - name: train
    num_bytes: 18497078
    num_examples: 12140
  download_size: 5707087
  dataset_size: 25604012
- config_name: privacy_glue__policy_detection
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 13657226
    num_examples: 1301
  download_size: 6937382
  dataset_size: 13657226
- config_name: privacy_glue__policy_ie
  features:
  - name: type_i
    dtype: string
  - name: type_ii
    dtype: string
  splits:
  - name: test
    num_bytes: 645788
    num_examples: 6
  - name: train
    num_bytes: 2707213
    num_examples: 25
  download_size: 1097051
  dataset_size: 3353001
- config_name: privacy_glue__policy_qa
  features:
  - name: document
    dtype: string
  splits:
  - name: test
    num_bytes: 1353787
    num_examples: 20
  - name: dev
    num_bytes: 1230490
    num_examples: 20
  - name: train
    num_bytes: 5441319
    num_examples: 75
  download_size: 2418472
  dataset_size: 8025596
- config_name: privacy_glue__polisis
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 3073878
    num_examples: 4570
  - name: validation
    num_bytes: 786299
    num_examples: 1153
  - name: test
    num_bytes: 947434
    num_examples: 1446
  download_size: 1816140
  dataset_size: 4807611
- config_name: privacy_glue__privacy_qa
  features:
  - name: document
    dtype: string
  splits:
  - name: train
    num_bytes: 12099109
    num_examples: 27
  - name: test
    num_bytes: 4468753
    num_examples: 8
  download_size: 1221943
  dataset_size: 16567862
configs:
- config_name: 100_tos
  data_files:
  - split: train
    path: 100_tos/train-*
- config_name: 10_tos
  data_files:
  - split: train
    path: 10_tos/train-*
- config_name: 142_tos
  data_files:
  - split: train
    path: 142_tos/train-*
- config_name: cuad
  data_files:
  - split: train
    path: cuad/train-*
- config_name: memnet_tos
  data_files:
  - split: train
    path: memnet_tos/train-*
- config_name: multilingual_unfair_clause
  data_files:
  - split: train
    path: multilingual_unfair_clause/train-*
- config_name: polisis
  data_files:
  - split: train
    path: privacy_glue/polisis/train-*
  - split: validation
    path: privacy_glue/polisis/validation-*
  - split: test
    path: privacy_glue/polisis/test-*
- config_name: privacy_glue__policy_detection
  data_files:
  - split: train
    path: privacy_glue/policy_detection/train-*
- config_name: privacy_glue__policy_ie
  data_files:
  - split: test
    path: privacy_glue/policy_ie/test-*
  - split: train
    path: privacy_glue/policy_ie/train-*
- config_name: privacy_glue__policy_qa
  data_files:
  - split: test
    path: privacy_glue/policy_qa/test-*
  - split: dev
    path: privacy_glue/policy_qa/dev-*
  - split: train
    path: privacy_glue/policy_qa/train-*
- config_name: privacy_glue__polisis
  data_files:
  - split: train
    path: privacy_glue/polisis/train-*
  - split: validation
    path: privacy_glue/polisis/validation-*
  - split: test
    path: privacy_glue/polisis/test-*
- config_name: privacy_glue__privacy_qa
  data_files:
  - split: train
    path: privacy_glue/privacy_qa/train-*
  - split: test
    path: privacy_glue/privacy_qa/test-*
- config_name: privacy_glue__piextract
  data_files:
  - split: validation
    path: privacy_glue/piextract/validation-*
  - split: train
    path: privacy_glue/piextract/train-*
---

# A collection of Terms of Service or Privacy Policy datasets

## Annotated datasets

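The `privacy_glue` subsets live one directory deep, and their config names flatten that path with a double underscore (compare `config_name: privacy_glue__piextract` with `path: privacy_glue/piextract/...` in the front matter above). A small helper for converting between the two forms — a sketch for orientation, not part of the repo's tooling:

```python
def config_to_path(config_name: str) -> str:
    # HF config names in this card use "__" where the data directory uses "/".
    return config_name.replace("__", "/")

def path_to_config(path_prefix: str) -> str:
    # Inverse mapping: directory prefix back to config name.
    return path_prefix.replace("/", "__")

print(config_to_path("privacy_glue__piextract"))  # privacy_glue/piextract
print(path_to_config("privacy_glue/policy_qa"))   # privacy_glue__policy_qa
```

Plain top-level names such as `cuad` or `100_tos` contain no `__` and pass through unchanged.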
### CUAD

The 28 service agreements from [CUAD](https://www.atticusprojectai.org/cuad), which are licensed under CC BY 4.0 (subset: `cuad`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentQA

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "cuad")

print(DocumentQA.model_validate_json(ds["train"]["document"][0]))
```

</details>

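Each row's `document` column holds a JSON-serialized record that the `tos_datasets.proto` models parse back into typed objects. A minimal, stdlib-only sketch of that round trip — `MiniDocumentQA` and its fields are hypothetical stand-ins, not the real `DocumentQA` schema:

```python
import json
from dataclasses import dataclass, field

@dataclass
class MiniDocumentQA:
    # Hypothetical stand-in for tos_datasets.proto.DocumentQA;
    # the real schema's fields may differ.
    title: str
    text: str
    qas: list = field(default_factory=list)

    @classmethod
    def model_validate_json(cls, raw: str) -> "MiniDocumentQA":
        # Same contract as pydantic's model_validate_json:
        # JSON string in, validated object out.
        return cls(**json.loads(raw))

# What a serialized `document` cell could look like:
row = json.dumps({"title": "Example ToS", "text": "You agree that...", "qas": []})
doc = MiniDocumentQA.model_validate_json(row)
print(doc.title)  # Example ToS
```

The same pattern applies to the other subsets; only the model class imported from `tos_datasets.proto` changes.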
### 100 ToS

From [Annotated 100 ToS](https://data.mendeley.com/datasets/dtbj87j937/3), CC BY 4.0 (subset: `100_tos`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentEUConsumerLawAnnotation

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "100_tos")

print(DocumentEUConsumerLawAnnotation.model_validate_json(ds["train"]["document"][0]))
```

</details>

### Multilingual Unfair Clause

From [CLAUDETTE](http://claudette.eui.eu/corpora/index.html)/[Multilingual Unfair Clause](https://github.com/nlp-unibo/Multilingual-Unfair-Clause-Detection), CC BY 4.0 (subset: `multilingual_unfair_clause`).

It was built from [CLAUDETTE](http://claudette.eui.eu/corpora/index.html)'s [25 Terms of Service in English, Italian, German, and Polish (100 documents in total) from A Corpus for Multilingual Analysis of Online Terms of Service](http://claudette.eui.eu/corpus_multilingual_NLLP2021.zip).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "multilingual_unfair_clause")

print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

</details>

### Memnet ToS

From [100 Terms of Service in English from Detecting and explaining unfairness in consumer contracts through memory networks](https://github.com/federicoruggeri/Memnet_ToS), MIT (subset: `memnet_tos`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "memnet_tos")

print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

</details>

### 142 ToS

From [142 Terms of Service in English divided according to market sector from Assessing the Cross-Market Generalization Capability of the CLAUDETTE System](http://claudette.eui.eu/corpus_142_ToS.zip), Unknown (subset: `142_tos`). This should also include [50 Terms of Service in English from "CLAUDETTE: an Automated Detector of Potentially Unfair Clauses in Online Terms of Service"](http://claudette.eui.eu/ToS.zip).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "142_tos")

print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

</details>

### 10 ToS/PP

From [5 Terms of Service and 5 Privacy Policies in English and German (10 documents in total) from Cross-lingual Annotation Projection in Legal Texts](https://bitbucket.org/a-galaxy/cross-lingual-annotation-projection-in-legal-texts), GNU GPL 3.0 (subset: `10_tos`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "10_tos")

print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

</details>

### PolicyQA

> [!IMPORTANT]
> This dataset seems to have annotation issues: __unanswerable__ questions are still given answers in SQuAD v1 format instead of the v2 format, which marks unanswerable questions explicitly.

From [PolicyQA](https://github.com/wasiahmad/PolicyQA), MIT (subset: `privacy_glue__policy_qa`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentQA

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__policy_qa")

print(DocumentQA.model_validate_json(ds["train"]["document"][0]))
```

</details>

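To make the note above concrete: SQuAD v2-style records flag unanswerable questions with `is_impossible` and an empty answer list, while v1 has no such field, so unanswerable questions can only be spotted heuristically. A sketch with illustrative records (not actual PolicyQA rows):

```python
# Illustrative records, not taken from PolicyQA itself.
v1_record = {"question": "Is data sold?", "answers": [{"text": "", "answer_start": 0}]}
v2_record = {"question": "Is data sold?", "answers": [], "is_impossible": True}

def looks_unanswerable(record: dict) -> bool:
    # v2 marks it explicitly; for v1 the best available signal
    # is an answer list whose texts are all empty.
    if record.get("is_impossible"):
        return True
    return all(not a["text"] for a in record["answers"])

print(looks_unanswerable(v1_record), looks_unanswerable(v2_record))  # True True
```

A heuristic like this is only a workaround; the underlying annotations would need the v2 fields to be unambiguous.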
### PolicyIE

From [PolicyIE](https://github.com/wasiahmad/PolicyIE), MIT (subset: `privacy_glue__policy_ie`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentSequenceClassification, DocumentEvent

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__policy_ie")

print(DocumentSequenceClassification.model_validate_json(ds["train"]["type_i"][0]))
print(DocumentEvent.model_validate_json(ds["train"]["type_ii"][0]))
```

</details>

### Policy Detection

From [policy-detection-data](https://github.com/infsys-lab/policy-detection-data), GPL 3.0 (subset: `privacy_glue__policy_detection`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__policy_detection")

print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

</details>

### Polisis

From [Polisis](https://github.com/SmartDataAnalytics/Polisis_Benchmark), Unknown (subset: `privacy_glue__polisis`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__polisis")

print(DocumentClassification.model_validate_json(ds["test"]["document"][0]))
```

</details>

### PrivacyQA

From [PrivacyQA](https://github.com/AbhilashaRavichander/PrivacyQA_EMNLP), MIT (subset: `privacy_glue__privacy_qa`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__privacy_qa")

print(DocumentClassification.model_validate_json(ds["test"]["document"][0]))
```

</details>

### Piextract

From [Piextract](https://github.com/um-rtcl/piextract_dataset), Unknown (subset: `privacy_glue__piextract`).

<details>
<summary>Code</summary>

```python
import datasets
from tos_datasets.proto import DocumentSequenceClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__piextract")

print(DocumentSequenceClassification.model_validate_json(ds["train"]["document"][0]))
```

</details>

## WIP

- <del>[Annotated Italian TOS sentences](https://github.com/i3-fbk/LLM-PE_Terms_and_Conditions_Contracts), Apache 2.0</del> Sentence-level annotations only; the original full text is missing.
- <del>[Huggingface](https://huggingface.co/datasets/CodeHima/TOS_Dataset), MIT</del> Sentence-level annotations only; the original full text is missing.
- [ ] [ToSDR API](https://developers.tosdr.org/dev/get-service-v2), Unknown