---
language:
- sv
license: cc-by-4.0
size_categories:
- 1M<n<10M
pretty_name: Open Riksdag-103
tags:
- diachronic
- semantic change
dataset_info:
- config_name: sentences
  features:
  - name: sentence
    dtype: string
  - name: doc_type
    dtype: string
  - name: doc_id
    dtype: string
  - name: date
    dtype: timestamp[s]
  splits:
  - name: train
    num_bytes: 8436701719
    num_examples: 56846721
  download_size: 1516962051
  dataset_size: 8436701719
- config_name: target-103
  features:
  - name: sentence
    dtype: string
  - name: doc_type
    dtype: string
  - name: doc_id
    dtype: string
  - name: date
    dtype: timestamp[s]
  - name: lemma
    dtype: string
  - name: start
    dtype: int32
  - name: end
    dtype: int32
  - name: pos
    dtype: string
  splits:
  - name: train
    num_bytes: 8138875561
    num_examples: 33393155
  download_size: 1434826241
  dataset_size: 8138875561
---

This is a dataset of text from the Riksdag, Sweden's national legislative body.

The original data is available without a license under the Re-use of Public Administration Documents Act (2010:566) at https://data.riksdagen.se/data/dokument

This dataset is derived from a version compiled by Språkbanken Text (SBX) at the University of Gothenburg (Sweden). That version consists of XML files split by source document type (motions, questions, protocols, etc.) and includes additional linguistic annotations. It is available under a CC BY 4.0 license at https://spraakbanken.gu.se/resurser/rd

The focus of this Hugging Face dataset is to organise the data for fine-grained diachronic modeling. In a nutshell, this version offers:

- all sentences containing one or more of the 103 target words, which were chosen by TF-IDF (described below)
- per-month subsets (with all document types combined)
- one line per sentence (sentences shorter than 4 words were discarded)
- per-sentence metadata: `date`, `doc_type`, `doc_id`, and the sentence text; the `target-103` configuration additionally records the target `lemma`, its `start`/`end` offsets, and its `pos` tag

The dataset builder requires a `years` argument, which must be an iterable of years between 1979 and 2019 (inclusive). This can be supplied to the `load_dataset` function as a keyword argument.
For example, to load all of the data:

```python
from datasets import load_dataset
data = load_dataset('ChangeIsKey/openRD-103', years=range(1979, 2020))
```

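The card metadata above also lists two configurations, `sentences` and `target-103`; the latter adds `lemma`, `start`, `end`, and `pos` fields for the target-word occurrence. A minimal sketch of working with it follows; it assumes the configuration name is passed as the usual second argument of `load_dataset`, that `start`/`end` are character offsets into `sentence`, and that `date` values come back as Python datetime objects (none of which is documented above).

```python
from datasets import load_dataset

# Load the 'target-103' configuration for a single year. The config names
# come from the dataset card; passing the name as the second argument of
# load_dataset is an assumption about this builder.
target = load_dataset('ChangeIsKey/openRD-103', 'target-103', years=[1995])

row = target['train'][0]
print(row['lemma'], row['pos'], row['doc_type'], row['date'])

# Assumption: start/end are character offsets of the target word in the sentence.
print(row['sentence'][row['start']:row['end']])

# A per-month slice, assuming `date` is returned as a datetime object.
march_1995 = target['train'].filter(lambda ex: ex['date'].month == 3)
print(len(march_1995))
```

The `sentences` configuration can be loaded the same way, minus the target-word fields.
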
The data can take some time to load and extract, so [dataset streaming](https://huggingface.co/docs/datasets/stream) may be an option (a sketch is given after the table). The size of the data by decade is as follows:

|       | size   | sentences | tokens |
|-------|--------|-----------|--------|
| 1979  | 118 MB | 0.409M    | 10M    |
| 1980s | 1.4 GB | 4.7M      | 118M   |
| 1990s | 2.2 GB | 5.3M      | 202M   |
| 2000s | 4.0 GB | 11.8M     | 338M   |
| 2010s | 4.4 GB | 14.1M     | 361M   |
| total | 13 GB  | 36.9M     | 279M   |

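Since a full download runs to several gigabytes, the streaming mode linked above may be worth trying first. The sketch below assumes the builder works with the standard `streaming=True` flag of `load_dataset`; the note above only suggests this as a possibility.

```python
from itertools import islice
from datasets import load_dataset

# Stream one year of the 'sentences' configuration instead of downloading
# and extracting everything up front (assumption: this builder supports
# the datasets streaming mode).
stream = load_dataset('ChangeIsKey/openRD-103', 'sentences',
                      years=[2019], streaming=True, split='train')

# Peek at the first few records without materialising the split.
for example in islice(stream, 3):
    print(example['date'], example['doc_type'], example['sentence'][:80])
```
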
This dataset is released under the CC BY 4.0 license, with attribution.