
GiliGold committed (verified) · commit 72d4302 · parent: 682c8bc

Update README.md

Files changed (1): README.md (+51 -24)
README.md CHANGED
@@ -52,23 +52,65 @@ For more information see: [ArXiv](https://arxiv.org/abs/2405.18115)
  ## Usage

  #### Option 1: HuggingFace
- For the [All Features Sentences](#all_features_sentences) subset:
  ```python
  from datasets import load_dataset
- knesset_corpus = load_dataset("HaifaCLGroup/knessetCorpus", name="all_features_sentences", split='train', streaming=True) #streaming is recommended
-
  ```
- For the [Non-Morphological Features Sentences](#non-morphological_features_sentences) subset:
- * Ideal if morpho-syntactic annotations aren't relevant to your work, providing a less disk space heavy option.
  ```python
  from datasets import load_dataset
- knesset_corpus = load_dataset("HaifaCLGroup/knessetCorpus", "no_morph_all_features_sentences", split='train', streaming=True) #streaming is recommended
-
  ```
- See [Subsets](#subsets) for other subsets options and change the name field accordingly.


- #### Option 2: ElasticSearch
  IP address, username and password for the es server and [Kibana](http://34.0.64.248:5601/):

  ##### Credentials for Kibana:
@@ -97,22 +139,7 @@ for hit in resp['hits']['hits']:
  print("id: %(sentence_id)s: speaker_name: %(speaker_name)s: sentence_text: %(sentence_text)s" % hit["_source"])
  ```

- #### Option 3: Directly from files
- ```python
- import json
-
- path = <path to committee_full_sentences.jsonl> #or any other sentences jsonl file
- with open(path, encoding="utf-8") as file:
-     for line in file:
-         try:
-             sent = json.loads(line)
-         except Exception as e:
-             print(f'couldnt load json line. error:{e}.')
-         sent_id = sent["sentence_id"]
-         sent_text = sent["sentence_text"]
-         speaker_name = sent["speaker_name"]
-         print(f"ID: {sent_id}, speaker name: {speaker_name}, text: {sent_text")
- ```

  ## Subsets
 
 
  ## Usage

  #### Option 1: HuggingFace
+ For the sentence subsets, such as the [All Features Sentences](#all_features_sentences) subset:
  ```python
  from datasets import load_dataset
+ subset_name = "all_features_sentences"  # or "no_morph_all_features_sentences"
+ knesset_corpus = load_dataset("HaifaCLGroup/knessetCorpus", name=subset_name, split='train', streaming=True)  # streaming is recommended
+
+ for example in knesset_corpus:
+     speaker_name = example["speaker_name"]
+     sentence_text = example["sentence_text"]
+     gender = example["speaker_gender"]
+     faction = example["current_faction_name"]
+     knesset_num = int(example["knesset_number"])
+     print(f'knesset_number: {knesset_num}, speaker_name: {speaker_name}, sentence_text: {sentence_text}, speaker_gender: {gender}, faction: {faction}')
  ```
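Because the split is streamed, you can also peek at a few examples without iterating the whole corpus. A minimal sketch, assuming a `datasets` version where streaming (iterable) datasets support `take`:

```python
from datasets import load_dataset

# Stream the subset and preview only the first three examples.
knesset_corpus = load_dataset(
    "HaifaCLGroup/knessetCorpus",
    name="all_features_sentences",
    split="train",
    streaming=True,
)
for example in knesset_corpus.take(3):
    print(example["speaker_name"], example["sentence_text"])
```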
+
+ * The [Non-Morphological Features Sentences](#non-morphological_features_sentences) subset is ideal if morpho-syntactic annotations aren't relevant to your work, as it takes up much less disk space.
+
+ For the [Protocols](#protocols) subset:
  ```python
  from datasets import load_dataset
+ subset_name = "protocols"
+ knesset_corpus = load_dataset("HaifaCLGroup/knessetCorpus", name=subset_name, split='train', streaming=True)  # streaming is recommended
+
+ for example in knesset_corpus:
+     protocol_name = example["protocol_name"]
+     session_name = example["session_name"]
+     protocol_sentences = example["protocol_sentences"]
+     # Sentences may arrive as a columnar dict of lists; convert to a list of per-sentence dicts.
+     if isinstance(protocol_sentences, dict):
+         protocol_sentences = [
+             {key: value[i] for key, value in protocol_sentences.items()}
+             for i in range(len(next(iter(protocol_sentences.values()))))
+         ]
+     for sent in protocol_sentences:
+         speaker_name = sent["speaker_name"]
+         text = sent["sentence_text"]
+         print(f'protocol: {protocol_name}, session: {session_name}, speaker: {speaker_name}, text: {text}')
  ```
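If you need each protocol as one running text rather than per-sentence records, the sentences can simply be joined. A small sketch building on the loop above; the `take(2)` preview and the join are illustrative additions, not part of the original example:

```python
from datasets import load_dataset

# Rebuild one text block per protocol by joining its sentences in order.
knesset_corpus = load_dataset("HaifaCLGroup/knessetCorpus", name="protocols", split="train", streaming=True)
for example in knesset_corpus.take(2):  # preview two protocols
    sentences = example["protocol_sentences"]
    if isinstance(sentences, dict):  # columnar dict of lists -> list of per-sentence dicts
        sentences = [dict(zip(sentences.keys(), vals)) for vals in zip(*sentences.values())]
    full_text = "\n".join(s["sentence_text"] for s in sentences)
    print(example["protocol_name"], "->", full_text[:200])
```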
+
+ See [Subsets](#subsets) for the other subset options and change the subset_name field accordingly.
+
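To list the available subset names programmatically instead of consulting the table, the `datasets` helper below should work (a minimal sketch; the printed names should match the [Subsets](#subsets) section):

```python
from datasets import get_dataset_config_names

# Print every subset (configuration) the corpus exposes, e.g.
# "all_features_sentences", "no_morph_all_features_sentences", "protocols".
print(get_dataset_config_names("HaifaCLGroup/knessetCorpus"))
```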
+ #### Option 2: Directly from files
+ ```python
+ import json

+ path = "<path to committee_full_sentences.jsonl>"  # or any other sentences jsonl file
+ with open(path, encoding="utf-8") as file:
+     for line in file:
+         try:
+             sent = json.loads(line)
+         except Exception as e:
+             print(f"couldn't load json line. error: {e}.")
+             continue  # skip lines that fail to parse
+         sent_id = sent["sentence_id"]
+         sent_text = sent["sentence_text"]
+         speaker_name = sent["speaker_name"]
+         print(f"ID: {sent_id}, speaker name: {speaker_name}, text: {sent_text}")
+ ```
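The same file-reading pattern extends naturally to simple aggregations. A hypothetical example counting sentences per speaker, assuming the jsonl schema shown above:

```python
import json
from collections import Counter

path = "<path to committee_full_sentences.jsonl>"  # placeholder, as above
counts = Counter()
with open(path, encoding="utf-8") as file:
    for line in file:
        try:
            sent = json.loads(line)
        except Exception:
            continue  # skip malformed lines
        counts[sent["speaker_name"]] += 1

# Ten most prolific speakers in this file.
for speaker, n in counts.most_common(10):
    print(f"{speaker}: {n} sentences")
```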

+ #### Option 3: ElasticSearch
  IP address, username and password for the es server and [Kibana](http://34.0.64.248:5601/):

  ##### Credentials for Kibana:
 
  [...]

  print("id: %(sentence_id)s: speaker_name: %(speaker_name)s: sentence_text: %(sentence_text)s" % hit["_source"])
  ```
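For completeness, the kind of query shown above can be issued with the official `elasticsearch` Python client. A minimal sketch: the host, credentials, and index name are placeholders (use the values from the credentials section), and the match query is only an illustration:

```python
from elasticsearch import Elasticsearch

# Placeholder connection details; substitute the server address and
# credentials listed in the section above.
es = Elasticsearch("http://<es-server-ip>:9200", basic_auth=("<username>", "<password>"))

resp = es.search(
    index="<index_name>",  # placeholder index name
    query={"match": {"sentence_text": "חינוך"}},  # example search term ("education")
    size=10,
)
for hit in resp["hits"]["hits"]:
    print("id: %(sentence_id)s: speaker_name: %(speaker_name)s: sentence_text: %(sentence_text)s" % hit["_source"])
```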

  ## Subsets