---
dataset_info:
  features:
  - name: answer
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: input_ids
    sequence: int32
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 788165403
    num_examples: 118695
  - name: test
    num_bytes: 98388509
    num_examples: 14835
  - name: validation
    num_bytes: 98339161
    num_examples: 14838
  download_size: 45704542
  dataset_size: 984893073
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---

Dataset used for training text-to-SQL models.
I've pre-tokenized it for faster loading.
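
For reference, the splits can be loaded directly with the `datasets` library; the repository id below is a placeholder for this dataset's actual Hub path:
```
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub path.
dataset = load_dataset("your-username/text-to-sql-tokenized")

print(dataset)                      # DatasetDict with train/test/validation splits
print(dataset["train"][0].keys())   # answer, question, context, input_ids, labels
```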


Here is the prompt format used in the tokenization code:
```
def tokenize_function(example):
    # Prompt template: table schemas, then the natural-language question,
    # followed by an answer marker for the SQL query.
    start_prompt = "Tables:\n"
    middle_prompt = "\n\nQuestion:\n"
    end_prompt = "\n\nAnswer:\n"

    # `example` is a batch: 'context' and 'question' are lists of strings.
    data_zip = zip(example['context'], example['question'])
    prompt = [start_prompt + context + middle_prompt + question + end_prompt for context, question in data_zip]

    # `tokenizer` is a pre-loaded Hugging Face tokenizer
    # (e.g. created with AutoTokenizer.from_pretrained).
    example['input_ids'] = tokenizer(prompt, padding="max_length", truncation=True, return_tensors="pt").input_ids
    example['labels'] = tokenizer(example['answer'], padding="max_length", truncation=True, return_tensors="pt").input_ids

    return example
```
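
A sketch of how this function can be applied, assuming the checkpoint name is a placeholder and the tiny in-memory dataset below only illustrates the expected column layout:
```
from datasets import Dataset
from transformers import AutoTokenizer

# Assumption: the checkpoint is a placeholder; use whichever model the prompts target.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Toy example with the same columns as this dataset: context, question, answer.
raw = Dataset.from_dict({
    "context": ["CREATE TABLE head (age INTEGER)"],
    "question": ["How many heads of the departments are older than 56?"],
    "answer": ["SELECT COUNT(*) FROM head WHERE age > 56"],
})

# batched=True so tokenize_function receives lists, matching the zip() above.
tokenized = raw.map(tokenize_function, batched=True)
print(tokenized)
```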