---
license: cc-by-nc-nd-4.0
dataset_info:
- config_name: default
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 678202759.44
    num_examples: 27000
  - name: val
    num_bytes: 75355862.16
    num_examples: 3000
  - name: test
    num_bytes: 188389655.4
    num_examples: 7500
  download_size: 520788766
  dataset_size: 941948277.0
- config_name: email-corpus
  features:
  - name: file
    dtype: string
  - name: message
    dtype: string
  splits:
  - name: train
    num_bytes: 1424661264
    num_examples: 517401
  download_size: 606473179
  dataset_size: 1424661264
- config_name: indic-corpus
  features:
  - name: lang_id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 242818701
    num_examples: 11
  download_size: 105665834
  dataset_size: 242818701
- config_name: wiki-topics
  features:
  - name: article
    dtype: string
  - name: category
    dtype: string
  - name: text
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 338290047
    num_examples: 8000
  - name: test
    num_bytes: 84610785
    num_examples: 2000
  download_size: 236620494
  dataset_size: 422900832
- config_name: Assignment-3-word2vec
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 59964905
    num_examples: 200000
  download_size: 59964905
  dataset_size: 59964905
- config_name: Assignment-3-word2vec-analogy
  features:
  - name: word1
    dtype: string
  - name: word2
    dtype: string
  - name: word3
    dtype: string
  splits:
  - name: test
    num_bytes: 25776
    num_examples: 1000
  download_size: 25776
  dataset_size: 25776
- config_name: Assignment-3-naive-bayes
  features:
  - name: text
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 690247310
    num_examples: 8000
  - name: test
    num_bytes: 171875690
    num_examples: 2000
  download_size: 862123000
  dataset_size: 862123000
- config_name: Assignment-3-em
  features:
  - name: text
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 683095685
    num_examples: 8000
  - name: test
    num_bytes: 171875690
    num_examples: 2000
  download_size: 854971375
  dataset_size: 854971375
- config_name: Assignment-4
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int32
  splits:
  - name: train
    num_bytes: 38232055
    num_examples: 75827
  - name: val
    num_bytes: 5464732
    num_examples: 10851
  - name: test
    num_bytes: 10939918
    num_examples: 21657
  download_size: 54636705
  dataset_size: 54636705
- config_name: Deep-learning-assignment
  features:
  - name: Category
    dtype: string
  - name: Description
    sequence: string
  splits:
  - name: train
    num_bytes: 15089359
    num_examples: 120000
  - name: test
    num_bytes: 1014560
    num_examples: 7600
  download_size: 16103919
  dataset_size: 16103919
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
- config_name: email-corpus
  data_files:
  - split: train
    path: email-corpus/train-*
- config_name: indic-corpus
  data_files:
  - split: train
    path: indic-corpus/train-*
- config_name: wiki-topics
  data_files:
  - split: train
    path: wiki-topics/train-*
  - split: test
    path: wiki-topics/test-*
- config_name: Assignment-3-word2vec
  data_files:
  - split: train
    path: Assignment-3/word2vec/train*
- config_name: Assignment-3-word2vec-analogy
  data_files:
  - split: test
    path: Assignment-3/word2vec/test*
- config_name: Assignment-3-naive-bayes
  data_files:
  - split: train
    path: Assignment-3/naive_bayes/train*
  - split: test
    path: Assignment-3/naive_bayes/test_nb_with_labels*
- config_name: Assignment-3-em
  data_files:
  - split: train
    path: Assignment-3/em/train*
  - split: test
    path: Assignment-3/em/test*
- config_name: Assignment-4
  data_files:
  - split: train
    path: Assignment-4/train*
  - split: test
    path: Assignment-4/test*
  - split: val
    path: Assignment-4/val*
- config_name: Deep-learning-assignment
  data_files:
  - split: train
    path: Deep-learning-assignment/train*
  - split: test
    path: Deep-learning-assignment/test*
---
# CS779 Fall 2025, IIT Kanpur
Instructor: Dr. Ashutosh Modi
## Assignment-3
There are three main tasks in Assignment-3:
1. Neural network implementation from scratch for Word2Vec on Wikipedia text
2. Naive Bayes classifier for topic classification on Wikipedia articles
3. Expectation-Maximization based clustering on Wikipedia articles
The data can be fetched using the `datasets` API as follows:
```python
from datasets import load_dataset

# Word2Vec: raw text for training, word analogies for evaluation
word2vec_train = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-3-word2vec", split="train")
word2vec_test = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-3-word2vec-analogy", split="test")

# Naive Bayes: labelled Wikipedia articles
naive_bayes_train = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-3-naive-bayes", split="train")
naive_bayes_test = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-3-naive-bayes", split="test")

# Expectation-Maximization: same text/category schema as the Naive Bayes config
em_train = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-3-em", split="train")
em_test = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-3-em", split="test")
```
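As a quick sanity check, the loaded splits can be inspected as below. This is a minimal sketch continuing from the variables loaded above; the column names follow the dataset metadata, and the `word1 : word2 :: word3 : ?` framing is the standard analogy setup rather than something this card specifies.
```python
# Continuing from the snippet above.

# Analogy test set: each row holds three words (word1, word2, word3);
# completing word1 : word2 :: word3 : ? is the usual task framing.
row = word2vec_test[0]
print(row["word1"], row["word2"], row["word3"])

# Word2Vec training split: raw text, one passage per row.
print(word2vec_train[0]["text"][:200])

# Naive Bayes / EM configs: article text paired with a category label.
print(naive_bayes_train[0]["category"])
```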
## Assignment-4
This assignment involves Named Entity Recognition (NER) on a custom Hindi dataset. Each example consists of a sentence's tokens and their corresponding NER tags. The list of NER tags includes:
- B-FESTIVAL
- B-GAME
- B-LANGUAGE
- B-LITERATURE
- B-LOCATION
- B-MISC
- B-NUMEX
- B-ORGANIZATION
- B-PERSON
- B-RELIGION
- B-TIMEX
- I-FESTIVAL
- I-GAME
- I-LANGUAGE
- I-LITERATURE
- I-LOCATION
- I-MISC
- I-NUMEX
- I-ORGANIZATION
- I-PERSON
- I-RELIGION
- I-TIMEX
- O
The data can be fetched using the `datasets` API as follows:
```python
from datasets import load_dataset

# Train
train = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-4", split="train")
# Test
test = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-4", split="test")
# Validation
val = load_dataset("Exploration-Lab/CS779-Fall25", "Assignment-4", split="val")
```
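Since `ner_tags` are stored as integers rather than label strings, a short inspection pass can show how the tags are laid out. This is a minimal sketch continuing from the `train` split loaded above; the integer-to-label mapping itself is not assumed here.
```python
# Continuing from the snippet above.

# Each example holds parallel lists: surface tokens and integer NER tags.
sample = train[0]
for token, tag in zip(sample["tokens"], sample["ner_tags"]):
    print(token, tag)

# Distinct tag ids present in the training split; match these against the
# 23 labels listed above using the mapping given in the assignment handout.
tag_ids = sorted({tag for example in train for tag in example["ner_tags"]})
print(tag_ids)
```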
## Deep Learning Assignment
This assignment involves text classification on a dataset of product descriptions and their corresponding categories. The dataset has two columns: "Category", which holds the class label, and "Description", which holds the product description text.

The data can be fetched using the `datasets` API as follows:
```python
from datasets import load_dataset

# Train
train = load_dataset("Exploration-Lab/CS779-Fall25", "Deep-learning-assignment", split="train")
# Test
test = load_dataset("Exploration-Lab/CS779-Fall25", "Deep-learning-assignment", split="test")
```
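For a first look at the data, a minimal sketch continuing from the `train` split loaded above (it only touches the two documented columns):
```python
# Continuing from the snippet above.
from collections import Counter

# One example: a category label paired with its description.
print(train[0]["Category"], "->", train[0]["Description"])

# Rough class balance over the training split.
print(Counter(train["Category"]).most_common())
```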