---
license: cc-by-4.0
task_categories:
  - text-regression
  - text-classification
language:
  - en
tags:
  - psychology
  - emotion
  - distress
  - misery
  - sentiment-analysis
  - regression
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: 'Ep #'
      dtype: string
    - name: Misery
      dtype: string
    - name: Score
      dtype: int64
    - name: VNTO
      dtype: string
    - name: Reward
      dtype: int64
    - name: Win
      dtype: string
    - name: Comments
      dtype: string
    - name: question_tag
      dtype: string
    - name: level
      dtype: string
  splits:
    - name: train
      num_bytes: 53815
      num_examples: 516
  download_size: 24234
  dataset_size: 53815
---

Misery Index Dataset

Dataset Description

The Misery Index Dataset comprises 516 short textual descriptions of real-world or imagined scenarios, each annotated with an integer misery score on a 0-100 scale, where 0 indicates no misery and 100 extreme misery. These ratings are subjective estimates of the emotional distress associated with each event.

This dataset was created and analyzed in the research paper "Leveraging Large Language Models for Predictive Analysis of Human Misery" by Bishanka Seal, Rahul Seetharaman, Aman Bansal, and Abhilash Nandy.

Note: If the Hugging Face dataset viewer is not available, you can still access the full dataset by downloading the CSV file directly or using the methods shown in the usage examples below.

Dataset Summary

This dataset is designed for research in emotional AI, sentiment analysis, and psychological modeling. It enables researchers to develop and evaluate models that can predict human emotional responses to various life situations with fine-grained precision.

Key Features:

  • 516 scenarios with diverse emotional contexts
  • Fine-grained integer scale (0-100) for precise misery measurement
  • Minimal preprocessing to preserve emotional texture
  • Balanced distribution across misery levels
  • Categorized events for structured analysis

Dataset Structure

Data Fields

  • Ep # (string): Source episode identifier (e.g., "1x01", "2x03")
  • Misery (string): A short English-language description of a miserable situation
  • Score (int): Numeric label indicating misery level (0-100 scale)
  • VNTO (string): Content type flag (T=Text, V=Video, N=News, O=Other, P=Punishment)
  • Reward (int): Reward value from original game show context (0-15000)
  • Win (string): Win/loss indicator (y/n)
  • Comments (string): Additional comments, notes, or source information
  • question_tag (string): Question categorization tag
  • level (string): Difficulty or context level
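To get a feel for these fields, a short filter over the raw columns helps. The snippet below is a sketch that assumes the pandas DataFrame `df` built in the usage examples further down:

# Text-only scenarios (VNTO == 'T') that are flagged as wins (Win == 'y')
text_wins = df[(df['VNTO'] == 'T') & (df['Win'] == 'y')]
print(len(text_wins))
print(text_wins[['Misery', 'Score']].head())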

Data Splits

The dataset contains a single train split with 516 examples.

Loading the Dataset

from datasets import load_dataset

# Method 1: Using the datasets library (preferred)
dataset = load_dataset("path/to/misery-index")
print(dataset["train"][0])
# Example record:
# {
#   'Ep #': '1x01',
#   'Misery': 'You Send a Nude Selfie to HR by mistake',
#   'Score': 70,
#   'VNTO': 'T',
#   'Reward': 0,
#   'Win': '',
#   'Comments': '',
#   'question_tag': '1_base',
#   'level': ''
# }

# Method 2: Direct CSV loading (if the dataset viewer is unavailable)
import pandas as pd
from huggingface_hub import hf_hub_download

file_path = hf_hub_download("path/to/misery-index", "Misery_Data.csv", repo_type="dataset")
df = pd.read_csv(file_path)
print(df.head())

Dataset Statistics

  • Total examples: 516
  • Mean misery score: 56.45
  • Standard deviation: 17.59
  • Score range: 11-100
  • Percentiles:
    • 25th: 43
    • 50th: 56
    • 75th: 69
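These figures are straightforward to reproduce with pandas (again assuming `df` from the usage examples below):

# Summary statistics for the misery scores
print(df['Score'].describe(percentiles=[0.25, 0.5, 0.75]))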

Category Distribution

  1. Other/Miscellaneous: 26.4%
  2. Family or Relationship Issues: 16.3%
  3. Accidents or Mishaps: 15.3%
  4. Medical Emergencies: ~10%
  5. Embarrassment: ~8%
  6. Physical Injury: ~7%
  7. Animal-related Incidents: ~6%
  8. Crime or Legal Trouble: <5%
  9. Professional/Work-related: <5%
  10. Gross/Disgusting Events: <5%
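Note that these category labels come from the paper's analysis rather than from a column in the dataset itself. As a rough proxy, you can inspect the distribution of the question_tag field (a sketch, assuming `df` from the usage examples below):

print(df['question_tag'].value_counts(normalize=True).round(3))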

Research Paper & Code

📄 Paper: "Leveraging Large Language Models for Predictive Analysis of Human Misery"

💻 Code Repository: GitHub - Misery_Data_Exps_GitHub

Data Sources

The data was aggregated from three primary sources:

  1. Misery Index blog curated by Bobby MGSK
  2. Jericho Blog consolidated dataset
  3. Associated Google Spreadsheet with structured entries

Usage Examples

Basic Loading and Exploration

from datasets import load_dataset
import pandas as pd

# Load the dataset
dataset = load_dataset("path/to/misery-index")

# Convert to pandas for analysis
df = dataset["train"].to_pandas()

# Basic statistics
print(f"Dataset size: {len(df)}")
print(f"Average misery score: {df['Score'].mean():.2f}")
print(f"Score range: {df['Score'].min()}-{df['Score'].max()}")

# Sample scenarios by misery level
print("\nLow misery scenarios:")
print(df[df['Score'] < 30]['Misery'].head())

print("\nHigh misery scenarios:")
print(df[df['Score'] > 80]['Misery'].head())
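A histogram gives a quick view of how the scores are spread; this sketch assumes matplotlib is installed:

import matplotlib.pyplot as plt

df['Score'].hist(bins=20)
plt.xlabel('Misery score')
plt.ylabel('Number of scenarios')
plt.title('Distribution of misery scores')
plt.show()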

Regression Task

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Prepare features and targets (assumes `df` from the basic loading example above)
X = df['Misery'].values
y = df['Score'].values

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Vectorize text
vectorizer = TfidfVectorizer(max_features=1000, stop_words='english')
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train model
model = LinearRegression()
model.fit(X_train_vec, y_train)

# Predict and evaluate
y_pred = model.predict(X_test_vec)
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.2f}")

Classification Task (Binned Misery Levels)

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# Bin the 0-100 scores into three coarse misery levels
def bin_misery(score):
    if score < 33:
        return "Low"
    elif score < 67:
        return "Medium"
    else:
        return "High"

df['misery_level'] = df['Score'].apply(bin_misery)

# Classification pipeline (assumes `df` from the basic loading example above)
X = df['Misery'].values
y = df['misery_level'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a fresh vectorizer on this split rather than reusing the regression one
vectorizer = TfidfVectorizer(max_features=1000, stop_words='english')
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train_vec, y_train)

accuracy = clf.score(X_test_vec, y_test)
print(f"Classification Accuracy: {accuracy:.2f}")

Applications

This dataset is valuable for:

  1. Emotion Recognition: Developing models to predict emotional responses to textual scenarios
  2. Psychological Research: Understanding factors that contribute to human distress
  3. Content Moderation: Identifying potentially distressing content
  4. Mental Health Applications: Building tools for emotional support and intervention
  5. Game Design: Creating balanced difficulty curves in narrative games
  6. Educational Tools: Teaching empathy and emotional intelligence

Research Findings

The associated research paper demonstrates several key findings:

  • Few-shot prompting significantly outperforms zero-shot approaches for misery prediction
  • GPT-4o achieved the highest performance with 61.79% accuracy in structured evaluation
  • Binary comparisons are easier for LLMs than precise scalar predictions (74.9% vs 43.1% accuracy)
  • Retrieval-augmented prompting using BERT embeddings improves prediction quality
  • Feedback-driven adaptation enables models to refine predictions iteratively

The "Misery Game Show" evaluation framework introduced in the paper provides a novel approach to testing LLM capabilities in emotional reasoning tasks.

Ethical Considerations

  • Content Warning: Dataset contains descriptions of distressing scenarios including accidents, medical emergencies, and personal tragedies
  • Subjectivity: Misery scores reflect subjective human judgments and may vary across cultures and individuals
  • Bias: Original data sources may contain demographic or cultural biases
  • Use Responsibly: Should not be used to cause distress or for malicious purposes

Limitations

  1. Cultural Bias: Ratings may reflect Western cultural perspectives on distress
  2. Temporal Bias: Scenarios reflect contemporary (2010s-2020s) life situations
  3. Subjectivity: Individual misery perceptions may vary significantly
  4. Limited Scope: May not cover all possible distressing scenarios
  5. Language: English-only content limits cross-cultural applicability

Citation

If you use this dataset in your research, please cite:

@article{seal2025leveraging,
  title={Leveraging Large Language Models for Predictive Analysis of Human Misery},
  author={Seal, Bishanka and Seetharaman, Rahul and Bansal, Aman and Nandy, Abhilash},
  journal={arXiv preprint arXiv:2508.12669},
  year={2025},
  url={https://arxiv.org/abs/2508.12669}
}

For the dataset specifically:

@dataset{misery_index_2025,
  title={Misery Index Dataset: Textual Scenarios with Emotional Distress Ratings},
  author={Seal, Bishanka and Seetharaman, Rahul and Bansal, Aman and Nandy, Abhilash},
  year={2025},
  url={https://huggingface.co/datasets/path/to/misery-index},
  note={Dataset of 516 scenarios with misery ratings from 0-100},
  howpublished={Hugging Face Datasets}
}

License

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

Dataset Card Contact

For questions or issues regarding this dataset, please open an issue or contact the dataset maintainers.