---
license: apache-2.0
library_name: transformers
base_model:
  - EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
datasets:
  - jondurbin/gutenberg-dpo-v0.1
  - nbeerbower/gutenberg2-dpo
  - nbeerbower/gutenberg-moderne-dpo
---

## Quantization

Quantized using the default exllamav2 quantization script and calibration dataset, with the following changes:

- Context lengths for both the calibration and quantization phases were forced to 8192, as the script does not respect CLI overrides for these by default and simply uses 512/2048.
- Fewer calibration rows were used, but with the longer context, substantially more data overall.
- A few rows from an "extra" dataset, containing examples of long, coherent text wrapped in this model's chat tokens, were added to the calibration data.

The goal is less degradation from quantization at long context. That said, I tried to stay as close to the default exl2 quantization parameters as possible, as straying too far from them only seems to degrade performance. A rough sketch of this kind of setup is shown below.
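As a minimal sketch only: the snippet below builds a small "extra" calibration parquet of long, chat-formatted text and invokes exllamav2's `convert.py` with longer calibration/measurement lengths. The paths, bitrate, and row counts are hypothetical placeholders, and note that the card describes adding extra rows to the default dataset (which requires editing the script), whereas passing `-c` as shown would replace the default calibration data entirely.

```python
# Sketch only: build a small "extra" calibration parquet with a long,
# chat-formatted row and run exllamav2's convert.py against it.
# Paths, bitrate, and row counts are hypothetical placeholders.
import subprocess

import pandas as pd

# A long, coherent sample wrapped in Qwen2.5 / ChatML-style chat tokens.
long_sample = (
    "<|im_start|>user\nWrite a long, slow-paced story about a lighthouse keeper.<|im_end|>\n"
    "<|im_start|>assistant\n" + "The lamp turned, and the sea answered. " * 400 + "<|im_end|>"
)

# exllamav2 reads calibration parquet files from a "text" column.
pd.DataFrame({"text": [long_sample]}).to_parquet("calibration_extra.parquet")

# The card describes adding these rows to the *default* dataset, which means
# editing the script; passing -c as below would instead replace the default
# calibration data entirely.
subprocess.run(
    [
        "python", "convert.py",
        "-i", "EVA-Gutenberg3-Qwen2.5-32B",        # unquantized model dir (placeholder)
        "-o", "work",                              # scratch/working dir
        "-cf", "EVA-Gutenberg3-Qwen2.5-32B-exl2",  # compiled output dir
        "-b", "5.0",                               # bits per weight (example value)
        "-c", "calibration_extra.parquet",         # custom calibration parquet
        "-l", "8192", "-ml", "8192",               # calibration / measurement lengths
        "-r", "40", "-mr", "16",                   # row counts (example values)
    ],
    check=True,
)
```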


# EVA-Gutenberg3-Qwen2.5-32B

EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

## Method

ORPO tuned on 8x A100 GPUs for 2 epochs.
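This is not the original training code, but a minimal sketch of what an ORPO run over these datasets can look like with Hugging Face TRL's `ORPOTrainer`. It assumes the three datasets share `prompt`/`chosen`/`rejected` columns; every hyperparameter other than the 2 epochs is illustrative, and the actual 8x A100 run would use a multi-GPU launcher rather than this single-process call.

```python
# Sketch only: a minimal ORPO run with Hugging Face TRL.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Keep only the prompt/chosen/rejected columns the three sets have in common.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg-moderne-dpo", split="train").select_columns(cols),
])

args = ORPOConfig(
    output_dir="EVA-Gutenberg3-Qwen2.5-32B",
    num_train_epochs=2,               # matches the 2 epochs noted above
    per_device_train_batch_size=1,    # illustrative
    gradient_accumulation_steps=8,    # illustrative
    learning_rate=5e-6,               # illustrative
    bf16=True,
    beta=0.1,                         # ORPO preference-loss weight (illustrative)
    max_length=8192,                  # illustrative
    max_prompt_length=2048,           # illustrative
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    processing_class=tokenizer,       # "tokenizer=" in older TRL releases
)
trainer.train()
```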