---
license: apache-2.0
pipeline_tag: text-generation
language:
  - en
tags:
  - pretrained
inference:
  parameters:
    temperature: 0.7
---

# Mistral YARN 128k 11b

This is a mergekit merge of Nous Research's Yarn-Mistral-7b-128k Large Language Model (LLM), producing an 11 billion parameter pretrained generative text model with a 128k token context window.
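
Merges of this kind are usually produced with a mergekit `passthrough` configuration that stacks overlapping layer ranges from the same base model. The sketch below is illustrative only; the specific layer ranges and dtype are assumptions, not the exact recipe used for this model.

```yaml
# Illustrative mergekit config (assumed layer ranges, not the verified recipe).
# Duplicating overlapping slices of the 32-layer 7B model yields roughly 11B parameters.
slices:
  - sources:
      - model: NousResearch/Yarn-Mistral-7b-128k
        layer_range: [0, 24]
  - sources:
      - model: NousResearch/Yarn-Mistral-7b-128k
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```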