---
library_name: mlc-llm
tags:
  - mlc-llm
  - web-llm
  - incremental-pretraining
  - sft
  - reinforcement-learning
  - roleplay
  - cot
language:
  - en
  - zh
base_model:
  - btaskel/Tifa-DeepsexV2-7b-MGRPO-safetensors
pipeline_tag: text-generation
---

These are MLC-converted weights of the Tifa-DeepsexV2-7b-MGRPO-safetensors model, quantized with the q4f16_1 scheme in MLC format.

The model can be used with the MLC-LLM and WebLLM projects.
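As a minimal sketch, the converted weights can be loaded with the MLC-LLM Python engine, which exposes an OpenAI-style chat completion API. The model path below is a placeholder assumption; substitute the actual repository ID or local path of this converted model.

```python
from mlc_llm import MLCEngine

# Placeholder: replace with the actual HF repo or local path of the q4f16_1 MLC weights.
model = "HF://<user>/Tifa-DeepsexV2-7b-MGRPO-q4f16_1-MLC"

# Create the engine; it follows the OpenAI chat completion interface.
engine = MLCEngine(model)

# Stream a chat completion from the model.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```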