---
license: apache-2.0
library_name: transformers
---

# Laser-Dolphin-Mixtral-4x7b-dpo

![laser-dolphin-image](laser_dolphin_image.png)

Credit to Fernando Fernandes and Eric Hartford for their project [laserRMT](https://github.com/cognitivecomputations/laserRMT).

This model is a medium-sized mixture-of-experts (MoE) implementation based on [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser).
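For readers unfamiliar with the architecture, the sketch below illustrates Mixtral-style sparse routing: each token is sent to its top-2 of four experts, and the expert outputs are combined using the renormalized router weights. This is a simplified illustration only; the class name, hidden/FFN sizes, and routing details are assumptions for exposition, not this model's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Illustrative Mixtral-style MoE layer: 4 experts, top-2 routing (sizes are assumptions)."""

    def __init__(self, hidden_size=4096, ffn_size=14336, num_experts=4, top_k=2):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, ffn_size),
                nn.SiLU(),
                nn.Linear(ffn_size, hidden_size),
            )
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, hidden_size)
        scores = F.softmax(self.gate(x), dim=-1)               # router probabilities per expert
        weights, expert_idx = scores.topk(self.top_k, dim=-1)  # top-2 experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the selected experts
        out = torch.zeros_like(x)
        # Each token is processed only by its selected experts
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```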
The process is outlined in this [notebook](https://github.com/cognitivecomputations/laserRMT/blob/main/examples/laser-dolphin-mixtral-2x7b.ipynb).
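The core idea behind laser-style rank reduction is to replace selected weight matrices with low-rank approximations obtained from an SVD. Below is a minimal sketch of that single operation; the function name and the fixed `keep_fraction` parameter are assumptions for illustration, and laserRMT itself uses random matrix theory to decide which layers and ranks to reduce.

```python
import torch


def low_rank_approximation(weight: torch.Tensor, keep_fraction: float = 0.9) -> torch.Tensor:
    """Keep only the top singular components of a weight matrix.

    Simplified illustration of SVD-based rank reduction; laserRMT selects
    layers and ranks via random matrix theory rather than a fixed fraction.
    """
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(keep_fraction * S.numel()))  # number of singular values to keep
    return (U[:, :k] * S[:k]) @ Vh[:k, :]       # rank-k reconstruction
```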
## Code Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate output tokens
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response


# Load the model and tokenizer
model_id = "macadeliccc/laser-dolphin-mixtral-4x7b-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

prompt = "Write a quicksort algorithm in python"

# Generate and print the response
print("Response:")
print(generate_response(prompt), "\n")
```
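Note that `load_in_4bit=True` requires the `bitsandbytes` package and a CUDA-capable GPU. On recent versions of transformers, the same 4-bit loading can be written more explicitly with `BitsAndBytesConfig`; a minimal sketch, assuming `bitsandbytes` is installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 while storing weights in 4-bit
)
model = AutoModelForCausalLM.from_pretrained(
    "macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available devices
)
```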
## Example output

**Prompt:**

can you write me a quicksort algorithm in python?

**Response:**

Sure, here's a quicksort algorithm implemented in Python:
```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```
This implementation uses the middle element of the array as the pivot.
It first checks if the array has one or fewer elements, in which case it is already sorted and can be returned as is.
Otherwise, it selects the pivot as the middle element of the array. Then, it partitions the array into three sub-arrays: elements less than the pivot, elements equal to the pivot, and elements greater than the pivot.
It recursively sorts the left and right sub-arrays and concatenates the results with the middle sub-array to obtain the final sorted array.
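A quick sanity check of the generated function:

```python
print(quicksort([3, 6, 8, 10, 1, 2, 1]))
# [1, 1, 2, 3, 6, 8, 10]
```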
## GGUF

Q4_K_M and Q5_K_M quants are available [here](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-4x7b-dpo-GGUF).

![quantizations](quantizations.png)
## Eval

**Model evaluated in 4bit**

----Benchmark Complete----
+ 2024-01-24 15:03:08
+ Time taken: 37.4 mins
+ Prompt Format: Mistral
+ Model: macadeliccc/laser-dolphin-mixtral-4x7b-dpo
+ Score (v2): 71.04
+ Parseable: 169.0

---------------
## Citations

Fernando Fernandes Neto and Eric Hartford. "Optimizing Large Language Models Using Layer-Selective Rank Reduction and Random Matrix Theory." 2024.

```bibtex
@article{sharma2023truth,
  title={The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction},
  author={Sharma, Pratyusha and Ash, Jordan T and Misra, Dipendra},
  journal={arXiv preprint arXiv:2312.13558},
  year={2023}
}
```

```bibtex
@article{gao2021framework,
  title={A framework for few-shot language model evaluation},
  author={Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and others},
  journal={Version v0.0.1. Sept},
  year={2021}
}
```