Update README.md
README.md (changed)
````diff
@@ -11,9 +11,9 @@ tags:
 - fine-tuning
 ---
 
-# Model Card for MamaBot-Llama 
+# Model Card for MamaBot-Llama
 
-MamaBot-Llama 
+MamaBot-Llama is an opensource fine-tuned large language model developed by HelpMum to assist with maternal healthcare by providing accurate and reliable answers to questions about pregnancy and childbirth. The model has been fine-tuned on Llama 3.1 8b-instruct using a dataset of maternal healthcare questions and answers.
 
 ## Model Details
 
@@ -26,7 +26,7 @@ MamaBot-Llama-1 is an opensource fine-tuned large language model developed by He
 
 ### Model Sources
 
-- **Repository:** [MamaBot-Llama-1 on Hugging Face](https://huggingface.co/
+- **Repository:** [MamaBot-Llama-1 on Hugging Face](https://huggingface.co/HelpMumHQ/MamaBot-Llama)
 
 ## Uses
 
@@ -63,8 +63,8 @@ Use the code below to get started with the model.
 !pip install -q -U bitsandbytes
 
 from transformers import AutoModelForCausalLM, AutoTokenizer
-tokenizer = AutoTokenizer.from_pretrained('
-model = AutoModelForCausalLM.from_pretrained('
+tokenizer = AutoTokenizer.from_pretrained('HelpMumHQ/MamaBot-Llama')
+model = AutoModelForCausalLM.from_pretrained('HelpMumHQ/MamaBot-Llama')
 
 def generate_response(user_message):
     tokenizer.chat_template = "{%- for message in messages %}{{ bos_token + '[INST] ' + message['content'] + ' [/INST]' if message['role'] == 'user' else ' ' + message['content'] + ' ' + eos_token }}{%- endfor %}"
 
@@ -154,13 +154,13 @@ The training and inference were conducted using the Hugging Face Transformers li
   author = {HelpMum},
   title = {MamaBot-Llama-1},
   year = {2024},
-  howpublished = {\url{https://huggingface.co/
+  howpublished = {\url{https://huggingface.co/HelpMumHQ/MamaBot-Llama}},
 }
 ```
 
 **APA:**
 
-HelpMum. (2024). MamaBot-Llama-1. Retrieved from https://huggingface.co/
+HelpMum. (2024). MamaBot-Llama-1. Retrieved from https://huggingface.co/HelpMumHQ/MamaBot-Llama
 
 ## Model Card Contact
````
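The chat template assigned in the README's quick-start snippet wraps each user turn in Mistral-style `[INST]` tags and appends the EOS token after assistant turns. As a rough, dependency-free sketch of what that Jinja template renders (the `render_chat` name is my own, and `<s>`/`</s>` stand in for the tokenizer's actual `bos_token`/`eos_token`):

```python
def render_chat(messages, bos_token="<s>", eos_token="</s>"):
    """Mimic the README's Jinja chat template: user turns become
    BOS + '[INST] ... [/INST]'; other turns become ' ... ' + EOS."""
    out = []
    for message in messages:
        if message["role"] == "user":
            out.append(bos_token + "[INST] " + message["content"] + " [/INST]")
        else:
            out.append(" " + message["content"] + " " + eos_token)
    return "".join(out)

prompt = render_chat([
    {"role": "user", "content": "What foods are safe during pregnancy?"},
])
print(prompt)  # <s>[INST] What foods are safe during pregnancy? [/INST]
```

In practice you would not call such a helper yourself: once `tokenizer.chat_template` is set as in the snippet, `tokenizer.apply_chat_template(messages, ...)` performs this rendering before tokenization.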