Responsible-AI-Moderation Model

Introduction

The Moderation Model module acts as a central hub for the machine learning models behind prompt injection, toxicity, jailbreak, restricted topic, custom theme, and refusal checks. It provides the endpoints to retrieve the responses generated by these models.

Features

The Moderation Model module acts as a wrapper around the traditional AI models used for the various checks, such as prompt injection, jailbreak, and toxicity.

Installation

To run the application, first we need to install Python and the necessary packages:

  1. Install Python (version >= 3.9 and < 3.12) from the official website and ensure it is added to your system PATH.

  2. Clone the responsible-ai-ModerationModel repository:

    git clone <repository-url>
    
  3. Navigate to the responsible-ai-ModerationModel directory:

    cd responsible-ai-ModerationModel
    
  4. Create a virtual environment:

    python -m venv venv
    
  5. Activate the virtual environment:

    • On Windows:
      .\venv\Scripts\activate

    • On Linux/macOS:
      source venv/bin/activate
      
  6. Go to the requirements directory, where the requirement.txt file is present. In requirement.txt, comment out the following line:

    lib/torch-2.2.0+cu118-cp39-cp39-linux_x86_64.whl
    

    Note: Download the torch wheel that matches your installed Python version (e.g. if your Python version is 3.10, use torch-2.2.0+cu118-cp310-cp310-linux_x86_64.whl, where cp310 denotes Python 3.10 and linux denotes the OS, which can be linux or win; no wheel is available for macOS).

    Note: The path above targets Linux. If working on Windows, replace

    lib/
    

    with

    ../lib/
    

Note: If working on macOS, run the below command after installing requirement.txt:

```sh
pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```

Download the en_core_web_lg-3.5.0-py3-none-any.whl file, place it inside the lib folder, and install the requirements:

```sh
pip install -r requirement.txt
```

Note: When running pip install on requirement.txt, if you get an error related to "cuda-python", comment out cuda-python in the requirement.txt file and run pip install again. Install the fastapi library as well, using the following command:

```sh
pip install fastapi
```

Set Configuration Variables

After installing all the required packages, configure the variables necessary to run the APIs.

  1. Navigate to the src directory:

    cd ..
    
  2. Locate the .env file, which contains keys like the following:

workers=1
WORKERS="${workers}"
# DB_NAME="${dbname}"
# DB_USERNAME="${username}"
# DB_PWD="${password}"
# DB_IP="${ipaddress}"
# DB_PORT="${port}"
# MONGO_PATH="mongodb://${DB_USERNAME}:${DB_PWD}@${DB_IP}:${DB_PORT}/"
# MONGO_PATH= "mongodb://localhost:27017/"
  3. Replace the placeholders with your actual values.
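For illustration only, a filled-in .env for a local MongoDB might look like the sketch below. Every value is a placeholder, not a real credential, and the DB_* / MONGO_PATH keys only need to be uncommented and set if the service is configured to use MongoDB:

```sh
workers=1
WORKERS="${workers}"
DB_NAME="rai_moderation"      # placeholder database name
DB_USERNAME="rai_user"        # placeholder username
DB_PWD="changeme"             # placeholder password
DB_IP="127.0.0.1"
DB_PORT="27017"
MONGO_PATH="mongodb://${DB_USERNAME}:${DB_PWD}@${DB_IP}:${DB_PORT}/"
```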

Models Required

The following models are required to run the application. Download all the model files from the links provided, and place them in the folders named below.

  1. Prompt Injection: the files required are model.safetensors, config.json, tokenizer_config.json, tokenizer.json, and special_tokens_map.json. Name the folder 'dbertaInjection'.

  2. Restricted Topic: the files required are model.safetensors, added_tokens.json, config.json, special_tokens_map.json, spm.model, tokenizer.json, and tokenizer_config.json. Name the folder 'restricted-dberta-base-zeroshot-v2'.

  3. Sentence Transformer Model: the files required are the 1_Pooling folder, pytorch_model.bin, vocab.txt, tokenizer.json, tokenizer_config.json, special_tokens_map.json, sentence_bert_config.json, modules.json, config.json, and config_sentence_transformers.json. Name the folder 'multi-qa-mpnet-base-dot-v1'.

  4. Detoxify: the files required are vocab.json, tokenizer.json, merges.txt, and config.json. Also download the model checkpoint file and keep it in this folder as toxic_model_ckpt_file. Name the folder 'detoxify'.

Place the above folders in a folder named 'models' in the following way: 'responsible-ai-mm-flask/models'.
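Assuming the folder names above, the expected layout can be created up front with the sketch below; the downloaded model files then go inside each folder. (The models parent directory may sit under a differently named repository root on your machine.)

```sh
# Create the expected models/ layout (folder names taken from the steps above).
mkdir -p models/dbertaInjection \
         models/restricted-dberta-base-zeroshot-v2 \
         models/multi-qa-mpnet-base-dot-v1 \
         models/detoxify
ls models
```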

Running the Application

Once we have completed all the aforementioned steps, we can start the service.

  1. Navigate to the src directory:

  2. Run main.py file:

    python main.py
    
  3. Use the port number (PORT_NO) configured in the .env file and open the following URL in your browser:

    http://localhost:<PORT_NO>/rai/v1/raimoderationmodels/docs

Note: To address the issue where the Passport Number is not recognized in Privacy, change the "piiEntitiesToBeRedacted" field in privacy() in the service.py file (line no: 98) from None to an empty list []. This adjustment ensures that the Passport Number is correctly identified.

License

The source code for the project is licensed under the MIT license, which you can find in the LICENSE.txt file.

Contact

If you have more questions or need further insights please feel free to connect with us @ [email protected]
