Judges

TRL provides judges to easily compare two completions for the same prompt.

Make sure you have installed the required dependencies by running:

pip install trl[llm_judge]

Define your own judge

To define your own judge, you need to subclass BaseJudge and implement the BaseJudge.judge() method, which returns a list of 0s and 1s: 0 means the first completion of the pair is preferred, 1 means the second. Here is a dummy example where we define a simple judge that favors longer completions:

from trl import BaseJudge

class LengthBasedJudge(BaseJudge):
    def judge(self, prompts, completion_pairs, shuffle_order=False):
        # Prefer the first completion (0) when it is longer, otherwise the second (1).
        return [0 if len(c1) > len(c2) else 1 for c1, c2 in completion_pairs]

You can then use this judge as follows:

judge = LengthBasedJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completion_pairs=[["Paris", "The capital of France is Paris."], ["Jupiter is the biggest planet in the solar system.", "Jupiter"]],
)  # Outputs: [1, 0]

TRL also provides a BaseAPIJudge class that can be used to define judges that interact with an API. Subclass BaseAPIJudge and implement the BaseAPIJudge.get_response() method, which should return the raw response from the API. For a complete example, see the HuggingFaceJudge class.
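
For instance, a subclass could route the content through the Hugging Face Inference API. The sketch below is only an illustration, assuming huggingface_hub's InferenceClient; the model name and generation parameters are placeholder assumptions, not the actual HuggingFaceJudge implementation:

from huggingface_hub import InferenceClient
from trl import BaseAPIJudge

class InferenceAPIJudge(BaseAPIJudge):
    # Hypothetical judge; the model and parameters below are illustrative, not TRL defaults.
    def __init__(self, model="meta-llama/Meta-Llama-3-70B-Instruct", **kwargs):
        super().__init__(**kwargs)
        self.client = InferenceClient(model=model)

    def get_response(self, content):
        # Send the judge prompt to the API and return the raw text response.
        return self.client.text_generation(content, max_new_tokens=1)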

BaseJudge

class trl.BaseJudge()

Base class for LLM judges.

Example:

import random

from trl import BaseJudge

class MockJudge(BaseJudge):
    def judge(self, prompts, completion_pairs, shuffle_order=True):
        # Randomly prefer the first (0) or second (1) completion of each pair.
        return [random.choice([0, 1]) for _ in range(len(prompts))]

judge = MockJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the capital of Germany?"],
    completion_pairs=[["Paris", "Marseille"], ["Munich", "Berlin"]]
)  # [0, 0]

judge( prompts: List[str], completion_pairs: List[List[str]], shuffle_order: bool = True )

Parameters

  • prompts (List[str]) — List of prompts.
  • completion_pairs (List[List[str]]) — List of completion pairs, where each pair is a list of two strings.
  • shuffle_order (bool) — Whether to shuffle the order of the completion pairs, to avoid positional bias.

Judge the completion pairs for the given prompts.
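
As an illustrative call, shuffle_order can be passed explicitly; how the flag is honored is up to the concrete judge (the LengthBasedJudge defined above, for example, simply ignores it):

judge = LengthBasedJudge()
judge.judge(
    prompts=["Name a prime number."],
    completion_pairs=[["2", "Two is the smallest prime number."]],
    shuffle_order=False,
)  # [1], the second (longer) completion is preferred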

BaseAPIJudge

class trl.BaseAPIJudge( system_prompt: Optional[str] = None, max_tries: int = 5, max_workers: int = 8 )

Parameters

  • system_prompt (str, optional) — The system prompt to be used for the judge. If not provided, a default prompt is used.
  • max_tries (int, optional) — The maximum number of retries for a request. Defaults to 5.
  • max_workers (int, optional) — The maximum number of parallel requests. Defaults to 8.

Base class for LLM judges reached via an API.

Subclasses should implement the get_response() method to interact with the API.

Example:

import random

from trl import BaseAPIJudge

class MockAPIJudge(BaseAPIJudge):
    def get_response(self, content):
        # Return a random verdict as a string, mimicking an API response.
        return random.choice(["0", "1"])

judge = MockAPIJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the capital of Germany?"],
    completion_pairs=[["Paris", "Marseille"], ["Munich", "Berlin"]]
)  # [1, 1]

get_response( content: str )

Parameters

  • content (str) — The string content.

Get the response from the API for the given content.
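
As a minimal sketch of this contract, assuming a hypothetical HTTP endpoint that takes the judge prompt and answers with "0" or "1" as plain text:

import requests

from trl import BaseAPIJudge

class HTTPJudge(BaseAPIJudge):
    def get_response(self, content):
        # Hypothetical endpoint and payload format, for illustration only.
        response = requests.post("https://example.com/judge", json={"prompt": content}, timeout=30)
        response.raise_for_status()
        return response.text  # expected to be "0" or "1"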

HuggingFaceJudge

class trl.HuggingFaceJudge( model = 'meta-llama/Meta-Llama-3-70B-Instruct', system_prompt: Optional[str] = None, max_tries: int = 5, max_workers: int = 8, token: Optional[str] = None )

Parameters

  • model (str, optional) — The model to use for the judge. Defaults to “meta-llama/Meta-Llama-3-70B-Instruct”.
  • system_prompt (str, optional) — The system prompt to be used for the judge. If not provided, a default prompt is used.
  • max_tries (int, optional) — The maximum number of retries for a request. Defaults to 5.
  • max_workers (int, optional) — The maximum number of parallel requests. Defaults to 8.
  • token (str, optional) — The Hugging Face API token to use for the InferenceClient.

Judge based on the Hugging Face API.
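
A usage sketch, assuming a valid Hugging Face API token is available (passed via token or picked up from the environment); the returned judgments depend on the model:

from trl import HuggingFaceJudge

judge = HuggingFaceJudge()  # defaults to meta-llama/Meta-Llama-3-70B-Instruct
judge.judge(
    prompts=["What is the capital of France?"],
    completion_pairs=[["Paris", "Lyon"]],
)  # a list of 0/1 indices, e.g. [0]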

MockAPIJudge

class trl.MockAPIJudge( system_prompt: Optional[str] = None, max_tries: int = 5, max_workers: int = 8 )

Mock judge that returns a random choice instead of interacting with an API.

MockJudge

class trl.MockJudge()

Mock judge that randomly selects one of the two completions in each pair.

OpenAIJudge

class trl.OpenAIJudge( model = 'gpt-4-turbo-preview', system_prompt: Optional[str] = None, max_tries: int = 5, max_workers: int = 8 )

Parameters

  • model (str, optional) — The model to use for the judge. Defaults to “gpt-4-turbo-preview”.
  • system_prompt (str, optional) — The system prompt to be used for the judge. If not provided, a default prompt is used.
  • max_tries (int, optional) — The maximum number of retries for a request. Defaults to 5.
  • max_workers (int, optional) — The maximum number of parallel requests. Defaults to 8.

Judge based on the OpenAI API.
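
Usage mirrors the other API judges; the sketch below assumes the openai package is installed and the OPENAI_API_KEY environment variable is set:

from trl import OpenAIJudge

judge = OpenAIJudge(model="gpt-4-turbo-preview")
judge.judge(
    prompts=["What is the capital of Germany?"],
    completion_pairs=[["Berlin", "Munich"]],
)  # a list of 0/1 indices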

PairRMJudge

class trl.PairRMJudge()

LLM judge based on the PairRM model from AllenAI.

See: https://huggingface.co/llm-blender/PairRM
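
A usage sketch; PairRM runs locally, so this assumes the extra dependencies from the installation note above are available and the model weights can be downloaded:

from trl import PairRMJudge

judge = PairRMJudge()
judge.judge(
    prompts=["What is the capital of France?"],
    completion_pairs=[["Paris", "Marseille"]],
)  # a list of 0/1 indices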
