TRL provides judges to easily compare two completions.
Make sure you have installed the required dependencies by running:

```bash
pip install trl[llm_judge]
```
To define your own judge, subclass `BaseJudge` and implement the `BaseJudge.judge()` method, which returns a list of 0s and 1s indicating which completion in each pair is better (0 for the first completion, 1 for the second). Here is a dummy example that defines a simple judge favoring longer completions:
```python
from trl import BaseJudge


class LengthBasedJudge(BaseJudge):
    def judge(self, prompts, completion_pairs, shuffle_order=False):
        return [0 if len(c1) > len(c2) else 1 for c1, c2 in completion_pairs]
```
You can then use this judge as follows:
```python
judge = LengthBasedJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completion_pairs=[["Paris", "The capital of France is Paris."], ["Jupiter is the biggest planet in the solar system.", "Jupiter"]],
)  # Outputs: [1, 0]
```
TRL also provides a `BaseAPIJudge` class for defining judges that interact with an API. Subclass `BaseAPIJudge` and implement the `BaseAPIJudge.get_response()` method, which should return the response from the API. For an example, see the `HuggingFaceJudge` class.
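The `max_tries` and `max_workers` parameters that appear in the API judge constructors below hint at the general shape of such a judge: build one request per prompt, send the requests in parallel, and retry until a valid verdict comes back. Here is a self-contained sketch of that pattern; the class and helper names are illustrative, not TRL's actual implementation:

```python
import concurrent.futures
import random


class SketchAPIJudge:
    """Illustrative judge that queries an 'API' per prompt, with retries and parallelism."""

    def __init__(self, max_tries=5, max_workers=8):
        self.max_tries = max_tries
        self.max_workers = max_workers

    def get_response(self, content):
        # A real subclass would call an API here; this mock just answers "0" or "1".
        return random.choice(["0", "1"])

    def _judge_one(self, content):
        # Retry up to max_tries times until the API returns a valid verdict.
        for _ in range(self.max_tries):
            response = self.get_response(content)
            if response in ("0", "1"):
                return int(response)
        return 0  # fall back to the first completion if every try fails

    def judge(self, prompts, completion_pairs):
        contents = [
            f"Prompt: {p}\nCompletion 0: {c1}\nCompletion 1: {c2}\nWhich is better? Answer 0 or 1."
            for p, (c1, c2) in zip(prompts, completion_pairs)
        ]
        # Send up to max_workers requests in parallel, preserving input order.
        with concurrent.futures.ThreadPoolExecutor(max_workers=self.max_workers) as pool:
            return list(pool.map(self._judge_one, contents))


judge = SketchAPIJudge()
print(judge.judge(
    prompts=["What is the capital of France?"],
    completion_pairs=[["Paris", "Marseille"]],
))  # a one-element list containing 0 or 1
```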
Base class for LLM judges.
Example:
```python
import random


class MockJudge(BaseJudge):
    def judge(self, prompts, completion_pairs, shuffle_order=True):
        return [random.choice([0, 1]) for _ in range(len(prompts))]


judge = MockJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the capital of Germany?"],
    completion_pairs=[["Paris", "Marseille"], ["Munich", "Berlin"]]
)  # [0, 0]
```
`judge(prompts: List, completion_pairs: List, shuffle_order: bool = True)`
Judge the completion pairs for the given prompts.
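The `shuffle_order` flag exists because LLM judges often exhibit position bias, preferring whichever completion appears first. A common mitigation, sketched below under the assumption that this is what the flag implements (TRL's internals may differ), is to randomly swap each pair before judging and flip the affected verdicts back afterwards:

```python
import random


def judge_with_shuffling(judge_fn, prompts, completion_pairs, seed=None):
    """Swap each pair at random, judge, then map verdicts back to the original order."""
    rng = random.Random(seed)
    flipped = [rng.random() < 0.5 for _ in completion_pairs]
    shuffled = [
        pair[::-1] if flip else list(pair)
        for pair, flip in zip(completion_pairs, flipped)
    ]
    raw = judge_fn(prompts, shuffled)
    # If a pair was swapped, a verdict of 0 refers to the original index 1, and vice versa.
    return [1 - r if flip else r for r, flip in zip(raw, flipped)]


# A deterministic judge that always prefers the longer completion:
longer = lambda prompts, pairs: [0 if len(c1) > len(c2) else 1 for c1, c2 in pairs]

print(judge_with_shuffling(
    longer,
    prompts=["What is the capital of France?"],
    completion_pairs=[["Paris", "The capital of France is Paris."]],
    seed=0,
))  # [1] regardless of how the pair was shuffled
```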
`BaseAPIJudge(system_prompt: Optional = None, max_tries: int = 5, max_workers: int = 8)`
Base class for LLM judges reached via an API. Subclasses should implement the `get_response` method to interact with the API.
Example:
```python
import random


class MockAPIJudge(BaseAPIJudge):
    def get_response(self, content):
        return random.choice(["0", "1"])


judge = MockAPIJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the capital of Germany?"],
    completion_pairs=[["Paris", "Marseille"], ["Munich", "Berlin"]]
)  # [1, 1]
```
Get the response from the API for the given content.
`HuggingFaceJudge(model = 'meta-llama/Meta-Llama-3-70B-Instruct', system_prompt: Optional = None, max_tries: int = 5, max_workers: int = 8, token: Optional = None)`

Parameters

- **model** (`str`, *optional*) — The model to use for the judge. Defaults to `"meta-llama/Meta-Llama-3-70B-Instruct"`.
- **system_prompt** (`str`, *optional*) — The system prompt to be used for the judge. If not provided, a default prompt is used.
- **max_tries** (`int`, *optional*) — The maximum number of retries for a request. Defaults to 5.
- **max_workers** (`int`, *optional*) — The maximum number of parallel requests. Defaults to 8.
- **token** (`str`, *optional*) — The Hugging Face API token to use for the `InferenceClient`.

Judge based on the Hugging Face API.
`(system_prompt: Optional = None, max_tries: int = 5, max_workers: int = 8)`
Mock judge that returns a random choice instead of interacting with an API.
Mock judge that randomly selects a model for each completion pair.
`(model = 'gpt-4-turbo-preview', system_prompt: Optional = None, max_tries: int = 5, max_workers: int = 8)`

Parameters

- **model** (`str`, *optional*) — The model to use for the judge. Defaults to `"gpt-4-turbo-preview"`.
- **system_prompt** (`str`, *optional*) — The system prompt to be used for the judge. If not provided, a default prompt is used.
- **max_tries** (`int`, *optional*) — The maximum number of retries for a request. Defaults to 5.
- **max_workers** (`int`, *optional*) — The maximum number of parallel requests. Defaults to 8.

Judge based on the OpenAI API.
LLM judge based on the PairRM model from AllenAI.