In this game, we want to run a sentence similarity model, I’m going to use all-MiniLM-L6-v2.
It’s a BERT Transformer model. It’s already trained so we can use it directly.
But here, I have two solutions to run it. Both are valid, but they come with different advantages and disadvantages. I can either:
1. Run the model on a remote server and send API calls from the game. I can use an API service to help deploy the model.
For instance, Hugging Face provides an API service called the Inference API (free for prototyping and experimentation) that allows you to use AI models via simple API calls. And we have a Unity plugin to access and use Hugging Face AI models from within Unity projects. Usually, you use an API when the model is too big to run on the player's machine, for instance a large model like Llama 2.
2. Run the model locally, on the player's machine. To do that, I use two libraries:
- Unity Sentis: the neural network inference library that allows us to run our AI model directly inside our game.
- The Hugging Face Sharp Transformers library: a Unity plugin with utilities to run 🤗 Transformers models in Unity games.
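To make the first option more concrete, here is a minimal sketch of what a sentence-similarity call to the Inference API could look like. This is an illustration, not the tutorial's actual code: the endpoint pattern and payload shape follow Hugging Face's documented sentence-similarity task format, and the token is a placeholder you would replace with your own.

```python
import json
import urllib.request

# Endpoint pattern for the Hugging Face Inference API (model ID appended to the base URL).
API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2"

def build_request(source_sentence, candidates, token="hf_xxx"):
    # The sentence-similarity task expects one source sentence and a list
    # of sentences to compare it against.
    payload = {"inputs": {"source_sentence": source_sentence, "sentences": candidates}}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )

req = build_request("Bring me the red cube", ["Fetch the red block", "Dance for me"])
# Sending the request returns one similarity score per candidate sentence:
# response = urllib.request.urlopen(req)  # requires network access and a valid token
```

In a Unity game you would issue the equivalent HTTP request from C# (for example with UnityWebRequest, or through the Hugging Face Unity plugin mentioned above); the payload is the same.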
Since the sentence similarity model we’re going to use is small, we decided to run it locally.
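Whichever way the model runs, the output works the same: all-MiniLM-L6-v2 maps each sentence to a 384-dimensional embedding, and two sentences are compared by the cosine similarity of their embeddings. A small sketch of that final comparison step, using toy vectors in place of real embeddings:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); values near 1 mean the
    # two sentences are semantically close.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for the model's 384-dimensional embeddings.
emb_a = [0.2, 0.8, 0.1]
emb_b = [0.25, 0.75, 0.05]
emb_c = [-0.9, 0.1, 0.4]

print(cosine_similarity(emb_a, emb_b))  # close to 1: similar
print(cosine_similarity(emb_a, emb_c))  # much lower: dissimilar
```

In the game, the player's command would be embedded and compared against the embeddings of the robot's known actions, and the highest-scoring action wins.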