This fine-tuning project targets bidirectional translation between Icelandic and English.

Primary Model: Meta-LLaMA 3.1 (8B)

I selected Meta-LLaMA 3.1 (8B) as the primary model for its strong multilingual text generation capabilities. Although its training does not focus explicitly on Icelandic, its broad multilingual pre-training and highly optimized transformer architecture make it a strong candidate for fine-tuning on Icelandic-English translation. Its ability to generalize across languages supports high-quality, context-aware translations, and at 8 billion parameters it balances capability with computational efficiency, making it well suited to large-scale translation tasks.

Secondary Model: Gemma 2 (9B)

Gemma 2 (9B) is the secondary model because of its focus on multilingual understanding and generation. Trained on a wide array of diverse, high-quality datasets, it has demonstrated strong performance across many languages, including low-resource ones such as Icelandic. With 9 billion parameters, its architecture offers additional capacity for capturing intricate linguistic patterns and nuance, making it well suited to fine-tuning tasks that demand precise translation. Its scalability lets it handle complex translation contexts, complementing Meta-LLaMA 3.1's strengths.

Rationale for Using Both Models

Combining Meta-LLaMA 3.1 (8B) and Gemma 2 (9B) pairs Meta-LLaMA's generative power and adaptability with Gemma 2's nuanced multilingual capabilities. Fine-tuning both models on the same Icelandic-English parallel dataset lets us evaluate their respective strengths and weaknesses, so the final translation solution is robust, accurate, and contextually aware.
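To make the plan concrete, here is a minimal sketch of how either model could be fine-tuned on Icelandic-English pairs with parameter-efficient LoRA adapters, using the Hugging Face transformers, peft, and datasets libraries. Everything beyond the two model names is an assumption for illustration: pairs.jsonl is a hypothetical local file of {"en": ..., "is": ...} sentence pairs, and the hyperparameters are placeholder starting points, not tuned values from this project.

```python
# Hedged sketch: LoRA fine-tuning of one candidate model on a hypothetical
# Icelandic-English parallel corpus (pairs.jsonl with "en"/"is" fields).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL_ID = "meta-llama/Llama-3.1-8B"  # swap in "google/gemma-2-9b" for the secondary model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

# Train small low-rank adapters instead of all 8-9B parameters.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

def to_example(row):
    # Frame translation as instruction-style causal LM training.
    text = (f"Translate English to Icelandic.\n"
            f"English: {row['en']}\nIcelandic: {row['is']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("json", data_files="pairs.jsonl", split="train")
dataset = dataset.map(to_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama31-is-en-lora",
                           per_device_train_batch_size=2,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           bf16=True,
                           logging_steps=50),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) labels with padding masked out.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Running the same script twice, once per MODEL_ID, keeps the data pipeline and hyperparameters identical, so any quality difference can be attributed to the models themselves.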
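Since the rationale above hinges on evaluating the two fine-tuned models against each other, a simple head-to-head comparison could score both on a held-out set with corpus BLEU via sacrebleu. This is a sketch under stated assumptions: the checkpoint directories and test.jsonl are hypothetical placeholders, and each LoRA adapter is assumed to have been merged into its base model (e.g. with peft's merge_and_unload()) and saved as a full checkpoint before loading.

```python
# Hedged sketch: compare the two fine-tuned checkpoints with corpus BLEU.
# Paths and test.jsonl are placeholders; checkpoints are assumed merged.
import json
import sacrebleu
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def translate(model_dir, sources):
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(
        model_dir, torch_dtype=torch.bfloat16, device_map="auto")
    hypotheses = []
    for src in sources:
        prompt = f"Translate English to Icelandic.\nEnglish: {src}\nIcelandic:"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
        # Decode only the newly generated tokens after the prompt.
        text = tokenizer.decode(ids[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
        hypotheses.append(text.strip())
    return hypotheses

test = [json.loads(line) for line in open("test.jsonl")]
sources = [ex["en"] for ex in test]
references = [ex["is"] for ex in test]

for name, path in [("Meta-LLaMA 3.1 (8B)", "llama31-is-en-lora"),
                   ("Gemma 2 (9B)", "gemma2-is-en-lora")]:
    bleu = sacrebleu.corpus_bleu(translate(path, sources), [references])
    print(f"{name}: BLEU = {bleu.score:.2f}")
```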