AgGPT-9 and AgGPT-6m - Local AI Models

Overview

This repository contains two versions of the AgGPT model:

  • AgGPT-9: The full version of the AgGPT model, offering advanced AI capabilities for high-performance systems.
  • AgGPT-6m: A lighter version of AgGPT, optimized for devices with less computing power.

Both models are designed for local use and provide advanced AI assistance.

Features

  • AgGPT-9:

    • High-performance model optimized for more capable systems.
    • Suitable for intensive computational tasks.
    • Can process larger context windows and generate high-quality responses.
  • AgGPT-6m:

    • A lighter, more efficient version for resource-constrained devices.
    • Optimized for devices with lower memory and processing power.
    • Provides similar functionality to AgGPT-9, but with reduced resource demands.

Requirements

Before running the models, make sure your system meets the following requirements:

  • Python 3.11.7 or higher
  • llama_cpp Python bindings installed (typically pip install llama-cpp-python); see the usage sketch after the requirements lists below
  • A compatible device for running the models (refer to individual model requirements below)

AgGPT-9 Requirements:

  • High-performance machine
  • 8GB+ RAM
  • NPU recommended

AgGPT-6m Requirements:

  • 1GB+ RAM
  • Sufficient free storage for the GGUF model file
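
Usage

A minimal sketch of loading one of the models locally with the llama_cpp bindings. The file name, context size, and thread count below are illustrative assumptions, not official instructions from this repository; adjust them to the actual GGUF file you downloaded and to your hardware.

```python
from llama_cpp import Llama

# Load a GGUF model file from this repository.
# "aggpt-9.gguf" is a placeholder file name; substitute the actual file
# (e.g. the smaller AgGPT-6m file on low-memory devices).
llm = Llama(
    model_path="./aggpt-9.gguf",
    n_ctx=2048,      # context window; reduce on low-memory devices
    n_threads=4,     # CPU threads to use; adjust to your machine
)

# Run a simple completion and print the generated text.
output = llm(
    "Explain what AgGPT is in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```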

License

The software is released under the AgGPT-9 Software License Agreement. For full details, please refer to the LICENSE file.

Model Details

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama