
This repository contains the LoRA adapter weights from fine-tuning the Llama 3 (8B) Instruct model on patent documents. It is optimized for generating embeddings from patent texts, which are useful for tasks such as classification, clustering, and retrieval. The model leverages domain-specific training with the second step of the LLM2Vec approach (unsupervised contrastive learning via SimCSE) to capture the language of patents, offering high-quality representations for patent analysis.
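
Because this repository ships only the LoRA adapter, it must be applied on top of an MNTP-adapted base model before encoding. Below is a minimal sketch using the llm2vec loader; the base-model path (`McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp`) and the sample patent texts are assumptions for illustration, not part of this repository:

```python
import torch
from llm2vec import LLM2Vec

# Apply this repository's unsupervised-SimCSE LoRA adapter on top of an
# MNTP-adapted Llama 3 (8B) Instruct base. The base path below is an
# assumption; substitute the MNTP checkpoint this adapter was trained from.
l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",  # assumed base
    peft_model_name_or_path="saroyehun/Llama3-8B-Instruct-mntp-unsup-simcse-patent",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

# Encode patent texts into dense embeddings (illustrative sample inputs).
patents = [
    "A method for wireless power transfer using resonant inductive coupling.",
    "A lipid nanoparticle composition for targeted drug delivery.",
]
embeddings = l2v.encode(patents).float()

# The embeddings can feed retrieval, clustering, or classifier features;
# e.g., a cosine-similarity retrieval score between the two texts:
score = torch.nn.functional.cosine_similarity(embeddings[0:1], embeddings[1:2])
print(embeddings.shape, score.item())
```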

Framework versions

  • PEFT 0.12.0
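
Since the adapter was saved with PEFT 0.12.0, older releases may fail to parse its config. A hedged, purely illustrative environment check (not part of the original card):

```python
from packaging.version import Version
import peft

# The adapter config was written by PEFT 0.12.0; this sanity check is
# illustrative, not a hard requirement from the repository.
assert Version(peft.__version__) >= Version("0.12.0"), peft.__version__
```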
