Detected Pickle imports (23) in model.pt:
- "transformers.generation.configuration_utils.GenerationConfig",
- "__builtin__.set",
- "transformers.models.mistral.modeling_mistral.MistralDecoderLayer",
- "transformers.models.mistral.modeling_mistral.MistralMLP",
- "transformers.models.mistral.modeling_mistral.MistralRotaryEmbedding",
- "torch.FloatStorage",
- "quanto.nn.qlinear.QLinear",
- "torch._utils._rebuild_parameter",
- "quanto.tensor.qtype.qtype",
- "transformers.models.mistral.modeling_mistral.MistralForCausalLM",
- "torch._utils._rebuild_tensor_v2",
- "transformers.models.mistral.modeling_mistral.MistralModel",
- "collections.OrderedDict",
- "transformers.models.mistral.configuration_mistral.MistralConfig",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.mistral.modeling_mistral.MistralSdpaAttention",
- "torch.float16",
- "torch.nn.modules.activation.SiLU",
- "torch.device",
- "transformers.models.mistral.modeling_mistral.MistralRMSNorm",
- "torch.HalfStorage",
- "torch.nn.modules.container.ModuleList",
- "torch.float8_e4m3fn"
How can I fix this?
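If the goal is to stop the Hub's pickle scanner from flagging model.pt, one common approach is to replace the pickled checkpoint with a safetensors one. A minimal sketch, assuming a trusted local copy of model.pt (loading a pickle executes code stored in the file) and a placeholder output directory:

```python
import torch

# Loading a pickled checkpoint runs code embedded in the file, so only do this
# with a file you trust. weights_only=False is needed because model.pt stores a
# full MistralForCausalLM object rather than a plain state dict.
model = torch.load("model.pt", map_location="cpu", weights_only=False)

# Re-save with safetensors serialization; .safetensors files are not
# pickle-based and are not flagged by the scanner. "safetensors_export" is a
# placeholder directory name.
model.save_pretrained("safetensors_export", safe_serialization=True)
```

Note that the import list shows the checkpoint was quantized with quanto (quanto.nn.qlinear.QLinear), so quanto must be installed for torch.load to reconstruct those modules, and the quantized tensors may need extra handling before a safetensors export works cleanly; an alternative is to re-quantize from the original Mistral weights and publish only the safetensors output.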