Add Llama-2-7B rank 256 (f3b8098)

caldera-rank-256-4B-factors-downdate-RHT-ft.pt · 4.42 GB · Add Llama-2-7B rank 256
Detected Pickle imports (29):
- "torch.nn.modules.container.ModuleList",
- "transformers.models.llama.modeling_llama.LlamaDecoderLayer",
- "lib.codebook.latticee8_padded12_rvq4bit.E8P12RVQ4B_codebook",
- "torch.FloatStorage",
- "caldera.decomposition.quantized_layer.CalderaQuantizedLinear",
- "torch.nn.modules.linear.Linear",
- "transformers.models.llama.modeling_llama.LlamaRMSNorm",
- "torch._utils._rebuild_tensor_v2",
- "torch.BFloat16Storage",
- "torch.nn.modules.activation.SiLU",
- "torch._utils._rebuild_parameter",
- "transformers.models.llama.modeling_llama.LlamaForCausalLM",
- "torch.BoolStorage",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.llama.modeling_llama.LlamaMLP",
- "lib.codebook.latticee8_padded12.E8P12_codebook",
- "torch.LongStorage",
- "caldera.decomposition.quantized_layer.LatticeQuantizedParameter",
- "torch.int64",
- "transformers.models.llama.configuration_llama.LlamaConfig",
- "transformers.generation.configuration_utils.GenerationConfig",
- "collections.OrderedDict",
- "__builtin__.set",
- "torch.HalfStorage",
- "transformers.models.llama.modeling_llama.LlamaRotaryEmbedding",
- "transformers.models.llama.modeling_llama.LlamaSdpaAttention",
- "transformers.models.llama.modeling_llama.LlamaModel",
- "torch.float16",
- "torch.IntStorage"

caldera-rank-256-4B-factors-downdate-no-RHT-ft.pt · 4.42 GB · Add Llama-2-7B rank 256
Detected Pickle imports (29): the same 29 entries as the RHT-ft checkpoint above; a minimal loading sketch for these pickled checkpoints follows the listing.
- "lib.codebook.latticee8_padded12_rvq4bit.E8P12RVQ4B_codebook",
- "torch.nn.modules.activation.SiLU",
- "transformers.models.llama.modeling_llama.LlamaRotaryEmbedding",
- "torch._utils._rebuild_parameter",
- "transformers.generation.configuration_utils.GenerationConfig",
- "transformers.models.llama.modeling_llama.LlamaSdpaAttention",
- "transformers.models.llama.modeling_llama.LlamaRMSNorm",
- "caldera.decomposition.quantized_layer.CalderaQuantizedLinear",
- "transformers.models.llama.modeling_llama.LlamaModel",
- "lib.codebook.latticee8_padded12.E8P12_codebook",
- "caldera.decomposition.quantized_layer.LatticeQuantizedParameter",
- "__builtin__.set",
- "transformers.models.llama.modeling_llama.LlamaMLP",
- "torch.HalfStorage",
- "collections.OrderedDict",
- "torch.LongStorage",
- "torch.nn.modules.sparse.Embedding",
- "torch.FloatStorage",
- "transformers.models.llama.modeling_llama.LlamaForCausalLM",
- "transformers.models.llama.configuration_llama.LlamaConfig",
- "torch.nn.modules.container.ModuleList",
- "torch._utils._rebuild_tensor_v2",
- "torch.int64",
- "torch.BFloat16Storage",
- "torch.nn.modules.linear.Linear",
- "torch.BoolStorage",
- "transformers.models.llama.modeling_llama.LlamaDecoderLayer",
- "torch.float16",
- "torch.IntStorage"

- · 685 Bytes · Add Llama-2-7B rank 256
- · 183 Bytes · Add Llama-2-7B rank 256
- · 414 Bytes · Add Llama-2-7B rank 256
- · 1.84 MB · Add Llama-2-7B rank 256
- · 918 Bytes · Add Llama-2-7B rank 256
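
The import list above indicates that the two .pt files are fully pickled LlamaForCausalLM objects whose linear layers use CALDERA's CalderaQuantizedLinear and lattice codebook classes, not plain state dicts. The sketch below is an assumption based on that list, not the repository's documented workflow: it guesses that torch.load can rebuild the model once the CALDERA source tree (which defines the caldera.* and lib.codebook.* modules) is importable; the path, file choice, and CPU placement are illustrative.

```python
# Minimal loading sketch (an assumption, not an official CALDERA API).
import sys

import torch

# Hypothetical path: point this at a checkout of the CALDERA code so pickle
# can resolve caldera.decomposition.quantized_layer and lib.codebook classes.
sys.path.insert(0, "/path/to/caldera")

# weights_only=False is required because the checkpoint stores arbitrary
# Python objects, not just tensors; only do this for checkpoints you trust.
model = torch.load(
    "caldera-rank-256-4B-factors-downdate-RHT-ft.pt",
    map_location="cpu",
    weights_only=False,
)
model.eval()
print(type(model))  # expected: transformers.models.llama.modeling_llama.LlamaForCausalLM
```

If unpickling fails with a ModuleNotFoundError naming one of the imports above (for example caldera.decomposition.quantized_layer or lib.codebook.latticee8_padded12), the corresponding package is not on sys.path; on PyTorch 2.6 and later the weights_only=True default must be overridden explicitly, as shown.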