pilancilab/CALDERA-compressed-models
CALDERA-compressed-models / Llama-2-7b-2Bit-4BitFactors-Rank64

1 contributor · History: 1 commit
nsagan · Add Llama-2-7b Rank 128 and Rank 64 models (d4812c1, about 2 months ago)
caldera-rank-64-4B-factors-downdate-RHT-ft.pt · pickle · 3.13 GB · LFS
Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
Detected pickle imports (28): collections.OrderedDict, lib.codebook.latticee8_padded12.E8P12_codebook, torch._utils._rebuild_tensor_v2, transformers.models.llama.modeling_llama.LlamaSdpaAttention, torch.HalfStorage, transformers.models.llama.modeling_llama.LlamaRMSNorm, transformers.models.llama.configuration_llama.LlamaConfig, transformers.models.llama.modeling_llama.LlamaDecoderLayer, torch.float16, caldera.decomposition.quantized_layer.CalderaQuantizedLinear, transformers.models.llama.modeling_llama.LlamaRotaryEmbedding, torch.nn.modules.activation.SiLU, torch.FloatStorage, torch.LongStorage, torch.nn.modules.linear.Linear, torch.nn.modules.sparse.Embedding, transformers.generation.configuration_utils.GenerationConfig, torch.nn.modules.container.ModuleList, torch.int64, transformers.models.llama.modeling_llama.LlamaMLP, torch.BoolStorage, transformers.models.llama.modeling_llama.LlamaModel, transformers.models.llama.modeling_llama.LlamaForCausalLM, torch._utils._rebuild_parameter, torch.BFloat16Storage, torch.IntStorage, caldera.decomposition.quantized_layer.LatticeQuantizedParameter, __builtin__.set
caldera-rank-64-4B-factors-downdate-no-RHT-ft.pt · pickle · 3.13 GB · LFS
Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
Detected pickle imports (28): identical to caldera-rank-64-4B-factors-downdate-RHT-ft.pt above; see the scanning sketch below.
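Both .pt checkpoints are Python pickles, which is why the hub lists their imports instead of marking them Safe. A list like the one above can be reproduced offline without executing anything in the file: a torch checkpoint is a zip archive whose data.pkl member holds the pickle stream, and its GLOBAL opcodes name every class that unpickling would instantiate. Below is a minimal sketch of such a scan; the local path is a placeholder, and it only handles the GLOBAL opcode used by pickle protocol 2 (torch's default), so protocol 4 files would additionally need STACK_GLOBAL handling.

```python
import pickletools
import zipfile

def list_pickle_imports(pt_path: str) -> set[str]:
    """Collect the module.name pairs a torch pickle would import on load.

    Scans the pickle opcode stream inside the checkpoint's zip archive;
    nothing in the file is executed.
    """
    with zipfile.ZipFile(pt_path) as zf:
        # torch.save stores the pickle stream as <archive_name>/data.pkl
        pkl_member = next(n for n in zf.namelist() if n.endswith("data.pkl"))
        stream = zf.read(pkl_member)
    imports = set()
    for opcode, arg, _pos in pickletools.genops(stream):
        if opcode.name == "GLOBAL":  # arg is "module qualname", space-separated
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
    return imports

# Placeholder path to a local download of one of the checkpoints above.
print(sorted(list_pickle_imports("caldera-rank-64-4B-factors-downdate-RHT-ft.pt")))
```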
config.json · Safe · 685 Bytes · Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
generation_config.json · Safe · 183 Bytes · Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
special_tokens_map.json · Safe · 414 Bytes · Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
tokenizer.json · Safe · 1.84 MB · Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
tokenizer_config.json · Safe · 918 Bytes · Add Llama-2-7b Rank 128 and Rank 64 models · about 2 months ago
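Because the checkpoints pickle whole model objects (including caldera.decomposition.quantized_layer.CalderaQuantizedLinear and the lib.codebook lattice codebook referenced in the import list), loading them requires that source code to be importable: pickle re-creates those classes by name. The tokenizer and config files in this folder are standard transformers artifacts. What follows is a minimal loading sketch under that assumption; the local directory path is a placeholder, weights_only=False is required on recent PyTorch because the file contains arbitrary classes (so only load checkpoints you trust), and in practice the lattice-quantized kernels may need a GPU.

```python
import torch
from transformers import AutoTokenizer

# Assumes the CALDERA repo (providing caldera.decomposition.quantized_layer
# and lib.codebook.latticee8_padded12) is on sys.path; unpickling fails with
# ModuleNotFoundError otherwise. LOCAL_DIR is a placeholder for a local
# download of this folder.
LOCAL_DIR = "Llama-2-7b-2Bit-4BitFactors-Rank64"

model = torch.load(
    f"{LOCAL_DIR}/caldera-rank-64-4B-factors-downdate-RHT-ft.pt",
    map_location="cpu",      # move to a GPU afterwards if the kernels need one
    weights_only=False,      # full-model pickle, not a plain state_dict
)
tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```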