Daniel Bustamante Ospina
New bidirectional model
- .idea (db62c91) App for dog recognition (pet similarity)
- 1.48 kB initial commit
- 43 Bytes Changes in the model loading
- 273 Bytes Update README.md
- 3.03 kB New bidirectional model
- 693 Bytes Changes in the model loading
- 991 kB New bidirectional model
- 768 kB New bidirectional model
- 31 Bytes App for dog recognition (pet similarity)
- 1.96 kB App for dog recognition (pet similarity)
vit_model_complete.pt - Detected Pickle imports (26):
- "transformers.models.clip.modeling_clip.CLIPEncoderLayer",
- "torch.LongStorage",
- "torch._utils._rebuild_tensor_v2",
- "transformers.models.clip.configuration_clip.CLIPTextConfig",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.float32",
- "torch._utils._rebuild_parameter",
- "torch.nn.modules.container.ModuleList",
- "torch._C._nn.gelu",
- "torch.nn.modules.conv.Conv2d",
- "__builtin__.set",
- "transformers.models.clip.modeling_clip.CLIPTextTransformer",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.clip.modeling_clip.CLIPTextEmbeddings",
- "transformers.models.clip.modeling_clip.CLIPAttention",
- "torch.nn.modules.linear.Linear",
- "transformers.models.clip.modeling_clip.CLIPVisionEmbeddings",
- "transformers.models.clip.configuration_clip.CLIPConfig",
- "transformers.models.clip.modeling_clip.CLIPEncoder",
- "transformers.models.clip.modeling_clip.CLIPModel",
- "transformers.models.clip.configuration_clip.CLIPVisionConfig",
- "collections.OrderedDict",
- "transformers.activations.GELUActivation",
- "transformers.models.clip.modeling_clip.CLIPMLP",
- "transformers.models.clip.modeling_clip.CLIPVisionTransformer",
- "torch.FloatStorage"
How to fix it?
10.2 GB - Changes in the model loading

vit_processor_complete.pt - Detected Pickle imports (7):
- "tokenizers.models.Model",
- "_codecs.encode",
- "transformers.models.clip.processing_clip.CLIPProcessor",
- "tokenizers.AddedToken",
- "tokenizers.Tokenizer",
- "transformers.models.clip.image_processing_clip.CLIPImageProcessor",
- "transformers.models.clip.tokenization_clip_fast.CLIPTokenizerFast"
How to fix it?
1.53 MB - Changes in the model loading