File and commit listing for the repository: it contains app.py, README.md, requirements.txt, and pipeline.pkl, and the recent commit messages are "Update README.md" (85b5203, verified), "Upload pipeline.pkl", "Update app.py", and "Update requirements.txt". The listing shows small sizes for the text files (40 Bytes, 258 Bytes, 644 Bytes, and 1.22 kB), while pipeline.pkl itself is 559 MB.
The Hub's pickle scanner flagged pipeline.pkl with 32 detected imports:
- "tokenizers.models.Model",
- "torch.nn.modules.container.ModuleList",
- "torch.nn.modules.linear.Linear",
- "tokenizers.AddedToken",
- "builtins.set",
- "tokenizers.Tokenizer",
- "transformers.models.bart.modeling_bart.BartEncoderLayer",
- "torch._utils._rebuild_tensor_v2",
- "transformers.activations.GELUActivation",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.bart.modeling_bart.BartAttention",
- "transformers.models.bart.modeling_bart.BartEncoder",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.device",
- "sklearn.linear_model._logistic.LogisticRegression",
- "torch.storage._load_from_bytes",
- "torch.float32",
- "whatlies.language._hftransformers_lang.HFTransformersLanguage",
- "transformers.models.bart.modeling_bart.BartDecoderLayer",
- "collections.OrderedDict",
- "transformers.pipelines.feature_extraction.FeatureExtractionPipeline",
- "numpy.ndarray",
- "numpy.dtype",
- "sklearn.pipeline.Pipeline",
- "transformers.models.bart.modeling_bart.BartModel",
- "torch._utils._rebuild_parameter",
- "transformers.models.bart.configuration_bart.BartConfig",
- "transformers.models.bart.modeling_bart.BartLearnedPositionalEmbedding",
- "torch._C._nn.gelu",
- "transformers.models.bart.tokenization_bart_fast.BartTokenizerFast",
- "transformers.models.bart.modeling_bart.BartDecoder",
- "joblib.numpy_pickle.NumpyArrayWrapper"
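The presence of app.py and requirements.txt alongside the pickled model suggests a Space that loads pipeline.pkl and serves predictions behind a small web UI. The real app.py is not part of this listing; the following is only a plausible shape for such an app, assuming Gradio as the UI framework (the prediction interface and use of predict_proba are likewise assumptions).

```python
# Hypothetical app.py for a Space that serves the pickled pipeline.
# Gradio, the interface layout, and predict_proba are assumptions; the
# actual app.py is not shown in this listing.
import gradio as gr
import joblib

# Load the scikit-learn pipeline uploaded as pipeline.pkl.
pipeline = joblib.load("pipeline.pkl")

def classify(text: str) -> dict:
    # predict_proba is available because the final step is a LogisticRegression.
    probas = pipeline.predict_proba([text])[0]
    return {str(label): float(p) for label, p in zip(pipeline.classes_, probas)}

demo = gr.Interface(fn=classify, inputs="text", outputs="label")

if __name__ == "__main__":
    demo.launch()
```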