distilbart-12-6-hybrid-2048-in-tuned-0109-1254

This model is a fine-tuned version of broadfield-dev/distilbart-12-6-hybrid-2048-in on the broadfield-dev/abisee_cnn_dailymail_sum_512-Broadfield dataset.

Training Details

  • Task: SEQ_2_SEQ_LM
  • Epochs: 4
  • Learning Rate: 2e-05
  • Gradient Accumulation Steps: 4
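
For reference, below is a minimal sketch of how these hyperparameters map onto transformers' Seq2SeqTrainingArguments. Only the epochs, learning rate, and gradient accumulation steps come from this card; the output directory, batch size, and predict_with_generate setting are illustrative assumptions.

from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
)

base = "broadfield-dev/distilbart-12-6-hybrid-2048-in"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

args = Seq2SeqTrainingArguments(
    output_dir="distilbart-tuned",   # assumption: not recorded on the card
    num_train_epochs=4,              # from the card
    learning_rate=2e-5,              # from the card
    gradient_accumulation_steps=4,   # from the card
    per_device_train_batch_size=4,   # assumption
    predict_with_generate=True,      # typical for SEQ_2_SEQ_LM fine-tuning
)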

Entity Labels

The model config ships with the placeholder labels ['LABEL_0', 'LABEL_1', 'LABEL_2']. These appear to be auto-generated config defaults and are not used by the summarization task.

Usage

from transformers import pipeline

summarizer = pipeline("summarization", model="broadfield-dev/distilbart-12-6-hybrid-2048-in-tuned-0109-1254")

text = "Your long text here..."
# The pipeline returns a list of dicts; truncation=True guards against overlong inputs.
print(summarizer(text, truncation=True)[0]["summary_text"])
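
The base model's name suggests a 2048-token input window; that is inferred from the name rather than documented here, so truncation=True is a sensible safeguard when summarizing long articles.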
Model Details

  • Format: Safetensors
  • Parameters: ~0.3B
  • Tensor type: F32
