Model Card for LOGIC-NL-CLIF-Flan-T5-Large

This model is part of a family of Logic Language Models (LLMs) fine-tuned to answer logic problems, particularly syllogisms consisting of premises and a conclusion. It is based on Flan-T5-Large and fine-tuned on inputs in Natural Language (NL) and Common Logic Interchange Format (CLIF).

Model Details

Model Description

  • Developed by: Hanna Abi Akl
  • Model type: Logic Language Model
  • Language(s) (NLP): English + CLIF
  • License: MIT
  • Finetuned from model: Flan-T5-Large

Uses

Direct Use

This model is intended for use on logical datasets of premises and conclusions. It answers True, False, or Uncertain to logic problems.
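A minimal inference sketch with the Hugging Face `transformers` library is shown below. The prompt template in `build_prompt` is an assumption for illustration; align it with the format the model was actually trained on.

```python
def build_prompt(premises, conclusion):
    """Format a syllogism as a single input string.
    NOTE: this template is an assumption, not the documented training format."""
    return ("Premises: " + " ".join(premises)
            + " Conclusion: " + conclusion
            + " Is the conclusion True, False, or Uncertain?")

def classify(premises, conclusion,
             model_id="HannaAbiAkl/LOGIC-NL-CLIF-Flan-T5-Large"):
    """Run one logic problem through the model and return its verdict."""
    # Imported here so the prompt helper works without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    inputs = tokenizer(build_prompt(premises, conclusion), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Expected outputs are one of the three labels (True, False, or Uncertain) decoded as text.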

Downstream Use

The model can be fine-tuned on logical datasets similar to FOLIO.

Training Details

Training Data

  • FOLIO
  • SemEval 2026 Task 11.1

Training Procedure

Supervised Fine-Tuning (SFT)
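For SFT on a sequence-to-sequence model like Flan-T5, each logic problem has to be serialized into a (source, target) text pair. The helper below is a sketch under assumptions: the field layout and the optional CLIF suffix are illustrative, not the documented training format.

```python
def to_sft_pair(premises, conclusion, label, clif=None):
    """Serialize one logic problem into a (source, target) pair for
    sequence-to-sequence SFT. The template here is an assumption;
    match it to the format the model was actually trained on."""
    source = "Premises: " + " ".join(premises) + " Conclusion: " + conclusion
    if clif:
        # Optionally append the CLIF rendering of the same problem.
        source += " CLIF: " + clif
    target = label  # one of "True", "False", "Uncertain"
    return source, target

src, tgt = to_sft_pair(
    ["All men are mortal.", "Socrates is a man."],
    "Socrates is mortal.",
    "True",
)
```

Pairs produced this way can be tokenized and fed to a standard seq2seq trainer.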

Speeds, Sizes, Times

Training was performed on an A100 GPU.

Testing Data, Factors & Metrics

Testing Data

  • SemEval 2026 Task 11.1

Metrics

  • Accuracy
  • Precision
  • Recall
  • F1
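The four metrics above can be computed over the three-way True/False/Uncertain labels as follows; precision, recall, and F1 are macro-averaged here, which is an assumption about the evaluation protocol.

```python
LABELS = ("True", "False", "Uncertain")

def evaluate(gold, pred):
    """Accuracy plus macro-averaged precision, recall, and F1
    over the three-way True/False/Uncertain label set."""
    assert len(gold) == len(pred) and gold
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    precisions, recalls, f1s = [], [], []
    for label in LABELS:
        tp = sum(g == p == label for g, p in zip(gold, pred))
        fp = sum(p == label and g != label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(LABELS)
    return {"accuracy": accuracy,
            "precision": sum(precisions) / n,
            "recall": sum(recalls) / n,
            "f1": sum(f1s) / n}
```

Equivalent results can be obtained with `sklearn.metrics.precision_recall_fscore_support(average="macro")`.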

Environmental Impact

  • Hardware Type: A100 GPU
  • Hours used: 2
  • Cloud Provider: Google Cloud
  • Carbon Emitted: 0.14 kg CO2eq