text2frappe-s2-flan-field
ChangAI: Stage 2 (Field Selection with Flan-T5)
This model is part of the ChangAI pipeline for converting natural-language ERP questions into executable Frappe SQL queries.
- Stage 1 (Doctype Detection): text2frappe-s1-roberta
- Stage 2a (Field Ranking): text2frappe-s2-sbert
- Stage 2b (Field Selection, this model): text2frappe-s2-flan-field
- Stage 3 (Query Generation): text2frappe-s3-flan-query
What it does
Given a Doctype and a user question, this model selects the most relevant fields from the Frappe metadata that should be used to answer the query.
It acts as the decision maker after field ranking:
- Takes the ranked fields from SBERT
- Filters & finalizes which fields to use
- Ensures required defaults like `name` are included when needed
This helps Stage 3 (Query Generator) build valid, executable SQL.
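The prompt this stage consumes can be assembled from the Stage 2a ranking with a small helper. This is an illustrative sketch: the function name and exact formatting are assumptions that mirror the input format shown below, not part of the released pipeline code.

```python
def build_prompt(doctype: str, question: str, ranked_fields: list[str]) -> str:
    """Format a Stage 2b prompt from SBERT-ranked candidate fields (Stage 2a).

    Hypothetical convenience helper; the layout follows the model card's
    documented input format (question + doctype + candidate fields).
    """
    fields = ", ".join(ranked_fields)
    return (
        f"Doctype: {doctype}\n"
        f"Question: {question}\n"
        f"Candidate fields: [{fields}]"
    )
```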
Model Architecture
- Base model: google/flan-t5-base
- Fine-tuned task: conditional generation (seq2seq)
- Input format: JSON-like text containing question + doctype + candidate fields
- Output format: a comma-separated list of selected fields
Example
Input:

```
Doctype: Sales Invoice
Question: show overdue invoices with customer name
Top fields: [name, posting_date, due_date, customer_name, outstanding_amount, company, status]
```

Output:

```
name, customer_name, due_date, outstanding_amount
```
Training Data
Synthetic + curated dataset based on ERPNext Doctype metadata
Training samples follow this structure:
- Anchor: question + doctype
- Positives: required fields
- Negatives: irrelevant fields
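A single training example in this anchor/positives/negatives layout might look like the sketch below. The key names and nesting are illustrative assumptions; the released dataset's actual schema is not shown in this card.

```python
# Hypothetical training sample; key names are illustrative,
# not the dataset's actual schema.
sample = {
    "anchor": {
        "question": "show overdue invoices with customer name",
        "doctype": "Sales Invoice",
    },
    # Fields required to answer the question
    "positives": ["name", "customer_name", "due_date", "outstanding_amount"],
    # Valid metadata fields that are irrelevant to this question
    "negatives": ["company", "posting_date", "status"],
}
```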
Rules enforced:
- Non-count queries always include `name`
- Only valid metadata fields are used
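These same rules can be enforced at inference time with a small post-processing step. The sketch below assumes the model's comma-separated output and a set of valid metadata fields for the doctype; the function name and signature are hypothetical, not part of the released code.

```python
def finalize_fields(model_output: str, valid_fields: set[str],
                    is_count_query: bool = False) -> list[str]:
    """Parse the model's comma-separated output and enforce the card's rules:
    keep only valid metadata fields, and add `name` for non-count queries."""
    selected = [f.strip() for f in model_output.split(",")
                if f.strip() in valid_fields]
    if not is_count_query and "name" not in selected:
        selected.insert(0, "name")
    return selected
```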
Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "hyrinmansoor/text2frappe-s2-flan-field"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inp = """Doctype: Sales Invoice
Question: show overdue invoices with customer name
Candidate fields: [name, posting_date, due_date, customer_name, outstanding_amount, company, status]"""

inputs = tokenizer(inp, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# -> "name, customer_name, due_date, outstanding_amount"
```
Related Models
- Doctype Detection: text2frappe-s1-roberta
- Field Ranking: text2frappe-s2-sbert
- Query Generation: text2frappe-s3-flan-query
Notes
- Works best with ERPNext v14+ doctypes
- Can be extended to custom doctypes by retraining with updated metadata
- Part of the ChangAI open-source project: plain-language ERP queries → SQL