Dataset schema as shown in the viewer (ranges are the viewer's min/max; ⌀ marks a nullable column):

| Column | Type | Range / values |
|---|---|---|
| paper_id | string | length 10 to 10 |
| title | string | length 5 to 214 |
| abstract | string | length 1 to 4.31k |
| full_text | string | 1 distinct value |
| authors | string | length 5 to 1.22k |
| decision | string | 6 distinct values |
| year | int64 | 2.03k to 2.03k |
| api_raw_submission | string | length 1.16k to 7.03k |
| review | string | length 0 to 28.2k |
| reviewer_id | string | length 3 to 53 |
| rating | float64 | 0 to 10 ⌀ |
| confidence | float64 | 1 to 5 ⌀ |
| api_raw_review | string | length 2 to 29.3k |
| criteria_count | dict | per-category counts plus "total" |
| reward_value | float64 | -10 to 4 |
| reward_value_length_adjusted | float64 | -749.35 to 3.41 |
| length_penalty | float64 | 0 to 750 |
| reward_u | float64 | 0 to 4 |
| reward_h | float64 | 0 to 0.18 |
| meteor_score | float64 | 0 to 0 |
| criticism | float64 | 0 to 1 |
| example | float64 | 0 to 1 |
| importance_and_relevance | float64 | 0 to 1 |
| materials_and_methods | float64 | 0 to 1 |
| praise | float64 | 0 to 0.71 |
| presentation_and_reporting | float64 | 0 to 1 |
| results_and_discussion | float64 | 0 to 1 |
| suggestion_and_solution | float64 | 0 to 1 |
| dimension_scores | dict | per-category scores in [0, 1] |
| overall_score | float64 | -10 to 4 |
| source | string | 1 distinct value |
| review_src | string | 1 distinct value |
| relative_rank | int64 | 0 to 0 |
| win_prob | float64 | 0 to 0 |
| thinking_trace | string | 1 distinct value |
| prompt | string | 1 distinct value |
| prompt_length | int64 | 0 to 0 |
| conversations | null | — |
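These records can be loaded programmatically. A minimal sketch with the Hugging Face `datasets` library; the repository id and split name below are placeholders, since neither is shown in this preview:

```python
from datasets import load_dataset

# Placeholder repository id and split -- neither is visible in this preview.
ds = load_dataset("your-org/iclr2026-review-rewards", split="train")

row = ds[0]
print(row["paper_id"], row["reviewer_id"])  # e.g. "IKJyRyHpHV" and an OpenReview reviewer path
print(row["rating"], row["confidence"])     # rating (0 to 10, nullable), confidence (1 to 5, nullable)
print(row["criteria_count"])                # per-category counts plus a "total" key
print(row["dimension_scores"])              # the same categories as float scores in [0, 1]
```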
**IKJyRyHpHV · Reviewer_nKHM**
- title: Revisiting Multilingual Data Mixtures in Language Model Pretraining
- abstract: The impact of different multilingual data mixtures in pretraining large language models (LLMs) has been a topic of ongoing debate, often raising concerns about potential trade-offs between language coverage and model performance (i.e., the curse of multilinguality). In this work, we investigate these assumptions by tra...
- authors: Negar Foroutan, Paul Teiletche, Ayush Kumar Tarun, Antoine Bosselut
- decision: Reject; year: 2026
- api_raw_submission: {"id": "IKJyRyHpHV", "forum": "IKJyRyHpHV", "content": {"title": {"value": "Revisiting Multilingual Data Mixtures in Language Model Pretraining"}, "authors": {"value": ["Negar Foroutan", "Paul Teiletche", "Ayush Kumar Tarun", "Antoine Bosselut"]}, "authorids": {"value": ["~Negar_Foroutan1", "~Paul_Teiletche1", "~Ayush_...
- review: [Summary]: This paper trains a number of multilingual language models to investigate several previously documented assumptions about how multilingual data mixtures affect a model’s quality. In particular, the paper: * Trains multilingual models with a fixed amount of non-English data, but while varying the amount of En...
- reviewer_id: ICLR.cc/2026/Conference/Submission25639/Reviewer_nKHM; rating: 6; confidence: 3
- api_raw_review: {"id": "Yj1eC3sxbR", "forum": "IKJyRyHpHV", "replyto": "IKJyRyHpHV", "content": {"summary": {"value": "This paper trains a number of multilingual language models to investigate several previously documented assumptions about how multilingual data mixtures affect a model\u2019s quality. In particular, the paper:\n* Trai...
- criteria_count: {"criticism": 6, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 12, "praise": 1, "presentation_and_reporting": 16, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 33}
- reward_value: 1.181818; reward_value_length_adjusted: -5.131154; length_penalty: 6.312972; reward_u: 1.181818; reward_h: 0.016364; meteor_score: 0
- per-dimension columns: criticism 0.181818, example 0, importance_and_relevance 0, materials_and_methods 0.363636, praise 0.030303, presentation_and_reporting 0.484848, results_and_discussion 0.090909, suggestion_and_solution 0.030303
- dimension_scores: {"criticism": 0.18181818181818182, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0.36363636363636365, "praise": 0.030303030303030304, "presentation_and_reporting": 0.48484848484848486, "results_and_discussion": 0.09090909090909091, "suggestion_and_solution": 0.030303030303030304...
- overall_score: 1.181818; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**IKJyRyHpHV · Reviewer_GiHe**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first IKJyRyHpHV record above
- review: [Summary]: The paper primarily studies the impact of multilingual data mixtures during pretraining under various conditions on downstream performance. It challenges several previously held beliefs about multilingual pretraining, showing that (i) when both pivot languages and less represented languages are present in su...
- reviewer_id: ICLR.cc/2026/Conference/Submission25639/Reviewer_GiHe; rating: 8; confidence: 4
- api_raw_review: {"id": "QQz7HSnLU2", "forum": "IKJyRyHpHV", "replyto": "IKJyRyHpHV", "content": {"summary": {"value": "The paper primarily studies the impact of multilingual data mixtures during pretraining under various conditions on downstream performance. It challenges several previously held beliefs about multilingual pretraining,...
- criteria_count: {"criticism": 2, "example": 3, "importance_and_relevance": 4, "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 5, "results_and_discussion": 2, "suggestion_and_solution": 4, "total": 22}
- reward_value: 1.545455; reward_value_length_adjusted: 0.267078; length_penalty: 1.278377; reward_u: 1.545455; reward_h: 0.060909; meteor_score: 0
- per-dimension columns: criticism 0.090909, example 0.136364, importance_and_relevance 0.181818, materials_and_methods 0.454545, praise 0.181818, presentation_and_reporting 0.227273, results_and_discussion 0.090909, suggestion_and_solution 0.181818
- dimension_scores: {"criticism": 0.09090909090909091, "example": 0.13636363636363635, "importance_and_relevance": 0.18181818181818182, "materials_and_methods": 0.45454545454545453, "praise": 0.18181818181818182, "presentation_and_reporting": 0.22727272727272727, "results_and_discussion": 0.09090909090909091, "suggestion_...
- overall_score: 1.545455; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**IKJyRyHpHV · Reviewer_Vkmh**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first IKJyRyHpHV record above
- review: [Summary]: This paper investigates some questions related to multilingual data mixtures for LLM pre-training by training 1–3B parameter LLMs on multilingual corpora covering 25–400 languages. The examined assumptions and corresponding findings are: 1\. More English data comes at the cost of performance in other lang...
- reviewer_id: ICLR.cc/2026/Conference/Submission25639/Reviewer_Vkmh; rating: 4; confidence: 4
- api_raw_review: {"id": "cIbaLoXOzW", "forum": "IKJyRyHpHV", "replyto": "IKJyRyHpHV", "content": {"summary": {"value": "This paper investigates some questions related to multilingual data mixtures for LLM pre-training by training 1\u20133B parameter LLMs on multilingual corpora covering 25\u2013400 languages.\n\nThe examined assumption...
- criteria_count: {"criticism": 7, "example": 1, "importance_and_relevance": 6, "materials_and_methods": 27, "praise": 1, "presentation_and_reporting": 14, "results_and_discussion": 10, "suggestion_and_solution": 6, "total": 47}
- reward_value: 1.531915; reward_value_length_adjusted: -16.712574; length_penalty: 18.244489; reward_u: 1.531915; reward_h: 0.047234; meteor_score: 0
- per-dimension columns: criticism 0.148936, example 0.021277, importance_and_relevance 0.12766, materials_and_methods 0.574468, praise 0.021277, presentation_and_reporting 0.297872, results_and_discussion 0.212766, suggestion_and_solution 0.12766
- dimension_scores: {"criticism": 0.14893617021276595, "example": 0.02127659574468085, "importance_and_relevance": 0.1276595744680851, "materials_and_methods": 0.574468085106383, "praise": 0.02127659574468085, "presentation_and_reporting": 0.2978723404255319, "results_and_discussion": 0.2127659574468085, "suggestion_and_s...
- overall_score: 1.531915; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**IKJyRyHpHV · Reviewer_CfNz**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first IKJyRyHpHV record above
- review: [Summary]: The paper investigates how multilingual data mixtures affect LLM pretraining, focusing on the "curse of multilinguality." The authors train 1B–3B parameter models with up to 400 languages using the mC4 and FineWeb2 datasets. They test four key assumptions: 1. English hurts multilinguality: Increasing Englis...
- reviewer_id: ICLR.cc/2026/Conference/Submission25639/Reviewer_CfNz; rating: 4; confidence: 4
- api_raw_review: {"id": "IA4ZvikrjE", "forum": "IKJyRyHpHV", "replyto": "IKJyRyHpHV", "content": {"summary": {"value": "The paper investigates how multilingual data mixtures affect LLM pretraining, focusing on the \"curse of multilinguality.\" The authors train 1B\u20133B parameter models with up to 400 languages using the mC4 and Fine...
- criteria_count: {"criticism": 7, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 18, "praise": 3, "presentation_and_reporting": 15, "results_and_discussion": 7, "suggestion_and_solution": 5, "total": 37}
- reward_value: 1.540541; reward_value_length_adjusted: -7.550139; length_penalty: 9.09068; reward_u: 1.540541; reward_h: 0.041081; meteor_score: 0
- per-dimension columns: criticism 0.189189, example 0, importance_and_relevance 0.054054, materials_and_methods 0.486486, praise 0.081081, presentation_and_reporting 0.405405, results_and_discussion 0.189189, suggestion_and_solution 0.135135
- dimension_scores: {"criticism": 0.1891891891891892, "example": 0, "importance_and_relevance": 0.05405405405405406, "materials_and_methods": 0.4864864864864865, "praise": 0.08108108108108109, "presentation_and_reporting": 0.40540540540540543, "results_and_discussion": 0.1891891891891892, "suggestion_and_solution": 0.1351...
- overall_score: 1.540541; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**kiVIVBmMTP · Reviewer_i9UA**
- title: SAVIOR: Sample-efficient Alignment of Vision-Language Models for OCR Representation
- abstract: Modern enterprises are increasingly adopting business document understanding workflows that leverage Vision Language Models (VLMs) for optical character recognition (OCR), given their ability to jointly model layout and language. However, deployment is impeded by data and compute barriers: large enterprises face de-ide...
- authors: Akshata A Bhat, Sharath Naganna, Saiful Haq, Prashant Khatri, Neha Arun, Niyati Chhaya, Piyush Pandey, Pushpak Bhattacharyya
- decision: Unknown; year: 2026
- api_raw_submission: {"id": "kiVIVBmMTP", "forum": "kiVIVBmMTP", "content": {"title": {"value": "SAVIOR: Sample-efficient Alignment of Vision-Language Models for OCR Representation"}, "authors": {"value": ["Akshata A Bhat", "Sharath Naganna", "Saiful Haq", "Prashant Khatri", "Neha Arun", "Niyati Chhaya", "Piyush Pandey", "Pushpak Bhattacha...
- review: [Summary]: This paper presents SAVIOR, a sample-efficient data curation method for adapting Vision-Language Models to OCR. SAVIOR identifies common failure cases and builds two datasets: SAVIOR-TRAIN (2,234 samples) and SAVIOR-Bench (509 samples). A fine-tuned Qwen2.5-VL-7B model trained on this data outperforms Paddle...
- reviewer_id: ICLR.cc/2026/Conference/Submission25642/Reviewer_i9UA; rating: 2; confidence: 5
- api_raw_review: {"id": "3nbb5BIdUL", "forum": "kiVIVBmMTP", "replyto": "kiVIVBmMTP", "content": {"summary": {"value": "This paper presents SAVIOR, a sample-efficient data curation method for adapting Vision-Language Models to OCR. SAVIOR identifies common failure cases and builds two datasets: SAVIOR-TRAIN (2,234 samples) and SAVIOR-B...
- criteria_count: {"criticism": 4, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 13}
- reward_value: 1.384615; reward_value_length_adjusted: 1.384615; length_penalty: 0; reward_u: 1.384615; reward_h: 0.013846; meteor_score: 0
- per-dimension columns: criticism 0.307692, example 0, importance_and_relevance 0, materials_and_methods 0.692308, praise 0.076923, presentation_and_reporting 0.230769, results_and_discussion 0.076923, suggestion_and_solution 0
- dimension_scores: {"criticism": 0.3076923076923077, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0.6923076923076923, "praise": 0.07692307692307693, "presentation_and_reporting": 0.23076923076923078, "results_and_discussion": 0.07692307692307693, "suggestion_and_solution": 0}
- overall_score: 1.384615; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
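In the record above, every `dimension_scores` entry is the matching `criteria_count` entry divided by `total` (criticism: 4/13 = 0.3076923...), and the same holds in the other preview records. A short sketch of that normalization, assuming the relationship holds across the full dataset:

```python
def normalize_counts(criteria_count: dict) -> dict:
    """Divide each per-category count by the 'total' count."""
    total = criteria_count["total"]
    return {k: v / total for k, v in criteria_count.items() if k != "total"}

# criteria_count from the Reviewer_i9UA record above.
counts = {
    "criticism": 4, "example": 0, "importance_and_relevance": 0,
    "materials_and_methods": 9, "praise": 1, "presentation_and_reporting": 3,
    "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 13,
}
print(normalize_counts(counts))  # criticism -> 0.3076..., matching dimension_scores above
```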
**kiVIVBmMTP · Reviewer_vx7o**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first kiVIVBmMTP record above
- review: [Summary]: This paper made two contributions 1. It curates a new dataset called SAVIOR-TRAIN with 2k samples for training visual language models for OCR tasks: it contains document image and “OCR” pairs. 2. It also releases a benchmark called SAVIOR-Bench that contains five hundred financial documents and plus a new ...
- reviewer_id: ICLR.cc/2026/Conference/Submission25642/Reviewer_vx7o; rating: 2; confidence: 4
- api_raw_review: {"id": "iQ5yL44Mr1", "forum": "kiVIVBmMTP", "replyto": "kiVIVBmMTP", "content": {"summary": {"value": "This paper made two contributions \n1. It curates a new dataset called SAVIOR-TRAIN with 2k samples for training visual language models for OCR tasks: it contains document image and \u201cOCR\u201d pairs. \n2. It also...
- criteria_count: {"criticism": 6, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 15, "praise": 3, "presentation_and_reporting": 11, "results_and_discussion": 1, "suggestion_and_solution": 6, "total": 26}
- reward_value: 1.769231; reward_value_length_adjusted: -0.898; length_penalty: 2.667231; reward_u: 1.769231; reward_h: 0.06; meteor_score: 0
- per-dimension columns: criticism 0.230769, example 0.076923, importance_and_relevance 0.076923, materials_and_methods 0.576923, praise 0.115385, presentation_and_reporting 0.423077, results_and_discussion 0.038462, suggestion_and_solution 0.230769
- dimension_scores: {"criticism": 0.23076923076923078, "example": 0.07692307692307693, "importance_and_relevance": 0.07692307692307693, "materials_and_methods": 0.5769230769230769, "praise": 0.11538461538461539, "presentation_and_reporting": 0.4230769230769231, "results_and_discussion": 0.038461538461538464, "suggestion_a...
- overall_score: 1.769231; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**kiVIVBmMTP · Reviewer_Lzsk**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first kiVIVBmMTP record above
- review: [Summary]: This paper describes a 'data curation methodology' and a resulting training and test set of financial documents for OCR. The authors fine-tune (LoRA) QWEN-2.5VL and compare it to the VLMs available through the API (Gemini, GPT, etc) as well as some open-source/commercial OCR models (Nanonets, PaddleOCR). The...
- reviewer_id: ICLR.cc/2026/Conference/Submission25642/Reviewer_Lzsk; rating: 2; confidence: 5
- api_raw_review: {"id": "YZVs5JA0hU", "forum": "kiVIVBmMTP", "replyto": "kiVIVBmMTP", "content": {"summary": {"value": "This paper describes a 'data curation methodology' and a resulting training and test set of financial documents for OCR. The authors fine-tune (LoRA) QWEN-2.5VL and compare it to the VLMs available through the API (Ge...
- criteria_count: {"criticism": 3, "example": 0, "importance_and_relevance": 2, "materials_and_methods": 13, "praise": 1, "presentation_and_reporting": 3, "results_and_discussion": 1, "suggestion_and_solution": 3, "total": 22}
- reward_value: 1.181818; reward_value_length_adjusted: -0.096559; length_penalty: 1.278377; reward_u: 1.181818; reward_h: 0.041364; meteor_score: 0
- per-dimension columns: criticism 0.136364, example 0, importance_and_relevance 0.090909, materials_and_methods 0.590909, praise 0.045455, presentation_and_reporting 0.136364, results_and_discussion 0.045455, suggestion_and_solution 0.136364
- dimension_scores: {"criticism": 0.13636363636363635, "example": 0, "importance_and_relevance": 0.09090909090909091, "materials_and_methods": 0.5909090909090909, "praise": 0.045454545454545456, "presentation_and_reporting": 0.13636363636363635, "results_and_discussion": 0.045454545454545456, "suggestion_and_solution": 0....
- overall_score: 1.181818; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**kiVIVBmMTP · Reviewer_Mk3q**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first kiVIVBmMTP record above
- review: [Summary]: This paper proposes SAVIOR to enhance the OCR capabilities of MLLMs. The SAVIOR consists of SAVIOR-TRAIN, a training set with 2,234 samples and SAVIOR-Bench, a benchmark with 509 financial documents. Moreover, the authors finetune a Qwen2.5-VL-7B model with SAVIOR-TRAIN, yielding SAVIOR-OCR. However, this pa...
- reviewer_id: ICLR.cc/2026/Conference/Submission25642/Reviewer_Mk3q; rating: 2; confidence: 4
- api_raw_review: {"id": "6b0KeFUWXg", "forum": "kiVIVBmMTP", "replyto": "kiVIVBmMTP", "content": {"summary": {"value": "This paper proposes SAVIOR to enhance the OCR capabilities of MLLMs. The SAVIOR consists of SAVIOR-TRAIN, a training set with 2,234 samples and SAVIOR-Bench, a benchmark with 509 financial documents. Moreover, the aut...
- criteria_count: {"criticism": 10, "example": 2, "importance_and_relevance": 2, "materials_and_methods": 19, "praise": 1, "presentation_and_reporting": 7, "results_and_discussion": 0, "suggestion_and_solution": 1, "total": 37}
- reward_value: 1.135135; reward_value_length_adjusted: -7.955544; length_penalty: 9.09068; reward_u: 1.135135; reward_h: 0.020811; meteor_score: 0
- per-dimension columns: criticism 0.27027, example 0.054054, importance_and_relevance 0.054054, materials_and_methods 0.513514, praise 0.027027, presentation_and_reporting 0.189189, results_and_discussion 0, suggestion_and_solution 0.027027
- dimension_scores: {"criticism": 0.2702702702702703, "example": 0.05405405405405406, "importance_and_relevance": 0.05405405405405406, "materials_and_methods": 0.5135135135135135, "praise": 0.02702702702702703, "presentation_and_reporting": 0.1891891891891892, "results_and_discussion": 0, "suggestion_and_solution": 0.0270...
- overall_score: 1.135135; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**y5rLR9xZpn · Reviewer_zuHu**
- title: Quantum-Inspired Image Encodings for Financial Time-Series Forecasting
- abstract: This study proposes a quantum-inspired methodology that transforms time-series data into complex-valued image representations for prediction. Unlike classical encodings such as the Gramian Angular Field (GAF), Recurrence Plot (RP), and Markov Transition Field (MTF), which rely on additive pairwise relations, our approa...
- authors: Henry Woo, Gunnho Song, Taeyoung Park
- decision: Unknown; year: 2026
- api_raw_submission: {"id": "y5rLR9xZpn", "forum": "y5rLR9xZpn", "content": {"title": {"value": "Quantum-Inspired Image Encodings for Financial Time-Series Forecasting"}, "authors": {"value": ["Henry Woo", "Gunnho Song", "Taeyoung Park"]}, "authorids": {"value": ["~Henry_Woo1", "~Gunnho_Song1", "~Taeyoung_Park1"]}, "keywords": {"value": ["...
- review: [Summary]: This paper proposes a quantum-inspired representation learning framework for financial time-series forecasting. It maps 1D normalized observations into probabilistic amplitudes and injects local temporal structure through phase-function encoding. The proposed approach outperforms the other baselines on two f...
- reviewer_id: ICLR.cc/2026/Conference/Submission25645/Reviewer_zuHu; rating: 4; confidence: 3
- api_raw_review: {"id": "Be9tZIuwG1", "forum": "y5rLR9xZpn", "replyto": "y5rLR9xZpn", "content": {"summary": {"value": "This paper proposes a quantum-inspired representation learning framework for financial time-series forecasting. It maps 1D normalized observations into probabilistic amplitudes and injects local temporal structure thr...
- criteria_count: {"criticism": 2, "example": 1, "importance_and_relevance": 1, "materials_and_methods": 14, "praise": 2, "presentation_and_reporting": 1, "results_and_discussion": 2, "suggestion_and_solution": 3, "total": 17}
- reward_value: 1.529412; reward_value_length_adjusted: 1.276893; length_penalty: 0.252519; reward_u: 1.529412; reward_h: 0.047647; meteor_score: 0
- per-dimension columns: criticism 0.117647, example 0.058824, importance_and_relevance 0.058824, materials_and_methods 0.823529, praise 0.117647, presentation_and_reporting 0.058824, results_and_discussion 0.117647, suggestion_and_solution 0.176471
- dimension_scores: {"criticism": 0.11764705882352941, "example": 0.058823529411764705, "importance_and_relevance": 0.058823529411764705, "materials_and_methods": 0.8235294117647058, "praise": 0.11764705882352941, "presentation_and_reporting": 0.058823529411764705, "results_and_discussion": 0.11764705882352941, "suggestio...
- overall_score: 1.529412; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**y5rLR9xZpn · Reviewer_Jsfq**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first y5rLR9xZpn record above
- review: [Summary]: The paper proposes a quantum-inspired method for financial time series that transforms one-dimensional signals into two-dimensional images for forecasting. This transformation also enables classical encodings to suit CNN-based forecaster. Using five-minute U.S. equity data, such as S&P 500 and Russell 3000, ...
- reviewer_id: ICLR.cc/2026/Conference/Submission25645/Reviewer_Jsfq; rating: 4; confidence: 4
- api_raw_review: {"id": "qL2dY6YRcx", "forum": "y5rLR9xZpn", "replyto": "y5rLR9xZpn", "content": {"summary": {"value": "The paper proposes a quantum-inspired method for financial time series that transforms one-dimensional signals into two-dimensional images for forecasting. This transformation also enables classical encodings to suit ...
- criteria_count: {"criticism": 3, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 13, "praise": 0, "presentation_and_reporting": 2, "results_and_discussion": 1, "suggestion_and_solution": 0, "total": 15}
- reward_value: 1.266667; reward_value_length_adjusted: 1.203537; length_penalty: 0.06313; reward_u: 1.266667; reward_h: 0.012667; meteor_score: 0
- per-dimension columns: criticism 0.2, example 0, importance_and_relevance 0, materials_and_methods 0.866667, praise 0, presentation_and_reporting 0.133333, results_and_discussion 0.066667, suggestion_and_solution 0
- dimension_scores: {"criticism": 0.2, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0.8666666666666667, "praise": 0, "presentation_and_reporting": 0.13333333333333333, "results_and_discussion": 0.06666666666666667, "suggestion_and_solution": 0}
- overall_score: 1.266667; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
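Across the preview records, `reward_value_length_adjusted` equals `reward_value` minus `length_penalty` (just above: 1.266667 - 0.06313 = 1.203537; in the Reviewer_zuHu record: 1.529412 - 0.252519 = 1.276893). A one-line sketch under that assumed definition:

```python
def length_adjusted_reward(reward_value: float, length_penalty: float) -> float:
    # Assumed from the preview records: subtract the length penalty from the raw reward.
    return reward_value - length_penalty

print(length_adjusted_reward(1.266667, 0.06313))   # ~1.203537 (Reviewer_Jsfq record)
print(length_adjusted_reward(1.529412, 0.252519))  # ~1.276893 (Reviewer_zuHu record)
```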
**y5rLR9xZpn · Reviewer_6Coq**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first y5rLR9xZpn record above
- review: [Summary]: This paper introduces a novel "quantum-inspired" framework for financial time-series forecasting. The core idea is to encode one-dimensional time-series data into two-dimensional, complex-valued image representations, which are then fed into a Convolutional Neural Network (CNN) for prediction. Specifically, ...
- reviewer_id: ICLR.cc/2026/Conference/Submission25645/Reviewer_6Coq; rating: 2; confidence: 4
- api_raw_review: {"id": "Sk6xnJfQG3", "forum": "y5rLR9xZpn", "replyto": "y5rLR9xZpn", "content": {"summary": {"value": "This paper introduces a novel \"quantum-inspired\" framework for financial time-series forecasting. The core idea is to encode one-dimensional time-series data into two-dimensional, complex-valued image representation...
- criteria_count: {"criticism": 3, "example": 0, "importance_and_relevance": 8, "materials_and_methods": 9, "praise": 4, "presentation_and_reporting": 0, "results_and_discussion": 3, "suggestion_and_solution": 1, "total": 14}
- reward_value: 2; reward_value_length_adjusted: 1.984218; length_penalty: 0.015782; reward_u: 2; reward_h: 0.087857; meteor_score: 0
- per-dimension columns: criticism 0.214286, example 0, importance_and_relevance 0.571429, materials_and_methods 0.642857, praise 0.285714, presentation_and_reporting 0, results_and_discussion 0.214286, suggestion_and_solution 0.071429
- dimension_scores: {"criticism": 0.21428571428571427, "example": 0, "importance_and_relevance": 0.5714285714285714, "materials_and_methods": 0.6428571428571429, "praise": 0.2857142857142857, "presentation_and_reporting": 0, "results_and_discussion": 0.21428571428571427, "suggestion_and_solution": 0.07142857142857142}
- overall_score: 2; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**FPLNSx1jmL · Reviewer_tdxY**
- title: Improving Developer Emotion Classification via LLM-Based Augmentation
- abstract: Detecting developer emotion in the informative data stream of technical commit messages is a critical task for gauging signals of burnout or bug introduction, yet it exposes a significant failure point for large language models whose emotion taxonomies are ill-suited for technical contexts in the field of software engi...
- authors: Fahmida Haque Fariha, Insaniyat Ishan, S. M. Hozaifa Hossain, Mushfiq Zahid, Zawad Al Akram, Md. Aquib Azmain
- decision: Reject; year: 2026
- api_raw_submission: {"id": "FPLNSx1jmL", "forum": "FPLNSx1jmL", "content": {"title": {"value": "Improving Developer Emotion Classification via LLM-Based Augmentation"}, "authors": {"value": ["Fahmida Haque Fariha", "Insaniyat Ishan", "S. M. Hozaifa Hossain", "Mushfiq Zahid", "Zawad Al Akram", "Md. Aquib Azmain"]}, "authorids": {"value": [...
- review: [Summary]: The paper tackles developer emotion classification in technical commit messages, arguing that general-purpose langauge models and emotion taxonomies fail in this specialized context. To address this, the authors make two main contributions. First, they introduce a dataset of 2K GitHub commit messages manuall...
- reviewer_id: ICLR.cc/2026/Conference/Submission25649/Reviewer_tdxY; rating: 2; confidence: 4
- api_raw_review: {"id": "lGp0K7AQpl", "forum": "FPLNSx1jmL", "replyto": "FPLNSx1jmL", "content": {"summary": {"value": "The paper tackles developer emotion classification in technical commit messages, arguing that general-purpose langauge models and emotion taxonomies fail in this specialized context. To address this, the authors make ...
- criteria_count: {"criticism": 13, "example": 4, "importance_and_relevance": 1, "materials_and_methods": 27, "praise": 3, "presentation_and_reporting": 15, "results_and_discussion": 2, "suggestion_and_solution": 2, "total": 42}
- reward_value: 1.595238; reward_value_length_adjusted: -11.677785; length_penalty: 13.273023; reward_u: 1.595238; reward_h: 0.025476; meteor_score: 0
- per-dimension columns: criticism 0.309524, example 0.095238, importance_and_relevance 0.02381, materials_and_methods 0.642857, praise 0.071429, presentation_and_reporting 0.357143, results_and_discussion 0.047619, suggestion_and_solution 0.047619
- dimension_scores: {"criticism": 0.30952380952380953, "example": 0.09523809523809523, "importance_and_relevance": 0.023809523809523808, "materials_and_methods": 0.6428571428571429, "praise": 0.07142857142857142, "presentation_and_reporting": 0.35714285714285715, "results_and_discussion": 0.047619047619047616, "suggestion...
- overall_score: 1.595238; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**FPLNSx1jmL · Reviewer_KLi7**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first FPLNSx1jmL record above
- review: [Summary]: This paper examines emotional classification across more than 2,000 GitHub "commit messages that have been human-labeled with a four-label scheme". The core method, CommiTune, fine-tunes a LLaMA model to generate paraphrased training data and then fine-tunes CodeBERT on the expanded set. Zero-shot baselines...
- reviewer_id: ICLR.cc/2026/Conference/Submission25649/Reviewer_KLi7; rating: 4; confidence: 3
- api_raw_review: {"id": "DkEeeHjjjJ", "forum": "FPLNSx1jmL", "replyto": "FPLNSx1jmL", "content": {"summary": {"value": "This paper examines emotional classification across more than 2,000 GitHub \"commit messages that have been human-labeled with a four-label scheme\". The core method, CommiTune, fine-tunes a LLaMA model to generate pa...
- criteria_count: {"criticism": 3, "example": 0, "importance_and_relevance": 1, "materials_and_methods": 13, "praise": 3, "presentation_and_reporting": 9, "results_and_discussion": 5, "suggestion_and_solution": 6, "total": 35}
- reward_value: 1.142857; reward_value_length_adjusted: -6.495839; length_penalty: 7.638696; reward_u: 1.142857; reward_h: 0.04; meteor_score: 0
- per-dimension columns: criticism 0.085714, example 0, importance_and_relevance 0.028571, materials_and_methods 0.371429, praise 0.085714, presentation_and_reporting 0.257143, results_and_discussion 0.142857, suggestion_and_solution 0.171429
- dimension_scores: {"criticism": 0.08571428571428572, "example": 0, "importance_and_relevance": 0.02857142857142857, "materials_and_methods": 0.37142857142857144, "praise": 0.08571428571428572, "presentation_and_reporting": 0.2571428571428571, "results_and_discussion": 0.14285714285714285, "suggestion_and_solution": 0.17...
- overall_score: 1.142857; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null
**FPLNSx1jmL · Reviewer_tKF6**
- paper fields (title, abstract, authors, decision, year, api_raw_submission): identical to the first FPLNSx1jmL record above
- review: [Summary]: This paper looks at a unique and tricky problem of finding emotions in GitHub commit messages. Normal emotion labels like joy or anger don't really fit the professional tone of developers. So the authors make a new dataset of 2,000 human-labeled commits, using four custom classes: Satisfaction, Frustration, ...
- reviewer_id: ICLR.cc/2026/Conference/Submission25649/Reviewer_tKF6; rating: 4; confidence: 3
- api_raw_review: {"id": "uN3m0YeMlc", "forum": "FPLNSx1jmL", "replyto": "FPLNSx1jmL", "content": {"summary": {"value": "This paper looks at a unique and tricky problem of finding emotions in GitHub commit messages. Normal emotion labels like joy or anger don't really fit the professional tone of developers. So the authors make a new da...
- criteria_count: {"criticism": 3, "example": 5, "importance_and_relevance": 3, "materials_and_methods": 19, "praise": 4, "presentation_and_reporting": 11, "results_and_discussion": 3, "suggestion_and_solution": 6, "total": 42}
- reward_value: 1.285714; reward_value_length_adjusted: -11.987309; length_penalty: 13.273023; reward_u: 1.285714; reward_h: 0.041429; meteor_score: 0
- per-dimension columns: criticism 0.071429, example 0.119048, importance_and_relevance 0.071429, materials_and_methods 0.452381, praise 0.095238, presentation_and_reporting 0.261905, results_and_discussion 0.071429, suggestion_and_solution 0.142857
- dimension_scores: {"criticism": 0.07142857142857142, "example": 0.11904761904761904, "importance_and_relevance": 0.07142857142857142, "materials_and_methods": 0.4523809523809524, "praise": 0.09523809523809523, "presentation_and_reporting": 0.2619047619047619, "results_and_discussion": 0.07142857142857142, "suggestion_an...
- overall_score: 1.285714; source: iclr2026; review_src: openreview; relative_rank: 0; win_prob: 0; prompt_length: 0; conversations: null