Dataset Viewer
Auto-converted to Parquet

Columns:
- venue: string (11 distinct values)
- paper_openreview_id: string (length 9 to 13)
- paragraph_idx: int64 (1 to 1.07k)
- section: string (length 2 to 9.5k)
- content: string (length 0 to 271k)
ICLR.cc/2025/Conference
zkNCWtw2fd
1
Title
SYNERGISTIC APPROACH FOR SIMULTANEOUS OPTIMIZATION OF MONOLINGUAL, CROSS-LINGUAL, AND MULTILINGUAL INFORMATION RETRIEVAL
ICLR.cc/2025/Conference
zkNCWtw2fd
2
Abstract
Information retrieval across different languages is an increasingly important challenge in natural language processing. Recent approaches based on multilingual pre-trained language models have achieved remarkable success, yet they often optimize for either monolingual, cross-lingual, or multilingual retrieval performance...
ICLR.cc/2025/Conference
zkNCWtw2fd
3
1 INTRODUCTION
Information retrieval (IR) across different languages is an increasingly important challenge in natural language processing. However, optimizing information retrieval systems for multilingual scenarios is not a straightforward task, as it requires considering multiple distinct retrieval settings, each with its own set of ...
ICLR.cc/2025/Conference
zkNCWtw2fd
4
1 INTRODUCTION
Recent approaches to multilingual information retrieval have leveraged multilingual pre-trained language models such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) to encode queries and documents (Karpukhin et al., 2020). While these models can transfer relevance matching capabilities across languages, th...
ICLR.cc/2025/Conference
zkNCWtw2fd
5
2.1 CONTRASTIVE LEARNING
Throughout the paper, we utilize the dual-encoder architecture with shared parameters, which is commonly used for dense retrieval (DR; Ni et al., 2022). Contrastive learning is a method for training DR models by contrasting positive pairs against negatives. Specifically, given a batch of triplets, each of which consists o...
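The in-batch contrastive objective described here can be sketched in plain Python. This is a minimal illustration using cosine similarity and in-batch negatives; the function name, the temperature value, and the list-of-vectors data layout are assumptions for the sketch, not the paper's code:

```python
import math

def contrastive_loss(q_embs, p_embs, temperature=0.05):
    """In-batch contrastive (InfoNCE-style) loss for a dual encoder.

    q_embs / p_embs: lists of equal-length vectors; row i of p_embs is
    the positive passage for query i, and the other rows in the batch
    serve as negatives.
    """
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    qs = [norm(q) for q in q_embs]
    ps = [norm(p) for p in p_embs]
    total = 0.0
    for i, q in enumerate(qs):
        # cosine similarities against every passage in the batch
        logits = [sum(a * b for a, b in zip(q, p)) / temperature for p in ps]
        m = max(logits)  # stabilized log-sum-exp
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[i]  # -log softmax at the positive
    return total / len(qs)
```

When each query's positive sits on the matching row, the loss is near zero; shuffling the positives drives it up, which is the signal the dual encoder is trained on.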
ICLR.cc/2025/Conference
zkNCWtw2fd
6
2.2 BATCH SAMPLING
Baseline Batch Sampling. We study the following training batching procedures introduced by Roy et al. (2020). (i) Monolingual batching (coined the X-X-mono model) creates each batch from a single language, where all the triplets consist of queries and passages in the same language. Note that we sample the language used to cre...
ICLR.cc/2025/Conference
zkNCWtw2fd
7
2.2 BATCH SAMPLING
As shown in Roy et al. (2020), the X-Y model is more effective in cross-lingual retrieval scenarios and shows reduced language bias; however, the X-X-mono surpasses the X-Y model in monolingual retrieval. These results inspire us to explore whether simply combining the two batch sampling approaches can achieve improvemen...
ICLR.cc/2025/Conference
zkNCWtw2fd
8
2.2 BATCH SAMPLING
Figure 1: Illustrative example of monolingual, cross-lingual, and multilingual information retrieval.
ICLR.cc/2025/Conference
zkNCWtw2fd
9
2.2 BATCH SAMPLING
Figure 2: Illustrations of the proposed hybrid batch sampling (assuming we only have training data in English, Arabic, and Japanese), where our model is exposed to monolingual and cross-lingual batches with the respective probability of α and β = 1 − α.
ICLR.cc/2025/Conference
zkNCWtw2fd
10
2.2 BATCH SAMPLING
Hybrid Batch Sampling. In this work, we propose to combine the two aforementioned baseline sampling strategies. Specifically, when creating batch training data, we set α and β = 1 − α as the respective probability of using monolingual and cross-lingual batching, as shown in Fig. 2.¹
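The hybrid scheme (probability α of a monolingual batch, β = 1 − α of a cross-lingual one) can be sketched as follows. The data layout and function names are assumptions for illustration, not the paper's implementation:

```python
import random

def sample_batch(triplets_by_lang, batch_size, alpha=0.5, rng=random):
    """One step of hybrid batch sampling.

    With probability alpha build a monolingual batch (query and passage
    languages equal); otherwise build a cross-lingual batch (query and
    passage languages differ). `triplets_by_lang[(ql, pl)]` holds
    (query, positive, negative) triplets with queries in language ql and
    passages in language pl.
    """
    langs = sorted({ql for ql, _ in triplets_by_lang})
    if rng.random() < alpha:           # monolingual batch
        lang = rng.choice(langs)
        pool = triplets_by_lang[(lang, lang)]
    else:                              # cross-lingual batch
        ql, pl = rng.sample(langs, 2)  # two distinct languages
        pool = triplets_by_lang[(ql, pl)]
    return [rng.choice(pool) for _ in range(batch_size)]
```

Setting alpha=1.0 recovers pure X-X-mono batching and alpha=0.0 recovers pure X-Y batching, so the two baselines are the endpoints of this one knob.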
ICLR.cc/2025/Conference
zkNCWtw2fd
11
2.2 BATCH SAMPLING
¹In the experiments, we found that setting the hyperparameters α and β to 0.5 resulted in the best balance
ICLR.cc/2025/Conference
zkNCWtw2fd
12
2.2 BATCH SAMPLING
between the performance of the proposed model on monolingual and multilingual evaluations.
ICLR.cc/2025/Conference
zkNCWtw2fd
13
3 EXPERIMENTAL SETUP
This section presents the experimental setup for evaluating the proposed hybrid batch training strategy. We first discuss the training process, including datasets and multilingual pre-trained models. Next, we introduce the evaluation datasets and metrics used to assess the performance of the fine-tuned models. Finally, w...
ICLR.cc/2025/Conference
zkNCWtw2fd
26
4.1 SUMMARY OF MAIN RESULTS
In particular, Tables 3 through 6 showcase the MAP and Recall scores for zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the XQuAD-R and MLQA-R datasets, considering both fine-tuned XLM-R and LaBSE models.
ICLR.cc/2025/Conference
zkNCWtw2fd
37
4.2
Table 4: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the MLQA-R dataset for a fine-tuned XLM-R model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined.
ICLR.cc/2025/Conference
vVlNBaiLdN
1
Title
002003004005006007
ICLR.cc/2025/Conference
zkNCWtw2fd
14
3.1 TRAINING
Datasets. To conduct the study of batch sampling, parallel query-passage training pairs are required such that we can construct cross-lingual triplets, where each query and its relevant (or irrelevant) passage are in different languages. mMARCO (Bonifacio et al., 2021) is the only dataset with parallel queries and passage...
ICLR.cc/2025/Conference
zkNCWtw2fd
15
3.1 TRAINING
Training Setup. We apply the baseline and our proposed hybrid batching to fine-tune two representative multilingual pre-trained models: (i) XLM-RoBERTa (XLM-R) (Conneau et al., 2020); and (ii) language-agnostic BERT sentence embedding (LaBSE) (Feng et al., 2022). Model training experiments were conducted using one NVIDIA...
ICLR.cc/2025/Conference
zkNCWtw2fd
16
3.1 TRAINING
Hyperparameter Tuning for Hybrid Batch Sampling. To determine the optimal values for the hyperparameters α and β in our hybrid batch sampling approach, we conducted a comprehensive grid search. We evaluated α values ranging from 0 to 1, with β always set to 1 − α. Each configuration was tested on a held-out validation set...
ICLR.cc/2025/Conference
zkNCWtw2fd
17
3.2 EVALUATION
Datasets. We evaluate the retrieval effectiveness of different models on three distinct datasets: XQuAD-R (Roy et al., 2020) and MLQA-R (Roy et al., 2020).² XQuAD-R and MLQA-R are question answering datasets with parallel questions and passages in 11 languages and 7 languages, respectively. Thus, these two datasets can be...
ICLR.cc/2025/Conference
zkNCWtw2fd
18
3.2 EVALUATION
²The evaluation of the models is conducted on datasets that are completely separate and distinct from the ones used for training. More specifically, the models have not encountered any data samples, whether from the training or testing splits, of the evaluation datasets during their training process. This ensures an unbi...
ICLR.cc/2025/Conference
zkNCWtw2fd
19
3.2 EVALUATION
XQuAD-R (↑)
ICLR.cc/2025/Conference
zkNCWtw2fd
20
3.2 EVALUATION
Table 1: Main experiments on XQuAD-R and MLQA-R. mAP (macro-averaged across all languages) numbers are reported. Mo., CR., and Mul. denote monolingual, cross-lingual, and multilingual retrieval settings, respectively.
ICLR.cc/2025/Conference
zkNCWtw2fd
21
3.2 EVALUATION
[Table 1 fragment (MLQA-R, ↑, monolingual (Mo.) mAP): XLM-R: X-X .792, X-Y .755, Hybrid .798; LaBSE: X-X .808, X-Y .801, Hybrid .817. Table 2 fragment (language bias, ↓, XQuAD-R column): XLM-R: X-X 410, X-Y 295, Hybrid 287; LaBSE: X-X 262, X-Y 225, Hybrid 221.] Metrics and Settings. We report the mean average precision (mAP) for XQuAD-R and MLQA-R since the metric considers the retrieval qual...
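The mAP metric reported here can be computed with a short generic sketch; the per-query (ranked list, relevant set) layout is an assumption for illustration, not the paper's evaluation code:

```python
def mean_average_precision(rankings):
    """mAP over queries.

    `rankings` is a list of (ranked_doc_ids, relevant_set) pairs, one per
    query. Average precision sums precision@rank at each position where a
    relevant document appears, normalized by the number of relevant docs.
    """
    ap_scores = []
    for ranked, relevant in rankings:
        hits, precisions = 0, []
        for rank, doc in enumerate(ranked, start=1):
            if doc in relevant:
                hits += 1
                precisions.append(hits / rank)
        ap_scores.append(sum(precisions) / len(relevant) if relevant else 0.0)
    return sum(ap_scores) / len(ap_scores)
```

In XQuAD-R and MLQA-R each query has a single relevant passage per language (footnote 5), so per-language AP reduces to the reciprocal rank of that one passage.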
ICLR.cc/2025/Conference
zkNCWtw2fd
22
3.2 EVALUATION
Model
ICLR.cc/2025/Conference
zkNCWtw2fd
23
4.1 SUMMARY OF MAIN RESULTS
Zero-shot Retrieval Evaluation. We report the effectiveness of different batch sampling strategies in Table 1. We observe that X-X and X-Y sampling only perform well in monolingual and cross-lingual retrieval settings, respectively. These results indicate that optimization for either monolingual or cross-lingual retrieval...
ICLR.cc/2025/Conference
zkNCWtw2fd
24
4.1 SUMMARY OF MAIN RESULTS
³The results for the Recall metric are in Section 4.2.1. ⁴The performance of the models is evaluated on certain languages, such as Greek (el) and Vietnamese (vi), which were not included in the training data. This aspect of the evaluation process aims to assess the ability of the models to handle languages they have not b...
ICLR.cc/2025/Conference
zkNCWtw2fd
25
4.1 SUMMARY OF MAIN RESULTS
Table 2: Language bias in multilingual retrieval.
ICLR.cc/2025/Conference
9DrPvYCETp
22
3 SHARED RECURRENT MEMORY TRANSFORMER
R(s, u, a^(U)) : S × U × A^n → ℝ, O(s, a) : S × A → O.
ICLR.cc/2025/Conference
zkNCWtw2fd
27
4.1 SUMMARY OF MAIN RESULTS
Language Bias Evaluation. To gain insight into why hybrid batch sampling achieves strong performance in multilingual retrieval settings, we investigate the language bias exhibited by models fine-tuned using different batch sampling strategies. Following Huang et al. (2023b), we measure the language bias using the maximum ...
ICLR.cc/2025/Conference
zkNCWtw2fd
28
4.2
IN-DEPTH ANALYSIS
4.2.1 ZERO-SHOT RETRIEVAL EVALUATION ON XQUAD-R AND MLQA-R
We present the experimental results of our proposed hybrid batching approach for improving the retrieval performance of fine-tuned multilingual language models across various tasks and datasets. We compare our method with two baseline training b...
ICLR.cc/2025/Conference
zkNCWtw2fd
29
4.2
Consistent improvement across languages and tasks: Tables 3 through 6 demonstrate the performance of the proposed hybrid batching approach when applied to the XLM-R and LaBSE models on the XQuAD-R and MLQA-R datasets. Our method consistently achieves the highest mean MAP and mean R@1 scores across monolingual and cross-l...
ICLR.cc/2025/Conference
zkNCWtw2fd
30
4.2
Balanced performance across evaluation metrics: The proposed approach strikes a balance between the X-X-mono (optimized for the monolingual retrieval setting) and X-Y (cross-lingual/multilingual retrieval settings) baselines. This compromise is evident when analyzing the performance of individual languages across different r...
ICLR.cc/2025/Conference
zkNCWtw2fd
31
4.2
⁵Note that in XQuAD-R and MLQA-R, each query only has one relevant passage in each language.
ICLR.cc/2025/Conference
zkNCWtw2fd
32
4.2
Evaluation of Fine-tuned XLM-R Model on XQuAD-R Dataset
ICLR.cc/2025/Conference
zkNCWtw2fd
33
4.2
Table 3: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the XQuAD-R dataset for a fine-tuned XLM-R model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined.
ICLR.cc/2025/Conference
zkNCWtw2fd
34
4.2
[Table 3 layout: MAP, R@1, and R@10 blocks, each with Monolingual / Cross-lingual / Multilingual columns comparing X-X-mono, X-Y, and the proposed method per source language (ar, de, el, en, es, hi, ru, th, tr, vi, zh) plus the mean.] Evaluation of...
ICLR.cc/2025/Conference
zkNCWtw2fd
35
4.2
MAP
ICLR.cc/2025/Conference
zkNCWtw2fd
36
4.2
Competitive reduction in average rank distance compared to cross-lingual batching. The proposed approach exhibits competitive performance in reducing the average rank distance compared to the strong X-Y baseline. In Table 7 (XQuAD-R), the proposed method achieves the best mean rank distance of 286.6 using XLM-R, outperfor...
ICLR.cc/2025/Conference
gtVo4xcpFI
31
3.3 BENCHMARK DATASET CONSTRUCTION
Amount Description
ICLR.cc/2025/Conference
gtVo4xcpFI
32
57 60
Focus on evaluating the grasp of the LLM on fundamental hardware concepts and principles.
ICLR.cc/2025/Conference
zkNCWtw2fd
38
4.2
Table 5: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the XQuAD-R dataset for a fine-tuned LaBSE model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined.
ICLR.cc/2025/Conference
zkNCWtw2fd
39
4.2
Table 6: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the MLQA-R dataset for a fine-tuned LaBSE model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined.
ICLR.cc/2025/Conference
zkNCWtw2fd
40
4.2
Tables 7 and 8 present a comprehensive comparison of the average rank distance metric⁶ (Huang et al., 2023a) across different multilingual retrieval tasks using fine-tuned XLM-R and LaBSE models. The proposed approach is evaluated against two baseline methods, X-X-mono and X-Y, on two datasets: XQuAD-R (Table 7) and MLQA-...
ICLR.cc/2025/Conference
zkNCWtw2fd
41
5 CONCLUSION
Developing IR models that can handle queries and documents across many languages is increasingly critical. In this work, we introduced a hybrid batch training strategy to optimize IR systems for monolingual, cross-lingual, and multilingual performance simultaneously. By fine-tuning multilingual language models on a mix of...
ICLR.cc/2025/Conference
zkNCWtw2fd
42
5 CONCLUSION
⁶Rank distance is the average, over all queries and their relevant documents, of the difference between the maximum and minimum ranks assigned by an MLIR model to parallel (semantically similar) relevant documents across different languages.
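The rank distance defined in this footnote can be computed directly. The mapping from query to per-language ranks is an assumed input format for illustration, not the paper's data structure:

```python
def average_rank_distance(run):
    """Average rank distance per the footnote's definition.

    `run` maps query_id -> {language: rank assigned to that language's
    parallel relevant document}. For each query we record
    max(ranks) - min(ranks), then average over all queries; a perfectly
    language-unbiased model scores 0.
    """
    distances = [
        max(ranks.values()) - min(ranks.values())
        for ranks in run.values()
    ]
    return sum(distances) / len(distances)
```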
ICLR.cc/2025/Conference
zkNCWtw2fd
43
6 LIMITATIONS
This work focuses on optimizing retrieval performance but does not address issues related to result diversity, fairness, or transparency in multilingual settings. For example, it may reflect societal biases present in the training data. Addressing these concerns is important for building equitable multilingual retrieval s...
ICLR.cc/2025/Conference
zkNCWtw2fd
44
6 LIMITATIONS
Furthermore, the experiments focus only on the XQuAD-R, MLQA-R, and MIRACL benchmark datasets. While these cover a range of languages, they may not be fully representative of real-world multilingual information retrieval needs. The robustness of the results to other domains, question types, and retrieval scenarios is an e...
ICLR.cc/2025/Conference
zkNCWtw2fd
45
6 LIMITATIONS
Average Rank Distance over XQuAD-R Dataset
ICLR.cc/2025/Conference
zkNCWtw2fd
46
6 LIMITATIONS
Table 7: Comparison of the rank distances among relevant documents of the XQuAD-R dataset across rank lists generated by fine-tuned XLM-R and LaBSE models for zero-shot multilingual retrieval tasks under different training batch types. The best result is highlighted in bold, and the second-best result is underlined.
ICLR.cc/2025/Conference
zkNCWtw2fd
47
6 LIMITATIONS
[Tables 7 and 8 layout: average rank distance over the XQuAD-R dataset (source languages ar, de, el, en, es, hi, ru, th, tr, vi, zh) and over the MLQA-R dataset (ar, de, en, es, hi, vi, zh), comparing X-X-mono, X-Y, and the proposed method for XLM-R and LaBSE, plus the mean per model.]
ICLR.cc/2025/Conference
zkNCWtw2fd
48
6 LIMITATIONS
LaBSE
ICLR.cc/2025/Conference
viQ1bLqKY0
1
Title
EXECUTION-EVAL: CAN LANGUAGE MODELS EXECUTE REAL-WORLD CODE?
ICLR.cc/2025/Conference
viQ1bLqKY0
2
Abstract
As language models advance, traditional benchmarks face challenges of dataset saturation and disconnection from real-world performance, limiting our understanding of true model capabilities. We introduce EXecution-Eval (EXE), a benchmark designed to assess LLMs' ability to execute code and predict program states. EXE atte...
ICLR.cc/2025/Conference
viQ1bLqKY0
3
1 INTRODUCTION
Language model benchmarks are facing challenges of rapid saturation (Ott et al., 2022) and an increasing disconnect from real-world performance perceived by end-users (Zheng et al., 2023). Due to this, benchmarks are being continually created to address failure modes; e.g. SuperGLUE targeting GLUE's low problem difficulty...
ICLR.cc/2025/Conference
viQ1bLqKY0
4
1 INTRODUCTION
Hence, to maximise an evaluation's utility we aim to minimise the common failure modes of: a) difficulty, not ensuring an unbounded scale from small trivial problems to complex multi-step problems; b) diversity, not ensuring a representative distribution across a large space of problems; c) novelty, not ensuring continually f...
ICLR.cc/2025/Conference
PwxYoMvmvy
49
5 Conclusions
Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with GNNs. Advances in Neural Information Processing Systems, 32, 2019.
ICLR.cc/2025/Conference
gtVo4xcpFI
33
57 60
Apply concepts to new and complex scenarios for generalization.
ICLR.cc/2025/Conference
viQ1bLqKY0
5
1 INTRODUCTION
Motivated by these challenges we introduce EXecution-Eval (EXE), an evaluation replicating one of the primary tasks humans perform while coding: predicting and comparing a final program state for a given set of inputs, as seen in Figure 1. EXE is designed to avoid the aforementioned failure modes, emphasising difficulty (smo...
ICLR.cc/2025/Conference
viQ1bLqKY0
6
1 INTRODUCTION
EXE also holds theoretical inspiration. Fowler et al. (2022) replicated positive pedagogical correlations found by Lopez et al. (2008) between the abilities of CS1 students to "trace" programs (i.e. manually predict outputs and write the internal state out line by line) and their abilities to pass code writ...
ICLR.cc/2025/Conference
viQ1bLqKY0
7
1 INTRODUCTION
Figure 1: An example task from Apache Airflow's GitHub repository (code simplified to fit within the diagram). EXE sources tasks from 1,000 Python repositories, generates test cases for them, and compares the LLM's ability to execute code against Python's interpreter.
ICLR.cc/2025/Conference
viQ1bLqKY0
8
2 EVALUATION FRAMEWORK
As seen in Figure 1, an EXE task is to predict a function's return value or error from: a) a code snippet and b) a set of input arguments. Code snippets are extracted from PyPI's most popular 1,000 Python projects hosted on GitHub; we select our snippets to be pure (i.e. deterministic, no side effects), language model gen...
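The core of such a task (run a snippet on given inputs and record the return value or the raised exception) can be sketched in a few lines. This is illustrative only; the names are assumptions, and a real harness would add sandboxing, timeouts, and input generation:

```python
def run_task(snippet, func_name, args, kwargs=None):
    """Execute a code snippet and call one function on generated inputs.

    Returns {"return": value} on success, or {"exception": type_name,
    "message": str} if the call raises, mirroring the benchmark's
    return-or-error framing.
    """
    namespace = {}
    exec(snippet, namespace)  # define the function under test
    try:
        value = namespace[func_name](*args, **(kwargs or {}))
        return {"return": value}
    except Exception as e:
        return {"exception": type(e).__name__, "message": str(e)}
```

A model's prediction for the same (snippet, inputs) pair can then be compared against this ground truth from the interpreter.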
ICLR.cc/2025/Conference
viQ1bLqKY0
9
2 EVALUATION FRAMEWORK
Through these stages of filtering, the original top 1,000 repositories are filtered down to the 33,875 task instances which comprise EXE. A high level breakdown of these task instances across repositories is presented in Figure 3. We note some repositories are overrepresented, primarily due to being more modern (using typ...
ICLR.cc/2025/Conference
viQ1bLqKY0
10
2 EVALUATION FRAMEWORK
Figure 2: Three stage EXE task generation pipeline. Detailed example tasks and generated inputs can be found in Appendix A.1.
ICLR.cc/2025/Conference
viQ1bLqKY0
11
2 EVALUATION FRAMEWORK
Figure 3: We observe task counts per repository to have a near logarithmic falloff. Note: Based on manual observations, several repositories are removed from EXE due to thousands of similar functions with only single modifications, for example changing a URL address.
ICLR.cc/2025/Conference
viQ1bLqKY0
12
2.1 TASK FORMATION
Model input. The model is given a complete snippet of code alongside the input state to be executed. The model is then tasked to predict the resulting return value, or in the case that an exception is raised the model is instructed to generate an exception type and value. In practice, we prompt models with an odata json r...
ICLR.cc/2025/Conference
viQ1bLqKY0
13
2.1 TASK FORMATION
Evaluation metrics. To evaluate a proposed solution, we use the pass@k metric (Chen et al., 2021), comparing the ground truth and the generated prediction as json objects (set and frozenset are sorted before conversion to json lists). If the original code produced an exception, we compare the type and message (excluding s...
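Both pieces of this paragraph can be sketched: the JSON-based comparison with sets sorted into lists, and the standard unbiased pass@k estimator from Chen et al. (2021). The helper names are assumptions; only the set/frozenset sorting rule and the pass@k formula come from the sources cited:

```python
import json
from math import comb

def normalize(obj):
    """Convert a value to a JSON-comparable form; sets and frozensets
    become sorted lists, as the benchmark description suggests."""
    if isinstance(obj, (set, frozenset)):
        return sorted(normalize(x) for x in obj)
    if isinstance(obj, dict):
        return {str(k): normalize(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [normalize(x) for x in obj]
    return obj

def outputs_match(truth, prediction):
    """Compare ground truth and prediction as canonicalized JSON."""
    return (json.dumps(normalize(truth), sort_keys=True)
            == json.dumps(normalize(prediction), sort_keys=True))

def pass_at_k(n, c, k):
    """Unbiased pass@k (Chen et al., 2021): n samples, c correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```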
ICLR.cc/2025/Conference
viQ1bLqKY0
14
2.2 FEATURES OF EXE
Diversity of inputs and outputs. Unlike many benchmarks focused on a particular subject matter area, a task in this eval may require a model to perform mathematical reasoning, logical inference, bit manipulation, string operations, loop execution, or to maintain multiple internal variables during computation. Furthermore,...
ICLR.cc/2025/Conference
viQ1bLqKY0
15
2.2 FEATURES OF EXE
Continually updatable. Both our code collection and task input generation processes can create new tasks with minimal human oversight. Simply re-running our code collection to pull the latest commits or directing it towards an uncollected Python GitHub repository will create new task instances. Furthermore, we can contin...
ICLR.cc/2025/Conference
viQ1bLqKY0
16
2.2 FEATURES OF EXE
Cost effective scalability. With generation of new tasks requiring an average of 1,112 input tokens (batch of 15) and evaluation of tasks typically requiring 1,123 tokens, ExecEval can be generated, tested and continually updated at a fraction of the cost of human-curated benchmarks. Our initial dataset of 33,875 cases ha...
ICLR.cc/2025/Conference
PwxYoMvmvy
50
5 Conclusions
Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In International Conference on Learning Representations, 2020.
ICLR.cc/2025/Conference
PwxYoMvmvy
51
5 Conclusions
Weilin Cong, Morteza Ramezani, and Mehrdad Mahdavi. On provable benefits of depth in training graph convolutional networks. Advances in Neural Information Processing Systems, 34:9936–9949, 2021.
ICLR.cc/2025/Conference
viQ1bLqKY0
17
2.2 FEATURES OF EXE
Long multi-step problems with smooth difficulty scaling. We provide a continuous spectrum of task difficulties, ranging from 1-step, one-line functions to multi-file, multi-class, multi-100-step tasks. Our most complex tasks include function call depths (non-recursive) of up to 13 levels (median: 2), separate identifier c...
ICLR.cc/2025/Conference
viQ1bLqKY0
18
2.2 FEATURES OF EXE
To address this, we observe a mechanism inspired by the SKILL-MIX evaluation (Yu et al., 2023) that leverages the typed nature of our function selection process. This approach allows us to create even more complex tasks by chaining functions where the output type of one matches the input type of another, or by combining ...
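The chaining idea (compose functions whose return type feeds another's input type) can be sketched as a simple type-matching pass. The single-argument signature map is an assumption for illustration; the benchmark would derive real signatures from repository type annotations:

```python
def chainable(functions):
    """Find ordered pairs (f, g) where f's output type matches g's
    (sole) input type, so f's result can be fed directly into g.

    `functions` maps a name to an (input_type, output_type) pair.
    """
    pairs = []
    for f, (_, out_t) in functions.items():
        for g, (in_t, _) in functions.items():
            if f != g and out_t == in_t:
                pairs.append((f, g))  # run f, pipe its output into g
    return pairs
```

Each discovered pair yields a longer composite task whose difficulty grows with chain length, which is the scaling mechanism the paragraph describes.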
ICLR.cc/2025/Conference
viQ1bLqKY0
19
2.2 FEATURES OF EXE
Error prediction. To test the full spectrum of code execution we further generate test cases designed to trigger exceptions. Many of these require in-depth analysis to see ahead of time, for example predicting an invalid array index through multiple functions. While debugging exceptions is one of the more challenging soft...
ICLR.cc/2025/Conference
viQ1bLqKY0
20
3 RESULTS
We report our evaluation results across different SOTA models alongside our findings across different task statistics below.
ICLR.cc/2025/Conference
viQ1bLqKY0
21
3 RESULTS
Model
ICLR.cc/2025/Conference
viQ1bLqKY0
22
3 RESULTS
Table 1: EXE Pass@1 results for GPT-4o, GPT-4o-mini, Llama3.1-8B, Llama3.1-405B, Claude3.5-Sonnet, and Mistral-Large-2407. LLMs can execute real-world code, achieving results in line with code generation benchmarks. We find EXE shows similar relative model performance between models as seen in coding benchmarks such as HumanEval (Chen et ...
ICLR.cc/2025/Conference
viQ1bLqKY0
23
3 RESULTS
EXE dataset (Pass@1) Errors (Pass@1)
ICLR.cc/2025/Conference
viQ1bLqKY0
24
3 RESULTS
Prior works such as Learning To Execute (Zaremba & Sutskever, 2014) and CRUX-Eval (Gu et al., 2024) have placed justifiable limitations on code complexity; removing mathematical operations, limiting line count, disallowing custom classes and only having one singular function, to name a few. We hypothesised that these are n...
ICLR.cc/2025/Conference
viQ1bLqKY0
25
3 RESULTS
Figure 4: Left - We show the relative accuracy of different models across the top 20 packages by task count. Both the relative differences between models and the relative differences between packages are within expectations from other coding benchmarks (Jimenez et al., 2023). Right - We show the magnitude of diversity acr...
ICLR.cc/2025/Conference
viQ1bLqKY0
26
3 RESULTS
ExecEval provides a smooth curve of task difficulties. We set out to ensure a) our evaluation does not induce saturation from a bounded distribution of task difficulties, b) our evaluation does not induce an "AI overhang" by not having a smooth transition between difficulties and, c) the correlated factors affecting diffi...
ICLR.cc/2025/Conference
viQ1bLqKY0
27
3 RESULTS
As shown in Figure 5, several task statistics such as "lines of code", "processing time" and "number of function calls" all correlate log-linearly with a model's achieved pass@1 score. These correlations provide preliminary evidence towards c) as they align with simplistic human intuition, i.e. more lines of code, more com...
ICLR.cc/2025/Conference
viQ1bLqKY0
28
3 RESULTS
Beyond evaluation-wide difficulty scaling, EXE also demonstrates diversity and varying difficulty levels within individual task sets. Each function has up to 15 generated test cases, allowing us to analyse variance per task set. To measure execution path diversity, we collect runtime statistics (detailed in Appendix A.6) ...
ICLR.cc/2025/Conference
viQ1bLqKY0
41
4 RELATED WORK
Recent trends in benchmark design have emphasised the importance of diverse, multi-step problems and agentic capabilities. Works like Jimenez et al. (2023) have introduced benchmarks that require solving real world software engineering problems while Zhou et al. (2023) has enabled evaluation of complex instruction followi...
ICLR.cc/2025/Conference
PwxYoMvmvy
52
5 Conclusions
Nima Dehmamy, Albert-László Barabási, and Rose Yu. Understanding the representation power of graph neural networks in learning graph topology. Advances in Neural Information Processing Systems, 32, 2019.
ICLR.cc/2025/Conference
gtVo4xcpFI
34
57 60
Divide the difficulty based on the number of lines of code, type, and design time.
ICLR.cc/2025/Conference
viQ1bLqKY0
29
3 RESULTS
ExecEval's test case generation scales. While EXE today includes up to 15 test cases per task, our analysis demonstrates EXE's generation pipeline can scale significantly further without plateauing. As shown in Figure 6, generation of novel test cases continues well beyond 300 cases per task while maintaining all quality c...
ICLR.cc/2025/Conference
viQ1bLqKY0
30
3 RESULTS
rate. This aligns with intuition - generating novel, base64 images poses significantly more difficulty than generating diverse string or numeric inputs.
ICLR.cc/2025/Conference
viQ1bLqKY0
31
3 RESULTS
Figure 5: Pass@1 for all tasks across four of our code metrics. The shaded area represents variance, and the opacity is scaled with count of samples. Processing time is measured in microseconds.
ICLR.cc/2025/Conference
viQ1bLqKY0
32
3 RESULTS
Importantly, our token efficiency analysis (right plot) reveals that significant scaling is possible without proportional prompt growth. By randomly selecting and injecting just 60 prior cases into the generation prompt, we can effectively generate over 1,000 novel cases. This sublinear token growth suggests the potential...
ICLR.cc/2025/Conference
viQ1bLqKY0
33
3 RESULTS
LLMs struggle with certain coding features. As EXE contains a diverse set of tasks, we are able to observe model performance differing greatly based on coding features used in any task. To illustrate: floating point math operations such as multiplications (GPT-4o: 43 mean Pass@1) significantly increase task difficulty, ho...
ICLR.cc/2025/Conference
viQ1bLqKY0
34
3 RESULTS
Figure 6: Test case generation analysis across eleven diverse Python functions sourced from popular libraries including Azure, PyTorch, Langchain, and NLTK. Functions range from geometric computations (torchvision) to SQL regex (snowflake-python-connector). Left: Cumulative unique validated test cases per generation batc...
ICLR.cc/2025/Conference
viQ1bLqKY0
35
3 RESULTS
With the above metrics, and those seen in Figure 7, their mean Pass@k decreases as their count increases. To reduce the risk of our metrics being a proxy for longer problems, we show the effects can still be seen below in Figure 8 after normalisation by lines of code (only lines with executable syntax tokens are counted).
ICLR.cc/2025/Conference
viQ1bLqKY0
36
3 RESULTS
Figure 7: Three examples of high pass@1 rate tasks that contain large amounts of function calls. Left - Charset-normaliser performs 300+ function calls to define ranges of unicode characters upon initialisation; this constant has little effect on task difficulty but is used frequently and hence appears in many tasks. Midd...
ICLR.cc/2025/Conference
viQ1bLqKY0
37
4 RELATED WORK
There is a rich history of work on evaluating language models' abilities in reasoning, execution, and multi-step problem-solving across various domains. These efforts span from natural language processing to mathematical reasoning, and from code generation to program execution. Our work, EXecution-Eval (EXE), builds upon ...
ICLR.cc/2025/Conference
viQ1bLqKY0
38
4 RELATED WORK
Code generation benchmarks have been the foundation of evaluating the coding abilities of language models. Works like HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) established standardised datasets for assessing code synthesis from natural language descriptions. These efforts have expanded to cover multiple...
ICLR.cc/2025/Conference
viQ1bLqKY0
39
4 RELATED WORK
Figure 8: Pass@1 for all tasks across four of our code metrics normalised by line of code count (limited to GPT models for readability). All four of the above metrics previously showed a negative impact as they increased; interestingly, we now observe branching statements having little to no impact and return statements sur...
ICLR.cc/2025/Conference
viQ1bLqKY0
40
4 RELATED WORK
The concept of "learning to execute" itself has a long history; Zaremba & Sutskever (2014) explored neural networks' ability to learn and execute simple programs. Graves et al. (2014) constructed the first Neural Turing Machines, with (Kaiser & Sutskever, 2015; Reed & de Freitas, 2015; Dehghani et al., 2018) all building f...
ICLR.cc/2025/Conference
PwxYoMvmvy
53
5 Conclusions
Chenhui Deng, Zichao Yue, and Zhiru Zhang. Polynormer: Polynomial-expressive graph transformer in linear time. arXiv preprint arXiv:2403.01232, 2024.

Collection including ulab-ai/ResearchArcade-openreview-paragraphs