
Daily Papers

by AK and the research community

May 15

WILD: a new in-the-Wild Image Linkage Dataset for synthetic image attribution

Synthetic image source attribution is an open challenge, with an increasing number of image generators being released every year. The complexity and sheer number of available generative techniques, together with the scarcity of high-quality, diverse open-source datasets for this task, make training and benchmarking synthetic image source attribution models very challenging. WILD is a new in-the-Wild Image Linkage Dataset designed to provide a powerful training and benchmarking tool for synthetic image attribution models. The dataset is built from a closed set of 10 popular commercial generators, which constitutes the training base for attribution models, and an open set of 10 additional generators, simulating a real-world in-the-wild scenario. Each generator is represented by 1,000 images, for a total of 10,000 images in the closed set and 10,000 in the open set. Half of the images are post-processed with a wide range of operators. WILD supports benchmarking attribution models on a wide range of tasks, including closed- and open-set identification and verification, as well as robust attribution under post-processing and adversarial attacks. Models trained on WILD are expected to benefit from the challenging scenario the dataset represents. Moreover, an assessment of seven baseline methodologies on closed- and open-set attribution is presented, including robustness tests with respect to post-processing.
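
The closed/open split and the benchmark tasks described above map naturally onto a small evaluation harness. The sketch below only illustrates that structure under stated assumptions; the directory layout, file extensions, and the `predict_generator` callback are hypothetical and are not the official WILD tooling.

```python
from pathlib import Path

# Assumed layout mirroring the abstract: 10 closed-set and 10 open-set
# generators, 1,000 images each (half of them post-processed).
WILD_ROOT = Path("WILD")           # hypothetical root directory
CLOSED_SET = WILD_ROOT / "closed"  # 10 generators used to train attribution models
OPEN_SET = WILD_ROOT / "open"      # 10 unseen generators for the in-the-wild test

def closed_set_accuracy(predict_generator) -> float:
    """Closed-set identification: predict which of the 10 known generators
    produced each image; `predict_generator` is a user-supplied model."""
    correct = total = 0
    for gen_dir in sorted(CLOSED_SET.iterdir()):
        for img_path in gen_dir.glob("*.png"):
            total += 1
            correct += predict_generator(img_path) == gen_dir.name
    return correct / max(total, 1)

def open_set_rejection_rate(predict_generator, unknown_label="unknown") -> float:
    """Open-set attribution: images from the 10 additional generators should be
    rejected, i.e. mapped to `unknown_label` rather than a closed-set name."""
    rejected = total = 0
    for gen_dir in sorted(OPEN_SET.iterdir()):
        for img_path in gen_dir.glob("*.png"):
            total += 1
            rejected += predict_generator(img_path) == unknown_label
    return rejected / max(total, 1)
```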

  • 17 authors
·
Apr 28, 2025

The Base Dependent Behavior of Kaprekar's Routine: A Theoretical and Computational Study Revealing New Regularities

Consider the following process: take any four-digit number with at least two distinct digits, rearrange its digits in ascending and descending order, and find the difference between the two resulting numbers. Then repeat this routine using the difference as the new four-digit number. In 1949, D. R. Kaprekar became the first to discover that this process, known as the Kaprekar Routine, always yields 6174 within 7 iterations. Since this number remains unchanged under the Kaprekar Routine, it became known as Kaprekar's Constant. Previous works have shown that the only base-10 Kaprekar's Constants are 495 and 6174, covering the 3-digit and 4-digit cases. However, little attention has been given to other bases or to determining which digit cases and which bases have a Kaprekar's Constant. This paper analyzes the behavior of the Kaprekar Routine in the 3-digit case, deriving an expression for all 3-digit Kaprekar's Constants. In addition, the author developed a series of C++ programs to analyze the paths integers follow to their respective Kaprekar's Constant. Surprisingly, this program showed that the most common number of iterations required to reach Kaprekar's Constant for 3-digit integers is consistently 3, regardless of base. When arranged as a matrix, the iteration-count data exhibits a precise recurring relationship reminiscent of Pascal's Triangle.
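
For readers unfamiliar with the routine, here is a minimal sketch of one Kaprekar iteration, generalized to an arbitrary base and digit count. It illustrates the process described in the abstract and is not the author's C++ program; the function names and the base-10, 4-digit example are choices made for this sketch.

```python
def kaprekar_step(n: int, digits: int = 4, base: int = 10) -> int:
    """One Kaprekar iteration: write n with a fixed number of digits (padding
    with zeros), sort the digits, and subtract the ascending arrangement from
    the descending one."""
    ds = []
    for _ in range(digits):
        n, d = divmod(n, base)
        ds.append(d)
    ds.sort()
    ascending = sum(d * base**i for i, d in enumerate(reversed(ds)))
    descending = sum(d * base**i for i, d in enumerate(ds))
    return descending - ascending

def iterations_to_constant(n: int, constant: int = 6174,
                           digits: int = 4, base: int = 10, limit: int = 20):
    """Count the steps needed for n to reach the given Kaprekar constant."""
    for steps in range(limit):
        if n == constant:
            return steps
        n = kaprekar_step(n, digits, base)
    return None  # no convergence within the limit (e.g. repdigits such as 1111)

print(iterations_to_constant(3524))  # 3: 3524 -> 3087 -> 8352 -> 6174
```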

  • 1 author
·
Oct 16, 2017

Positional Description Matters for Transformers Arithmetic

Transformers, central to the successes of modern Natural Language Processing, often falter on arithmetic tasks despite their vast capabilities, which paradoxically include remarkable coding abilities. We observe that a crucial challenge is their naive reliance on positional information to solve arithmetic problems with a small number of digits, leading to poor performance on larger numbers. Herein, we delve deeper into the role of positional encoding and propose several ways to fix the issue, either by modifying the positional encoding directly or by modifying the representation of the arithmetic task to leverage standard positional encoding differently. We investigate the value of these modifications for three tasks: (i) classical multiplication, (ii) length extrapolation in addition, and (iii) addition in a natural language context. For (i), we train a small model on a small dataset (100M parameters and 300k samples) with remarkable aptitude in (direct, no scratchpad) 15-digit multiplication, essentially perfect up to 12 digits, whereas the usual training in this setting would yield a model failing at 4-digit multiplication. In the experiments on addition, we use a mere 120k samples to demonstrate: for (ii), extrapolation from 10-digit training to testing on 12-digit numbers, whereas the usual training shows no extrapolation; and for (iii), almost perfect accuracy up to 5 digits, while the usual training is correct only up to 3 digits (which is essentially memorization with a training set of 120k samples).
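
The abstract does not spell out the exact data format, so the sketch below shows just one plausible instance of "modifying the representation of the arithmetic task": tagging each operand digit with its place index and emitting the answer least-significant digit first. The tagging scheme and function names are assumptions made for illustration, not necessarily the paper's format.

```python
def tag_digits(number: int) -> str:
    """Annotate each digit with its place index (least-significant = 0), making
    the positional information explicit in the token stream."""
    digits = str(number)
    n = len(digits)
    return " ".join(f"{d}_{n - 1 - i}" for i, d in enumerate(digits))

def format_addition_example(a: int, b: int) -> str:
    """Serialize one addition sample: tagged operands, answer written
    least-significant digit first (the order in which it is computed)."""
    reversed_answer = " ".join(str(a + b)[::-1])
    return f"{tag_digits(a)} + {tag_digits(b)} = {reversed_answer}"

# One training sample a model might see:
print(format_addition_example(457, 86))
# 4_2 5_1 7_0 + 8_1 6_0 = 3 4 5   (457 + 86 = 543, digits reversed)
```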

  • 6 authors
·
Nov 21, 2023