---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: chart_type
      dtype: string
    - name: question
      dtype: string
    - name: question_class
      dtype: string
    - name: gold
      sequence: string
    - name: correct_or_misleading
      dtype: int64
    - name: misleader
      dtype: string
    - name: annotations
      sequence: string
  splits:
    - name: train
      num_examples: 6059
    - name: val
      num_examples: 6146
    - name: test
      num_examples: 6042
tags:
  - charts
  - data-visualization
  - visual-question-answering
  - image-classification
  - misleading-visualizations
  - arxiv:2601.12983
task_categories:
  - image-classification
  - visual-question-answering
pretty_name: AttackViz
---

# AttackViz

AttackViz is a chart-image dataset for studying correct and misleading data visualizations. It was introduced in the paper [ChartAttack: Testing the Vulnerability of LLMs to Malicious Prompting in Chart Generation](https://arxiv.org/abs/2601.12983).

Each example contains a rendered chart image, metadata about the chart and question type, the expected gold answer, a binary label indicating whether the chart is correct or misleading, a misleading-visualization category, and serialized chart annotations.

## Dataset Structure

The dataset contains 18,247 examples across three splits:

| Split | Examples |
|---|---:|
| train | 6,059 |
| val | 6,146 |
| test | 6,042 |

## Fields

- `image`: chart image.
- `chart_type`: chart family. Values include `v_bar`, `h_bar`, and `line`.
- `question`: natural-language question associated with the chart.
- `question_class`: question category. Values include `compound`, `comparison`, `min_max`, `data_retrieval`, and `arithmetic`.
- `gold`: list of expected answer strings.
- `correct_or_misleading`: binary label where `0` means correct and `1` means misleading.
- `misleader`: misleading visualization type, or `none` for correct charts.
- `annotations`: list of serialized JSON strings containing the underlying chart specification and rendering metadata.
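To make the field layout and label semantics concrete, here is a minimal sketch using a hypothetical record (all values below are invented for illustration; the `image` field is omitted):

```python
# Hypothetical record mirroring the field layout above (image omitted).
example = {
    "chart_type": "v_bar",
    "question": "Which month has the highest value?",
    "question_class": "min_max",
    "gold": ["March"],
    "correct_or_misleading": 1,   # 0 = correct, 1 = misleading
    "misleader": "truncated_axis",
    "annotations": ['{"title": "Monthly sales"}'],
}

# Decode the binary label into a human-readable name.
label = "misleading" if example["correct_or_misleading"] == 1 else "correct"
print(label, example["misleader"])
```

For correct charts (`correct_or_misleading == 0`), `misleader` is `none`.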
## Chart Types

| Type | Examples |
|---|---:|
| v_bar | 7,414 |
| h_bar | 7,348 |
| line | 3,485 |

## Question Classes

| Class | Examples |
|---|---:|
| compound | 4,581 |
| comparison | 4,258 |
| min_max | 4,172 |
| data_retrieval | 3,989 |
| arithmetic | 1,247 |

## Label Distribution

| Label | Meaning | Examples |
|---:|---|---:|
| 0 | correct | 6,373 |
| 1 | misleading | 11,874 |

## Misleading Types

| Type | Examples |
|---|---:|
| none | 6,373 |
| inappropriate_use_of_stacked | 2,689 |
| misrepresentation | 1,694 |
| inverted_axis | 1,674 |
| inappropriate_axis_range | 1,370 |
| 3d | 1,276 |
| inappropriate_use_of_log_scale | 942 |
| inappropriate_use_of_line | 548 |
| truncated_axis | 498 |
| ineffective_color_scheme | 498 |
| inappropriate_item_order | 460 |
| dual_axis | 225 |

## Loading

```python
from datasets import load_dataset

dataset = load_dataset("jgermanmx/AttackViz")
print(dataset)
print(dataset["train"][0])
```

## Intended Use

This dataset can be used to evaluate or train models for chart understanding, misleading visualization detection, visual question answering over charts, and analysis of how design choices affect interpretation.

## Citation

If you use AttackViz, please cite:

```bibtex
@misc{ortizbarajas2026chartattack,
  title         = {ChartAttack: Testing the Vulnerability of LLMs to Malicious Prompting in Chart Generation},
  author        = {Jesus-German Ortiz-Barajas and Jonathan Tonglet and Vivek Gupta and Iryna Gurevych},
  year          = {2026},
  eprint        = {2601.12983},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  doi           = {10.48550/arXiv.2601.12983},
  url           = {https://arxiv.org/abs/2601.12983}
}
```

## Notes

The `annotations` field stores JSON as strings. Parse individual entries with `json.loads` when structured access to chart specifications is needed.
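A minimal parsing sketch (the annotation string below is invented for illustration; real entries come from `example["annotations"]` and follow the dataset's own schema):

```python
import json

# Invented annotation string for illustration; real entries are taken from
# example["annotations"] and carry the dataset's actual chart specification.
raw_annotation = '{"title": "Monthly sales", "x_axis": "Month", "y_axis": "Sales"}'

# json.loads turns the serialized string into a regular Python dict,
# enabling structured access to the chart specification.
spec = json.loads(raw_annotation)
print(spec["title"])
```

Since `annotations` is a list, apply `json.loads` to each entry (e.g. `[json.loads(a) for a in example["annotations"]]`).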