The 19th International Conference on
Document Analysis and
Recognition
September 16-21, 2025 Wuhan, Hubei, China
Competitions
The ICDAR 2025 Organizing Committee is supporting a set of competitions that address current research challenges related to areas of document analysis and recognition. Contact: icdar2025competition@gmail.com.
1. ICDAR 2025 Competition on Historical Map Text Detection, Recognition, and Linking
Organizers:
Yijun Lin, Solenn Tual, Zekun Li, Leeje Jang, Yao-Yi Chiang, Jerod Weinman, Joseph Chazalon, Edwin Carlinet, Julien Perret, Nathalie Abadie, Bertrand Duménieu, Ta-Chien Chan, Hsiung-Ming Liao, Wen-Rong Su
Description:
Following a successful 2024 edition, the 2025 edition again focuses on text detection and recognition in challenging historical map images and introduces several new features: expanded and new datasets (including Chinese characters), large-scale synthetic training data, and improved evaluation.
This edition proposes four main tasks: (1) word detection, (2) phrase detection, (3) word detection and recognition, and (4) phrase detection and recognition.
2. ICDAR 2025 Competition on Understanding Chinese College Entrance Exam Papers
Organizers:
Liangcai Gao, Qinqin Yan, He Zhang, Tianrui Zong, Wenhao Yu and Lin Yang
Description:
This competition aims at a deeper understanding of documents with intricate layouts. All documents are in Chinese. This edition proposes the task of question answering on Chinese exam papers.
3. ICDAR 2025 Competition on FEw-Shot Text line segmentation of ancient handwritten documents (FEST)
Organizers:
Silvia Zottin, Axel De Nardin, Giuseppe Branca, Claudio Piciarelli, Gian Luca Foresti
Description:
The competition challenges participants to develop a few-shot learning system capable of performing text line segmentation of ancient manuscripts using only three annotated images per manuscript for training. The proposed dataset features a diverse collection of ancient manuscripts with varying layouts, levels of degradation, and non-standard formatting, reflecting real-world complexities.
4. ICDAR 2025 Competition on End-to-End Document Image Machine Translation Towards Complex Layouts
Organizers:
Chengqing Zong, Yaping Zhang, Yang Zhao, Lu Xiang, Yu Zhou, Zhiyang Zhang, Yupu Liang, Zhiyuan Chen
Description:
This competition (DIMT 2025 Challenge@ICDAR) aims to advance research in Document Image Machine Translation (DIMT) by addressing the challenges of translating text embedded in document images from one language to another. It focuses on multi-modal processing, combining textual content and document layouts to bridge the gap between optical character recognition (OCR) and natural language processing (NLP). This edition proposes two main tracks, OCR-free DIMT (Track 1) and OCR-based DIMT (Track 2), which correspond to end-to-end translation without and with OCR output, respectively. Each track includes two subtasks, small models and large models, to evaluate performance differences between large language models (LLMs) and smaller, lightweight models on complex document layouts. By promoting innovative techniques for end-to-end document translation, this challenge aims to push the boundaries of document intelligence and multilingual capabilities, fostering advancements in multi-modal document understanding.
5. ICDAR 2025 Competition on Multi-lingual Roadside Scene Text Recognition
Organizers:
Ajoy Mondal and C V Jawahar
Description:
This competition, the ICDAR 2025 Competition on Multi-lingual Roadside Scene Text Recognition (ICDAR 2025 MLT-RSTR), will be held at the 19th International Conference on Document Analysis and Recognition (ICDAR 2025). Similar competitions were held at ICDAR 2017 and ICDAR 2019 [9]. This edition introduces four main tasks: (i) Multi-lingual Text Word Detection (Task-A), (ii) Cropped Word Script Identification (Task-B), (iii) Joint Text Word Detection and Script Identification (Task-C), and (iv) End-to-End Text Word Detection and Recognition (Task-D). By engaging researchers in scene text detection and recognition, we aim to establish valuable benchmarks for advancing academic contributions in this field.
6. ICDAR 2025 Competition on Automatic Classification of Literary Epochs
Organizers:
Marina Litvak, Irina Rabaev, Ricardo Campos, Alípio Jorge, Adam Jatowt, Roza Bass, Hugo Sousa
Description:
The ICDAR 2025 Competition on Automatic Classification of Literary Epochs (CoLiE) aims to advance the field of temporal text analysis by challenging participants to develop automated methods for dating literary texts, that is, automatically predicting the time at which a text was written. The competition presents two main tasks: Task 1, Literary Epochs Classification, and Task 2, ChronoText Classification. Participants may enter both tasks or only one of them. To join the competition, click the "Join Competition" button in the top-right corner of the Kaggle page for the task. In addition, please fill in the registration form. We believe this shared task will help advance the fields of information retrieval and natural language processing and contribute to a deeper understanding of literary history. The competition is open to anyone passionate about information retrieval, machine learning, and natural language processing. Whether you are a seasoned expert or a newcomer to the field, we welcome you to participate and extend the boundaries of automated text analysis!
7. ICDAR 2025 Competition on Handwritten Text Recognition and Understanding
Description:
Handwritten Text Recognition (HTR) has progressed enormously over the last two decades thanks to new technologies based on deep neural networks and the availability of training data. Excellent results can currently be obtained when the task is easy, with regular layouts and simple writing styles. However, results degrade when layouts are complex, when the writing is difficult (complex vocabularies, ancient languages, named entities, abbreviations), or for new languages in which the HTR problem has not been sufficiently researched. In addition, this progress has posed new challenges and problems. This competition aims to advance HTR by proposing new tasks and open problems, namely HTR for complex writing styles, text understanding, and non-Latin languages.
Three scenarios are proposed in this edition of the HTR ICDAR competition:
1. Handwritten Text Recognition and Semantic Information Extraction in Difficult Historical Documents (HTR-Simancas)
2. Handwritten Notes Understanding
3. Indic Handwritten Document Recognition
Researchers, students, and practitioners are invited to participate in this competition. Interested entrants can take part in all three tracks, two, or just one of them. Please check the details and instructions for each track on the corresponding web page.
8. ICDAR 2025 Competition on Comics Understanding in the Era of Foundational Models
Organizers:
Emanuele Vivoli, Artemis Llabrés, Mohamed Ali Soubgui, Dimosthenis Karatzas
Description:
This competition (ICDAR 2025-ComPa-FOMO) aims at a deeper understanding of comic panel sequences. The images, in English, come from Golden Age American Comics collections gathered from the web. They are divided into panels and sorted using automatic processes. This edition proposes three tasks based on corresponding skills: multi-frame reasoning (Task A), comparisons (Task B), and temporal understanding (Task C). All tasks are framed as "Pick a Panel", where the goal is to pick the correct panel among the given options.
9. ICDAR 2025 Competition on Glyph Detection in 15th-Century European Printed Documents
Organizers:
Mathias Seuret, Vincent Christlein
Description:
This competition focuses on the detection of glyphs in early European printed documents from the 15th century. The primary aim is to build an extensive corpus of glyphs by accurately extracting a large number of characters, with an emphasis on high precision rather than complete coverage. We will provide a dataset containing multiple historical printed documents with varying image quality.