EXSCLAIM! Validation Dataset - Selections from Amazon Mechanical Turk Benchmark

Schwenker, Eric; Jiang, Weixin; Spreadbury, Trevor; Ferrier, Nicola; Cossairt, Oliver; Chan, Maria K. Y.

Organizations

MDF Open

Year

2021

Source Name

exclaim_validation

License

CC-BY 4.0

Contacts

Eric Schwenker <e.schwenker89@gmail.com>
Maria Chan <mchan@anl.gov>

DOI

10.18126/a6jr-yfoq

Description

Due to recent improvements in image resolution and acquisition speed, materials microscopy is experiencing an explosion of published imaging data. The standard publication format, while sufficient for traditional data ingestion scenarios in which a select number of images can be critically examined and manually curated, is not conducive to large-scale data aggregation or analysis. Most images in publications appear as components of a larger figure, with their explicit context buried in the main body or caption text, so even when aggregated, collections of images with weak or no digitized contextual labels have limited value. To address the problem of curating labeled microscopy data from the literature, the authors present EXSCLAIM!, a Python toolkit for the automatic EXtraction, Separation, and Caption-based natural Language Annotation of IMages from scientific literature. They describe the methodology behind the construction of EXSCLAIM! and demonstrate its ability to extract and label open-source scientific images at high volume. This dataset is used to validate the classification and bounding box prediction accuracy of the FigureSeparator component of the EXSCLAIM! pipeline.
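
As a rough illustration of how such a validation might be carried out, the sketch below compares predicted subfigure labels and bounding boxes against ground-truth annotations using intersection-over-union (IoU). The file name (exsclaim_validation.json), the record fields ("annotations", "predictions", "label", "bbox"), the one-to-one pairing of predictions with annotations, and the 0.5 IoU threshold are all illustrative assumptions, not the dataset's actual schema or the authors' evaluation code.

```python
import json

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical layout: one JSON record per figure, each holding ground-truth
# ("annotations") and model-predicted ("predictions") subfigure labels/boxes.
with open("exsclaim_validation.json") as f:
    records = json.load(f)

correct_labels, matched_boxes, total = 0, 0, 0
for record in records:
    for truth, pred in zip(record["annotations"], record["predictions"]):
        total += 1
        if pred["label"] == truth["label"]:
            correct_labels += 1
        if iou(pred["bbox"], truth["bbox"]) >= 0.5:  # common IoU threshold
            matched_boxes += 1

print(f"classification accuracy: {correct_labels / total:.3f}")
print(f"bounding-box accuracy (IoU >= 0.5): {matched_boxes / total:.3f}")
```

In practice the pairing of predictions to ground truth would typically be done by greedy or Hungarian matching on IoU rather than positional zip, but the metric computation itself is the same.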