Nilsson Kari
Contact
- Finnish Centre for Astronomy with ESO
- Address: Finland
Abstract Reference: 30807
Identifier: P2.13
Presentation: Poster presentation
Key Theme: 2 Management of Scientific and Data Analysis Projects
The design strategy of scientific data quality control software for the Euclid mission
Authors:
Nilsson Kari, Brescia Massimo, Cavuoti Stefano, Hagfors Haugan Stein V.

The most valuable asset of a space mission like Euclid is its data. Due to the huge data volume, automatic quality control becomes a crucial aspect over the entire lifetime of the experiment. Here we focus on the design strategy for the Science Ground Segment (SGS) Data Quality Common Tools (DQCT), whose main role is to provide software solutions to gather, evaluate, and record quality information about the raw and derived data products from a primarily scientific perspective. The stakeholders of this system include Consortium scientists, users of the science data, and the ground segment data management system itself. The SGS DQCT will provide a quantitative basis for evaluating the application of reduction and calibration reference data (flat fields, linearity corrections, reference catalogs, etc.), as well as diagnostic tools for quality parameters, flags, trend-analysis diagrams, and any other metadata parameter produced by the pipeline, all collected in incremental quality reports specific to each data level and stored in the Euclid Archive during pipeline processing (a data-structure sketch follows this abstract).

In a large program like Euclid, it is prohibitively expensive to process large amounts of data at the pixel level just for the purpose of quality evaluation. Thus, all measures of quality at the pixel level are implemented in the individual pipeline stages and passed along as metadata during production (also sketched below). In this sense, most of the tasks related to science data quality are delegated to the pipeline stages, even though the responsibility for science data quality is managed at a higher level.

The DQCT subsystem of the SGS is currently under development, but its path to full realization will likely differ from that of other subsystems, primarily because of the high level of parallelism and the wide redundancy of the pipeline processing. For instance, given the mechanism of a double Science Data Center for each processing function, the data quality tools must not only be widely spread over all pipeline segments and data levels, but also minimize the occurrence of divergent solutions implemented for similar functions, ensuring maximum coherency and standardization of quality evaluation and reporting across the SGS.
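As a rough illustration of the incremental, per-data-level quality reports described in the abstract, the Python sketch below models how quality parameters and flags emitted by pipeline stages could be accumulated without reprocessing pixel data. All names here (QualityReport, QualityEntry, the example parameter names, and the product identifier) are illustrative assumptions, not identifiers from the actual DQCT codebase.

    # Hypothetical sketch of an incremental, per-data-level quality report;
    # class and field names are illustrative, not taken from the Euclid SGS DQCT.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Any


    @dataclass
    class QualityEntry:
        """One quality parameter or flag emitted by a pipeline stage."""
        stage: str   # pipeline stage that produced the measure
        name: str    # e.g. "saturated_pixel_fraction" (hypothetical name)
        value: Any   # scalar, flag, or trend-analysis payload
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )


    @dataclass
    class QualityReport:
        """Incremental quality report for one data level of one data product."""
        product_id: str  # identifier of the raw or derived data product
        data_level: str  # reports are kept specific to each data level
        entries: list[QualityEntry] = field(default_factory=list)

        def add(self, stage: str, name: str, value: Any) -> None:
            """Append a quality measure; no pixel data is touched here."""
            self.entries.append(QualityEntry(stage, name, value))


    # Example: stages append their measures as the product moves through the pipeline.
    report = QualityReport(product_id="PRODUCT-000123", data_level="raw")
    report.add("bias_subtraction", "readout_noise_estimate", 3.2)
    report.add("flat_fielding", "flat_field_applied", True)

Keeping the report append-only mirrors the incremental collection described in the abstract: each stage contributes its measures once, and the archive stores the growing report alongside the product.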
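The delegation of pixel-level quality to the individual stages can be sketched in the same spirit: each stage computes its pixel-level measures at the point where the pixels are already in memory, and passes only compact metadata downstream, so later quality evaluation never has to reopen the pixel data. The stage, signature, and metric names below are assumptions for illustration only, not Euclid SGS code.

    # Hypothetical pipeline stage: pixel-level quality is computed here, once,
    # and only compact metadata travels downstream to the quality tooling.
    import numpy as np


    def flat_field_stage(image: np.ndarray, flat: np.ndarray,
                         saturation_level: float = 65535.0):
        """Apply a flat-field correction and derive quality metadata as a side product."""
        # Guard against non-positive flat-field pixels before dividing.
        safe_flat = np.where(flat > 0.0, flat, 1.0)
        corrected = image / safe_flat

        # Pixel-level quality measures (illustrative names), computed while the
        # pixels are in memory; downstream consumers read only this dict.
        quality_meta = {
            "saturated_pixel_fraction": float(np.mean(image >= saturation_level)),
            "unusable_flat_fraction": float(np.mean(flat <= 0.0)),
            "median_corrected_signal": float(np.median(corrected)),
        }
        return corrected, quality_meta

A downstream quality tool would then merge quality_meta into the product's quality report rather than re-reading the image, matching the abstract's point that pixel-level work happens only inside the pipeline stages while responsibility for science data quality remains at a higher level.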