1034_PrECISE_668858.md
# Introduction

In a research project it is common that several partners work together and produce a lot of data related to the project. It is therefore important to specify at an early stage of the project what data will be generated, how it will be shared between the project partners and whether it will be publicly available. A data management plan (DMP) is a tool which should assist in managing the data created during the project. In general, the DMP of the PrECISE project will specify what data is already available and what data will be generated, collected, and processed during the project. It also provides information on whether and how data will be exploited and opened for public access and re-use. The DMP includes information on what standards and methodologies will be used and how the data will be handled during and after the research project (how the data will be curated and preserved). The DMP is not a fixed document. It will evolve and gain more precision and substance during the lifespan of the PrECISE project. The first version of the DMP, covering the first six months of the project, includes the following:

* Methodology (Chapter 2): Data production, storage, dissemination and analysis
* Data Generation (Chapter 3): Data set description, research data identification
* Processing and explanation of generated data (Chapter 4): Information about the entity responsible for the data, how the data is collected, an identification of the end-users of the data, and research data identification
* Accessibility – Data sharing, archiving and preservation (Chapter 5)

# Methodology

The DMP describes how data are managed and handled within the PrECISE project among consortium members. In short, clinical and genetic data are produced by UZH, whereas proteomic data are provided by ETH Zurich.
Every data point is anonymized using a two-step process. First, a pseudonym is assigned by the lab information system PathPro at UZH, to which only UZH has access. Second, AstridBio assigns another, independent ID to each data point on the PrECISE server; the key to this second pseudonym is known only to AstridBio. AstridBio will set up two PrECISE servers: one for clinical data collection and storage, and another for experimental (genomics, proteomics) data collection and storage. As a result, distribution of data to the other consortium members can be regarded as anonymized. Consortium members other than UZH only have access to coded (ETHZ, AstridBio) or anonymized data (all remaining groups). The relevant contributions and data flows are summarized in the figure below: data will be provided by UZH and ETHZ; data storage and dissemination will be done by AstridBio. The consortium members on the right side of the figure will provide algorithm outlines, code and analysis results as stated in the grant proposal. During the project, data and algorithms will be shared among members of the project. Upon publication, anonymized data and algorithms/code will be made publicly available. In Figure 1, data production, storage, dissemination and analysis are summarized for the PrECISE project.

Figure 1: PrECISE data production, storage, dissemination and analysis

# Data Generation

The data generation, which is illustrated in Table 1, is described below.

**_Data set reference and name:_** Identifier for the data set to be produced.

**_Data set description:_** Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
**_Research Data Identification:_** The boxes (D, A, AI, U and I) symbolize a set of questions that should be clarified for all datasets produced in this project.

**Discoverable:** Are the data and associated software produced and/or used in the project discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. a Digital Object Identifier)?

**Accessible:** Are the data and associated software produced and/or used in the project accessible, and in what modalities, scope and licenses (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.)?

**Assessable and intelligible:** Are the data and associated software produced and/or used in the project assessable for and intelligible to third parties in contexts such as scientific scrutiny and peer review (e.g. are the minimal datasets handled together with scientific papers for the purpose of peer review; are data provided in a way that judgements can be made about their reliability and the competence of those who created them)?

**Useable beyond the original purpose for which it was collected:** Are the data and associated software produced and/or used in the project usable by third parties even a long time after the collection of the data (e.g. is the data safely stored in certified repositories for long-term preservation and curation; is it stored together with the minimum software, metadata and documentation to make it useful; is the data useful for the wider public needs and usable for the likely purposes of non-specialists)?

**Interoperable to specific quality standards:** Are the data and associated software produced and/or used in the project interoperable, allowing data exchange between researchers, institutions, organisations, countries, etc. (e.g. adhering to standards for data annotation and data exchange, compliant with available software applications, and allowing re-combinations with different datasets from different origins)?
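The five criteria above (D, A, AI, U, I) can be kept per dataset as a simple machine-readable record. The class below is a hypothetical illustration of such a checklist, not part of the project tooling; the class and field names are assumptions chosen for this sketch.

```python
from dataclasses import dataclass

@dataclass
class ResearchDataIdentification:
    """Hypothetical per-dataset checklist for the five criteria described above."""
    dataset: str
    discoverable: bool           # D: standard identifier, e.g. a DOI
    accessible: bool             # A: access modalities and licensing clarified
    assessable: bool             # AI: intelligible to third parties (peer review)
    usable_beyond_purpose: bool  # U: long-term preservation and curation
    interoperable: bool          # I: standard formats allowing data exchange

    def all_met(self) -> bool:
        """True if every criterion is satisfied for this dataset."""
        return all([self.discoverable, self.accessible, self.assessable,
                    self.usable_beyond_purpose, self.interoperable])

# Example entry mirroring row 1 of Table 1:
ppm39_rnaseq = ResearchDataIdentification(
    "PPM39_rnaSeq", True, True, True, True, True)
```

Such a record could be filled in once per row of Table 1 and checked automatically whenever the table is updated.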
It is recommended to mark an "x" in each applicable box and to explain it in more detail afterwards.

<table> <tr> <th rowspan="2"> **Data Nr.** </th> <th rowspan="2"> **Responsible Beneficiary** </th> <th rowspan="2"> **Data set reference and name** </th> <th colspan="3"> **Data set description** </th> <th colspan="5"> **Research data identification** </th> </tr> <tr> <th> **End user (e.g. university, research organization, SMEs, scientific publication)** </th> <th> **Existence of similar data (link, information)** </th> <th> **Possibility for integration and reuse (Y/N) + information** </th> <th> **D** 1 </th> <th> **A** 2 </th> <th> **AI** 3 </th> <th> **U** 4 </th> <th> **I** 5 </th> </tr> <tr> <td> 1 </td> <td> UZH </td> <td> PPM39_rnaSeq </td> <td> Members of the PhosphoNetPPM project in the framework of the SystemsX.ch initiative; members of the PrECISE project </td> <td> Similar data was produced in the framework of The Cancer Genome Atlas project (http://cancergenome.nih.gov/cancersselected/prostatecancer) </td> <td> Yes. Data can be integrated with information on DNA and proteins from the same samples and compared to/integrated with data from TCGA. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats adhere to commonly used scientific standards (e.g. BAM format, VCF files, etc.). </td> </tr> <tr> <td> 2 </td> <td> UZH </td> <td> PPM39_exomeSeq </td> <td> Members of the PhosphoNetPPM project in the framework of the SystemsX.ch initiative; members of the PrECISE project </td> <td> Similar data was produced in the framework of The Cancer Genome Atlas project (http://cancergenome.nih.gov/cancersselected/prostatecancer) </td> <td> Yes. Data can be integrated with information on RNA and proteins from the same samples and compared to/integrated with data from TCGA. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats adhere to commonly used scientific standards (e.g. BAM format). </td> </tr> <tr> <td> 3 </td> <td> UZH/ETHZ </td> <td> PPM39_swathMS </td> <td> Members of the PhosphoNetPPM project in the framework of the SystemsX.ch initiative; members of the PrECISE project </td> <td> \- </td> <td> Yes. Data can be integrated with information on DNA and RNA from the same samples and compared to/integrated with data from TCGA. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats will be developed as part of this study. </td> </tr> <tr> <td> 4 </td> <td> UZH </td> <td> PPM39_clinico-pathologic </td> <td> Members of the PhosphoNetPPM project in the framework of the SystemsX.ch initiative; members of the PrECISE project </td> <td> \- </td> <td> Yes. Data can be integrated with information on DNA, RNA and protein from the same samples. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats adhere to commonly used scientific standards (WHO/ISUP 2016, GCP). </td> </tr> <tr> <td> 5 </td> <td> UZH </td> <td> PPP500_swathMS </td> <td> Members of the PrECISE project </td> <td> \- </td> <td> Yes. Data can be integrated with clinicopathological information from the same samples. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats will be developed as part of this study. </td> </tr> <tr> <td> 6 </td> <td> UZH </td> <td> PPP500_clinico-pathologic </td> <td> Members of the PrECISE project </td> <td> \- </td> <td> Yes. Data can be integrated with proteomic information from the same samples. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats adhere to commonly used scientific standards (WHO/ISUP 2016, GCP). </td> </tr> <tr> <td> 7 </td> <td> UZH </td> <td> CRPC_heterogeneity_ampliSeq </td> <td> Members of the PrECISE project </td> <td> Similar data was produced in the framework of The Cancer Genome Atlas project (http://cancergenome.nih.gov/cancersselected/prostatecancer) </td> <td> Yes. Data can be integrated with proteomic information from the same patients. </td> <td> Digital Object Identifiers (DOIs) will be provided to ensure that data and algorithms are discoverable. </td> <td> Yes, see Chapter 5 </td> <td> Data is presently being analysed and assembled for publication in a high-impact peer-reviewed journal. </td> <td> Yes, see Chapter 5 </td> <td> Data formats adhere to commonly used scientific standards (e.g. BAM format). </td> </tr> </table>

Table 1: PrECISE data generation

# Chapter 4 Processing and explanation of generated data

The following sections provide some additional information on the data sets introduced in Chapter 3. This information includes the entity responsible for the data, how the data is collected, an identification of the end-users of the data, and research data identification.

## 4.1 PPM39_rnaSeq

This data set consists of RNA sequencing data for 39 prostate cancer patients of the University Hospital Zurich. More specifically, for all 39 patients, the RNA of prostate cancer tissue and the matching normal tissue was sequenced. In addition, for 27 of the patients, a second cancer tissue sample from the same prostate tumour was analysed in the same way.

### 4.1.1 Responsible Beneficiary

The data was produced by UZH.

### 4.1.2 Gathering Process

RNA sequencing was performed at the Functional Genomics Center Zurich. RNA-seq libraries were generated using the TruSeq RNA stranded kit with PolyA enrichment (Illumina, San Diego, CA, USA). Libraries were sequenced in paired-end mode on an Illumina HiSeq 2500 platform. After base calling, de-multiplexing and in-silico adaptor trimming, the data was aligned to the hg19 reference genome (UCSC version) using the STAR aligner v.2.2.3 (Dobin et al., 2013). Expression values were calculated using the featureCounts software (Liao et al., 2014).

### 4.1.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.

### 4.1.4 Research Data Identification

Within the scientific community, several initiatives have been launched to cover the minimal standards for the annotation of a sequencing project, such as MIGS or MINSEQE. However, these standards are rather preliminary and quite extensive.
For practical reasons, we decided to follow the example of the public data repository Gene Expression Omnibus (GEO), which requires a minimal set of metadata to be uploaded with every sequencing experiment that is submitted to the archive. This set contains descriptive information and protocols for the overall study and individual samples, and references to processed and raw data file names. A template for this set can be found under links (see bibliography, links - Chapter 8). The most important part of the metadata belonging to each data file is covered by the clinical data (e.g. age, tissue type, survival data, time to relapse, PSA levels, etc.), which allows for setting individual samples into context with each other.

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX_TA1, _TA2, _No, _Blood) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data. This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

## 4.2 PPM39_exomeSeq

This data set consists of DNA sequencing data for 39 prostate cancer patients of the University Hospital Zurich. More specifically, for all 39 patients, the DNA of prostate cancer tissue, the matching normal tissue and blood was sequenced. In addition, for 27 of the patients, a second cancer tissue sample from the same prostate tumour was analysed in the same way.

### 4.2.1 Responsible Beneficiary

The data was produced by UZH.

### 4.2.2 Gathering Process

DNA samples were sequenced in paired-end mode on an Illumina HiSeq 2500 platform. After base calling, de-multiplexing and in-silico adaptor trimming (using Trimmomatic; Bolger et al., 2014), the data was aligned to the hg19 reference genome (UCSC version) using the Bowtie2 software (Langmead et al., 2012).
BAM files containing the mapped reads were preprocessed in the following way: indel information was used to realign individual reads using the RealignerTargetCreator and IndelRealigner options of the Genome Analysis Toolkit (McKenna et al., 2010). Mate-pair information was verified and fixed using Picard tools (http://picard.sourceforge.net), and single-base quality scores were recalibrated using GATK BaseRecalibrator. After preprocessing, variant calling was carried out by comparing normal or tumour prostate tissue samples with matched blood samples using the programs MuTect (Cibulskis et al., 2013) and, independently, Strelka (Saunders et al., 2012). Somatic variants that were not detected by both programs were filtered out using CLC Genomics Workbench (CLC Genomics Workbench 8.0.3, https://www.qiagenbioinformatics.com), as were those that had an entry in the dbSNP (Sherry et al., 2001) common database or that represented synonymous variants without predicted effects on splicing.

### 4.2.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.

### 4.2.4 Research Data Identification (see also 4.1.4)

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX_TA1, _TA2, _No, _Blood) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data. This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

## 4.3 PPM39_swathMS

This data set consists of SWATH-MS proteomics data for 39 prostate cancer patients of the University Hospital Zurich. More specifically, for all 39 patients, the protein lysates of prostate cancer tissue and the matching normal tissue were analysed.
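The two-caller consensus step described in Section 4.2.2 (keeping only somatic variants reported by both MuTect and Strelka) amounts to a set intersection over variant keys. Below is a minimal sketch, assuming each variant is identified by chromosome, position, reference and alternate allele; the function name and the example coordinates are illustrative only.

```python
from typing import Iterable, Set, Tuple

# A variant is identified here by (chromosome, position, ref allele, alt allele).
Variant = Tuple[str, int, str, str]

def consensus_variants(mutect_calls: Iterable[Variant],
                       strelka_calls: Iterable[Variant]) -> Set[Variant]:
    """Keep only somatic variants detected by both callers, as described above."""
    return set(mutect_calls) & set(strelka_calls)

# Illustrative call sets: only the variant reported by both callers survives.
mutect = [("chr10", 89692905, "G", "A"), ("chr17", 7577120, "C", "T")]
strelka = [("chr17", 7577120, "C", "T"), ("chrX", 66765158, "T", "C")]
shared = consensus_variants(mutect, strelka)
```

In practice this filtering was performed with CLC Genomics Workbench; the sketch only illustrates the underlying set logic.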
In addition, for 27 of the patients, a second cancer tissue sample from the same prostate tumour was analysed in the same way.

### 4.3.1 Responsible Beneficiary

The data was produced by the Institute for Molecular Systems Biology of ETH Zurich (ETHZ).

### 4.3.2 Gathering Process

Production of the data has been described in detail by Guo et al. (Nature Medicine, 2015).

### 4.3.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.

### 4.3.4 Research Data Identification

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX_TA1, _TA2, _No) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data. This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

## 4.4 PPM39_clinico-pathologic

This data set consists of clinical follow-up data (age at diagnosis, progression-free survival time and status) and pathological data (UICC TNM classification, Gleason grade, WHO/ISUP 2016 grade group) for 39 prostate cancer patients of the University Hospital Zurich. More specifically, for all 39 patients, prostate cancer tissue was graded. In addition, for 27 of the patients, a second (lower-grade) cancer tissue area from the same prostate tumour was selected and graded in the same way.

### 4.4.1 Responsible Beneficiary

The data was produced by UZH.

### 4.4.2 Gathering Process

The 2016 WHO/ISUP classification of urological tumours was used to stage and grade the respective lesions. Clinical follow-up data of patients were retrieved from the Zurich proCOC (**Pro**state **C**ancer **O**utcomes **C**ohort) study.

### 4.4.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.
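The PathPro pseudonym format quoted in the Research Data Identification sections above (GXX.XXXX with an optional sample suffix such as _TA1, _No or _Blood) can be validated with a simple pattern check. The regular expression below is an assumption inferred from the format strings given in the text, not an official PathPro specification.

```python
import re

# Assumed pattern for PathPro pseudonyms as quoted in the text:
# "GXX.XXXX" followed by an optional sample suffix (_TA1.._TA12, _No, _Blood).
PATHPRO_PATTERN = re.compile(r"^G\d{2}\.\d{4}(_(TA\d{1,2}|No|Blood))?$")

def is_valid_pathpro_id(sample_id: str) -> bool:
    """Return True if sample_id matches the assumed PathPro format."""
    return PATHPRO_PATTERN.match(sample_id) is not None
```

A check like this could run on upload to the PrECISE server, catching malformed sample identifiers before they are linked to omics, clinical and pathological data.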
### 4.4.4 Research Data Identification

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data. This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

## 4.5 PPP500_swathMS

This data set consists of SWATH-MS proteomics data for 500 prostate cancer patients of the University Hospital Zurich. More specifically, for all 500 patients, the protein lysates of prostate cancer tissue and matching normal tissue were analysed.

### 4.5.1 Responsible Beneficiary

The data was produced by the Institute for Molecular Systems Biology of ETH Zurich (ETHZ).

### 4.5.2 Gathering Process

Production of the data has been described in detail by Guo et al. (Nature Medicine, 2015).

### 4.5.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.

### 4.5.4 Research Data Identification

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX_TA1, _TA2, _No) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data. This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

## 4.6 PPP500_clinico-pathologic

This data set consists of clinical follow-up data (age at diagnosis, progression-free survival time and status) and pathological data (UICC TNM classification, Gleason grade, WHO/ISUP 2016 grade group) for 500 prostate cancer patients of the University Hospital Zurich. More specifically, for all 500 patients, prostate cancer tissue was graded.
### 4.6.1 Responsible Beneficiary

The data was produced by UZH.

### 4.6.2 Gathering Process

The 2016 WHO/ISUP classification of urological tumours was used to stage and grade the respective lesions. Clinical follow-up data of patients were retrieved from the Zurich proCOC (**Pro**state **C**ancer **O**utcomes **C**ohort) study.

### 4.6.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.

### 4.6.4 Research Data Identification

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data. This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

## 4.7 CRPC_heterogeneity_ampliSeq

This data set consists of DNA sequencing data for 10 prostate cancer patients of the University Hospital Zurich. More specifically, for all 10 patients, the genomic DNA of 10 different tumour areas (CRPC) and of a matching normal tissue area will be sequenced.

### 4.7.1 Responsible Beneficiary

The data will be produced by UZH.

### 4.7.2 Gathering Process

DNA samples will be sequenced using amplicon sequencing on the Ion Torrent Proton system (genes _PTEN, AR, TP53, SPOP, FOXA1_).

### 4.7.3 End-User of the Data

Bioinformatics researchers of the PhosphoNetPPM (SystemsX.ch) project and the PrECISE project.

### 4.7.4 Research Data Identification (see also 4.1.4)

A pseudonym will be provided by the lab information system PathPro of UZH (format: GXX.XXXX_TA1-12, _No) to share data with AstridBio and other members of the consortium. Only UZH can unblind these data. This numbering system provides a unique identifier for each sample, linking all omics, clinical and pathological data.
This PathPro identifier will be replaced by AstridBio when setting up the data on the PrECISE server.

# Chapter 5 Accessibility – Data sharing, archiving and preservation

We will store clinical and experimental data on separate servers. Experimental data will be accessible for upload and download using encrypted standard file transfer protocols. We will separate data produced within the PrECISE project from data uploaded from other data repositories (e.g. TCGA data). The respective metadata will be stored together with the clinical data. Clinical data will be accessible via web applications using encryption. The system for data management will be based on the SmartBioBank software (_http://www.smartbiobank.com_). This software allows for integration of clinical and experimental/genomic data and enables data sharing and database merging between independent research groups. The clinical data and the metadata will be searchable via a web-based interface.

At the beginning of the project, data will be shared exclusively between members of the PrECISE project. The existing project SVN repository (_https://precise.technikon.com_) is used as a sharing platform for small amounts of data (up to 100 MB). This allows easy synchronization between the partners as well as data versioning. It has to be noted that only project partners have access to the project SVN. A data transfer agreement has to be signed to get access to unpublished data (see Chapter 9).

The PrECISE Executive Board will make decisions on the publication of the data. Most journals in the biomedical field require the submission of genomic data together with standard metadata to public repositories. This means that, in case of publishing our findings in scientific journals, we will make the data available to the public. In this case, the data sets will get a persistent identifier. External researchers will be able to apply for access to the PrECISE data before the general release to the public.
In this case, an application form needs to be filled in, detailing a research plan, ethical approval, etc. The application form will be available on the PrECISE project website, together with information on the approval process and the data content of the PrECISE database. The Executive Board can grant access to the entire database or restrict access to selected data sets. Non-profit organizations should cover the expenses of the consortium related to the data migration etc. For-profit organizations will have to pay an additional fee for data access.

The data can be used to discover new biomarkers or sets of biomarkers which allow for improved diagnostics of prostate cancer. In addition, different levels of data were acquired of a complex biological system transitioning from one developmental stage (i.e. normal tissue) to another (i.e. tumour tissue). Integrating these data sets may result in important discoveries that elucidate the dynamics of biological systems that fail to control homeostasis, resulting in disease.

No data will be deleted. Instead, preservation of raw data and metadata will be achieved through two efforts: first, the setup of a data server by AstridBio; second, the publication of raw data and metadata in journals specializing in long-term preservation of biomedical data (e.g. Nature Scientific Data, using the DataVerse platform). In this way, no additional costs (time/effort) are involved in preparing the data for sharing/preservation, and the data will be publicly available for a very long time period (decades).

# Chapter 6 Summary and Conclusion

This data management plan outlines the handling of data generated within the PrECISE project, during and after the project lifetime. As this deliverable will be kept as a living document, it will be regularly updated by the consortium (next in M18, within the periodic report). The partners have put into writing their plans and guarded expectations regarding valuable and publishable data.
1036_5G-XHaul_671551.md
# 1 Executive Summary

The purpose of the Data Management Plan (DMP) is to describe the main elements of the data management policy that will be used by 5G-XHaul with regard to the datasets generated by the project. This DMP describes the format and the way to store, archive and share the data created within the project. The data may include, but are not limited to, code, publications, as well as measured data, e.g. from field trials. The DMP's content will be updated from its creation (Month 6) to the end of the project (Month 36). The 5G-XHaul consortium will support the Open Data Initiative and will ensure that part of the collected data is made available to the public. Our goals in this document are to:

1. specify the data that will be collected during the lifetime of 5G-XHaul;
2. investigate the best practices and guidelines for sharing the project outcomes, as well as facilitating open access (OA) to research data; and
3. define how the data collected in the project will be made available to third parties in contexts such as scientific scrutiny, peer review and use for research purposes.

# 2 Introduction

During the lifetime of 5G-XHaul, data of different nature will be generated and collected. The main goal of the 5G-XHaul Data Management Plan (DMP) is to outline the types of data foreseen for generation throughout the project, as well as the context and procedures of this generation. It also outlines the protocols followed to assess the generated/collected data with respect to their sensitivity, and outlines the data acquisition plan for the duration of the project.
Following the EC template in the "Guidelines on Data Management in Horizon 2020" (_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oadata-mgt_en.pdf_), the DMP includes the following major components:

* Data set reference and name,
* Data set description,
* Standards and metadata,
* Data sharing, and
* Archiving and preservation (including storage and backup).

Following this guideline, the DMP clarifies that the generated scientific research data will be easily:

* Discoverable
* Accessible
* Assessable and intelligible
* Useable beyond the original purpose for which it was collected
* Interoperable to specific quality standards

The remainder of the deliverable is structured as follows. Section 3 presents the rights to access the data generated in the project. Section 4 summarizes the rights to access 5G-XHaul publications. In Section 5, the data to be gathered in the context of the project is presented. In Section 6, the data collection processes are outlined. Section 7 describes how data are handled according to their confidentiality category, i.e. data which are confidential and need special protection, data which are not confidential and can be shared, and data whose handling depends on the informed consent of the participant. Section 8 addresses the terms of archiving and preservation. Finally, Section 9 summarizes the deliverable.

The 5G-XHaul consortium will attempt to maximise the visibility and exploitation of the project results and their long-term impact by providing as many publicly available results as possible that can be easily accessed and reused.

# 3 Open Access to research data

Open access (OA) to research data refers to the right to access and re-use digital research data generated by projects. OA data repositories will be set up by the consortium at the start of the project.
However, partner-hosted repositories, as well as external repositories, will be used to ensure maximum visibility, serve as backups, and ensure availability well after the end of the project.

With respect to software modules developed during the course of the project, whenever possible, i.e. when not violating Intellectual Property Rights (IPR), they will also be provided under an open-source license to allow for their reuse, adaptation and further enhancement to match possibly different application contexts and serve as a baseline for future business and research endeavours.

The project will be using the Open Access Infrastructure for Research in Europe (OpenAIRE) (https://www.openaire.eu/) [1][2], as well as exploiting the expected support to be provided on research data management for projects funded under Horizon 2020. OpenAIRE is a service that has been built to offer exactly this functionality and may be used to reference both the publication and the data. The lead author is responsible for getting approvals and then sharing the data and metadata on Zenodo (http://zenodo.org/). Zenodo is a repository supported by CERN and the EU OpenAIRE project which is open, free, searchable and structured, with flexible licensing allowing for the storage of all types of data: datasets, images, presentations, publications and software. Some of its features are:

* The repository has backup and archiving capabilities.
* The repository allows for integration with github.com, where the project code will be stored. GitHub provides a free and flexible tool for code development and storage.
* Zenodo assigns every publicly available upload a Digital Object Identifier (DOI) to make the upload easily and uniquely citable.

The EU expects funded researchers to manage and share research data in a manner that maximizes opportunities for future research and complies with best practice in the relevant subject domain, that is:

* The dataset has clear scope for wider research use
* The dataset is likely to have long-term value for research or other purposes
* The dataset has broad utility for reference and use by research communities
* The dataset represents a significant output of the research project

Openly accessible research data generated during the 5G-XHaul project can be accessed, mined, exploited, reproduced and disseminated free of charge for the user.

# 4 Open access to scientific publications

The scientific publications will be published under the Open Access (OA) paradigm to the largest possible extent and in accordance with the quality standards set by the project. Since numerous publications are expected during the course of the project, the project's approach will be to exploit both "**gold**" and "**green**" OA practices. Publications that are expected to be highly impactful and cited will be provided through "**gold**" OA to ensure their maximum visibility. For the other publications, the "**green**" OA practice will be followed, making them publicly available as soon as possible, also in their published version. OA will be achieved through the following steps:

1. Any paper presenting the project results will acknowledge the project ("The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 671551-5G-XHaul") and display the EU emblem.
2. Any paper presenting the project results will be deposited, by the time of publication at the latest, in a formal repository for scientific papers.
If the organization does not have a formal repository (https://www.openaire.eu/participate/deposit/idrepos), the paper can be uploaded to Zenodo.

3. Authors will ensure that the publisher accepts OA via self-archiving in their department's formal repository or via http://zenodo.org/.
4. Authors can choose to pay "author processing charges" to ensure OA publishing, but they still have to deposit the paper in a formal repository for scientific papers (step 2).
5. Authors will ensure OA via the repository to the bibliographic metadata identifying the deposited publication. More specifically, the following will be included:
   1. The terms "European Union (EU)" and "Horizon 2020";
   2. "Dynamically Reconfigurable Optical-Wireless Backhaul/Fronthaul with Cognitive Control Plane for Small Cells and Cloud-RANs-5G-XHaul", Grant agreement number 671551;
   3. Publication data, and length of embargo period if applicable; and
   4. A persistent identifier.
6. Each case will be examined separately in order to decide between self-archiving and paying for OA publishing.

In all cases, publications will be treated – in terms of storage – in a similar manner to research data, meaning that project, partner and externally hosted repositories will be used to maximise visibility and ensure long-term preservation of the articles. As mentioned above, the research data and associated software needed to reproduce the results presented in publications will also be made available as much and as soon as possible.

# 5 5G-XHaul context and data description

The objectives of 5G-XHaul with respect to data gathering arise from the _definition of the use cases_, one of the first tasks to be carried out within the first months of the project. The usage scenarios also play an important role when deciding which data to store. The datasets identified in this way will be generated and collected to drive the project's technical work.
Preliminary examples of the datasets that will be made available in 5G-XHaul comprise:

* real network traffic statistics and performance-related information for mobile (2G/3G/4G) and fixed (xDSL) access network nodes,
* anonymous data rate requirements of mobile users derived from testbeds,
* anonymous mobility data of mobile users derived from testbeds, and
* data on channel quality, interference and error rate between different network units (RRH, BBU-pool, …).

# 6 Data collection

COSMOTE already provides three datasets, listed and described below. Telefonica will study the possibility of sharing some traffic traces from their networks, taking into account its legal obligations for protecting the confidentiality of its customers. The availability of the data is currently being analysed by Telefonica's legal services in order to find a common position for all the 5G-PPP projects.

## 6.1 Dataset 1: Mobile Network Traffic Statistics (at Base Station level)

This dataset will include real network traffic statistics for a number of 3G/4G/4G+ Base Stations (BS) covering a specific (urban) geographical area located in Athens, Greece. More specifically, the information provided will include:

* BS configuration-related data
* Performance-related statistics for the 3G and the 4G/4G+ BSs located in the area under consideration.

### 6.1.1 Data set reference and name

##### 6.1.1.1 Base Station Configuration Data

This dataset will include the following information:

1. _BS ID_: Starting from 1 up to the max. number of BSs that will be considered. We stress that this is not the real BS ID.
2. _4G/4G+ BS related configuration data_ (if the BS supports 4G): incl. the BS type (e.g. micro, macro), the number of carriers and the available bandwidth (in MHz).
3. _3G/HSPA/HSPA+ BS related configuration data_ (if the BS supports 3G): incl. the BS type (e.g. micro, macro), the number of carriers and the bandwidth per Radio Access Technology (RAT).
Additional information that could be provided is the following:

* Rough coverage per BS
* Neighbouring cell relations per BS
* Inhabitants per km² in the area under study

##### 6.1.1.2 3G/HSPA/HSPA+ Performance-related statistics (@Base Station level)

The following performance-related data will be provided:

* _Time_: in the form 2015-11-25 00:00:00.0
* _Cell Name_: The BS ID (1, 2, 3, …) along with the sector ID (A, B, C, …)
* _Speech Traffic (Erlang)_: The total speech traffic in Erlangs
* _Data Traffic (MB)_: The hourly UL/DL data traffic in MB
* _Data Rate Distribution_: The distribution of traffic per data rate; for 8, 16, 32, 64, 128, 256 & 384 Kbps and HSUPA data rates
* _Cell Throughput (Kbps)_: The average HSDPA throughput in Kbps
* _HSUPA Throughput (Kbps)_: The average and max HSUPA throughput in Kbps
* _User Throughput (Kbps)_: The average HSDPA/HSUPA throughput per user
* _Average number of Users_: The average number of HSDPA/HSUPA users
* _Access Failure (% of total)_: The total access failure rate as % of total sessions (for CS and PS connections)
* _RRC Connection Setup and Access Failure rate_: The access failure rate due to RRC failure as % of total sessions (for CS and PS connections)
* _RRC Connection Setup And Access Attempts Per Request Cause_: The number of RRC Setup and Access requests per type; that is, for Conversational Call, International Call, Background Call, Subscriber Traffic Call, Emergency Call, Cell Reselection, Registration, High Priority Signalling, Low Priority Signalling, Call Re-establishment, SRNC Relocation.
* _DCH Rejection Rate_: The DCH Rejection Rate (% of total sessions) for UL/DL Voice, Video and PS sessions.
* _DCH Attempts_: The DCH Access Attempts for Voice, Video and PS sessions.
* _Drop Call Rate (% of total) for Speech Sessions per Failure Cause_: The Drop Call Rate (% of total sessions) for Speech Sessions per Failure Cause, i.e.
due to Iu, Radio, BTS, IuR, UE, RNC, transmission or pre-emption issues. The total number of releases is also measured.
* _Drop Call Rate (% of total) for Video Sessions per Failure Cause_: The Drop Call Rate (% of total sessions) for Video Sessions per Failure Cause, i.e. due to Iu, Radio, BTS, IuR, UE, RNC, transmission or pre-emption issues. The total number of releases is also measured.
* _Drop Call Rate (% of total) for PS Sessions per Failure Cause_: The Drop Call Rate (% of total sessions) for PS Sessions per Failure Cause, i.e. due to Iu, Radio, BTS, IuR, UE, RNC or transmission issues. The total number of releases is also measured.

_Note_: All counters are available on a per-hour basis.

##### 6.1.1.3 4G/4G+ Performance-related statistics (@Base Station level)

The following performance-related data will be provided:

* _Time_: in the form 2015-11-25 00:00:00.0
* _Cell Name_: The BS ID (1, 2, 3, …) along with the sector ID (A, B, C, …)
* _Data Traffic (MB)_: The hourly UL/DL data traffic in MB
* _RRC Drop ratio (%)_: The Radio Resource Control (RRC) Drop ratio
* _RRC Setup Failure Rate (%)_: The RRC Setup Failure Rate
* _E-RAB Failure Rate (%)_: The E-RAB Failure Rate
* _Mean Cell Throughput (Mbps)_: The hourly average LTE UL/DL throughput in Mbps
* _Max Cell Throughput (Mbps)_: The maximum LTE UL/DL throughput in Mbps
* _Average Number of UEs_: The average number of mobile devices
* _Max number of UEs_: The maximum number of simultaneous UE sessions.
* _Total UEs in eNB_: The total number of UEs in the BS during the hour.
* _% Carrier Aggregation (CA) capable UEs_: The percentage of CA-capable UEs over the total UEs
* _Intra-eNB HO Failure Rate (%)_: The failure rate (% of total attempts) for intra-eNB handovers.
* _Inter-eNB HO over X2 Failure Rate (%)_: The failure rate (% of total attempts) for inter-eNB handovers over the X2 interface
* _Inter-eNB HO over S1 Failure Rate (%)_: The failure rate (% of total attempts) for inter-eNB handovers over the S1 interface
* _Number of CS Fallback (CSFB) Attempts_: The number of CSFB attempts in idle and connected mode
* _Average PRB Usage per TTI (%)_: The UL/DL average Physical Resource Block usage (i.e. PRBs used / PRBs available) per TTI
* _Intra-eNB Latency_: Measures the intra-eNB latency
* _Average Channel Quality Indicator (CQI)_: The average CQI value
* _CQI Distribution per Level (00-15) (%)_: The CQI distribution per level for each of the 16 CQI values as a percentage of total samples
* _Modulation and Coding Scheme (MCS) Distribution per Order (%)_: The % distribution of MCS usage (Low (MCS0-9), Medium (MCS10-19), High (MCS20-28)).
* _MCS Distribution per Scheme (MCS0-MCS28) (%)_: The MCS distribution per scheme for each of the 29 values as a percentage of total samples
* _Downlink (DL) Modulation Scheme Distribution (%)_: The % distribution of modulation scheme usage in the DL (QPSK, 16QAM, 64QAM)
* _Uplink (UL) Modulation Scheme Distribution (%)_: The % distribution of modulation scheme usage in the UL (QPSK, 16QAM, 64QAM)
* _Average Physical Uplink Control Channel (PUCCH) Signal to Interference plus Noise Ratio (SINR)_: The average PUCCH SINR
* _Average Physical Uplink Shared Channel (PUSCH) SINR_: The average PUSCH SINR

_Note_: The above counters will be available on a quarterly basis, apart from the average/max UEs per eNB, CQI and SINR PUCCH/PUSCH counters.

### 6.1.2 Data set description

##### 6.1.2.1 Base Station Configuration Data

This dataset will be provided in the following format:

* BS ID: A number which discriminates the BS among the others, i.e. 1, 2, 3, … max.
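To illustrate how a consumer of Dataset 1 might ingest the hourly per-cell counter exports described above, the following is a minimal, hedged sketch. The column names and the sample row are hypothetical; only the timestamp format (e.g. `2015-11-25 00:00:00.0`) and the cell-naming scheme (BS ID plus sector ID) are taken from the dataset description itself.

```python
"""Sketch of parsing hourly per-cell traffic counters (illustrative only;
the actual export format is defined by the data provider in Section 6.1.2)."""
import csv
import io
from datetime import datetime

def parse_counters(csv_text):
    """Parse a counters export into typed records."""
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            # Timestamps are provided in the form "2015-11-25 00:00:00.0"
            "time": datetime.strptime(rec["Time"], "%Y-%m-%d %H:%M:%S.%f"),
            "cell": rec["Cell Name"],  # BS ID + sector ID, e.g. "12A"
            "dl_mb": float(rec["Data Traffic DL (MB)"]),
            "ul_mb": float(rec["Data Traffic UL (MB)"]),
        })
    return rows

# Hypothetical sample row, for illustration only:
sample = (
    "Time,Cell Name,Data Traffic DL (MB),Data Traffic UL (MB)\n"
    "2015-11-25 00:00:00.0,12A,1534.2,210.7\n"
)
print(parse_counters(sample)[0]["cell"])  # → 12A
```

Parsing the timestamp with `%f` accepts the trailing `.0` fractional second shown in the dataset description; the remaining counters listed above would be added as further typed columns in the same way.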
* 4G/4G+ configuration data, e.g., 3 sectors (A,B,C) x 20 MHz
* 3G/HSPA/HSPA+ configuration data, e.g.:
  * 3 sectors (A,B,C) x 21 Mbps
  * 6 sectors (D,E,F,G,H,J) x 42 Mbps
  * Uplink: 5.8 Mbps for all cells
* Location-related data, e.g., latitude and longitude (37.94369217, 23.71617261)

_Note_: All BSs' coordinates have been displaced from the original ones; however, the relative distances have been kept, thus providing the original (displaced) network topology.

##### 6.1.2.2 3G/HSPA/HSPA+ Performance-related statistics (@Base Station level)

The counters described above for the 3G/HSPA/HSPA+ BSs will be given in the following format:

##### 6.1.2.3 4G/4G+ Performance-related statistics (@Base Station level)

The counters described above for the 4G/4G+ BSs will be given in the following format:

### 6.1.3 Standards and metadata

\-

## 6.2 Dataset 2: Fixed (Access) Network (xDSL) Traffic Statistics

The information that will be provided will be described as soon as the consortium makes a decision on the information/dataset that will be utilized in the context of the project.

### 6.2.1 Data set reference and name

### 6.2.2 Data set description

### 6.2.3 Standards and metadata

## 6.3 Dataset 3: Anonymous Mobility related data of mobile users

Network-related data (Cell-ID, LAC/TAC, RAT, RSSI/RSRP/RSRQ/RSNR/CQI, call/session status (idle, off-hook)) for the area under consideration (see Dataset 1), along with user location (latitude, longitude) and time information (timestamp), will be provided for a limited number of users (e.g. 2-5 users). The anonymity of the users/subscribers will be ensured by: (a) utilizing SIM (pool) cards used by COSMOTE employees only, for network performance/optimization purposes, that is, not belonging to real COSMOTE subscribers, and (b) utilizing part of the device IMEI (that is, the last 4 digits) for the subscriber identification (if needed).
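The coordinate-displacement step noted in Section 6.1.2.1 (shifting every base station while keeping relative distances) can be sketched as follows. This is a minimal illustration, not the provider's actual procedure; the offset value is an arbitrary placeholder, as the real offset is known only to the data provider.

```python
"""Illustrative coordinate displacement: one fixed (d_lat, d_lon) offset
applied to every BS hides the true locations while preserving the
relative network topology."""

SECRET_OFFSET = (1.2345, -0.5678)  # hypothetical (d_lat, d_lon) offset

def displace(stations, offset=SECRET_OFFSET):
    """Shift each (bs_id, lat, lon) record by the same fixed offset."""
    d_lat, d_lon = offset
    return [(bs_id, lat + d_lat, lon + d_lon) for bs_id, lat, lon in stations]

original = [(1, 37.94369217, 23.71617261), (2, 37.95000000, 23.72000000)]
shifted = displace(original)

# Pairwise coordinate differences (and hence the topology) are unchanged:
assert abs((shifted[1][1] - shifted[0][1])
           - (original[1][1] - original[0][1])) < 1e-9
```

Because the same translation is applied to every station, differences between any two stations' coordinates (and thus the displaced network topology the dataset advertises) are preserved, while none of the published coordinates matches a real site.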
Details about the information that will be provided will be described as soon as the consortium makes a decision on the information/dataset that will be utilized in the context of the project.

# 7 Data Management and sharing

The process of making the 5G-XHaul data public and publishable at the repository will follow the procedures described in the Consortium Agreement.

## 7.1 Categories of data based on their confidentiality level

_Open/Protected/Confidential_

<table>
<tr> <th> Topic </th> <th> Objective </th> <th> Data Type </th> <th> Source </th> <th> Category </th> </tr>
<tr> <td> Dataset 1 </td> <td> To be utilized for simulation purposes </td> <td> Mobile network traffic statistics @BS level </td> <td> Commercial network </td> <td> Confidential </td> </tr>
<tr> <td> Dataset 2 </td> <td> To be utilized for simulation purposes </td> <td> Fixed access network traffic statistics </td> <td> Commercial network </td> <td> Confidential </td> </tr>
<tr> <td> Dataset 3 </td> <td> To be utilized for simulation purposes </td> <td> Mobility-related statistics </td> <td> Commercial network </td> <td> Confidential </td> </tr>
</table>

## 7.2 Data sharing

**_COSMOTE's Note_**: The data/information that will be provided by COSMOTE is strictly confidential and is intended to be used solely in the context and timeframe of the 5G-XHaul project; in the case of relevant studies/publications, COSMOTE shall be informed prior to submission.

Data will be shared when the related deliverable or paper has been made available at an Open Access (OA) repository. The normal expectation is that data related to a publication will be openly shared. However, to allow the exploitation of any opportunities arising from the raw data and tools, data sharing will proceed only if all co-authors of the related publication agree.

OA to research data in 5G-XHaul will be achieved through the following steps:

1. Update of the DMP
2. Data selection
3. Data deposit into a data repository
4.
License the data for reuse (the Horizon 2020 recommendation is to use CC0 or CC BY)
5. Provide info on the tools needed for validation: everything that could help a third party in validating the data (workflow, code, etc.)

Independent of the choice, the authors will ensure that the repository:

* Gives the submitted dataset a persistent and unique identifier, to make sure that research outputs in disparate repositories can be linked back to particular researchers and grants
* Provides a landing page for each dataset, with metadata
* Helps to track whether the data has been used, by providing access and download statistics
* Keeps the data available in the long term, if desired
* Provides guidance on how to cite the data that has been deposited

As suggested by the European Commission, the partners will deposit at the same time the research data needed to validate the results presented in the deposited scientific publications. This timescale applies to data underpinning the publication and the results presented: research papers written and published during the funding period will be made available with a subset of the data necessary to verify the research findings. The consortium will then make a newer, complete version of the data available within 6 months of project completion.

Other data (not underpinning a publication) will be shared during the project's life following a granular approach to data sharing, releasing subsets of data at distinct periods rather than waiting until the end of the project, in order to obtain feedback from the user community and refine the data as necessary.

An important aspect to take into account is who is **allowed to access the data**. It could happen that some of the datasets should not be publicly accessible to everyone. In this case, control mechanisms will be established.
These include:

* Authentication systems that limit read access to authorized users only
* Procedures to monitor and evaluate, one by one, access requests: users must complete a request form stating the purpose for which they intend to use the data.
* Adoption of a Data Transfer Agreement that outlines conditions for access and use of the data

Each time a new dataset is deposited, the consortium will decide who is allowed to access the data. Generally speaking, anonymised and aggregate data will be made freely available to everyone, whereas sensitive and confidential data will only be accessible to specific authorized users.

# 8 Archiving and preservation

The Guidelines on Data Management in Horizon 2020 require defining the procedures that will be put in place for long-term preservation and backup of the data. Datasets will be maintained for 5 years following project completion. To ensure high-quality long-term management and maintenance of the datasets, the consortium will implement procedures to protect information over time. These procedures will permit a broad range of users to easily obtain, share, and properly interpret both active and archived information, and they will ensure that the information is:

* kept up to date in content and format, so that it remains easily accessible and usable;
* protected from catastrophic events (e.g., fire and flood), user error, hardware failure, software failure or corruption, security breaches, and vandalism.

Both Zenodo and OpenAIRE are purpose-built services that aim to provide archiving and preservation of long-tail research data. In addition, the 5G-XHaul website, linking back to OpenAIRE, is expected to be available for at least 2 years after the end of the project. Regarding the second aspect, solutions dealing with disaster risk management and recovery, as well as with regular backups of data and off-site storage of backup sets, are always integrated when using the official data repositories, i.e.
Zenodo; the partners will ensure the adoption of similar solutions when choosing an institutional research data repository.

# 9 Conclusions

The purpose of the DMP is to support the data management life cycle for all data that will be collected, processed or generated by the 5G-XHaul project. The DMP is not a fixed document, but evolves during the lifespan of the project. This document is expected to mature during the project; more developed versions of the plan could be included as additional deliverables at later stages. The DMP will be updated at least by the mid-term and final reviews to fine-tune it to the data generated and the uses identified by the consortium, since not all data or potential uses are clear at this stage of the project.

# 10 Bibliography

1. The OpenAIRE Project – Open Access Infrastructure for Research in Europe, http://www.openaire.eu
2. "OpenAIRE, Open access overview, What is open access?" [Online]. Available: https://www.openaire.eu/open-access-overview/open-access-info/overview-of-open-access
# Introduction

This deliverable covers two important topics: the Data Management Plan (DMP) and Data Protection Agency (DPA) Notifications. Any research project's DMP should describe what data the project will generate, whether and how it will be exploited or made accessible for verification and re-use, and how it will be curated and preserved.

The CogNet project will participate in the EC's Pilot on Research Data, and to this end it will deposit relevant data in a research data repository. The benefits from data sharing include creating a community around the dataset, where publications allow validation of results. This will lead to more collaboration and advanced research, with citations of the dataset making project results more visible and increasing their impact. To maximise the extent of the data dissemination, CogNet will implement provisions for third parties to access, mine, exploit and reproduce the data, including the information necessary for validating the project's results.

This document establishes the plan to satisfy the EC expectations for data management, which are the following:

* Any Horizon 2020 project is invited to submit a DMP as an early project deliverable if it is relevant to its research.
* All projects submitting a research proposal to 'Research and Innovation Actions' and 'Innovation Actions' are required to include a short outline of their general data management policy.
* All projects which are successfully funded under the Pilot on Open Research Data are expected to produce an initial DMP deliverable within the first six months of the project.
* Each of the required points for the DMP should be addressed on a dataset-by-dataset basis.
* List file types, formats, and the reuse potential for other researchers.
* Where possible, use existing metadata standards, which will allow for potential integration with other datasets.
* Explain how your data will be shared, and the level of access to be provided (and why).
* Use a repository service to deposit your data and, where possible, make it and the underlying metadata accessible to third parties, free of charge.
* Arrange backup and storage procedures which are best suited to the partners and the nature of your project.
* Complete more detailed DMPs at regular intervals throughout your project when changes occur, or as a minimum in the run-up to the mid-term and final reviews.

The DMP addresses the main elements of the data management policy that will be used by the project participants with regard to all datasets. Hence, it also establishes some procedural mechanisms for participants with the responsibilities of Data Controllers and Processors. For each dataset, the following aspects will be considered:

1. _OVERVIEW OF POLICIES_: with respect to relevant bordering international initiatives and programs.
2. _ROLES_: identifying the different participants, the roles played and their responsibilities.
3. _DATA SET REFERENCE AND NAME_: identifier for the data set to be produced.
4. _DATA SET DESCRIPTION_: description of the data that will be generated or collected, its origin (in case it is collected), nature, scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
5. _STANDARDS AND METADATA_: reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created.
6. _DATA INFRASTRUCTURE_: details of data production and provision assets.
7. _DATA SHARING_: description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons should be mentioned (e.g. ethical, rules on personal data, intellectual property, commercial, privacy-related, security-related).

8. _ARCHIVING AND PRESERVATION (INCLUDING STORAGE AND BACKUP)_: description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.

In addition, any project that produces, collects or processes research data should also take into account that the data used to produce the research results of the project should be:

1. _DISCOVERABLE_: identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier).
2. _ACCESSIBLE_: stating whether the data and associated software produced and/or used in the project are accessible, and in what modalities, scope and licenses (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.).
3. _ASSESSABLE AND INTELLIGIBLE_: covering the capacity of third parties to access and use the data with a short learning curve.
4. _USEABLE BEYOND THE ORIGINAL PURPOSE FOR WHICH IT WAS COLLECTED_: easing the application of the data in other domains or sectors.
5. _INTEROPERABLE TO SPECIFIC QUALITY STANDARDS_: allowing data exchange between researchers, institutions, organisations, countries, etc. (e.g. adhering to standards for data annotation and data exchange, being compliant with available software applications, and allowing re-combinations with different datasets from different origins).
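Criterion 1 (discoverability via a standard identifier such as a DOI) is typically satisfied by depositing the dataset in a repository that mints DOIs, e.g. Zenodo. The following is a minimal, hedged sketch of such a deposition via Zenodo's public REST API; the access token, titles and metadata values are placeholders, and this is illustrative rather than the project's actual tooling.

```python
"""Illustrative Zenodo deposition sketch (stdlib only). The metadata
fields follow the publicly documented Zenodo deposition schema; treat
names and values here as assumptions/placeholders."""
import json
import urllib.request

ZENODO_API = "https://zenodo.org/api/deposit/depositions"

def build_metadata(title, description, creators):
    """Assemble the deposition metadata that makes the dataset discoverable."""
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": [{"name": c} for c in creators],
            "access_right": "open",
            "license": "cc-by",  # Horizon 2020 recommends CC BY or CC0
        }
    }

def create_deposition(token, metadata):
    """POST a new deposition; Zenodo assigns a DOI when it is published.
    (Network call — not executed in this sketch.)"""
    req = urllib.request.Request(
        f"{ZENODO_API}?access_token={token}",
        data=json.dumps(metadata).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    md = build_metadata("CogNet sample dataset", "Anonymized network counters", ["TSSG"])
    print(json.dumps(md, indent=2))
```

The DOI returned on publication then serves as the persistent identifier required for the bibliographic metadata of any publication that underpins the dataset.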
The following sections of the document are:

* Section 2 – Data Management Plan: This section describes how each of the aforementioned data management aspects will be tackled in CogNet. During the different updates of this deliverable, the details for each identified database will be included.
* Section 3 – Data Protection Agency Notifications: This section includes the plan to obtain the required certification and notification of the data protection agencies of the member states where research is being carried out. Notifications, when applicable, will acknowledge receipt of notification of the project in the jurisdiction of Data Protection Agencies. During the different updates of this deliverable, all the notifications sent and their receipts will be included.

This deliverable is not a fixed document; it will evolve during the lifespan of the project to include updates on both data-management-related information (e.g. details on datasets) and updates related to the notifications to the applicable Data Protection Agencies (including those outside of Europe, e.g. WeFi data collected in the USA). It is expected that at least one update per year will be needed.

# Data Management Plan

Section 1 (Introduction) described the different aspects that have to be considered when managing each and every dataset. The following sub-sections present the details of each of these aspects.

## Overview of policies

Here the plan includes exploration activities to detect and apply existing procedures from:

* Departments from the consortium that have data management guidelines.
* Groups from outside the consortium that have data management guidelines.
* Institutions/agencies that establish mandatory data protection and security policies.
* Mandatory formal standards to improve accessibility and intelligibility for third parties.

The concrete actions here comprise:

* List any relevant institution on data management, data sharing and data security.
* Clarify the relevance of the servers' location to the data.

## Roles and responsibilities

### DMP coordinator

This section appoints the party responsible for data management. The DMP coordinator, Vicomtech-IK4 (ES), as leader of Task 1.5 Data Protection and Privacy Management, is responsible for implementing the DMP and ensuring it is reviewed and revised.

### DMP participants

As established in the Consortium Agreement (CA), clause 10.7.1, each party agrees to comply with all obligations and requirements of its corresponding national data protection legislation and the Data Protection Directive 95/46/EC, and to provide all legal documents and certifications strictly required for compliance with such legislation to the Consortium's designated Party for data protection control.

Meanwhile, under 10.7.2, the Parties agree that any Background, Results, Confidential Information and/or any and all data and/or information that is provided, disclosed or otherwise made available between the Parties during the implementation of the Action and/or for any Exploitation activities ("Shared Information") shall not include personal data as defined by Article 2, Section (a) of the Data Protection Directive (95/46/EC) (hereinafter referred to as "Personal Data"). Accordingly, each Party agrees that it will take all necessary steps to ensure that all Personal Data is removed from the Shared Information, made illegible, or otherwise made inaccessible (i.e. de-identified) to the other Parties prior to providing the Shared Information to such other Parties.
Finally, under 10.7.3, notwithstanding the preceding paragraph, if Personal Data is involved and/or needed during the implementation of the Action and/or for any Exploitation activities, each Party who provides or otherwise makes available to any other Party Shared Information containing Personal Data ("Contributor") notifies this disclosure to the Consortium's designated Party for data protection control and represents that: (i) it has the authority and/or the authorisation to disclose the Shared Information, if any, which it provides to the Parties under this CA; (ii) where legally required and relevant, it has obtained appropriate informed consents from all the individuals involved, or from any other applicable institution, all in compliance with applicable regulations; and (iii) there is no restriction in place that would prevent any such other Party from using the Shared Information in accordance with and for the purpose of the Agreement.

### DMP processing phases

There are four main types of research data:

* **Observational data**: captured in real time; typically cannot be reproduced exactly.
* **Experimental data**: from labs and equipment; can often be reproduced, but it may be expensive to do so.
* **Simulation data**: from models; can typically be reproduced if the input data is known.
* **Derived or compiled data**: after data mining or statistical analysis has been done; can be reproduced if the analysis is documented.

The 5G networking data processed in CogNet centres on 3 phases:

* First, a static non-synthetic dataset (_**observational**_) captured from a real infrastructure, to establish research activities with a static dataset. This dataset will enable Machine Learning algorithm simulations and training; essentially processed by WP2, WP3, WP4 and WP5.
* Second, a live synthetic (_**simulation**_) / non-synthetic (_**experimental**_) dataset captured in real time from a testbed or lab infrastructure, used to monitor that infrastructure and to test optimization strategies coming from the Machine Learning algorithms in a controlled setting; essentially processed by WP3, WP4 and WP5.
* Third, a live non-synthetic dataset (_**observational**_) captured in real time from a telco operator infrastructure, used to monitor it and to apply optimization strategies coming from the Machine Learning algorithms. In this case, the data will be the central part of the demonstration and validation in WP6.

For the first phase, CogNet intends to buy an anonymized collection of information, from end user devices to network infrastructure nodes, including information about network performance, application engagement and performance. This dataset for the first phase must satisfy the directives for the protection of personal data. The Consortium Agreement (CA) does not declare who is responsible for this first-phase dataset, but the project coordinator (TSSG) will perform the sourcing/purchase process, because CogNet is not itself a legal entity such as a foundation or limited liability company. Thus, the responsibility for this dataset will fall on TSSG. Regarding the second phase, some experiments using testbeds or labs will be carried out during the project, and the captured datasets could be of interest to the research community; in this case the infrastructure owner retains the data rights. For the third phase, the telco operator that provides the monitored infrastructure owns the data captured from it.

### DMP collaborative responsibilities

The responsibilities over the data depend on which EU state has jurisdiction over the ownership and the processing of the data in the CogNet project. This question is answered by determining who is responsible, i.e.
who is the _**'controller'**_ as that term is defined in the Data Protection Directive (95/46/EC). This is the entity that "determines the purposes and means of the processing of personal data". CogNet is itself not a legal entity such as a foundation or limited liability company. Thus, the responsibility for data protection will fall on the member of CogNet who is the '_**controller**_', the entity which collects and owns the data in question. When data is shared between parties, the recipient is a _**'processor'**_ and may use the data only as specified by the controller. Thus, the decisive factor is which entity collected the data. Each of these parties is responsible for notifying its local data protection authority.

### DMP responsibilities

The following responsibilities will apply on a per-dataset basis:

* data acquisition / capture (raw):
  * For pre-existing datasets owned by any of the CogNet partners, the partner that owns the dataset and makes it available to the project will be the Data Controller.
  * For acquisition of a new dataset, we will apply what is defined in Section 5 of CogNet's Description of the Action: "_For the purpose of the project TSSG will be the Data Controller and Robert Mullins the Point of Contact_".
  * For creation or capture of new datasets, the partner(s) that create or capture the dataset will be the Data Controller(s). This applies to all types of research data (observational, experimental, simulation, and derived or compiled data).
* metadata production (derived) - machine learning research participants:
  * In parallel activities, each participant is responsible for their metadata, and an aggregator is designated according to the volume of metadata aggregated by each individual: the participant generating the largest volume establishes the aggregation mechanism and format.
  * In pipelined processing, each participant is responsible for their metadata and for aggregating it on top of the available metadata.
* data sharing - The Data Controller will determine the details of how data will be shared, including access procedures, embargo periods (if any), outlines of the technical mechanisms for dissemination, the software and other tools necessary for enabling re-use, and whether access will be widely open or restricted to specific groups. Similarly, the Data Controller will identify the repository where data will be stored, if one already exists and has been identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). Recommendations from the OpenAIRE project, such as depositing data and publications in the Zenodo repository, will be the starting point before making any decision on data sharing.
* data archiving and preservation (including storage and backup) - the data archiving and preservation procedures to be put in place for long-term preservation of each dataset will be the responsibility of the corresponding Data Controller. This includes indicating how long the data should be preserved, what its approximate final volume is, what the associated costs are, and how these costs are planned to be covered.

## Data set reference and name

As described in detail in section 2.2.3, the 5G networking data that will be processed in CogNet pivots around three phases: a static non-synthetic dataset (observational), a live synthetic (simulation) / non-synthetic (experimental) dataset, and a live non-synthetic dataset (observational). Both raw captured data and processed metadata will be included in the dataset. In each phase, different datasets targeting the same information, but captured from different infrastructures, will be generated or purchased and processed.
For each dataset the corresponding Data Controller will ensure that the following information is defined:

* Contact Name - telephone and email contact details
* Date of First Version - date the first version was completed
* Date of Last Update - date it was last changed

## Data set description

### General description

CogNet will harvest 5G networking data as anonymized collections of information from end user devices to network infrastructure nodes, including information about network performance, application engagement and performance. A preliminary data categorisation would cover:

* Device data
* Connectivity data
  * Primary connected network - canonical network name (/WLAN/SSID/BSSID or /3GPP/PLMN/CellLAC/CellID or /3GPP2/SID/NID/BSID)
  * Specific network technology - connected network interface (Wi-Fi, GPRS, EDGE, HSPA+, LTE, …)
* Usage data (total and per session)
  * Rx - amount of data downloaded on the network by a device (in bytes)
  * Tx - amount of data uploaded on the network by a device (in bytes)
  * avgRx - average downlink throughput observed on the network by a device (in bps)
  * avgTx - average uplink throughput observed on the network by a device (in bps)
  * maxRx - maximum downlink throughput observed on the network by a device (in bps)
  * maxTx - maximum uplink throughput observed on the network by a device (in bps)
  * avgLatencyRx - average downlink latency observed on the network by a device (in ms)
  * avgLatencyTx - average uplink latency observed on the network by a device (in ms)
  * maxLatencyRx - maximum downlink latency observed on the network by a device (in ms)
  * maxLatencyTx - maximum uplink latency observed on the network by a device (in ms)
  * avgJitterRx - average downlink jitter observed on the network by a device (in ms)
  * avgJitterTx - average uplink jitter observed on the network by a device (in ms)
* KPI indicators (local-node and global-infrastructure):
  * Indicators/counters composed for too-late HOs, too-early HOs, HOs to a wrong cell and
HOs subsequent to a connection setup (HO-related)
  * Number of HOs (incoming + outgoing)
  * Ping-pong handovers
  * Handover success rate (HOSR)
  * Bit error rate (BER)
  * Block error rate (BLER)
  * Outage

<table> <tr> <th> **CogNet** </th> <th> **Version 1.0** </th> <th> </th> <th> **Page 13 of 33** </th> </tr> </table>

  * Average throughput
  * Number of calls, hours and/or data communicated
  * Call set-up success rate (CSSR)
  * System/cell load information
  * Energy expenses
  * OPEX linked to energy expenses
  * Access delay
  * Call setup delay
  * Handover delay
  * Block call rate (BCR)
  * Drop call rate (DCR)
  * Support for ubiquitous communication
  * Support for emergency calls
  * Support for lawful interception
  * Network neutrality
  * QoS delivery:
    * capacity
    * delay
    * packet loss
    * guaranteed capacity
    * preemption
  * Resource consumption:
    * energy
    * network capacity
    * compute
    * storage
    * caches
  * Resource consumption:
    * signalling
    * sub-optimal data paths
    * management overhead (incl. monitoring)
* Contextual data (location, velocity, movement mode, etc.)
* Session start time - date and time of day, with millisecond resolution
  * Geo-location source used when the session started - GPS

This information will be organised around:

* Geo binning, to keep a reference of the network performance within a specific area.
A postprocessor will aggregate data and measures for each area cell visited by the devices.

* Session records triggered by connectivity events related to:
  * connectivity changes in the employed network interface

Besides, as we plan to manage data coming from virtual machines, we will have related datasets containing parameters such as:

* CPU
* RAM usage
* Disk usage
* Network usage
* User/system load
* Network traffic categories
* Incoming VNF traffic
* Outgoing VNF traffic
* VNF allocation time
* VNF instantiation time
* Number of VNFs
* Types of VNFs

### Format

Different captured features have standard formats and units, such as the GPS position or the speed, or _de facto_ formats, such as availability, utilization, CPU load, delay or bandwidth throughput. Moreover, the general format in which the probes record networking data will have an analytical structure influenced by, and partially inherited from, the data collection acquired for the first phase. Hence, despite the high density of nodes and the decentralised nature of the assets, we will create a common and homogeneous data format for data coming from heterogeneous infrastructures and systems.

### Accuracy and Volume

Our aim is to apply real-time policies so that 5G networks can dynamically adapt to next-generation challenges. In this sense, live monitoring and real-time decision-making would involve only stateless data. However, the nature of the use cases and scenarios that require Machine Learning algorithms to detect trends and patterns calls for a time window of data spanning the last 24 hours, in order to include the different human behaviours during a daily life cycle.
So the different factors that come into the data volume equation are:

* the number of nodes and end devices
* the time window (historical values)
* the sampling frequency
* the number of features captured
* the metadata created for each asset

Because the input data has a short historical memory, because machine learning is required to achieve real-time performance, and because the effect of the decisions taken must be visible immediately (<1 second to <1 minute, depending on the SLA and the dimension of the change), the dataset provided covers 24 hours.

### Origin

As described in detail in section 2.2.3, the 5G networking data that will be processed in CogNet pivots around three phases: a static non-synthetic dataset (observational), a live synthetic (simulation) / non-synthetic (experimental) dataset, and a live non-synthetic dataset (observational). For the first phase, CogNet intends to buy an anonymized collection of information gathered from end user devices, including information about network performance, application engagement, end user demand, usage patterns and performance. These datasets can come from:

* telco operator partners belonging to the consortium
* companies specialized in big data platforms that enable network operators to optimize resources using the largest sample databases of their kind
* other projects or communities that share and provide networking datasets

For the second phase, CogNet partners will collect their own datasets from their controlled testing and experimental environments. For the final phase, the datasets will come from the telco operators that participate in the CogNet consortium. When dealing with new or non-anonymised data, CogNet will follow the best practices and recommendations made by the Article 29 Data Protection Working Party in their Opinion 05/2014 on Anonymisation Techniques 1 to appropriately anonymise the data.
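The volume factors listed in the Accuracy and Volume subsection above can be combined into a rough sizing estimate. The sketch below is illustrative only: every figure (node count, sampling rate, feature count, bytes per value, metadata overhead) is an assumption, not a CogNet measurement.

```python
def estimate_volume_bytes(nodes, window_hours, sample_hz, features,
                          bytes_per_value=8, metadata_overhead=0.2):
    """Rough dataset sizing from the factors above: samples per node in
    the time window, times the per-sample feature payload, plus a
    fractional overhead for the metadata created for each asset."""
    samples_per_node = window_hours * 3600 * sample_hz
    raw = nodes * samples_per_node * features * bytes_per_value
    return raw * (1 + metadata_overhead)

# Illustrative figures only: 10,000 monitored nodes/devices, a 24 h
# window, one sample per second, 40 captured features.
size = estimate_volume_bytes(nodes=10_000, window_hours=24,
                             sample_hz=1, features=40)
print(f"{size / 1e9:.1f} GB")  # 331.8 GB
```

Such a back-of-the-envelope figure is mainly useful for checking that the chosen 24-hour window keeps each dataset within the storage and transfer capacity of the repository.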
## Standards and metadata

Network monitoring is an important function in network management because it helps achieve three goals: performance monitoring, fault monitoring, and account monitoring.

### Standards

Concerning the standards to be considered, the list to be taken into account for the formal dataset structure, nomenclature and taxonomy is:

* RFC 2819 2 and RFC 4502 3, the Remote Network Monitoring Management Information Base.
* SMIv2 4 is the naming structure used for naming monitored objects.
* MIB-II 5 defines how the set of monitored objects can be defined. The attributes of these objects should carry network monitoring values.
* NETCONF 6 is a network management protocol developed and standardized by the IETF 7. It provides mechanisms to install, manipulate, and delete the configuration of network devices.
* YANG 8 is a data modelling language for the NETCONF network configuration protocol, developed by the NETMOD working group in the IETF.
* IF-MAP 9 is an open standard that makes it possible for any authorized device or system to publish information to an IF-MAP server, to search that server for relevant information, and to subscribe to any updates to that information.
* SNMP/SNMPv2 10 is the internet standardized protocol for network management. It is used extensively for network monitoring functions such as collecting error and user statistics. SNMPv2 provides the information exchange and backbone for network monitoring.
* RMON/RMON2 11 is the standard for monitoring internet traffic. It is implemented by internet device vendors so that a network built from RMON-compliant devices can be monitored using RMON-compliant software. The RMON standard provides remote monitoring: RMON-compliant devices are created to check the status of the network.
* SMON provides network monitoring for switched networks.
* ATM-RMON is designed for ATM networks. It is a newer standard based on RMON-like features.

So data collection will be done according to a standardised data capture/recording process. The captured data will be collected employing generic networking probes already available at the network nodes or NFV assets, plus probes specifically developed and deployed on the end-user devices.

### Versioning

Concerning dataset versioning, folder structure and filenames: to help people find the data, the title/reference will include who created or contributed to the data, the date of creation, and under what conditions it can be accessed. Thus, each instance recording a network monitoring session would follow this naming convention:

* infrastructure identifier (telco operator or lab)
* machine learning algorithms and version applied (highlighting that the dataset is being influenced by some deployed CogNet policies)
* date and time
* _[main service(s)]_ when possible, including the stimulus to the network: the main service(s) processed and delivered by the network and accessed by the users

### Structure

According to the previous standards, and the structure for data aggregation that they establish, individual metadata will be aggregated. For metadata generation and linking it to the base values, the CogNet Standardization leader of T7.3 (TID) will lead and coordinate the work within the consortium to define a format. Concerning the methodology, Network Management System (NMS) activities focus on two alternatives:

* the management network and the production network may be separated physically (out-of-band management segment), or
* the management network and the production network may share the same physical infrastructure (a VLAN segment of the network).

This aspect is still under discussion and will be decided in the WP2 Architecture deliverable (D22).
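The naming convention described in the Versioning subsection above could be sketched as follows. This is a hypothetical helper: the separators, timestamp format, and example values are assumptions, since the deliverable fixes only the ordered components of the name.

```python
from datetime import datetime

def session_name(infrastructure, algorithm, version, when, services=None):
    """Build a dataset instance name following the versioning convention
    above: infrastructure identifier, ML algorithm and version, date and
    time, and optionally the main service(s)."""
    parts = [infrastructure, f"{algorithm}-v{version}",
             when.strftime("%Y%m%dT%H%M")]
    if services:  # [main service(s)] is optional in the convention
        parts.append("+".join(services))
    return "_".join(parts)

name = session_name("operatorA-lab", "traffic-prio", "1.2",
                    datetime(2016, 3, 1, 8, 0), services=["video", "voip"])
print(name)  # operatorA-lab_traffic-prio-v1.2_20160301T0800_video+voip
```

Encoding the algorithm version in the name makes it immediately visible that a session was captured while a given CogNet policy was deployed, as the convention requires.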
NMS should include three components:

* configuration management
* log management
* network monitoring

This is because the decisions taken from the monitored network data affect the network behaviour and hence the dataset from that moment onwards. The dataset will be accompanied by a timed log bookmarking the decisions taken. Thus, the documentation that accompanies the data helps secondary users to understand and reuse it. To control the consistency and quality of data collection, two consecutive samples (24 hours each) will be taken under similar social conditions (working days, starting at the same time). Because network conditions can be very changeable, with highly dynamic behaviour, and we want to keep these singularities, no processes such as calibration, data entry validation, peer review of data or representation will be applied. Concerning analytical and procedural information management, a preliminary process performed by the Machine Learning '_**processor**_' will remove repeated samples or measurements.

### Vocabulary

The variables, vocabularies and units of measurement are being defined in the WP2 activities around the definition of Use Cases and scenarios (D21). At this moment, the parameters considered are led by typical telecom KPIs.

## Data infrastructure

For network monitoring (the _**'controller'**_), including the probes to obtain networking values such as errors and user statistics, there are two alternatives:

* the management network and the production network may be separated physically (out-of-band management segment), or
* they may share the same physical infrastructure (a VLAN segment of the network).

This aspect is still under discussion and will be decided in the WP2 Architecture deliverable (D22).
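The repeat-removal preprocessing step mentioned in the Structure subsection above could look like the following minimal sketch. It assumes a simple representation of probe readings as dictionaries; only back-to-back duplicates are dropped, so values that recur later (the singularities the project wants to keep) are preserved.

```python
def drop_repeats(samples):
    """Remove consecutive duplicate measurements, keeping the first of
    each run - a minimal sketch of the repeat-removal step above."""
    cleaned = []
    for sample in samples:
        if not cleaned or sample != cleaned[-1]:
            cleaned.append(sample)
    return cleaned

# Hypothetical probe readings: the repeated 100 at the end is kept
# because it is not consecutive with the first run of 100s.
readings = [{"rx": 100}, {"rx": 100}, {"rx": 120}, {"rx": 120}, {"rx": 100}]
print(drop_repeats(readings))  # [{'rx': 100}, {'rx': 120}, {'rx': 100}]
```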
Moreover, the Network Manager (the '_**processor**_') needs infrastructure to manage and process the data volume that comes from the Machine Learning algorithms (e.g. performance degradation detection or traffic prioritization). The specific platform is still under discussion and will be decided in the WP2 Architecture deliverable (D22). Its implications for the DMP will be analysed then.

## Data sharing

Most of the aspects related to copyright and Intellectual Property Rights (IPR) have been established in the 5G Infrastructure Collaboration Agreement between each beneficiary and the EC in respect of a particular 5G Action, and in the Consortium Agreement.

### Ownership

The CA does not establish ownership for each data collection, but rather how ownership is to be split across partner sites in collaborative research activities. Here, the DMP states who will own the copyright and IPR of any data that CogNet will collect or create, considering the dataset as the monitored values plus the metadata and the produced logs.
For each dataset the ownership will be shared:

* data captured - the telco operator or lab provider, who provides not only the infrastructure but also the probes or the setup to deploy them
* metadata production - the machine learning research participants:
  * in parallel activities, each participant owns the metadata they generate
  * in pipelined processing, each participant owns the metadata they generate

### Licensing

Regarding licensing for reuse, or any restrictions on the reuse of third-party data, we have to keep in mind (as described in detail in section 2.2.3) that the 5G networking data that will be processed in CogNet pivots around three phases: a static non-synthetic dataset (observational), a live synthetic (simulation) / non-synthetic (experimental) dataset, and a live non-synthetic dataset (observational). For the first phase, CogNet intends to buy an anonymous collection of information from end user devices to network infrastructure nodes, including information about network performance, application engagement and performance. This purchased data collection therefore cannot be shared. Considering permissions to reuse third-party data and any restrictions needed on data sharing, the datasets generated in the second and final stages will be available on an Open Access basis (for non-commercial research/educational use only) from the project's end.

### Ethics

The first aspect to consider is the management of any ethical issues. This responsibility falls on the CogNet 'controller', who has to:

* gain formal consent for data preservation, sharing and reuse;
* protect the identity of participants via anonymisation;
* ensure data is stored, transferred and handled securely.

The DMP coordinator, Vicomtech-IK4 (ES), will monitor that the guidelines and recommendations collected in the DMP are translated into project development.

### Accessibility

The retention period for all data and information collected, stored and processed for the dataset will be five years following the end of the project.
During this period, unless otherwise decided by the consortium members, the database functionality will remain the same as during the project. The dataset will be accessible through the CogNet portal and, if no other decision is made with respect to the database by the consortium members, the datasets will be attached to the CogNet wiki for open access. A record will track the entities reusing the CogNet dataset by means of a data sharing agreement. These third parties will have no rights of redistribution or of publishing the dataset, partially or totally, and will be required to reference the dataset in any publication or derived product. Any potential user that wants to get access to the data will be bound by the DMP conditions, respecting the sensitivity of the data; here, a non-disclosure agreement would give sufficient protection for confidential data. Any potential user that wants to get access would be guided to:

1. Submit a "request" to the dataset _**controller**_ from the CogNet consortium (TSSG). This request will contain:
   * full name
   * organization and department
   * email address
   * description of intended use
2. After reviewing the request, if the _**controller**_ approves it, the user will receive an email with a special link to verify the email address.
3. The user is then asked to agree to and sign the following terms of access:

[RESEARCHER_FULLNAME] (the "Researcher") has requested permission to use the 5G networking dataset (the "Dataset"). In exchange for such permission, Researcher hereby agrees to the following terms and conditions:

1. Researcher shall use the Dataset only for non-commercial research and educational purposes.
2. _**Controller**_ makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3.
Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify _**controller**_, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Dataset, including but not limited to Researcher's use of any copies of copyrighted data that he or she may create from the Dataset.
4. Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions.
5. _**Controller**_ reserves the right to terminate Researcher's access to the Dataset at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
7. The law of the _**Controller's**_ country shall apply to all disputes under this agreement.

## Archiving and preservation

### Availability

The database will be accessible through the CogNet portal for five years following the end of the project. During this period, unless otherwise decided by the consortium members, the database functionality will remain the same as during the project. In this five-year period, if no other decision is made with respect to the database by the consortium members, the datasets will be attached to the CogNet wiki for open access. Once the dataset becomes public after the project end, the consortium will publish it in a chosen third-party service, ensuring that this does not conflict with any legal jurisdiction in which the data are held or with the protection of sensitive data.

### Security

Because the data managed is not confidential, is not related to personal data or trade secrets, and will be anonymized, the risks to data security are minimal, but they are still managed according to ISO 27001.
The following security requirements have been identified:

* _Access Control (Authorization)_. The CogNet dataset repository must be capable of controlling the level of access that each user has depending on their role. There must be appropriate mechanisms to define and enforce such access control (e.g. firewalls, file system permissions, secure log-in), including physical control. _This is provided by the IT department of the dataset maintainer or by the dataset repository technology itself._
* _Authentication_. It must be guaranteed that the system being accessed is the intended one and that the user is who they claim to be. _During the project, the partners will have access using a private password. Once the datasets become public, an e-mail based mechanism will grant access._
* _Non-Repudiation_. To ensure the capability to prevent users from denying that data files were accessed, altered or deleted, auditing processes must be implemented. _This is provided by the IT department of the dataset maintainer or by the dataset repository technology itself._
* _Data Confidentiality_. Within the scope of the project, protection of information from unauthorized access and disclosure must be preserved by restricting per-user access and encrypting the information both during transmission and during storage. After the defined retention period expires, information erasure/destruction must be ensured. _This is provided by the IT department of the dataset maintainer or by the dataset repository technology itself._
* _Communication Security_. Communication only flows through encrypted communication channels. _This is provided by the IT department of the dataset maintainer or by the dataset repository technology itself._
* _Data Integrity_. CogNet must protect data from unauthorized, uncontrolled, or accidental alteration during storage or transmission with the use of checksum values, hash functions and digital signatures.
_This is provided by the IT department of the dataset maintainer or by the dataset repository technology itself._

* _Availability_. Back-up mechanisms are a desirable property, mainly to avoid Denial of Service (DoS) attacks. _This is provided by the IT department of the dataset maintainer or by the dataset repository technology itself._

## Discoverable

The datasets can be discovered by means of the project portal. Moreover, the project publications and dissemination activities carried out in WP7 will bring visibility of the results and datasets coming from CogNet to the academic and research communities.

## Accessible

The datasets will be accessible through an FTP server and can be downloaded following the access mechanism described above.

## Assessable and intelligible

Thanks to the use of standards and properly documented metadata annotation, the datasets will be usable within a short time. The activities carried out in WP7 will target this aspect.

## Useable

CogNet aims at solutions for a higher and more intelligent level of automated monitoring and management of networks and applications, improving operational efficiencies and facilitating the requirements of 5G. In the same way that CogNet will employ the datasets to find underlying connections and to detect trends, patterns and symptoms towards a specific diagnostic, enabling automatic decision-making to deploy preventive or corrective policies, this data, which reflects human activities, could be used in other domains or sectors such as smart cities, traffic and road planning, or energy production.

## Interoperable

Because there is no standard for storing this kind of information, interoperability - allowing data exchange between researchers, institutions, organisations, countries, etc., and re-combination with datasets from different origins - is very difficult. It always requires human interpretation of the data structure to manually create a data map.
However, the utilization of standards for data capture and the documented annotation will ease data exchange. This aspect will be reviewed in WP7 (D731, D732).

# Data Protection Agency Notifications

According to applicable data protection law, it may be necessary to notify the data protection authorities in the jurisdictions where RTD activities will be carried out. At the initial stage of the CogNet project, the exact requirements and due diligence have been scoped and defined within the jurisdictions where the research will take place. The conclusions of the review of the applicable data protection law in the jurisdictions where RTD activities will be carried out in CogNet are that it will be necessary to notify the applicable Data Protection Agencies (DPA) of Ireland, Spain, and Italy, whereas in the case of Germany and Israel we understand that the law does not require these notifications. The notification procedures that have been followed are described in the Article 29 Working Party document "Vademecum on Notification Requirements" 12. The rest of this section presents the conclusions of the review of the applicable data protection law in the jurisdictions where RTD activities will be carried out in CogNet. As stated in Section 5 of the project proposal, the notification requirements vary from one country to another, and therefore no single timeline can be provided for completing all notification procedures. In addition to the initial notifications, before the testing phases of the CogNet project, the respective national DPAs of the participant countries where the field trials, demonstrations and feasibility experiments will take place will be notified of the project's testing phases and, more specifically, of the personal data of the voluntary participants that will be utilized.
## Spanish Data Protection Agency Notification: ES This section includes the formal acknowledgement of receipt of notification of the CogNet project in the jurisdiction of the DMP coordinator, i.e. the document received from the Spanish Data Protection Agency on behalf of the agency’s Director that confirms the registration of the CogNet file. The Spanish Data Protection Agency (AEPD in the Spanish acronym) is the public law authority overseeing compliance with the legal provisions on the protection of personal data. In order to guarantee effective compliance with the Organic Act on Data Protection (LOPD), which is the basis of the system to ensure the right to protect personal data, the adequate involvement of all agents is essential. The AEPD is of the understanding that its functions must always be conducted with the priority objective of guaranteeing the protection of individual rights. Accordingly, actions are taken specifically aimed at enhancing citizens’ capacity to effectively contribute to data protection. Some of the outstanding activities performed by the Spanish Data Protection Agency are the dissemination of activities and of the right to protection of personal data, direct assistance in response to citizens’ queries, and procedures to protect rights of individuals. One of the most important instruments for better protecting the rights of citizens is the registry of filing systems. The evolution of the data on the filing systems registered at the Data Protection General Registry (RGPD) is considered a significant reference regarding compliance with the Data Protection Laws. The AEPD encourages the adoption of rules that are meant to complete the legal framework for data protection, and it also contributes to ensuring that the right to data protection is treated correctly in the legal provisions adopted with purposes that have no specific relation to data protection. 
Many of the important topics affecting data protection are of an international scope, and certain concerns, such as security, reach well beyond national boundaries. Therefore, this important international dimension is present in all of the activities of the AEPD, which has been and continues to be involved in a number of international forums.

### Agency Notification: ES

The CogNet information has been registered as a Privately Owned File (Fichero de Titularidad Privada) by introducing the required information in the Spanish Data Protection General Registry. As specified in Article 14 of the Organic Act on Data Protection, the right to consult the Registry enables any person to become acquainted with the existence of treatments of personal data, their purpose and the identity of the data controller. Notifications to the AEPD can be made via the following _link_ .

**Figure 1 - Notification sent to the Spanish DPA**

The information contained in the CogNet file registered in the Spanish Data Protection Agency has also been included in Vicomtech-IK4’s Security Document in compliance with the Organic Act on Data Protection. The information notified to the Agency regarding personal data protection of the project is structured as explained below.

### Private Ownership File: Detailed Information: ES

**Figure 2 - Details of the notification sent to the Spanish DPA**

## Notification Processes: IE, IT, DE, ISR

Besides the aforementioned notification to the Spanish Data Protection Agency, the required certifications and approvals of the data protection agencies of the remaining participating states (i.e. Ireland, Italy, Germany and Israel) where research is carried out are presented in this section.

### Notification to the Data Protection Commissioner: IE

This section includes the formal notification regarding the CogNet project to the Irish Data Protection Authority, the office of the Data Protection Commissioner, which was established under the 1988 Data Protection Act.
The Data Protection Amendment Act, 2003, updated the legislation, implementing the provisions of EU Directive 95/46. The Acts set out the general principle that individuals should be in a position to control how data relating to them is used. Registration with the Data Protection Commissioner can be done via the following _link_ . The notification to the Data Protection Commissioner has not been completed at the time of writing this first version of this deliverable. The notification receipt will be included in the next update of this document.

### Notification to the Garante per la protezione dei dati personali: IT

This section includes the formal notification regarding the CogNet project to the Italian Data Protection Authority (Garante per la protezione dei dati personali), an independent authority established to protect the fundamental rights related to the processing of personal data and to ensure respect for individuals' dignity. Notifications to the Garante per la protezione dei dati personali can be done via the following _link_ . The notification to the Garante per la protezione dei dati personali has not been completed at the time of writing this first version of this deliverable. The notification receipt will be included in the next update of this document.

### Notification to the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit: DE

The main legal source of data protection in Germany is the Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG), which implements Directive 95/46/EC on data protection. Additionally, each German state has a data protection law of its own. There are also sectoral laws, including the Telecommunications Act (Telekommunikationsgesetz), which applies to providers of telecommunication services.
The German Act Against Unfair Competition (Gesetz gegen den unlauteren Wettbewerb) (the “UCA”) dated 3 July 2004 and the revised German Telecommunications Act (Telekommunikationsgesetz) (the “TA”) dated 22 June 2004 (with the TA being applicable only to telecommunications service providers in addition to the UCA) both implemented Article 13 of the Privacy and Electronic Communications Directive. Further, the German Telemedia Act (the “TMA”) may be amended to implement the Citizens’ Rights Directive as regards the storage of cookies. Note that German data protection law does not apply if a controller located in another country of the European Economic Area (EEA) collects, processes or uses personal data within Germany. The collection, processing and use of personal data is only admissible if expressly permitted by the BDSG or any other legal provision, or if the data subject has expressly consented in advance.

The Federal Republic of Germany is a federation of 16 states, which are not just provinces but states with their own original sovereign rights and legislative responsibilities. The supreme power of the State is divided between the federal and the state governments. The federal system of government also affects the supervision of data protection. In Germany there are 20 different federal and regional supervisory authorities responsible for monitoring the implementation of data protection. Data protection supervision in the private sector comes under the responsibility of the states. However, there is one exception: telecommunications and postal services companies. Those firms are monitored by the federal government, which has assigned that task to the Federal Data Protection Commissioner (Der Bundesbeauftragte für den Datenschutz und die Informationsfreiheit). According to Articles 4d and 4e of the FDPA, data controllers have to notify if they engage in data storing and processing for the purpose of transferring data for business reasons; all others are exempted.
According to the aforementioned Act (FDPA) all automated processing procedures have to be registered with the competent supervisory authority. However, this rule does not apply if the entity has appointed a data protection official (“privacy officer” under Directive 95/46/EC). Therefore, according to our interpretation of the German law 13 , CogNet does not need to be registered or notified, provided a Data Protection Official is appointed by the controller, which is the case, as Fraunhofer Gesellschaft has been appointed CogNet Data Protection Official in Germany. Accordingly, the Data Protection Official at Fraunhofer Gesellschaft has been duly informed by the controller and keeps the information as set out by the law. For the CogNet project, an information note has been submitted to the Data Protection Official at Fraunhofer Gesellschaft to comply with the registration requirements set out in article 4e of the German Federal Data Protection Act. As required, the Data Protection Official at Fraunhofer Gesellschaft was informed of the elements required under German legislation, including the name of the controller, the name of the persons responsible for the data processing, the persons authorised to access the data and the purposes. Should it be needed, a consent form will be prepared for volunteers, duly informing them of their rights and giving a clear overview of what data will be processed for what purposes during the field-trials, demonstrations, and feasibility experimentations of the project.

### Notification to the Israeli Law Information and Technology Agency (ILITA): ISR

The EU Commission published, on 31 January 2011, a decision on the adequacy of data protection law in Israel 14 . The Article 29 Working Party’s favourable opinion on the level of adequacy under Israeli law 15 also contributed to the adoption of the decision. Article 1 of the aforementioned Commission Decision says:

1. _For the purposes of Article 25(2) of Directive 95/46/EC, the State of Israel is considered as providing an adequate level of protection for personal data transferred from the European Union in relation to automated international transfers of personal data from the European Union or, where they are not automated, they are subject to further automated processing in the State of Israel._

2. _The competent supervisory authority of the State of Israel for the application of the legal data protection standards in the State of Israel is the ‘Israeli Law, Information and Technology Authority (ILITA)’, referred to in the Annex to this Decision._

The decision set out a variety of findings that served as grounds for declaring data protection in Israel to be in conformity with EU standards. The Commission favourably mentions the semi-constitutional status of the right to privacy under the Human Dignity and Liberty basic law; the similarity in standards between the EU Data Protection Directive and Israel’s Privacy Protection Act; the existence of data protection provisions in legislation related to the financial, health and public sectors; the availability of administrative and judicial remedies; and the independence of the country’s data protection authority – the Israeli Law Information and Technology Agency (ILITA) 16 . Privacy is a constitutional right under Article 7 of Basic Law: Human Dignity and Liberty. In addition, the Privacy Protection Act, 5741-1981 (“PPA”), contains specific privacy legislation. Chapter B of the PPA deals with data protection. The PPA entered into force in 1981.
The Privacy Law requires that the owner of a database register a database with ILITA if: * The database contains information on more than 10,000 people; * The database contains sensitive information on any number of people; * The database includes information on persons, and the information was not delivered to this database by them, on their behalf or with their consent; * The database belongs to a public body; or * The database is used for direct mail services. The registration system is based on registration of databases, as opposed to data controllers. Hence, if a data controller has several databases, such as human resources, customer data, and suppliers, it must register each database separately. In 2014, ILITA amended the database registration procedures, requiring the filing of a far more detailed application form than before, specifying, amongst other things, the methods and sources of data collection and the types of data in a database. The term “database” refers to a collection of data processed by computer but excludes information consisting solely of basic contact details if such details are not in themselves likely to infringe an individual’s privacy. The PPA restricts data transfers to third parties, including corporate affiliates, within or outside of Israel. An additional layer of regulation applies to international data transfers under the Privacy Protection Regulations (Transfer of Data to Databases Outside of Israel), 2001 (the “Transfer Regulations”). The Transfer Regulations apply to both inter- and intra-entity transfers of personal data outside of Israel. They permit transfers to: (i) EU Member States; (ii) other signatories of Council of Europe Convention 108; and (iii) a country “which receives data from Member States of the European Community, under the same terms of acceptance”. 
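The five registration triggers listed above amount to a simple disjunction. The sketch below is illustrative only — the function and its parameter names are assumptions, and the legal qualification of each flag (e.g. what counts as “sensitive information” under the PPA) still requires human review:

```python
# Illustrative sketch of the ILITA database-registration triggers listed above.
# Parameter names are assumptions; this does not replace legal review.

def ilita_registration_required(
    num_data_subjects: int,
    contains_sensitive_info: bool,
    collected_without_consent: bool,
    owned_by_public_body: bool,
    used_for_direct_mail: bool,
) -> bool:
    """Return True if any of the PPA registration triggers applies.

    Registration is per database, not per data controller, so this
    check must be evaluated for each database separately.
    """
    return (
        num_data_subjects > 10_000
        or contains_sensitive_info
        or collected_without_consent
        or owned_by_public_body
        or used_for_direct_mail
    )
```

Because the Privacy Law registers databases rather than data controllers, a controller with several databases (e.g. human resources, customer data, suppliers) would run such a check once per database.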
Therefore, according to our interpretation of the Israeli PPA, at this initial stage of the CogNet project the need to obtain authorisation from the Database Registrar is not foreseen.

# Conclusions

This document has provided guidelines that will deliver simple and practical help during the implementation and validation stages of the CogNet platform, support compliance with the underlying legal framework and promote the adoption of best data management practices. The DMP establishes a set of guidelines addressing each of the fundamental topics to be considered. These guidelines cover aspects such as applicable policies, roles, standards, infrastructures, sharing strategies, data processing, storage, retention and structure, legal compliance, compliance with market standards and best ethical and privacy practices, identification, accessibility, intelligibility, and legitimate use for other purposes. These guidelines will be adopted at the early stages of the project. At the time this first version of the DMP was written, the datasets to be utilized by the platform had not been fully defined, and therefore their description and the exact procedures to be adopted have not been discussed extensively. Moreover, to satisfy all the regulations, all applicable Data Protection Agencies (DPA) of the countries of the partners belonging to the CogNet project have been or are in the process of being notified (Ireland, Spain, and Italy). The DPA in Germany has not been notified since, in accordance with German law, a Data Protection Official (Fraunhofer Gesellschaft) has been appointed by the controller.
With regard to Israel, no database has been registered with the Data Protection Agency in Israel: at this stage of the CogNet project we do not foresee any dataset in Israel that contains information on more than 10,000 people, contains sensitive information on any number of people, includes information on persons that was not delivered by them, on their behalf or with their consent, belongs to a public body, or is used for direct mail services.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1040_VirtuWind_671648.md
# 1 ROLES AND RESPONSIBILITIES

Roles and responsibilities for maintaining and updating the Data Management Plan (DMP) are linked to roles within VirtuWind. In case new personnel are assigned to a relevant role, responsibilities with respect to the DMP are also taken over. For details on the management roles and structure of VirtuWind see Description of Action (DoA), Section 3. The Data Management Plan is maintained by the Project Coordination Committee (PCC). Reviews of the DMP are a regular agenda item of PCC meetings and conference calls, and work package (WP) results will be checked with respect to relevant information for the DMP. WP leads are responsible for ensuring that results of tasks within their work package are aligned with the definitions in the DMP. WP leads are also responsible for ensuring that the table in Section 3 of this DMP is updated as soon as data according to the definition in Section 2 is created within their WP (for details on the update procedure see Section 6.2). Updates of the tables in Section 3 of the DMP are communicated from the Technical Management Committee (TMC) to the PCC together with the minutes of the monthly TMC calls (see also the section on update procedures). In order to ensure that this DMP is implemented and followed, reviews (by PCC and/or TMC) of all kinds of project related documents (e.g., reports, deliverables, publications) will also include a check for used data and for proper documentation and use in line with this DMP. In case the contact person for data leaves the project, the affiliation of the original contact person will take over the responsibility and will assign a new contact person.

# 2 EXPECTED DATA

VirtuWind is a three year project and will produce a number of technical results relevant for SDN-based industrial networks. This includes data created in lab experiments and real world tests of industrial networks (specifically control networks of wind parks).
Some data created and used in VirtuWind is related to critical infrastructures and will not be publicly accessible. More details and reasoning will be given in the detailed description of the specific data sets in Section 3. Expected data will be created as:

* raw data sets of network status data from lab experiments and real world data
* data derived from raw data, including related data processing algorithms
* technical and scientific publications (processed data)
* reports and deliverables (processed data)

# 3 DATA FORMATS AND METADATA

## 3.1 Data Formats

Detailed descriptions of the expected information of each cell are given at the end of this section.

<table> <tr> <th> **Data set reference** </th> <th> **Data set name** </th> <th> **Data Set Description** </th> <th> **Standards and metadata** </th> <th> **Data sharing** </th> <th> **Archiving and preservation (including storage and backup)** </th> <th> **Contact Person/ source of data** </th> </tr> <tr> <td> VirtuWind01 </td> <td> Test Data </td> <td> Test Data </td> <td> NA </td> <td> confidential </td> <td> \- </td> <td> PI, [email protected] </td> </tr> <tr> <td> Test </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table>

**Table 1: Data Formats**

**Note: This table is linked to file “VirtuWind DMP.xlsx” - _https://overseer1.erlm.siemens.de/repository/Document/downloadWithName/VirtuWind%20DMP.xlsx?reqCode=downloadWithName&id=8033083_ . Insert text into the Excel file and update this linked table to make changes visible!**

The following table gives a detailed description of the fields used in the data formats table of Section 3.1.

<table> <tr> <th> Data set reference and name </th> <th> Identifier for the data set to be produced </th> </tr> <tr> <td> Data set description </td> <td> Origin (in case it is collected), scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse. </td> </tr> <tr> <td> Standards and metadata </td> <td> Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created. </td> </tr> <tr> <td> Data sharing </td> <td> Gives dissemination level, where data is available (link), access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). **In case the dataset cannot be shared, the reasons for this should be mentioned here** (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related). </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> <td> In general, the procedure described in Section 5 will be applied. This cell gives a data specific description of the procedures that will be put in place for long-term preservation of the data (if required).
Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered (if required). </td> </tr> </table>

## 3.2 Metadata

VirtuWind plans to create and share data in relation to project deliverables or publications. Deliverables and publications will give all relevant information, including the meaning of data sets, the methods of data acquisition/processing, as well as specific methods/algorithms for usage (if required). Thus, deliverables and publications can be considered the main pieces of metadata for all data sets created within the project.

# 4 DATA SHARING AND ACCESS

Data and metadata, as well as project related documents (release version of the document – not raw format) with dissemination level “public” will be accessible via the project website _http://www.virtuwind.eu_ . Registration (free of charge) is required to get access. The dissemination level is initially proposed by the corresponding author and will be approved by the PCC (for details see Section 6.2). As far as possible, depending on the publishers’ policy, pre-prints of the publications will be made available open access via the project website, as well as via OpenAIRE. In case embargo periods (e.g., given by publishers) have to be considered, open access will be given after the embargo period expires. VirtuWind also has a budget item allocated for open access, which will be used to provide open access for publications of high importance/relevance. The decision making procedure for such “selected” open access publications is given in the VirtuWind Dissemination Plan. Software which is required for using published data is made available via the project website _http://www.virtuwind.eu_ according to the corresponding license terms (e.g., open source licenses such as EPL, …). All modified GPL or similar copyleft-licensed code will be made publicly available on _www.virtuwind.eu_ at project completion.
# 5 DATA ARCHIVING AND PRESERVATION

All project related documents (raw formats), deliverables, reports, publications, data, and other artifacts will be stored in a repository accessible to all partners during the project duration. This repository is hosted (with backup) by the Coordinator and the link is/was distributed at the first consortium meeting. Access to the repository is given to registered persons from project partners only. The folder structure of the repository is managed by the Coordinator and changes of the structure need to be coordinated with the Coordinator. The corresponding partners will keep the above mentioned repositories operational during the project lifetime. After project closure, repositories will be maintained for at least one more year. After project closure the administrating partner can change access policies (e.g., restricted access / access on demand) in order to keep maintenance costs at a minimum.

# 6 ANNEX

## 6.1 Important Links

Current version of the Data Management Plan (pdf): _https://overseer1.erlm.siemens.de/repository/Document/downloadWithName/Deliverable%20D1.4.docx?reqCode=downloadWithName&id=8033085_

Excel file for data formats and metadata (to be updated by WP leads and TMC): _https://overseer1.erlm.siemens.de/repository/Document/downloadWithName/VirtuWind%20DMP.xlsx?reqCode=downloadWithName&id=8033083_

## 6.2 Update procedure for Table 1

Trigger: data (see definition in Section 2) is produced within a task.

1. The WP lead updates the table with the data format, giving all required details for the new data set, including a proposal for the dissemination level. In case of questions, TMC/TM can assist.
2. The WP lead informs the TMC of the update and the TMC will create a new sub-version of the DMP (with the updated table in Section 3).
3. The TMC will inform the PCC of the release of that new sub-version and PCC members can check the update of Table 1 (incl. approval of the dissemination level). In case the PCC has objections, the TM will be informed by the PCC.
The TM then helps to clarify the open issues between the PCC and the WP lead.

## 6.3 Update procedure of this deliverable (except Table 1)

Updates of this deliverable (not Table 1 – for updating Table 1 see the section above) can be triggered by the TMC and/or the PCC. All releases of this deliverable are subject to PCC approval and will follow the decision making process described in the Consortium Agreement.
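The row a WP lead proposes for Table 1 under the Section 6.2 procedure maps naturally onto a small record type. The sketch below is a hypothetical illustration (the `DataSetRecord` class, the set of allowed dissemination levels, and the completeness check are assumptions, not VirtuWind tooling):

```python
# Hypothetical sketch of a Table 1 row plus a minimal completeness check
# a WP lead could run before proposing the row. Column names follow
# Table 1; the allowed dissemination levels are an assumption.
from dataclasses import dataclass

ALLOWED_DISSEMINATION = {"public", "confidential", "restricted"}

@dataclass
class DataSetRecord:
    reference: str              # e.g. "VirtuWind01"
    name: str                   # e.g. "Test Data"
    description: str
    standards_and_metadata: str
    data_sharing: str           # dissemination level proposed by the WP lead
    archiving: str
    contact_person: str

    def validate(self) -> list:
        """Return a list of problems; an empty list means the row is complete."""
        problems = []
        if not self.reference:
            problems.append("data set reference is missing")
        if self.data_sharing not in ALLOWED_DISSEMINATION:
            problems.append(f"unknown dissemination level: {self.data_sharing!r}")
        if not self.contact_person:
            problems.append("contact person is missing")
        return problems
```

In this sketch the TMC could reject an update whose `validate()` result is non-empty before creating the new sub-version of the DMP.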
1041_CHARISMA_671704.md
# Executive Summary

This document outlines the management guidelines and Data Management Plan for CHARISMA, including a set of rules and guidelines for an effective execution of the project operations in the different work packages and tasks by the whole consortium. The document provides an overview of the management structure and the major management and coordination roles, focusing on the responsibilities of the different boards and managers. It also defines the communication channels among partners (such as email distribution lists, conference calls, collaboration through sharing content, and an action point and issue tracking tool) that will facilitate the execution of the project, as well as a set of basic guidelines for using them. The management guidelines provide information about general rules for planning and reporting, define the quality assurance roles and responsibilities, establish a deliverable acceptance and review process, and define the document management rules and tools. This document also includes the Data Management Plan (DMP), which deals with the project digital research data. The purpose of the DMP is to define a policy regarding the data management life cycle for all data that will be collected, processed and generated through the project execution. The DMP also specifies the data that will be made available to the public and the data that will be for internal use only. This document is intended to complement the information already included in the Description of Action (Annex 1 of the Grant Agreement) and the Consortium Agreement of CHARISMA. The first part of the deliverable (Sections 1 and 2) defines the roles and functions of the partners and the individuals involved in the management structure, providing a set of contact references that are necessary to carry out the work effectively and deliver quality results.
The second part (Section 3) defines the project Quality Plan, which is based on the implementation of best practices in a project of this nature. Quality management of the project results is implemented as a continuous improvement process highly related to the project's agile methodologies (SCRUM), adopted to manage the software development activities. The third part (Section 4) is devoted to the DMP, which outlines the method used for collecting, organizing, backing up and storing the data generated in the framework of CHARISMA.

# 1\. Management structure

For a successful execution of CHARISMA, it is key to ensure that the project accomplishes the stated objectives within the given deadlines and that the results can guarantee exploitation after the project. From both a technical and an administrative point of view, it is also very important that the timing and budget are successfully met, as stated in the DoA. Currently, the project consortium comprises 13 partners. An additional partner will be incorporated into the consortium during the first months of the project, as agreed with the European Commission. The Description of the Action (Part A) and the Consortium Agreement describe in detail the project management goals and the project management structure; their contents and obligations are summarized here.
**Figure 1-1: CHARISMA Project Management Structure**

The management structure shown in Figure 1 has been set by the consortium aiming to accomplish the following general objectives:

* Ensure that the project is conducted in accordance with EC rules;
* Reach the objectives of the project within the agreed budget and time scales;
* Coordinate the work and ensure effective communication between partners;
* Ensure the quality of the work performed as well as of the deliverables;
* Maximize the potential for exploiting results;
* Manage properly foreground and IPR matters;
* Ensure that decisions are taken on the basis of data and factual information;
* Solve any problem or conflicting situation;
* Set the quality policy, including quality objectives for the project; and
* Ensure that the appropriate infrastructure is set up to support the objectives above.

## 1.1. Governance structure

### 1.1.1. General Assembly

The General Assembly (GA) is the ultimate decision-making body of the consortium, meaning that the GA can make decisions on contractual matters, such as the budget, timeline, deliverables, PM shifts, and adding/deleting partners. As outlined in the Consortium Agreement, the GA consists of one representative from each partner and is chaired by the Project Coordinator (PC). Each partner has nominated a representative with budget responsibility, able to assume the role and to represent the contractor's interests.
In accordance with the Consortium Agreement, the GA will assume the following responsibilities:

* Provide overall direction and policy;
* Control the satisfactory execution of the project in terms of road-map and monitor corrective actions, including deliverables review;
* Budget follow-up and transfer, including budget re-allocation and allocation of the contingency funds;
* Contractual changes, including changes to the Consortium Agreement, selecting new contractors to enter the partnership, termination of the contract and action against defaulting partners;
* Approving all reports and deliverables required in the frame of the Grant Agreement;
* Arbitrating on deadlock situations occurring within the WPs.

The GA shall be convened at least once a year in an ordinary meeting, and at any time in an extraordinary meeting upon written request of the Technical Board or 1/3 of the members of the GA. Decisions will be taken by consensus whenever possible; only in case of conflict will decisions be taken by voting. In the latter case, the quorum has to be reached. In voting, each partner representative has one vote. Any decision may also be taken without a meeting in person if the coordinator circulates to all members of the Consortium Body a written document which is then agreed by the defined majority of all members of the Consortium Body (see Section 6.2.3. of the Consortium Agreement). Such a document shall include the deadline for responses. Any substantial changes agreed at this level would typically be reflected in amendments to the project work plan, contractual documents or updates of the Consortium Agreement as required. The GA meeting will normally take place as a face-to-face meeting but may also be held by teleconference or other telecommunication means. A minimum attendance of at least two-thirds of its members is required (quorum: present or represented). The GA members are shown in the table below.
<table> <tr> <th> **Participant name** </th> <th> **GA Member** </th> </tr> <tr> <td> **Fundacio Privada I2CAT, Internet i Innovació Digital a Catalunya** </td> <td> Eduard ESCALONA / Carles BOCK </td> </tr> <tr> <td> **Fraunhofer-Gesellschaft zur Foerderung der Angewandten Forschung E.V** </td> <td> Volker JUNGNICKEL </td> </tr> <tr> <td> **National Center for Scientific Research “Demokritos”** </td> <td> Tasos KOURTIS </td> </tr> <tr> <td> **APFutura Internacional Soluciones, SL** </td> <td> Oriol RIBA </td> </tr> <tr> <td> **Innoroute GMBH** </td> <td> Andreas FOGLAR </td> </tr> <tr> <td> **INCITES Consulting SARL** </td> <td> Theodoros ROKKAS </td> </tr> <tr> <td> **JCP-Consult SAS** </td> <td> Jean-Charles POINT </td> </tr> <tr> <td> **University of ESSEX** </td> <td> Mike PARKER </td> </tr> <tr> <td> **Cosmote Kinites Tilepikoinonies AE** </td> <td> Elina THEODOROPOULOU </td> </tr> <tr> <td> **Intracom S.A Telecom Solutions** </td> <td> Spiros SPIROU </td> </tr> <tr> <td> **Telekom Slovenije DD** </td> <td> Blaž PETERNEL </td> </tr> <tr> <td> **PT Inovaçao e Sitemas SA** </td> <td> Victor MARQUES </td> </tr> <tr> <td> **Ethernity Networks LTD** </td> <td> David LEVI </td> </tr> </table>

Table 1-1: CHARISMA General Assembly Members

### 1.1.2. Project Coordinator

<table> <tr> <th> Name </th> <th> Eduard Escalona </th> </tr> <tr> <td> Address </td> <td> Gran Capità 2-4 Edifici Nexus I </td> </tr> <tr> <td> Place </td> <td> Barcelona </td> </tr> </table>

The **Project Coordinator (PC)** is responsible for implementing the decisions taken by the Assembly and for tracking the overall project progress according to the work plan. The PC provides the Commission, as the official communication channel, with technical, managerial and financial information.
The coordinator is the authorized contact to negotiate on behalf of the consortium, meaning that the project administration is under his supervision; he responds to important changes during the lifetime of the project and coordinates the necessary adaptations to meet the conditions of the external environment. In this role he is assisted by the **Management Team (MT)** and the **Technical & Innovation Manager (TIM).** Dr. Eduard Escalona (i2CAT) is the Project Coordinator of CHARISMA. The Project Coordinator chairs the General Assembly. His major tasks are:

* Consortium Agreement coordination;
* Supervision of the distribution of EC’s payments to partners;
* Preparation, with the support of the MT, of the reports, cost statements and project documents required by the EC;
* Resolution of all conflicts among partners;
* Organization and coordination of the internal review of EC deliverables;
* Organization of EC review meetings;
* Supervision of IPR and knowledge management;
* Representation of the consortium in dissemination events.

The PC is helped by assistants responsible for the daily administrative supervision and financial reports, and by the Technical and Innovation Coordinator.

<table> <tr> <th> </th> <th> Post Code </th> <th> 08034 </th> <th> </th> </tr> <tr> <td> </td> <td> Country </td> <td> España </td> <td> </td> </tr> <tr> <td> </td> <td> Telephone </td> <td> +34 93 553 25 10 </td> <td> </td> </tr> <tr> <td> </td> <td> Fax </td> <td> +34 93 553 25 20 </td> <td> </td> </tr> <tr> <td> </td> <td> E-mail </td> <td> [email protected]_ </td> <td> </td> </tr> </table>

Table 1-2: Contact Details of the Project Coordinator

_1.1.2.1. Project Management Office_

A **Project Management Office (PMO)** comprising a team of employees familiar with administrative, legal, financial, communication and IPR issues will support the Project Coordinator in all the above responsibilities. The PMO gets advice from financial, legal and IPR specialists whenever required.
The PMO Team: * Assists the Project Coordinator in the project management tasks; * Manages the delivery and the submission of administrative and financial documents; * Is a permanent contact point for the coordinator and all the partners regarding their participation in CHARISMA, responding to any relevant request and maintaining a high level of communication within the consortium; * Is in constant communication with the Project Coordinator about the status of the project (new results, new risks, modifications, etc.), and in regular contact with the WP and Task leaders, in order to maintain a close association with the implementation team of the project. The PMO team is in charge of the day-to-day project management, providing support to the different project bodies. In accordance with the EC requirements, the PMO is responsible for the administrative actions, including periodic reports, Certificates on Financial Statements and communication about the submission of deliverables. Additionally, the PMO team prepares the required logistical, legal and administrative documents and supervises the overall running of the project. The PMO is responsible for ensuring that all administrative steps required for the effective progress of the project are completed on time. The PMO team will rely on the partners’ local administrative staff and is composed of i2CAT internal staff. <table> <tr> <th> Name </th> <th> Sandrine Schwartz </th> </tr> <tr> <td> Address </td> <td> Gran Capità 2-4 Edifici Nexus I </td> </tr> <tr> <td> Place </td> <td> Barcelona </td> </tr> <tr> <td> Post Code </td> <td> 08034 </td> </tr> <tr> <td> Country </td> <td> España </td> </tr> <tr> <td> Telephone </td> <td> +34 93 553 25 10 </td> </tr> <tr> <td> Fax </td> <td> +34 93 553 25 20 </td> </tr> <tr> <td> E-mail </td> <td> [email protected]_ </td> </tr> </table> Table 1-3: Contact Details of the PMO representative _1.1.2.2. 
Technical and Innovation Management (TIM) UEssex_ The responsibility of the **Technical and Innovation Manager (TIM)** is to maintain the technical focus of the project as a whole and to coordinate the innovation actions. More specifically, the TIM chairs the Technical Board (TB). Dr. Michael Parker from the University of Essex is the TIM of CHARISMA and is responsible for: * Coordinating the technical work; * Monitoring the alignment of the project work with the project technical objectives; * Coordinating the technical issues of EC reviews; * Resolving all technical conflicts among tasks; * Coordinating the dissemination of the technical information; * Coordinating communication and easing the flow of information among partners; * Coordinating all technical reports within the deadlines agreed upon with the EC; * Supporting the Project Coordinator in supervising the progress of the project; * Monitoring and coordinating the research and innovation actions throughout the project, in cooperation with the PC and the Technical Board. The TIM is also in regular contact with the WP and Task leaders, in order to maintain a close association with the implementation team of the project. With regard to innovation management, the TIM in CHARISMA will define a set of guidelines for WPLs, to be ratified by the TB, so that the project ensures proper sensitivity to changing Cloud industry standards and technologies. In this way the TIM, in cooperation with the TB and all project partners, will remain open to detecting new opportunities for innovative ideas in CHARISMA throughout all phases of the solution development. The TIM's main innovation-related tasks include: * Monitor all the exploitation-related activities in the project, according to the exploitation plan defined in WP5, placing emphasis on the overall exploitation impact of the project outcomes and their branding. 
* Provide support to the partners in order to define the access rights and usage of their background and project results. * Advise the project on the right policies for protecting partners'/project IPR. * Support the project coordinator by setting up an open innovation model that, in line with the Innovation goals of H2020, facilitates, from the coordination point of view, the generation of innovation throughout the project and strengthens the exploitation impact of the project results. * Provide advice about tools and mechanisms to improve innovation management. <table> <tr> <th> Name </th> <th> Michael Parker </th> </tr> <tr> <td> Address </td> <td> Wivenhoe Park </td> </tr> <tr> <td> Place </td> <td> University of Essex </td> </tr> <tr> <td> Post Code </td> <td> CO4 3SQ </td> </tr> <tr> <td> Country </td> <td> United Kingdom </td> </tr> <tr> <td> Telephone </td> <td> XXXXXXX </td> </tr> <tr> <td> Fax </td> <td> XXXXXXX </td> </tr> <tr> <td> E-mail </td> <td> [email protected]_ </td> </tr> </table> Table 1-4: Contact Details of the Technical and Innovation Management ### 1.1.3. Technical Board (TB) The **Technical Board** consists of the Coordinator and all the work package leaders. The **TB** shall be chaired by the TIM and is accountable to the GA. The TIM is responsible for calling the meeting, preparing the agenda and writing the minutes. The TB has overall responsibility for the success and smooth running of the technical management of the project, including assessment of progress reports, maintenance of work plans, resource re-allocation (if required) and first-level conflict resolution. External factors, such as unique opportunities for dissemination or novelties, should also be considered. As mentioned above, the TB is especially involved in resolving technical conflicts between partners. All the decisions within this group are consensus driven. 
However, resolutions taken in the framework of the TB are binding and can only be overturned by a specific alternative decision of the GA. The TB meets regularly, at least every 4 months, and at any time upon request of any member of the TB. The Board members are in continuous contact between meetings by e-mail and audio/video conferences. The TB should aim at achieving consensus among partners on important issues. _1.1.3.1. Work Package Leader (WPL)_ The **Work Package Leader (WPL)** is responsible for the technical management of the work package (WP). The WPL may be supported by a number of task leaders of the same WP that report to the WPL on a regular basis. The responsibility of each WPL is to ensure that the activities of the WP proceed according to the project work plan. The WPL is responsible for the production of the relevant deliverables and may delegate parts of this responsibility to other WP participants. The WPL reports to the PC, the TB and the TIM regarding all technical coordination and progress reporting (e.g. periodic reports or financial reporting). 
The responsibilities of a WP leader are: * Manage and follow up the progress and technical activity of the Work Package; * Follow up the timely achievement of milestones and production of deliverables; * Report on the activity to the Project Management Committee. <table> <tr> <th> **Role** </th> <th> **Name** </th> <th> **Email** </th> <th> **Affiliation** </th> </tr> <tr> <td> WP1 Leader </td> <td> M.Parker </td> <td> [email protected] </td> <td> UESSEX </td> </tr> <tr> <td> WP2 Leader </td> <td> V.Jungnickel </td> <td> [email protected] </td> <td> Fraunhofer </td> </tr> <tr> <td> WP3 Leader </td> <td> A.Legarrea </td> <td> [email protected] </td> <td> i2CAT </td> </tr> <tr> <td> WP4 Leader </td> <td> E.Trouva </td> <td> [email protected] </td> <td> National Center for Scientific Research “Demokritos” </td> </tr> <tr> <td> WP5 Leader </td> <td> T.Rokkas </td> <td> [email protected] </td> <td> INCITES Consulting </td> </tr> <tr> <td> WP6 Leader </td> <td> E.Escalona </td> <td> [email protected] </td> <td> i2CAT </td> </tr> </table> Table 1-5: CHARISMA Technical Board _1.1.3.2. Task Leader (TL)_ **The Task Leader (TL)** is a partner appointed as responsible for a particular task. The Task Leader reports to the WPL. All TLs follow the Description of Action and stick to the specific project development plans defined by the WPLs. ### 1.1.4. External Advisory Board (EAB) A panel of external experts will support the consortium throughout the project duration, advising on project strategy, complex technical decisions and long-term sustainability issues. This group of experts will form the **External Advisory Board (EAB)** of the CHARISMA project. The EAB will serve as an external adviser to the Project Coordinator and the members of the GA and TB. The EAB will meet separately and interact with the different managing bodies that are part of the management structure. 
In this way, regular communication will be established between the EAB and the different managing bodies (PC, GA and TB), in order to provide appropriate information and assist the managing bodies with independent strategic recommendations on the project objectives, actions and long-term developments. Such an instrument provides additional quality assurance and guidance for the project and its activities. Communication will take place by different means: meetings, conference calls, emails, etc. The **CHARISMA EAB** is comprised of specialists from domains relevant to CHARISMA and will be appointed during the first year of the project. The EAB's main tasks include: * overseeing the quality of project deliverables (internal evaluation in the form of peer reviews) when required; * orienting the project activities towards the expected results and objectives; * establishing the project strategy in accordance with the PC, GA and TB and adapting it during the project life, if necessary; * providing recommendations and contributing to the implementation of the project in order to maximize the impact of the project results; * advising and assisting on the dissemination, international discussion and promotion of project results. The members of the EAB will represent relevant international stakeholders and will be recognized experts from the academic and/or private sector. After a round of consultations, the GA will select a number of distinguished researchers and representatives of research institutions (or the private sector) to carry out these responsibilities. # 2\. Project Communication & Collaboration ## 2.1. Email Communication The email communication reflector of the project will be kindly hosted by the Coordinator, i2CAT, who has established and maintains the following email lists: **[email protected]_ ** The purpose of this email list is to exchange information between the CHARISMA administrative representatives and delegates. 
At least one administrative participant from each partner will be included in this email list. <table> <tr> <th> **Wp1** </th> <th> **[email protected]_ ** </th> </tr> <tr> <td> **Wp2** </td> <td> **[email protected]_ ** </td> </tr> <tr> <td> **Wp3** </td> <td> **[email protected]_ ** </td> </tr> <tr> <td> **Wp4** </td> <td> **[email protected]_ ** </td> </tr> <tr> <td> **Wp5** </td> <td> **[email protected]_ ** </td> </tr> </table> The purpose of this set of email lists is to exchange information between all technical CHARISMA participants. The email addresses of all technical personnel participating in the different WPs will be included in the corresponding WP email lists, in accordance with the project's WPs. All emails sent to the reflector will have the following subject format: [charisma-admin] Subject [charisma-wpx] Subject In case a new member joins the project, an email with the specific request should be sent to the i2CAT contacts for the project, reachable at the following email addresses: * Eduard Escalona (i2CAT), [email protected] * Amaia Legarrea (i2CAT), [email protected] ## 2.2. Conference-Call Collaboration CHARISMA will regularly use telephone conference calls to evaluate the progress of the project along the planned Sprints and define further actions. A dedicated conference bridge (updated every 6 months) will be provided by i2CAT. All details regarding access to the bridge will be provided by the i2CAT technical team and stored in the documents repository at the following page: http://confluence.i2cat.net/display/CHARISMA/CHARISMA+Home When video sharing is required, the consortium will use the WebEx video conference tool provided by Cisco and hosted by i2CAT. Additional ad-hoc conference calls should be regulated by the following general rules: * The partners will schedule the date of the conference, also using appropriate support tools (e.g., Doodle). 
* An invitation with the exact time, agenda and information on how to join the conference call will be circulated to the project email list at least one day before the conference call. * The i2CAT technical team will manage the conference platform and inform the consortium how to log in to the conference system. ## 2.3. Meetings Face-to-face meetings will be organized periodically throughout the project lifetime. Conference calls will also be used extensively for progress meetings. The foreseen frequency of meetings is: * Project Coordinator Body (PC, PMO, TIM): conference calls at least every three months and face-to-face meetings co-located with the General Assembly. * General Assembly: face-to-face meetings at least twice a year. Conference calls to be set up when needed. * Work packages: dedicated WP meetings will be organized by the WP leaders according to the emerging needs during the project lifetime. It is up to the WP leaders to try to arrange these meetings in co-location with General Assembly meetings. Extraordinary meetings may be organized upon written request of: * The TB for a GA meeting. * Any member of the TB for a TB meeting. Moreover, it is up to deliverable leaders to set up additional meetings or conference calls according to the specific activities in the work plan, if required. For each meeting the following documentation will be produced and uploaded to the respective folder in the document repository: * Meeting minutes from which the meeting objectives, agenda, location and required participants can be easily discerned. * Main discussion items, decisions and actions (with owners and deadlines assigned). * A list of participants for each day of the meeting, with signatures for each day. ### 2.3.1. Hosting a face-to-face meeting Throughout the project lifetime, members may be asked to host general project meetings. 
Tasks involved in hosting a meeting are divided into obligatory and recommended tasks: Obligatory tasks: * Provide meeting rooms with the audiovisual equipment necessary for the presentations. * Provide network connectivity. General practice (recommended although not obligatory): * Provide water, coffee breaks and lunch. * Organize one social event (e.g., invite partners for one evening meal). # 3\. Quality Assurance The Project Coordinator will maintain a computer-based top-level project management system, based on a Gantt chart work-schedule model. This will be updated by inputs from the various managers at WP and task level. The Work Package Leaders will be encouraged to use a formal project management system compatible with the one used by the Project Coordinator. Project meetings or teleconferences will take place as required by the work plan. ## 3.1. General Rules for Planning and Reporting The foreseen procedures and tools for the CHARISMA project are the following: * All formal meetings (GA, TB, etc.) will be notified at least three weeks in advance. The agenda, proposed resolutions, decisions and supporting documentation will be available to all attendees at least one week before the meeting. Issuing of all documents will be via the chairman, who is responsible for compiling all submissions from partners and will be appointed at the beginning of the meeting. * All meetings will be formally minuted and a draft version of the meeting minutes shall be sent to all the members within 10 calendar days after the completion of the meeting. * The minutes shall be considered accepted if, within 15 calendar days from sending, no consortium member has sent an objection in writing to the chairperson with respect to the accuracy of the draft of the minutes. * The accepted minutes will be accessible from the documents repository: http://confluence.i2cat.net/display/CHARISMA/Management ## 3.2. 
Quality Assurance Roles and Responsibilities All documents and deliverables concerned in the Quality Assurance process will be evaluated by a dedicated internal team, the **Quality Check Team (QCT)** , generally composed of at least two members of the CHARISMA consortium. The members of the QCT are selected on the basis of their expertise and experience on the subject treated in the deliverable under consideration. As a general rule the team should consist of: * The project Technical Coordinator for the overall quality management of the deliverable, having the overall responsibility for the qualitative integrity of the document; * The leader responsible for the work package or tasks related to the deliverable. The review process will be iterative until final acceptance by the QCT. This Quality Assurance procedure should be used by all partners involved in the document release process, more precisely: * All partners directly involved in deliverable and prototype production. * The QCT involved in the review of the documents produced. ## 3.3. Deliverable Acceptance Process A clear quality process will be applied for the final acceptance of a deliverable: * The team responsible for the deliverable is in charge of verifying the contents of the documents, ensuring that the information contained is relevant and of importance to the research work and activity carried out in a given project task. * The Quality Check Team will make a first internal revision of the deliverable and assess whether modifications or conflict resolution are needed. The Quality Check Team will assure the conformity of the document with the quality criteria and also act as interface to the TB, before the final submission of the work. The quality criteria against which every single deliverable will be evaluated include the following: 1. **Completeness** . The deliverable must address all aspects related to the purpose and scope for which the related research activity is carried out. 
Each deliverable leader is responsible for the content of the document and the QCT is responsible for supervising compliance. 2. **Depth** . Each deliverable should have a coherent depth of information with respect to the deliverable scope, purpose and type of research activity described. The lead partner has the role of verifying that the contents are detailed correctly. The Quality Check Team should evaluate whether such depth is adequate with respect to what was stated in the Description of Action. 3. **Accuracy** . The information provided in the deliverables should be supported by real and tangible motivations. All background information used in the documents should be linked to appropriate references. Moreover, foreground information and results should be clearly described and technically supported in order to avoid any misinterpretation or misunderstanding. 4. **Relevance** . Each deliverable should provide information according to the scope of the specific research work and should be focused on the key aspects of the deliverable scope. 5. **Adherence** . Each deliverable must be produced using a common project template to have a uniform appearance and structure, independently of the original authors of the document. The above quality criteria have to be considered by the authors when writing and drafting the deliverables, since they constitute the basic principles on which the Quality Check Team operates for the evaluation of the work. ## 3.4. Deliverable Review Procedure The deliverable will be distributed by the lead partner in its first full draft version at least **two weeks before its official deadline** . The Quality Check Team will perform the review and send their comments back to the deliverable lead partner. Moreover, the QCT is responsible for deciding upon any conflict in the review process. 
The final rating of the deliverable draft is agreed by the Quality Check Team and will be marked as: * Fully accepted, * Minor revisions required, * Major revisions required, * Rejected. The above procedure is intended to be iterative. After the first round of review from the Quality Check Team, the lead partner and the document authors will integrate the revisions into the deliverable, addressing the comments provided by the Quality Check Team. The final revision will be produced when the deliverable is marked as fully accepted by the Quality Check Team. The whole review procedure should take no more than an agreed deadline, established before the start of the deliverable production. In the unlikely case of a deliverable marked as rejected at the end of the review process, the Project Coordinator will apply the needed actions to overcome the situation. ## 3.5. Periodic Progress Reporting In order to reflect the status and the progress of the project, a number of management reports are produced periodically. <table> <tr> <th> **Report** </th> <th> **Content** </th> <th> **Responsible** </th> <th> **Distribution** </th> <th> **Periodicity** </th> </tr> <tr> <td> **Semester Progress Report** </td> <td> Project progress and technical activities carried out by each partner. Includes costs and efforts for each partner. </td> <td> All partners </td> <td> Internal/PO </td> <td> Every 6 months </td> </tr> <tr> <td> **Periodic Report** </td> <td> Overview of project progress in terms of achievements of each WP. Includes costs and efforts for each partner. </td> <td> All partners </td> <td> EC </td> <td> M18 – M30 </td> </tr> <tr> <td> **Financial Statement** </td> <td> Financial Statements + Audit Certificate if required </td> <td> All partners </td> <td> EC </td> <td> M18 – M30 </td> </tr> </table> Table 3-1: Planned CHARISMA reporting ### 3.5.1. 
Semester Progress Report The Semester Progress Report is made up of the following parts: * a textual part reporting on work progress by each WP towards the project objectives, * a list of activities executed by each partner in each WP, * effort/cost estimates and projections by each partner for each WP. The Project Coordinator will keep the EC Project Officer informed about changes, problems, and deviations from the work plan or the budget. The preparation of the Semester Progress Report will be started by the Project Management Office at the closure of the reporting period (semester) and should be completed within 30 days of the end of the reference period. All partners are invited to provide timely feedback and contributions for their parts. ### 3.5.2. Periodic Activity Report to the EC Based on the template produced by the EC, this official activity report is produced at M12 and at M30 by the Project Coordinator, the Technical Coordinator and all the WP leaders, based on the information contained in the progress activity reports. This report states explicitly the advances of the CHARISMA project against the objectives and planned activities and should detail: * activities carried out and obtained results, * dissemination and exploitation actions, * resource consumption. This report will be used by the European Commission to assess the progress of the project in the period of time that will normally coincide with technical review meetings. ### 3.5.3. Financial Statements and Financial Summary Reports At the end of M12 and M30, each partner will provide the Financial Statements to the coordinator, in which all the costs referring to the period will be declared in order to claim reimbursement from the European Commission. This financial statement will be complemented with an audit certificate (when required) that will ensure that the costs declared by each participant are correct and documented. 
**Special clause to the contract** : audit certificates are required when the accumulated funding surpasses 325.000 €. For example, if in Period 1 the claimed costs for reimbursement are 100.000 €, no audit certificate is required. If the requested funding for the project is under 325.000 €, no audit certificate is required. # 4\. Data Management Plan (DMP) This section of the Data Management Plan is a living document and will be subject to change, evolving to adapt to the needs and requirements that may arise during the project life cycle. It is important to set up a Data Management Plan to achieve specific dissemination objectives, to ease the access of the consortium and the general public to the data produced during the execution of CHARISMA, to increase the research impact of the project, and to save time by having a common repository that preserves the important data and maintains its integrity. Making the data available to the community of researchers interested in the area can have a positive impact on the discovery of new applications and the general relevance of the work. The plan for Data Management should be set up at the beginning of the project; this way, time and resources can be saved during the project lifetime. A repository facility will provide a common reference archive where the consortium will save all the produced documentation and instruments for the project execution, avoiding the usage of different and replicated systems. It will also allow the interested parties to consult and review these data in the future. The CHARISMA consortium is committed to supporting open access to all the data produced during the project lifetime to the greatest extent possible. ### 4.1.1. 
Data set reference and name The infrastructure chosen to hold the documentation produced by the project (interim reports, cost statements, working papers, and deliverables) will be based on the Confluence solution by Atlassian (http://confluence.i2cat.net/display/CHARISMA/CHARISMA+Home). The **Project Coordinator (PC)** and the **Technical and Innovation Manager (TIM)** will be ultimately responsible for maintaining this platform, ensuring coherent content organization and availability to the partners. All partners are encouraged to contribute documents and information that could be deemed useful to the project community. ### 4.1.2. Data set description _4.1.2.1. Documents Repository_ The repository of documents will be used to share and store all documentation relating to the execution of the project, both official documentation (to be sent to the European Commission) and documentation for internal use only: task and meeting reports, in-progress documentation, etc. The choice of the document repository has been made considering the following general requirements: * supports document sharing between different accounts, * allows the definition of an organized document structure, * permits the versioning of the documentation, * provides history tracking. Pages have been created for at least the following categories of documents: * **Dissemination Materials** : will contain papers, articles, newsletters, brochures resulting from the project tasks, training information, patents, etc. * **Documents** : will contain the repository of deliverables of the project available to all the partners. * **Meetings** : will contain documents presented or generated during plenary meetings and conference call minutes. * **Templates & logos ** : reference documents and guides to generate standard documents for the project have been uploaded to this category, as well as logos and visual material for media dissemination. 
* **Audits** : store material exchanged among partners for audit preparation. * **WP-x** : each WP will have its dedicated space where the documentation specific to that WP will be accessible to all the partners, including activity descriptions, APs and others. Note that additional categories may also be created, if deemed necessary. Document D5.1 includes the specifics of the Confluence document repository tool and explains how to use this repository and other dissemination tools. It was submitted and shared across the consortium in M2. Further information about the Confluence tool can be found in CHARISMA Deliverable D5.1 [1]. ### 4.1.3. Document Naming Convention _4.1.3.1. Official/Contractual Deliverables_ Official deliverables will be named using the following naming format **CHARISMA_DX.Y_Mmm_Vx.y.ext** where: **X:** is the WP number **Y:** is the deliverable number **Mmm:** is the project month in which the deliverable is finalized and sent to the PO **x:** is the version major number **y:** is the version minor number **ext:** is the extension (.docx, .pdf, .ppt, .xlsx, .exe, .zip) Note that the partner who has the responsibility for the document will have the authority to change the version number. _4.1.3.2. 
Internal/Public Documents_ Internal documents will have the following format: **CHARISMA _WPw_ACR_TTTd__ShortTitle_Vx.y-YYYYMMDD.ext** or the shorter version **CHARISMA _WPw_ACR_TTTd__Vx.y-YYYYMMDD.ext** where: **w:** is the WP number **ACR:** is the acronym of the partner that initiated and has the responsibility for the document **TTT:** is a two- or three-letter acronym from the following table <table> <tr> <th> QRR </th> <th> Quarterly Resource Report </th> </tr> <tr> <td> MAG </td> <td> Meeting Agenda </td> </tr> <tr> <td> MM </td> <td> Meeting Minutes </td> </tr> <tr> <td> MS </td> <td> Market Studies </td> </tr> <tr> <td> SW </td> <td> Software </td> </tr> <tr> <td> APL </td> <td> Action Points List </td> </tr> <tr> <td> TCM </td> <td> Teleconference Meeting Minutes </td> </tr> <tr> <td> TP </td> <td> Technical Presentation </td> </tr> <tr> <td> TPC </td> <td> Technical/Research Publication (Conference) </td> </tr> <tr> <td> TPJ </td> <td> Technical/Research Publication (Journal/Magazine) </td> </tr> <tr> <td> TR </td> <td> Technical Report </td> </tr> </table> **ShortTitle:** is an optional, explanatory short title of the document **d:** is the document number **x:** is the version major number **y:** is the version minor number **YYYY:** is the year **MM:** is the month **DD:** is the day **ext:** is the extension (.docx, .pdf, .ppt, .xlsx, .exe, .zip) The same procedure applied to the official/contractual documents will also be applied to the internal/public documents. ### 4.1.4. Reference documentation tools The following tools are suggested for document processing: * Document Processing: Microsoft Word 2010, * Spreadsheet Processing: Microsoft Excel 2010, * Presentations Processing: Microsoft PowerPoint 2010, * Compression Tool: 7-Zip, * Portable Document Format: Adobe Acrobat 8.0 or later. In case a partner aims to use a different software tool, they must ensure that the outcome is compatible with the above tools. 
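Naming conventions such as these can be checked automatically before upload. The following sketch (an illustrative example only, not an official project tool; the regex and function name are our own, and the two-digit month field is an assumption) parses a filename against the official deliverable pattern **CHARISMA_DX.Y_Mmm_Vx.y.ext**:

```python
import re

# Pattern for official deliverables: CHARISMA_DX.Y_Mmm_Vx.y.ext
# X = WP number, Y = deliverable number, Mmm = project month (assumed
# two digits, e.g. M02), x.y = major.minor version, ext = agreed extension.
DELIVERABLE_RE = re.compile(
    r"^CHARISMA_D(?P<wp>\d+)\.(?P<deliv>\d+)"
    r"_M(?P<month>\d{2})"
    r"_V(?P<major>\d+)\.(?P<minor>\d+)"
    r"\.(?P<ext>docx|pdf|ppt|xlsx|exe|zip)$"
)

def parse_deliverable_name(filename):
    """Return the naming-convention fields as a dict, or None if invalid."""
    m = DELIVERABLE_RE.match(filename)
    return m.groupdict() if m else None

# Example: deliverable D5.1, finalized in month 2, version 1.0
info = parse_deliverable_name("CHARISMA_D5.1_M02_V1.0.pdf")
```

A similar expression could be written for the internal/public document format above.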
### 4.1.5. Code Repository A git repository will be made available to the consortium for CHARISMA code storage. This will provide a common tool where all the code can be appropriately versioned, committed for testing, etc., in a secure manner. The tool is likely to be Stash (again by Atlassian); however, this will be decided during the project lifecycle, in consensus with all the partners involved in code development. ### 4.1.6. Data sharing In general, for CHARISMA data sharing purposes, all documentation, reports, articles and additional data will be shared with all CHARISMA partners through the Confluence tool. Access to this platform is secured and granted to personal accounts; access can be requested from the project coordinator (i2CAT) through the usual channels of communication (email, mailing lists, etc.). The public documentation will be made available through the public website ( _http://www.charisma5g.eu_ ). The final version of each document, in .pdf, will be uploaded to the web once it is finalised and approved by the consortium according to the criteria previously defined in this document. Only publicly available information will be included in the documentation, to protect the intellectual property rights of the information shared within the consortium. Due to the innovative nature of CHARISMA, it is expected that partners will generate Intellectual Property that has to be protected through patents, yet made available to other partners for their own work in the CHARISMA project and exploited outside of the project by appropriate licensing. CHARISMA's handling of IPR is completely in line with Annex III of the Model Contract and the “Guide to Intellectual Property Rules for H2020 projects”. The way that access rights and IPR are handled is summarized in the project’s Consortium Agreement. 
The intention of the consortium is that the research carried out as part of CHARISMA will be eligible for publication in technical journals, conferences and similar venues. ### 4.1.7. Archiving and preservation (including storage and backup) All information shared through the Confluence tool will be backed up regularly as a measure against accidental or intentional data loss, should a disruption occur in the system hosting the data management tool. Ideally, the following criteria will be applied: * Make several copies of the data (e.g. original + external/local + external/remote). * Copies should be geographically distributed (local vs. remote). * Cloud storage is the preferred option for backing up the data. * It is recommended to execute complete backups of all the data on a monthly basis. Data archives should be encrypted to preserve the privacy of the data. The data produced by CHARISMA will ideally be stored uncompressed, but if storage space becomes a limitation and file volumes become too large to handle, data will be compressed. It is also recommended to carry out periodic tests that retrieve data files, to make sure they remain available.
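The recommended periodic retrieval tests can be automated. The sketch below is illustrative only (not a mandated CHARISMA tool): it records SHA-256 checksums when a backup is made and re-verifies them during a later test, flagging missing or corrupted files.

```python
import hashlib
from pathlib import Path

def snapshot_checksums(root):
    """Map each file under `root` (relative path) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*")) if p.is_file()
    }

def verify_backup(root, recorded):
    """Return the files that are missing or whose content has changed."""
    current = snapshot_checksums(root)
    return [name for name, digest in recorded.items()
            if current.get(name) != digest]
```

Run `snapshot_checksums` when the backup is created, store the resulting mapping alongside the archive, and call `verify_backup` during each monthly test; an empty result means every recorded file is still retrievable and intact.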
# Introduction Full parallax imaging research requires databases for testing and comparison. The research conducted in the project will encompass various capture types (multiview, plenoptic), result images (light fields for different displays) and compression schemes. For generating ground truth data and for interactive light field visualization, the project will also utilize datasets for real-time rendering. A multiview image set is a set of images captured with multiple virtual or real cameras at the same time. A plenoptic image is an image captured with a plenoptic camera. A light field image is an image captured from a scene with the light rays described as a vector field. A practical example of this is a light field directly describing the outgoing light rays of an autostereoscopic 3D display. All of these images can be compressed into video streams. For multiview images this can be a single stream or per-camera streams; for plenoptic camera images it is a single stream; for light field images it can be a single stream or per-optical-module streams. For real-time rendering of light fields it is also necessary to work with test 3D scenes imported from different 3D file formats. These usually contain geometry in the form of meshes or boundary-representation surfaces, and materials in various formats. They can also link to various types of texture files that describe surface properties, and can contain scene-graph information and animation. For realistic environment lighting it is also necessary to use environment map textures. With HDR imaging, the changes of lighting during the day and camera exposure settings can also be simulated in real time. The document will cover what kinds of datasets are available for comparison and testing. It will also introduce new datasets that will likely be created (captured or rendered) during the project. For each dataset it will describe data types, relevant standards and tools to interpret them.
Plans for storage and archiving procedures of the datasets will also be laid out. The datasets will also be of use to future researchers as a comparison to existing methods for capture and compression and for (rendered) image quality. # Datasets captured or created during the project ## Captured plenoptic content _Data set reference and name_ ETN-FPI-MIUN-PLENO ### Data set description Plenoptic capture is currently in the planning phase. The document will be updated once more details become available on the capture setup and the captured data set. Contact person: Mårten Sjöström e-mail: [email protected] ## Captured multiview content dataset from Bayer camera array _Data set reference and name_ ETN-FPI-FHG-BAYER ### Data set description The dataset contains video and image files captured with the following setup: The Basler array consists of 9 cameras in a 3x3 configuration. The cameras are synchronized via a trigger input and have global shutter sensors. The cameras output 1 Gigabit Ethernet data, which is captured using a PC equipped with a 10 Gigabit Ethernet adapter along with a 10 Gigabit Ethernet switch. _**Figure 1: Basler camera array. (a) Schematic drawing. The distance between the optical axes of horizontal and vertical neighbors is 65 mm, i.e. the inter-axial distance is 65 mm. (b) Picture of the camera array. (c) Schematic drawing of the Basler camera array inside the rigging box. 
(d) Perspective picture of the camera array.** _ ### Standards and metadata Captured video streams are available as: * Individual rectified video streams from 9 cameras * Framerate: 30 FPS * Resolution: 1920x1080 pixels * Stream of PNG images * 8-16 bits per color channel Captured images are available as: * Individual rectified images from 9 cameras * Resolution: 1920x1080 pixels * PNG images * 8-16 bits per color channel Metadata: * Camera calibration data as text file ### Data sharing Several datasets will be captured for the project, especially during WS2. These will be shared among project partners via a local FTP server. Generally, download links will be available for a week and will be renewed on request. When and how datasets will be published to the general public is to be decided at a later date. Contact person: Frederik Zilly e-mail: [email protected] ### Archiving and preservation Sequences will be stored on Fraunhofer IIS's internal FTP server. Local backup policies will be applied. All data will be mirrored. ## Captured and compressed multiview content dataset from GoPro camera array _Data set reference and name_ ETN-FPI-FHG-GOPRO ### Data set description The dataset contains video files captured with the following setup: The GoPro camera array consists of 16 cameras in a 2x8 configuration. The cameras are synchronized via special backpacks and have rolling shutter sensors. Each camera captures the videos individually on local storage (e.g. SD cards). The read-out is performed offline. _**Figure 2: GoPro camera array. (a) Schematic drawing of the cameras' centers. The horizontal translation is equidistant, i.e. the distance of two horizontal neighbors is 60 mm in the drawing. The distance between vertical neighbors is 70 mm. (b) Picture of the 2x8 camera array. (c) Perspective drawing of the camera array. 
(d) Rear view of the cameras showing the red backpacks connected with gray cables to ensure synchronization.** _ ### Standards and metadata Captured video is available as: * Individual rectified video streams from 16 cameras * Framerate: 30 FPS * Resolution: 1920x1080 pixels * Stream of lossless PNG images converted from x264 video streams * 8 bits per color channel Metadata: * Camera calibration data as text file Relevant standards: * PNG: ISO/IEC 15948:2003 * ISO/IEC 14496-10 – MPEG-4 Part 10, Advanced Video Coding ### Data sharing Several datasets will be captured for the project, especially during WS2. These will be shared among project partners via a local FTP server. Generally, download links will be available for a week and will be renewed on request. When and how datasets will be published to the general public is to be decided at a later date. Contact person: Frederik Zilly e-mail: [email protected] ### Archiving and preservation Sequences will be stored on Fraunhofer IIS's internal FTP server. Local backup policies will be applied. All data are mirrored. ## Compressed multiview dataset from robot gantry _Data set reference and name_ ETN-FPI-FHG-ROBOTGANTRY ### Data set description The dataset contains image files captured with the following setup: The robot gantry is suitable for capturing static scenes using a translatable DSLR camera. The DSLR camera can be translated by 4 meters in the horizontal and 50 cm in the vertical direction. The images are recorded onto the camera data storage as RAW or JPG files. _**Figure 3: Robot gantry. (a) Picture of the full robot gantry frame. The camera can be translated by 4 meters in the horizontal and 50 cm in the vertical direction. (b) Near view of a mountable DSLR camera.** _ ### Standards and metadata Captured images are available as: * Individual rectified images from ~20x20 camera positions * Framerate: N/A * Resolution: 24 MegaPixels * Stream of lossless PNG images 
* 8-16 bits per color channel Metadata: * Camera calibration data as text file Relevant standards: * PNG: ISO/IEC 15948:2003 ### Data sharing Several datasets will be captured for the project, especially during WS2. These will be shared among project partners via a local FTP server. Generally, download links will be available for a week and will be renewed on request. When and how datasets will be published to the general public is to be decided at a later date. Contact person: Frederik Zilly e-mail: [email protected] ### Archiving and preservation Sequences will be stored on Fraunhofer IIS's internal FTP server. Local backup policies will be applied. All data are mirrored. ## Compressed light field dataset ### Data set reference and name ETN-FPI-MIUN-PLENO ### Data set description Light field compression is currently in the planning phase. The document will be updated once more details become available on light field compression. Contact person: Mårten Sjöström e-mail: [email protected] ## Light field datasets rendered for Holografika displays _Data set reference and name_ ETN-FPI-HOLO _Data set description_ The dataset contains video and image sequences rendered for one or more Holovizio light field displays. ### Standards and metadata Rendered video is available as: * Per optical module x264-compressed light field videos Rendered images are available as: * Per optical module PNG light field images Metadata: * Documentation for the display's light field * Holografika light field description file * Holografika SDKs Relevant standards: * PNG: ISO/IEC 15948:2003 * ISO/IEC 14496-10 – MPEG-4 Part 10, Advanced Video Coding ### Data sharing Image and video sequences can be requested from Holografika for testing and evaluation. Inclusion in published scientific work requires the explicit permission of Holografika. Holografika light field description files and Holografika SDKs are available only to partners signing an NDA. 
Holografika SDKs are currently only available with commercial licensing. Contact person: Attila Barsi e-mail: [email protected] ### Archiving and preservation Sequences are mirrored on Holografika's internal file server. A backup server with mirrored data is available. An offline backup of the data is available on HDD. # Datasets used for rendering, testing, compression and comparison ## Complex 3D scene for light field rendering _Data set reference and name_ ETN-FPI-COMPLEX3D ### Data set description Big Buck Bunny is a Blender community open movie project. The dataset contains the original Blender project and various 2D, stereoscopic and multiview videos rendered from the same dataset. ### Standards and metadata The project files are available in Blender's internal .blend file format. Licensed under Creative Commons 3.0. Rendered video is available as: * x264-compressed 4K stereo video * x264-compressed Full HD stereo video * x264-compressed 4K 2D video * x264-compressed Full HD 2D video * Uncompressed YUV 1280x768 multiview video Rendered image files are available as: * .exr 32-bit floating point 1280x768 multiview images * .png 1280x768 multiview images Metadata: * Documentation for Holografika's proprietary camera .xml format * Camera placement in Holografika's proprietary camera .xml format * Tone mapping settings for HDR to LDR conversion Relevant standards: * PNG: ISO/IEC 15948:2003 * ISO/IEC 14496-10 – MPEG-4 Part 10, Advanced Video Coding ### Data sharing 2D, stereo and multiview content are available from the following HTTP file servers: * Original scene, 2D and stereo videos _http://bbb3d.renderfarming.net/explore.html_ * Scenes set up for multiview rendering, multiview video and images _http://mpegftv:[email protected]/_ ### Archiving and preservation Scenes set up for multiview rendering, multiview video and images are backed up to a backup server and an offline backup HDD. 
Contact person: Attila Barsi e-mail: [email protected] ## Simple 3D scene for light field rendering _Data set reference and name_ ETN-FPI-SIMPLE3D _Data set description_ These are 3D datasets of sci-fi vehicles from the Star Wars movies. Licensed under Creative Commons 3.0. _Standards and metadata_ The scene files are available in various 3D formats, including MAX4, 3DS and Lightwave .lwo formats. ### Data sharing 3D scene definition files are available at the following HTTP file server:  _http://www.scifi3d.com/list.asp?intCatID=8&intGenreID=10_ ### Archiving and preservation Models are mirrored on Holografika's internal file server. A backup server with mirrored data is available. An offline backup of the data is available on HDD. Contact person: Attila Barsi e-mail: [email protected] ## Environment maps for 3D rendering _Data set reference and name_ ETN-FPI-ENVMAPS ### Data set description Environment maps for 3D scene rendering, captured with ball mirrors. The primary purpose of these maps is to enable environment lighting. ### Standards and metadata Images are available as LDR .jpg and HDR .hdr images. Licensing allows the free samples to be used for commercial work. _Data sharing_ _https://www.hdri-hub.com/hdrishop/freesamples/freehdri_ ### Archiving and preservation Maps are mirrored on Holografika's internal file server. A backup server with mirrored data is available. An offline backup of the data is available on HDD. Contact person: Attila Barsi e-mail: [email protected]
# Executive Summary The MaX Centre of Excellence aims to support the needs of all stakeholders involved in the field of materials modeling, simulation and design by providing new instruments and services in the form of data, codes, expertise and turnkey solutions, to efficiently address the crucial challenges of novel materials development in the exascale computing era. This document describes the strategies and solutions adopted within the MaX CoE to establish a high-level materials informatics framework to curate, preserve and share all the data produced by the flagship codes. The core technology behind this objective is the AiiDA code, a Python infrastructure designed to support different codes through a plugin interface, to allow for the automated design and implementation of complex workflows and for task tracking, and to store the full provenance of each object in a tailored database. AiiDA parses the input and output files, runs the calculations on high-performance computing platforms, stores the data using uniform formats based on Python dictionaries and preserves the full provenance in the form of a Directed Acyclic Graph (DAG). AiiDA also enables a social ecosystem where simulation workflows and results can be openly shared. Both these aspects have been strengthened in the latest version of the code: on one hand with the update of the AiiDA plugin and workflow systems, on the other with the development of the AiiDA REST (Representational State Transfer) API, which also constitutes the backbone of the materialscloud.org portal, and finally with the implementation of various exporters and converters to the most commonly used data formats and ontologies. # Introduction 2.1 About this document This document is deliverable D3.4 of the MaX project; it is the updated version of the data management plan already designed and delivered as D3.1[1] at month 6. 
The document describes the types of data produced in the project, the standards used for the data, and how the data is being curated, preserved and shared. The basic types of data and the way they are organized by AiiDA remain essentially unchanged with respect to D3.1[1] and are hereafter reported in a similar way. Important developments have been carried out to ease the sharing of data and of the computational protocols used to generate them. Such developments include a critical revision of the AiiDA plugin system and the implementation of a REST API compliant with the OPTIMaDe protocol. # Description of the data Within this project, various open-source first-principles simulation codes such as Fleur, SIESTA, Quantum ESPRESSO and YAMBO are being developed (see table 1). The materials informatics framework AiiDA has been designed to allow the support of many different codes through a plugin interface. Plugins for the main codes within this Centre of Excellence (CoE) are already available, as described in deliverable D3.2[2] of this project. Table 1: The primary codes that will be used as part of this project.
<table>
<tr>
<th> Name </th>
<th> License </th>
<th> Main Developers </th>
</tr>
<tr>
<td> Fleur [3, 4] </td>
<td> GNU-GPL </td>
<td> Jülich </td>
</tr>
<tr>
<td> Quantum ESPRESSO [5, 6] </td>
<td> GNU-GPL </td>
<td> SISSA, EPFL, CIN </td>
</tr>
<tr>
<td> SIESTA [7, 8] </td>
<td> SIESTA License </td>
<td> ICN2, BSC </td>
</tr>
<tr>
<td> YAMBO [9, 10] </td>
<td> GNU-GPL </td>
<td> CNR, CIN </td>
</tr>
<tr>
<td> AiiDA [11, 12] </td>
<td> MIT-BSD, EPFL License </td>
<td> EPFL, Robert Bosch </td>
</tr>
<tr>
<td> i-PI [13, 14] </td>
<td> GNU-GPL/MIT-BSD </td>
<td> EPFL </td>
</tr>
</table>
A general overview of the present status of the plugins can be found at http://www.aiida.net/plugins/, together with the contact details of the reference person for each plugin and a link to the main repository. 
The page also collects numerous other plugins, for codes outside this CoE, which are currently being developed as part of other collaborations or by individual contributors. AiiDA promotes advanced programming models, leveraging Python abstraction layers to disseminate advanced functionalities to arbitrary quantum engines (i.e. simulation codes). It provides a model of automatic data generation and storage to guarantee provenance, preservation, reproducibility and reuse. The platform is used to organize and coordinate thousands of simulations; it makes it possible to acquire and store a variety of heterogeneous microscopic data from the calculations, which can subsequently be queried for the desired material properties. Details of the calculation execution, such as the parallelization scheme and execution time, are also retained to support performance optimization. Furthermore, AiiDA allows for the automated design and implementation of complex workflows and task tracking, based on a scripting interface for job creation and submission. The inputs, results and computational procedures at each step of the workflows are collected and stored in a database. All plugins, workflows and the data produced through them are designed to be openly shared, as discussed below. 3.1 Types of data AiiDA provides automated solutions and various plugins for computer codes without the need to tune code-specific parameters. It stores the calculations, their inputs and their results (parsed either from Extensible Markup Language (XML) outputs or from text files, using the appropriate dictionaries) in a database and its associated file repository. 
This data is generated from open-source electronic-structure materials simulation codes that encompass key technologies such as all-electron, pseudopotential and localised-basis-set approaches, density-functional theory, time-dependent density-functional theory and many-body perturbation theory, multiscale/multiphysics modelling with a focus on quantum mechanics/molecular mechanics, solvation and electrochemistry, thermal/electrical transport, and complex magnetic properties. The specific data to be stored in the database within the input and output nodes of a calculation, and the files to be retrieved and stored in a local repository, are determined by each code plugin. This follows the specific characteristics of each simulation code and the physics of the problem under study. The design of each calculation node is documented by the plugin developers; see for example http://aiida-core.readthedocs.io/en/stable/plugins/quantumespresso/pw.html. AiiDA also allows the use of data from external open-access databases of crystal structures for organic and inorganic compounds, such as the Crystallography Open Database (COD) [15], the Theoretical Crystallography Open Database (TCOD) [16] and the Inorganic Crystal Structure Database (ICSD) [17], to obtain the input atomic coordinates of crystalline materials. The platform also offers the possibility, at the workflow level, to copy large files (for example charge densities) to a data storage facility for later reuse and to save in the database a symbolic link to such a remote folder. 3.2 Format and scale of the data AiiDA parses the input and output files, mostly stored as text or XML, and runs the calculations/codes on high-performance computing platforms. The full provenance of each data object (inputs, outputs, calculations) is automatically stored in the database in a format that enables the simulation results to be fully reproduced. The database has an associated repository with text and binary (machine-independent) files. 
We have developed uniform formats to define the most common raw and analyzed data irrespective of the different plugins. These standards contain data in dictionary format, exportable for instance to plain JavaScript Object Notation (JSON) (for example, the StructureData and ParameterData data types in AiiDA store metadata in Python dictionaries within a database). AiiDA nevertheless allows the flexibility to define new data structures and formats, which might be strongly code-dependent. Currently we are using the PostgreSQL [18] open-source relational database to store our data. Overall, the database presently contains around 15M+ records, and over the course of the next year it is expected to grow to 100M+ records (with 10 TB+ of occupied disk space). Currently we use applications like Jmol, Visual Molecular Dynamics (VMD), PyMOL, VESTA, XCrySDen and Blender to visualise 2D and 3D structures, and Matplotlib, Gnuplot and Mathematica for plotting the data. # Data collection/generation 4.1 Methodologies for data collection/generation Data for the project are created and collected by using the AiiDA framework for the management of the simulations. AiiDA plugins and workflows have been written for different simulation codes in order to support at least the codes used within this Centre of Excellence, but also other codes available in the community (see http://www.aiida.net/plugins/). By using AiiDA, the full provenance of all calculations is preserved from initial inputs to final outputs, as well as all steps along the way, in the form of a Directed Acyclic Graph (DAG). This allows any output data to be retrospectively checked for quality if there are questions about how it was generated. Workflows, on the other hand, provide a means of proactive quality assurance whereby a series of steps is designed and implemented by a domain expert and packaged as a workflow. 
A workflow can then be executed by experts and non-experts alike, with internal checks and heuristics that attempt to ensure the quality of data with respect to convergence and other relevant simulation parameters. Furthermore, by having a standard way of running particular calculations, it becomes much easier to compare and validate results. Raw inputs and outputs from computer simulation codes are stored directly so that they may be re-parsed or manually inspected if necessary. Otherwise, data are stored as standard, code-independent objects in the AiiDA framework (e.g. crystal structures, band structures, pseudopotentials, _k_ -point paths, etc.), allowing easy querying and manipulation of results from a variety of simulation software packages. All naming of these input and output files is handled internally by AiiDA, and files can be retrieved for particular calculations either by issuing a query to match specific search criteria or directly by using the universally unique identifier (UUID) of a known simulation. 4.2 Data quality and standards As mentioned previously, persistent provenance and workflows are used in combination to maintain consistency and quality. Our provenance model also acts as a form of documentation, storing all the steps that lead to any result in the database. One aspect of the project involves the dissemination of a library of so-called pseudopotentials, which contain information about the quantum mechanical properties of the outermost electrons (those relevant in chemical bonds) for the elements of the periodic table. Currently there may be many different pseudopotentials for each element; this makes it difficult to compare calculations and, worse, some pseudopotentials are not accurate enough to give reliable results for some of the calculations in which they are used. By providing a standard set of pseudopotentials that have been thoroughly tested, we alleviate both these problems. 
We are also involved in working on ontologies in collaboration with the TCOD team. During this collaboration, we have implemented exporters for calculations managed using AiiDA to the domain-specific ontology that is being built within the TCOD project, so that calculation results (such as crystal energies or atomic forces) can also be stored in a code-independent format. # Data management, documentation and curation 5.1 Managing, storing and curating data In AiiDA, all data (calculations, their inputs and their outputs) generated by running high-throughput simulations on local or remote servers is naturally stored on those computers. Moreover, relevant inputs and outputs are persisted in the AiiDA repository, composed both of a folder-like structure and of a database. For the latter, we use PostgreSQL, a powerful open-source object-relational database. The format for storing data (depending on the specific type of data) is defined by the specific AiiDA data plugins, described in detail in the code documentation. The data format of common objects (crystal structures, band structures, force constants, etc.) is the same for all objects of the same type, even if generated by different computer codes, to facilitate data exchange, queries, and the bridging of different simulation tools. Moreover, each data format is accompanied by data import and export functions from/to standard formats (for example, Crystallographic Information File (CIF) files for crystal structures). Importers and exporters for commonly used formats, such as the ASE (Atomic Simulation Environment) and pymatgen formats, have been developed. Other import/export capabilities can be transparently added when necessary. Every data object is a node in the DAG, where links between nodes keep track of the data provenance (who generated the data, with which parameters, etc.), allowing for easy regeneration of the same data with the same inputs. 
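The provenance DAG described above can be illustrated with a minimal sketch. The plain dictionaries and invented node names below stand in for AiiDA's actual node and link objects; the point is that walking the links backwards from any result recovers everything that contributed to it.

```python
# Illustrative provenance graph (not the real AiiDA object model):
# each node maps to the nodes it was generated from.
inputs = {
    "energy":     ["pw_calc"],                        # result <- calculation
    "pw_calc":    ["structure", "pseudo", "params"],  # calculation <- its inputs
    "structure":  ["relax_calc"],
    "relax_calc": ["raw_structure", "params"],
}

def provenance(node):
    """Return every ancestor node that contributed to `node`."""
    seen = set()
    stack = [node]
    while stack:
        for parent in inputs.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen
```

Here `provenance("energy")` traverses back through the calculation nodes to the original raw structure and parameters, which is exactly what makes a result regenerable from the stored graph.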
Moreover, besides common metadata (user/owner, creation and last modification date, etc.), any further metadata can be attached to any node of the database (data and calculations). Also, AiiDA provides data sharing capabilities, both to share portions of the calculations database with selected groups of users and collaborators, and to export the data to public repositories, as described in more detail in D3.3[19]. Currently, the full database and part of the file repository (small files) are stored on a server at École Polytechnique Fédérale de Lausanne, Switzerland (EPFL), while the remaining part of the file repository is stored on a Centro Svizzero di Calcolo Scientifico - Swiss National Supercomputing Centre, Switzerland (CSCS) server. The policy defining what a large file is depends on the application and is defined within the AiiDA workflows used to generate the data. On the EPFL server, backup scripts run every day, performing a full backup of the database and an incremental backup of the file repository. The data stored at the CSCS server is also backed up daily. 5.2 Metadata standards and data documentation There are several key pieces of simulation software that will be used for this project, as described in table 1. Typically, a simulation is run by supplying one or more input files; along with the primary data of interest (be it a configuration of atoms in space, the electronic structure, a material property, etc.), it will typically produce auxiliary data (metadata), which can vary greatly from software to software. To interface software with AiiDA, a plugin is written that converts AiiDA nodes (used as input) to the actual input files required by the code, and parses outputs, allowing these to be stored in the database in a standard way such that they can later be queried using the AiiDA Application Programming Interface (API). The AiiDA code itself can be considered to be _data_ in this context. 
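The dictionary-based storage of node attributes and metadata can be sketched as follows. The field names and values below are illustrative, not AiiDA's exact schema; the point is that because node attributes are plain key/value data, they export losslessly to JSON for exchange with other tools.

```python
import json

# Hypothetical node record: common metadata plus attached attributes,
# in the spirit of AiiDA's ParameterData (plain Python dictionaries).
node_attributes = {
    "uuid": "0f4bf9f4-2f44-4b43-9a4e-000000000000",   # illustrative UUID
    "ctime": "2016-09-01T12:00:00",                   # creation date
    "attributes": {
        "energy": -153.2,
        "energy_units": "eV",
        "wall_time_seconds": 4210,                    # execution detail
    },
}

# Export to JSON and read it back: nothing is lost in the round trip.
exported = json.dumps(node_attributes, sort_keys=True)
restored = json.loads(exported)
```

Any further metadata is attached by simply adding keys to such a dictionary, which is what makes the format code-independent and easy to query.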
AiiDA is fully documented, both in the form of descriptions of the functions that make up the API and as guides describing steps such as the installation procedure, configuring users, setting up codes, etc. The documentation is shipped with the code and can also be found online at http://aiida-core.readthedocs.org/en/latest/. To ease the deployment procedure, a Dockerfile [20] for AiiDA installation with default parameters has been created (see https://bitbucket.org/aiida_team/aiida_core_docker). This method will also be extended to each code with an AiiDA plugin interface, in order to readily install a working AiiDA and simulation-code instance. Figure 1: MaX project timeline, compared with the Swiss Marvel project. 5.3 Data preservation strategy and standards As part of the MARVEL National Centre of Competence in Research (NCCR) project [21], funded by the Swiss National Science Foundation (SNSF), a large storage allocation has been purchased at CSCS, which will be used to store and retain large files until at least 2018. It is foreseen that this will be extended until 2026 (see timeline in fig. 1). In the meantime, we expect to obtain other funding opportunities to enable further preservation of the data. This also applies to all the data on the materialscloud.org platform being developed at EPFL, where consortium members will be free to share data publicly and privately (with selected collaborators), under the condition that data be made publicly available within approximately a year of depositing. The Materials Cloud platform is expected to be fully operative in April 2017. Entire AiiDA graphs, or parts of them, could be directly shared through the Materials Cloud portal. The data retained, being generated with AiiDA, will include the full provenance of all simulations carried out. Some large output files may not be preserved if they are judged to be easy to reproduce and unlikely to be needed after the simulation has completed. 
This policy is code-dependent, and sensible defaults are defined within each AiiDA plugin, but they can easily be changed by the user or group who runs the simulations and generates the data. Together, the Materials Cloud and AiiDA aim to enable researchers to trivially adopt aspects of the Findable, Accessible, Interoperable and Re-usable (FAIR) principles as laid out by Force11. Any calculation run through AiiDA is automatically interoperable (we define an open standard, provide tools to convert data to other libraries and are adding Open Databases Integration for Materials Design (OPTiMaDe) compatibility) and re-usable (by virtue of the provenance tracking and workflows). The Materials Cloud, on the other hand, ensures that the data is both findable and accessible. Section 7 outlines further details. # Data security and confidentiality 6.1 Formal information/data security standards AiiDA adopts a distributed approach whereby an AiiDA instance (the code plus the associated database) can be hosted on an individual's machine, a group server or a national or international server. Instances within a group should be managed and secured by the group itself or an appointed administrator. We provide a means of sharing results, either with collaborators or with the public at large, via the materialscloud.org website, which will be running a full AiiDA instance. In this case, EPFL and CSCS (for the AiiDA repository and the large files, respectively) will be responsible for maintaining data security. Data transport and access will be carried out over secure communication channels, i.e. Secure Shell (SSH), and access to the database will be restricted to authorised users only. The AiiDA database does not store users' private SSH keys, and therefore any possible compromise of the database does not lead to a security breach that extends beyond the data stored in the database itself. 
6.2 Main risks to data security

Access to data and the execution of simulations is typically initiated by opening an SSH connection. This protocol itself is considered to be highly secure and is widely used. Also, SSH keys are used to connect rather than passwords. Moreover, these keys are not stored in the AiiDA database; instead, AiiDA uses the keys of the Linux user under which the AiiDA daemon is running, so there is no additional security risk with respect to standard SSH connections. In any case, should a private key be obtained by an individual other than the authorised user, there are typically system logs that keep track of all access, and the specific SSH key can be disabled to stop further activity. In addition, AiiDA keeps an extensive log of the activities carried out, which can be examined retrospectively in such a case.

# Data sharing and access

7.1 Suitability for sharing

The data generated in this project is highly suitable for sharing. Given that a simulation may take many hundreds (if not thousands) of Central Processing Unit (CPU) hours, it is beneficial to the community to be able to access the results without having to recompute them. In addition to raw data, there will be a curated section of materialscloud.org, which uses AiiDA directly as a backend, that will contain results condensed from many simulations in a form that gives an overview of a particular property or area of interest. AiiDA provides a social ecosystem where simulation results, materials and provenance data can be shared. It provides plugins to import crystal structures from many common formats and directly from external databases such as the ICSD [17] or COD. It also has COD and TCOD exporters to export data to these external databases. Moreover, it is fully interoperable with commonly used data formats for crystal structures such as XSF [22], ASE [23] and Pymatgen [24].
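As a concrete illustration of this format interoperability, a code-independent structure record can be rendered as XSF text with a few lines of plain Python. The dictionary layout below is illustrative only, not the actual AiiDA StructureData schema:

```python
# Sketch: export a code-independent structure dictionary to XSF text.
# The dictionary layout is an assumption made for this example, not the
# actual AiiDA StructureData schema.

def structure_to_xsf(structure):
    """Render a periodic structure as an XSF-format string."""
    lines = ["CRYSTAL", "PRIMVEC"]
    for vec in structure["cell"]:                  # three lattice vectors (Angstrom)
        lines.append(" {:12.6f} {:12.6f} {:12.6f}".format(*vec))
    lines.append("PRIMCOORD")
    lines.append(" {} 1".format(len(structure["sites"])))
    for site in structure["sites"]:                # atomic number + Cartesian coordinates
        lines.append(" {} {:12.6f} {:12.6f} {:12.6f}".format(site["Z"], *site["xyz"]))
    return "\n".join(lines)

# Diamond-structure silicon as a minimal example record.
silicon = {
    "cell": [[2.715, 2.715, 0.0], [2.715, 0.0, 2.715], [0.0, 2.715, 2.715]],
    "sites": [{"Z": 14, "xyz": (0.0, 0.0, 0.0)},
              {"Z": 14, "xyz": (1.3575, 1.3575, 1.3575)}],
}

xsf_text = structure_to_xsf(silicon)
```

Because the record is a plain dictionary, the same data could just as easily be handed to an ASE or Pymatgen importer, which is the point of keeping the stored objects code-independent.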
7.2 Discovery by potential users of the research data

Data will be discoverable by the following means:

* The materialscloud.org website, which will host a public-facing frontend enabling access to publicly shared results.
* A private section of the website will allow dissemination with selected (authorised) collaborators. However, users will be obliged to make this data publicly available approximately a year after depositing it.
* Publications will contain references to the database (including UUIDs) indicating where the data used for that study can be found.
* Publications that use results from the AiiDA repository will be encouraged to cite the paper describing the software infrastructure.

7.3 Data access and reusability

## 7.3.1 REST interface and OPTiMaDe support

A key feature enabling large-scale data access is the development of the AiiDA REST (Representational State Transfer) API, an interface that assigns Uniform Resource Identifiers (URIs) to the objects stored in an AiiDA instance, thus making them obtainable via HTTP requests in a programmatic manner. The REST API can be coupled to authentication/authorization modules so that the AiiDA administrator can choose the degree of accessibility of the resources. Moreover, the AiiDA developers signed on to provide a URI syntax compliant with the standards defined in the OPTiMaDe (Open Databases Integration for Materials Design) protocol, to let users interrogate heterogeneous databases with the same syntax. Finally, the AiiDA REST API can be used as an essential building block for on-line services that expose a repository of data persisted in AiiDA. A prime example is materialscloud.org, whose back-end largely relies on the AiiDA REST API.

## 7.3.2 Sharing of plugins and workflows

Point A1.1 of the FAIR principles requires that “the protocol is open, free, and universally implementable”.
This is facilitated by a flexible plugin system being developed that enables developers to extend AiiDA to support their own codes and formats and to share this functionality with the community via the Materials Cloud. Similarly, workflows that encode scientific expertise on how to carry out a series of steps to arrive at a result can be disseminated via the Materials Cloud. A critically revised version of the workflow engine and of the plugin system has been implemented in the latest release, AiiDA v0.8, and is described in detail in D3.3 [19]. This simplifies both the sharing and reuse of these two high-level code components.

7.4 Governance of access

The ultimate decision about sharing the data will lie with the PI and the author of the data; however, in general it is expected that all data from published work within this Centre of Excellence will be made publicly available. We are in the process of preparing a data sharing policy for the data added to the materialscloud.org web portal, which will require users to make their data available under a Creative Commons license after a maximum period, likely to be one year. The core of the AiiDA APIs (“aiida_core”) is released under an open-source MIT license and is available to download for free. The full code, including the full content of “aiida_core” plus a set of useful additional plugins, is distributed as the “aiida_epfl” package, available free of charge for academic users. To download, users have to register with their academic email. The main terms of the “aiida_epfl” license are available at http://theossrv1.epfl.ch/aiida_download/. Finally, most of the materials simulation codes are open source. As described in the proposal, the remaining ones are in the process of introducing a multiple-licensing system to comply with the open-source requirements.
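The kind of functionality a shared plugin (section 7.3.2) encapsulates, translating code-independent parameters into a code's input file and parsing its raw output back into standard objects, can be sketched as follows. The input syntax and the output pattern are invented for illustration; real AiiDA plugins target each simulation code's actual formats:

```python
# Sketch of a code plugin's two halves: input generation and output parsing.
# The 'key = value' input syntax and the energy line format are illustrative
# assumptions, not the syntax of any real simulation code.
import re

def write_input(params):
    """Render an illustrative 'key = value' input file from a parameter dict."""
    return "\n".join("{} = {}".format(key, value)
                     for key, value in sorted(params.items()))

def parse_output(text):
    """Extract the total energy from raw output text into a standard dict."""
    match = re.search(r"total energy\s*=\s*(-?\d+\.\d+)\s*eV", text)
    return {"energy_ev": float(match.group(1))} if match else {}

raw_output = "convergence achieved\ntotal energy = -310.5421 eV\n"
inp = write_input({"ecutwfc": 60, "kpoints": "8 8 8"})
result = parse_output(raw_output)
```

Because every plugin returns the same kind of standard dictionary, results from different codes can be stored, queried, and compared uniformly, which is what makes a shared plugin useful to the whole community.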
# Relevant institutional, departmental or study policies on data sharing and data security

Data will be generated by different institutions, and the data policies of the respective institutions will apply. For data shared on materialscloud.org, the policies of EPFL and CSCS will apply (as the data is expected to be stored at these two institutions). In particular, EPFL provides a combined document, “Directive concerning research integrity and good scientific practice at EPFL (LEX 3.3.2)” [25], for all data policies. The data policies from CSCS are instead explained in the document “Data_Storage_policy_V2.pdf” [26].
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1049_MaX_676598.md
# Executive Summary

The MaX Centre of Excellence aims at supporting the needs of all the stakeholders involved in the field of materials modeling, simulation and design by providing new instruments and services in the form of data, codes, expertise and turnkey solutions to efficiently address the crucial challenges of novel materials development in the exascale computing era. This document provides a description of the strategies and solutions adopted within the MaX CoE to establish a high-level materials informatics framework to curate, preserve and share all the data produced by the flagship codes. The core technology behind this objective is the AiiDA code, a Python infrastructure designed to support different codes through a plugin interface, to allow for the automated design and implementation of complex workflows and task tracking, and to store the full provenance of each object in a tailored database. AiiDA parses the input and output files and runs the calculations on high performance computing platforms, stores the data using uniform formats based on Python dictionaries, and preserves the full provenance in the form of a Directed Acyclic Graph (DAG). AiiDA also enables a social ecosystem where simulation workflows and results can be openly shared.

# Introduction

2.1 About this document

This document is deliverable D3.1 of the MaX project and briefly describes the types of data produced in the project, the standards used for data, and how the data is being curated, preserved and shared. This is a living document and will be updated continuously in the course of the project. The acronyms used are summarised in the glossary at the end.

# Description of the data

Within this project, multiple open-source computer codes for electronic-structure calculations and materials modeling, including Fleur, SIESTA, Quantum ESPRESSO and YAMBO, are being developed (see Table 1).
The materials informatics framework AiiDA has been designed to support different codes, and plugins are already available or are currently being developed to support all codes of this Centre of Excellence (CoE). AiiDA promotes advanced programming models through Python abstraction layers to disseminate advanced functionalities to arbitrary quantum engines (i.e. simulation codes). It provides a model of automatic data generation and storage, to guarantee provenance, preservation, reproducibility and reuse. This platform will be used to organise and coordinate thousands of simulations, searching for optimal properties and performance by acquiring a variety of heterogeneous microscopic data from the calculations. It should ideally allow for an automated design and implementation of complex workflows and task tracking, based on a scripting interface for job creation and submission. The results will feed a database of structures and properties that will in turn drive further simulations. The data thus generated will be used, for instance, for data-mining and machine-learning, or to build classical neural networks to further ramp up the time and length scales accessible to numerical modeling.

Table 1: The primary codes that will be used as part of this project.
<table>
<tr> <th> Name </th> <th> License </th> <th> Main Developers </th> </tr>
<tr> <td> FLEUR [1, 2] </td> <td> GNU-GPL </td> <td> Jülich </td> </tr>
<tr> <td> QUANTUM ESPRESSO [3, 4] </td> <td> GNU-GPL </td> <td> SISSA, EPFL, CIN </td> </tr>
<tr> <td> SIESTA [5, 6] </td> <td> SIESTA License </td> <td> ICN2, BSC </td> </tr>
<tr> <td> YAMBO [7, 8] </td> <td> GNU-GPL </td> <td> CNR, CIN </td> </tr>
<tr> <td> AIIDA [9, 10] </td> <td> MIT-BSD, EPFL License </td> <td> EPFL, Robert Bosch </td> </tr>
<tr> <td> I-PI [11, 12] </td> <td> GNU-GPL/MIT-BSD </td> <td> EPFL </td> </tr>
</table>

3.1 Types of data

AiiDA provides automated solutions and various plugins for computer codes without a need for tuning code-specific parameters. It stores the calculations, their inputs and their results (either parsed, extracted from Extensible Markup Language (XML) outputs, or from text files with the appropriate dictionaries) in a database and its associated file repository. This data is generated from open-source electronic-structure materials simulation codes that encompass key technologies such as all-electron, pseudopotential, and localised basis-set methods; density-functional theory, time-dependent density-functional theory, and many-body perturbation theory; multiscale/multiphysics modelling with a focus on quantum mechanics/molecular mechanics; solvation and electrochemistry; thermal/electrical transport; and complex magnetic properties. The key codes are described in Table 1. We are also using the Crystallography Open Database (COD) [13] and Theoretical Crystallography Open Database (TCOD) [14], external open-access databases of crystal structures for organic and inorganic compounds, to obtain the input atomic coordinates of crystalline materials.

3.2 Format and scale of the data

AiiDA parses the input and output files, mostly stored as text or XML, and runs the calculations/codes on high performance computing platforms.
The full provenance of each data object (inputs, outputs, calculations) is automatically stored in the database in a format that enables the simulation results to be fully reproduced. The database has an associated repository with text and binary (machine-independent) files. We have used a uniform format to define the main raw and analysed data irrespective of the different plugins (e.g. Quantum ESPRESSO). These formats contain data in dictionary format, exportable for instance to plain JavaScript Object Notation (JSON) (for example, the StructureData and ParameterData data types in AiiDA store metadata in Python dictionaries within a database). Currently we are using the PostgreSQL [15] open-source relational database to store our data. It contains around 1M+ records, but over the course of the next year and a half it is expected to grow to 100M+ records (with 10 TB+ of occupied disk space). Currently we use applications like Jmol, Visual Molecular Dynamics (VMD), PyMOL, VESTA, XCrySDen and Blender to visualise 2D and 3D structures, and Matplotlib, Gnuplot and Mathematica for plotting the data. AiiDA provides a social ecosystem where simulation results, materials, provenance data and scientific workflows can be shared. It provides plugins to import crystal structures from many common formats and directly from external databases such as the Inorganic Crystal Structure Database (ICSD) [16] or COD. It also has COD and TCOD exporters to export data to these external databases. This allows us to share data easily and also ensures their long-term preservation.

# Data collection/generation

4.1 Methodologies for data collection/generation

Data for the project will be created and collected by using the AiiDA framework for the management of the simulations.
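The dictionary-based result formats mentioned in section 3.2 can be illustrated with plain Python and the standard library. The field names below are invented for the example, not the actual AiiDA ParameterData layout:

```python
# Sketch: a parsed calculation result held as a plain Python dictionary,
# exportable to JSON. Field names are illustrative assumptions only.
import json

result = {
    "code": "quantum_espresso",
    "energy_ev": -310.5421,
    "converged": True,
    "kpoints_mesh": [8, 8, 8],
}

serialized = json.dumps(result, sort_keys=True)   # plain-text, code-independent
restored = json.loads(serialized)
assert restored == result   # lossless round trip for JSON-compatible types
```

Keeping results in such plain dictionaries is what makes them easy to store in the database, query uniformly, and export to external services without code-specific tooling.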
AiiDA plugins and workflows are being written for different simulation codes, in order to support at least the codes used within this Centre of Excellence but also other codes available in the community. By using AiiDA, the full provenance of all calculations is preserved from initial inputs to final outputs, as well as all steps along the way, in the form of a Directed Acyclic Graph (DAG). This allows any output data to be retrospectively checked for quality if there are questions about how it was generated. Workflows, on the other hand, provide a means of proactive quality assurance, whereby a series of steps is designed and implemented by a domain expert and packaged as a workflow. A workflow can then be executed by experts and non-experts alike, with internal checks and heuristics that attempt to ensure the quality of data with respect to convergence and other relevant simulation parameters. Furthermore, by having a standard way of running particular calculations, it becomes much easier to compare and validate results. Raw inputs and outputs from computer simulation codes will be stored directly so that they may be re-parsed or manually inspected if necessary. Otherwise, data will be stored as standard, code-independent objects in the AiiDA framework (e.g. crystal structures, band structures, pseudopotentials, _k_-point paths, etc.), allowing easy querying and manipulation of results from a variety of simulation software packages. All naming of these input and output files is handled internally by AiiDA, and files can be retrieved for particular calculations either by issuing a query to match specific search criteria or directly by using the Universally Unique Identifier (UUID) of a known simulation.

4.2 Data quality and standards

As mentioned previously, the combination of persistent provenance and workflows will be used to maintain consistency and quality.
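The provenance idea described above, with every calculation and data object a node in a DAG whose links point to the inputs that produced it, can be sketched in a few lines of plain Python. The classes here are illustrative only, not AiiDA's actual storage model:

```python
# Sketch: retrospective provenance checking on a DAG of nodes.
# Each node carries its own data plus links to its inputs; walking the
# links recovers everything that led to a result. Illustrative only.
import uuid

class Node:
    def __init__(self, label, **attributes):
        self.uuid = str(uuid.uuid4())   # stable identifier, citable in publications
        self.label = label
        self.attributes = attributes
        self.inputs = []                # provenance links (DAG edges)

def ancestors(node):
    """Every node that contributed to `node`, found by walking input links."""
    seen = {}
    stack = list(node.inputs)
    while stack:
        current = stack.pop()
        if current.uuid not in seen:
            seen[current.uuid] = current
            stack.extend(current.inputs)
    return list(seen.values())

# A tiny provenance graph: structure + parameters -> calculation -> energy.
structure = Node("structure", formula="Si2")
params = Node("parameters", ecutwfc=60)
calc = Node("calculation", code="pw.x")
calc.inputs = [structure, params]
energy = Node("energy", value_ev=-310.5)
energy.inputs = [calc]

trail = sorted(node.label for node in ancestors(energy))
```

Walking the graph from any output back to its inputs is exactly what makes retrospective quality checks possible: the full chain of steps behind `energy` is recoverable from the result itself.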
Our provenance model also acts as a form of documentation, storing all the steps that lead to any result in the database. One aspect of the project involves the dissemination of a library of so-called pseudopotentials, which contain information about the quantum mechanical properties of the outermost electrons (those relevant in chemical bonds) for the elements of the periodic table. Currently there may be many different pseudopotentials for each element; this makes it difficult to compare calculations and, worse, some pseudopotentials are not accurate enough to give reliable results for some of the calculations they are being used in. By providing a standard set of pseudopotentials that have been thoroughly tested, we alleviate both these problems. We are also working on ontologies in collaboration with the TCOD team. During this collaboration, we have implemented exporters for calculations managed using AiiDA to the domain-specific ontology that is being built within the TCOD project, so that calculation results (such as crystal energies or atomic forces) can also be stored in a code-independent format.

# Data management, documentation and curation

5.1 Managing, storing and curating data

In AiiDA, all data (calculations, their inputs and their outputs) generated by running high-throughput simulations on local or remote servers is naturally stored on those computers. Moreover, relevant inputs and outputs are persisted in the AiiDA repository, composed both of a folder-like structure and of a database. For the latter, we use PostgreSQL, a powerful open-source object-relational database. The format for storing data (depending on the specific type of data) is defined by the specific AiiDA data plugins, described in detail in the code documentation. The data format of common objects (crystal structures, band structures, force constants, etc.)
is the same for all objects of the same type, even if generated by different computer codes, to facilitate data exchange, queries, and the bridging of different simulation tools. Moreover, each data format is accompanied by data import and export functions from/to standard formats (for instance, Crystallographic Information File (CIF) files for crystal structures). Further export functions can be added transparently and will be developed during the project. Every data object is a node in the DAG, where links between nodes keep track of the data provenance (who generated the data, with which parameters, etc.), allowing for easy regeneration of the same data with the same inputs. Moreover, besides common metadata (user/owner, creation and last modification date, etc.), any further metadata can be attached to any node of the database (data and calculations). AiiDA also provides data sharing capabilities, both to share portions of the calculations database with selected groups of users and collaborators, and to export the data to public repositories. Currently the full database and part of the file repository (small files) are stored on a server at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, while the remaining part of the file repository is stored on a server at the Swiss National Supercomputing Centre (Centro Svizzero di Calcolo Scientifico, CSCS). The policy defining what constitutes a large file depends on the application and is defined within the AiiDA workflows used to generate the data. On the EPFL server, backup scripts run every day, performing a full backup of the database and an incremental backup of the file repository. The data stored on the CSCS server is also backed up daily.

5.2 Metadata standards and data documentation

There are several key pieces of simulation software that will be used for this project, as described in Table 1.
Typically, a simulation is run by supplying one or more input files, which along with the primary data of interest (be it a configuration of atoms in space, the electronic structure, a material property, etc.) will typically produce auxiliary data (metadata) that can vary greatly from software to software. To interface software with AiiDA, a plugin is written that converts AiiDA nodes (used as input) to the actual input files required by the code, and parses outputs, allowing these to be stored in the database in a standard way such that they can later be queried using the AiiDA Application Programming Interface (API). The AiiDA code itself can be considered to be _data_ in this context. AiiDA is fully documented both in the form of descriptions of functions that make up the API and as guides describing steps such as the installation procedure, configuring users, setting up codes, etc. The documentation is shipped with the code and can also be found online at http://aiida-core.readthedocs.org/en/latest/.

5.3 Data preservation strategy and standards

As part of the MARVEL National Centre of Competence in Research (NCCR) project [17], funded by the Swiss National Science Foundation (SNSF), a large storage allocation is being purchased at CSCS which will be used to store and retain large files until at least 2018. It is foreseen that this will be extended until 2026 (see timeline in Fig. 1). In the meantime we expect to obtain other funding opportunities to enable further preservation of the data. This also applies to all the data on the materialscloud.org platform being developed at EPFL, where consortium members will be free to share data publicly and privately (with selected collaborators), under the condition that data be made publicly available within approximately a year of depositing. The data retained, being generated with AiiDA, will include the full provenance of all simulations carried out.
Some large output files may not be preserved if they are judged to be easy to reproduce and unlikely to be needed after the simulation has completed. This policy is code-dependent and sensible defaults are defined within each AiiDA plugin, but they can easily be changed by the user or group who runs the simulations and generates the data. Figure 1: MaX project timeline, compared with the Swiss Marvel project.

# Data security and confidentiality

6.1 Formal information/data security standards

AiiDA adopts a distributed approach whereby an AiiDA instance (the code plus the associated database) can be hosted on an individual’s machine, a group server, or a national or international server. Instances within a group should be managed and secured by the group itself or an appointed administrator. We provide a means of sharing results either with collaborators or the public at large via the materialscloud.org website, which will be running a full AiiDA instance. In this case, EPFL and CSCS (for the AiiDA repository and the large files, respectively) will be responsible for maintaining data security. Data transport and access will be carried out over secure communication channels, i.e. Secure Shell (SSH), and access to the database will be restricted to authorised users only. The AiiDA database does not store users’ private SSH keys, and therefore any possible compromise of the database does not lead to a security breach that extends beyond the data stored in the database itself.

6.2 Main risks to data security

Access to data and the execution of simulations is typically initiated by opening an SSH connection. This protocol itself is considered to be highly secure and is widely used. Also, SSH keys are used to connect rather than passwords. Moreover, these keys are not stored in the AiiDA database; instead, AiiDA uses the keys of the Linux user under which the AiiDA daemon is running, so there is no additional security risk with respect to standard SSH connections.
In any case, should a private key be obtained by an individual other than the authorised user, there are typically system logs that keep track of all access, and the specific SSH key can be disabled to stop further activity. In addition, AiiDA keeps an extensive log of the activities carried out, which can be examined retrospectively in such a case.

# Data sharing and access

7.1 Suitability for sharing

The data generated in this project is highly suitable for sharing. Given that a simulation may take many hundreds (if not thousands) of Central Processing Unit (CPU) hours, it is beneficial to the community to be able to access the results without having to recompute them. In addition to raw data, there will be a curated section of materialscloud.org that will contain results condensed from many simulations in a form that gives an overview of a particular property or area of interest.

7.2 Discovery by potential users of the research data

Data will be discoverable by the following means:

* The materialscloud.org website, which will host a public-facing frontend enabling access to publicly shared results.
* A private section of the website will allow dissemination with selected (authorised) collaborators. However, users will be obliged to make this data publicly available approximately a year after depositing it.
* Publications will contain references to the database (including UUIDs) indicating where the data used for that study can be found.
* Publications that use results from the AiiDA repository will be encouraged to cite the paper describing the software infrastructure.

7.3 Governance of access

The ultimate decision about sharing the data will lie with the PI and the author of the data; however, in general it is expected that all data from published work within this Centre of Excellence will be made publicly available.
We are in the process of preparing a data sharing policy for the data added to the materialscloud.org web portal, which will require users to make their data available under a Creative Commons license after a maximum period, likely to be one year. The core of the AiiDA APIs (“aiida_core”) is released under an open-source MIT license and is available to download for free. The full code, including the full content of “aiida_core” plus a set of useful additional plugins, is distributed as the “aiida_epfl” package, available free of charge for academic users. To download, users have to register with their academic email. The main terms of the “aiida_epfl” license are available at http://theossrv1.epfl.ch/aiida_download/. Finally, most of the materials simulation codes are open source. As described in the proposal, the remaining ones are in the process of introducing a multiple-licensing system to comply with the open-source requirements.

# Relevant institutional, departmental or study policies on data sharing and data security

Data will be generated by different institutions, and the data policies of the respective institutions will apply. For data shared on materialscloud.org, the policies of EPFL and CSCS will apply (as the data is expected to be stored at these two institutions). In particular, EPFL provides a combined document, “Directive concerning research integrity and good scientific practice at EPFL (LEX 3.3.2)” [18], for all data policies. The data policies from CSCS are instead explained in the document “Data_Storage_policy_V2.pdf” [19].
1050_GenTree_676876.md
GenTree are numbers, texts, images and sounds generated by the partners during the project and **needed to validate** the results presented in scientific publications and other communication documents. They include metadata, i.e. the information describing the managed research data. This document does not concern data preexisting the GenTree project. These data are part of the “background” contributions supplied by the GenTree partners and are identified in the Consortium Agreement document. Nor does this document concern data originated from third-party producers. These data are managed according to the rules defined by these third parties. With the objective of **stimulating the use and re-use of data** and contributing to the development of the “Open Science” strategy of the European Union through **data sharing**, the GenTree data policy framework provides recommendations and rules for data management and accessibility. It is based on existing documents published in the scientific literature, such as Michener (2015, “Ecological data sharing”, Ecological Informatics) and the guidelines produced by the European Commission (2016, Guidelines on Data Management in Horizon 2020). The **general purpose** of this document is:

1. to identify the different types of data (origin and authorship, see Annex 1 for definition of terms) that will be collected during GenTree, and to define the different categories of users for these data during GenTree and after it is finished;

2. to provide documents and recommendations for data management and publication:
* a Data Management Plan template (Annex 2);
* recommendations for data sharing (Annex 3);
* data licenses adapted to specific Intellectual Properties (Annex 4);

3. to provide support for the production and management of discovery metadata:
* metadata standards compliant with international requirements and constraints;
* a metadata management system.
This document only focuses on data, including output data from modeling activities. Models and modeling platforms involved in the GenTree project are out of its scope. Intellectual property and sharing rules related to these components are addressed in the Consortium Agreement. Plant material and DNA are also outside the scope of this document. Their exchange and transfer will be addressed in a separate Material Transfer Agreement document.

# 3- Data Management Plan

All GenTree research activities that collect or generate data should provide a Data Management Plan (DMP). Although there is no unique rule to define at which scale the DMP should be elaborated, it must correspond to a data set that is coherent with respect to its management. The H2020 DMP template (Annex 2) should preferably be used by GenTree partners. For each data set, the following information needs to be provided: data set reference and name, data set description, standards and metadata used, how data will be shared, and how data will be archived and preserved (including storage and backup).

# 4- Data sharing agreement for GenTree partners

A data sharing agreement (Annex 3) has to be signed by all GenTree partners in order to ensure the full sharing of data among partners. Data will be immediately accessible to all partners, without any embargo. Studies based on specific data sets must be proposed as co-research activities to the partner(s) owning the intellectual property rights.

# 5- Data identification/citation

GenTree data will be stored in GnpIS ( _https://urgi.versailles.inra.fr/Tools/GnpIS_ ), a dedicated multispecies integrative Information System which ensures confidential treatment, secured archiving, browsing, visualizing, and use and re-use of data. GnpIS is a licensed information system that will be used as is, without further development from its current state, by GenTree partners. GnpIS cannot be installed at GenTree partner laboratories.
GenTree recommends that data sets be identified by a DOI. GnpIS makes it possible to generate DOIs for data sets. Data and their DOIs will remain accessible to partners for a minimum of 10 years after they are stored in GnpIS. GenTree encourages the publication of data papers making the data produced by GenTree accessible to the community of researchers or stakeholders interested in the management and sustainable use of forest genetic resources in Europe. GenTree encourages publications using GenTree data to clearly identify and acknowledge such data and their Information System, either by citing the relevant data papers or by linking to the proper DOI in the GenTree Information System.

# 6- Metadata standards and tools for discovery

All GenTree datasets should be described by standardized metadata for discovery purposes. GenTree metadata will be stored and managed using the same dedicated Information System as for data, GnpIS ( _https://urgi.versailles.inra.fr/Tools/GnpIS_ ). Metadata will follow the ISO 19115/19139 standards and will be compliant with the EU INSPIRE directive. GenTree metadata will be fully and freely accessible. Each Partner shall supply GnpIS, the GenTree Information System, with its metadata as soon as data are generated. Each Partner shall also supply the GenTree Information System with metadata from third parties, according to the rules defined by such third party.

# GenTree DMP template

Templates are based on the H2020 and DCC templates ( _http://www.dcc.ac.uk/resources/howguides/develop-data-plan_ ). The purpose of the Data Management Plan (DMP) is to provide an analysis of the main elements of the data management policy that will be used by the applicants with regard to all the datasets that will be generated by the project. The DMP is not a fixed document, but evolves during the lifespan of the project.
The DMP should address the points below on a dataset-by-dataset basis and should reflect the current status of reflection within the consortium about the data that will be produced. GenTree data should be discoverable, accessible, assessable and intelligible, useable beyond the original purpose for which they were collected and, finally, interoperable to specific quality standards. In GenTree, there are 3 main categories of data: site description, DNA sequences and phenotypic traits. For each data set, the following 5 main items should be addressed and specified:

## **1- Data set reference and name**

Identifier for the data set to be produced.

## **2- Data set description**

Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and re-use.

Questions to consider on data set description:

* What data will you create?

Guidance:

* Give a brief description of the data that will be created, noting its content and coverage.

## **3- Standards and metadata**

Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created.

Questions to consider on standards and metadata:

* How will you capture / create the metadata?
* Can any of this information be created automatically?
* What metadata standards will you use and why?

Guidance:

* Metadata should be created to describe the data and aid discovery. Consider how you will capture this information and where it will be recorded, e.g. in a database with links to each item, in a ‘readme’ text file, in file headers etc.
* Researchers are strongly encouraged to use community standards to describe and structure data, where these are in place. The DCC offers a catalogue of disciplinary metadata standards.
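As a minimal illustration of capturing discovery metadata in a machine-readable record (e.g. a ‘readme’ file travelling with the data, as suggested in the guidance above), the sketch below collects a few core fields that loosely mirror ISO 19115 elements. The field names and values are illustrative assumptions, not a compliant ISO 19115/19139 encoding.

```python
# Sketch of a minimal discovery-metadata record. Field names loosely follow
# ISO 19115 core elements (title, abstract, extents, responsible party) but
# are illustrative only; all values are invented for the example.
import json

metadata = {
    "title": "Example GenTree phenotypic data set",  # hypothetical data set
    "abstract": "Tree height and bud burst scores collected at pilot sites.",
    "keywords": ["forest genetics", "phenotype", "GenTree"],
    "temporal_extent": {"start": "2016-03-01", "end": "2016-10-31"},
    "spatial_extent": {"west": -9.5, "east": 30.0, "south": 36.0, "north": 68.0},
    "responsible_party": {"organisation": "Example partner", "role": "pointOfContact"},
    "lineage": "Field measurements, quality-checked before upload.",
}

# Serialise to JSON so the record can accompany the data as a readme file.
record = json.dumps(metadata, indent=2)
print(record)
```

Capturing these fields at data-creation time, rather than reconstructing them later, is what makes automatic harvesting into a catalogue or information system feasible.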
## **4- Data sharing**

Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).

Questions to consider on methods for data sharing:

* How will you make the data available to others?
* With whom will you share the data, and under what conditions?

Guidance:

* Consider where, how, and to whom the data should be made available. Will you share data via a data repository, handle data requests directly or use another mechanism?
* The methods used to share data will be dependent on a number of factors such as the type, size, complexity and sensitivity of data. Mention earlier examples to show a track record of effective data sharing.

Questions to consider on restrictions for sharing:

* Are any restrictions on data sharing required? e.g. limits on who can use the data, when and for what purpose.
* What restrictions are needed and why?
* What action will you take to overcome or minimise restrictions?

Guidance:

* Outline any expected difficulties in data sharing, along with causes and possible measures to overcome these. Restrictions to data sharing may be due to participant confidentiality, consent agreements or IPR. Strategies to limit restrictions may include: anonymising or aggregating data; gaining participant consent for data sharing; gaining copyright permissions; and agreeing a limited embargo period.
Questions to consider on data repository:

* Where (i.e. in which repository) will the data be deposited?

Guidance:

* Most research funders recommend the use of established data repositories, community databases and related initiatives to aid data preservation, sharing and re-use.
* An international list of data repositories is available via Databib or Re3data.

## **5- Archiving and preservation (including storage and backup)**

Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate end volume will be, what the associated costs are and how these are planned to be covered.

Questions to consider on Preservation Plan:

* What is the long-term preservation plan for the dataset? e.g. deposit in a data repository
* Will additional resources be needed to prepare data for deposit or meet charges from data repositories?

Guidance:

* Researchers should consider how datasets that have long-term value will be preserved and curated beyond the lifetime of the grant. Also outline the plans for preparing and documenting data for sharing and archiving.
* If you do not propose to use an established repository, the data management plan should demonstrate that resources and systems will be in place to enable the data to be curated effectively beyond the lifetime of the grant.

Questions to consider on Resourcing:

* What additional resources are needed to deliver your plan?
* Is additional specialist expertise (or training for existing staff) required?
* Do you have sufficient storage and equipment or do you need to cost in more?
* Will charges be applied by data repositories?
* Have you costed in time and effort to prepare the data for sharing / preservation?

Guidance:

* Carefully consider any resources needed to deliver the plan. Where dedicated resources are needed, these should be outlined and justified.
* Outline any relevant technical expertise, support and training that is likely to be required and how it will be acquired. Provide details and justification for any hardware or software which will be purchased or additional storage and backup costs that may be charged by IT services.
* Funding should be included to cover any charges applied by data repositories, for example to handle data of exceptional size or complexity. Also remember to cost in time and effort to prepare data for deposit and ensure it is adequately documented to enable re-use. If you are not depositing in a data repository, ensure you have appropriate resources and systems in place to share and preserve the data.
* See UKDS guidance on costing data management.

# GenTree Data Sharing Agreement

**1\. Conditions for supplying Data and Metadata to the GenTree Information System**

### 1.1 Data and Metadata coming from Parties of the GenTree Project

## a) Data and Metadata defined as Background

* Each Party shall supply the GenTree Information System with its Data and Metadata as soon as possible, if these Data and Metadata are needed for carrying out the activities of the research project, unless these Data or Metadata have been excluded or limited in the Consortium Agreement.
* The Parties will provide a citation (acknowledgement) reference for each Data set included as Metadata in the GenTree Information System.

## b) Data and Metadata (Foreground) produced by the Parties

* Each Party shall supply the GenTree Information System with its Data and Metadata as soon as they are generated, if these Data and Metadata are needed for carrying out the activities of the research project.
* The Parties will provide a citation (acknowledgement) reference for each Data set provided to the GenTree Information System.

### 1.2 Data and Metadata coming from third Parties

The Parties shall supply the GenTree Information System with Data and Metadata from third parties, according to the rules defined by each such third party.

### 2 Access policy to Data and Metadata during, outside and after the GenTree Project

#### 2.1 Access to Metadata

Public access to the Metadata will be given once these are entered into the GenTree Information System.

#### 2.2 Access to Data

## a) Access to Data defined as Background

* Access to Data defined as Background will be given to the GenTree partners as needed for the implementation of their research activities and projects, in compliance with the Consortium Agreement.
* Access to Data defined as Background will be given to the scientific community upon prior authorisation by the Party who owns the data or rights pertaining to such data.

## b) Access to Data defined as Foreground

* Access to Data defined as Foreground obtained within the GenTree Project will be given to the GenTree partners once they are entered into the GenTree Information System, and to the public after a maximum delay of 18 (eighteen) months after the end of GenTree.
* Parties who own the Data will endeavour to shorten this period of 18 (eighteen) months. The period of 18 (eighteen) months can be extended through a decision of the GenTree ExCom, after prior consultation of the owner of the Data.

### 3 Publication/communication

#### 3.1 Metadata public release

Metadata shall be released to the Public as soon as possible. Any publication or communication of Data and/or Metadata shall be made according to the provisions of article 8.4 (Dissemination) of the Consortium Agreement.
#### 3.2 Referencing and acknowledgement * Any publication of the results obtained within the GenTree project must reference the project as follows: “This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 676876”. * Any publication of results obtained that use Data and/or Metadata shared within the GenTree Information System, must reference the Data and/or Metadata owner, the citation reference of the dataset (whenever provided in the metadata) and the project. * Any publication of a synthesis of Data and/or Metadata shared in the GenTree information system, including for dissemination or training, must reference the Data and/or Metadata owner, the citation reference of the datasets (whenever provided in the metadata) and the project.
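A simple automated check of the referencing rule in section 3.2 is sketched below: it verifies that a draft text contains the required funding statement verbatim. The draft text and the helper name are invented for the example; only the quoted statement itself comes from the agreement.

```python
# Illustrative sketch: checking that a manuscript acknowledges GenTree's
# EU funding with the exact statement required by section 3.2.

REQUIRED_STATEMENT = (
    "This project has received funding from the European Union’s Horizon 2020 "
    "research and innovation programme under grant agreement No 676876"
)

def acknowledges_funding(text: str) -> bool:
    """Return True if the required funding statement appears verbatim in text."""
    return REQUIRED_STATEMENT in text

# Hypothetical acknowledgement section of a draft manuscript.
draft = (
    "Acknowledgements: This project has received funding from the European "
    "Union’s Horizon 2020 research and innovation programme under grant "
    "agreement No 676876."
)
print(acknowledges_funding(draft))
```

A verbatim substring check is deliberately strict: paraphrased acknowledgements (or a wrong grant number) fail, which matches the intent of a fixed, mandated statement.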
and identified, indicating, in particular, the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).

* **Archiving and preservation** (including storage and backup) - Description of the procedures that will be put in place for the long-term preservation of the data. An indication of how long the data should be preserved, what its approximate end volume will be, what the associated costs are and how these are planned to be covered.

# 3 ATLAS’ Grant Agreement - Article 29 – Dissemination of results, open access and visibility of EU funding

The Data Management Plan and Open Science Policy of project ATLAS specifically address Article 29 of the Grant Agreement. It is included here as a key reference.

## 29.1 Obligation to disseminate results

_Unless it goes against their legitimate interests, each beneficiary must — as soon as possible — ‘disseminate’ its results by disclosing them to the public by appropriate means (other than those resulting from protecting or exploiting the results), including in scientific publications (in any medium). This does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply._

_A beneficiary that intends to disseminate its results must give advance notice to the other beneficiaries of — unless agreed otherwise — at least 45 days, together with sufficient information on the results it will disseminate. Any other beneficiary may object within — unless agreed otherwise — 30 days of receiving notification, if it can show that its legitimate interests in relation to the results or background would be significantly harmed. 
In such cases, the dissemination may not take place unless appropriate steps are taken to safeguard these legitimate interests. If a beneficiary intends not to protect its results, it may — under certain conditions (see Article 26.4.1) — need to formally notify the Agency before dissemination takes place._

## 29.2 Open access to scientific publications

_Each beneficiary must ensure open access (free of charge online access for any user) to all peer-reviewed scientific publications relating to its results. In particular, it must:_

1. _as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications;_
2. _ensure open access to the deposited publication — via the repository — at the latest:_
   1. _on publication, if an electronic version is available for free via the publisher, or_
   2. _within six months of publication (twelve months for publications in the social sciences and humanities) in any other case;_
3. _ensure open access — via the repository — to the bibliographic metadata that identifies the deposited publication. The bibliographic metadata must be in a standard format and must include all of the following:_
   1. _the terms “European Union (EU)” and “Horizon 2020”;_
   2. _the name of the action, acronym and grant number;_
   3. _the publication date, and length of embargo period if applicable, and_
   4. _a persistent identifier._

## 29.3 Open access to research data

_Regarding the digital research data generated in the action (‘data’), the beneficiaries must:_

1. _deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge — the following:_
   1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
   2. _other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan';_
2. _provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves)._

_This does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply. As an exception, the beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex 1, would be jeopardised by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access._

## 29.4 Information on EU funding — Obligation and right to use the EU emblem

_Unless the Agency requests or agrees otherwise or unless it is impossible, any dissemination of results (in any form, including electronic) must:_

1. _display the EU emblem and_
2. _include the following text: “This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 678760”._

_When displayed together with another logo, the EU emblem must have appropriate prominence. For the purposes of their obligations under this Article, the beneficiaries may use the EU emblem without first obtaining approval from the Agency. This does not, however, give them the right to exclusive use. 
Moreover, they may not appropriate the EU emblem or any similar trademark or logo, either by registration or by any other means._ ## 29.5 Disclaimer excluding Agency responsibility _**Any dissemination of results must indicate that it reflects only the author's view and that the Agency is not responsible for any use that may be made of the information it contains.** _ ## 29.6 Consequences of non-compliance _If a beneficiary breaches any of its obligations under this Article, the grant may be reduced (see Article 43)._ # 4 ATLAS’ Data Management Plan (DMP) ATLAS will generate diverse research outputs, including data, software and scientific articles about physical oceanography, biogeochemistry, visual surveys of biodiversity, biological rates and traits measurements, genomic analyses, socio-economic metrics, and spatial planning. This diversity requires an ambitious data management plan, building on existing open science resources that are interoperable and trusted. The key to implementing ATLAS’ Data Management Plan is to appoint an information specialist (UniHB) with experience in marine science who will act as facilitator between ATLAS partners and six open science resources: * ENA, the European Nucleotides Archive ( _http://www.ebi.ac.uk/ena_ ) * PANGAEA, Data Publisher for Earth and Environmental Science ( _http://www.pangaea.de_ ) * EuBI, Euro-BioImaging ( _http://www.eurobioimaging.eu/_ ) * ZENODO, EU-funded open access digital repository ( _https://zenodo.org_ ) * OpenAIRE, H2020’s research monitoring infrastructure ( _https://www.openaire.eu_ ) * EMODnet, the European Marine Observation and Data Network ( _http://www.EMODnet.eu_ ) In compliance with H2020’s Open Research Data Pilot, these resources (i.e. ENA, PANGAEA, EuBI) ensure that _**data sets are** _ _**archived and preserved** _ , using a _**reference and name** _ (i.e. authors, year, title, DOI or accession number), a _**description** _ (i.e. 
targeted use, geolocation, methods, link to related articles), and community _**standards and metadata**_ (i.e. parameter semantics, formats, units, and use of ontologies and registries). Unless authors request that data be embargoed (see ATLAS’ Open Science Policy below), data sets archived at ENA, PANGAEA and EuBI are available freely in open access (see Task 8.3 below). Moreover, data sharing and reporting to the EC are maximised by disseminating metadata to OpenAIRE and EMODnet (see Tasks 8.4 & 8.5 below). A _**list of data and metadata expected from ATLAS’ research activities**_ is provided in Appendix I, based on previous EU projects that conducted similar studies, i.e. HERMES, HERMIONE and CoralFISH. The Data Management Plan of ATLAS is coordinated by Work Package 8, and is articulated around five key objectives/tasks:

## Task 8.1 Engage ATLAS partners in H2020’s Open Research Data Pilot

This task consists of effectively communicating the founding principles and implementation steps of ATLAS’ DMP and Open Science Policy to all partners of the project. This will be achieved by personal communication between partners and the information specialist (UniHB) during the entire life cycle of the project. Furthermore, ATLAS will engage in the development of community-oriented services that help deposit, link and enrich research outputs as part of OpenAIRE-Connect.

## Task 8.2 Assemble relevant research outputs from past EU-funded and nationally-funded efforts

This task will assemble a collection of research outputs (e.g. literature, data, maps and model outputs) available in open access from past and on-going research initiatives that are relevant to Atlantic ecosystem-based research. These include projects funded under FP6 (HERMES), FP7 (HERMIONE, CoralFISH, THOR, EURO-BASIN, FixO3) and H2020 (AtlantOS and SponGES), and initiatives funded in the USA (NOAA) and Canada (DFO). 
This task will establish a list of data and metadata expected from ATLAS’ research activities (see Appendix I).

## Task 8.3 Safeguard and publish ATLAS’ research outputs in open access

Research articles will be published in peer-reviewed journals and other digital material (other than data) will be deposited at ZENODO. Nucleotides data, imaging data, and environmental data (including bathymetry, physics, chemistry and biology) will be deposited in the selected data archives, i.e. European Nucleotides Archive (ENA), Euro-BioImaging (EuBI) and PANGAEA, respectively. There are two main routes towards open access to literature and data publications:

* **Gold open access (open access publishing)** means that data sets or articles are immediately provided in open access by the publisher. The business model of most journal publishers is shifting the payment of publication costs away from readers (subscription charges) towards the authors (Article Processing Charges, APCs). These costs can usually be borne by the university or research institute to which the researcher is affiliated, or by the funding agency supporting the research. ATLAS partners are strongly encouraged to use their institutional or ATLAS funds to publish research articles in Gold Open Access. Publishing data at ENA, EuBI and PANGAEA also follows gold open access, but the cost model is different. ENA and EuBI are European Infrastructures and therefore offer their service free of charge to authors. The publication costs of data sets at PANGAEA are included in the budget of ATLAS’ UniHB partner. The information specialist (UniHB) will assist ATLAS’ partners in publishing their data in open access.
* **Green open access (self-archiving)** means that data sets or articles are archived (deposited) for free by the author in an online repository. In the case of articles submitted to a non-open access journal, a final, peer-reviewed and proofread version can be deposited in Green Open Access. 
Some journal publishers request that open access is granted only after an embargo period has elapsed. Research outputs that are published in non-open access by ATLAS partners will be deposited at ZENODO. The information specialist (UniHB) will assist ATLAS’ partners in publishing their data in open access.

## Task 8.4 Monitor and report on ATLAS research outputs using OpenAIRE (M6-M48)

This will be achieved by actively monitoring data and literature publications from ATLAS partners using services from DataCite and Thomson Reuters, and by maintaining close communication with ENA, EuBI, PANGAEA and ZENODO. The information specialist (UniHB) will ensure that all research outputs (e.g. literature, data, maps and model outputs) from ATLAS are cross-linked and monitored by OpenAIRE. The information specialist (UniHB) will also assist ATLAS partners in engaging and using OpenAIRE-Connect. OpenAIRE will be used to report on research outputs from the project (https://www.openaire.eu/project/ATLAS). The following deliverables will also provide regular updates on ATLAS research outputs:

* D8.2 18-month progress report on ATLAS data integration in EMODnet
* D8.3 36-month progress report on ATLAS data integration in EMODnet
* D8.4 42-month Synthesis of ATLAS research outputs available in open access

## Task 8.5 Transfer ATLAS research outputs to science and industry stakeholders using EMODnet

ATLAS research outputs (i.e. data and linked journal publications) available in open access will be transferred to EMODnet, using the interoperability protocols specified by the different EMODnet thematic nodes (e.g. bathymetry, chemistry and biology).

# 5 ATLAS’ Open Science Policy

The Open Science Policy of ATLAS provides clear rules for safeguarding, sharing and facilitating reuse of research outputs (e.g. data, articles, cruise reports, presentations, posters, outreach material, code, software, tools, etc).
In compliance with Article 29 of the Grant Agreement, and following best practices described in the Data Management Plan, individuals who are involved in ATLAS activities will:

* register at ORCID ( _http://orcid.org/_ ) and communicate their ORCID ID to the ATLAS project office ([email protected]);
* ensure that Cruise Summary Reports (CSRs) (including a list of sampling activities, measured parameters and contact details of principal investigators for each parameter) are deposited in open access at ZENODO ( _https://zenodo.org/deposit/new?c=atlas_ ) no later than three months after the cruise;
* 45 days prior to submitting a manuscript for publication in a scientific journal, send a draft version to the ATLAS project office (Katherine Simpson, [email protected] and Murray Roberts, [email protected]), indicating to which journal it will be submitted. The draft can consist of the complete manuscript, or include at a minimum the title, authors and abstract. The project office will make sure that the EC is properly acknowledged in the manuscript and will send the authors a contribution number. The draft will be placed on the password-protected area of the eu-atlas website and all ATLAS partners will be informed by email by the Project Office. Comments about the draft manuscript (or abstract) should be sent directly to the authors and any concerns should be raised with the project coordinator. Unless issues have been raised during a period of 30 days, it is considered that all partners have read the research output and agree with its content;
* ensure that the final version of all research outputs (e.g. data, articles, cruise reports, presentations, posters, outreach material, code, software, tools, etc) are deposited at ZENODO ( _https://zenodo.org/deposit/new?c=atlas_ ) as soon as possible and no later than three months after their production. Access to these research outputs can be restricted. 
Research outputs deposited in ZENODO are automatically reported to OpenAIRE and to the EC;

* ensure that their research outputs are made available preferably in gold open access (or at least green open access) as soon as possible and at the latest six months after publication; twelve months for publications in the social sciences and humanities; twenty-four months in the case of unpublished data that are part of a research thesis;
* ensure that all research outputs display the EU emblem and include the following text: “ _This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 678760 (ATLAS). This output reflects only the author’s view and the European Union cannot be held responsible for any use that may be made of the information contained therein”._

<table> <tr> <th> _This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 678760 (ATLAS). This output reflects only the author’s view and the European Union cannot be held responsible for any use that may be made of the information contained therein._ </th> </tr> </table>

# 6 Appendix I – Parameters expected from research activities of ATLAS

The following table was compiled from data sets published from three previous European projects that were similar in scope to ATLAS, i.e. HERMES, HERMIONE and CoralFISH. We therefore expect that the parameters listed in this table will either be measured during field and experimental campaigns organised by or in collaboration with ATLAS partners, or re-used from already published data sets. 
<table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> amount concentration of biological entity per unit mass of environmental entity </td> <td> Bacteria, abundance per unit sediment mass </td> <td> 10**6 #/g </td> </tr> <tr> <td> amount concentration of biological entity per unit volume of environmental entity </td> <td> Prokaryotes, abundance as single cells; Bacteria, abundance </td> <td> 10**9 #/cm**3 </td> </tr> <tr> <td> amount concentration of chemical entity per unit volume of environmental entity </td> <td> Acid volatile sulphides; Chromium reducible sulphides; Iron ascorbic acid extraction; Iron dithionite extraction; Ammonium; Carbon, inorganic, dissolved; Carbon, organic, dissolved; Iron, total, dissolved; Nitrate; Nitrite; Nitrogen, total dissolved; Oxygen; Phosphate; Silicate; Sulfite; Sulphide; Thiosulphate; Concentration; Heptasulphide; Hexasulphide; Iron 2+; Pentasulphide; Polythionate; Tetrasulphide; Trisulphide; Zero valent sulphur, in pore water; Zero valent sulphur, in pore water cyanolysis; Zero valent sulphur, in pore water ZnCl2 extraction; Sulphur, elemental; Alkalinity, total; Chloride; Sulphate; Phospholipids; Adenylates, total </td> <td> µmol/cm**3; µmol/l; µmol/ml; mmol(eq)/l; mmol/l; nmol/ml; pmol/cm**3 </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> amount rate of chemical entity per unit area of environmental entity </td> <td> Methane flux; Oxygen flux, sediment oxygen demand; Methane, oxidation rate, anaerobic </td> <td> mmol/m**2/day </td> </tr> <tr> <td> amount rate of chemical entity per unit volume of environmental entity </td> <td> Esterase activity per sediment volume </td> <td> nmol/ml/h </td> </tr> <tr> <td> mass concentration of chemical entity per unit area of environmental entity </td> <td> Carbon, organic, particulate per area, fraction; Carbon, total particulate; Carbon, 
total particulate, fraction; Nitrogen, organic, particulate; Nitrogen, organic, particulate, fraction; Carbon, organic, particulate per area; Particle concentration, resuspendable; Marine litter, mass per area </td> <td> g/m**2; kg/km**2 </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> mass concentration of chemical entity per unit mass of environmental entity </td> <td> Amino acid, hydrolysable per unit sediment mass; Amino acids per unit sediment mass; Carbohydrates, acid soluble; Carbohydrates, NaOH soluble; Carbohydrates, total; Lipids, total; Proteins, total; Acenaphthene, fraction, per unit mass organic carbon; Acenaphthene, per unit mass organic carbon; Acenaphthylene, fraction, per unit mass organic carbon; Acenaphthylene, per unit mass organic carbon; alpha-Hexachlorocyclohexane, fraction, per unit mass organic carbon; alpha-Hexachlorocyclohexane, per unit mass organic carbon; Anthracene, fraction, per unit mass organic carbon; Anthracene, per unit mass organic carbon; Benz(a)anthracene, fraction, per unit mass organic carbon; Benz(a)anthracene, per unit mass organic carbon; Benz(a)pyrene, fraction, per unit mass organic carbon; Benz(a)pyrene, per unit mass organic carbon; Benz(b)fluoranthene, fraction, per unit mass organic carbon; Benz(b)fluoranthene, per unit mass organic carbon; Benz(g,h,i)perylene, fraction, per unit mass organic carbon; Benz(g,h,i)perylene, per unit mass organic carbon; Benz(k)fluoranthene, fraction, per unit mass organic carbon; Benz(k)fluoranthene, per unit mass organic carbon; beta-Hexachlorocyclohexane, fraction, per unit mass organic carbon; beta-Hexachlorocyclohexane, per unit mass organic carbon; Chrysene, fraction, per unit mass organic carbon; Chrysene, per unit mass organic carbon; Dibenzo(a,h)anthracene, fraction, per unit mass organic carbon; Dibenzo(a,h)anthracene, per unit mass organic carbon; Fluoranthene, fraction, per unit mass 
organic carbon; Fluoranthene, per unit mass organic carbon; Fluorene, fraction, per unit mass organic carbon; Fluorene, per unit mass organic carbon; gamma-Hexachlorocyclohexane, fraction, per unit mass organic carbon; gamma-Hexachlorocyclohexane, per unit mass organic carbon; Hexachlorobenzene, fraction, per unit mass organic carbon; Hexachlorobenzene, per unit mass organic carbon; Indeno(1,2,3-cd)pyrene, fraction, per unit mass organic carbon; Indeno(1,2,3-cd)pyrene, per unit mass organic carbon; Naphthalene, fraction, per unit mass organic carbon; Naphthalene, per unit mass organic carbon; ortho,para-Dichlorodiphenyltrichloroethane, fraction, per unit mass organic C; </td> <td> µg/g; mg/kg; ng/kg </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> mass concentration of chemical entity per unit volume of environmental entity </td> <td> Chloroplastic pigment equivalents per area; Chloroplastic pigment equivalents per volume; Chlorophyll a; Chlorophyll pigment equivalents; Phaeopigments; Carbon, organic, total per volume; Density, sigma, in situ; Density, sigma2000; Density, sigma-theta (0); Density, mass density; Protein, readily soluble per sediment volume </td> <td> µg/cm**3; µg/l; kg/m**3; mg/cm**3 </td> </tr> <tr> <td> mass concentration of chemical entity; biological entity per unit area of environmental entity </td> <td> Bacteria, biomass as carbon; Benthos, biomass as carbon; Biomass as carbon; Copepoda, biomass as carbon; Meiofauna, biomass as carbon; Nematoda, biomass as carbon; Polychaeta, biomass as carbon </td> <td> µg/10 cm**2; mg/m**2 </td> </tr> <tr> <td> mass of biological entity </td> <td> Amblyraja radiata, mass; Anarhichas denticulatus, mass; Anarhichas lupus, mass; Brosme brosme, mass; Chimaera monstrosa, mass; Etmopterus spinax, mass; Gadus morhua, mass; Galeus melastomus, mass; Gorgonacea, mass; Lithodes maja, mass; Lophelia pertusa, mass; Melanogrammus 
aeglefinus, mass; Molva molva, mass; Pollachius virens, mass; Porifera, mass; Rajella fyllae, mass; Sebastes norvegicus, mass </td> <td> kg </td> </tr> <tr> <td> mass rate of chemical entity per unit area of environmental entity </td> <td> 24-Ethylcholest-5-en-3beta-ol flux; 24-Methylcholest-5-en-3beta-ol flux; 24-Methylcholesta5,22E-dien-3beta-ol flux; all- cis-4,7,10,13,16,19-Docosahexaenoic acid flux; all- cis-5,8,11,14,17Icosapentaenoic acid flux; all- cis-6,9,12,15-Octadecatetraenoic acid flux; Cholesta-5-en-3beta-ol flux; cis-11-Hexadecenoic acid flux; cis-9-Hexadecenoic acid flux; Lipids flux; Biogenic flux; Biopolymeric carbon flux; Calcium carbonate flux; Carbohydrate flux; Carbon, inorganic, particulate flux per day; Carbon, organic, particulate flux per day; Flux of total mass; Lithogenic flux per day; Nitrogen, organic, particulate flux per day; Nitrogen, particulate flux; Opal flux; Organic matter flux; Protein total flux; Silica, particulate flux per day; Silicon, particulate flux; Total mass flux per day </td> <td> µg/m**2/day; mg/m**2/day </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> mass rate of chemical entity per unit volume of environmental entity </td> <td> Sulphate reduction rate; Methane, oxidation rate </td> <td> nmol/cm**3/day </td> </tr> <tr> <td> mass rate of chemical entity; biological entity per unit volume of environmental entity </td> <td> Bacterial biomass production of carbon </td> <td> ng/l/h </td> </tr> <tr> <td> metadata </td> <td> Buildup; Taxa analyzed; Replicates; Seismic line; Mesh size; Sample surface; Time, incubation; Duration, number of days; Signal strength; Pressure, water; Current direction; Current meter, pitch; Current meter, roll; Current meter, tilt; Course; Direction; Duration; File size; Depth, bottom/max; Depth, top/min; DEPTH, sediment/rock; DEPTH, water; Depth, bathymetric; Height above sea floor/altitude; Recovery; 
Taxon/taxa; Position; Description; Experimental treatment; Area/locality; Comment; Date/time end; Date/time start; Device type; Event; File name; Gear; Latitude 2; Longitude 2; Parameter; Reference of data; Reference/source; Sample code/label; Sample comment; Split; DATE/TIME; LATITUDE; LONGITUDE; ORDINAL NUMBER; Sorting; Species; Core; File format; Log info; Number; Profile; Sample ID; Sample, optional label/labor no; Shoot point; Site; Treatment; Type; Uniform resource locator/link to file; Uniform resource locator/link to graphic; Uniform resource locator/link to image; Uniform resource locator/link to metadata file; Uniform resource locator/link to movie; Uniform resource locator/link to raw data file; Uniform resource locator/link to sgy data file; Uniform resource locator/link to thumbnail; Cruise/expedition; Depth, bathymetric, maximum; Depth, bathymetric, minimum; ELEVATION; Feature; Filter; Hooks; Identification; Legibility; Location; Method comment; Seastate label; Section; Species code; Status </td> <td> </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> non-parametric quality of biological entity </td> <td> Sex; Stage </td> <td> </td> </tr> <tr> <td> non-parametric quality of environmental entity </td> <td> Coral Status; Food; Habitat; Substrate type; Morphology </td> <td> </td> </tr> <tr> <td> non-parametric quality of environmental entity per unit biological entity </td> <td> Coral; Amblyraja radiata; Anarhichas denticulatus; Anarhichas lupus; Brosme brosme; Chimaera monstrosa; Clupea harengus abundance as Nautical Area Scattering Coefficient; Etmopterus spinax; Fish; Gadus morhua; Gorgonacea; Lithodes maja; Lophelia pertusa; Melanogrammus aeglefinus; Molva molva; Pollachius virens; Rajella fyllae; Sebastes norvegicus; Sponge </td> <td> </td> </tr> <tr> <td> non-parametric quality of environmental entity per unit anthropogenic entity </td> <td> Marine litter, clinker; 
Marine litter, fabric; Marine litter, fishing net; Marine litter, glass; Marine litter, hard plastic; Marine litter, long line; Marine litter, metal; Marine litter, oil drum; Marine litter, soft plastic </td> <td> </td> </tr> <tr> <td> parametric quality of environmental entity </td> <td> Food index; Porosity; Species richness; Dauwe index; Diversity; Equitability; Shannon index of diversity </td> <td> </td> </tr> <tr> <td> parametric quality of environmental entity per unit biological entity </td> <td> Nematoda, trophic diversity; Mesopelagic fish abundance as Nautical Area Scattering Coefficient; Micromesistius poutassou abundance as Nautical Area Scattering Coefficient; Plankton abundance as Nautical Area Scattering Coefficient; Pollachius virens abundance as Nautical Area Scattering Coefficient </td> <td> </td> </tr> <tr> <td> parametric quality of physical entity </td> <td> Temperature, in rock/sediment; Temperature, water; Temperature, water, potential; Fluorescence; Turbidity; Echo backscatter; Turbidity (Formazin Turbidity Unit); Conductivity; Oxidation reduction (RedOx) potential; pH; Salinity; Attenuation, optical beam transmission; Humidity, relative; Pressure, atmospheric; Radiation, photosynthetically active; Temperature, air; Wind direction </td> <td> degC; arbitrary units; dB; mS/cm; mV </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> ratio of biological entity per unit biological entity </td> <td> Anaerobic methanotrophic archaea-1 archaea, targeted with Anaerobic methanotrophic archaea-1-350 oligonucleotide FISH-probe; Anaerobic methanotrophic archaea-2a archaea, targeted with Anaerobic methanotrophic archaea-2a-647 oligonucleotide FISH-probe; Anaerobic methanotrophic archaea-2a,b; Anaerobic methanotrophic archaea-2c; Anaerobic methanotrophic archaea-3; Anaerobic methanotrophic archaea-3 archaea, targeted with Anaerobic methanotrophic archaea-3-1249 oligonucleotide
FISH-probe; Archaea, Anaerobic methanotrophic archaea-1; Archaea, GEG; Archaea, GoM Arc I; Archaea, MBG-B; Archaea, MBG-D; Archaea, targeted with ARCH915 oligonucleotide FISH-probe; Methanomicrobiales; Methanosaeta related Methanosarcinales; Methanosarcinales, unaffiliated; Acidobacteria; Arcobacter spp., targeted with ARC94 oligonucleotide FISH-probe; Actinobacteria; Alphaproteobacteria; Bacteria; Bacteria, JS1; Bacteria, OP11; Bacteria, OP8; Bacteria, targeted with EUB338(I-III) oligonucleotide FISH-probe; Bacteria, TG1; Bacteria, unaffiliated; Bacteria, WS2; Bacteria, WS3; Bacteroidetes; Betaproteobacteria; Chlorobi; Chloroflexi; Deferribacteres; Deltaproteobacteria; Desulfusarcina/Desulfococcus, targeted with DSS658 oligonucleotide FISH-probe; Epsilonproteobacteria; Firmicutes; Fusobacteria; Gammaproteobacteria; Planctomycetes; Spirochaetes; Meiofauna; Foraminifera, benthic hyaline; Adelosina spp.; Ammolagena clavata; Ammonia beccarii; Amphicoryna scalaris; Angulogerina angulosa; Articulina tubulosa; Asterigerinata mamilla; Astrononion stelligerum; Bigenerina nodosaria; Biloculinella depressa; Biloculinella inflata; Biloculinella labiata; Bolivina pseudoplicata; Bolivina pseudopunctata; Brizalina alata; Brizalina albatrossi; Brizalina dilatata; Brizalina subaenariensis; Bulimina aculeata; Bulimina costata; Bulimina elongata; Bulimina marginata; Bulimina striata mexicana; Cassidulina laevigata; Cassidulina oblonga; Cassidulinoides bradyi; Chilostomella oolina; Cibicides lobatulus; Cibicides spp.; Cibicidoides pachyderma; Cibicidoides spp.; Cornuspira involvens; Dentalina albatrossi; Dentalina spp.; Discammina compressa; Discanomalina semipunctata; Discorbinella bertheloti; Discorbis spp.; Elphidium spp.; Epistominella rugosa; Fissurina spp.; Fursenkoina cf. acuta; Fursenkoina mexicana; Gavelinopsis praegeri; Glandulina spp.; Globobulimina affinis; Globobulimina pseudospinescens; Globobulimina spp.; Glomospira charoides; Guttulina cf.
communis; Gyroidinoides neosoldanii; Gyroidinoides orbicularis; Gyroidinoides umbonatus; Haynesina spp.; High oxygen indicators; Hoeglundina elegans; Hyalinea balthica; Laevidentalina flexuosa; Lagena spp.; Lenticulina calcar; Lenticulina cultrata; Lenticulina orbicularis; Lenticulina spp.; Low oxygen indicators; Marginulina spp.; Marginulinopsis spp.; Melonis barleeanus; Miliolinella spp.; Miliolinella subrotunda; Neoconorbina terquemi; Neolenticulina peregrina; Nonion asterizans; Nonion depressulum; Nonionella turgida; Nummoloculina contraria; Oolina spp.; Paracassidulina minuta; Patellina corrugata; Placopsilina sp.; Planorbulina mediterranensis; Planulina ariminensis; Polymorphina spp.; Pseudoclavulina crustata; Pseudotriloculina laevigata; Pseudotriloculina oblonga; Pullenia bulloides; Pullenia salisburyi; Pullenia subcarinata; Pyrgo cf.
murrhina; Pyrgo comata; Pyrgo elongata; Pyrgo inornata; Pyrgo spp.; Pyrgoella sphaera; Quinqueloculina laevigata; Quinqueloculina milletti; Quinqueloculina padana; Quinqueloculina seminula; Quinqueloculina spp.; Quinqueloculina venusta; Quinqueloculina viennensis; Repmanina charoides; Reussella spinulosa; Rhabdammina sp.; Robertinoides translucens; Rosalina bradyi; Rosalina spp.; Rotamorphina involuta; Sigmoilina costata; Sigmoilina sigmoidea; Sigmoilinita tenuis; Sigmoilopsis schlumbergeri; Siphonaperta aspera; Siphonina reticulata; Siphotextularia heterostoma; Sphaeroidina bulloides; Spiroloculina sp.; Spiroplectinella sagittula; Textularia agglutinans; Textularia pala; Triloculina spp.; Triloculina tricarinata; Uvigerina mediterranea; Uvigerina peregrina; Uvigerina proboscidea; Valvulineria bradyana </td> <td> </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> ratio of chemical entity per unit chemical entity </td> <td> Alanine; alpha-aminobutyric acid; Arginine; Asparagine; beta-Alanine; gamma- Aminobutyric acid; Glutamine; Glycine; Histidine; Isoleucine; Leucine; Lysine; Methionine; Ornithine; Phenylalanine; Serine; Threonine; Tyrosine; Valine; Nitrogen, organic, fraction; Oxygen, saturation; Methane; Organic matter; Ethane; n-Butane; Propane; Isobutane; Hydrate; Water content of wet mass; Aluminium; Calcium; Calcium carbonate; Carbon, inorganic, total; Carbon, organic, fraction; Carbon, organic, particulate; Carbon, organic, total; Carbon, total; Iron; Magnesium; Nitrogen, organic; Nitrogen, total; Opal, biogenic silica; Silicon; Sulphur, total; Lithogenic material; Volcanic glass, acidic; delta 15N; delta 15N, particulate organic nitrogen; delta 234 Uranium; Acid volatile sulphides isotopes; Chromium reducible sulphides isotopes; delta 13C; delta 13C, adjusted/corrected; delta 13C, organic carbon; delta 13C, particulate organic carbon; C1/C2 hydrocarbon ratio; C1/C2+ 
hydrocarbon ratio; C1/C3 hydrocarbon ratio; Lead 206/Lead 207; Lead 208/Lead 206 ratio; Neodymium 143/Neodymium 144; Strontium 87/Strontium 86 ratio; epsilon- Neodymium (0); Thorium 230/Thorium 232 ratio; Thorium 230/Uranium 234 ratio; Uranium 234/Uranium 238 activity ratio; Uranium 238/Thorium 232 ratio; Calcium/(Calcium+Iron) ratio; Carbon/Nitrogen ratio; Carbon/Sulphur ratio; Nitrogen/Carbon ratio </td> <td> </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> unit concentration of biological entity per unit area of environmental entity </td> <td> Eumeiofauna; Meiofauna, abundance; Pseudomeiofauna; Abundance per area; Benthos, other; Copepoda indeterminata; Meiofauna, abundance of foraminifera; Meiofauna, abundance of harpacticoida; Meiofauna, abundance of metazoa; Cladocera; Foraminifera, benthic; Lysianassidae; Scaphopoda; Anomonema; Chitwoodia; Comesomoides; Cricolaimus; Cylicolaimus; Demonema; Desmocolex; Diplolaimelloides; Echinodesmodora; Endeolophos; Eubostrichus; Eudiplogaster; Gairleanema; Gomphionchus; Gonionchus; Kraspedonema; Leptonemella; Metadasynemoides; Monoposthia; Morlaixia; Odontophoroides; Omicronema; Parachromadorita; Paradesmodora; Parallelocoilas; Paranticoma; Perspiria; Polysigma; Pontonema; Prochromadora; Psammonema; Pseudocella; Steineridora; Trichotheristus; Valvaelaimus; Cirratulidae; Maldanidae; Paramphinome sp.; Spionidae; Acari; Amphimonhystrella bullacauda; Antarcticonema; Asellota; Bathyepsilonema; Calanoidea; Cheironchus; Ciliata; Comesoma; Cyatholaimus; Deontolaimus; Desmodora pilosa; Desmodoroidea; Desmolorenzenia; Desmotricoma; Dorylaimopsis; Ethmolaimidae; Goniadidae; Greeffiellinae; Greeffiellopsis; Halanonchus; Hesionidae; Ixonema; Karkinochromadora; Lacydoniidae; Lumbrineridae; Metacomesoma; Metacylicolaimus; Monhysterina; Nannolaimoides; Nephtyidae; Nicascolaimus; Notochaetosoma; Nyctonema; Opheliidae; Ophiuroidea; Pandolaimus; 
Paraethmolaimus; Paraonidae; Parapinnanema; Pareudesmoscolex; Phoxocephalidae; Phyllodocidae; Pilargidae; Prochaetosoma; Pseudolella; Quadricoma; Sabatieria bitumen; Sabatieria conicauda; Sabatieria demani; Sabatieria lawsi; Sabatieria ornata; Sabatieria propisinna; Sabatieria punctata; Sabatieria stekhoveni; Sabatieria vasicola; Sipuncula; Synonchium; Tanaoidea; Tarvaia; Tetrapturus; Thoracostomopsis; Amphinomidae; Capitellidae; Glyceridae; Terebellidae; Virus; Carrier crab; Copepoda; Crab; Edible crab; Galeus melastomus; Gastrotricha; Kinorhyncha; Nematoda genus sp.; Number of clones; Number of individuals; Phylotype number; Shrimps; Small grenadier; Spider crab; Squat lobster; Swimmer crab; Foraminifera, benthic agglutinated; Foraminifera, benthic agglutinated indeterminata; Foraminifera, benthic calcareous; Foraminifera, benthic perforates indeterminata; Foraminifera, chitineous; Insects, total counts; Ostracoda; Ameira; Ameirinae; Ameiropsis; Amphiascoides; Amphiascus; Archesola; Argestes; Argestidae; Atergopedia; Bathycamptus; Bodinia; Bradya; Bradyellopsis; Canthocamptidae; Cletodes; Cyclopoida; Cylindronannopus; Diarthrodella; Dizahavia; Ectinosoma; Ectinosomatidae; Enhydrosoma; Eurycletodes; Filexilia; Fultonia; Halectinosoma; Halophytophilus; Haloschizopera; Hastigerella; Heterolaophonte; </td> <td> #; #/10 cm**2; #/m**2 </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> </td> <td> Idomene; Idyanthe; Idyanthidae; Idyella; Klieosoma; Kliopsyllus; Laophonte; Laophontodes; Leptomesochra; Leptopsyllus; Lineosoma; Lobopleura; Malacopsyllus; Marsteinia; Mesochra; Mesocletodes; Metahuntemannia; Microsetella; Misophrioida; Nematovorax; Neobradyidae; Novocriniidae; Parameiropsis; Paramesochra; Paramesochridae; Parapseudoleptomesochra; Peltobradya; Perissocope; Poecilostomatoida; Pseudameira; Pseudobradya; Pseudomesochra; Pseudotachidiinae; Rhyncholagena; Rhynchothalestris;
Robertgurneya; Sagamiella; Sarsameira; Scottopsyllus; Sigmatidium; Siphonostomatoida; Stenocopia; Talpina; Tegastes; Tetragonicepsidae ; Xylora; Zosime; Dendrophyllia cornigera; Desmophylum; Lophelia; Madrepora oculata; Centrophorus squamosus; Helicolenus dactylopterus dactylopterus; Hexanchus griseus; Lepidion eques; Lopheous piscatorious; Mora moro; Phycis blennoides; Synaphobranchus kaupi; Dendrophyllia; Madrepora; Actinonema; Adoncholaimus; Aegialoalaimidae; Amphimonhystera; Anticyathus; Aponema; Araeolaimus; Ascolaimus; Astomonema; Bathynox; Benthimermithidae; Bodonema; Bolbolaimus; Calligyrus; Catanema; Cephalanticoma; Ceramonema; Chaetonema; Choanolaimus; Chromadorella; Chromadoridae; Cobbia; Comesa; Comesomatidae; Coninckia; Cyatholaimidae; Dasynemoides; Desmodorella; Desmodoridae; Didelta; Diodontolaimus; Diplolaimella; Diplopeltidae; Diplopeltis; Disconema; Dolicholaimus; Doliolaimus; Draconema; Elzalia; Enchelidiidae; Enoplinae; Enoploides; Enoplus; Epacanthion; Euchromadora; Eurystomina; Filoncholaimus; Gammanema; Glochinema; Gnomoxyala; Graphonema; Greeffiella; Halichoanolaimus; Haliplectus; Halomonhystera; Hapalomus; Hopperia; Hypodontolaimus; Intasia; Laimella; Latronema; Leptosomatum; Linhomoeidae; Linhomoeus; Litinium; Longicyatholaimus; Megadesmolaimus; Mesacanthion; Mesacanthoides; Metachromadora; Metacyatholaimus; Metadasynemella; Metaparoncholaimus; Metasphaerolaimus; Metoncholaimus; Micoletzkyia; Microlaimidae; Minolaimus; Monhystera; Monhysteridae; Nannolaimus; Nemanema; Neochromadora; Neotonchus; Noffsingeria; Odontanticoma; Odontophora; Oncholaimellus; Oncholaimidae; Oncholaimus; Paracomesoma; Paralongicyatholaimus; Paramesonchium; Paramicrolaimus; Paramonhystera; Pararaeolaimus; Parasphaerolaimus; Parastomonema; Paratricoma; Pareurystomina; Parodontophora; Phanoderma; Phanodermatidae; Phanodermopsis; Pierrickia; Platycoma; Platycomopsis; Praeacanthonchus; Promonhystera; Prooncholaimus; Prototricoma; Pseudonchus; Pterygonema; 
Ptycholaimellus; Rhynchonema; Richtersia; Setoplectus; Setosabatieria; Siphonolaimus; Sphaerolaimidae; Spilophorella; </td> <td> </td> </tr> </table> <table> <tr> <th> **Category of parameters** </th> <th> **Parameters** </th> <th> **Units** </th> </tr> <tr> <td> </td> <td> Spinodesmoscolex; Spirobolbolaimus; Steineria; Stephanolaimus; Stylotheristus; Symplocostoma; Synodontium; Synonchiella; Synonchus; Thalassironus; Trefusia; Trefusialaimus; Tripyloides; Trissonchulus; Trochamus; Xennella; Xyala; Acantholaimus; Acanthonchus; Aegialoalaimus; Alaimella; Ammotheristus; Amphimonhystrella; Amphipoda; Anoplostoma; Anticoma; Antomicron; Aplacophora; Atrochromadora; Axonolaimus; Bathyeurystomina; Belbolla; Bivalvia; Calanoida; Calomicrolaimus; Calyptronema; Camacolaimus; Campylaimus; Cervonema; Chromadora; Chromadorina; Chromadorita; Chromaspirina; Cnidaria; Crenopharynx; Cricohalalaimus; Cumacea; Cyartonema; Dagda; Daptonema; Desmodora; Desmolaimus; Desmoscolecida; Desmoscolex; Dichromadora; Diplopeltoides; Diplopeltula; Draconematidae; Echinodermata; Eleutherolaimus; Enoplolaimus; Epsilonema; Ethmolaimus; Eumorpholaimus; Fenestrolaimus; Filitonchus; Gerlachius; Gnathostomulida; Halacarida; Halalaimus; Holothuroidea; Hydrozoa; Indeterminata; Innocuonema; Intoshia; Isopoda; Ledovitia; Leptolaimoides; Leptolaimus; Linhystera; Loricifera; Manganonema; Marylynnia; Metadesmolaimus; Metepsilonema; Meyersia; Microlaimus; Molgolaimus; Monhystrella; Nauplii; Nematoda; Oligochaeta; Oxyonchus; Oxystomina; Paracanthonchus; Paracyatholaimoides; Paracyatholaimus; Paralinhomoeus; Paramesacanthion; Paramonohystera; Polychaeta; Pomponema; Porifera; Priapulida; Procamacolaimus; Prochromadorella; Pselionema; Pseudochromadora; Retrotheristus; Rhabdocoma; Rhabdodemania; Rhips; Rotifera; Sabatieria; Sipunculida; Southerniella; Sphaerolaimus; Spiliphera; Spirinia; Stygodesmodora; Subsphaerolaimus; Syncarida; Synonema; Syringolaimus; Tanaidacea; Tantulocarida; Tardigrada; Terschellingia; 
Thalassoalaimus; Thalassomonhystera; Theristus; Tricoma; Trileptium; Turbellaria; Vasostoma; Viscosia; Wieseria; Xyalidae; Metalinhomoeus; Adercotryma sp.; Ammodiscus sp.; Bolivina sp.; Buliminella sp.; Discorbinellidae indeterminata; Epistominella sp.; Fissurina sp.; Ioannella sp.; Lagena sp.; Lagenammina sp.; Nodellum sp.; Oolina sp.; Reophax sp.; Triloculina sp.; Gastropoda; Harpacticoida </td> <td> </td> </tr> <tr> <td> volume of biological entity </td> <td> Bacteria, cell biovolume; Bacteria, heterotrophic, cell biovolume </td> <td> µm**3 </td> </tr> </table> Deliverable 10.1
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1052_MC-SUITE_680478.md
The MC-SUITE project proposes a new generation of ICT-enabled process simulation and optimization tools, enhanced by physical measurements and monitoring, that can increase the competence of the European manufacturing industry by reducing the gap between virtual modelling and real physical processes. In order to achieve the expected results of the MC-SUITE approach, the consortium assembled for this 36-month project combines a unique set of skills and expertise, including 6 SMEs, 2 universities, 2 research centres and 2 large companies. Hence, this consortium covers the whole value chain of manufacturing, including software providers, equipment providers, machine tool builders and end users.

**1.2 PROJECT CONTEXT AND DATA ORIGIN**

The advances in Information and Communication Technology (ICT) are revolutionizing our everyday life. However, the manufacturing industry has not yet taken full advantage of this huge potential. The final aim of the MC-SUITE project is to open the doors of the manufacturing workshops to this new revolution. Indeed, the MC-SUITE consortium strongly believes that current ICT can have a tremendous impact on productivity, increasing the competence and expertise of European manufacturing companies and especially the SMEs. The aim of the project is to develop the MC-SUITE application based on six different modules designed to reduce the gap between the simulated and the real process (see Figure 2).

_Figure 2: MC-SUITE modules_

**_Objective 1:_** To develop a virtual model of the machining process.

* **MC-Virtual** obtains the final path of the tool and the quality of the real part, the cutting force and process stability, overcoming the limits of current Computer Aided Manufacturing software.

**_Objective 2:_** To apply multi-objective optimization methods in manufacturing processes.
* **MC-Optim** optimizes the milling process considering multiple objectives, including productivity, quality and energy consumption.

**_Objective 3:_** To create a complete Cyber Physical System for machine productivity improvement.

* **MC-CyPhy** includes three different embedded systems connected to the virtual model and to the monitoring system to increase productivity.

**_Objective 4:_** To build up a monitoring system based on the cloud.

* **MC-Monitor** is a cloud-based system able to store heterogeneous data, including signals coming from internal sensors of the machine, from embedded systems, and operator-authored data.

**_Objective 5:_** To create new services based on Big Data.

* **MC-Analytics** is a platform to process the information in the cloud for predictive maintenance and productivity improvement.

**_Objective 6:_** To combine simulated and experimental data in a complete software suite.

* **MC-Bridge** compares the results of the virtual model with the real ones obtained from the monitoring system. This way, on-line and off-line optimizations are performed both on the simulations and on the machining process.

The project will generate two types of research data. The first type will be large sets of numerical arrays from the machines' sensors, which will be used for the development of the working models behind the MC-SUITE modules. The second type will be data generated by the predictive and optimization models in the project. Simulation and programming code compiled in the project will be regarded as research data too.

One of the challenges in preserving data for the long term is the choice of file format. In the MC-SUITE project, whenever possible, data will be saved in open formats or widely popular formats which do not depend on proprietary software. For text files this can be .txt, for tabular data .csv, for images .tif, for audio .mp3 and for video .avi.
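As an illustration of the open-format policy above, raw sensor arrays can be dumped to plain .csv using nothing beyond the standard library. This is a minimal sketch, not part of the project specification: the column names `Fx_N`, `Fy_N`, `Fz_N` and `time_s` are hypothetical placeholders.

```python
import csv
import io

def export_samples(samples, fileobj):
    """Write rows of numeric sensor samples as plain CSV.

    `samples` is an iterable of (Fx, Fy, Fz, t) tuples; the header
    names are illustrative only, not prescribed by the DMP.
    """
    writer = csv.writer(fileobj)
    writer.writerow(["Fx_N", "Fy_N", "Fz_N", "time_s"])
    writer.writerows(samples)

# Usage: write two samples into an in-memory buffer instead of a file.
buf = io.StringIO()
export_samples([(12.1, -3.4, 88.0, 0.000), (12.3, -3.1, 87.6, 0.001)], buf)
```

Because .csv is plain text, files written this way remain readable without any proprietary software, which is exactly the preservation property the paragraph above asks for.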
Data will be generated continuously throughout the whole duration of the project, but the total amount of data is expected to be less than 20 GB per partner. A more detailed description of the data and the equipment which will be used for its collection/generation is provided in the following subsections. This is based on the early-stage plans of the project and will be updated together with later versions of the Data Management Plan.

**2. DATA DESCRIPTION**

**2.1 MILLING MACHINES**

The main milling machines that will be used in the project are the SORALUCE SV 6000, the LAGUN GVC-1000-HS and the FIDIA DL165 machining centres (see Figure 3). All machines will be equipped with a variety of sensors which will help analyse the metal cutting processes, build process models and test the virtual sensors developed in the MC-SUITE project. Initially, investigations will be focused on orthogonal and oblique milling of stainless steel and aluminium.

_Figure 3: Machining centres used in the MC-SUITE project (LAGUN GVC-1000-HS, SORALUCE SV 6000 and FIDIA DL165)._

All sensor data will be presented together with some of the tool and process characteristics. This will include the cutting angles of the tool, cutting edge radius, substrate material and tool coating (if known). Characteristics such as cutting depth, cutting speed and feed will also be added to the documentation.

**2.2 CUTTING FORCES**

Cutting force is one of the most significant parameters in process modelling. A range of sensors is available in the consortium. The Kistler 9255B multi-component dynamometer plate, the Kistler 9121 3-component orthogonal dynamometer and the Kistler 9123C rotational dynamometer will be used as force measurement references for different cutting tests (see Figure 4).
_Figure 4: Kistler dynamometers (9255B, 9121 and 9123C) used in the project._

Dynamometers are widely used in laboratories, but because of their very high cost they are rarely applied in a real production environment. A more cost-effective alternative for an indirect measurement of the forces will be investigated in the project. It has been demonstrated that a virtual sensor can be based on the current measured on the drive and the spindle of the machine. Current sensors on the electric loops of the spindle and drive motors of the SV 6000 machine have already been installed by the machine supplier. However, external sensors might be added to verify the reliability of the internal CNC data and to obtain higher sampling frequencies.

The cutting forces collected directly from the sensors will be saved in ASCII files structured in 4 columns. Three of the columns will be dedicated to the force components and the fourth will represent the time. The processed cutting forces will be saved in separate ASCII files together with the cutting test conditions, the cutting force average and the standard deviation of each force component.

**2.3 VELOCITIES AND POSITIONS**

The tests will be carried out on the SORALUCE SV 6000 milling machine. Accelerometers are not integrated in the machine and will be placed on the horizontal axes of the ram tip. The returned information is an image of the acceleration that, once processed, provides the velocity and displacement of the ram tip.

Encoders are electromechanical devices that generate an electrical signal proportional to the position of the measured element. They enable the collection of accurate real-time information about the motor position and velocity to close the control loops. Data from the following encoders will be used:

* Three linear encoders (Figure 5) giving information about the position along the XYZ axes of the SV milling machine.
* A rotary encoder giving information about the spindle velocity.
_Figure 5: Linear encoder._

To enable high-frequency acquisition and avoid limitations from the CNC, the encoder signals will be duplicated and processed directly in a real-time acquisition system with a frequency of up to 10 kHz. Results will be exported directly to Matlab.

**2.4 MACHINE AND TOOL DYNAMICS**

The Frequency Response Function (FRF) of the tool tip will also be provided. The FRF is very valuable in vibration analysis for studying the behaviour of the machine and tools. The FRF will be saved in “.uff” format files.

**2.5 VIDEO AND SOUND ACQUISITION**

Video and sound of the machining processes will be recorded. Standard IP cameras will be used to provide a general view of the machining process and document the tests. Moreover, sound analysis can reveal information about the occurrence of vibrations and chatter frequencies. A high-speed filming device, the FASTCAM Ultima APX-RS 250K (Figure 6), will be used to obtain videos and images of the chip generation. Files will be saved in several formats: AVI, JPEG, PNG (10-bit) or TIFF.

_Figure 6: Ultima APX-RS 250K._

**2.6 SURFACE MEASUREMENTS**

When required, surface finish quality will be measured with a non-contact laser measurement in order to obtain information about the workpiece roughness. Alternatively, white light interferometry (WLI) or contact probe profilometers can be used too. Some complementary information will be obtained from optical microscopy images.

**2.7 SIMULATION DATA**

Many of the machining simulations in the project will be performed using the MACHpro simulation software. The software has a milling module which enables prediction of process parameters such as cutting force, tool side load, workpiece cutting force, spindle torque, spindle power and energy consumption, bending moment, form error and chip load.

_Figure 7: MACHpro interface._

Results obtained with MACHpro will be used to provide estimations of cutting forces and cutting process energy consumption.
Outputs will be presented in ASCII files.

**2.8 OPTIMISATION DATA**

The MC-OPTIM module will receive input data from a variety of simulations performed in the MC-VIRTUAL module. Multi-objective process optimisation will be applied to the simulation results to provide the machine operator with optimal working conditions. Table 1 summarizes the MC-OPTIM output parameters.

_Table 1: MC-OPTIM output data details (preliminary version)._

<table> <tr> <th> **Output data** </th> <th> **Format** </th> </tr> <tr> <td> Spindle speed: N value for the best individual (between a given range: Nmax-Nmin) </td> <td> Boolean cast to Integer (16 bit) or double (32 bit) </td> </tr> <tr> <td> Feed Override: F for the best individual (between a given range: Fmax-Fmin) </td> <td> Integer (16 bit) </td> </tr> <tr> <td> Value for the Fitness function for the best individual </td> <td> double (32 bit) </td> </tr> <tr> <td> Value for the individual objective functions that compose the Fitness function </td> <td> double (32 bit) </td> </tr> <tr> <td> Pareto frontier or the output parameters for the last 10 best solutions </td> <td> double (32 bit) </td> </tr> <tr> <td> Time spent to get the solution </td> <td> double (32 bit) </td> </tr> <tr> <td> Data for a graphical display in 3D of the solution explored if the Fitness function is in 2 or 3 dimensions </td> <td> double (32 bit) </td> </tr> </table>

**3. DATA STORAGE AND MAINTENANCE**

**3.1 DATA STORAGE AND BACKUPS**

At least two copies of all data generated in the project will be kept at all times. To avoid loss of information in the event of fire or any other accident, the copies of the data will be kept at different locations. These can be the personal computers of work colleagues based in separate offices or, if available, institutional back-up servers.
When this is not possible, the data will be backed up on external hard drives or DVDs which will be kept at a different location and can be accessed by at least two different persons. USB pen drives are not considered a reliable backup storage medium because of their short life (1 year on average). Commercial cloud storage services may be used but will not be considered a primary backup medium.

Backups will be performed by each partner on the first working day of every month or whenever new significant results are achieved. The functionality of all data is to be checked at least once a year for any corrupted files or format updates. Data from sensor readings and surface measurements will be preserved in its original raw format and shape. Any further analysis is to be performed on a copy of the original files.

**3.2 FILE ORGANIZATION AND NAMING**

The following folder structure will be used:

`<institution>/<researcher name or initials>/<data identifier>/<date>`

If necessary, the _date_ folder can be broken down into further subfolders. When a short phrase cannot serve as a clear data identifier, a reference number will be used instead. However, in this case a separate _readme_ text document with a short description of each data set and its reference number will be created in the _researcher's name_ folder.

The YYYY-MM-DD date convention will be used for dates in folder and file names. This is the ISO 8601 standard and, when placed at the beginning of the folder/file name, items are automatically sorted chronologically. For example, the 1st of March 2016 will be written as 2016-03-01.
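The folder structure and ISO 8601 date convention above can be sketched in a few lines of code; `UoN`, `NV` and `vibr` below are placeholder values following the examples used in this section:

```python
from datetime import date

def dataset_folder(institution, researcher, identifier, day):
    """Build an <institution>/<researcher>/<data identifier>/<date>
    path, with the date rendered as ISO 8601 (YYYY-MM-DD)."""
    return "/".join([institution, researcher, identifier, day.isoformat()])

folders = [dataset_folder("UoN", "NV", "vibr", d)
           for d in (date(2016, 3, 1), date(2015, 12, 31), date(2016, 1, 2))]

# Because YYYY-MM-DD sorts lexicographically, a plain string sort
# is also a chronological sort.
folders.sort()
```

This is precisely why the DMP mandates ISO 8601 rather than, say, DD-MM-YYYY: an ordinary alphabetical file listing then comes out in date order with no extra tooling.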
The following naming system will be used for files:

## YYYY-MM-DD_project name_institution_researcher initials_measurement/simulation_number

For example, if there were a document containing tabular data representing vibration measurements taken by the author of this DMP on the 1st of March 2016, the file name of the first set for the day would be:

## 2016-03-01_UoN_NV_vibr_01.csv

# 3.3 ADDITIONAL NOTES

For every data set a separate text document with explanatory notes will be created and saved in the _date_ folder together with all other files. The notes will contain full details of how and under what conditions the information was collected or generated and an explanation of the file naming system. In the case of experimental data the document will contain information about the exact experimental set-up, the names of operators, references to specific samples, any abbreviations or physical units used in the data set and any other information which might be required to reproduce the results. When relevant, notes from personal and lab logs can be added too. In the case of data generated from simulation models, the exact input parameters and references to the simulation code will be included in the explanatory document.

# 4. DATA ARCHIVING AND SHARING

# 4.1 DATA REPOSITORY

All data collected, generated or processed in the project which has significant scientific value or is linked to a published work will be uploaded to a public data repository. The primary repository for the MC-SUITE project will be FigShare ( _https://figshare.com/_ ), a well-established provider of scientific data which is free to use. FigShare is one of the repositories recommended by the Nature journal and it has been chosen as an institutional repository by a large number of highly regarded universities and publishers.
FigShare has an easy-to-navigate user interface and, unlike many of its competitors, it does not impose restrictions on the file format of the documents deposited. The maximum size of a given dataset is 20 GB and the recommended limit for a single file is 5 GB. According to the service agreement, files on FigShare are retained for the lifetime of the repository or at least 10 years after publishing. MC-SUITE data will be stored under one of the Creative Commons licenses. For figures, media, posters, papers and file sets the CC-BY license will be used. CC-BY ensures the research will be openly available but requires that credit in the form of a citation is given when the work is used or referred to. In the case of complete databases or structured datasets with highly factual data the CC0 license will be used. CC0, like CC-BY, is an open license but does not legally require users of the data to cite the source. However, the moral obligation of attribution remains the same as in the case of any research journal paper citation. In the special case of sharing programming or computer simulation code, the MIT license will be used. The MIT license provides full open access to the code but also removes any liability of the authors in the event of any legal claim or damage caused by the use of the code. All data with potential commercial value will be embargoed for 12 months after the termination of the project to allow for patent filing. MC-SUITE will not be publishing any sensitive data, so no special anonymization procedures are needed. Data published on FigShare is automatically assigned a DataCite DOI. Additionally, there is an option of exporting a reference file to some of the most common reference managers like RefWorks, BibTeX, Endnote, DataCite, NLM, DC and RefMan.

# 4.2 DATA OWNERSHIP

Every project partner generating, collecting or processing data in the project will create their own FigShare account.
An MC-SUITE project space linking the individual accounts will also be created. Publishing of relevant data when available is the responsibility of the partner who created the data. It is important that accounts are managed and regarded as institutional rather than personal so that any enquiries related to the published data can be answered even after the end of the project. This means that at least two persons will have access to every FigShare account at any time during and after the project, and both of them will have equal administration rights. Partners will review their new data and publish significant results at least every 6 months, starting at M6 of the project.

# 4.3 METADATA FORMAT

Unless there is an established metadata schema for the type of data published in the project, the Dublin Core simple level schema will be used. This is a well-established general schema which can effectively cover a large variety of data. The elements of the schema are as follows: title, creator (authors), subject (e.g. Engineering), description, publisher, contributor, date, type (e.g. sound), format, identifier (DOI), source, language, relation, coverage (e.g. location), rights, funding. All these elements are covered when opening a FigShare account, so authors do not need to take any special actions when publishing.

# 5. CLOSING REMARKS

The DMP is a continuously evolving document which will be updated at the mid-term review and the end of the project. Updates will be reflected in the Open Research Data deliverable document due in September 2017. All activities regarding data publishing will be coordinated and supported by the University of Nottingham (UoN), which is also responsible for ensuring that the current DMP is followed by all partners. UoN will organise a workshop to familiarise all interested parties with FigShare.
# Introduction

The current document constitutes the interim version of the **Data Management Plan** (DMP) elaborated in the framework of the **INNO-4-AGRIFOOD** project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 681482. INNO-4-AGRIFOOD is aimed at fostering, supporting and stimulating **online collaboration for innovation** amongst **agri-food SMEs** across Europe. To this end, the project will enhance the service portfolio and practices of **innovation intermediaries and SME support networks** across Europe by providing them with a well-tailored blend of demand-driven **value propositions** , including:

* A **new generation of value added innovation support services** aimed at empowering their agri-food SME clients to capitalise on the full potential of online collaboration for innovation.
* A **suite of smart and platform-independent ICT tools** to support and optimise the delivery of the novel online collaboration for innovation support services.
* A **series of highly interactive and flexible e-training courses** that will equip them with the knowledge and skills required to successfully deliver these new services.

In addition to the above, the accumulated experience and lessons learned through INNO-4-AGRIFOOD will be translated into meaningful **guidelines** to be diffused across Europe so as to fuel the replication of its results and thus enable SMEs in other European sectors to tap into the promising potential of online collaboration for innovation as well. To this end, INNO-4-AGRIFOOD brings together and is implemented by a well-balanced and complementary **consortium** , which comprises **7 partners across 6 different European countries** , as presented in the following table.
## _Table 1: INNO-4-AGRIFOOD consortium partners_

<table> <tr> <th> **Partner** **No** </th> <th> **Partner Name** </th> <th> **Partner short name** </th> <th> **Country** </th> </tr> <tr> <td> 1 </td> <td> Q-PLAN INTERNATIONAL ADVISORS (Coordinator) </td> <td> Q-PLAN </td> <td> Greece </td> </tr> <tr> <td> 2 </td> <td> Agenzia per la Promozione della Ricerca Europea </td> <td> APRE </td> <td> Italy </td> </tr> <tr> <td> 3 </td> <td> IMP³rove – European Innovation Management Academy EWIV </td> <td> IMP³rove </td> <td> Germany </td> </tr> <tr> <td> 4 </td> <td> European Federation of Food Science and Technology </td> <td> EFFoST </td> <td> Netherlands </td> </tr> <tr> <td> 5 </td> <td> BioSense Institute </td> <td> BIOS </td> <td> Serbia </td> </tr> <tr> <td> 6 </td> <td> National Documentation Centre </td> <td> EKT/NHRF </td> <td> Greece </td> </tr> <tr> <td> 7 </td> <td> Europa Media Szolgaltato Nonprofit Kozhasznu KFT </td> <td> EM </td> <td> Hungary </td> </tr> </table>

In this context, the **interim version of the DMP** presents the data management principles set forth in the framework of INNO-4-AGRIFOOD by its consortium partners (Chapter 2). Moreover, it builds upon the initial version of the DMP and provides an updated list of the datasets that have been or will be collected, processed and/or produced during the project along with an up-to-date description for each one (Chapter 3), addressing crucial aspects pertaining to their management and taking into account the “ _Guidelines on Data Management in Horizon 2020_ ” provided by the European Commission (EC).

**Important remark** The DMP is not a fixed document. On the contrary, it evolves during the lifespan of the project. In particular, the DMP will be updated at least once more during INNO-4-AGRIFOOD (i.e.
as D7.4 at M30) as well as ad hoc when deemed necessary in order to include new datasets or reflect changes in the already identified datasets, in consortium policies and plans, or in other potential external factors. Q-PLAN is responsible for the elaboration of the DMP and will update and enrich it, when required, with the support of all relevant members of the INNO-4-AGRIFOOD consortium.

# Data management principles

## Standards and metadata

Any open datasets produced by INNO-4-AGRIFOOD will be accompanied by data that will facilitate their understanding and re-use by interested stakeholders. These data may include basic details that will assist interested stakeholders in locating the dataset, including its format and file type as well as meaningful information about who created or contributed to the dataset, its name and reference, date of creation and the conditions under which it may be accessed. Complementary documentation may also encompass details on the methodology used to collect, process and/or generate the dataset, definitions of variables, vocabularies and units of measurement as well as any assumptions made. Finally, wherever possible, consortium partners will identify and use existing standards.

## Data sharing

The Coordinator (Q-PLAN), in collaboration with the respective Work Package Leaders of the project, will determine how the data collected and produced in the framework of INNO-4-AGRIFOOD will be shared. This includes the definition of access procedures as well as potential embargo periods along with any necessary software and/or other tools which may be required for data sharing and re-use. In case a dataset cannot be shared, the reasons for this will be clearly stated (e.g. ethical, personal data protection, intellectual property, commercial, privacy-related or security-related reasons). Consent will be requested from all external data providers to allow their data to be shared, and all such data will be anonymised before sharing.
## Data archiving and preservation

The datasets of the project which will be open for sharing will be deposited to an open data repository and made accessible to all interested stakeholders, ensuring their long-term preservation and accessibility beyond the lifetime of the project. In fact, we consider the use of _Zenodo_ as one of the best online and open services to enable open access to INNO-4-AGRIFOOD datasets. In this respect, the Coordinator (Q-PLAN) will be responsible for uploading all open datasets to the repository of choice, while all partners will be responsible for disseminating them through their professional networks and other communication channels.

## Ethical considerations

INNO-4-AGRIFOOD entails activities which involve the collection of meaningful data from selected individuals (e.g. the interview-based survey aimed at revealing the needs of agri-food SMEs in terms of online collaboration for innovation support, the online survey that shed light on the training needs of innovation intermediaries, etc.). The collection of data from participants in such activities will be based upon a process of informed consent. Any personal information will be handled according to the principles laid out by Directive 95/46/EC of the European Parliament and of the Council on the “Protection of individuals with regard to the processing of personal data and on the free movement of such data” (24 October 1995) and its revisions. The participants’ right to control their personal information will be respected at all times (including issues of confidentiality). The Coordinator (Q-PLAN) will regulate and deal with any ethical issues that may arise during the project in this respect, in cooperation with the Steering Committee of the project.

# Data management plan

## Overview

INNO-4-AGRIFOOD places special emphasis on the management of the valuable data that have been or will be collected, processed and generated throughout its activities.
In this respect, the table below provides a list of the datasets identified by INNO-4-AGRIFOOD consortium members, indicating the name of the dataset, its linked Work Package and the respective leading consortium member (i.e. Work Package Leader) as well as its status compared to the previous version of the DMP.

### Table 2: List of INNO-4-AGRIFOOD datasets

<table> <tr> <th> **No** </th> <th> **Dataset Name** </th> <th> **Linked** **Work** **Package** </th> <th> **Work** **Package** **Leader** </th> <th> **Status** </th> </tr> <tr> <td> 1 </td> <td> Analysis of the agri-food value chain </td> <td> WP1 </td> <td> BIOS </td> <td> Updated (M15) </td> </tr> <tr> <td> 2 </td> <td> Needs of agri-food SMEs in terms of online collaboration for innovation support </td> <td> WP1 </td> <td> BIOS </td> <td> Updated (M15) </td> </tr> <tr> <td> 3 </td> <td> Skills of innovation intermediaries in terms of supporting online collaboration for innovation </td> <td> WP1 </td> <td> BIOS </td> <td> Updated (M15) </td> </tr> <tr> <td> 4 </td> <td> Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – E-learning </td> <td> WP2 </td> <td> IMP³rove </td> <td> New entry 1 (M15) </td> </tr> <tr> <td> 5 </td> <td> Case-based training material supplemented by theoretical information on the topic </td> <td> WP2 </td> <td> IMP³rove </td> <td> \- </td> </tr> <tr> <td> 6 </td> <td> Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – Services and tools </td> <td> WP3 </td> <td> Q-PLAN </td> <td> New entry 1 (M15) </td> </tr> <tr> <td> 7 </td> <td> Pool of agri-food SMEs </td> <td> WP4 </td> <td> APRE </td> <td> Updated (M15) </td> </tr> <tr> <td> 8 </td> <td> Roster of specialists database </td> <td> WP4 </td> <td> APRE </td> <td> Updated (M15) </td> </tr> <tr> <td> 9 </td> <td> Service testing metrics </td> <td> WP4 </td> <td> APRE </td> <td> Updated (M15) </td> </tr> </table> <table> <tr> <th> 10 </th> <th> User data and learning curve of e-learning participants </th> <th> WP5 </th> <th> EM
</th> <th> New entry 2 (M15) </th> </tr> <tr> <td> 11 </td> <td> Feedback derived from e-learning participants </td> <td> WP5 </td> <td> EM </td> <td> New entry 2 (M15) </td> </tr> <tr> <td> 12 </td> <td> Awareness creation, dissemination and stakeholder engagement </td> <td> WP6 </td> <td> EFFoST </td> <td> Updated (M15) </td> </tr> </table>

With the identified datasets of INNO-4-AGRIFOOD in mind, the current section of the DMP provides meaningful information for each one, including:

* The name of the dataset.
* The type of study in the frame of which the dataset is produced.
* A concise description of the dataset.
* The methodology and tools employed for collecting/generating the data.
* The format and volume of the dataset.
* Any standards that will be used (if applicable) and/or metadata to be created.
* Potential stakeholders for whom the data may prove useful.
* Provisions regarding the confidentiality of the data.

**Important remark** The information provided within this section reflects the current views and plans of INNO-4-AGRIFOOD consortium partners at this stage of the project (M15) and may be adapted and/or updated in future versions of the DMP (e.g. through the inclusion of more elaborate descriptions of the datasets, standards and metadata, how the datasets may be preserved, accessed and re-used in the long-term, etc.). The template employed for collecting the information from project partners is annexed to this document.

## Analysis of the agri-food value chain

<table> <tr> <th> **Dataset name** </th> <th> Analysis of the agri-food value chain. </th> </tr> <tr> <td> **Type of study** </td> <td> Agri-food value chain analysis aimed at revealing the primary value chain areas and SME actors to be targeted by the project based on both secondary and primary research.
</td> </tr> <tr> <td> **Dataset description** </td> <td> Data derived from interviews with members of the Advisory and Beneficiaries boards of INNO-4-AGRIFOOD, providing their opinions about the needs of SMEs with respect to innovation support and the opportunities for online collaboration for innovation. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> A semi-structured questionnaire was employed in order to collect qualitative data during the interviews. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset is stored within a .zip file which comprises 7 distinct documents stored in .docx format. The total size of the (uncompressed) dataset is 1.12 MB. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Each document of the dataset is accompanied by descriptive metadata including title, author and keywords. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset provided INNO-4-AGRIFOOD consortium members with valuable information from the perspective of agri-food stakeholders, fuelling and complementing the agri-food value chain analysis conducted in the context of the project. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The outcomes of the study that produced the dataset have been published through the _Agri-food Value Chain Analysis Report_ , already available at the web portal of the project. The dataset itself, used only in the context of the project, is not intended for sharing and/or re-use, with a view to safeguarding the privacy of interviewees. With that in mind, the dataset is now archived at the private server of the Coordinator (Q-PLAN) and will be preserved for at least 5 years following the end of the project, before eventually being deleted.
</td> </tr> </table>

## Needs of agri-food SMEs in terms of online collaboration for innovation support

<table> <tr> <th> **Dataset name** </th> <th> Needs of agri-food SMEs in terms of online collaboration for innovation support. </th> </tr> <tr> <td> **Type of study** </td> <td> Interview-based survey of representatives of agri-food SMEs as well as innovation intermediaries aimed at revealing the needs, level of readiness and profiles of agri-food SMEs in terms of online collaboration for innovation. </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset contains the responses (mostly qualitative) provided by interviewees who participated in the study, addressing different aspects of the current situation in the EU with respect to online collaboration for innovation amongst SMEs in the agri-food sector as well as diverse topics relevant to collaborating for innovation by employing online means (e.g. specific attributes of platforms and tools needed for online collaboration, support that SMEs may seek or need in this respect, etc.). </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> The collection of the data was realised through a semi-structured questionnaire administered to survey participants in the frame of interviews. An online (web) form was employed by interviewers in order to submit a record to the dataset. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset is stored in spreadsheet (.xls) and .pdf formats, both of which contain the 52 replies derived from the interview-based survey. The size of the dataset in .xls format is 0.17MB, while in .pdf it is 0.76MB. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Descriptive metadata (i.e. title, author and keywords) have been created to accompany the dataset.
</td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The insights derived from the analysis of the data have been key in the process of co-creating and developing the novel services and tools of INNO-4-AGRIFOOD according to the needs of agri-food SMEs and their innovation intermediaries. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The findings and conclusions of the study based on the processing and analysis of the data within this dataset have been openly shared through the _Agri-food SME Profiling and Needs Analysis Report_ , which is published at the web portal of INNO-4-AGRIFOOD. The raw data collected through the interview-based survey will not be shared and/or re-used (outside the framework of the project and/or beyond its completion) to ensure the confidentiality of the interviewees and their responses. Hence, the dataset, currently archived at the private server of the Coordinator (Q-PLAN), shall be preserved for at least 5 years following the end of the project, before eventually being deleted. </td> </tr> </table>

## Skills of innovation intermediaries in terms of supporting online collaboration for innovation

<table> <tr> <th> **Dataset name** </th> <th> Skills of innovation intermediaries in terms of supporting online collaboration for innovation. </th> </tr> <tr> <td> **Type of study** </td> <td> Online survey of staff of innovation intermediaries and SME support networks aimed at assessing the current level of their knowledge and skills in providing support to the online collaboration for innovation endeavours of agri-food SMEs.
</td> </tr> <tr> <td> **Dataset description** </td> <td> The data collected comprise predominantly quantitative responses provided by the participants of the online survey, including demographic information as well as their perceived level of skills (gauged via a 5-point Likert scale) in different skill areas, including agri-food industry, support services, collaboration, innovation management and soft skills. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> A structured questionnaire was used in order to collect data. The questionnaire was self-administered and survey participants were able to access it online by following a dedicated link. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset is stored in standard spreadsheet format (.xlsx). In total, 79 respondents from the EU filled in and successfully submitted a questionnaire. The same number of records was collected and is now within the dataset. The size of the dataset is 53KB. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> The dataset is accompanied by descriptive metadata (i.e. title, author and keywords). </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset has been of great use to INNO-4-AGRIFOOD consortium members, enabling them to unearth the insight required to set the stage for the need-driven co-creation and development of the project’s e-learning curriculum and modules. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The _Skills Mapping and Training Needs Analysis Report_ , available at the web portal of the project, provides public access to the findings of the study in the frame of which this dataset has been produced. The records of the database are available only to selected members of the INNO-4-AGRIFOOD project team and are not intended for sharing and/or re-use, so as to ensure the privacy of the study’s participants.
The dataset itself is archived at the private server of the Coordinator (Q-PLAN) and will be preserved for at least 5 years following the end of the project, before eventually being deleted. </td> </tr> </table>

## Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – E-learning

<table> <tr> <th> **Dataset name** </th> <th> Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – E-learning. </th> </tr> <tr> <td> **Type of study** </td> <td> The INNO-4-AGRIFOOD Co-creation Workshop, which was held on the 15th of September 2017 in Amsterdam, the Netherlands, in order to co-create, along with stakeholders of the agri-food ecosystem, the e-learning offer of INNO-4-AGRIFOOD. </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset generated encompasses the feedback as well as the innovative concepts and ideas provided by participants of the INNO-4-AGRIFOOD Co-creation Workshop during the structured activities of the co-creative session dedicated to the e-learning offer of the project. The data are mostly textual (short sentences) and refer to (i) the appropriateness of the e-learning material developed at the time of the workshop and (ii) supplementary ideas for consideration in the process of developing the e-learning material of the project. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> In addition to the minutes recorded throughout the co-creation workshop, the participants’ input from the group discussions was tabulated for each of the draft e-learning modules (which were provided as background information) using pre-prepared templates. Comments of relevant consortium members on each module were added remotely after the event. Conclusions were then drawn on the content and weighting of elements within each module.
</td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The data collected have been integrated within the report on the _Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop: Curriculum concept and key training topics_ . The report is stored in .pdf format and its size is 1.08MB. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> The report in which the dataset has been integrated includes meaningful information with respect to the context in which the data have been collected as well as the methodology for collecting them. The report itself is accompanied by basic descriptive metadata including title and type of file. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Innovation support service designers and providers as well as relevant trainers and educators would find the dataset most useful, especially those who operate within the agri-food sector or are interested to do so. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The dataset has been openly shared through the public report on the _Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop: Curriculum concept and key training topics_ , which is available at the INNO-4-AGRIFOOD web portal. </td> </tr> </table>

## Case-based training material supplemented by theoretical information on the topic

<table> <tr> <th> **Dataset name** </th> <th> Case-based training material supplemented by theoretical information on the topic. </th> </tr> <tr> <td> **Type of study** </td> <td> Development of educative case studies based on the services provided in the framework of INNO-4-AGRIFOOD blended with theoretical training building upon existing material available to partners either from previous work or from open sources. </td> </tr> <tr> <td> **Dataset description** </td> <td> The data collected will be simple responses (plain text in English) provided in the frame of interviews.
</td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Data required for the development of the case studies will be collected with the help of semi-structured questionnaires administered during interviews by project partners. Additional data will be gathered from the existing knowledge base of project partners (e.g. previous project documentations, previous service provision documentations, etc.) and/or from OER repositories as well as other third-party secondary data sources. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> Data collected during the interviews conducted in the framework of case study development will be stored in document format (e.g. .doc). The case studies stemming from the interviews will be preserved in .pdf format. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> All e-learning material developed based on the case studies will be SCORM compliant to enable its packaging and facilitate the re-use of the learning objects. The Articulate software, which has been used to create the e-learning material of the project, can generate the Content Aggregation Metadata File required. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset would be quite useful for innovation intermediaries and consultants as well as educators who would use this case-based e-learning material in their own activities. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The e-learning material will be openly available to all interested stakeholders, protected with an appropriate Creative Commons licence (to be decided in due course at a later stage of the project, in consultation with the Advisory Board of the project). </td> </tr> </table>

## Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – Services and tools

<table> <tr> <th> **Dataset name** </th> <th> Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – Services and tools.
</th> </tr> <tr> <td> **Type of study** </td> <td> The INNO-4-AGRIFOOD Co-creation Workshop, which was held on the 15th of September 2017 in Amsterdam, the Netherlands, with a view to co-creating innovative ideas and designs for the innovation support services and smart tools of the project, building upon the valuable contribution of diverse agri-food stakeholders. </td> </tr> <tr> <td> **Dataset description** </td> <td> The data include innovative concepts and ideas provided by the participants of the workshop’s co-creative session that focused on the innovation support services and smart tools of INNO-4-AGRIFOOD. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> The data were collected during the INNO-4-AGRIFOOD Co-creation Workshop and documented as transcript notes. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The data have been integrated into the report on the _Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop: Innovation support services and ICT tools_ , which is stored in .pdf format. The size of the file is 5.53MB. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> The report in which the data have been incorporated provides insights into the objectives and methodology of the INNO-4-AGRIFOOD Co-creation Workshop, elaborates on the outcomes of its session on the services and tools and translates the aforementioned outcomes into meaningful conclusions and key potential characteristics for innovation support services and tools. Basic descriptive metadata are provided along with the report (i.e. title and type of file). </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset has contributed significantly to developing the services, smart tools and e-learning modules of the project in line with the needs and preferences of agri-food stakeholders in the context of INNO-4-AGRIFOOD.
Beyond the context of the project, innovation support service designers and providers as well as ICT application developers and training providers could potentially find the dataset and its accompanying report useful. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The report on the _Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop: Innovation support services and ICT tools_ , which includes the dataset, is already published and available at the web portal of the project. </td> </tr> </table>

## Pool of agri-food SMEs

<table> <tr> <th> **Dataset name** </th> <th> Pool of agri-food SMEs. </th> </tr> <tr> <td> **Type of study** </td> <td> Deployment of INNO-4-AGRIFOOD services and tools in real-life contexts. </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset consists of 2 separate lists of agri-food SMEs which may be interested in benefiting from the innovation support services of the project in the framework of its 3 iterative testing, validation and fine-tuning rounds. The 1st list includes SMEs which are either clients or among the professional network of INNO-4-AGRIFOOD consortium partners, and to which services may be delivered by these partners. The 2nd list includes SMEs who have been identified through other channels (e.g. through the INNO-4-AGRIFOOD’s Beneficiaries and Advisory Boards, the online contact form of the project’s web portal, etc.), and to which services may be delivered by external innovation consultants.
</td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> In addition to the professional networks of INNO-4-AGRIFOOD (1st list of the dataset), several sources have been employed to identify suitable SMEs to participate in the real-life deployment of the novel services and tools of the project, including (among others) networks and member organisations of INNO-4-AGRIFOOD’s Advisory and Beneficiaries Boards as well as interested SMEs which participated in the surveys launched in the context of the project or expressed their interest through the online contact form of its web portal (2nd list of the dataset). </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> A spreadsheet (in .xlsx format) with two separate tabs (one for each of the two lists described above) is used to store the Pool of agri-food SMEs, which, by the end of the project, will contain at least 125 records. The complete dataset will contain the following data for each recorded SME: (i) Name of the SME; (ii) Contact person; (iii) Country; (iv) City; (v) Telephone number; and (vi) E-mail address. In the case of the 1st list, information about the INNO-4-AGRIFOOD consortium partner connected to a recorded SME will also be included. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Descriptive and structural metadata may be created and provided at a later stage of the project. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset would be most useful for consortium partners during the real-life deployment activities of the project. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The dataset will be available only to relevant INNO-4-AGRIFOOD consortium partners and may not be disclosed or used for purposes outside the framework of the project, unless otherwise allowed by either the project partner or the external stakeholder that has provided the respective data. 
</td> </tr> </table>

## Roster of specialists database

<table> <tr> <th> **Dataset name** </th> <th> Roster of specialists database. </th> </tr> <tr> <td> **Type of study** </td> <td> Involvement of trained staff of innovation intermediaries and SME support networks in the deployment of INNO-4-AGRIFOOD services and tools in real-life contexts. </td> </tr> <tr> <td> **Dataset description** </td> <td> Pool of appropriately qualified SME consultants who may participate in the testing of the project’s services and tools by providing them to agri-food SMEs. The Roster of Specialists Database (RSD) will encompass valuable information about the recorded consultants, such as demographics and contact details of the consultants and their affiliated organisations, data about their progress towards completing the project’s e-learning offer and providing its services, as well as miscellaneous data that can help INNO-4-AGRIFOOD consortium members to better match them with appropriate agri-food SMEs to service. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> The RSD will be populated with consultants who will have successfully completed the INNO-4-AGRIFOOD e-learning courses addressing the project’s services, participated in the project’s 1st webinar and/or have been personally trained by a designated INNO-4-AGRIFOOD Coach. The database will be enriched as the real-life deployment activities of INNO-4-AGRIFOOD progress and the staff of innovation intermediaries and SME support networks gain experience in the project’s services and e-learning modules. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The Roster of Specialists Database is stored in a standard spreadsheet format and by the end of the project will comprise at least 50 records of SME consultants across the EU. 
</td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Descriptive and structural metadata may be created to accompany the dataset at a later stage of the project. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Agri-food SMEs who would like to receive support from innovation consultants specialised in supporting online collaboration for innovation. </td> </tr> <tr> <td> **Confidentiality** </td> <td> Records in the database will remain for internal use only during the lifecycle of the project. </td> </tr> </table>

## Service testing metrics

<table> <tr> <th> **Dataset name** </th> <th> Service testing metrics. </th> </tr> <tr> <td> **Type of study** </td> <td> Testing, validation and fine-tuning of the INNO-4-AGRIFOOD services and smart tools. </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset includes data collected during the iterative testing, validation and fine-tuning of the INNO-4-AGRIFOOD services and smart tools, aimed at managing ambiguity during the various iterations as well as measuring the impact of improvements after each iteration. In particular, it contains both qualitative and quantitative data on (i) the satisfaction of SMEs that received INNO-4-AGRIFOOD services, (ii) the satisfaction of SMEs and innovation consultants that have used the INNO-4-AGRIFOOD smart tools, (iii) the impact of the INNO-4-AGRIFOOD services on the business of the SMEs that received them, (iv) the activities performed in the framework of each INNO-4-AGRIFOOD service provided in the context of the project, and (v) different aspects of the services and smart tools that can be further streamlined according to users’ needs and expectations. 
</td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> In line with the _INNO-4-AGRIFOOD Metrics Model_, this dataset will be fuelled by the respective surveys that will run over the 3 real-life deployment rounds of the project’s services and smart tools as well as by the service stories that will be produced under this framework. All surveys will employ questionnaire-based tools aimed at collecting both qualitative and quantitative data from agri-food SMEs and innovation consultants. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset will employ a typical spreadsheet format. The volume of the dataset’s final version will depend on the number of users that will participate in the real-life deployment of the INNO-4-AGRIFOOD services and smart tools. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Descriptive metadata will be provided (such as title, abstract, author, type of data, data collection method and keywords). </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Innovation support service designers and providers may find use in this dataset. </td> </tr> <tr> <td> **Confidentiality** </td> <td> All records will be open, provided that the users who supplied the data have given the necessary consent and any required anonymisation has been performed. </td> </tr> </table>

## User data and learning curve of e-learning participants

<table> <tr> <th> **Dataset name** </th> <th> User data and learning curve of e-learning participants. </th> </tr> <tr> <td> **Type of study** </td> <td> Provision of e-training courses to staff of innovation intermediaries and SME support networks. </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset contains demographic data of the people who have registered on the e-learning platform of INNO-4-AGRIFOOD and their affiliated organisations, along with data reflecting their e-learning progress. 
</td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Data are provided voluntarily by the individuals who register on the INNO-4-AGRIFOOD e-learning platform through a dedicated online form which aims at creating the profile necessary for their registration. Moreover, the e-learning platform automatically collects all necessary data about the online activities of the participants who access the system via a unique username-password combination. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset is stored in a MySQL database and exported to standard spreadsheet format (.csv or other). At least 150 registered participants are expected to be recorded within the dataset by the end of the project. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> The dataset is not intended for sharing and re-use and thus will not be accompanied by metadata. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset will be used by selected INNO-4-AGRIFOOD consortium members to analyse the learning behaviour of the e-learning participants in the frame of the project. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The data of e-learning participants will be confidential and used only in the context of the project. With that in mind, the administrators of the e-learning platform will have access to the data provided by e-learning participants, apart from their password information (which will be known only to the e-learning participants themselves). E-learning participants can configure their profile indicating the open data they would like to share. Still, the data of the participants’ learning curve (e.g. statistics on accessing the e-learning, following existing material, concluding tests, etc.) will likewise be accessible only to the administrators of the e-learning platform. 
Any meaningful analysis or conclusions drawn from these data will be shared in relevant upcoming reports that will be produced by the project. </td> </tr> </table>

## Feedback derived from e-learning participants

<table> <tr> <th> **Dataset name** </th> <th> Feedback derived from e-learning participants. </th> </tr> <tr> <td> **Type of study** </td> <td> Testing, validation and fine-tuning of the e-learning environment. </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset includes feedback on technical and content-wise aspects of the e-learning environment of INNO-4-AGRIFOOD (including the e-learning platform as well as its constituent e-learning modules), gathered from e-learning participants with a view to evaluating its functionalities, graphics and content. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Data will be provided voluntarily by e-learning participants of INNO-4-AGRIFOOD via dedicated questionnaire-based feedback forms. The questionnaires utilised by the feedback forms employ the Likert scale (1 - Strongly Disagree to 5 - Strongly Agree) so that participants can quickly provide their opinion on the functionalities and content of the different e-learning modules as well as the platform as a whole. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset is stored in a MySQL database and can be exported to standard spreadsheet format (.csv or other). The volume of the final dataset will depend on the number of people who will have provided their feedback on the e-learning platform and/or its modules by the end of the project. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> The dataset is not intended for sharing and re-use and thus will not be accompanied by metadata. 
</td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset will be used by selected INNO-4-AGRIFOOD consortium members to analyse the user experience of the e-learning environment and thus provide the basis for further improvement in future iterations in the context of the project. </td> </tr> <tr> <td> **Confidentiality** </td> <td> In order to ensure the privacy of the participants who provided their feedback, the records of the database will remain confidential. The administrators of the e-learning platform will have access to the feedback provided; however, they will not disclose that information to third parties. </td> </tr> </table>

## Awareness creation, dissemination and stakeholder engagement

<table> <tr> <th> **Dataset name** </th> <th> Awareness creation, dissemination and stakeholder engagement. </th> </tr> <tr> <td> **Type of study** </td> <td> Assessment of the results and impact of the awareness creation, dissemination and stakeholder engagement activities of the project employing an indicator-based framework. </td> </tr> <tr> <td> **Dataset description** </td> <td> Data collected during INNO-4-AGRIFOOD with a view to measuring and assessing the performance and results of the project in terms of awareness creation, dissemination and stakeholder engagement. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Primary data are being collected through the dissemination activity reports of project partners regarding media products, events, external events, general publicity, etc. Third-party tools are being employed as well (e.g. Google Analytics, social media statistics, etc.). </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The collected data are preserved in a spreadsheet format (.xlsx). </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Descriptive metadata will be provided (such as title, type of data, data collection method and keywords). 
</td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset would be meaningful to the European Commission as well as researchers who study relevant aspects of EU-funded projects. </td> </tr> <tr> <td> **Confidentiality** </td> <td> All records will be openly shared. </td> </tr> </table>
# Introduction

The current document constitutes the initial version of the **Data Management Plan** (DMP) elaborated in the framework of the **INNO-4-AGRIFOOD** project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 681482. INNO-4-AGRIFOOD is aimed at fostering, supporting and stimulating **online collaboration for innovation** amongst **agri-food SMEs** across Europe. To this end, the project will enhance the service portfolio and practices of **innovation intermediaries and SME support networks** across Europe by providing them with a well-tailored blend of demand-driven **value propositions**, including:

* A **new generation of value added innovation support services** aimed at empowering their agri-food SME clients to capitalise on the full potential of online collaboration for innovation;
* A **suite of smart and platform-independent ICT tools** to support and optimise the delivery of the novel online collaboration for innovation support services; and
* A **series of highly interactive and flexible e-training courses** that will equip them with the knowledge and skills required to successfully deliver these new services.

In addition to the above, the accumulated experience and lessons learned through INNO-4-AGRIFOOD will be translated into meaningful **guidelines** to be diffused across Europe so as to fuel the replication of its results and thus enable SMEs in other European sectors to tap into the promising potential of online collaboration for innovation as well. INNO-4-AGRIFOOD is implemented by a well-balanced and complementary **consortium**, which comprises 7 partners across 6 different European countries, as presented in the following table. 
## _Table 1: INNO-4-AGRIFOOD consortium partners_

<table> <tr> <th> **Partner** **No** </th> <th> **Partner Name** </th> <th> **Partner short name** </th> <th> **Country** </th> </tr> <tr> <td> 1 </td> <td> Q-PLAN International Advisors Ltd (Coordinator) </td> <td> Q-PLAN </td> <td> Greece </td> </tr> <tr> <td> 2 </td> <td> Agenzia per la Promozione della Ricerca Europea </td> <td> APRE </td> <td> Italy </td> </tr> <tr> <td> 3 </td> <td> IMP³rove – European Innovation Management Academy EWIV </td> <td> IMP³rove </td> <td> Germany </td> </tr> <tr> <td> 4 </td> <td> European Federation of Food Science and Technology </td> <td> EFFoST </td> <td> Netherlands </td> </tr> <tr> <td> 5 </td> <td> BioSense Institute </td> <td> BIOS </td> <td> Serbia </td> </tr> <tr> <td> 6 </td> <td> National Documentation Centre </td> <td> EKT/NHRF </td> <td> Greece </td> </tr> <tr> <td> 7 </td> <td> Europa Media Szolgaltato Nonprofit Kozhasznu KFT </td> <td> EM </td> <td> Hungary </td> </tr> </table>

In this context, the **initial version of the DMP** presents the data management principles set forth in the framework of INNO-4-AGRIFOOD by its consortium partners. Moreover, it provides a first description of the datasets that will be collected, processed and/or produced during the project, taking into account the “Guidelines on Data Management in Horizon 2020” provided by the European Commission 1 . In this respect, it addresses important relevant points such as standards and metadata as well as outlines a provisional approach to data sharing, archiving and preservation on a dataset-by-dataset basis.

**Important remark**

The DMP is not a fixed document. On the contrary, it will evolve during the lifespan of the project. In particular, the DMP will be updated at least twice during INNO-4-AGRIFOOD (i.e. 
as D7.3 at M15 and D7.4 at M30) as well as ad hoc when deemed necessary in order to include new datasets or reflect changes in the already identified datasets, in consortium policies and plans, or in other external factors. Q-PLAN is responsible for the elaboration of the DMP and, with the support of all Work Package Leaders, will update and enrich it when required.

# Data management principles

## Standards and metadata

All datasets produced by INNO-4-AGRIFOOD will be accompanied by data that will facilitate their understanding and re-use by interested stakeholders. These data may include basic details that will assist interested stakeholders to locate the dataset, including its format and file type as well as meaningful information about who created or contributed to the dataset, its name and reference, date of creation and under what conditions it may be accessed. Complementary documentation may also encompass details on the methodology used to collect, process and/or generate the dataset, definitions of variables, vocabularies and units of measurement as well as any assumptions made. Finally, wherever possible, consortium partners will identify and use existing community standards.

## Data sharing

The Coordinator (Q-PLAN), in collaboration with the respective Work Package Leaders, will determine how the data collected and produced in the framework of INNO-4-AGRIFOOD will be shared. This includes the definition of access procedures as well as potential embargo periods along with any necessary software and/or other tools which may be required for data sharing and re-use. In case a dataset cannot be shared, the reasons for this will be clearly stated (e.g. ethical, personal data protection, intellectual property, commercial, privacy-related or security-related reasons). Consent will be requested from all external data providers in order to allow for their data to be shared, and all such data will be anonymised before sharing. 
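The anonymisation of externally provided records before sharing could, for instance, proceed along the following lines. This is only an illustrative sketch, not the project's actual procedure; the record fields (`name`, `email`, `organisation`, `answers`) and the salt value are hypothetical.

```python
import hashlib

# Hypothetical secret salt, to be kept only by the data controller (illustrative value).
SALT = "example-secret-salt"

def anonymise_record(record):
    """Return a copy of the record with direct identifiers replaced by a pseudonym."""
    shared = dict(record)
    # A salted hash yields a stable pseudonym without exposing the name itself,
    # so repeated records from the same respondent can still be linked.
    shared["respondent_id"] = hashlib.sha256(
        (SALT + record["name"]).encode("utf-8")
    ).hexdigest()[:12]
    # Drop fields that identify the data provider directly before sharing.
    for field in ("name", "email", "organisation"):
        shared.pop(field, None)
    return shared

# Example: only the survey responses and the pseudonym remain in the shared copy.
record = {"name": "Jane Doe", "email": "jane@example.org",
          "organisation": "Example SME", "answers": {"q1": 4, "q2": 5}}
print(anonymise_record(record))
```

Note that salted hashing is pseudonymisation rather than full anonymisation; fully open records would additionally require removing or generalising any indirectly identifying answers.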
## Data archiving and preservation

The datasets of the project which will be open for sharing will be deposited to an open data repository and will be made accessible to all interested stakeholders, ensuring their long-term preservation and accessibility beyond the lifetime of the project. At the moment, we consider the use of Zenodo 2 as one of the best online and open services to enable open access to INNO-4-AGRIFOOD datasets, but similar repositories will also be considered and an appropriate decision will be made in due time at a future update of the DMP. Q-PLAN will be responsible for uploading all open datasets to the repository of choice, while all partners will be responsible for disseminating them through their professional networks and other media channels.

## Ethical considerations

INNO-4-AGRIFOOD entails activities which involve the collection of meaningful data from selected individuals (e.g. the interview-based survey aimed at revealing the needs of agri-food SMEs in terms of online collaboration for innovation support, the online survey that will shed light on the training needs of innovation intermediaries in this respect, the co-creation workshop, etc.). The collection of data from participants in these activities will be based upon a process of informed consent. Any personal information will be handled according to the principles laid out by the Directive 95/46/EC of the European Parliament and of the Council on the “Protection of individuals with regard to the processing of personal data and on the free movement of such data” (24 October 1995) and its revisions. The participants’ right to control their personal information will be respected at all times (including issues of confidentiality). The Coordinator (Q-PLAN) will regulate and deal with any ethical issues that may arise during the project in this respect, in cooperation with the Steering Committee of the project. 
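A common complement to depositing datasets in a repository such as Zenodo is to record a checksum for each file, so that fixity can be verified after long-term storage. The sketch below illustrates this generic preservation practice; it is not a step prescribed by the DMP itself.

```python
import hashlib
from pathlib import Path

def file_sha256(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(directory):
    """Map each file in a dataset directory to its checksum for later fixity checks."""
    return {path.name: file_sha256(path)
            for path in sorted(Path(directory).iterdir()) if path.is_file()}
```

Re-running `build_manifest` after retrieval and comparing the two manifests reveals any silent corruption of the archived files.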
# Data management plan

In alignment with the paradigm shift towards openness, INNO-4-AGRIFOOD participates in the Horizon 2020 Open Research Data Pilot and places special emphasis on the management of the valuable data that will be collected, processed and generated throughout its activities. The current section provides meaningful information in this respect for each identified dataset, including:

* The name of the dataset.
* The type of study in the frame of which the dataset is produced.
* A concise description of the dataset.
* The methodology and tools employed for collecting/generating the data.
* The format and volume of the dataset.
* Any standards that will be used (if applicable) as well as metadata to be created.
* Potential external stakeholders for whom the data may prove useful.
* Provisions regarding the confidentiality of the data.

**Important remark**

The information provided within this section reflects the current views and plans of INNO-4-AGRIFOOD consortium partners at this early stage of the project (M3) and may be adapted or updated in future versions of the DMP (e.g. through the inclusion of more elaborate descriptions of the datasets, standards and metadata, how the datasets may be preserved, accessed and re-used in the long term, etc.). The template employed for collecting the information from project partners is annexed to this document.

## Analysis of the agri-food value chain

<table> <tr> <th> **Dataset name** </th> <th> Analysis of the agri-food value chain (WP1) </th> </tr> <tr> <td> **Type of study** </td> <td> Agri-food value chain analysis aimed at revealing the primary value chain areas and SME actors to be targeted by the project based on both secondary and primary research. </td> </tr> <tr> <td> **Dataset description** </td> <td> Data derived from interviews with members of the Advisory and Beneficiaries boards of INNO-4-AGRIFOOD. 
</td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> A semi-structured questionnaire and a respective template will be employed in order to collect and document data during the interviews. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> Standard spreadsheet (.csv or other) and document (e.g. .doc) formats will be used in order to store and preserve the dataset. The report on the agri-food value chain analysis will accompany the dataset in pdf format. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Data records will be accompanied by the following metadata: a) the name of the person that generated it, b) information about the location where it was generated, c) reproduction rights and d) annotations made by the interviewers. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset may prove meaningful for a broad array of stakeholders of the Agri-food Ecosystem, including agri-food SMEs along with their innovation intermediaries and providers as well as policy makers at regional, national and EU level. </td> </tr> <tr> <td> **Confidentiality** </td> <td> All records will be open, provided that the corresponding interviewees have provided their consent in this respect within the frame of the interviews. </td> </tr> </table>

## Needs of agri-food SMEs in terms of on-line collaboration for innovation support

<table> <tr> <th> **Dataset name** </th> <th> Needs of agri-food SMEs in terms of on-line collaboration for innovation support (WP1) </th> </tr> <tr> <td> **Type of study** </td> <td> Interview-based survey of representatives of agri-food SMEs as well as innovation intermediaries aimed at revealing the needs, level of readiness and profiles of agri-food SMEs in terms of online collaboration for innovation. </td> </tr> <tr> <td> **Dataset description** </td> <td> The data collected will be responses (plain text in English and/or numbers) provided by the interviewees. 
</td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> The collection of the data will be realised through a semi-structured questionnaire that will be administered to survey participants in the frame of interviews. An online (web) form will be employed for interviewers to submit a record to the dataset. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset will be stored in a typical spreadsheet format (.csv or other). It is estimated that more than 50 interviews will be conducted across Europe with agri-food SMEs and innovation intermediaries, and the same number of records will be collected and stored in the dataset. The report on the findings of the interview-based survey, based on the analysis of the data, will complement the dataset and will be stored in pdf format. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> The following metadata will be created for each record: a) the name of the person that generated it, b) information about the location where it was generated, c) reproduction rights and d) annotations made by the interviewers. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> In addition to agri-food SMEs, the dataset would also be quite useful for innovation intermediaries and consultants that focus on agri-food SMEs, technology and innovation providers that operate within the agri-food sector, online collaboration networks and platform providers, academics and researchers in relevant fields (e.g. SME collaboration, open innovation, etc.) as well as policy makers at regional, national and EU level who design and implement innovation support policies for SMEs. </td> </tr> <tr> <td> **Confidentiality** </td> <td> All records will be open. Consent will be requested from interviewees so that their data may be anonymised and re-used. 
</td> </tr> </table>

## Skills of innovation intermediaries in terms of supporting on-line collaboration for innovation

<table> <tr> <th> **Dataset name** </th> <th> Skills of innovation intermediaries in terms of supporting on-line collaboration for innovation (WP1) </th> </tr> <tr> <td> **Type of study** </td> <td> Online survey of staff of innovation intermediaries and SME support networks aimed at assessing the current level of their knowledge and skills in providing support to the online collaboration for innovation endeavours of agri-food SMEs. </td> </tr> <tr> <td> **Dataset description** </td> <td> The data collected will be both qualitative and quantitative responses provided by survey participants. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> A structured questionnaire will be used in order to collect data. The questionnaire will be self-administered and survey participants will be able to access it online by following a dedicated link. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset will be stored in standard spreadsheet format (.csv or other). At least 50 filled in and successfully submitted questionnaires are expected. The same number of records will be collected and stored in the dataset. The report presenting the analysis of the data and the identified skill gaps and training needs of innovation intermediaries will supplement the dataset and will be preserved in pdf format. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Each data record will be accompanied by metadata on a) a mandatory basis (e.g. the country in which the respondent is based; main business activities; type of key clients) and b) a voluntary basis (e.g. the name of the respondent, the name of the organisation the respondent works for, the respondent’s e-mail address, the firm’s website). 
</td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Policy makers with an interest in the Agri-food Ecosystem as well as relevant training designers and providers may also find this dataset useful. </td> </tr> <tr> <td> **Confidentiality** </td> <td> Database records will be available only to selected members of the INNO-4-AGRIFOOD project team in order to safeguard the confidentiality of participants. </td> </tr> </table>

## Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop

<table> <tr> <th> **Dataset name** </th> <th> Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop (WP2 & WP3) </th> </tr> <tr> <td> **Type of study** </td> <td> A co-creation workshop will be held in the Netherlands in order to co-create, along with project stakeholders, the main value propositions of INNO-4-AGRIFOOD (i.e. services, tools and e-learning curriculum). </td> </tr> <tr> <td> **Dataset description** </td> <td> The dataset will include feedback, innovative concepts and ideas provided by workshop participants along with their contact details. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Data will be collected by the dedicated minute takers of the co-creation workshop and documented as transcript notes. The participants list will serve as the main source of data for contact details and other relevant background information. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset will be preserved in simple spreadsheet or document format. The two reports that will stem from the co-creation workshop will accompany the dataset and will be preserved in pdf format. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Structural and descriptive metadata may be created. 
</td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Innovation support service designers and providers, ICT application providers as well as relevant trainers and educators would find the dataset most useful, especially those who operate within the agri-food sector or are interested in doing so. </td> </tr> <tr> <td> **Confidentiality** </td> <td> Data will be open, provided that the person or partner who provided the information grants permission. </td> </tr> </table>

## Case-based training material supplemented by theoretical information on the topic

<table> <tr> <th> **Dataset name** </th> <th> Case-based training material supplemented by theoretical information on the topic (WP2) </th> </tr> <tr> <td> **Type of study** </td> <td> Development of educative case studies based on the services provided in the framework of INNO-4-AGRIFOOD, blended with theoretical training based on existing material available to partners either from previous work or from open sources. </td> </tr> <tr> <td> **Dataset description** </td> <td> The data collected will be simple responses (plain text in English) provided in the frame of interviews. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Data required for the development of the case studies will be collected with the help of semi-structured questionnaires administered during the interviews by project partners. Additional data will be gathered from the existing knowledge base of project partners (e.g. previous project documentations, previous service provision documentations, etc.) and/or from OER repositories as well as other third-party secondary data sources. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> Data collected during the interviews conducted in the framework of case study development will be stored in document format (e.g. .doc). The case studies stemming from the interviews will be preserved in pdf format. 
</td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> All e-learning material developed based on the case studies will be SCORM compliant to enable its packaging and facilitate the re-use of the learning objects. The Articulate software, which will be used to create the e-learning, can generate the Content Aggregation Metadata File required. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset would be quite useful for innovation intermediaries and consultants as well as educators who would use the e-learning material in their own activities. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The e-learning material will be openly available to all interested stakeholders, protected with an appropriate Creative Commons licence (to be decided at a later stage of the project). </td> </tr> </table> ## Pool of agri-food SMEs <table> <tr> <th> **Dataset name** </th> <th> Pool of agri-food SMEs (WP4) </th> </tr> <tr> <td> **Type of study** </td> <td> Deployment of INNO-4-AGRIFOOD services and tools in real-life contexts. </td> </tr> <tr> <td> **Dataset description** </td> <td> List of agri-food SMEs which may be interested in benefiting from the online collaboration for innovation support services of the project in the framework of its 3 iterative testing, validation and fine-tuning rounds. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> In addition to the professional networks of INNO-4-AGRIFOOD, several sources will be employed to identify suitable SMEs to participate in the deployment of the novel services and tools, including (among others) networks and member organisations of the Advisory and Beneficiaries boards of the project as well as interested SME participants in the surveys launched under WP1. Online sources may also be utilised to complement the dataset with useful information regarding the companies. 
</td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> A spreadsheet (e.g. using an .xlsx format) will be created to maintain the list of agrifood SMEs, which by the end of the project will contain at least 125 records. </td> </tr> <tr> <td> **Metadata and** **standards** </td> <td> Structural metadata may be provided. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset would be most useful for consortium partners during the activities of the project. </td> </tr> <tr> <td> **Confidentiality** </td> <td> The dataset will be available only to relevant INNO-4-AGRIFOOD partners and may not be disclosed or used for purposes outside the framework of the project, unless otherwise allowed by either the project partner or the external stakeholder that has provided the respective data. </td> </tr> </table> ## Roster of specialists database <table> <tr> <th> **Dataset name** </th> <th> Roster of Specialists Database (WP4) </th> </tr> <tr> <td> **Type of study** </td> <td> Involvement of trained staff of innovation intermediaries and SME support networks in the deployment of INNO-4-AGRIFOOD services and tools in real-life contexts. </td> </tr> <tr> <td> **Dataset description** </td> <td> Pool of SME consultants who have successfully completed the INNO-4-AGRIFOOD e-training courses and may participate in the testing of services and tools of the project by providing them to agri-food SMEs. The Roster of Specialists Database will encompass valuable information, such as contact details and areas of expertise of the specialists, (e.g. in terms of sector, industry, etc.). </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> The Roster of Specialists Database will be enriched as the activities of WP5 progress and an increasing number of staff of innovation intermediaries and SME support networks complete the e-learning of the project. 
Data on the specialists who will participate in the e-learning as well as on the degree to which they will have completed it will be collected/generated by the e-learning platform. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The Roster of Specialists Database will be preserved in a standard spreadsheet format and will comprise at least 50 SME consultants across the EU. </td> </tr> <tr> <td> **Metadata and standards** </td> <td> Structural metadata may be provided. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Agri-food SMEs who would like to receive support from innovation consultants specialised in supporting online collaboration for innovation. </td> </tr> <tr> <td> **Confidentiality** </td> <td> Records in the database will be open as long as the provider of the corresponding information has given the relevant permission. </td> </tr> </table> ## Service testing metrics <table> <tr> <th> **Dataset name** </th> <th> Service testing metrics (WP4) </th> </tr> <tr> <td> **Type of study** </td> <td> Testing, validation and fine-tuning of INNO-4-AGRIFOOD services. </td> </tr> <tr> <td> **Dataset description** </td> <td> Metrics collected during the iterative real-life deployment of INNO-4-AGRIFOOD services and tools, aimed at managing ambiguity during the various iterations as well as measuring the impact of improvements after each iteration. 
Indicative metrics that may be incorporated in the dedicated model of the project and fuel the dataset include: basic metrics related to the willingness of end-users to try the services, their level of satisfaction with them, etc.; tailored innovative performance metrics such as number of new on-line contacts, conversion ratio of new contacts to innovation partners, number of innovation projects launched with partners identified through INNO-4-AGRIFOOD services, etc.; as well as forward-looking metrics related to the estimation of potential future growth of the developed services. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> A semi-structured questionnaire will be employed to mine both qualitative and quantitative data from agri-food SME participants in the various deployment rounds of the project. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset will employ a typical spreadsheet format. </td> </tr> <tr> <td> **Metadata and standards** </td> <td> Structural and descriptive metadata may be provided. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> Innovation support service designers and providers may find use in this dataset. </td> </tr> <tr> <td> **Confidentiality** </td> <td> All records will be open after the necessary consent has been provided by interviewees and any required anonymization has been performed. </td> </tr> </table> ## Personal data, profile and learning curve of e-learning participants <table> <tr> <th> **Dataset name** </th> <th> Personal data, profile and learning curve of e-learning participants (WP5) </th> </tr> <tr> <td> **Type of study** </td> <td> Provision of e-training courses to staff of innovation intermediaries and SME support networks. 
</td> </tr> <tr> <td> **Dataset description** </td> <td> Personal data of e-learning participants, including statistics on accessing the system, duration of stay and movements of the learner on the portal. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Individuals who register for the INNO-4-AGRIFOOD e-learning will voluntarily share the personal data and profile needed for their registration. The e-learning platform will automatically collect all necessary data based on the visitors accessing the system using one username-password combination. </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> The dataset will be stored in a MySQL database that can be exported to standard spreadsheet format (.csv or other). At least 150 registered learners are expected. </td> </tr> <tr> <td> **Metadata and standards** </td> <td> Any potential standards to be used or metadata to be created will be determined at a later stage of the project. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset will be used to analyse the learning behaviour of the participants in the frame of the project and thus may be useful for educators and/or e-learning organisers targeting the same target group. </td> </tr> <tr> <td> **Confidentiality** </td> <td> Personal data of training participants will be confidential; the administrators will have access to their data, but not to password information. Training participants can configure their profile and the open data they want to share. Regarding the learning-curve-related data, the statistics on accessing the e-learning, viewing the existing material, completing tests, etc. will be available to the administrators of the e-learning portal. Any analysis or conclusions drawn from these data will be shared anonymously. 
</td> </tr> </table> ## Awareness creation, dissemination and stakeholder engagement <table> <tr> <th> **Dataset name** </th> <th> Awareness creation, dissemination and stakeholder engagement </th> </tr> <tr> <td> **Type of study** </td> <td> Assessment of the results and impact of the awareness creation, dissemination and stakeholder engagement activities of the project employing an indicator-based framework. </td> </tr> <tr> <td> **Dataset description** </td> <td> Metrics collected during INNO-4-AGRIFOOD with a view to measuring and assessing the performance and results of the project in terms of awareness creation, dissemination and stakeholder engagement. </td> </tr> <tr> <td> **Methodologies for data collection / generation** </td> <td> Primary data will be collected through the activity reports of project partners regarding media products, events, external events, general publicity, etc. along with any supporting material (e.g. photos, presentations, participant lists, etc.). Third-party tools will be used as well (e.g. Google Analytics, social media statistics, etc.). </td> </tr> <tr> <td> **Format and volume of the dataset** </td> <td> All data will be collected and preserved in a document format (e.g. .doc) along with any complementary material in its original format (e.g. JPEG, PPT, etc.). </td> </tr> <tr> <td> **Metadata and standards** </td> <td> Any potential standards to be used or metadata to be created will be determined at a later stage of the project. </td> </tr> <tr> <td> **For whom might the dataset be useful?** </td> <td> The dataset would be meaningful to the EC as well as researchers who study relevant aspects of EU-funded projects. </td> </tr> <tr> <td> **Confidentiality** </td> <td> All records will be open. </td> </tr> </table>
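Several of the datasets above are released only after consent and anonymization (e.g. the service testing metrics). As an illustrative sketch only — the DMP does not prescribe a particular technique — direct identifiers can be replaced by keyed pseudonyms before release, so that records remain linkable across testing rounds without exposing participants:

```python
import hashlib
import hmac

# Hypothetical secret held only by the project team; destroying or rotating
# it makes the released pseudonyms unlinkable to the original records.
SECRET_KEY = b"project-held-secret"

def pseudonymise(participant_id: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    digest = hmac.new(SECRET_KEY, participant_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]
```

Because the mapping is keyed rather than a bare hash, a third party cannot recover identities by hashing candidate names, yet the same participant always receives the same pseudonym.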
# INTRODUCTION ## Summary **NANO2ALL** is a coordination and support action (CSA) aimed at establishing a European-wide sustainable platform for mutual learning and informed dialogue among all stakeholders (researchers including social sciences and humanities, industry/business, the public including Civil Society Organisations and the media, as well as policy-makers and research funders) to improve transparency and societal engagement in responsible nanotechnology. Engaged through European-wide initiatives, as well as via an online platform making full use of current communication technologies, stakeholders will exchange best practice (e.g. from ObservatoryNano, NanoOpinion, NanoEIS and Nanodiode, among others). They will also develop their understanding of responsible research and innovation (RRI) and its tools (e.g. based on EthicSchool and RRI Tools). Based on this, through national and EU level dialogue, stakeholders will develop an action plan (MLAP) with a shared vision on existing and potential future benefits and risks of advancing nanotechnology, including ways to support RRI along the value chain. Systemic engagement of policy-makers and relevant communities will facilitate “buy-in” through co-development of roadmaps to drive future Joint Stakeholder Undertakings, such as the Nanofutures ETP. This will result in a permanent platform for future development of trust, awareness and inclusion, providing guidance on societal needs and concerns, and fuelling co-innovation that benefits society as a whole. In order to achieve the **NANO2ALL** goals, we will deliver, during the lifetime of the project, activities through which we will gather different types of data. To ensure compliance with the strict requirements of the European Commission, this data management plan (DMP) will serve as a manual describing the process of gathering, preserving and archiving that information. 
The main findings that will be subject to this DMP refer to: * Task 1.1 Nano2All Multi-Stakeholder Platform, in relation to the subscriptions to project newsletters, registrations for getting enrolled in Forum discussions, emails from stakeholders about inquiries and sharing best practices in RRI * Task 1.3. Stakeholder contact management * Task 1.5. Engaging the Stakeholders, in relation to engagement events and other engagement activities where stakeholder data is collected * Task 2.3 Training Needs Self-Assessment Tool, in relation to the online surveys that will be conducted among the various stakeholders * Task 2.4 Preparing for the Dialogues, in relation to the face-to-face training of the stakeholders’ representatives. * Task 3.2 Future Techno-Moral & Application Scenarios, where a Delphi survey among selected nanoexperts will be performed. * Task 3.3. Multi-stakeholder dialogues * Task 3.5. Monitoring and Evaluation * Task 4.2 Expert & Stakeholders Validation Symposium A set of planned datasets is described, including information about the dataset itself, what it is intended to contain, and what the current plans are to make the data available during, but especially after, the project. Where appropriate, and not otherwise restricted by existing partner agreements, such data will by default be deposited in the EU Zenodo repository (http://zenodo.org), with a general restriction that any such data should be used for non-commercial, preferably research purposes. **NANO2ALL**’s commitment to gathering and sharing data is also associated with engaging relevant parties and raising general awareness. This creates a need to consider ethical oversight, not least because of related plans to publicise and share our results through research publication as well as more widely targeted dissemination collateral. 
As an initial outline and set of planned activities, this document provides a summary of discussions already underway within the consortium to fully manage project data. It will, however, be revised and updated in response to the practical experiences of the partners and if more sensitive activities are needed. ## DMP aim The **NANO2ALL** Data Management Plan (DMP) describes the data management life cycle for all data sets that will be collected, processed or generated by the **NANO2ALL** project. This document aims to outline the way the consortium will collect and manage data and information from the involved stakeholders. It contains: * The nature of the data gathered; * The methodology used for the collection and preservation; * The procedure used for the analysis; * The ethical treatment of participant data; * Data open access to publications The DMP is not a fixed document as it evolves during the lifetime of the project. The **NANO2ALL** consortium will provide updates at the end of each reporting period, within the periodic project reports. In the early months of the project, there are a number of specific considerations which will be discussed in the current report, to be further expanded in the periodic progress reports: * How can the project handle the series of multi-stakeholder dialogues and training activities to address knowledge gaps between various types of actors and potential co-production of knowledge? This will include a consideration of the: * different stakeholder categories that will be engaged with the project activities, and * various activities that will involve them and generate a considerable amount of data and knowledge. * What data sets are currently available, and can they be shared with others? * What data sets are going to be developed during the project lifetime? * What form do those data sets take? 
The purpose of this deliverable is to introduce some of the considerations under discussion within the **NANO2ALL** consortium in relation to these high-level concerns. ## Intended audience The document is aimed at two different types of readership. First, it provides an internal record for the project itself and all consortium members to be able to refer to and understand what plans are in place, and what progress is being made. Secondly, it serves as a summary and stake-in-the-ground DMP for external parties interested in the **NANO2ALL** project outcomes (the Project Officer, researchers, and other stakeholders). Stakeholder participants include citizens, organisations and experts already involved in nanotechnologies, as well as those who will join specifically during the dialogues and who, in particular, may provide input and responses as the basis for their perspective and needs in the project. ## Document structure The next Chapter will begin with a consideration of the nature of the data gathered, the methodology used for collection and preservation, the procedure used for analysis, and the ethical treatment of participant data. Finally, a brief overview of the **NANO2ALL** approach to publishing results is introduced in the chapter Data open access to publications. D5.4 DATA MANAGEMENT PLAN: **NANO2ALL** • NANOTECHNOLOGY MUTUAL LEARNING ACTION PLAN FOR TRANSPARENT AND RESPONSIBLE UNDERSTANDING OF SCIENCE AND TECHNOLOGY # DATA SET ## Description of the nature of the data The main set of data collected, generated and analysed in the framework of the **NANO2ALL** project will come from the Multi-Stakeholder Platform and dialogues, and can be classified as: 1. Profile public data, including e.g. organisational profiles and contacts, which will be provided, for public use, on the project website, the multi-stakeholder platform, the project events and the public reports. 2. 
Personal, survey and evaluation data, acquired by surveys, interviews and case studies developed during the project progress. These two different types of data will each require a unique procedure for data collection, management, preservation and sharing. 1. The **NANO2ALL** project involves, by definition, engagement with those already involved with or motivated to support prosocial activities in relation to the Multi-Stakeholder Platform and dialogues (WP1-WP4). All such interactions will involve data, such as contact details and some description of interest and motivation. This public profile data will capture such information. With the explicit consent of stakeholder participants at project activities, the following contact information data will be collected: * Name * Email * Address (optional) * Contact phone number * Nature of interest * Online presence (e.g., website, blog etc.) * Any additional information (e.g., related projects, initiatives, activities, etc.) The data will be largely free-form text, held in a project-wide data repository. No specific standards are relevant. The data will be stored initially in Excel spreadsheet form, which may be converted or backed up as CSV text. Initially, the data will be restricted to project partners. Towards the end of the project, those whose details are contained within the dataset will be contacted and asked for permission to make the data available solely for non-commercial purposes and strictly within the context of nanotechnology. The Multi-Stakeholder Platform (Task 1.1) has the following metadata: 1. Textual information, translated into all partner languages, about the project (objectives, impacts, expected results) and stakeholders (who they are, their benefits and advantages for being part of the project). 2. Relevant external sources (opinions, reports, initiatives and events) in a searchable format and with public access; 3. 
Relevant best practices, which will be available in the platform, in a searchable format, without restrictions. These practices will be the result of Tasks 2.1 and 2.4, but all users are invited to share their own practices in the platform. 4. Online communication tools (social networks and discussion areas in the platform) will allow us to collect specific information such as: personal data (name, e-mail, country, etc.), collected from registration forms, surveys and newsletter subscriptions. 5. Promotional materials, such as: flyers, newsletters, press releases and posters. In addition, a database of relevant stakeholder contacts will be developed in Task 1.3. This database constitutes an Excel file, for private use of the consortium and only available in Basecamp. In Task 1.4, a list of relevant initiatives is being produced; this list is presented in the news and events section of the platform, in a searchable format and with public access. During the project, the data will be curated by the Data controller (ViLabs). With consent as indicated in the previous section, the data will be transferred to the Zenodo repository. 2. In particular, the surveys undertaken in the WPs below will likely include personal data, or at the very least data that may introduce a risk that data subjects could be identified, even if their survey data is anonymized: * WP2, under Task 2.3, where a short online survey across NfA contacts (NfA members cover all nanotechnology stakeholder categories) will be performed so as to triangulate the training needs identified in literature, together with a short survey among EUSJA’s contacts to gather their training needs and training preferences, since the media’s training needs on RRI have not yet been identified by any other EU project. 
Based on the surveys, an online tool (NANO2ALL Self-Assessment Training Tool) will be built, and the work will be reported in “D2.2: Online Training Needs Survey report”, which will include questionnaires, analysis and results. * and WP3, under Task 3.2, where a Delphi survey among selected nanoexperts will be performed. Throughout the citizen and multi-stakeholder dialogues, various measures will be taken to maintain the quality of the results. The quality criteria are based on the responsive research methodology as formulated by Guba and Lincoln (1989) 1 : credibility, fairness, and satisfaction. Credibility of the results is first enhanced through member checking. The Delphi survey, the combination of citizen and multi-stakeholder dialogues as well as the online deliberations all contribute to triangulation. ## Methodology of collection and analysis For all the foreseen research activities the **NANO2ALL** consortium will apply the following procedures: * Non-disclosure and privacy protection statements complying with EU and national legislation will be signed by task leaders and WP leaders, and privacy/non-disclosure statements and information on data collection and data treatment will be made available to all respondents to all research actions. * Collected data will be: textual answers to online surveys; audio-taped interviews and/or textual in-depth interviews; and pictures, audio and/or video recordings of interviews, * All collected data will be treated anonymously and numerically identified, * Task leaders will be responsible and accountable for data storage and treatment after the end of the project. Collected raw information (texts and audio files of interviews as well as answers to questionnaires) will not be published online and/or made available to the public in any way. 
* For the specific release and dissemination of success stories, identified respondents will be involved in the content design of their ‘story’ and will have to provide written and signed authorization for the release and publication of any audio/video/textual content related to their own personal experiences. * All original, irreplaceable electronic project data and electronic data from which individuals might be identified will be stored on Task Leaders’ supported media and secure server space or similar; such data will never be stored on temporary storage media nor cloud services, unless fully compliant with national and EU regulation on data protection. * All research data will be accessible to partners of the project in charge of the specific research action, as well as the Ethics auditors. In particular, the surveys undertaken in WP2 and WP3 will likely include personal data, or at the very least data that may introduce a risk that data subjects could be identified, even if their survey data is anonymised. Therefore, we will introduce a three-step process to ensure that the data is anonymised and that participants retain control over the release of research data relating to them. This will be supplemented by the project’s ethical and data protection protocol (D5.2: Data protection and ethical protocol, delivered in M18). First, in describing the survey purpose, **NANO2ALL** partners will populate information in general terms on the website, within project deliverables and when presenting the results of the case study research. Furthermore, if information related to a particular context is so specific that the identity of the stakeholder is obvious, **NANO2ALL** partners may choose not to release that portion of the data, or to re-mix stakeholders’ elements in order to disrupt the link between the stakeholder and the information. Second, the **NANO2ALL** project will utilise a multi-dimensional consent mechanism. 
Surveyed participants will be invited to consent to the survey itself and the storage, preservation, opening and sharing of their data. Thus, data subjects can consent to participate in the survey, and decline to consent to providing open access to their data. This may be particularly useful to research participants who feel they may be opening themselves up to adverse effects if their data was made openly available. In all cases, participants will be given access to their anonymised transcript and invited to make corrections, amendments, additions, etc. before the survey material is utilised in any publicly-available documents. Third, the anonymised data will be made available after an embargo period corresponding to one year after the close of the case study work package. This will enable **NANO2ALL** researchers to draft and make available to participants both project deliverables, which will be publicly available, and journal articles or other publications, which will also be published as open access, resulting from the case study research before they are released. This will give survey participants an opportunity to withdraw their consent for their survey material to be made publicly available if they have any concerns based on what is written in the research outputs. However, if other researchers wish to verify the material, we will construct a managed access plan to enable such verification, with the understanding that these other researchers will respect the wishes of the data subject. In order to aid the preservation and re-use of the survey data, **NANO2ALL** will make use of existing standards and infrastructure. It will use the Zenodo repository developed by OpenAIRE/OpenAIRE plus and it will follow their guidelines in relation to authoring appropriate and useful metadata. The project will also use standardised format for data sharing – for example, .txt in respect of survey transcripts originally produced in Microsoft Word. 
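As one possible illustration of such a conversion (a sketch only — the plan does not specify the actual tooling), a .docx transcript can be reduced to plain text with standard-library code alone, since .docx files are zip archives whose main document part is OOXML:

```python
import zipfile
import xml.etree.ElementTree as ET

# OOXML namespace for WordprocessingML (the main document part of a .docx)
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_to_txt(docx_file) -> str:
    """Extract the visible paragraph text of a .docx file as plain text."""
    with zipfile.ZipFile(docx_file) as archive:
        root = ET.fromstring(archive.read("word/document.xml"))
    # Each w:p element is a paragraph; its w:t descendants hold the text runs.
    return "\n".join(
        "".join(t.text or "" for t in p.iter(W + "t"))
        for p in root.iter(W + "p")
    )
```

Formatting, footnotes and tracked changes are deliberately discarded here, which is usually what is wanted when sharing an anonymised transcript.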
As such, we will avoid access barriers related to proprietary software. We will rely on the good practice developed by the OpenAIRE team in relation to ensuring the integrity and validity of the data we deposit. ## Internal project working environment Basecamp.com has been chosen for the internal management and collaboration work in the project. In Basecamp, all data is encrypted via _SSL/TLS_ when transmitted from their servers to the browser. The database backups are also encrypted. # ETHICAL TREATMENT OF PARTICIPANT DATA The **NANO2ALL** platform, by definition, contains personal data from registered users and project events 2 . Moreover, WP2 and WP3 will involve the collection of specific research data from data subjects. The purpose of this Chapter, therefore, is to set out the basic ethical principles to be adhered to in the project. Partners involved have many years’ experience in running both quantitative and qualitative surveys. If and when required, they will seek ethics approval from their respective institutions prior to any data collection exercise. ## Ethical Requirements A number of ethics requirements are identified and listed in _Table 1_ below. Table 1: Ethical requirements for **NANO2ALL** <table> <tr> <th> **#** </th> <th> **Requirement** </th> <th> **Status** </th> </tr> <tr> <td> 1 </td> <td> Copies of ethical approvals for the collection of personal data by the competent University Data Protection Officer / National Data Protection authority should be submitted </td> <td> As and if required, approval will be sought from the relevant institution(s) _prior_ to any data collection. </td> </tr> <tr> <td> 2 </td> <td> The recipients of the Multi-Stakeholder Platform span all stakeholder groups. 
Because of the open character of the platform already running, there is a risk of stigmatization of individuals who use the services of the platform. The project should describe a strategy to counteract the possibility of stigmatization of individual recipients. In addition, a policy about the possibility of abuse and misuse of the platform for undue personal gain should be in place. </td> <td> For the operational use of the platform, data is used in compliance with the existing Terms & Conditions between the partners. These are currently being reviewed within the project to determine if further explicit consent may be required, for instance, for data to be stored in a centralised location. This will inform architectural decisions to be reported in WP5. </td> </tr> <tr> <td> 3 </td> <td> The platform must explicitly confirm that the existing data is publicly available </td> <td> See Requirement 2, above. In addition, an Open Data source which may be used during platform operation will be used strictly in accordance with the terms of the provider. Such data are typically public. </td> </tr> <tr> <td> 4 </td> <td> Detailed information must be provided on the procedures that will be implemented for data collection, storage, protection, retention and destruction, and confirmation that they comply with national and EU legislation. </td> <td> In addition to the data management plan outlined in the preceding sections, appropriate research protocols will be developed and published in D5.2: Data protection and ethical protocol, delivered in M18. </td> </tr> <tr> <td> 5 </td> <td> Detailed information must be provided on the informed consent procedures that will be implemented </td> <td> See Requirement 4 above, notwithstanding research protocol details to be provided in D5.2 (Data protection and ethical protocol, delivered in M18). 
</td> </tr> <tr> <td> 6 </td> <td> In case of data not publicly available, relevant authorisations must be provided </td> <td> There is currently no plan to release data which is not publicly available. As outlined under Requirements 4 and 5 above, procedures are already being set out as guidelines for data handling. </td> </tr> <tr> <td> 7 </td> <td> The project must clarify whether children and/or adults unable to give informed consent will be involved and, if so, justification for their participation must be provided. </td> <td> There is no plan to recruit or involve any minors or vulnerable adults. </td> </tr> </table> In the following sections, the overarching ethical and data protection framework for the project is discussed, and will continue to be referred to and monitored during project meetings. Any changes to the above will be reported in the periodic progress reports and, if warranted, discussed directly with the Project Officer. ## Reference material The overall approach in the project, notwithstanding specific provisions under the legislation of individual Member States, is based on the following EU legislation: **Directive 95/46/EC** forms the basis for legal management of personal data and data protection in general (European Commission, 1995). This has been extended in **Directive 2002/58/EC** with guidance on data protection should data be transferred via public connections (European Commission, 2002) 3 . In addition to these legal instruments, and leading on from Directive 95/46/EC, a working party (WP29) was set up by the European Commission to consider and share opinions on specific issues of data protection and handling. In the context of **NANO2ALL** , we are specifically looking at information on anonymization at this time (WP29, 2014) 4 . ## Definitions This section provides a few definitions as they relate specifically to the project. 
This is not an exhaustive list; see, for example, Article 2 ("Definitions") of Directive 95/46/EC (European Commission, 1995, No L 281/38f). <table> <tr> <th> Data controller </th> <th> “’controller’ shall mean the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data” (European Commission, 1995, Article 2, Paragraph (d)) For **NANO2ALL** , discussions are currently under way to establish appropriate general guidelines for the Data controller. As an initial framework, we suggest: 1. By default, SPI, as project co-ordinator, will act as Data controller; 2. For existing data, all partners will remain Data controller solely in respect of the data they use for the purposes of their current activities. </th> </tr> <tr> <td> Data processor </td> <td> “’processor’ shall mean a natural or legal person, public authority, agency or any other body which processes personal data on behalf of the controller” (European Commission, 1995, Article 2, Paragraph (e)) As above, the initial framework correspondingly provides that, by default, WP leaders (within the context of their WPs) will act as Data processors for all data collected, generated and analysed in their respective WPs. </td> </tr> <tr> <td> Data subject </td> <td> “an identified or identifiable natural person” (European Commission, 1995, Article 2, Paragraph (a)) For the purposes of **NANO2ALL** , Data subjects will be all stakeholder participants recruited to engage in the WP activities. They will be presented with a separate Consent Form requesting their agreement to the collection and specific processing of their data for the purposes of the project surveys. </td> </tr> </table>

The project partners are currently reviewing any consent forms or terms and conditions to provide a baseline for future engagement and the use of personal data, for example, as part of dissemination materials.
D5.4. DATA MANAGEMENT PLAN **NANO2ALL** • NANOTECHNOLOGY MUTUAL LEARNING ACTION PLAN FOR TRANSPARENT AND RESPONSIBLE UNDERSTANDING OF SCIENCE AND TECHNOLOGY

# DATA OPEN ACCESS TO PUBLICATIONS

Notwithstanding any provision relating to data subject rights, and on the basis of what has been outlined for specific data sets, it is the intention of the **NANO2ALL** project to share and publicise information as widely as possible. The following briefly outlines current plans in relation both to scientific publications and to more general materials providing a description of the project and reports on specific events.

## Dissemination materials

The project will produce a set of leaflets and brochures to describe the project itself, any white-paper-style discussion of the problems of nanotechnology, and to highlight individual events. The specific plan for which publications will be produced will be discussed in the annual communications reports (D1.2-D1.4). These are intended for publication primarily through the project website, with additional references and links from individual partner sites as they relate to the specific environment within which end-user partners operate. Publication on the project website will also include all project deliverables, with the exception of the two which are confidential to the consortium, as well as informative videos providing further detailed information about different aspects of the **NANO2ALL** activities.

## Scientific publications

Scientific publications will all be targeted at open access journals in the first instance, unless specifically made available within a conference or other such event. DOIs will initially be generated by the journal.
# CONCLUSIONS

This version of the DMP is, as anticipated, a first version that will be updated during the lifetime of the project; if needed, further activities which are sensitive in terms of data management may be added. Although the **NANO2ALL** project has only recently begun (in October 2015), it is already appropriate to outline how the project will handle existing data from the use-case partners, along with future data. Such data will be collected and processed in connection with the platform development (WP1) and the Multi-stakeholder dialogues (WP3), which are important to inform and support the longer-term benefits of the project. With that in mind, in the preceding chapters the report has introduced key concepts and set out generalized guidelines for data management. This has to include the identification of suitable stakeholders and interested parties, but also to take into account the value and potential sensitivity of the data which will be generated during the project. Such data, it is expected, will be generated as a result of the operation of the platform and stakeholder engagement. In addition, data will also be created in the form of survey responses as part of the planned Dialogues preparations. In consequence, there is also a need for ethical oversight and governance, as well as some consideration of how and when to make data and information available. We have already outlined here the relationship between partners in terms of data management responsibilities (Data controller vs Data processor). We also foresee not only typical dissemination materials, such as brochures, whitepapers, and the like, but also research papers and conference presentations, making the ethical oversight of data an important aspect of the work being started.
This report has therefore sought to bring together all of the related strands in a first iteration of the project data management plan. There are discussions to be held as the project progresses so that the plans outlined in this report can be adapted as required to meet new demands as they occur, and to exploit the experience of the close collaboration of the various partners in the consortium.
1059_SmartNanoTox_686098.md
# 1\. PROJECT DESCRIPTION

## 1.1. ABSTRACT

A definitive conclusion about the dangers associated with human or animal exposure to a particular nanomaterial can currently be made only through complex and costly procedures, including complete NM characterisation with consequent careful and well-controlled in vivo experiments. Significant progress in the ability to predict nanotoxicity robustly can be achieved using modern approaches based on systems biology on the one hand, and on statistical and other computational methods of analysis on the other. In this project, using a comprehensive self-consistent study which includes in vivo, in vitro and in silico research, we address the main respiratory toxicity pathways for a representative set of nanomaterials, identify the mechanistic key events of the pathways, and relate them to interactions at the bio-nano interface via careful post-uptake nanoparticle characterisation and molecular modelling. This approach will allow us to formulate a novel set of toxicological, mechanism-aware endpoints that can be assessed by means of economical and straightforward tests. Using the exhaustive list of endpoints and pathways for the selected nanomaterials and exposure routes, we will enable clear discrimination between different pathways and relate each toxicity pathway to the properties of the material via intelligent QSARs. If successful, this approach will allow grouping of materials based on their ability to produce the pathway-relevant key events and identification of properties of concern for new materials, and will help to reduce the need for blanket toxicity testing and animal testing in the future.

## 1.2. PROJECT COORDINATION

Dr. Vladimir Lobaskin, School of Physics, University College Dublin, Belfield, Dublin 4, Ireland. [email protected]_ Phone: +353-1-716-2432, Fax: +353-1-283-7275.

**Data Manager:** Philip Cotter, Information Manager, Systems Biology Ireland, Science Link Building, University College Dublin, Belfield, Dublin 4, Ireland.
[email protected]_ Phone: +353-1-716-6310.

## 1.3. LIST OF PARTICIPATING ORGANISATIONS

1. University College Dublin / National University of Ireland, Dublin, Ireland.
2. Stockholm University, Sweden.
3. Helmholtz Zentrum München, Germany.
4. National Research Centre for the Working Environment, Denmark.
5. French National Institute for Occupational Health, France.
6. Jozef Stefan Institute, Slovenia.
7. Environmental Health Centre, Canada.
8. Imperial College London, UK.
9. University of Lorraine, France.
10. Biovia, 3DS, France.
11. Vitrocell Systems, Germany.

## 1.4. DEFINITIONS

_**AOP**_ – Adverse Outcome Pathway: representation of biological pathways leading to adverse effects;

_**Data Archive**_ – A device or service where machine-readable data are acquired, managed, documented, and finally distributed to the wider scientific community;

_**DOI**_ – Digital Object Identifier: a persistent, resource-specific identifier used to uniquely identify an entity;

_**MIE**_ – Molecular Initiating Event: the interaction of a molecule and a biomolecule/biosystem which can be observed to trigger an AOP;

_**NM, NP**_ – Nanomaterial, nanoparticle;

_**Open Data**_ – Research data made available to the wider scientific community as evidence to support the interpretation of scientific discoveries. Open data is described and contextualised for use in reanalysis or re-use in further experimentation, and is hosted by publicly accessible data archives;

_**Data**_ – Fact-based material generally accepted in the scientific community as necessary to document and support research findings. Research data include digitally stored raw experimental data, digital laboratory notebooks, preliminary datasets and data analysis, and draft documentation such as reports, publications and project-related communications;

_**SNT**_ / _**SmartNanoTox**_ – Synonyms for the project title ‘Smart Tools for Gauging Nano Hazards’.

# 2\. DATA SUMMARY

## 2.1. PURPOSE OF THE DATA COLLECTION AND GENERATION

A heterogeneous collection of dry and wet lab experimental data has been and will be generated over the four years of the project. These data will form the basis for mechanism-based tools for nanomaterial (NM) toxicity assessment. The project aims at the prediction of specific pulmonary adverse outcome pathways (AOPs) caused by NM exposure, with a focus on the Molecular Initiating Events (MIEs) and Key Events (KEs) of these AOPs. The key to predicting these events is understanding the interactions at the bio-nano interface. Therefore, a significant part of the project will be devoted to describing the pathways using ‘omics approaches and statistical models from systems biology, and to describing the state of NMs after contact with organisms and tissues.

## 2.2. DATA TYPES & FORMATS

The project will generate multiple data types stored in both standardised and not-yet-standardised but community-defined data formats. A description of each data generation process within the project is tabulated in Appendix A. Data from _in vivo_ and _in vitro_ experiments will be recorded in text and spreadsheet form before conversion into the corresponding NANoREG templates developed by eNanoMapper ( _http://www.enanomapper.net/_ ). Due to the novel nature of this project's data, curation and design will be subject to modification during the data aggregation stage. Data from _in vivo_ and _in vitro_ NM exposure, together with transcriptomics, proteomics and lipidomics data from analysis of NP corona protein structure, will consist of raw experimental data stored in bespoke formats and manipulated by manufacturer-developed tooling. Analysis configuration files (typically XML), machine- and laboratory-generated metadata files and results, both initial machine-read files and those post-processed with analytical software/protocols (typically comma- or tab-delimited text formats), will also be provided for use by project partners and potential reuse by others.
Along with histology data and reports, these data will also be converted into the eNanoMapper-NANoREG-inherited MS Excel templates for incorporation into the eNanoMapper Ontology. Where eNanoMapper templates do not yet exist, wider community standards will be applied. Results from Fluorescence Microspectroscopy (FMS)-derived images and their documentation, simulation data, Gene Regulatory Networks and AOP-representing computer models, descriptors from _in silico_ computed NP structure and descriptors of measured NP toxicity data will be stored in ISA-TAB Nano or JSON formats, with all raw experimental data and documentation preserved in the formats produced by their analysis software.

## 2.3. IMPORTED DATA RESOURCES

For NM characterisation, bio-nano interactions modelling and construction of the QSAR models, we will use NM characterisation data imported from two previous projects: NANoREG ( _http://www.nanoreg.eu/_ ) (now succeeded by NanoReg2 ( _http://www.nanoreg2.eu/_ )) and NanoGenoTox ( _https://www.anses.fr/fr/node/120284_ ). Data obtained from NANoREG and NanoReg2 are made freely usable and shareable for non-commercial applications via _CC-BY-NC-4.0_ . Computational construction of biological pathways and transcriptomics analysis use entity and interaction identifiers imported from the Gene Expression Omnibus (GEO) ( _https://www.ncbi.nlm.nih.gov/geo/_ ) and NCBI GenBank. Proteomics data will use human and mouse protein identifiers derived from the EMBL-EBI UniProt database ( _http://www.uniprot.org/_ ). Molecular simulations will use protein structure data imported from the Protein Data Bank ( _http://www.rcsb.org/pdb/home/home.do_ ), which is made available under _CC-BY-4.0_ . The Chemical Information Ontology, a semantic representation of relevant NP entities and relationships, will also be imported and expanded upon from the eNanoMapper Ontology (GNU Lesser GPL, _http://www.gnu.org/licenses/lgpl.html_ ).
Biological entities will use DOIs inherited from the bioinformatics databases used in the analysis of experimental data. Those currently in use are included in the licence and usage policy table (Appendix B) and described in more detail in the section " _**DOIs & Nomenclature Used**_ ".

## 2.4. ESTIMATED DATA VOLUME

The expected size of the data varies depending on the type of experiment. The largest volume should be reserved for mass spectrometry data (~150 GB of raw data and analytical metadata) and for NP protein corona analysis and molecular simulation data (ca. 50-100 GB). All other data combined should not be very space-demanding.

## 2.5. DATA USE AND RE-USE POTENTIAL

iQSAR data, combining the experimental data accumulated and the ontologies and models developed over the four years of the project, may be used to:

1. provide a transparent scientific basis for the drafting of regulations on NM use in manufacturing: the data will contribute to the development of robust systems for evaluating the health and environmental impact of engineered NMs by developing a systematic framework into which existing and emerging experimental data can be fed to predict the interactions of NMs of industrial and economic significance with living systems.

2. reduce the financial cost of and need for animal testing currently required before a new NM may enter the market: the data and predictive tooling provided by the project may form a robust basis for a future screening strategy to predict the health impact of new NMs. Other approaches, based on learning from databases, though also necessary, would less easily reduce the need for empirical testing across the broad range of entirely novel materials.

3.
contribute to predictive models for designing engineered NMs that are safe by design: by relating the physicochemical properties of NMs to their biological fate (uptake and localization) and behaviour (functional impacts and potential toxicity), we will enable the categorization of NMs by physicochemical factors that constitute a "risk", either for bioaccumulation or potentially for a specific toxicity endpoint. Feeding this information back into process development will enable manufacturers to quickly screen out particles with physicochemical properties that equate to a risk, and either develop new particles or re-engineer their products to modify their properties, thereby designing out the risk factors initially and, in the longer term, potentially designing the NMs to be safe.

# 3\. FAIR DATA COMPLIANCE

## 3.1. MAKING DATA FINDABLE

It is envisaged that all data collected by the consortium will be collated in a flexible data structure accommodating OWL ontologies, the open source conventional MySQL RDBMS and the JSON-like MongoDB ( _https://www.mongodb.com/_ ) NoSQL database. Data files, models and other resources used or created will be assigned a unique and persistent identifier linked to title, creator and creation date attributions, as well as a brief text description of content, provenance, work package affiliation, input format and compatible formats for possible export. Experimental data file content will be indexed and linked to the curated analysis metadata files. Described NPs, NMs and biological entities will use reference DOIs inherited from the external ontological and bioinformatics databases used in the analysis of experimental results. Access, though restricted during the development stage, will be opened to the community, allowing the flow of information both into and out of the SNT database and iQSAR system.
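As an illustration of the attribution scheme just described (a unique identifier linked to title, creator, creation date, description, provenance, work package affiliation and formats), a minimal record might look like the following sketch. All field names and values here are hypothetical, not an official SmartNanoTox or DataCite schema:

```python
import json

# Hypothetical metadata record sketching the attributions named in
# Section 3.1: identifier, title, creator, creation date, description,
# provenance, work package and formats. Field names are illustrative only.
record = {
    "identifier": "SNT-DS-0001",  # assumed ID format, not a real project ID
    "title": "In vitro NM exposure, transcriptomics run 1",
    "creator": "SmartNanoTox consortium partner",
    "created": "2017-05-02",
    "description": "Raw gene expression profiles after NM exposure.",
    "provenance": "WP2 in vitro experiments",
    "workPackage": "WP2",
    "inputFormat": "xlsx",
    "exportFormats": ["csv", "ISA-TAB Nano", "json"],
}

# Serialise for storage alongside the data file (JSON variant; cf. 3.1.6).
metadata_json = json.dumps(record, indent=2, sort_keys=True)
```

A record of this shape could later be converted to the DataCite XML or ISA-TAB Nano representations discussed under the metadata standards section.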
Though all project data will be made available on UCD School of Physics servers and via Zenodo, 'omics data that have been published or are of particular interest to the research community will be submitted to community repositories and annotated as per the requirements of the resources to which they are submitted; e.g. PRIDE mass spectrometry data submissions require a verbose description of the sample and data processing protocols accompanying the upload of experimental, configuration, analysis and results files. Keyword indexes built on these submitted text descriptions are searchable from ProteomeCentral ( _http://proteomecentral.proteomexchange.org/cgi/GetDataset_ ). This places data in resources built for data redistribution and most commonly queried by domain specialists.

### 3.1.1. METADATA PROVISION

Metadata will be provided to describe the data resources created. Documentation (READMEs) will accompany any tooling or data resources created and made available from the SNT and Zenodo file servers. All experimental datasets added to the curated iQSAR system will be described in terms derived from and added to the eNanoMapper Ontology. The computational tools needed to evaluate the NM characteristics will be described according to EMMC guidance using MODA templates. Journal publications and datasets submitted to community repositories will also provide an annotated community interface for the datasets produced.

### 3.1.2. DOIs & NOMENCLATURE USED

A heterogeneous collection of data sources and types requires a large and potentially expanding variety of bioinformatics and semantic tools for information representation and disambiguation.
A framework will be built around the core data resources: the QSAR Toolbox ( _https://www.qsartoolbox.org/_ ), the NANoREG ( _http://www.nanoreg.eu/_ ) terminology/eNanoMapper Ontology ( _http://www.enanomapper.net/_ ), PDB ( _http://www.rcsb.org/pdb/home/home.do_ ), the UniProt Knowledgebase ( _http://www.uniprot.org/_ ), GenBank ( _https://www.ncbi.nlm.nih.gov/genbank/_ ) and the Gene Ontology ( _http://www.geneontology.org/_ ). NP and NM identifiers are inherited from QSAR and the eNanoMapper Ontology; information describing the composition of the corona which develops around an NP when NPs are exposed to living material is described using DOIs from PDB and UniProt (proteomics). Gene Regulatory Networks will use interaction relationships described by GEO data, using gene DOIs from GenBank (transcriptomics). Lipid nomenclature or DOI systems have yet to be investigated for selection. 'Omics interaction network analysis will use these DOIs, which are bidirectionally linked within each respective database. Coarse-grained network descriptions garnered from text-data mining of peer-reviewed texts will be screened by domain experts and recorded using _recommended names_ linked to the GenBank and UniProt bioinformatics databases.

### 3.1.3. KEYWORD SEARCH

Development of the SNT database: the database design will consider the main challenges of data integration and interoperability in Big Data that have been identified in the literature and, in particular, will follow the recommendations of Kadadi, A., Agrawal, R., Nyamful, C. and Atiq, R., 2014, October. Challenges of data integration and interoperability in big data. In _Big Data (Big Data), 2014 IEEE International Conference on_ (pp. 38-40) ( _http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7004486_ ), which uses algorithmic integration of data from disparate data formats.
Using a hybrid RDBMS and NoSQL database implementation, we will attempt to solve, for the available data, the challenges of i) the scope of data; ii) data consistency; iii) optimised data querying; iv) acquiring adequate information and data processing resources; and v) scalability. By having a dedicated data manager, we also ensure that the database will be properly maintained and that users' needs are taken into account when developing the various tools and resources required. The data manager will participate in the NanoSafety Cluster focus groups, tasked with ensuring data standardization, integration, curation and adherence to current SOPs in the data science community.

### 3.1.4. DEVELOPMENT OF THE SMARTNANOTOX DATABASE

To provide access to project digital data resources for all partners and to facilitate data analysis and integration, the data manager will develop a portal to the specific requirements of the project. Web content, various media and more advanced project information retrieval will eventually be made public to facilitate the re-use of the data generated in the consortium by the scientific community. This will form the basis of the iQSARs. To facilitate access to all these data, a bespoke user-friendly interface will be developed. Although the final features of the portal will be determined by the specified requirements of the consortium partners and the access restrictions imposed upon them by publication, the portal will provide customisable access to correlation data for different NMs, including physicochemical properties, toxicology data and other meaningful data as required. The data will be archived on available storage resources using nomenclature following the recommendations of the NANoREG final report, to ensure a minimal prerequisite of system knowledge for research scientists to extract relevant and detailed information using typical domain nomenclature.

### 3.1.5. VERSIONING

The digital capture of experimental data will typically only be produced and processed once. Version numbers will not be required, as the data resources are not incrementally built. The resulting datasets will be assigned unique IDs in the iQSAR system and stored using human-interpretable file and folder names on the SNT and, ultimately, on the Zenodo file servers. Simulation data, such as the Gene Regulatory Network, will use a combination of project and/or experiment name acronym, version number and creation date: 'SNT[SYM]<EXP>[SYM]<VERSION>[SYM]<CREATION_DATE>', e.g. _SNT_GRN-v2-02052017_ . All data files shared and stored within the consortium will follow this naming convention, e.g. 'SmartNanoTox Phys-chem proposed dose range upload 1.1 final-26072017.xls'.

eNanoMapper Ontology releases: the ontology has been made available in a public version control repository (GitHub) as an OWL language file. Any accepted updates and additions to the source material will be managed by the GitHub system.

### 3.1.6. METADATA STANDARDS USED OR CREATED

Work on specific metadata formats is ongoing and, though not uniformly XML-based, it will be compliant with DataCite's Metadata Schema 4.0 ( _https://schema.datacite.org/_ ). Metadata will be stored in DataCite XSD (XML) and JSON format variants which allow conversion to ISA-TAB Nano. We will continue discussions with the NanoReg2 ( _http://www.nanoreg2.eu/_ ) IT team on file templates, implemented data formats and maximising interoperability.

## 3.2. MAKING DATA ACCESSIBLE

### 3.2.1. DATA WHICH WILL BE MADE OPENLY AVAILABLE

Where required, restrictions will be applied to the data in accordance with the embargo requirements of journal publication. Other results obtained using confidential information and materials provided by Chiesi Pharmaceuticals ( _http://www.chiesigroup.com/en/_ ), who will be supplying lung surfactant for certain _in vitro_ experiments, will be omitted from the final data release.
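The file-naming convention set out in Section 3.1.5 above ('SNT[SYM]<EXP>[SYM]<VERSION>[SYM]<CREATION_DATE>', e.g. SNT_GRN-v2-02052017) can be sketched with a small helper. The function names and separator choice here are hypothetical; only the pattern itself comes from the text:

```python
import re
from datetime import date

# Minimal sketch of the SmartNanoTox naming convention from Section 3.1.5:
# SNT<sep><EXPERIMENT><sep>v<VERSION><sep><DDMMYYYY>, e.g. SNT_GRN-v2-02052017.
# Helper names are illustrative, not part of any project tooling.

def build_name(experiment: str, version: int, created: date) -> str:
    """Assemble a dataset name from its three components."""
    return f"SNT_{experiment}-v{version}-{created.strftime('%d%m%Y')}"

NAME_RE = re.compile(r"^SNT_(?P<exp>[A-Za-z0-9]+)-v(?P<ver>\d+)-(?P<date>\d{8})$")

def parse_name(name: str) -> dict:
    """Split a convention-following name back into its components."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a SmartNanoTox-style name: {name!r}")
    return {"experiment": m["exp"], "version": int(m["ver"]), "created": m["date"]}

example = build_name("GRN", 2, date(2017, 5, 2))
# example == "SNT_GRN-v2-02052017"
```

A parser like this also makes the convention checkable, so non-conforming uploads to the file server could be flagged automatically.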
In SmartNanoTox, the following data will be made publicly available:

<table> <tr> <th> **Type of activity** </th> <th> **Object of study** </th> <th> **Type of data produced** </th> <th> **Type of data shared** </th> </tr> <tr> <td> In vivo toxicity tests in mice </td> <td> IT instillation of nanomaterials </td> <td> Tissue for histological evaluation and gene profiling, analysis of biological endpoints </td> <td> Toxicological endpoints </td> </tr> <tr> <td> In vivo toxicity tests in rats </td> <td> Nose-only inhalation of nanomaterials </td> <td> Tissue for histological evaluation and gene profiling, analysis of biological endpoints </td> <td> Toxicological endpoints </td> </tr> <tr> <td> Transcriptomics </td> <td> Nanomaterials/tissues </td> <td> Gene expression profile </td> <td> Gene identification </td> </tr> <tr> <td> Proteomics </td> <td> Nanomaterials </td> <td> Mass-spectrometry data for the protein corona content </td> <td> List of proteins in the corona </td> </tr> <tr> <td> Atomistic molecular simulation </td> <td> Protein, lipid-NP interface </td> <td> Energy profiles, trajectories </td> <td> Potentials of mean force for biomolecules/NM </td> </tr> <tr> <td> Coarse-grained molecular simulation </td> <td> Protein, lipid-NP interface </td> <td> Energy profiles </td> <td> Protein descriptors, adsorption energies, protein adsorption affinity rankings </td> </tr> </table>

### 3.2.2. MECHANISMS OF DATA ACCESSIBILITY

(Meta)data and documentation will be made available via a shared file storage server administered by SNT and will be mirrored on the Zenodo open science file storage service ( _https://zenodo.org_ ). Access to the SNT File Server will initially be restricted to SNT partners. Access to Zenodo will similarly be restricted until project completion, or earlier if requested by other researchers.
Where data-specific online storage repositories exist, such as ProteomeXchange ( _http://www.proteomexchange.org/_ ) for proteomics data, datasets will be submitted to them to place the data where they can best be found, accessed and reused by researchers. We have set up the project website ' _http://www.smartnanotox.eu_ ' to publicise the activities of the consortium and the associated projects, and to provide documentation and access information for the project's resources. A member area within the project website has been created where experimental and computational data may also be stored and shared between the participants and, if required, through the project website.

### 3.2.3. METHODS & SOFTWARE REQUIREMENTS

Raw experimental data may be opened in any appropriate tool for reading or analysing the data. For example, mass spectrometry data will be stored in Thermo Xcalibur .RAW format with the accompanying mqpar.xml file, which describes the tool configuration used to run quantitative proteomics analysis in the freely available (registration required) MaxQuant ( _http://www.biochem.mpg.de/5111795/maxquant_ ). Generated ontological data may be viewed in the free-to-use Protégé ( _http://protege.stanford.edu/_ ). 'Omics data interaction network analysis will be performed with Qiagen's Ingenuity Pathway Analysis ( _https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis/_ ) under a pre-existing licence held by partners at UCD, with String-DB ( _https://string-db.org/_ ) (registration required) and with the DAVID Bioinformatics Resource ( _https://david.ncifcrf.gov/home.jsp_ ) (free-to-use).

### 3.2.4. META(DATA) RESOURCES

A project management site has been created at EMDesk ( _https://emdesk.eu_ ) to assist the project members in managing the project and reporting. In its first year the consortium has already generated a considerable amount of experimental data that has been shared, as required, between the consortium partners.
The media for these experimental data are currently stored privately by the groups that produced them and are not yet accessible by all partners, but are delivered on request via shared cloud file-sharing services. From the second year onward, the amount of data produced within the consortium will increase significantly, and so too will the need for secure and flexible sharing methods that can accommodate large volumes of data. To facilitate this, the SmartNanoTox file server ( _http://137.43.122.41:8080_ ) has been established to centralise project data and make it available to all consortium members. While access to the system requires user authentication, the data that it hosts can be shared on request or uploaded to public repositories once data processing has been completed (unless otherwise restricted by publication embargo). The SNT File Server infrastructure offers approximately 36 terabytes of shared disk space, which may be accessed globally from the Linux-based QTS operating system's desktop-emulating browser interface or via a selection of secured and customisable file transfer protocols, e.g. WebDAV, NFS, CIFS and SSH. The project's data will be preserved on this system for a period of five years under the administration of UCD's School of Physics. However, data mirrored on Zenodo servers should be available for a much longer, though indefinite, period (the envisioned minimum duration is 20 years). Most data will be copied to the Zenodo file service as they are produced, and all data that are to be shared will be transferred before the end of the project term.

## 3.3. MAKING DATA INTEROPERABLE & REUSABLE

### 3.3.1. DATA CONVERSION FOR INTEGRATION, SEARCH AND EXPORT

As the project will build on data resources developed for the NANoREG project, we will continue to convert data into ISA-TAB Nano file formats, using NANoREG harmonised terminology and eNanoMapper ontology terms and relations in the description and representation of our _in vivo_ and _in vitro_ data. Tooling will be adopted or developed as required to transform these data into open access formats such as XML, ISA-TAB or JSON documents, allowing the import and export of data into the formats required for further processing, shared transfer or archiving. Human-readable text documentation will also be stored in the ISO-standardised PDF/A format to ensure long-term reusability. Content from tabular datasets such as MS Excel spreadsheets will also be converted to RFC 4180-defined comma-delimited text files to similarly maximise their potential dispersion and reuse. Image data will be converted and saved along with ISO 12639 standard TIFF-formatted duplicates. We will use a combination of semantic web technologies in conjunction with conventional relational (MySQL) and document-orientated (JSON/MongoDB) databases to facilitate information storage, indexing and retrieval.

## 3.4. MAKING DATA REUSABLE

### 3.4.1. LICENSING & ACCESS RESTRICTIONS

There will be no restrictions imposed on the data made available by this project, though registration will be required to access the iQSAR system and the tools developed for data processing. Access to data resulting from work involving Chiesi Pharmaceuticals is protected by a non-disclosure agreement and will not be provided unless specifically permitted by them.

### 3.4.2. GUIDELINES FOR DATA AVAILABILITY

Data will be stored on closed file systems until full or partial access is requested. Except where temporary publication embargoes are imposed, all relevant (meta)data resources will be made accessible upon completion of analysis, verification and integration of information into iQSAR.
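The RFC 4180 conversion of tabular content described in Section 3.3.1 above can be sketched with Python's stdlib csv module, which handles the quoting and CRLF line endings the RFC specifies. The sample rows are invented, and reading the source Excel file is omitted:

```python
import csv
import io

# Illustrative sketch of exporting tabular content as RFC 4180-style CSV.
# The rows are invented sample data; in practice they would come from the
# source MS Excel spreadsheet.
rows = [
    ["nanomaterial", "dose_ug_ml", "endpoint"],
    ["TiO2, anatase", "10", "cell viability"],  # embedded comma gets quoted
    ["ZnO", "50", "oxidative stress"],
]

buffer = io.StringIO()
# RFC 4180 specifies CRLF record separators and double-quote escaping;
# QUOTE_MINIMAL quotes only fields that need it.
writer = csv.writer(buffer, quoting=csv.QUOTE_MINIMAL, lineterminator="\r\n")
writer.writerows(rows)
csv_text = buffer.getvalue()
# First data record: "TiO2, anatase",10,cell viability
```

Round-tripping the output through `csv.reader` recovers the original fields, which makes the conversion easy to verify during batch export.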
Mechanisms by which data will be made accessible have been described in Section 3.2.2. Where embargoes are imposed, citations and links to supplementary data resources will be provided until the embargo has expired, releasing the data to the repositories used by SNT and described in the next section.

### 3.4.3. LONG-TERM DATA AVAILABILITY

As described in sections 3.2.2 and 3.2.4, Zenodo file servers will be used to store (meta)data and documentation relating to the project. Where domain-specific online data repositories exist, such as GEO for high-throughput functional genomics data and PRIDE (as part of ProteomeXchange) for storing and redistributing mass spectrometry proteomics data, they will be used to ensure that data are preserved beyond the lifetime of the project and remain freely accessible to the research community.

# 4\. RESOURCE ALLOCATION

A part-time Data Manager was appointed for one year in May 2017. They will be responsible for implementing as close to a FAIR data framework as is possible within the current DMP and will work with each consortium partner to ensure that the SOPs for data standardization are uniformly applied and robust enough to handle the growing volume and spectrum of data produced. In 2016 a QNAP TS-653A ( _https://www.qnap.com/en/product/model.php?II=213_ ) NAS was purchased to securely store and share all data produced by the project consortium. This machine is hosted by the School of Physics, UCD, and will make data available to authorised users for at least one year beyond the timespan allotted for the project. Approximately €3k was spent on securing the NAS and its six 7.28 TB HDDs. Costs of administration will be covered for one year by the Data Manager appointed in 2017, with subsequent devolution of tasks to existing SNT staff at UCD.

# 5\. DATA SECURITY

## 5.1. DATA ACCESS & SHARING

With the exception of data based on work involving materials and information provided by Chiesi Pharmaceuticals, there are no data used or created by the project partners which may be considered commercially or ethically sensitive. Where institutional data-sharing policies prohibit the sharing of data on commercial 'cloud-based' file storage services, data will be routed to the secure shared folders of the SNT File Server.

## 5.2. DATA RECOVERY

There are currently no facilities in place for data backup; however, most of the data produced will be duplicated across collaborator infrastructure at source. For example, all transcriptomics, proteomics and (the yet to be produced) lipidomics data will be duplicated on Systems Biology Ireland's ( _http://www.ucd.ie/sbi/_ ) data storage servers, from where they will be backed up weekly to resources hosted in UCD's Daedalus datacentre. All data will be mirrored at no supplementary cost to the project on Zenodo servers ( _https://zenodo.org/_ ) and opened to public access in accordance with the guidelines for data availability. Data verification will be carried out using generated MD5 checksums to ensure no loss of data integrity during transfer.

## 5.3. ETHICS COMPLIANCE

This proposal complies with ethical principles (including the highest standards of research integrity, as set out, for instance, in the European Code of Conduct for Research Integrity and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct).
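The MD5 verification step described in section 5.2 can be sketched as follows. This is a minimal illustration, not the project's actual tooling: the filenames and payload are placeholders, and the digest is computed in chunks so that multi-gigabyte datasets never need to fit in memory.

```python
import hashlib
import tempfile
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Simulate a transfer: write the same payload to a "source" and a
# "mirror" file, then confirm the digests match. Any corruption
# introduced in transit would produce differing checksums.
with tempfile.TemporaryDirectory() as tmp:
    source = Path(tmp) / "source.dat"
    mirror = Path(tmp) / "mirror.dat"
    payload = b"example dataset payload"
    source.write_bytes(payload)
    mirror.write_bytes(payload)
    match = md5sum(source) == md5sum(mirror)
    print("integrity ok:", match)
```

In practice the source-side checksum would be recorded in a manifest shipped alongside the data, and the comparison run on the receiving system after transfer.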
# APPENDICES ## APPENDIX A – DATA GENERATION BY WORKPACKAGE As of July 2017 <table> <tr> <th> **Contact** </th> <th> **WP** </th> <th> **Study** </th> <th> **Activity** </th> <th> **Activity Description** </th> <th> **Type of data for sharing** </th> <th> **Shared Data Format** </th> <th> **Archive** </th> <th> **Vol** </th> </tr> <tr> <td> UCD </td> <td> 4 </td> <td> Proteins </td> <td> Molecular simulation </td> <td> Prediction of 3D structure of proteins using i-Tasser software, calculation of molecule characteristics </td> <td> coarse-grained models, sequence descriptors, structure descriptors, adsorption energies </td> <td> TDT, eNanoMapper Templates </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> <tr> <td> Imperial </td> <td> 4 </td> <td> Nanomaterials </td> <td> Molecular simulation </td> <td> Quantum-chemical modelling of NM, calculation of material properties </td> <td> NM coarse-grained presentation, NM descriptors, Hamaker constants </td> <td> TDT, eNanoMapper Templates </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> <tr> <td> UCD </td> <td> 5 </td> <td> AOP </td> <td> Statistical Modelling </td> <td> Computer simulation (CG) of the interaction of the key molecule with the NM </td> <td> Programs & Computational models </td> <td> Mathematica data, ISA-TAB, JSON </td> <td> SNT, Zenodo </td> <td> TBD </td> </tr> <tr> <td> UCD </td> <td> 2 </td> <td> Nanomaterials & Tissues </td> <td> Proteomics </td> <td> Omics studies for in vivo reaction to inhalation/ instillation of NMs for main classes of NMs: gene expression, metabolomics, proteomics </td> <td> MS Protein IDs </td> <td> MaxQuant Data, Tables*, eNanoMapper Templates </td> <td> SNT, PRIDE, Zenodo </td> <td> 50 GB </td> </tr> <tr> <td> UCD </td> <td> 2 </td> <td> Nanomaterials </td> <td> Lipidomics </td> <td> Study of the content and evolution of the NM biomolecular corona and lipid wrap/NM containing complexes from in vivo experiment </td> <td> MS derived Lipid IDs </td> <td> eNanoMapper 
Templates </td> <td> SNT, Zenodo </td> <td> 10 GB </td> </tr> <tr> <td> SU </td> <td> 4 </td> <td> Nanomaterials </td> <td> Molecular simulations </td> <td> Multiscale modelling of NM corona formation and validation of the modelling against in vivo studies </td> <td> bionano interactions descriptors - adsorption free energies; binding coordination </td> <td> data sheets / Tables* </td> <td> SNT, Zenodo </td> <td> 100MB </td> </tr> <tr> <td> HMGU </td> <td> 1 </td> <td> Lung cells </td> <td> In vitro tox tests in murine cells </td> <td> Cell-line exposure experiments using Vitrocell exposure systems </td> <td> Cytotoxicity and biological endpoints (gene/protein-level) </td> <td> data sheets / Tables* </td> <td> SNT, Zenodo </td> <td> </td> </tr> <tr> <td> HMGU </td> <td> 1 </td> <td> Inhalation / IT instillation of nanomaterials </td> <td> In vivo tox tests in mice </td> <td> Inhalation experiments: defined NMs aerosolisation, and investigation of interactions of NM within the lung lining fluid, alveolar epithelial cells and macrophages </td> <td> Toxicological endpoints - Tissue for histological evaluation and gene profiling, analysis of biological endpoints </td> <td> data sheets / Tables* </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> <tr> <td> NRCWE </td> <td> 1 </td> <td> Nanomaterials </td> <td> Surface Tension Experiments </td> <td> Investigation of NM interaction with the lung alveolar fluid and lung integrity </td> <td> Surface tension profiles, dose-response curves - Effect of nanomaterials on lung surfactant surface tension, inhibitory dose </td> <td> data sheets / Tables* </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> </table> _This project has received funding from the_ <table> <tr> <th> NRCWE </th> <th> 3 </th> <th> Database for the lungspecific or respiratory AOP </th> <th> Toxicity Pathway Identification </th> <th> Data mining in Gene Expression (GEO) database for the known pathways for lung diseases, library of biological processes that 
are perturbed by NM uptake </th> <th> Candidate Pathways </th> <th> data sheets / Tables* </th> <th> SNT, GEO, Zenodo </th> <th> 10 MB </th> </tr> <tr> <td> NRCWE </td> <td> 1&3 </td> <td> IT instillation of nanomaterials </td> <td> In vivo tox tests in mice </td> <td> Instillation experiments: Investigation of interactions of NM within the lung lining fluid, alveolar epithelial cells and macrophages </td> <td> Toxicological endpoints - Tissue for histological evaluation & gene profiling, analysis of biological endpoints </td> <td> data sheets / Tables* </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> <tr> <td> FIOH </td> <td> 3&5 </td> <td> Lung tissue </td> <td> Histology evaluation </td> <td> Tissue analysis of in vivo samples </td> <td> **Verbal** evaluation, Semi-quantitative scores </td> <td> TDT, Tables*, Images**, MDs </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> <tr> <td> JSI </td> <td> 2 </td> <td> Labelled Nanoparticles </td> <td> Fluorimetry microspectroscopy, FMS </td> <td> NM modification for efficient tracking (fluorescent/isotopic labelling) </td> <td> fluorescence spectra - procedures for labelling nanoparticles and characterising the efficiency of labelling </td> <td> spectra, synthetic pathways, data sheets & Tables* </td> <td> SNT, Zenodo </td> <td> 50 MB </td> </tr> <tr> <td> JSI </td> <td> 2 </td> <td> Labelled Nanoparticles, cell lines </td> <td> Fluorescence microscopy, FMS, 3D STED with spectral data </td> <td> NM modification for efficient tracking (fluorescent/isotopic labelling) </td> <td> 2D & 3D stack of 2D images (4D & 5D sets of data) - images, evaluation of affinity constants, lifetimes of states, characteristic times of membrane changes </td> <td> Images**, text based, Video </td> <td> SNT, Zenodo </td> <td> 20 GB </td> </tr> <tr> <td> JSI </td> <td> 2 </td> <td> Nanoparticles, Lung-lining fluid, Lipid dispersion </td> <td> In vitro 'NP'-to-lipid affinity </td> <td> Analysis of NM-lipid interaction for labelled NMs </td>
<td> autocorrelation curves analysis & stack of images - images, evaluation of affinity constants </td> <td> interaction description </td> <td> SNT, Zenodo </td> <td> 10 MB </td> </tr> <tr> <td> UL </td> <td> 1&3 </td> <td> Lung cells </td> <td> In vitro tox tests (rat & human cells) </td> <td> Analysis of cell responses to NMs </td> <td> Cytotoxicity and biological endpoints (gene/protein-level), including transcriptomics </td> <td> data sheets / Tables* </td> <td> SNT, Zenodo </td> <td> 20 GB </td> </tr> </table> * archived as CSV text, ** archived as TIFF

## APPENDIX B – IMPORTED DATA & LICENCING

<table> <tr> <th> **Resource** </th> <th> **Usage Description** </th> <th> **Licence Type** </th> <th> **Full description** </th> </tr> <tr> <td> eNanoMapper Templates: NANoREG & NanoGenotox </td> <td> "The NANoREG – eNanoMapper database and NANoREG deliverables, SOPs and other documents [hereinafter: sources] may be used by individuals, institutions, governments, corporations or other business entities so long as the use itself or any works derived from the use of these sources, are not intended to generate sales or profit." </td> <td> CC-BY-NC-4.0 </td> <td> _http://www.rivm.nl/en/About_RIVM/International_Affairs/_ _International_Projects/Completed/NANoREG/NANoREG_specific_license_ __information_ </td> </tr> <tr> <td> eNanoMapper Ontology </td> <td> "The content which is created by eNanoMapper (available in the internal/ folder) is licensed under CC-BY (https://creativecommons.org/licenses/by/3.0/). However, eNanoMapper reuses content from external ontologies which retain their original licenses." </td> <td> GNU Lesser GPL </td> <td> _https://github.com/enanomapper_ _/ontologies/tree/master/licenses_ </td> </tr> <tr> <td> GenBank </td> <td> "no restrictions on the use or distribution of the GenBank data.
However, some submitters may claim patent, copyright, or other intellectual property rights in all or a portion of the data they have submitted" </td> <td> Bespoke </td> <td> _http://hgdownload.soe.ucsc.edu/_ _goldenPath/hg38/bigZips/_ </td> </tr> </table> <table> <tr> <th> Gene Ontology </th> <th> "You are free to copy and redistribute the material in any medium or format and remix, transform, and build upon the material for any purpose, even commercially" </th> <th> CC-BY-4.0 </th> <th> _http://www.geneontology.org/page/use-and-license_ </th> </tr> <tr> <td> Gene Expression Omnibus </td> <td> "NCBI places no restrictions on the use or distribution of the GEO data. However, some submitters may claim patent, copyright, or other intellectual property rights in all or a portion of the data they have submitted." </td> <td> Bespoke </td> <td> _https://www.ncbi.nlm.nih.gov/geo/info/disclaimer.html_ </td> </tr> <tr> <td> Protein Data Bank </td> <td> “are free of all copyright restrictions and made fully and freely available for both noncommercial and commercial use. Users of the data should attribute the original authors of that structural data” </td> <td> Bespoke </td> <td> _https://www.rcsb.org/pdb/static.do?p=general_information/about__ _pdb/policies_references.html_ </td> </tr> <tr> <td> UniProt </td> <td> "free to copy, distribute, display and make commercial use of these databases in all legislations, provided you give us credit" </td> <td> CC-BY-ND-3.0 </td> <td> _http://www.uniprot.org/help/license_ </td> </tr> <tr> <td> QSAR Toolbox </td> <td> Free to use with accreditation </td> <td> Bespoke </td> <td> _http://www.oecd.org/env/ehs/risk-assessment/Eula4.pdf_ </td> </tr> </table>
1061_MAGNEURON_686841.md
**Deliverable content**

**1\. Purpose of the MAGNEURON Data Management Plan (DMP)**

MAGNEURON is a Horizon 2020 project participating in the Open Research Data Pilot. This pilot is part of the Open Access to Scientific Publications and Research Data programme in H2020. The goal of the programme is to foster access to the data generated in H2020 projects. The purpose of the DMP is to set out the main elements of the European data management policy applied by the consortium to all the datasets generated by the project. The DMP is not a fixed document; it has evolved during the lifespan of the project. This final version of the DMP includes an overview of the datasets generated by the project and the specific conditions attached to them. The MAGNEURON project DMP primarily lists the different datasets generated by the project and the main actions that the project implemented to manage these datasets.

**2\. Register of the data sets generated or collected in MAGNEURON**

The register must be understood as a dynamic document, which has been updated regularly during the project lifetime. The information listed below reflects the conception and design of the individual work packages at the end of the project. The data register delivers information according to the Guidelines on Data Management in Horizon 2020:

* Data set reference and name;
* Data set description;
* Standards and metadata;
* Data sharing;
* Archiving and preservation.

1. **Datasets collected within WP1** Not relevant for this work package.

2.
**Datasets collected within WP2 - Preparation of synthetic and recombinant MNPs (Phenix – Christine M.)** <table> <tr> <th> **Data set reference and name: WP2 dataset** Person in charge: Christine Ménager (CNRS) </th> </tr> <tr> <td> **Data type** </td> <td> **Data standards** </td> <td> **Data generation software** </td> </tr> <tr> <td> Images (TEM pictures) </td> <td> Format: TIFF </td> <td> ImageJ (open source image processing program designed for scientific multidimensional images) </td> </tr> <tr> <td> Video tapes </td> <td> Format: AVI, STK </td> <td> MetaMorph Microscopy Automation and Image Analysis Software </td> </tr> <tr> <td> Analysis files (text, calculation) for magnetism, DLS, zeta potential, hyperthermia, spectroscopy (UV-visible absorption, fluorescence) </td> <td> Format: Different text and calculation formats </td> <td> Different text processing and calculation software (Excel, KaleidaGraph, Igor) </td> </tr> <tr> <td> A dataset containing electronic microscopy imaging of the optimized particles and their main characterizations (size analysis, hydrodynamic diameter, magnetism properties) used by the different partners of the project </td> <td> Format of the data: TIFF, JPEG and PDF </td> <td> The dataset was made using ImageJ, Excel, Word, Malvern Zetasizer software </td> </tr> </table> <table> <tr> <th> **Estimated data size** </th> <th> **Data sharing** </th> <th> **Data storage and archiving** </th> </tr> <tr> <td> ~500 gigabytes per year have been generated by WP2. </td> <td> The generated data have been shared only with the team of Christine Ménager and the MAGNEURON project partners. The MAGNEURON partners have used Dropbox for data sharing in WP2, as well as for internal reports and EC deliverables. </td> <td> The data generated in WP2 have been stored only in digital format; no physical support has been kept. The data have been stored on the Phenix local servers.
The scientific data have been stored throughout the project and will be kept for 5 years thereafter. </td> </tr> <tr> <td> 25 Mb </td> <td> The dataset is shared on Zenodo with DOI 10.5281/zenodo.3587537: https://zenodo.org/record/3587537#.XhL6kvxCc2w </td> <td> The dataset will be stored on Zenodo and on the PHENIX internal servers. </td> </tr> </table> 3. **Datasets collected within WP3 - Biofunctionalization and intracellular delivery of MNPs (UOS – Jacob P.)** <table> <tr> <th> **Data set reference and name: WP3 dataset** Person in charge: Jacob Piehler (UOS) </th> <th> </th> </tr> <tr> <td> **Data type** </td> <td> **Data standards** </td> <td> **Data generation software** </td> </tr> <tr> <td> Maps of Plasmids Used for Magnetogenetic Manipulation </td> <td> Format: .cm5 </td> <td> Clone Manager </td> </tr> <tr> <td> Tools & Microscope-Setup for Magnetogenetic Manipulation of Cells </td> <td> Format: .docx </td> <td> MS-Word </td> </tr> <tr> <td> Protocol for Production of MagIcS Nanoparticles </td> <td> Format: .docx </td> <td> MS-Word </td> </tr> <tr> <td> Probing Magnetic Manipulation of MagIcS Nanoparticles Inside Living Cells </td> <td> Format: .docx, .AVI, .TIFF </td> <td> MS-Word, Image-J, AxioVision </td> </tr> <tr> <td> Tools for Probing Magnetogenetic Activation of GTPases </td> <td> Format: .docx, .AVI, .TIFF </td> <td> MS-Word, Image-J, AxioVision </td> </tr> <tr> <td> Script for Analysis of FRET-Ratio Images </td> <td> Format: .cmv </td> <td> Matlab </td> </tr> </table> <table> <tr> <th> **Estimated data size** </th> <th> **Data sharing** </th> <th> **Data storage and archiving** </th> </tr> <tr> <td> ~10-50 gigabytes per year have been generated by WP3. </td> <td> The generated data have been shared only with the team of Jacob Piehler and the MAGNEURON project partners.
The MAGNEURON partners have used two software types for data sharing in WP3: * UOS own software for sharing the scientific data; * Dropbox for sharing internal reports and EC deliverables. </td> <td> The data generated in WP3 have been stored only in digital format; no physical support has been kept. The data have been stored on the UOS local servers using OMERO software. The scientific data will be stored throughout the project and for 10 years thereafter. </td> </tr> <tr> <td> All data described above, including raw data files, are about 10-50 gigabytes in size </td> <td> These data will be published on _https://zenodo.org_ </td> <td> These data will be stored on _https://zenodo.org_ </td> </tr> </table> 4. **Datasets collected within WP4 - Tools for MNP manipulation in single-cell assays** <table> <tr> <th> **Data set reference and name: WP4 dataset** Person in charge: Mathieu Coppey (Institut Curie) </th> </tr> <tr> <td> **Data type** </td> <td> **Data standards** </td> <td> **Data generation software** </td> </tr> <tr> <td> Maps of molecular plasmids </td> <td> Format: JDK </td> <td> Curie Institute application </td> </tr> <tr> <td> Document presenting the tools for magnetic manipulation </td> <td> document docx </td> <td> Word & PowerPoint </td> </tr> <tr> <td> Videos: raw data of manipulation of magnetic nanoparticles in single cells </td> <td> Tiff videos </td> <td> Recorded with Metamorph </td> </tr> <tr> <td> Scripts for analysis </td> <td> MATLAB scripts, csv file with quantification, tracks.
</td> <td> Coded with MATLAB </td> </tr> <tr> <td> Videos: Cellular magnetic manipulation with an endosomal approach </td> <td> AVI, TIFF </td> <td> Metamorph, ImageJ </td> </tr> <tr> <td> Analysis: Cellular magnetic manipulation with an endosomal approach </td> <td> PowerPoint, Excel files </td> <td> PowerPoint, Excel </td> </tr> </table> <table> <tr> <th> **Estimated data size** </th> <th> **Data sharing** </th> <th> **Data storage and archiving** </th> </tr> <tr> <td> ~1-2 terabytes per year have been generated by WP4. </td> <td> The generated data have been shared only with the team of Mathieu Coppey and the MAGNEURON project partners. The MAGNEURON partners have used two software types for data sharing in WP4: * Xfer – Institut Curie own software – for sharing the scientific data; * Dropbox for sharing internal reports and EC deliverables. </td> <td> The data generated in WP4 have been stored only in digital format; no physical support has been kept. The data have been stored on the Curie Institute local servers. The scientific data have been stored throughout the project, and the after-project scientific data storage is to be discussed. </td> </tr> <tr> <td> Tool for magnetic manipulation: 20kB </td> <td> Open data will be published on _https://zenodo.org_ </td> <td> Word document published directly on _zenodo.org_ </td> </tr> <tr> <td> Magnetic manipulations in single cells: 10GB – 50GB of raw data </td> <td> The dataset is shared on Zenodo with DOI 10.5281/zenodo.3579322: _https://zenodo.org/record/3579322#.Xigz52hKhPY_ </td> <td> The data will be stored directly on _zenodo.org_ with a backup on the NAS of the team. </td> </tr> <tr> <td> Scripts for the analysis: 100 MB </td> <td> The dataset is shared on Zenodo with DOI 10.5281/zenodo.3582962: _https://zenodo.org/record/3582962#.XigzFmhKhPY_ </td> <td> Scripts will be stored directly on _zenodo.org_ with a backup on the NAS of the team.
</td> </tr> <tr> <td> Endosomal approach: 80GB </td> <td> Raw data will be published with a description of the published experiments on _https://zenodo.org_ </td> <td> The data will be stored directly on _zenodo.org_ with a backup on the NAS of the team. </td> </tr> </table> 5. **Datasets collected within WP5 - Biomagnetic control of stem cell differentiation (Keele – Neil T. / Michael R.)** <table> <tr> <th> **Data set reference and name: WP5 dataset** Person in charge: Alicia El Haj (Birmingham University) </th> </tr> <tr> <td> **Data type** </td> <td> **Data standards** </td> <td> **Data generation software** </td> </tr> <tr> <td> Video files </td> <td> Format: AVI, WMV </td> <td> Microscopy Automation and Image Analysis Software </td> </tr> <tr> <td> Still images </td> <td> Format: TIF, JPEG </td> <td> ImageJ (open source image processing program designed for scientific multidimensional images) </td> </tr> <tr> <td> Graphing and statistics files </td> <td> Format: various software specific </td> <td> Microcal Origin, SPSS, PRISM, GraphPad, Minitab </td> </tr> <tr> <td> Analysis files </td> <td> Format: MS Excel spreadsheets </td> <td> MS Excel </td> </tr> <tr> <td> Data summary files </td> <td> Format: ppt, docx, pdf </td> <td> MS PowerPoint, MS Word, Adobe Acrobat </td> </tr> </table> <table> <tr> <th> **Estimated data size** </th> <th> **Data sharing** </th> <th> **Data storage and archiving** </th> </tr> <tr> <td> ~1-2 terabytes per year will be generated by WP5. </td> <td> The generated data will be shared only with the team of Alicia El Haj and the MAGNEURON project partners. The MAGNEURON partners will use Dropbox for data sharing in WP5. Processed data associated with published work will be made open access via a suitable data repository (e.g. EPrints / Keele Research repository). </td> <td> The data generated in WP5 will be stored in digital format, and in physical copies in laboratory notebooks.
The data will be stored on the Keele University local servers and hard drives. The scientific data will be stored during the project and for 10 years thereafter. The location for after-project scientific data storage is to be discussed. </td> </tr> </table> In more detail, the following datasets have been identified in WP5: <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP5.1 SH active b-catenin expt. </td> <td> Active b-catenin fluorescent images </td> <td> PPT, Spreadsheet (<10Mb), JPG, <1Mb per image (150+ images), Minitab file (<1Mb per file) </td> <td> Processed images and data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (Poster) </td> </tr> <tr> <td> WP5.1 SH b-catenin ELISA expt. </td> <td> B-cat ELISA </td> <td> PPT, Spreadsheet (<10Mb), Minitab file (<1Mb per file) </td> <td> Processed data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP5.1 SH MNP labelling </td> <td> UM206-MNP and TREK-MNP fluorescent images </td> <td> PPT, TIF, <3Mb per image (100+ images) </td> <td> Processed images shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP5.1 / WP5.2 Receptor expression (PCR) </td> <td> SH and Rat receptor PCR gels and qPCR </td> <td> PPT, TIF 3-13Mb per image (15+ images), spreadsheets (<10Mb per file), Minitab files (<2Mb per file) </td> <td> Processed images shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (MSc dissertation) </td> </tr> <tr> <td> WP5.1/5.2 MNP and PMA IF expt.
</td> <td> TH and DAT fluorescent images </td> <td> PPT, Spreadsheet (<10Mb), TIF (100+ images, <25Mb per image), JPG (50+ images, <2Mb), OIB files (20+ files, <10Mb per file), Minitab file (<1Mb per file) </td> <td> Processed images shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (Poster), Physical (MSc dissertation) </td> </tr> <tr> <td> WP5.1/5.2 Diff. media +/- MNP IF expt. </td> <td> TH and DAT fluorescent images </td> <td> PPT, JPG, (220+ images, (<2Mb per image), OIB files (20+ files, <10Mb per file) </td> <td> Processed images shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (MSc dissertation) </td> </tr> </table> <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP5.1/5.2 Diff. media +/- MNP PCR expt. </td> <td> Neuro. and Dopa. gene expression with Diff. 
media with/without MNP </td> <td> PPT, Spreadsheets (<10Mb per file), Minitab files (<1Mb per file) </td> <td> Processed data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (Poster and MSc dissertation) </td> </tr> <tr> <td> WP5.2 Target receptor homology study (Human vs Rat) </td> <td> Rat Frizzled and TREK receptor homology </td> <td> PPT (<10Mb) </td> <td> Processed data shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP5.1 Receptor expression during differentiation </td> <td> Frizzled, TREK and BIII Tubulin fluorescent images </td> <td> PPT, TIF <10Mb per image (100+ images), JPG <5Mb per image (50+ images) </td> <td> Processed images shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (MSc dissertation) </td> </tr> <tr> <td> WP5.1 Stress response gene expt. </td> <td> Stress response gene expression with TREKMNP </td> <td> PPT, Spreadsheets (<10Mb per file), Minitab files (<1Mb per file) </td> <td> Processed data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (MSc dissertation) </td> </tr> <tr> <td> WP5.1 Wnt response gene expt. </td> <td> Wnt associated gene expression with UM206MNP </td> <td> PPT, Spreadsheets (<10Mb per file), Minitab files (<1Mb per file) </td> <td> Processed data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (MSc dissertation) </td> </tr> <tr> <td> WP5.1 Wnt reporter expt. 
</td> <td> SH TCF/LEF reporter </td> <td> PPT, Spreadsheets (<10Mb per file), Minitab files (<1Mb per file) </td> <td> Processed data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (Poster) </td> </tr> <tr> <td> WP5.2 SH Neurite extension study </td> <td> SH neurite extension fluorescent microscopy </td> <td> PPT, TIFF, <10Mb per image (100+ images), Spreadsheet (<2Mb per file) </td> <td> Processed images shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP5.2 hDNP with MNP study </td> <td> hDNP neurite extension microscopy </td> <td> PPT, JPG, <2Mb per image (40+ images), Spreadsheets (<5Mb per file), Neurite trace file (ImageJ format, <1Mb) </td> <td> Processed images shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> **Data Ref** </td> <td> **Name** </td> <td> **Type and size** **(Standards / metadata)** </td> <td> **Sharing** </td> <td> **Storage / Archiving** </td> </tr> <tr> <td> WP5.2/WP5.3 Rat slice expt’s </td> <td> Rat slice fluorescent images </td> <td> PPT, TIF, <15Mb per image (300+ images) </td> <td> Processed data shared with Magneuron consortia and disseminated at conferences </td> <td> Digital (University hard drives), Physical (Poster) </td> </tr> <tr> <td> WP5.2 Rat VM / N27 cells </td> <td> Rat VM / N27 in vitro diff. 
Expt’ and MNP labelling </td> <td> PPT, TIFF, JPG, <10Mb per image (800+ images), Spreadsheet (<2Mb per file) </td> <td> Processed images shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP2/WP3 MNP and Ferritin samples characterisation </td> <td> ACS (100F1, Dahan MNP, Ferritin Sample K) </td> <td> Spreadsheets (<5Mb each) </td> <td> Processed data shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP5 MNP characterisation </td> <td> Micromod 250nm MNP TEM images </td> <td> PPT, TIF, <2Mb per image (100+ images) </td> <td> Processed data shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> </table> 2. **Datasets collected within WP6 - Directed fibre outgrowth of neuronal cells (RUB – Rolf H.)** <table> <tr> <th> P </th> <th> **Data set reference and name: WP6 dataset** erson in charge: Rolf Heumann (Ruhr-University Bochum) </th> </tr> <tr> <td> **Data type** </td> <td> **Data standards** </td> <td> **Data generation software** </td> </tr> <tr> <td> Video tapes </td> <td> Format: AVI, MP4 </td> <td> Olympus Cell Sense Dimensions Image Analysis Software </td> </tr> <tr> <td> Still images </td> <td> Format: TIF, JPEG </td> <td> ImageJ (open source image processing program designed for scientific multidimensional images) Bio-Rad Image Lab Olympus Cell Sense Dimensions Image Analysis Software </td> </tr> <tr> <td> Maps of molecular plasmids </td> <td> Format: XDNA, DNA </td> <td> Serial Cloner, SnapGene </td> </tr> <tr> <td> Graphing and statistics files </td> <td> Format: XLSX, PZF, PZFX, OPJ and further software specific formats </td> <td> Microsoft Excel, Microcal Origin, GraphPad PRISM </td> </tr> <tr> <td> Analysis files (text, calculation) </td> <td> Format: Different text and calculation formats </td> <td> Different text processing and calculation software </td> </tr> </table> <table> <tr> <th> **Estimated data size** </th> <th> 
**Data sharing** </th> <th> **Data storage and archiving** </th> </tr> <tr> <td> ~1.0 terabyte per year has been generated by WP6. </td> <td> The generated data have been shared only with the team of Rolf Heumann and the MAGNEURON project partners. The MAGNEURON partners have used two software types for data sharing in WP6: * FileShare (Ruhr-University Bochum internal); * Dropbox for sharing internal reports and EC deliverables. </td> <td> The data generated in WP6 have been stored only in digital format; no physical support has been kept. The data have been stored on the Ruhr-University Bochum local servers. The scientific data have been stored throughout the project, and the after-project scientific data storage is to be discussed. </td> </tr> <tr> <td> 5.9 Mb </td> <td> The dataset is shared on Zenodo with DOI 10.5281/zenodo.3606263: https://zenodo.org/record/3606263#.XhyPQyNCdaQ </td> <td> The dataset will be stored on Zenodo and on the RUB internal servers. </td> </tr> </table> 7.
**Datasets collected within WP7 - Magnetic manipulation of cellular signalling in organotypic brain slices and in vivo models of Parkinson’s disease** <table> <tr> <th> **Data set reference and name: WP7 dataset** Person in charge: Monte Gates (Keele) </th> </tr> <tr> <td> **Data type** </td> <td> **Data standards** </td> <td> **Data generation software** </td> </tr> <tr> <td> Video tapes </td> <td> Format: AVI, MPEG </td> <td> Microscopy Automation and Image Analysis Software Ethovision and Tracksys processing software </td> </tr> <tr> <td> Still images </td> <td> Format: TIF, JPEG, JPEG2000 </td> <td> NIS elements (NIKON), and ImageJ (open source image processing) programs designed for scientific multidimensional images MatLab Image Processing Toolbox </td> </tr> <tr> <td> Statistics files </td> <td> Format: SPSS and Prism formats (PZF) </td> <td> SPSS, GraphPad Prism </td> </tr> <tr> <td> Analysis files (excel files, word docs) </td> <td> Format: Different text and spreadsheet formats </td> <td> Microsoft Office </td> </tr> <tr> <td> Video of slice culture protocol / embryo dissection and imaging </td> <td> Combination of video (AVI) and images (JPG, TIFF) </td> <td> Video and images edited in Movie Maker </td> </tr> </table> <table> <tr> <th> **Estimated data size** </th> <th> **Data sharing** </th> <th> **Data storage and archiving** </th> </tr> <tr> <td> ~1-2 terabytes per year will be generated by WP7. </td> <td> The generated data have been shared only with the team of Monte Gates and the MAGNEURON project partners. The MAGNEURON partners have used Dropbox for data sharing in WP7. </td> <td> The data generated in WP7 have been stored in digital format, and in physical copies in laboratory notebooks. The data have been stored on the Keele University local servers and hard drives. The scientific data have been stored during the project and will be kept for 10 years thereafter. The location for after-project scientific data storage is to be discussed.
</td> </tr> <tr> <td> up to 1-2 gigabytes </td> <td> Video/images/description of some protocols (e.g. the tissue slicing protocol, preparation of VM cells and imaging) will be made openly available. </td> <td> Video and images will be stored on Keele University servers and the final version will be made openly available. </td> </tr> </table> A video outlining some of the protocols developed in WP7 will be made openly available. Furthermore, any accepted manuscripts arising from work in WP7 will be made open access. In more detail, the following datasets have been identified in WP7: <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Optimisation of slice cutting angle </td> <td> PPTX, JPG, TIFF, ND2, <3GB </td> <td> Processed summary shared with Magneuron consortia, disseminated at conference </td> <td> Digital (University hard drives), physical (poster) </td> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Cell death analysis in slices </td> <td> PPTX, JPG, TIFF, ND2, XLS Excel spreadsheets, PZF GraphPad Prism spreadsheet, <15GB </td> <td> Processed summary shared with Magneuron consortia, disseminated at conference </td> <td> Digital (University hard drives), physical (poster) </td> </tr> </table> <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Retrograde labelling with FluoroGold and FluoroRuby </td> <td> PPTX, JPG, TIF, ND2, <36GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.1 Optimisation
of slice culture model of Parkinson’s disease </td> <td> Degeneration of TH in slices </td> <td> PPTX, JPG, TIF, ND2, <10GB </td> <td> Processed summary shared with Magneuron consortia, disseminated at conference </td> <td> Digital (University hard drives), physical (poster) </td> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Removal of cortex to improve slice health </td> <td> PPTX, JPG, TIF, ND2, XLS Excel spreadsheets, PZF GraphPad Prism spreadsheet, <3GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Optimisation of slices derived from older rats </td> <td> PPTX, JPG, TIF, ND2, <15GB </td> <td> Processed summary shared with Magneuron consortia, disseminated at conference </td> <td> Digital (University hard drives), physical (poster) </td> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Transplantation of embryonic VM tissue into slices </td> <td> PPTX, JPG, TIFF, ND2, XLS Excel spreadsheets, PZF GraphPad Prism files, <15GB </td> <td> Processed summary shared with Magneuron consortia, disseminated at conference </td> <td> Digital (University hard drives), physical (poster) </td> </tr> </table> <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP Task 7.1 Optimisation of slice culture model of Parkinson’s disease </td> <td> Functional connections between transplanted tissue and slices </td> <td> PPTX, JPG, TIF, ND2, <20GB </td> <td> Processed summary shared with Magneuron consortia, disseminated at conference </td> <td> Digital (University hard drives), physical (poster) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors 
to slice cultures </td> <td> Microinjection of SH- SY5Y cells into slices </td> <td> PPTX, JPG TIF, ND2, <6GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Injection of MNPs directly into slices </td> <td> PPTX, JPG TIF, ND2, <32GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Optimisation of protocol for getting MNPs into cell line and primary cells </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Injection of MNPloaded cells into slices (in collaboration with WP5) </td> <td> PPTX, JPG TIF, ND2, XLS Excel spreadsheets, PZF GraphPad Prism files, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Establishment of E12 VM dopaminergic precursors (neurospheres) </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> </table> <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Images for quantification of TH and Tuj1 outgrowth in slices and SN/VTA explants grafted into 
slices </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Incubation of nanorods with cells to test compatibility </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Comparison of use of E11 and E13 rat embryos to generate neurospheres </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Comparison of use of cell penetrating peptide-linked MNPs and endosomal MNPs in cells </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Comparison of effects/efficacy of different concentrations of endosomal MNPs on SH-SY5Y and primary cells </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Testing the effects of the magnet microarray on MNPs </td> <td> PPTX, JPG TIF, ND2, AVI <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> </table> <table> <tr> <th> **Data Ref** </th> <th> **Name** </th> <th> **Type and size** **(Standards / 
metadata)** </th> <th> **Sharing** </th> <th> **Storage / Archiving** </th> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Use of GIRK2 and Calbindin to differentiate between SN and VTA dopaminergic neurons </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Use of E17 cortical neurons to test uptake of MNPs </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Assessing the effects of size and position of magnets on MNPloaded cells </td> <td> PPTX, JPG TIF, ND2, <2GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Injection of neurospheres into striatum of slices </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> tempMode experiments using neurospheres in slices (in collaboration with WP5) </td> <td> PPTX, JPG TIF, ND2, <20GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Optimisation of D1 and D2 receptor antibody staining </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed 
summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> **Data Ref** </td> <td> **Name** </td> <td> **Type and size** **(Standards / metadata)** </td> <td> **Sharing** </td> <td> **Storage / Archiving** </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Optimisation of growth of primary cells in microfluidic devices </td> <td> PPTX, JPG TIF, ND2, PZF GraphPad Prism files <30GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Improving the loading of MNPs in free- floating primary cells (for injection into slice) </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> Loading of primary VM cells into SN area of the slice </td> <td> PPTX, JPG TIF, ND2, <1GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> <tr> <td> WP Task 7.2/7.3 Transplantation of MNP-loaded dopaminergic neurons/precursors to slice cultures </td> <td> spaceMode experiments using the magnetic device (made by Koceila at IC) to promote outgrowth of transplanted MNP- loaded primary VM cells </td> <td> PPTX, JPG TIF, ND2, PZF GraphPad Prism files <20GB </td> <td> Processed summary shared with Magneuron consortia </td> <td> Digital (University hard drives) </td> </tr> </table> **3\. Conclusion** Over the last reporting period, a great effort was made to open our Data to the general public. We decided to share them mostly through the open repository Zenodo (https://zenodo.org/). 
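Each dataset deposited on Zenodo is assigned a DOI, such as 10.5281/zenodo.3606263 above. As an illustrative sketch (the helper name and validation pattern below are our own, not part of any project tooling), a DOI can be checked for well-formedness and turned into its permanent resolver URL, which always redirects to the current landing page of the dataset:

```python
import re

# A DOI has the shape "10.<registrant>/<suffix>", e.g. 10.5281/zenodo.3606263.
# This pattern is a pragmatic check, not a full implementation of the DOI spec.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_to_url(doi: str) -> str:
    """Return the canonical https://doi.org/ resolver URL for a well-formed DOI."""
    if not DOI_PATTERN.match(doi):
        raise ValueError(f"not a valid DOI: {doi!r}")
    return f"https://doi.org/{doi}"

print(doi_to_url("10.5281/zenodo.3606263"))
# → https://doi.org/10.5281/zenodo.3606263
```

Citing the resolver URL rather than a repository-specific record URL keeps references stable even if the hosting platform reorganises its pages.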
This repository can be linked to European Commission funded research (OpenAIRE). A DOI (Digital Object Identifier) is assigned to each dataset; see for example _https://zenodo.org/communities/locco_ for the IC partner. We will continue this effort over the coming months to ensure that all results and products of the Magneuron project are shared on an open repository. Regarding the archiving of the large volume of data that has been produced, diverse solutions have been adopted to ensure backups and accessibility (NAS, Zenodo, Dropbox, hard drives).
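Whichever backup media are used (NAS, Zenodo, Dropbox, hard drives), fixity checking is a common way to verify that every copy of a dataset remains bit-identical to the original. A minimal sketch, assuming plain files on locally mounted storage (the function names are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks, so large files fit in constant memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(original: Path, copies: list[Path]) -> dict:
    """Compare each backup copy against the original; True means the copy is intact."""
    reference = sha256_of(original)
    return {str(copy): sha256_of(copy) == reference for copy in copies}
```

Recording the reference checksums alongside the dataset (e.g. in a manifest file) allows the same check to be re-run periodically, which is also how data migrations between storage systems can be validated.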
Clarification in section 3 of the introduction that this first version of the Data Management Plan includes first examples from the scientific use cases, and that others will be added during subsequent document revisions </td> <td> David Foster, Bob Jones (CERN) </td> </tr> <tr> <td> V1.0 </td> <td> </td> <td> Final Version </td> <td> Rachida Amsaghrou (CERN) </td> </tr> </table> **Document Approval** <table> <tr> <th> **Issue** </th> <th> **Date** </th> <th> **Name** </th> </tr> <tr> <td> V0.1 </td> <td> 04/03/2016 </td> <td> First draft for circulation within the collaboration </td> </tr> <tr> <td> V0.2 </td> <td> 23/03/2016 </td> <td> Final draft revised by the HNSciCloud project office </td> </tr> </table> **Executive Summary** This document describes the initial Data Management Plan (DMP) for the HNSciCloud project. It addresses project administration data collected as part of the execution and management of the Pre-Commercial Procurement (PCP) process within the project, as well as data used as part of the scientific use cases to be deployed on the cloud services. The DMP for the scientific use cases is based on the Guidelines on Data Management in Horizon 2020, version 2.0 of 30 October 2015. These guidelines state: _“Participating projects will be required to develop a Data Management Plan (DMP), in which they will specify what data will be open.”_ As such, this DMP focuses primarily on data sharing and re-use and in particular on the following issues (Annex 1 of the H2020 DMP Guidelines): * _What types of data will the project generate/collect?_ * _What standards will be used?_ * _How will this data be exploited and/or shared/made accessible for verification and re-use?
If data cannot be made available, explain why._ * _How will this data be curated and preserved?_ Regular updates to this data management plan will be made according to the following draft schedule: <table> <tr> <th> **Date** </th> <th> **Milestone** </th> <th> **Issue(s) to be addressed in revision** </th> </tr> <tr> <td> May 2016 </td> <td> Tender publication </td> <td> Were all the use-cases retained? </td> </tr> <tr> <td> April 2017 </td> <td> Start of prototype phase </td> <td> Could all the use-cases be satisfied? </td> </tr> <tr> <td> January 2018 </td> <td> Start of pilot phase </td> <td> Could all the use-cases be deployed? </td> </tr> <tr> <td> June 2018 </td> <td> End of Project </td> <td> Are the plans for preservation beyond the end of the project still valid? </td> </tr> </table> # Introduction This Data Management Plan (DMP) addresses two distinct sets of data to be managed by the HNSciCloud project: * **Project administration data** : data collected as part of the execution and management of the Pre-Commercial Procurement (PCP) process within the project * **Scientific use cases** : data managed by some of the use cases that will be supported during the pilot phase by the cloud services to be developed and procured by the PCP process. These two sets of data are treated separately by the project and in this document. This is the initial version of the data management plan. It explains the provisions for project administration data and the general approach for scientific use cases with first examples from high energy physics use cases. The data management plan will be reviewed and updated at major milestones in the project, once the final set of use cases has been refined, and with the following tentative schedule: <table> <tr> <th> **Date** </th> <th> **Milestone** </th> <th> **Issue(s) to be addressed in revision** </th> </tr> <tr> <td> May 2016 </td> <td> Tender publication </td> <td> Were all the use-cases retained? 
</td> </tr> <tr> <td> April 2017 </td> <td> Start of prototype phase </td> <td> Could all the use-cases be satisfied? </td> </tr> <tr> <td> January 2018 </td> <td> Start of pilot phase </td> <td> Could all the use-cases be deployed? </td> </tr> <tr> <td> June 2018 </td> <td> End of Project </td> <td> Are the plans for preservation beyond the end of the project still valid? </td> </tr> </table> **Table 1 - Tentative Data Management Plan Update Schedule** # Project administration data This section describes the plan for the data to be managed as part of the execution and management of the Pre-Commercial Procurement (PCP) process within the project. The project office (WP1) will gather and store contact details (name, email address, role within company, company name, and company postal address) provided by individuals representing companies and organisations interested or participating in the PCP process. Such details will be used by the consortium as part of the communication plan (WP7) and procurement process (WP2). The contact details will also be used as the basis for statistics summarizing the level and scope of engagement in the procurement process and will be reported (anonymously) in the mandatory deliverable reports. Beyond the summary statistics reported in the project deliverables, such information will be restricted to the consortium members. The data will be maintained beyond the lifetime of the project as email lists and entries in the supplier database of the lead procurer, which may be used for subsequent procurement activities in the cloud services domain. # Scientific use cases This section describes the plan for the data to be managed as part of the scientific use cases that will be deployed during the pilot phase of the pre- commercial procurement process. As a general rule, data that is created in the Cloud will be copied back to institutional and/or discipline-specific repositories for long-term preservation, curation and sharing. 
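The anonymous engagement statistics mentioned above can be produced by aggregating over non-identifying fields only and discarding the identifying columns before anything is reported. A minimal sketch (the record layout and field names are assumptions, not the actual supplier-database schema):

```python
from collections import Counter

# Hypothetical contact records as described for WP1 (names and values invented).
contacts = [
    {"name": "A", "email": "a@example.eu", "role": "CTO",   "country": "FR"},
    {"name": "B", "email": "b@example.eu", "role": "Sales", "country": "FR"},
    {"name": "C", "email": "c@example.eu", "role": "CTO",   "country": "DE"},
]

def anonymous_summary(records, field):
    """Count records per value of one non-identifying field; no names or
    email addresses survive into the output."""
    return dict(Counter(r[field] for r in records))

print(anonymous_summary(contacts, "role"))
```

Only aggregate counts of this kind would appear in the deliverable reports, while the underlying contact details remain restricted to the consortium members.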
Many of these repositories (e.g. the WLCG Tier0 and Tier1 sites) are in the process of self-certification according to ISO 16363: _ISO 16363:2012 defines a recommended practice for assessing the trustworthiness of digital repositories. It is applicable to the entire range of digital repositories. ISO 16363:2012 can be used as a basis for certification._ As such, the procured services will not be the mechanism by which data preservation for the use-cases presented below will be ensured. This matches the hybrid public-commercial cloud model where the publicly operated data centres provide data preservation facilities. Once the PCP model has been proven to work, we may begin entrusting it with the procurement of data preservation services, possibly initially on ISO 16363-certified sites. However, this will be outside the scope of the HNSciCloud PCP project. A number of the experiments that form part of the High Energy Physics (HEP) Use Case, namely the Belle II experiment at KEK and the four main LHC experiments at CERN (ALICE, ATLAS, CMS and LHCb), collaborate through both the Worldwide LHC Computing Grid (WLCG) project (for data processing, distribution and analysis) as well as DPHEP (Data Preservation and Long-Term Analysis in HEP). DPHEP maintains a portal through which information on these and other experiments can be obtained, the status of their data preservation activities and, in some cases, access to data released through Open Data policies. There is significant overlap between the data preservation plans and the H2020 DMP guidelines. For these experiments, we present the current status / plans in the agreed DPHEP format whereas for the LHC experiments it is also presented according to the H2020 guidelines (Annex 1). (The DPHEP format is quite detailed but emphasizes that these “plans” are backed by “implementation” and that there is clear evidence of data sharing and re-use.
Furthermore, the data preservation services work at a scale of 100TB – 100+PB, with an outlook to perhaps 10EB and a duration of several decades). This version of the Data Management Plan includes initial examples from the scientific use cases; others will be added during subsequent document revisions. ## H2020 Data Management Plans for the LHC Experiments These Data Management Plans were elaborated at a Data Preservation (DPHEP) workshop in Lisbon in February 2016 with the assistance of two of the co-chairs of the Research Data Alliance (RDA) Interest Group on Advanced Data Management Plans (ADMP). Representatives from the LHC experiments as well as HNSciCloud / WLCG Tier0/1 sites contributed to the debate. <table> <tr> <th> **H2020 Annex 1 Guidelines** </th> </tr> <tr> <td> Guideline </td> <td> Guidance </td> <td> Statement </td> </tr> <tr> <td> Data set reference and name </td> <td> _Identifier for the data set to be produced._ </td> <td> This Data Management Plan (DMP) refers to the data set generated by the 4 main experiments (also known as “Collaborations”) currently taking data at CERN’s Large Hadron Collider (LHC). These experiments are ALICE, ATLAS, CMS and LHCb. For the purpose of this plan, we refer to this data set as “The LHC Data”. In terms of Data Preservation, the software, its environment and associated documentation must also be preserved (see below).
Further details can be found at the DPHEP portal site, with entries for each of the above experiments: * _http://hep-project-dphep-portal.web.cern.ch/content/alice_ * _http://hep-project-dphep-portal.web.cern.ch/content/atlas_ * _http://hep-project-dphep-portal.web.cern.ch/content/cms_ * _http://hep-project-dphep-portal.web.cern.ch/content/lhcb_ </td> </tr> <tr> <td> Data set description </td> <td> _Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful,_ </td> <td> The 4 experiments referenced above have clear scientific goals as described in their Technical Proposals and via their Websites (see _https://greybook.cern.ch/greybook/_ for the official catalogue of all CERN </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> _and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse._ </th> <th> experiments that is maintained by the CERN Research Board). Hundreds of scientific publications are produced annually. The data is either collected by the massive detectors of the above experiments (the raw data), is derived from it, or is the result of the simulation of physics processes according to theoretical models and the simulated response of the detector to these models. Similar data – but at lower energies – have been produced by previous experiments and comparisons of results from past, present and indeed future experiments are routine. (See also the DPHEP portal for further information: _http://hep-project-dphep-portal.web.cern.ch/_ ) The data behind plots in publications has been made available for many decades via an online database: _http://hepdata.cedar.ac.uk/_ . The data is re-used by theorists, by the collaborations themselves, by scientists in the wider context, as well as for Education and Outreach.
</th> </tr> <tr> <td> Standards and metadata </td> <td> _Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created._ </td> <td> The 4 main LHC experiments work closely together through the WLCG Collaboration on data management (and other) tools and applications. At least a number of these have found use outside the HEP community but their initial development has largely been driven by the scale and timeline of the above. The ROOT framework, in particular, is used as “I/O library” </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> (and much more) by all LHC experiments and is a _de-facto_ standard within HEP, also across numerous other laboratories. The meta-data catalogues are typically experiment-specific although globally similar. The “open data release” policies foresee the availability of the necessary metadata and other “knowledge” to make the data usable (see below). </th> </tr> <tr> <td> Data sharing </td> <td> _Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.)._ _In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal_ </td> <td> The 4 LHC experiments have policies for making data available, including reasonable embargo periods, together with the provision of the necessary software, documentation and other tools for re-use. Data releases through the CERN Open Data Portal ( _http://opendata.cern.ch/_ ) are published with accompanying software and documentation.
A dedicated education section provides access to tailored datasets for self-supported study or use in classrooms. All materials are shared with Open Science licenses (e.g. CC0 or CC-BY) to enable others to build on the results of these experiments. All materials are also assigned a persistent identifier and come with citation recommendations. </td> </tr> </table> <table> <tr> <th> </th> <th> _data, intellectual property, commercial, privacy-related, security-related)._ </th> <th> </th> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> <td> _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered._ </td> <td> The long-term preservation of LHC data is the responsibility of the Tier0 and Tier1 sites that form part of the WLCG Collaboration. A Memorandum of Understanding (MoU) outlines the responsibilities of sites that form part of this collaboration (Tier0, Tier1s and Tier2s). In the case of the Tier0 and Tier1s, this includes “curation” of the data with at least two copies of the data maintained worldwide (typically 1 copy at CERN and at least 1 other copy distributed over the Tier1 sites for that experiment). The costs for data storage and “bit preservation” form part of the resource requests that are made regularly to the funding agencies. A simple cost model shows that the annual storage costs – even including the anticipated growth – go down with time and remain within the funding envelope foreseen. (The integrated costs of course rise). Personnel from the Tier0 and Tier1 sites have followed training in ISO 16363 certification – A Standard for Trusted Digital Repositories – and self-certification of these sites is underway. </td> </tr> <tr> <td> </td> <td> </td> <td> Any data generated on external resources, e.g.
Clouds, is copied back for long-term storage to the Tier0 or Tier1 sites. The eventual long-term storage / preservation of data in the Cloud would require not only that such services are cost effective but also that they are certified according to agreed standards, such as ISO 16363. The data themselves should be preserved for a number of decades – at least during the active data taking and analysis period of the LHC machine and preferably until such a time as a future machine is operational and results from it have been compared with those from the LHC. The total data volume – currently of the order of 100PB – is expected to eventually reach 5-10 EB (in circa 2035 – 2040). Additional services are required for the long-term preservation of documentation (digital libraries), the software to process and/or analyse the data, as well as the environment needed to run these software packages. Such services will be the subject of the on-going self-certification. </td> </tr> </table> ## H2020 Data Management Plans for CTA <table> <tr> <th> **H2020 Annex 1 Guidelines** </th> </tr> <tr> <td> Guideline </td> <td> Guidance </td> <td> Statement </td> </tr> <tr> <td> Data set reference and name </td> <td> _Identifier for the data set to be produced._ </td> <td> The CTA project is an initiative to build the next generation ground-based very high energy gamma-ray instrument. It will serve as an open observatory to a wide astrophysics community and will provide a deep insight into the non-thermal high-energy universe. </td> </tr> <tr> <td> Data set description </td> <td> _Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities for integration and reuse._ </td> <td> </td> </tr> <tr> <td> Standards and metadata </td> <td> _Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created._ </td> <td> </td> </tr> <tr> <td> Data sharing </td> <td> _Description of how data will be shared, including access procedures, embargo periods (if any),_ </td> <td> </td> </tr> <tr> <td> </td> <td> _outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.)._ _In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related)._ </td> <td> </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> <td> _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered._ </td> <td> </td> </tr> </table> ## Data Management Plans / Status for HEP Experiments These plans are presented in tabular form as on the DPHEP portal site: _http://hep-project-dphep-portal.web.cern.ch/_ . These plans will be revised at regular intervals and the site should be consulted for the most up-to-date information. They were prepared as a result of a DPHEP workshop 1 held at CERN in June 2015 and form part of the DPHEP Status Report (DOI: _10.5281/zenodo.34591_ ). The workshop aimed to: 1.
Establish the motivation for long-term data preservation in HEP in terms of succinct Use Cases: are there a common set of Use Cases, such as those that were recently agreed for the 4 main LHC experiments, but in a more global scope? 2. Review the existing areas of "Common Projects": can these be extended (similarly) from their current scope – often LHC – to become more global? 3. Perform a site-experiment round-table to capture the current situation HEP-wide: these are summarized in the Status Report and in the tables shown below. Re-use covers re-use by the Collaboration that initially acquired and analysed the data, by theorists (e.g. to check their models), by the wider scientific community, by the general public and for education / outreach purposes.

## Belle and Belle II

<table>
<tr> <th> **Preservation Aspect** </th> <th> **Status (Belle II)** </th> </tr>
<tr> <td> **Bit Preservation** </td> <td> Preamble: The central computing system at KEK is replaced every four years. The main user must be Belle II until the data-taking ends (in 2024). Belle: mDST (necessary for physics analysis) is stored on disk as well as in the tape library. The data is still frequently read by active analysis users. All data will be preserved by migrating to the next system. We experienced data loss in a previous data migration; the main causes were the short migration period, miscommunication between researchers and operators, and the lack of a validation scheme after the migration. We will improve the process for future migrations. </td> </tr>
<tr> <th> **Data** </th> <th> Belle: raw data (1 PB) and other formats (incl. simulation, ~1.5 PB) are stored at the KEK central computing system. These data will at least be migrated to the next system (i.e. preserved until 2020). There is no plan thereafter, because the data will be superseded by Belle II. A full set of mDST has also been copied to PNNL in the USA.
Belle II: data taking has not yet started, but raw data will be stored at KEK and another set will be copied to sites outside Japan. Also, replicas of the mDST will be distributed to the worldwide collaborating computing sites. </th> </tr>
<tr> <td> **Documentation** </td> <td> Belle: all documentation is stored in the local web server and INDICO system. They are still active and accessible, but not well catalogued at all. Belle II: using the twiki, invenio, SVN and INDICO systems. </td> </tr>
<tr> <td> **Software** </td> <td> Belle: the software has been fixed since 2009 except for some patches. The baseline OS is still SL5, but it was migrated to SL6. In parallel, the Belle data I/O tool is developed and integrated in the Belle II software. Thanks to this, the Belle data can be analysed under the Belle II software environment. Other handmade Belle analysis tools are being integrated as well. The software version is maintained with SVN. Belle II: the basic features which are necessary for the coming data taking have been implemented, but need more tuning and improvement. The software version is maintained with SVN. SL5/6 (32/64-bit) and Ubuntu 14.04 LTS are supported. </td> </tr>
<tr> <td> **Use Case(s)** </td> <td> Continued analysis by Belle. </td> </tr>
<tr> <td> **Target Community(ies)** </td> <td> Belle and Belle II </td> </tr>
<tr> <td> **Value** </td> <td> Quantitative measures (# papers, PhDs etc.) exist. Belle: during the data taking period (1999-2010), the average number of journal publications is ~30 papers/year and the number of PhDs is ~12/year. After the data-taking, a moderate decreasing tendency can be seen, but the analysis is still active (~20 publications/year and ~7 PhDs/year). </td> </tr>
<tr> <td> **Uniqueness** </td> <td> Belle: compared with data from hadron colliders, the Belle data has the advantage of analysing physics modes with missing energy and neutral particles.
Until Belle II starts, these data are unique, as are _BABAR_ 's data (see _http://hep-project-dphep-portal.web.cern.ch/content/babar_ ). Belle II: Belle data will be superseded by 2020; after that, the data will be unique samples. </td> </tr>
<tr> <td> **Resources** </td> <td> Belle / Belle II: at some stage, the Belle data must be treated as a part of the Belle II data, and resources for the Belle data will be included in the Belle II computing/human resources. </td> </tr>
<tr> <td> **Status** </td> <td> Construction of the Belle II detector/SuperKEKB accelerator as well as of the Belle II distributed computing system. </td> </tr>
<tr> <td> **Issues** </td> <td> A couple of items have to be implemented in the Belle II software framework to analyse the Belle data. Further checks of performance and reproducibility are also necessary. </td> </tr>
<tr> <td> **Outlook** </td> <td> Expect to be able to analyse the Belle data within the Belle II software framework. This requires fewer human resources to maintain the Belle software and gives a longer lifetime for Belle data analysis. </td> </tr>
</table>

## ALICE

<table>
<tr> <th> **Preservation Aspect** </th> <th> **Status (ALICE)** </th> </tr>
<tr> <td> **Bit Preservation** </td> <td> On tape: data integrity check during each access request. On disk: periodic integrity checks. </td> </tr>
<tr> <td> **Data** </td> <td> 7.2 PB of raw data were acquired between 2010 and 2013; these are stored on tape and disk in 2 replicas. </td> </tr>
<tr> <td> **Documentation** </td> <td> ALICE analysis train system & bookkeeping in the MonALISA DB for the last 3-4 years. Short introduction along with the analysis tools on Opendata. </td> </tr>
<tr> <td> **Software** </td> <td> The software package "AliRoot" is published on CVMFS. For Open Access, the data and code packages are available on Opendata (http://opendata.cern.ch/). </td> </tr>
<tr> <td> **Use Case(s)** </td> <td> Educational purposes like the CERN Masterclasses; outreach activities. </td> </tr>
<tr> <td> **Target Community(ies)** </td> <td> Re-use of data within the collaboration(s), sharing with the wider scientific community, Open Access releases. </td> </tr>
<tr> <td> **Value** </td> <td> Analysis, publications and PhDs continue to be produced. </td> </tr>
<tr> <td> **Uniqueness** </td> <td> Unique data sets from the LHC in pp and HI. Similar data can only be collected by the other LHC experiments. </td> </tr>
<tr> <td> **Resources** </td> <td> Since the experiment is still running, budget and FTEs are shared with the operation of the computing centre. </td> </tr>
<tr> <td> **Status** </td> <td> The first data, from 2010, have been released to the public (8 TB ≈ 10% of the data). Some analysis tools are available on Opendata for the CERN Masterclass program. </td> </tr>
<tr> <td> **Issues** </td> <td> Improve the user interface. The interaction with the open-access portal is very slow due to long communication times, e.g. the uploading of data is done by some people in the IT department; interaction via an automated website would be faster.
</td> </tr> <tr> <td> **Outlook** </td> <td> Ongoing analysis within the collaboration Making realistic analysis available on the open-access portal Deployment of more data </td> </tr> </table> ## ATLAS <table> <tr> <th> **Preservation Aspect** </th> <th> **Status (ATLAS)** </th> </tr> </table> <table> <tr> <th> **Bit Preservation** </th> <th> Non-Reproducible data exist in two or more geographically disparate copies across the WLCG. The site bit preservation commitments are defined in the WLCG Memorandum of Understanding 2 . All data to be reprocessed with most recent software to ensure longevity. </th> </tr> <tr> <td> **Data** </td> <td> Non-reproducible: RAW physics data, calibration, metadata, documentation and transformations (jobs). Derived data: formats for physics analysis in collaboration, formats distributed for education and outreach. Greatly improved by common derived data production framework in run 2. Published results in journals and HEPDATA. Sometimes with analysis published in Rivet and RECAST. Format lifetimes are hard to predict, but on current experience are 5-10 years, and changes are likely to coincide with the gaps between major running periods. </td> </tr> <tr> <td> **Documentation** </td> <td> Software provenance of derived data stored in Panda database. Numerous twikis available describing central and analysis level software. Interfaces such as AMI and COMA contain metadata. The publications themselves are produced via the physics result approval procedures set out in ATLGEN-INT-2015-001 held in CDS; this sets out in detail the expected documentation within papers and the supporting documentation required. </td> </tr> <tr> <td> **Software** </td> <td> Compiled libraries and executable of the “Athena” framework are published on CVMFS. Software versioning is maintained on the CERN subversion server. 
</td> </tr>
<tr> <td> **Use Case(s)** </td> <td> Main usage of data: future analysis within the collaboration. Further usage: review in the collaboration and potential for outreach. </td> </tr>
<tr> <td> **Target Community(ies)** </td> <td> Re-use of data (new analyses) within the collaboration, open access sharing of curated data. </td> </tr>
<tr> <td> **Value** </td> <td> Publications by the collaboration. Training of PhDs. </td> </tr>
<tr> <td> **Uniqueness** </td> <td> Unique data sets (both pp and HI) being acquired between now and 2035. Similar data only acquired by other LHC experiments. </td> </tr>
<tr> <td> **Resources** </td> <td> The active collaboration shares the operational costs with the WLCG computing centres. </td> </tr>
<tr> <td> **Status** </td> <td> ATLAS replicates the non-reproducible data across the WLCG and maintains a database of software provenance to reproduce derived data. Plans to bring run 1 data to run 2 status. Masterclass exercises available on the CERN Open Data Portal; expansion considered. Some analyses published on Rivet/RECAST. </td> </tr>
<tr> <td> **Issues** </td> <td> Person-power within the experiment is hard to find. Validation of future software releases against former processing is crucial. No current plans beyond the lifetime of the experiment. </td> </tr>
<tr> <td> **Outlook** </td> <td> On-going development of RECAST with Rivet, and collaboration with CERN IT and the other LHC experiments via the CERN Analysis Portal as a solution to the problem of analysis preservation. </td> </tr>
</table>

## CMS

<table>
<tr> <th> **Preservation Aspect** </th> <th> **Status (CMS)** </th> </tr>
<tr> <td> **Bit Preservation** </td> <td> Follow WLCG procedures and practices. Check checksums in any file transfer. </td> </tr>
<tr> <td> **Data** </td> <td> RAW data stored at two different T0: 1. 0.35 PB 2010 2. 0.56 PB 2011 3. 2.2 PB 2012 4.
0.8 PB heavy-ion 2010-2013. Legacy reconstructed data (AOD): * 60 TB 2010 data reprocessed in 2011 with CMSSW42 (no corresponding MC) * 200 TB 2011 and 800 TB 2012 reprocessed in 2013 with CMSSW53 (with partial corresponding MC for 2011, and full MC for 2012). Several reconstruction reprocessings. The current plan: keep a complete AOD reprocessing (in addition to 2×RAW): * no reconstructed collision data have yet been deleted, but deletion campaigns are planned * most Run 2 analyses will use miniAODs, which are significantly smaller in size. Open data: 28 TB of 2010 collision data released in 2014, and 130 TB of 2011 collision data to be released in 2015, available in the CERN Open Data Portal (CODP). Further public releases will follow. </td> </tr>
<tr> <td> **Documentation** </td> <td> Data provenance included in data files and further information collected in the CMS Data Aggregation System (DAS). Analysis approval procedure followed in CADI. Notes and drafts stored in CDS. Presentations in Indico. User documentation in Twiki serves mainly the current operation and usage. Basic documentation and examples provided for open data users in CODP. A set of benchmark analyses reproducing published results with open data is in preparation, to be added to CODP. </td> </tr>
<tr> <td> **Software** </td> <td> CMSSW is open source and available in GitHub and in CVMFS. Open data: a VM image (CERNVM), which builds the appropriate environment from CVMFS, is available in CODP. </td> </tr>
<tr> <td> **Use Case(s)** </td> <td> Main usage: analysis within the collaboration. Open data: education, outreach, analysis by external users. </td> </tr>
<tr> <td> **Target Community(ies)** </td> <td> Main target: collaboration members. Open data: easy access to old data for collaboration members and external users. </td> </tr>
<tr> <td> **Value** </td> <td> Data-taking and analysis is on-going; more than 400 publications by CMS. Open data: educational and scientific value, societal impact. </td> </tr>
<tr> <td> **Uniqueness** </td> <td> Unique; only the LHC can provide such data in any foreseeable time-scale. </td> </tr>
<tr> <td> **Resources** </td> <td> Storage within the current computing resources. Open data: storage for the 2010-2011 open data provided by CERN IT, further requests to be allocated through the RRB. </td> </tr>
<tr> <td> **Status** </td> <td> Bit preservation guaranteed in the medium term within the CMS computing model and agreements with computing tiers, but long-term preservation beyond the lifetime of the experiment not yet addressed (storage, agreements, responsibilities). The open data release has resulted in: * data and software access independent from the experiment-specific resources * a timely capture of the basic documentation, which, although limited and incomplete, makes long-term data reuse possible * common solutions and services. </td> </tr>
<tr> <td> **Issues** </td> <td> Competing with already scarce resources needed by an active experiment. Knowledge preservation and the lack of persistent information on intermediate analysis steps to be addressed by the CERN Analysis Preservation framework (CAP): * CMS has provided input for the data model and user interface design, and is defining pipelines for automated ingestion from CMS services * the CAP use-cases are well acknowledged by CMS * CAP will be a valuable tool to start data preservation while the analysis is active. Long-term reusability: freezing the environment (VM) vs evolving data: both approaches will be followed, and CMS tries to address the complexity of the CMS data format. </td> </tr>
<tr> <td> **Outlook** </td> <td> The impact of the open data release was very positive: * well received by the public and the funding agencies * no unexpected additional workload to the collaboration * the data are in use. Excellent collaboration with CERN services developing data preservation and open access services and with DASPOS: * common projects are essential for long-term preservation * benefit from expertise in digital archiving and library services * fruitful discussions with other experiments. Long-term vision and planning is difficult for ongoing experiments: * DPHEP offers a unique viewpoint. Next steps for CMS: * stress-test the CERN Open Data Portal with the new data release * develop and deploy the CMS-specific interface to the CERN Analysis Preservation framework. </td> </tr>
</table>

## LHCb

<table>
<tr> <th> **Preservation aspect** </th> <th> **Status (LHCb)** </th> </tr>
<tr> <td> **Bit preservation** </td> <td> Data and MC samples are stored on tape and on disk. Two copies of raw data on tape; 1 copy on tape of full reconstructed data (FULL.DST, which also contains raw data); 4 copies of stripped data (DST) on disk for the last (N) reprocessing. Two copies for the N-1 reprocessing. One archive replica on tape. </td> </tr>
<tr> <td> **Data** </td> <td> For the long-term future, LHCb plans to preserve only a legacy version of data and MC samples. Run 1 legacy data: 1.5 PB (raw), 4 PB FULL.DST, and 1.5 PB stripped DST. Run 1 legacy MC: 0.8 PB DST. Open data: LHCb plans to make 50% of analysis-level data (DST) public after 5 years, and 100% public 10 years after it was taken.
The data will be made public via the Open Data portal ( _http://opendata.cern.ch/_ ). Samples for educational purposes are already public for the International Masterclass Program and are accessible also via the Open Data portal (For Education area). </td> </tr>
<tr> <td> **Documentation** </td> <td> Data: dedicated webpages for data and MC samples, with details about all processing steps. Software: twiki pages with software tutorials, mailing lists. Documentation on how to access and analyse the masterclass samples is available on the LHCb webpage and on the OpenData portal. </td> </tr>
<tr> <td> **Software** </td> <td> The software is organised as a hierarchy of projects containing packages, each of which contains some C++ or Python code. Three projects for the framework (Gaudi, LHCb, Phys), several "component" projects for algorithms (e.g. Lbcom, Rec, Hlt, Analysis), and one project per application containing the application configuration (e.g. Brunel, Moore, DaVinci). Software repository: SVN. Open access: once the data are made public, software to work with DST samples will be released with the necessary documentation. A virtual machine image of the LHCb computing environment allows users to access and analyse the public samples available on the Open Data portal. </td> </tr>
<tr> <td> **Use cases** </td> <td> New analysis on legacy data; analysis reproduction; outreach and education. </td> </tr>
<tr> <td> **Targeted communities** </td> <td> LHCb collaboration; physicists outside the collaboration; general public. </td> </tr>
<tr> <td> **Value** </td> <td> LHCb is complementary to the other LHC experiments. </td> </tr>
<tr> <td> **Uniqueness** </td> <td> Unique samples of pp and HI collisions collected in the forward region. </td> </tr>
<tr> <td> **Resources** </td> <td> Dedicated working group within the LHCb computing group. </td> </tr>
<tr> <td> **Status** </td> <td> Legacy software and data releases defined.
Development of a long-term future validation framework ongoing. Masterclass samples and analysis software available via the Open Data portal. Collaboration with CERN IT and the other LHC experiments for the development of an analysis preservation framework. </td> </tr>
<tr> <td> **Issues** </td> <td> The main issue is manpower. </td> </tr>
<tr> <td> **Outlook** </td> <td> Collaboration with CERN IT and the other LHC experiments on the Open Data portal and the analysis preservation framework. Enrich the Open Data portal with additional masterclass exercises and real LHCb analyses. Exploit VM technology to distribute the LHC computing environment. </td> </tr>
</table>

## Summary

We have presented Data Management Plans for some of the main communities that are candidates to use procured services through HNSciCloud. Several of these communities have elaborated DMPs prior to the start of the project, with a focus on data preservation, sharing, re-use and verification of their results. Whilst these plans may be more detailed than is required by the H2020 guidelines, they nevertheless reflect the concrete work in these areas and provide a solid basis on which data management-related work in the project can be evaluated.
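The bit-preservation entries above repeatedly mention checksum verification (CMS checks checksums on every file transfer; ALICE runs integrity checks on tape access and periodically on disk). A minimal sketch of such a check, assuming Adler-32 as the checksum algorithm (commonly used in WLCG storage tooling); the function names are illustrative, not an actual WLCG API:

```python
import zlib

def adler32_stream(chunks):
    """Compute an Adler-32 checksum incrementally over an iterable of byte
    chunks, so large files can be checksummed without loading them whole."""
    checksum = 1  # Adler-32 initial value
    for chunk in chunks:
        checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF

def adler32_of_file(path, chunk_size=1 << 20):
    """Checksum a file in 1 MiB chunks."""
    with open(path, "rb") as f:
        return adler32_stream(iter(lambda: f.read(chunk_size), b""))

def transfer_ok(source_checksum, replica_chunks):
    """Accept a replica only if its checksum matches the one recorded at the source."""
    return adler32_stream(replica_chunks) == source_checksum
```

In practice a storage system records the checksum in its file catalogue at write time and compares it after every transfer or on a periodic scrubbing pass, rather than recomputing both sides on demand.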
* 1\. Executive Summary
* 2\. Introduction
  * 2.1. Structure of the document
  * 2.2. Relation to other deliverables
* 3\. Data Summary
  * 3.1. What is the purpose of the data collection/generation and its relation to the objectives of the project?
    * 3.1.1. Research data supports evaluation and dissemination activities
    * 3.1.2. Data in the O4C platform is available for use in hackathons and reuse beyond these events
  * 3.2. What types and formats of data will the project generate/collect?
    * 3.2.1. Research data
    * 3.2.2. Data in the O4C platform
  * 3.3. Will you re-use any existing data and how?
  * 3.4. What is the origin of the data?
  * 3.5. What is the expected size of the data?
  * 3.6. To whom might it be useful ('data utility')
* 4\. FAIR Data
  * 4.1. Making data findable including provisions for metadata
    * 4.1.1. Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?
  * 4.2. Making data openly accessible
    * 4.2.1. Which data produced and/or used in the project will be made openly available as the default?
    * 4.2.2. How will the data be made accessible (e.g. by deposition in a repository)?
    * 4.2.3. What methods or software tools are needed to access the data?
  * 4.3. Making data interoperable
    * 4.3.1. Are the data produced in the project interoperable?
    * 4.3.2. What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?
  * 4.4. Increase data re-use (through clarifying licences)
    * 4.4.1. How will the data be licensed to permit the widest re-use possible?
    * 4.4.2. When will the data be made available for re-use?
    * 4.4.3. Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
    * 4.4.4. How long is it intended that the data remains re-usable?
    * 4.4.5. Are data quality assurance processes described?
* 5\. Allocation of Resources
  * 5.1. What are the costs for making data FAIR in your project?
  * 5.2. How will these be covered?
  * 5.3. Who will be responsible for data management in your project?
  * 5.4. Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?
* 6\. Data Security
  * 6.1. What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?
  * 6.2. Is the data safely stored in certified repositories for long term preservation and curation?
* 7\. Ethical Aspects
  * 7.1. Are there any ethical or legal issues that can have an impact on data sharing?
  * 7.2. Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?
* 8\. References

# 1\. Executive Summary

This midterm document – deliverable D4.6 Data Management Plan (M17) – describes how the research data will be collected and generated throughout the project and how it will be handled during and after the Open4Citizens project. It describes the standards and methodologies used for data collection and elaborates on how the data will be shared and preserved. It also covers the data that is handled in the Open4Citizens Platform. This midterm version of the Data Management Plan follows the Guidelines on FAIR Data Management in Horizon 2020 (v3.0, 26 July 2016) provided by the EU Commission. A third and final version of the DMP is contractually bound to appear by M30 and will include the final and updated plans for data management in the project.

# 2\. Introduction

This deliverable is the updated and second version of the Data Management Plan (DMP).
By answering the questions outlined in the updated version 3.0 of the Guidelines on Data Management in Horizon 2020 (July 2016), it describes:

* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project)

(Guidelines on Data Management in Horizon 2020, version 3.0)

The first version of the DMP (M6) followed the 'Guidelines on Data Management in Horizon 2020, version 2.0, July 2015' and focused mainly on the categories of data sets that would be generated and used during the two hackathon cycles. This second version shifts its focus to include the research data that is generated throughout the project and describes how the project will make the research data FAIR (Findable, Accessible, Interoperable and Reusable) (Guidelines on Data Management in Horizon 2020, version 3.0). Since the Open4Citizens (O4C) Platform is also set up to handle data, the document covers the data related to the O4C Platform where applicable. A third and final version of the DMP is contractually bound to appear by M30 and will include the final and updated plans for data management in the project.

## 2.1. Structure of the document

The document is based on the template H2020 Programme Guidelines on FAIR Data Management in Horizon 2020, version 3.0, 26 July 2016, and is structured after the questions presented in the template. Sections 3-6 focus primarily on how the research data is managed throughout the project, while sections 7-9 also include questions that are relevant for the data in the O4C platform. Thus, where relevant, the sections will cover both topics.

## 2.2. Relation to other deliverables

The datasets that were used in the first round of hackathons are presented in detail in deliverable D3.2 'Data Mapping and Integration' (delivered in M15). The more specific details about the hackathons (their themes, associated challenges, pre-hack activities and key actors) are described in detail in deliverable D3.4 'First Hackathon Report' (M17), whereas the datasets that will be identified in the upcoming second round of hackathons will be included in D3.3 'Data Mapping and Integration' (final version M26).

# 3\. Data Summary

## 3.1. What is the purpose of the data collection/generation and its relation to the objectives of the project?

There are two main groups of data in this project: 1. the research data that is generated throughout the project and 2. the data that is uploaded to and handled in the Open4Citizens (O4C) platform. The data that the platform will handle is in the form of datasets generated from Open Data and authorization data for user logins. Data collected for and generated in the O4C platform ensures that there is sufficient and appropriate open data to support the activities related to the project's objectives listed in the table below. In the first year, this especially relates to objective 2 (exploring hackathons) and objective 3 (overcoming citizens' cognitive gap regarding Open Data). In the second year of the project, with the second cycle of hackathons in the five pilot locations, O4C platform data will additionally be used and generated in the development of OpenDataLabs (objectives 1, 3, 4, 5 and 6). The research data generated and collected in the project during the first year of activity relates especially to formative evaluation of objectives 2 and 3. In the second year of the project, the research data collected will support both formative evaluation related to the development of OpenDataLabs (objectives 1, 3, 4, 5 and 6) and summative evaluation elements regarding the project's achievements overall.
<table>
<tr> <th> **Number** </th> <th> **Objective** </th> </tr>
<tr> <td> 1 </td> <td> Creating OpenDataLabs where citizens can design new services, or improving the existing ones, in a collaborative environment, and by using open data. </td> </tr>
<tr> <td> 2 </td> <td> Exploring hackathons as new forms of collaboration among citizens, technical experts and public institutions that enable citizens, interest groups and grassroots communities to understand and use the potential of open data. </td> </tr>
<tr> <td> 3 </td> <td> Overcoming the cognitive gap citizens have with respect to Open Data by making that knowledge available in form of consultants in the OpenDataLabs where citizens will experience the practical value of Open Data in the conception, modification, adaptation and maintenance of urban services. </td> </tr>
<tr> <td> 4 </td> <td> Combining two specific models of OpenDataLabs, specifically the solutions development lab and the incubator models. </td> </tr>
<tr> <td> 5 </td> <td> Exploring and driving opportunities for further exploitation and implementation of the developed and tested solutions through social network. </td> </tr>
<tr> <td> 6 </td> <td> Creating an international network of cities and organisations where the Open Data Labs model implemented by Open4Citizens can be replicated and transferred so generating an international movement based on network cooperation and learning. </td> </tr>
</table>

**Table 1: Open4Citizens objectives, as stated in the project's Description of Action (pg. 2)**

### 3.1.1. Research data supports evaluation and dissemination activities

Research data in the Open4Citizens project is primarily generated by the members of the five pilots to support formative and summative evaluation of the extent to which we are achieving the project's stated aims. Most of the data collected for evaluation purposes is focused on the hackathon pressure-cooker event, held over the course of a weekend in most pilots.
Data collection is carried out consistently across the pilots using standardised data collection templates consisting of PowerPoint slides. Further data is generated in line with the evaluation framework described in deliverable D4.1 'Evaluation Framework' to support reflections on individual pilots' contributions and the overarching Open4Citizens project's contributions to achieving its objectives. In addition, some of this material is used for pilot-specific and project-level dissemination activities.

### 3.1.2. Data in the O4C platform is available for use in hackathons and reuse beyond these events

The purpose of generating and uploading datasets to the O4C Platform is to make it possible for participants, curious citizens and other interested stakeholders to locate and find the data that has been produced from the activities in the emerging OpenDataLabs. In general, 'Open Data' means that the data is publicly available and can be used, modified, and shared freely by anyone for any purpose (http://opendefinition.org/). These open data have a large potential to enhance several aspects of human life, including transport, health care, climate and even human behaviour, if, for instance, they are implemented in new applications. The O4C project intends to include citizens as a driver for innovation in the Open Data arena. However, many citizens are not necessarily able to handle Open Data, or even to imagine what to do with them. To help close this gap between users and data, the datasets generated and used in the OpenDataLabs are intended to inspire others to see the potential in open data.

## 3.2. What types and formats of data will the project generate/collect?

### 3.2.1. Research data

Research data generated throughout the project includes primarily qualitative material, as well as some quantitative information, e.g. about participants in the hackathons and partners in the OpenDataLabs. Curated and anonymised materials will be made publicly available.
This includes the following:

* Data gathering and analysis templates (originally produced in Microsoft PowerPoint format)
* Templates and guidelines for evaluation data gathering and analysis (originally in Microsoft PowerPoint and Microsoft Word/PDF)
* Completed templates from each pilot containing evaluation material from hackathons (PowerPoint)
* Questionnaire responses (originally Microsoft Excel documents)
* Intermediate evaluation analysis outputs based on data gathered in hackathons (Excel and PDF format)

All hackathon participants sign consent forms, allowing the project to use audio, video, visual and textual material about them. Aliases are used unless project participants explicitly wish to be quoted or identified using their real names.

### 3.2.2. Data in the O4C platform

The datasets that have been chosen for the hackathon cycles and uploaded to the O4C Platform are in CSV format, which allows them to be used with the various tools on the platform.

## 3.3. Will you re-use any existing data and how?

**Research data** will consist primarily of new qualitative information generated in the project and will therefore not re-use existing data. Some quantitative data will also be generated as part of the evaluation data collected, complementing qualitative data collected about users of the O4C platform. The data that is uploaded to the **O4C platform** will reuse existing open data. In addition, datasets generated through O4C activities are intended to be available for re-use through the platform.

## 3.4. What is the origin of the data?

The research data is original material and will derive from evaluation and dissemination-related activities. The material to be shared will be produced by the Open4Citizens consortium members. The datasets uploaded to the platform will derive from open data repositories and the internet in general, from where the data is 'publicly available and can be used, modified, and shared freely by anyone for any purpose'.
Certain datasets may be made available by O4C stakeholders who had not previously made this data available, i.e. generating new open data for use in project activities and beyond.

## 3.5. What is the expected size of the data?

The total amount of research data to be made available is not yet clear. For the first hackathon cycle, in year one of the project, the total size of evaluation and dissemination materials varies between the five pilots, from about 200 MB to 3 TB, depending on the amount of media (video and photos) produced. The size is not yet known for the data in the platform.

## 3.6. To whom might it be useful (‘data utility’)?

**Research Data**

We intend to make curated research data from the five pilot projects available, as well as cross-cutting material reflecting on the evaluation of the project as a whole. This data will be useful for researchers, practitioners and others wishing to duplicate or adapt the Open4Citizens model for empowering citizens to make appropriate use of open data for improved service delivery. The analysis material included in the research data will highlight the strengths and challenges of the approach, allowing others to learn from the experiences of the project. The availability of templates and guidelines used in the project will also allow for better adoption of the approach by others interested in making open data a common good, whether they belong to academia, the public sector, the private sector or civil society.

Materials that are specifically intended for use by ordinary citizens working in the O4C platform or as part of an OpenDataLab will be specifically highlighted in the platform and the labs to ensure ease of access and understanding. These materials include the toolkits produced as part of the project (Preliminary Hackathon Starter Kit and Citizen Data Toolkit), as well as any additional templates and guidelines to facilitate working with and understanding open data.
**Data in the O4C Platform**

The O4C platform focuses on helping its users gain an understanding of Open Data, as well as aiding the development of new services and the improvement of existing ones during the hackathon cycle. The data in the platform is intended to be used as:

* Components in digital mobile or web applications – a dynamic product to access personally meaningful or context-aware data, such as a weather or route planner app.
* Elements in concepts – i.e. mock-ups of mobile or web applications.
* Data examples for the participants to gain a greater understanding of Open Data.
* Visualisation – a statistical representation of data, such as an infographic or a narrative told as a news article (data journalism). The main objective is to communicate what is otherwise “raw numbers in a table”.
* Digital service – a product-service system with various touch points ingrained with open data. For example, a service where citizens can report faulty street objects (broken lampposts, etc.) using a smartphone application, and the government is notified about these problems and can fix them.

# 4\. FAIR Data

The O4C project is a grant recipient under Horizon 2020 and is therefore required to deposit peer-reviewed publications into an open access repository. In addition, we wish to participate in the European Commission’s Open Research Data (ORD) pilot to the extent that this is feasible and useful. This is in light of the fact that the bulk of research data produced in the project is qualitative information that cannot easily be used outside the project, given the need for contextual knowledge for its interpretation. By providing open access to a curated selection of the research data, we intend to give access to anonymised and consolidated evaluation materials, tools and guidelines produced as part of the project, and the underlying datasets used to address challenges in urban services during the Open4Citizens hackathon cycle.
This is beneficial for the public, but also for researchers (www.openaire.eu, 2017a). Providing Open Access will, however, not remove the author’s copyright (www.openaire.eu, 2017a).

**Open Research Data**

It is yet to be decided by the project partners whether we will deposit in an institutional repository of the research institutions with which we are affiliated, a subject-based/thematic repository, or a centralised repository such as Zenodo, which is hosted by CERN, available to all and set up by the OpenAIRE project (www.openaire.eu, 2017b). We are likely to use the latter option to ensure that the data is easier to find than if it is solely deposited in an institutional repository.

In addition, to increase accessibility and findability, we are likely to make the same research data available through the institutional repositories of the universities that are part of the Open4Citizens consortium: Aalborg University, Politecnico di Milano and Technische Universiteit Delft. This use also depends on the compatibility of the institutional platforms with the type and size of data to be uploaded. For example, the Aalborg University Research Portal, VBN (Aalborg University Research Portal, 2017), while compatible with OpenAIRE (Open AIRE, 2017c), has not previously been used for the type of research material which we are producing: video, photos and slides.

Aalborg University is a signatory of the Berlin Declaration on Open Access in the Sciences and Humanities (Berlin Declaration, 2003), whose principles the Open4Citizens project subscribes to. Signatories to the declaration aspire to ‘promote the Internet as a functional instrument for a global scientific knowledge base and human reflection and to specify measures which research policy makers, research institutions, funding agencies, libraries, archives and museums need to consider’ (Berlin Declaration, 2003, pg. 1).
Decisions regarding the core platform to be used and the extent to which partners’ national institutional repositories will be used will be made in good time to make initial research data available in the second year of the project (2017) and all relevant research data available by the end of the project (project month 30, June 2018).

**Regarding Open Access Journals**

Where it is not possible to publish final peer-reviewed publications in Open Access Journals, the project partners aim to make publications available in institutional open access repositories such as VBN at Aalborg University. This, however, will depend on the individual journals’ policies regarding open access.

**Open Access through the OpenDataLabs**

An intended lasting and living legacy of the Open4Citizens project is the OpenDataLabs. As we are identifying the business and sustainability plans for the OpenDataLabs, we will be prioritising openness and co-creation within the labs as a standard approach across the five pilot locations. In the second hackathon cycle, in 2017, this will involve identifying which guidelines and standards for openness are applicable across all pilots and can be adopted as standard.

**Regarding Data on the O4C platform**

The approach to data storage in the platform is also inspired by the FAIR principles, to make it easier for the participants and other interested stakeholders to find, access and re-use the datasets and make them interoperable with other datasets. This will be elaborated in the sections below.

## 4.1. Making data findable including provisions for metadata

### 4.1.1. Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?
For research data collected and generated in the project, a fit-for-purpose file naming convention will be developed in accordance with best practice for qualitative data, such as described by the UK Data Archive (2011). This will involve identifying the most important metadata related to the various research outputs. Key information includes content description, date of creation, version, and pilot location.

To make the datasets in the platform easily findable, searchable tags have been added to the metadata. When uploading the data, the creator of the dataset also has the option to create new tags that correspond to the contents of the dataset, making it easier for other users to find and reuse the data.

## 4.2. Making data openly accessible

### 4.2.1. Which data produced and/or used in the project will be made openly available as the default?

Certain pilots use medical data as part of the open data in their hackathons and OpenDataLabs. This includes the Karlstad and Barcelona pilots. The consortium partners will be guided by the legal and ethical restrictions that their partners adhere to, i.e. the data owners, who make relevant open data available. Restrictions regarding data availability will come into play if the partners of the two pilots deem it necessary.

### 4.2.2. How will the data be made accessible (e.g. by deposition in a repository)?

The research data will be made accessible through an OpenAIRE-compatible repository, which has yet to be decided upon.

### 4.2.3. What methods or software tools are needed to access the data?

At present, Microsoft Office is used for producing research data. However, the research data will be made available in the most appropriate open source formats.

## 4.3. Making data interoperable

### 4.3.1. Are the data produced in the project interoperable?
During the first hackathon cycle, research data consisting of templates, data gathered from hackathon participants and analysis materials has been produced using Microsoft Office. This includes PowerPoint and Excel-formatted files in particular. The consortium will explore the best open source software to use instead of these formats, and will make relevant research data placed in repositories available in these formats instead.

### 4.3.2. What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?

Depending on the choice of repositories, we are likely to follow the Data Catalog Vocabulary (DCAT) standard (Data Catalog Vocabulary, 2014), as it defines a standard way to publish machine-readable metadata about a dataset where appropriate. We also intend to use common ontologies and vocabularies for data.

## 4.4. Increase data re-use (through clarifying licences)

### 4.4.1. How will the data be licensed to permit the widest re-use possible?

The Open4Citizens project aims to be as open as possible. We take the guidelines developed by Open Knowledge as our starting point. Specifically, we will explore the applicability of the Open Data Commons Open Database License (ODbL) for data created in the project. A project partner in Milan, OnData (http://ondata.it/), is currently developing openness guidelines that may be applicable to data used and generated in the Open4Citizens project.

### 4.4.2. When will the data be made available for re-use?

The curated research data related to the first round of hackathons is intended to be made available by August 2017. By the end of the project in M30, all data that is not affected by embargo will be made available through the appropriate repositories.

### 4.4.3. Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.
The curated research data made available will be available for re-use. However, it may not be feasible for researchers outside the project to interpret and use any anonymised qualitative research data. Raw social science research data of this nature generally requires deep contextual knowledge to be appropriately analysed and interpreted.

The O4C Platform will form a virtual component of the five OpenDataLabs. The aim is that it will continue to exist after the O4C project has been completed. This means that the Open Data that has been collected, generated and uploaded to the Platform during the project lifetime will be accessible both after each hackathon cycle and after the end of the funding period of the O4C project. The datasets that are uploaded to the O4C platform will be shared through the website www.opendatalab.eu, where everyone will have access to them.

### 4.4.4. How long is it intended that the data remains re-usable?

We will adhere to the repository standard of the chosen repository or repositories for the project.

### 4.4.5. Are data quality assurance processes described?

The data quality assurance processes are not described yet. These will be developed based on experiences finding, using and creating open data sets in the project, including the feasibility of their use in the O4C platform for creating data-led solutions to service challenges. At the end of the first hackathon cycle, as this document is being written, a pragmatic approach is being taken to the quality of the open data being used in the project: at this stage of the project, quality standards are less important than making relevant data available and using it to the extent possible, thereby learning about what is needed in order to improve its quality.

In further defining the O4C project's data management plan with respect to research data, we will seek inspiration in the plans of similar projects and identify good practice guidelines for social science research data.
We will use this input to ensure that we make good quality research data available.

# 5\. Allocation of Resources

## 5.1. What are the costs for making data FAIR in your project?

Any minor costs for Open Access publishing will be covered by the overall project dissemination budget. This will be used at the discretion of the consortium and Aalborg University as the primary investigator. Wherever possible, free institutional repositories will be used for data and publications. Expected costs will be further identified in the final version of the Data Management Plan, based on project needs to the end of the project.

## 5.2. How will these be covered?

There is no significant budget allocation in the project for making the data FAIR. The project therefore expects to cover costs related to open access to research data as eligible costs as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions). For any additional costs, sponsorship outside of the project would need to be sought, for example by partners interested in developing OpenDataLabs or by securing additional research funding.

## 5.3. Who will be responsible for data management in your project?

Aalborg University (AAU) and Antropologerne (ANTRO) are responsible for the overall collection and handling of research data, while Dataproces is responsible for the data management of data in the O4C Platform. The pilot coordinators will be responsible for the collection and handling of research data in each pilot.

## 5.4. Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?

This will be further developed as the business cases for the OpenDataLabs are defined in the project. The consortium expects the main questions relating to data preservation to be identified and answered as these business cases are developed.
All datasets that are uploaded to the O4C Platform will be stored on a server at Dataproces, which will ensure preservation and backup throughout the project. The aim is that the Platform will continue to be available after the O4C project has been completed. This means that the Open Data that has been collected, generated and uploaded to the Platform during the project life-time will be accessible both after each hackathon cycle and after the end of the funding period of the O4C project. The data in the O4C platform will be available for as long as the internal server at Dataproces is up and running and costs are covered by the business case by Dataproces.

# 6\. Data Security

## 6.1. What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?

Research data is shared between project partners and stored in collaborative online working platforms during the project’s lifetime. These are BaseCamp (https://3.basecamp.com), Google Drive (https://drive.google.com), and Dropbox (https://www.dropbox.com). Some intermediate and all final versions of evaluation data collected in the project and analysis outputs of this material are saved in a standardised filing system with dedicated naming conventions in the project’s BaseCamp account.

Uncurated and unanalysed material created during the project is stored locally by the Open4Citizens partners according to their institutional data management and storage guidelines. This locally stored research data includes un-anonymised questionnaire data from hackathons, as well as consent forms signed by hackathon and other project participants allowing for the use of personally identifiable information about them. Consent forms will be kept beyond the end of the Open4Citizens project. Additional research data such as personal notes, unused photos and video clips etc. will be safely deleted and discarded after the end of the project.
This includes all research data not made publicly available for the long term.

In the finished form of the platform, the uploaded data is secure and recoverable with daily backups: Dataproces can go back to any file from any day, 365 days a year. Dataproces also fulfils all data management requirements regarding European personal data protection legislation.

## 6.2. Is the data safely stored in certified repositories for long term preservation and curation?

At the time of writing this deliverable, the Open4Citizens project partners are identifying the most appropriate repository or repositories in which to make research data available for the long term. An update on decisions made will be provided in the project’s final Data Management Plan, due in M30.

# 7\. Ethical Aspects

## 7.1. Are there any ethical or legal issues that can have an impact on data sharing?

**User-generated data**

In the second hackathon cycle it is likely that user-generated data will be increasingly important for developing open-data driven solutions in Open4Citizens hackathons. Examples include crowdsourced data such as that made available in platforms like OpenStreetMap (www.openstreetmap.org), public social media data, as well as data scraped from service and product review sites. The Open4Citizens consortium intends to adopt existing standards and approaches regarding the use of this kind of data. Our primary guide in defining an Open4Citizens guideline for using this type of data will be the licences and guidelines available through the Open Data Commons (https://opendatacommons.org), developed by Open Knowledge International (https://okfn.org/). In essence, the Open Data Commons Open Database License (ODbL) states that the databases to which it applies allow users to freely share, create and adapt from the database, as long as public use is attributed, shared alike and kept open.
However, as these guidelines are voluntary, the consortium intends to engage with members of our advisory board to determine which current approaches to openness we should adopt. We intend, for example, to review and appropriately adopt guidelines produced by the Open Data Institute (https://theodi.org/guides) and Open Knowledge International, as leaders in the field, as well as any other relevant guidelines.

We foresee that balancing participants’ rights to the intellectual property entailed in their hackathon outputs with the desire for openness may be challenging. We will therefore stay informed about developments in the area of rights, ethics and openness throughout the life of the project to ensure that we are transparently applying best practice procedures and guidelines. We expect that debates regarding the use of user-generated data in data journalism will be one useful source of guidance.

**Data in the O4C platform**

When organising the events, Open4Citizens will collect information from public repositories which contain Open Data. Since Open Data consist of information databases that are public domain, the data can be freely used and redistributed by anyone. Open4Citizens is thus not subject to any regulations regarding proper data storage, including principles stipulated in the Data Protection Directive 95/46/EC and the General Data Protection Regulation (EU) 2016/679.

When providing Open Data for the hackathons and related events, Open4Citizens, its employees, and any contributing partners of Open4Citizens or its employees, shall not be liable for any harm arising from the use of the collected datasets shared through the O4C Platform, including, but not limited to, how participating parties handle and develop the Open Data available on the O4C Platform.
With regard to the Open Data from various data sources that are made available on the O4C Platform, Open4Citizens does not guarantee that this data has been published with the prior, necessary and informed approval that it requires.

## 7.2. Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?

The gathering and analysis of research data in the project is guided by standard ethics guidelines for the social sciences (e.g. as discussed in http://ec.europa.eu/research/participants/data/ref/h2020/other/hi/ethics-guide-ethnoganthrop_en.pdf). For research data collected in relation to Open4Citizens hackathons, as well as questionnaires and other personally identifiable information generated, informed consent is sought. All participants in hackathons are requested to provide their consent for all data produced to be used by the project. Figure 1 below shows the template consent form used for the first hackathon cycle. In addition, all users of the platform will be asked for consent when creating a profile in the O4C platform.

**Figure 1: Draft consent form used by all five pilots in the first hackathon cycle of the O4C project**
# 1\. Executive Summary

This deliverable contains the first version of the Data Management Plan (M6). It describes the nature of the data that will be collected and used in the Open Data Lab Platform. The open data has been divided into two overarching data categories:

1. Open Data
   * **Category A**: _Open and ready to use_
   * **Category B**: _Open - but not yet in a ready to use format_
2. Closed Data
   * **Category C**: _data that will not be used in this project_

It also describes the overall architecture of the OpenDataLab Platform that will contain the data and touches on the current state of development regarding how the data will be stored, made available, shared, reused and used during and after the project. It also outlines the ethics related to working with Open Data, as well as presenting an outlook for the further work with the deliverable.

# 2\. Introduction

The purpose of this deliverable is to ‘provide an analysis of the main elements of the data management policy that will be used in the project with regards to all the datasets that will be generated by the project’, as stated in the document ‘Guidelines on Data Management in Horizon 2020’ 1 . According to the DoA (p. 37), the aims of this deliverable (in M6) are to ‘describe the nature of the data that will be collected and integrated and the way it will be stored, made available and used during and after the project.’

According to the ‘Guidelines on Data Management in Horizon 2020’ 2 , p. 5, the Data Management Plan should be written from a dataset-by-dataset point of view. However, since we do not yet have the specific datasets, this initial draft of the Data Management Plan will focus on describing the overarching data categories that will be encountered during the hackathon cycles, as well as describing the overall architecture of the OpenDataLab Platform that will contain the data.
As each pilot comes closer to defining its specific challenges during the pre-hack phase, we will gain a clearer picture of which datasets to collect and upload to the OpenDataLab Platform. This will contribute to shaping the next editions of this deliverable, which are contractually bound to appear by Month 15 and Month 30.

## 2.1. Overview on structure

This initial draft of the Data Management Plan focuses on describing the overarching data categories that will be encountered during the hackathon cycles, followed by how each category will be managed. Firstly, the two data categories are presented in section 3.0. The subsections describe the origin of the data as well as the possible outcome of using the particular category (concept, mobile application, use case, etc.). Then the structure of the Open Data Lab Platform, where the data will be managed, is described in section 4.1. The subsections elaborate on access, data standards, data sharing, archiving and preservation, reuse of the data and ethics. Lastly, the outlook for the development is presented.

# 3\. Description of the Data Categories

A large quantity of data concerning e.g. urban environments, weather, census, etc. is being produced every day. Such data can be found, for example, as published government data sets, in private companies that have recorded the behaviour of their customers, or in scientific research, or it may be generated by users, who often record their activities and in some cases share such data with friends or social networks. The term ‘Open Data’ refers to those data sets that are made publicly available. These data have a large potential to generate new applications and to enhance several aspects of human life, including transport, healthcare, climate and even human behaviour. The way to manage these data, as well as realising the real potential of open data, is still to be fully discovered.
The O4C project intends to contribute to creating a demand for applications based on open data and to include citizens as a driver for innovation in the open data arena. However, many citizens are not necessarily able to handle Open Data, or even to imagine what to do with them. To help close this gap between users and data, a clarification of the different types of data is needed. In general, ‘Open Data’ means that the data is publicly available and can be used, modified, and shared freely by anyone for any purpose 3 . With this in mind, the following categories have been created:

1. Open Data
   * Category A: _Open and ready to use_
   * Category B: _Open - but not yet in a ready to use format_
2. Closed Data
   * Category C: _data that will not be used in this project_

The categories will be described in detail in the following sections. The colours green, yellow and red correspond to the level of employment of the data:

* Green: the data is ready to use.
* Yellow: the data needs to be converted to a usable format or there are restrictions on the use.
* Red: the data is not available in the O4C project.

## 3.1. Open Data - Category A

The general topic for this category is that the data can be classified as open. It means that it can be freely used, re-used and redistributed by anyone - subject only, at most, to the requirement to attribute and share-alike 4 .

<table> <tr> <th> **Category** </th> <th> **Public Data** </th> <th> **User Generated Data** </th> <th> **Datasets generated by Hackathon Participants** </th> </tr> <tr> <td> Category A: Open and ready to use </td> <td> Publicly available datasets that are available in a structured format and ready to be used in the Platform E.g.
CSV - Comma Separated Values </td> <td> Basically any content that users of a service have created that is accessible through a graphical user interface Examples: \- GPS data </td> <td> Datasets that have been generated by the users for the O4C project Merged datasets also belong in this category </td> </tr> </table>

**Table 1: Category A - Open and ready to use data**

### 3.1.1 Public data

The data is presented in a repository – an online platform – owned and maintained by e.g. a company, an organization or by local, municipal or national governments who have decided to make a selection of data available to the public. Examples of this type of data may include data related to demographics or urban planning, as well as to public transport or real-time road usage. Data from this category can be located through repositories like Open Data DK, Humanitarian Data Exchange, DR Archives, HIP.se – Health Innovation Platform, etc. and is typically presented in a structured CSV format. If the data has a restricted/limited use or isn’t free of charge, it belongs in Category B.

### 3.1.2 User Generated Data

When a user creates content that will be visible to the general public, they usually agree to this in the Terms of Use of the service. Examples for this category are posts that are publicly available via a graphical user interface on e.g. Twitter. If the data has a restricted/limited use or isn’t free of charge, it belongs in Category B.

### 3.1.3 Data Generated by Hackathon Participants

This category refers to datasets that have been created during the hackathons. The datasets can e.g. come from merging ‘Public’ or ‘User Generated Data’ into new datasets. Since the data in this category is created from Open Data, there are no restrictions related to use, modification or sharing of the data.

### 3.1.4 Possible Outcomes and Available Datasets

It will be possible to manipulate data from this category in the Open Data Lab Platform.
The outcome from this is mainly helping the hackathon participants gain understanding of Open Data, as well as aiding the development of new services and the improvement of existing ones during the hackathon cycle. The data can be used as:

* Components in mobile or web applications
* Concepts - i.e. mock-ups of mobile or web applications
* Data examples for the participants to gain a greater understanding of Open Data
* Some data may be off limits not for privacy reasons, as explained in Category C, but due to the fact that the owners of the datasets have not yet made them available. Thus existing Open Data can be used as components in use-cases - proofs of concept - that can clear the way for opening data that is not currently available as open data.

## 3.2. Open Data - Category B

This category refers to data that needs to be converted or that has certain restrictions on its use. It means that the data is publicly available but it is not ready-to-use like the data from Category A.

<table> <tr> <th> **Category** </th> <th> **Public Data** </th> <th> **User Generated Data** </th> <th> **Datasets generated by Hackathon Participants** </th> </tr> <tr> <td> Category B: Open - but not yet in an optimal format </td> <td> Publicly available unstructured data like: * PDF * Addresses * Menus * Library Catalogues Formats that need to be converted. As well as CSV files with restricted downloads (e.g. number of rows/file size) </td> <td> Data generated by everyday users of services and technology. Examples: \- Twitter </td> <td> Hackathon participants creating datasets as PDF or XLS files or merging of datasets with restricted use </td> </tr> </table>

**Table 2: The three sub-categories of Category B**

### 3.2.1 Public Data

Public data refers to the often _unstructured_ data that is publicly available, like online menus, libraries of information, catalogues, etc. Data from this category is typically owned by a for-profit company or an organization (for-profit or non-profit).
This category also includes data owned by public institutions, which have not published the data in a format that makes them immediately useable (e.g. PDF) The unstructured data requires more preparation than structured data, since it will have to be converted into a useable format and perhaps be extracted from the websites by using Dataproces’ software robots. If the use is restricted it means that there can be a limited number of downloads. The restriction can also refer to the ownership of the data, which means that the data is free to use but is owed by someone. ### 3.2.2 User Generated Data User generated data is data that is quite literally generated directly by users of everyday services and technology. Examples from this category can be GPS data, Geo-tagged Twitter data. Also for this category the unstructured data requires more preparation than structured data, since it will have to be converted into a useable format and perhaps be extracted from the websites by using Dataproces’ software robots. Restricted use can also apply which means that there can be a limited number of downloads or a payment is required to use the data. The restriction can also refer to the ownership of the data - that the data is free to use but is owed by someone. ### 3.2.3 Data Generated by Hackathon Participants This category encompasses Hackathon participants creating datasets as PDF or XLS files or merging of the Open Data that have restricted use. ### 3.2.4 Possible Outcomes and Available Datasets It is more time consuming to get hold of the data from Category B than Category A. It is possible to retrieve, collect or create datasets during the pre-hack, hack or post-hack phase but as a general rule the possible final outcome when using data from this category is _concepts_ . If the data is converted it “moves” to category A. Data from this category can be used as concepts - i.e. mock-ups of mobile or web applications. ## 3.3. 
Closed data - Category C Data that is not open refers to personal, medical and other sensitive data that is not open to the public and cannot be found by the general public online. It also encompasses data that is only meant as internal information for public administration. Data from this category will _not_ be available in the O4C project. The same rule applies to data that is only available after purchase, such as business data, etc. <table> <tr> <th> **Category** </th> <th> **Description** </th> </tr> <tr> <td> Category C: Closed data - data that will not be used in this project </td> <td> Any data that contains personal information, e.g. medical data, or data that is only meant as internal information for public administration </td> </tr> </table> **Table 3: Category C** # 4\. The OpenDataLab Platform The OpenDataLab will be the place or playground where citizens can make their ideas more concrete. The Platform will support this vision by leaning on the O4C approach (Explore, Learn/Apply, Consolidate) and by letting the participants build with Open Data on different levels: starting out with building an understanding of Open Data, then moving on to building concepts, and finally building applications and possibly a viable business (figure 1). **Figure 1: How the Open Data Lab Platform supports the O4C approach.** The figure (1) shows an ideal process. In real life the path will be more intricate and winding; however, one of the goals of the platform, as well as the O4C project, is “straightening” out the process - helping citizens get data and get there! By helping citizens concretize their ideas, concepts and applications can be developed, and where the data is not yet open a well-founded use-case can be created and used as an argument to ask the data owner to open the data. In order to let the participants build with Open Data, both datasets and selected tools will be available inside the Open Data Lab Platform. 
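As section 3.2.4 notes, converting a Category B dataset into a machine-readable format effectively moves it to Category A. The sketch below illustrates such a conversion in miniature; the input lines and field layout are invented for the example, and a real conversion would use e.g. Dataproces’ software robots rather than this hand-written parser:

```python
import csv
import io

# Hypothetical unstructured input, e.g. lines extracted from a PDF menu.
raw = """Espresso - 2.50 EUR
Cappuccino - 3.00 EUR"""

def to_csv(text):
    """Split hypothetical 'name - price CURRENCY' lines into CSV rows."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["name", "price", "currency"])
    for line in text.splitlines():
        name, _, rest = line.partition(" - ")
        price, currency = rest.split()
        writer.writerow([name, price, currency])
    return out.getvalue()

print(to_csv(raw))
```

Once the data is in CSV form like this, it can be uploaded to the platform and used directly in applications, as with any other Category A dataset.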
The basic function of the Platform is to allow citizens to work with Open Data. In order to work with Open Data you must have access to datasets and to relevant tools. Figure 2 shows a representation of the content of the Platform. It shows the relationship between the local hackathon teams/the hackathon participants and the Platform (the two boxes in figure 2). * Upload of datasets: The local hackathon teams will at all times be able to upload datasets to the platform, while the hackathon participants - for now - will not have this opportunity. This division of roles is a measure to ensure relevance and quality of the uploaded datasets. It also helps ensure that there is sufficient storage space so that the platform will run smoothly. * Access to the uploaded datasets: The Open Data Lab Platform contains five data storages - one storage for each location/country - and a selection of tools to work with Open Data. The participants at the hackathon can then access the platform, fetch datasets and use the tools to work with the Open Data. For now, each local hackathon team will only be able to access its own data storage. This is again to ensure quality, structure and stability. At a later time a feature might be added to the platform that allows mixing datasets from the different local hackathon teams. * The platform will be linked to a developer toolbox from IBM, called Bluemix. Arrangements are being made, at the time of editing this deliverable, to link the toolbox to the platform in order to give access to a large range of tools to analyse, search, visualise and work with data. **Figure 2 The OpenDataLab platform structure** ## 4.1. Access to the Data inside the Open Data Lab Platform All hackathon participants – and anyone else who accesses the Open Data Lab Website _www.opendatalab.eu_ – can use the datasets and the tools inside. ## 4.2. 
Standards and metadata Data standards for Category A (Table 1): CSV, XML, RDF, JSON, OIS, API queries, etc. Data standards for Category B (Table 2): formats that require transformation to become machine readable, such as PDF, JPG, TIFF, etc. ## 4.3. Data sharing The datasets that are created during the hackathons will be shared through the Open Data Lab website _www.opendatalab.eu_ . The essence of Open Data is that it is open, which means that anyone can use it without asking permission or informing the data owner about how or where it is being used and for what purpose. The datasets that belong to Category A therefore allow unrestricted and unlimited use, as well as unlimited sharing and manipulation. The datasets from Category B might present some restrictions in terms of usage. This has been elaborated upon in section 3.2.1. ## 4.4. Archiving and preservation (including storage and backup) The plan is that all datasets that are uploaded to the Open Data Lab Platform will be stored on a server at Dataproces, which will ensure preservation and backup throughout the project. ## 4.5. Tools The tools inside the platform will allow the participants to perform different actions, including (among others): * visualise data * build understanding through data examples and use cases * construct concepts and mobile applications The specific data visualisation tools will be described in detail in the next deliverable (M15). In order to build mobile applications the participants will additionally have access to Bluemix. 
This toolbox includes a large number of tools to handle, transform and analyse data, including tools to: * transform data in HTML, PDF and Word format into JSON and other appropriate formats * create on-demand relational databases * create apps for Web or mobile * perform geospatial analysis The work on the specific cases and the definition of detailed requirements will allow the project team to better define and select the most useful tools. ## 4.6. Reuse of the data The Open Data Lab Platform will form a virtual component of the five Open Data Labs. The platform is also meant to be active after the O4C project has been completed. This means that the Open Data that has been collected, generated and uploaded to the Platform during the project lifetime will be accessible both after each hackathon cycle and after the end of the funding period of the O4C project. This will be elaborated in the upcoming version of the Data Management Plan. ## 4.7. Ethics When organising the events, Open4Citizens collects information from public repositories which contain Open Data. Open Data consists of information databases that are public domain, and therefore data that can be freely used and redistributed by anyone. Open4Citizens is thus not subject to any regulations regarding confidential or sensitive data storage, including the principles stipulated in the Data Protection Directive 95/46/EC and the General Data Protection Regulation (EU) 2016/679. When providing Open Data for the events, Open4Citizens, its employees, and any contributing partners of Open4Citizens or its employees, shall not be liable for any harm arising from the use of the collected datasets shared through the Open Data Lab Platform, including, but not limited to, how participating parties handle and develop the Open Data available on the Open Data Lab Platform. 
With regard to the Open Data from various data sources that is made available on the Open Data Lab Platform, Open4Citizens does not guarantee that this data has been published with the prior, necessary and informed approval that it requires. However, the Open4Citizens team and the hackathon teams will verify the reliability of the source of the publication case by case. # 5\. Conclusions and Outlook As mentioned in the preface, this first version of the Data Management Plan presents the overarching categories and how data will enter the Open Data Lab Platform. The next iteration of the Data Management Plan (M15) will provide a more detailed description of specific datasets, while the final version (M30) will present the complete and detailed description of datasets and the final architecture of the platform. The next iteration will also elaborate on the concrete tools that will be available in the Open Data Lab Platform.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1071_ARCFIRE_687871.md
# Introduction This Data Management Plan (DMP) outlines the handling of data gathered during the ARCFIRE project, from the point where it is gathered until the archiving process at the end of the project. 1.1 Purpose This Data Management Plan ensures that policies are in place to: * Facilitate the generation of data and analyses of that data by ARCFIRE; * Outline the procedures and formats for transforming raw data into processed results; * Ensure that data required to reconstruct published results are made available online in time to facilitate the peer review process, for instance on the ARCFIRE website; * Ensure that raw and processed data sets, together with appropriate documentation, are released in timely ways as structured archive volumes to Open Access repositories for distribution to the FIRE+ community and others beyond the duration of the ARCFIRE project. 1.2 Scope The scope of this data management plan focuses on: * Timely reduction of raw data into structured results, along with documentation that determines when and where the data were acquired, and for what purpose; * Timely generation and validation of archive volumes containing standard data products and documentation; * Timely delivery of archive volumes to open repositories for distribution to the FIRE+ community and others; * Timely posting of new and exciting data sets and results on the Internet for public access; * Timely announcement of the availability of results via social media. 3. Responsibilities The development, maintenance and management of the Data Management Plan is the responsibility of iMinds. The current responsible person is Dr. Dimitri Staessens ([email protected]). This Data Management Plan is not seen as a static document and will be updated during the project whenever appropriate. 4. Change control The validity of the Data Management Plan will be evaluated at least every six (6) months or whenever a revision is necessitated during the project. 
If changes are required, a new document will be prepared for internal use in the consortium, indicated with a minor version: D4.1.x. These documents can be made available upon request. 5. Relevant documents This document is structured according to the NASA Guidelines for Development of a Project Data Management Plan (PDMP) 1 , incorporating H2020 guidelines 2 . # Project overview The ARCFIRE project will investigate RINA, the Recursive InterNetwork Architecture, at scale on FIRE+ testbeds. It will use publicly available Free/Libre Open Source Software (FLOSS) from a number of previous and currently running projects, including: * FP7-IRATI: This EC-funded project developed a GPL/LGPL-licensed implementation of RINA concepts for OS/Linux, called IRATI. It is available on Github and is seen as project background for ARCFIRE. ARCFIRE will contribute to the FLOSS developments of IRATI. * GEANT-IRINA: This EC-funded project developed a FLOSS traffic generation tool for IRATI under the Geant outward software license. ARCFIRE may use this tool. ARCFIRE will develop a much richer tool set that will be made available under a different license (GPL). * FP7-PRISTINE: This ongoing EC-funded project develops a Management System for IRATI. ARCFIRE will use and contribute further to this software. * FWO-RINAiSense: This Flemish Government funded project is developing a smaller scale user space RINA implementation aimed at constrained resource devices. ARCFIRE may make use of publicly available software from RINAiSense. 2.1 Project objectives ARCFIRE sets out to prove the effectiveness of the RINA architecture in mitigating key issues observed in the past decades in the deployment of TCP/IP as the underlying infrastructure of the global Internet. It will deploy key experiments on testbeds in Europe (provided by FIRE+) and the U.S. (as made available by GENI). ARCFIRE will develop the tools necessary to deploy experiments using the RINA prototypes quickly and efficiently. 
ARCFIRE will develop a framework for deploying test programs on a variety of testbeds and gather data regarding resiliency and manageability of RINA networks, as compared to TCP/IP-based networks. # Testbed description The experiments in the ARCFIRE project will run on general purpose hardware and virtual machines running FLOSS. The four experiments will be conducted on selected testbeds available to the project consortium. The two currently available testbed infrastructures are provided by the Fed4FIRE project in the EU (http://www.fed4fire.eu) and GENI (https://www.geni.net/) in the United States. A detailed testbed report will be delivered as part of ARCFIRE D4.2. # ARCFIRE experimentation framework The ARCFIRE experimentation framework has to be developed during the project and its capabilities will impact the Data Management. We foresee that a number of existing tools will be integrated. These tools will serve as probes and be controlled from the ARCFIRE framework. 4.1 Probes We give a brief description of such probes below: ## 4.1.1 iperf iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers and protocols (TCP, UDP, SCTP with IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters. This is a new implementation that shares no code with the original iPerf and is also not backwards compatible. iPerf was originally developed by NLANR/DAST. iPerf3 is principally developed by ESnet / Lawrence Berkeley National Laboratory. It is released under a three-clause BSD license. It can produce output in JSON format. ## 4.1.2 netperf Netperf is a benchmark that can be used to measure the performance of many different types of networking. It provides tests for both unidirectional throughput and end-to-end latency. 
The environments currently measurable by netperf include: * TCP and UDP via BSD Sockets for both IPv4 and IPv6 * DLPI * Unix Domain Sockets * SCTP for both IPv4 and IPv6 ## 4.1.3 tcpdump Tcpdump prints out a description of the contents of packets on a network interface that match a boolean expression; the description is preceded by a time stamp, printed, by default, as hours, minutes, seconds, and fractions of a second since midnight. It can also be run with the -w flag, which causes it to save the packet data to a file for later analysis, and/or with the -r flag, which causes it to read from a saved packet file rather than to read packets from a network interface. In all cases, only packets that match the expression will be processed by tcpdump. The MIME type application/vnd.tcpdump.pcap has been registered with IANA for pcap files. The filename extension .pcap appears to be the most commonly used, along with .cap and .dmp. Tcpdump itself doesn’t check the extension when reading capture files and doesn’t add an extension when writing them (it uses magic numbers in the file header instead). However, many operating systems and applications will use the extension if it is present, and adding one (e.g. .pcap) is recommended. ## 4.1.4 wireshark Wireshark is a GUI network protocol analyzer. It lets you interactively browse packet data from a live network or from a previously saved capture file. Wireshark’s native capture file format is the pcap format, which is also the format used by tcpdump and various other tools. Wireshark can output XML, PostScript, CSV, or plain text. ## 4.1.5 RINA traffic generator (tgen) The RINA traffic generator is a tool developed during the Geant3 IRINA project, designed to produce CBR and Poisson-distributed traffic for the IRATI implementation. It outputs periodic statistics in .csv format. ## 4.1.6 OML OML is a generic software framework for measurement collection. 
It allows the developer of applications to define customisable measurement points (MP) within the application code. It consists of two main components: * OML client library: provides an API for applications to collect the measurements that they produce. It exists for different languages, including C and Python. * OML server: responsible for collecting and storing measurements inside a database. Currently, SQLite3 and PostgreSQL are supported as database backends. MPs are usually defined within the application code in the form of tuples like: ("appname", "measurementname1:measurementtype1 measurementname2:measurementtype2"), depending on the binding used. The data collected by the OML server will be saved in the configured database. The format of this data depends on the user after querying the database. ## 4.1.7 ARCFIRE tools development ARCFIRE will develop a framework that will combine existing tools and develop a frontend to control and deploy these tools on the FIRE testbeds quickly and efficiently. Scripts will be made public open source under appropriate software licenses. We don’t plan for the tool to generate any data by itself. 4.2 Data formats This section gives a brief summary of data formats that will be used for the output of ARCFIRE. 4.2.1 txt Plaintext files will be used for the metadata accompanying any ARCFIRE data products. ## 4.2.2 pcap Applications and libraries should use the pcap library to read savefiles, rather than having their own code to read savefiles. If, in the future, a new file format is supported by libpcap, applications and libraries using libpcap to read savefiles will be able to read the new format of savefiles, but applications and libraries using their own code to read savefiles will have to be changed to support the new file format. “Savefiles” read and written by libpcap and applications using libpcap start with a per-file header. 
The format of the per-file header is: * Magic number * Major version * Minor version * Time zone offset * Time stamp accuracy * Snapshot length * Link-layer header type The magic number is used to discern the format (byte order and timestamp). Another important parameter is the 4-byte number giving the ”snapshot length” of the capture; packets longer than the snapshot length are truncated to the snapshot length. Following the per-file header are zero or more packets; each packet begins with a per-packet header, which is immediately followed by the raw packet data. The format of the per-packet header is: * Time stamp, seconds value * Time stamp, microseconds or nanoseconds value * Length of captured packet data * Un-truncated length of the packet data All fields in the per-packet header are in the byte order of the host writing the file. For a full description, see the manpage of pcap-savefile. ## 4.2.3 csv RFC 4180 proposes a specification for the CSV format, and this is the definition commonly used. However, in popular usage ”CSV” is not a single, well-defined format. As a result, in practice the term ”CSV” might refer to any file that * is plain text using a character set such as ASCII, various Unicode character sets (e.g. UTF-8), EBCDIC, or Shift JIS, * consists of records (typically one record per line), * with the records divided into fields separated by delimiters (typically a single reserved character such as comma, semicolon, or tab; sometimes the delimiter may include optional spaces), where every record has the same sequence of fields. Within these general constraints, many variations are in use. Therefore, without additional information (such as whether RFC 4180 is honored), a file claimed simply to be in ”CSV” format is not fully specified. As a result, many applications supporting CSV files allow users to preview the first few lines of the file and then specify the delimiter character(s), quoting rules, etc. 
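This preview-and-specify step can be partly automated. As a minimal sketch (using Python's standard `csv.Sniffer`, not an ARCFIRE tool; the sample data is invented):

```python
import csv
import io

# Invented semicolon-delimited sample, standing in for an unknown CSV file.
sample = "name;value\nlink1;100\nlink2;250\n"

# Sniff the dialect (delimiter, quoting) from a small sample of the file,
# restricting the candidate delimiters, then parse with the detected dialect.
dialect = csv.Sniffer().sniff(sample, delimiters=";,")
rows = list(csv.reader(io.StringIO(sample), dialect))

print(dialect.delimiter)  # ';'
```

Such detection is heuristic, which is exactly why fixing a single well-defined variant (RFC 4180, as ARCFIRE does below) is preferable to guessing.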
If a particular CSV file’s variations fall outside what a particular receiving program supports, it is often feasible to examine and edit the file by hand (i.e., with a text editor) or write a script or program to produce a conforming format. ARCFIRE .csv files will be formatted according to RFC 4180. ## 4.2.4 xml Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. It is defined by the W3C’s XML 1.0 Specification and some associated open standards. ## 4.2.5 JSON JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write and easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language. 4.3 Summary A summary of the probes is given in Table 1. Table 1: Summary of probes <table> <tr> <th> Tool </th> <th> type of data gathered </th> <th> raw output format </th> </tr> <tr> <td> iperf </td> <td> bandwidth </td> <td> txt, JSON </td> </tr> <tr> <td> netperf </td> <td> bandwidth </td> <td> txt </td> </tr> <tr> <td> tcpdump </td> <td> raw packet data </td> <td> pcap </td> </tr> <tr> <td> rina-tgen </td> <td> bandwidth </td> <td> txt, csv </td> </tr> </table> # Project data flow ARCFIRE will generate only computer-science-related data and measurements, so all processing can be done in situ after the experiment is done. ARCFIRE will generate large quantities of network traffic that cannot be stored in their totality and will be processed immediately after acquisition. 
For post-processing, relevant subsets of that data will be stored at the testbed facility (on the test machine or a centralised server provided by the testbed facility) until these resources need to be released. In such cases the data may be moved to a central server at an ARCFIRE partner for further analysis. When all analysis is done, the necessary data to reproduce the results will be packaged and archived in a zip archive or tarball. # Products Products resulting from the project include raw experiment data sets and associated products such as statistical analysis data. 6.1 Experiment data products The project will generate raw data, such as tcpdump traces, that could quickly reach volumes exceeding terabytes of data (the iLab.t experimentation facility provides Gigabit links). Such data will not be archived or saved, but some traces may be filtered to illustrate key findings in smaller data sets (not exceeding a couple of megabytes in size). ## 6.1.1 ARCFIRE data product template Data products for ARCFIRE will be accompanied by a metadata sheet including the information from Table 2. # Archive location The selected archive for ARCFIRE is the Zenodo archive. All ARCFIRE data products will be grouped in the ARCFIRE community page 3 . # Licensing All open data from ARCFIRE contributed to the Zenodo repository is planned to be released under the Creative Commons CC-BY license 4 . Table 2: ARCFIRE data set template <table> <tr> <th> Data set reference and name </th> <th> Identifier for the data set to be produced. </th> </tr> <tr> <td> Data set description </td> <td> Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse. 
</td> </tr> <tr> <td> Standards and metadata </td> <td> Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created. </td> </tr> <tr> <td> Data sharing </td> <td> Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling reuse, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy related, security related). </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> <td> Description of the procedures that will be put in place for long term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered. </td> </tr> </table>
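The archiving step described in the data flow section (packaging the data needed to reproduce results into a zip archive or tarball, accompanied by a plaintext metadata sheet) can be sketched as follows. This is a minimal illustration; the file names and the metadata line are invented, loosely following the Table 2 template:

```python
import io
import tarfile

def package_dataset(archive_path, files):
    """Write a gzip-compressed tarball of (name, bytes) pairs,
    e.g. a filtered trace plus its metadata.txt sheet."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for name, payload in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# Hypothetical data product: metadata sheet plus a small filtered trace.
metadata = b"Data set reference and name: arcfire-example-trace\n"
package_dataset("arcfire-example.tar.gz", {
    "metadata.txt": metadata,
    "trace.csv": b"seq,bandwidth\n1,940\n",
})
```

Bundling the metadata sheet inside the same archive keeps the documentation attached to the data when the tarball is later uploaded to Zenodo.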
1075_STARS4ALL_688135.md
# 1\. Introduction This document is the initial version of the Data Management Plan (DMP). The tool DMPOnline 1 has been used for writing this document. It has been written with the specific template created for H2020 projects. In this version, four plans for different datasets have been written. These can be found in the Annex of this document, following the template _Final Review DMP_ . In STARS4ALL, the following datasets have been identified and described: * **Photometer** . These correspond to the data generated by photometers (sensors). In STARS4ALL, a network of photometers will be deployed during the project. Citizens can expand this network in the future by buying new photometers from our marketplace. These photometers will send data periodically to our system, where they are stored. * **NightKnights game** : In STARS4ALL, games with a purpose will be developed to acquire and process data from LPIs. These data will generate two datasets: i) one with the results of the task executions, and ii) another with the gamification process results, like points, badges, etc. The plan for the first dataset is equivalent to the plan generated for the toolbox (annex). The plan for the second dataset will be ready in the next version. * **Cities At Night** : Cities at Night is a citizen science project that aims to create a map, similar to Google Maps, of the Earth at night using night-time colour photographs taken by astronauts onboard the ISS. Cities At Night is one of our initiatives and we handle the data management of the project. Two datasets are generated: i) tasks, with the definition of the tasks created, and ii) taskrun, with the results of the executions of these tasks. * **Community Health dataset** : STARS4ALL is a project oriented to create awareness among citizens, so for us it is very important to measure the impact of CAs on the community. 
It goes without saying that this data must be open and shared with the community in order to engage newcomers and increase their participation. ## STARS4ALL - Photometers - Final review DMP ### 1\. Data summary State the purpose of the data collection/generation The purpose of the data collected is to monitor light pollution with the measurements taken by our photometers (sky brightness sensors). You can find more information here: http://tess.stars4all.eu/. These data will be used by scientists for research purposes. They can be used to generate prediction models of sky brightness, to measure the impact of different lighting policies, etc. Explain the relation to the objectives of the project One of the objectives in the project is to deploy a network of photometers to monitor light pollution. Specify the types and formats of data generated/collected Each photometer generates a JSON object that is sent to our system and indexed in our database. Periodic datasets in CSV are generated and published in our data portal and Zenodo. The data collected is the following: <table> <tr> <th> Field Name </th> <th> Type </th> <th> Units </th> <th> Optional </th> <th> Description </th> </tr> <tr> <td> seq </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> Sequence number. If possible use 32 bits. The sequence number will start at 1 at each device reboot. </td> </tr> <tr> <td> name </td> <td> string </td> <td> \- </td> <td> mandatory </td> <td> Instrument friendly name. Should be unique as it identifies the device. </td> </tr> <tr> <td> freq </td> <td> float </td> <td> Hz </td> <td> mandatory </td> <td> Raw reading as a frequency with 3 decimal digits precision (millihertz): NNNNN.NNN </td> </tr> <tr> <td> mag </td> <td> float </td> <td> mag/arcsec^2 </td> <td> mandatory </td> <td> Visual magnitude (formulae?) corresponding to the raw reading. Transmitted up to two decimal places: NN.NN </td> </tr> <tr> <td> tamb </td> <td> float </td> <td> ºC </td> <td> mandatory </td> <td> Ambient temperature. Transmitted up to one decimal place. </td> </tr> <tr> <td> tsky </td> <td> float </td> <td> ºC </td> <td> mandatory </td> <td> Sky temperature. Transmitted up to one decimal place. </td> </tr> <tr> <td> az </td> <td> integer </td> <td> degrees </td> <td> optional </td> <td> Photometer optical axis azimuth: sent only on instruments with accelerometer. </td> </tr> <tr> <td> alt </td> <td> integer </td> <td> degrees </td> <td> optional </td> <td> Photometer optical axis altitude (angle): sent only on instruments with accelerometer. </td> </tr> <tr> <td> rev </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> Payload data format revision number. Current </td> </tr> </table> Specify if existing data is being re-used (if any) We are not aware of any reuse of this data for other purposes, for the moment. Specify the origin of the data Data generated in our photometer network are being collected from our network of sensors located around the world. The up-to-date map is available here: http://dashboards.stars4all.eu/tess-chart/tess-locations.html State the expected size of the data (if known) A photometer network with 45 devices generates 134MB of data each month. In the short term we expect to double this number, reaching 200 devices in the medium term. Outline the data utility: to whom will it be useful The data generated by our photometers have a clear application in the light pollution research field. We have data collected since 2017 in an operation model of 24/7/365. This will provide an invaluable source of information to understand the behaviour of light pollution in a specific area. The effects of light pollution on biodiversity, human health and energy waste are well documented in the scientific literature. #### 2.1 Making data findable, including provisions for metadata [FAIR data] Outline the discoverability of data (metadata provision) All data have been uploaded and indexed in our data portal (http://data.stars4all.eu) adding the metadata defined by CKAN. This includes: 1. Title 2. Description 3. Tags 4. License 5. Visibility 6. Source 7. Version 8. Author / Author email 9. 
Maintainer / email The datasets can be found using any of the metadata defined previously from a search field. They are also accessible in our data portal/Zenodo, and re-usable, since they are published with the CC-BY 4.0 license. The fields defined inside the datasets are described in the description field of the dataset. Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers? Photometer datasets are uploaded to our portal and to Zenodo (in our community https://zenodo.org/communities/stars4all ). Zenodo is used by our project to generate DOIs and to archive (make persistent) our datasets. This also allows indexing our data in OpenAire. Outline naming conventions used For naming, the following convention has been used: Month-Year Outline the approach towards search keyword Our datasets are published both in our data portal and in Zenodo. A set of tags has been defined to tag each dataset, facilitating their search. In addition, in our data portal, based on CKAN, users can make searches based on the common metadata defined for all datasets that has been previously described in this section. Outline the approach for clear versioning Both Zenodo and our data portal have a versioning system included in the metadata of the datasets. The version changes when a modification has been made to a dataset. The change of version does not imply requesting a new DOI. Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how There is no entry in the Metadata Standards Directory for the datasets generated in our project. 
Nevertheless, night sky brightness data is archived using the "NSBM Community Standards for Reporting Skyglow Observations", which was officially adopted at the 12th European Symposium for the Protection of the Night Sky and endorsed by the International Dark Sky Association (IDA) and by the International Astronomical Union (IAU) in Beijing in 2012.

#### 2.2 Making data openly accessible [FAIR data]

Specify which data will be made openly available? If some data is kept closed provide rationale for doing so

All data generated is made openly available.

Specify how the data will be made available

Data is available from our data portal ( http://ckan.stars4all.eu/dataset/tess-monthly-measurements ) and from our Zenodo community ( http://www.zenodo.org/communities/stars4all )

Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?

You only need a browser to access all the data. For more documentation, access the STARS4ALL website (http://www.stars4all.eu), Data portal section.
You can find more information here:

Data portal deliverable: https://figshare.com/articles/D5_4_STARS4ALL_data_portal/4212555

Sensor API: https://figshare.com/articles/D4_9_Sensor_Management_API_final_/4508690

Specify where the data and associated metadata, documentation and code are deposited

The data are deposited in:

Data portal (including metadata) -> _http://ckan.stars4all.eu/dataset/tess-monthly-measurements_

Zenodo community -> _http://www.zenodo.org/communities/stars4all_

The interpretation of each measure, as well as the technical components of the device, are open and available to the general public at the following link: _https://drive.google.com/file/d/0Bw_qv1ze9sY2Xy1BYk1Fcmx1X1k/view?usp=sharing_

Software is available on GitHub ( _https://github.com/STARS4ALL/photometer-api_ ) and the API is properly documented in Apiary: _http://docs.photometer.apiary.io_

Specify how access will be provided in case there are any restrictions

You can use the contact email for data issues of our project: [email protected]

#### 2.3 Making data interoperable [FAIR data]

Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.

To facilitate interoperability, we have used the same metadata for all datasets generated in the project. It is based on the metadata vocabulary given by CKAN. Data uses the standard "NSBM Community Standards for Reporting Skyglow Observations".

Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?

Data uses the "NSBM Community Standards for Reporting Skyglow Observations" standard.
#### 2.4 Increase data re-use (through clarifying licenses) [FAIR data]

Specify how the data will be licenced to permit the widest reuse possible

Data is licensed under CC-BY 4.0, which allows to:

Share — copy and redistribute the material in any medium or format

Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed

Data is available in real time through the API and dashboards, and monthly, in dataset format, through Zenodo and our data portal.

Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why

Data will be used by light pollution researchers to study the impact of light pollution on the environment and in scientific publications. It could also be used for environmental studies or for commercial activities such as measuring the quality of the sky around rural hotels. There is no restriction on the use of the data. We only request to be cited using our DOI.

Describe data quality assurance processes

The process to publish a dataset is supervised. A curator checks whether the dataset is generated properly before it is published.

Specify the length of time for which the data will remain re-usable

Data will be continuously archived while the Zenodo platform remains open.

### 3\. Allocation of resources

Estimate the costs for making your data FAIR. Describe how you intend to cover these costs

Making our data FAIR in Zenodo is free of charge. The cost of the data infrastructure will be covered by the STARS4ALL foundation.

Clearly identify responsibilities for data management in your project

The data management process is managed by UPM (Universidad Politécnica de Madrid).
Describe costs and potential value of long term preservation

We use Zenodo for long-term preservation. With these datasets, scientists will be able to measure the evolution of sky quality throughout the years and to study the impact of different outdoor lighting policies.

### 4\. Data security

Address data recovery as well as secure storage and transfer of sensitive data

These kinds of data have no security restrictions because they do not contain sensitive values. Moreover, these data do not contain personal information, so there is no need to apply protection measures. Data, with the exception of the datasets in Zenodo, are hosted on UPM's servers. Only authorised personnel can access the location where these servers are hosted, and periodic backups are performed.

### 5\. Ethical aspects

To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former

These datasets do not contain personal information, so there is no need to apply data protection measures or to request permission from the photometers' owners to publish these data.

### 6\. Other

Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)

No other issues reported.

## STARS4ALL - NightKnights - Final review DMP

### 1\. Data summary

State the purpose of the data collection/generation

Night Knights ( https://www.nightknights.eu/ ) is a Game With A Purpose (GWAP) aimed at Data Linking; more specifically, by playing the game, players help classify images taken from the International Space Station (ISS) with respect to a given taxonomy (hence, links are created between images and categories). In short, all actions by all players are weighted and aggregated by the game, producing a "crowdsourced" classification of ISS pictures that can be used for subsequent analysis of the light pollution phenomenon.
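The weighting and aggregation step described above can be sketched roughly as follows. This is an illustrative assumption only, not the game's actual implementation: the tuple format, per-player weights and the minimum-vote threshold are ours; only the category names come from the dataset description.

```python
from collections import defaultdict

def aggregate_classifications(actions, min_votes=3):
    """Illustrative sketch: tally weighted player votes per photo and emit a
    classification once enough actions have accumulated.
    `actions` is an iterable of (photo_url, category, player_weight) tuples,
    where category is one of: black, city, stars, aurora, astronaut, none."""
    scores = defaultdict(lambda: defaultdict(float))  # photo -> category -> weight
    votes = defaultdict(int)                          # photo -> number of actions
    results = {}
    for photo, category, weight in actions:
        scores[photo][category] += weight
        votes[photo] += 1
        if votes[photo] >= min_votes:
            # classify with the highest-weighted category seen so far
            results[photo] = max(scores[photo], key=scores[photo].get)
    return results
```

For example, three actions on the same ISS picture, two voting city and one voting stars, would classify that picture as city.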
Details on the Night Knights application and its internal functioning are given in deliverables D4.4, D4.6 and D4.10. The data set consists of the output data of the Night Knights game. Data are stored in a MySQL database, securely accessible by means of a dedicated API, as described in the following.

Explain the relation to the objectives of the project

One of the objectives of STARS4ALL is to implement gamification mechanisms to increase citizen participation (O4.1 -> WP4). Another STARS4ALL objective is to give support to LPIs (Light Pollution Initiatives) in data management (O4.4 -> WP4). For that reason, STARS4ALL has deployed a data portal to support them.

Specify the types and formats of data generated/collected

The format of the dataset was defined in a way that facilitates its subsequent reuse, i.e. the game output contains all information needed for further processing and evaluation of the data. The data collected is divided into two categories: donetasks and results. In the first case, the dataset contains the individual tasks executed by users.

<table> <tr> <th> Field Name </th> <th> Type </th> <th> Units </th> <th> Optional </th> <th> Description </th> </tr> <tr> <td> idUser </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the user </td> </tr> <tr> <td> taskId </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the internal id of the task </td> </tr> <tr> <td> smallPhotoUrl </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> It is the url of the photo to be classified </td> </tr> <tr> <td> timestamp </td> <td> datetime </td> <td> \- </td> <td> mandatory </td> <td> It represents the execution time </td> </tr> <tr> <td> choosenCategory </td> <td> enumerate </td> <td> \- </td> <td> mandatory </td> <td> It is the category chosen by the user.
It can be: black, city, stars, aurora, astronaut or none </td> </tr> <tr> <td> groundTruth </td> <td> boolean </td> <td> \- </td> <td> mandatory </td> <td> It represents whether the task is a golden task </td> </tr> <tr> <td> guest </td> <td> boolean </td> <td> \- </td> <td> mandatory </td> <td> It represents whether the user has logged in to the system or is an anonymous user. </td> </tr> </table>

In the second case, the dataset contains the results obtained in NightKnights.

<table> <tr> <th> Field Name </th> <th> Type </th> <th> Units </th> <th> Optional </th> <th> Description </th> </tr> <tr> <td> resultId </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the result </td> </tr> <tr> <td> taskId </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the same as in the submitted task (previous dataset), to allow identification. </td> </tr> <tr> <td> smallPhotoUrl </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> It is the url of the photo to be classified </td> </tr> <tr> <td> class </td> <td> enumerate </td> <td> \- </td> <td> mandatory </td> <td> It is the category chosen by the users. It can be: black, city, stars, aurora, astronaut or none </td> </tr> <tr> <td> numPlayers </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> Number of players that were actually needed to classify the photo </td> </tr> <tr> <td> solutionDate </td> <td> datetime </td> <td> \- </td> <td> mandatory </td> <td> The timestamp of the result (moment in which the classification was cross-validated and the photo removed from the game) </td> </tr> </table>

Specify if existing data is being re-used (if any)

Data generated is used in the project Cities At Night, one of the STARS4ALL light pollution initiatives. Some examples can be found at http://citiesatnight.org/index.php/related-projects/ . Data is also being used as a training dataset in machine learning algorithms, as reported in the following paper: Gloria Re Calegari, Gioele Nasi and Irene Celino: "Human Computation vs. Machine Learning: an Experimental Comparison for Image Classification", Human Computation Journal, vol.
5, issue 1, pp. 13-30, DOI: 10.15346/hc.v5i1.2, 2018. [gold open access]

Specify the origin of the data

The starting point is the NASA database, with almost half a million pictures taken by the astronauts on the International Space Station. These images are classified by users using the NightKnights application ( http://www.nightknights.eu ). The dataset is generated using the NightKnights API, which is described here: https://crowdtaskmanagement.docs.apiary.io/ Methods used: Night Knights Game Results Collection; Night Knights Game Done Tasks Collection.

State the expected size of the data (if known)

The size of the data generated depends on the number of executions and on the users who play the game. The average per year is about 10MB, so a big volume of data is not expected in the future.

Outline the data utility: to whom will it be useful

Data is used for:

Calculating community health indicators of the project Cities At Night. See http://dashboards.stars4all.eu/social/

Research in the Human Computation field. Some examples are:

Gloria Re Calegari, Gioele Nasi and Irene Celino: "Human Computation vs. Machine Learning: an Experimental Comparison for Image Classification", Human Computation Journal, vol. 5, issue 1, pp. 13-30, DOI: 10.15346/hc.v5i1.2, 2018. [gold open access]

Gloria Re Calegari and Irene Celino: "Interplay of Game Incentives, Player Profiles and Task Difficulty in Games with a Purpose", in proceedings of the 21st International Conference on Knowledge Engineering and Knowledge Management - EKAW 2018, LNAI Volume 11313, pp. 306-321, DOI: 10.1007/978-3-030-03667-6_20, 2018.
[green open access (eprint) at arXiv]

Gloria Re Calegari, Andrea Fiano and Irene Celino: "A Framework to build Games with a Purpose for Linked Data Refinement", in proceedings of the International Semantic Web Conference 2018, Best Paper Award Resources Track, LNCS Volume 11137, pp. 154-169, DOI: 10.1007/978-3-030-00668-6_10, 2018. [green open access (eprint) at arXiv]

#### 2.1 Making data findable, including provisions for metadata [FAIR data]

Outline the discoverability of data (metadata provision)

As with the rest of the data, all data generated in this project have been uploaded and indexed in our data portal (http://data.stars4all.eu), adding the metadata defined by CKAN. This includes: 1. Title 2. Description 3. Tags 4. License 5. Visibility 6. Source 7. Version 8. Author / Author email 9. Maintainer / email

The datasets of our project can be found from a search field using any of the metadata defined previously. They are also accessible in our data portal and on Zenodo, and re-usable since they are published under the CC-BY 4.0 license. The fields defined inside the datasets are described in the description field of the dataset.

Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?

NightKnights results are uploaded to our portal and to Zenodo ( https://zenodo.org/communities/stars4all ). Zenodo is used to generate DOIs and to archive (make persistent) our datasets. This also allows our data to be indexed in OpenAIRE.

Outline naming conventions used

For naming donetasks datasets, the following convention has been used: donetasks-nightknights-year

For naming results datasets, the following convention has been used: results-nightknights-year

Outline the approach towards search keyword

Our datasets are published both in our data portal and in Zenodo.
A set of tags has been defined to tag each dataset, facilitating their search. In addition, in our data portal, based on CKAN, users can make searches based on the common metadata defined for all datasets, as described previously in this section.

Outline the approach for clear versioning

Both Zenodo and our data portal include a versioning system in the metadata of the datasets. The version changes when a modification has been made to a dataset. A change of version does not imply a request for a new DOI.

Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how

The format of the dataset was defined in a way that facilitates its subsequent reuse, i.e. the game output contains all information needed for further processing and evaluation of the data. Night Knights data gives information about three main themes:

Classification results (classified photo, assigned classification, number of players needed to come to a classification agreement)

Players' actions ("log" of all classification actions done by anonymized game users)

Game evaluation data (including KPIs like number of tasks available/started/completed, number of players, total played time, throughput, average life play, etc.)

#### 2.2 Making data openly accessible [FAIR data]

Specify which data will be made openly available? If some data is kept closed provide rationale for doing so

All data generated have been made openly available, except some personal data such as the identifier of the user in social networks. This is because users can log in to the game with their Twitter, Facebook or Google+ accounts. This personal data is kept closed to comply with personal data protection law.

Specify how the data will be made available

Data is available from our data portal. There are two different datasets: i) one for the results, ii) one for the individual tasks. In both cases, there are different resources (files) per year.
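Since the portal is built on CKAN, its datasets can also be retrieved programmatically through CKAN's Action API. The sketch below is a hedged example: the helper names and the example query are ours; only the `package_search` endpoint and the response envelope come from CKAN.

```python
import json
import urllib.parse
import urllib.request

def build_search_url(portal, query):
    """CKAN 2.x Action API endpoint for free-text dataset search."""
    return (portal.rstrip("/") + "/api/3/action/package_search?"
            + urllib.parse.urlencode({"q": query}))

def parse_search_response(payload):
    """Extract dataset names from a CKAN package_search response."""
    if not payload.get("success"):
        raise RuntimeError("CKAN API call failed")
    return [pkg["name"] for pkg in payload["result"]["results"]]

def search_datasets(portal, query):
    """Fetch and parse matching dataset names from a CKAN portal."""
    with urllib.request.urlopen(build_search_url(portal, query)) as resp:
        return parse_search_response(json.load(resp))
```

For example, `search_datasets("http://data.stars4all.eu", "nightknights")` would list the Night Knights dataset names published in the portal.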
Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?

Only a browser is needed to access the data. In addition, the API of the data portal can be used. For more information, check: https://docs.ckan.org/en/2.7/api/index.html

Specify where the data and associated metadata, documentation and code are deposited

The data are deposited in:

Data portal, inside each project ( http://data.stars4all.eu )

Zenodo community -> http://www.zenodo.org/communities/stars4all

The source code can be found here: https://github.com/STARS4ALL/gwap-enabler

You can find tutorials here:

https://github.com/STARS4ALL/gwap-enabler-tutorial

_https://crowdtaskmanagement.docs.apiary.io_

Specify how access will be provided in case there are any restrictions

You can use the contact email for data issues of our project: [email protected]

#### 2.3 Making data interoperable [FAIR data]

Assess the interoperability of your data.
Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.

To facilitate interoperability, we have used the same metadata for all datasets generated in the project. It is based on the metadata vocabulary given by CKAN.

Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?

Vocabulary is described in the deliverables:

D4.10: Games Release (final version)

D5.7: STARS4ALL tools for crowdsourcing activities (initial release)

D5.8: STARS4ALL tools for crowdsourcing activities (final release)

#### 2.4 Increase data re-use (through clarifying licenses) [FAIR data]

Specify how the data will be licenced to permit the widest reuse possible

Data is licensed under CC-BY 4.0, which allows to:

Share — copy and redistribute the material in any medium or format

Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed

There is no embargo period. Datasets are published periodically.

Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why

These data can be used to complement the work done in the Cities At Night project and the application Dark Skies ISS ( _https://crowdcrafting.org/project/darkskies/_ )

Describe data quality assurance processes

The process to publish a dataset is supervised. A curator checks whether the dataset is generated properly before it is published.

Specify the length of time for which the data will remain re-usable

Data will be archived while the Zenodo platform remains open.

### 3\. Allocation of resources

Estimate the costs for making your data FAIR. Describe how you intend to cover these costs

Making our data FAIR in Zenodo is free of charge. The cost of the data infrastructure will be covered by the STARS4ALL foundation.

Clearly identify responsibilities for data management in your project

The data management process is managed by UPM (Universidad Politécnica de Madrid).

Describe costs and potential value of long term preservation

We use Zenodo for long-term preservation.
With these datasets, scientists will be able to measure the evolution of light pollution using images taken by astronauts on the ISS (International Space Station).

### 4\. Data security

Address data recovery as well as secure storage and transfer of sensitive data

There is only one field in the dataset that may be considered personal data. This field, idUser, contains the id of the user who executes the action. Nevertheless, this field is an internal identifier (it does not contain the user name), and the full name or other personal information cannot be traced from this id. Data of the data portal are hosted on UPM's servers. Only authorised personnel can access the location where these servers are hosted, and periodic backups are performed.

### 5\. Ethical aspects

To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former

These datasets do not contain personal information, so there is no need to apply data protection measures. The only personal information stored on our servers is the social identifier (Facebook, Twitter or Google+) of the users. This information is not available in the datasets (a user id is present in the dataset after executing an anonymising process). According to the Spanish national data protection law, consent to store this information is requested from the user. See _https://www.nightknights.eu/#/privacy_

### 6\. Other

Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)

No other issues reported.

## STARS4ALL - Cities At Night - Final review DMP

### 1\. Data summary

State the purpose of the data collection/generation

Cities at Night is a citizen science project that aims to create a map, similar to Google Maps, of the Earth at night using night-time colour photographs taken by astronauts onboard the ISS.
NASA has a database with almost half a million pictures taken by the astronauts on the International Space Station. To organise all these data with the use of computers would be infeasible, since it would take extremely complex algorithms to interpret the photographs. The human eye, however, knows immediately if the camera was pointing at a city or simply at the stars. For this reason we created Cities at Night, a platform with 3 apps with which anyone can help while enjoying beautiful pictures taken from space.

Explain the relation to the objectives of the project

One of the objectives of STARS4ALL is to give support to LPIs (Light Pollution Initiatives) in data management (O4.4). For that reason, STARS4ALL has deployed a data portal to support them.

Specify the types and formats of data generated/collected

The data collected is divided into two categories: task and taskrun. task defines the tasks created to be executed by users in the Dark Skies application.

<table> <tr> <th> Field Name </th> <th> Type </th> <th> Units </th> <th> Optional </th> <th> Description </th> </tr> <tr> <td> task_calibration </td> <td> boolean </td> <td> \- </td> <td> mandatory </td> <td> Defines if it is a golden task </td> </tr> <tr> <td> task_created </td> <td> datetime </td> <td> \- </td> <td> mandatory </td> <td> Defines when the task was created </td> </tr> <tr> <td> task_id </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the task </td> </tr> <tr> <td> task_info </td> <td> json object </td> <td> \- </td> <td> mandatory </td> <td> It is a json object with the input source (video, image, sound, etc ...)
of the task </td> </tr> <tr> <td> task_n_answer </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the number of answers necessary to complete the task </td> </tr> <tr> <td> task__priority_0 </td> <td> float </td> <td> \- </td> <td> mandatory </td> <td> Priority of the task </td> </tr> <tr> <td> task_project_id </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the project associated to the task </td> </tr> <tr> <td> task_quorum </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the number of different users necessary to accept the task </td> </tr> <tr> <td> task_state </td> <td> enumerate </td> <td> \- </td> <td> mandatory </td> <td> Defines the state of the task (COMPLETED) </td> </tr> <tr> <td> task_info_idiss </td> <td> string </td> <td> \- </td> <td> mandatory </td> <td> Defines the id of the task in string format. In our case, the name of the image that we have to classify </td> </tr> <tr> <td> task_info_link </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> Defines the place where the user can find the image. </td> </tr> <tr> <td> task_info_link_big </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> Defines the url of the image in high resolution </td> </tr> <tr> <td> task_info_link_small </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> Defines the url of the image to be shown to the user (small resolution) </td> </tr> </table>

taskrun defines the executions of a specific task.

<table> <tr> <th> Field Name </th> <th> Type </th> <th> Units </th> <th> Optional </th> <th> Description </th> </tr> <tr> <td> task_run__calibration </td> <td> boolean </td> <td> \- </td> <td> mandatory </td> <td> Defines if the origin is a golden task </td> </tr> <tr> <td> task_run__created </td> <td> datetime </td> <td> \- </td> <td> mandatory </td> <td> Defines when the task was shown to the user to be executed.
</td> </tr> <tr> <td> task_run__external_uid </td> <td> string </td> <td> \- </td> <td> mandatory </td> <td> Defines if the task was executed with an external application (using the API) </td> </tr> <tr> <td> task_run__finish_time </td> <td> datetime </td> <td> \- </td> <td> mandatory </td> <td> Defines when the task was executed. </td> </tr> <tr> <td> task_run__id </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the object </td> </tr> <tr> <td> task_run__info </td> <td> json object </td> <td> \- </td> <td> mandatory </td> <td> It is a json object with the response (result) of the user. </td> </tr> <tr> <td> task_run__project_id </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the project associated to the task </td> </tr> <tr> <td> task_run__task_id </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the identifier of the task (see previous table) </td> </tr> <tr> <td> task_run__timeout </td> <td> boolean </td> <td> \- </td> <td> mandatory </td> <td> Defines if the user solved the task in the time assigned to it. </td> </tr> <tr> <td> task_run__user_id </td> <td> integer </td> <td> \- </td> <td> mandatory </td> <td> It is the id of the user </td> </tr> <tr> <td> task_runinfo__classification </td> <td> enumerate </td> <td> \- </td> <td> mandatory </td> <td> It is the result of the classification (AURORA, CITY, BLACK, ISS) </td> </tr> <tr> <td> task_runinfo__cloudy </td> <td> enumerate </td> <td> \- </td> <td> mandatory </td> <td> It provides more information about the quality of the image shown; in our case, the level of clouds in the image </td> </tr> <tr> <td> task_runinfo__img_big </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> Image in high resolution classified </td> </tr> <tr> <td> task_runinfo__img_smal </td> <td> url </td> <td> \- </td> <td> mandatory </td> <td> Image in small resolution classified </td> </tr> </table>

Specify if existing data is being re-used (if any)

Data generated in these crowdsourcing activities and present in this dataset is being used in many research papers and projects, such as:

EVALUATING THE ASSOCIATION BETWEEN ARTIFICIAL LIGHT-AT-NIGHT EXPOSURE AND BREAST AND PROSTATE CANCER RISK IN SPAIN.
( http://citiesatnight.org/index.php/portfolio-view/6799-2/ )

ARTIFICIALLY LIT SURFACE OF EARTH AT NIGHT INCREASING IN RADIANCE AND EXTENT (http://citiesatnight.org/index.php/portfolio-view/alan-increasing/ )

Artificial light at night and cognitive development and performance in children, Juana Maria Delgado-Saborit, ISGlobal (Spain) and King's College London (UK)

Turtles conservation in Florida, Diana Umpierre, IDA, Florida, USA

EMISSI@N, Kevin G. Gaston, Environmental and Sustainability Institute of University of Exeter, Cornwall, United Kingdom

More information can be found at:

http://citiesatnight.org/index.php/related-projects/

http://citiesatnight.org/index.php/science-research/

Specify the origin of the data

The starting point is the NASA database with almost half a million pictures taken by the astronauts on the International Space Station. These images are classified by users in the project Dark Skies ( https://crowdcrafting.org/project/darkskies/ ), which runs on the Crowdcrafting platform.

State the expected size of the data (if known)

According to an estimation based on previous datasets, storing the execution of 193090 tasks takes up 625MB. These tasks cover a period of one and a half years.

Outline the data utility: to whom will it be useful

As described in the "data re-used" section above, these data are relevant for light pollution researchers because they can be used to estimate the light pollution in cities as well as their colour (amber, blue, white, etc.).

#### 2.1 Making data findable, including provisions for metadata [FAIR data]

Outline the discoverability of data (metadata provision)

As with the rest of the data, all data generated in this project have been uploaded and indexed in our data portal (http://data.stars4all.eu), adding the metadata defined by CKAN. This includes: 1. Title 2. Description 3. Tags 4. License 5. Visibility 6.
Source 7. Version 8. Author / Author email 9. Maintainer / email

The datasets of our project can be found from a search field using any of the metadata defined previously. They are also accessible in our data portal and on Zenodo, and re-usable since they are published under the CC-BY 4.0 license. The fields defined inside the datasets are described in the description field of the dataset.

Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?

Dark Skies ISS tasks and task runs are uploaded to our portal and to Zenodo ( https://zenodo.org/communities/stars4all ). Zenodo is used by our project to generate DOIs and to archive (make persistent) our datasets. This also allows our data to be indexed in OpenAIRE.

Outline naming conventions used

For naming task datasets, the following convention has been used: CitiesAtNight-Tasks

For naming task_run datasets, the following convention has been used: CitiesAtNight-Tasks_run_(Year)

Outline the approach towards search keyword

Our datasets are published both in our data portal and in Zenodo. A set of tags has been defined to tag each dataset, facilitating their search. In addition, in our data portal, based on CKAN, users can make searches based on the common metadata defined for all datasets, as described previously in this section.

Outline the approach for clear versioning

Both Zenodo and our data portal include a versioning system in the metadata of the datasets. The version changes when a modification has been made to a dataset. A change of version does not imply a request for a new DOI.

Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how

There is no entry in the Metadata Standards Directory for the datasets generated in our project. There is no standard used for crowdsourcing activities.
Nevertheless, we use the data model of PyBossa ( _http://docs.pybossa.com/en/latest/model.html_ ). Following the PyBossa model, data are stored in a database system (Postgres) as JSON objects. Depending on their nature, projects in PyBossa belong to two different categories: thinking or sensing. The thinking category is used in projects where users solve problems, such as tagging images. The sensing category is used for projects where users gather data. The generated metadata are stored as the result of a task (taskrun). Once a quorum is reached, the value is incorporated into the resource.

#### 2.2 Making data openly accessible [FAIR data]

Specify which data will be made openly available? If some data is kept closed provide rationale for doing so

All data generated have been made openly available.

Specify how the data will be made available

Data is available from our data portal. There are two different datasets: i) one for tasks, ii) one for the results. In the case of the results, there are different resources (files) per year.

Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?

Only a browser is needed to access the data. In addition, the API of the data portal can be used. For more information, check: _https://docs.ckan.org/en/2.7/api/index.html_

Specify where the data and associated metadata, documentation and code are deposited

The data are deposited in:

Data portal, inside each project ( _http://data.stars4all.eu_ )

Zenodo community -> _http://www.zenodo.org/communities/stars4all_

Specify how access will be provided in case there are any restrictions

You can use the contact email for data issues of our project: [email protected]

#### 2.3 Making data interoperable [FAIR data]

Assess the interoperability of your data.
Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.

To facilitate interoperability, we have used the same metadata for all datasets generated in the project. It is based on the metadata vocabulary given by CKAN.

Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?

There is no standard for crowdsourcing activities. Nevertheless, we use the data model of PyBossa ( _http://docs.pybossa.com/en/latest/model.html_ ).

#### 2.4 Increase data re-use (through clarifying licenses) [FAIR data]

Specify how the data will be licenced to permit the widest reuse possible

Data are licensed with a Creative Commons license. "Cities at Night" by Alejandro Sánchez de Miguel et al., Atlas of astronaut photos of Earth at night, A&G (2014), is distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. _https://doi.org/10.1093/astrogeo/atu165_ However, if the Cities at Night data are a principal component of a science paper, then co-authorship should be offered to the PIs.

Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed

There is no embargo period. Datasets are published periodically.

Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why

As mentioned in previous sections, these data are very important for light pollution research. Some examples can be found in: _http://citiesatnight.org/index.php/related-projects/_

Describe data quality assurance processes

The process to publish a dataset is supervised. A curator checks whether the dataset has been generated properly before it is published.
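As a concrete illustration of the PyBossa model mentioned above, the following sketch shows how task runs could be consolidated once a quorum is reached. This is an illustrative sketch only: the field names (`info`, `task_id`, `user_id`) follow the PyBossa data model, but the quorum threshold and the consolidation function are assumptions, not the project's actual code.

```python
from collections import Counter

def consolidate(task_runs, quorum=3):
    """Return the agreed answer once `quorum` identical contributions
    exist for a task, or None while the task is still open. In the
    PyBossa model, a volunteer's answer is stored in the `info` field."""
    counts = Counter(run["info"] for run in task_runs)
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= quorum else None

# Task runs for one task, stored in Postgres as JSON objects:
runs = [
    {"task_id": 1, "user_id": 10, "info": "city"},
    {"task_id": 1, "user_id": 11, "info": "city"},
    {"task_id": 1, "user_id": 12, "info": "stars"},
    {"task_id": 1, "user_id": 13, "info": "city"},
]
print(consolidate(runs))  # → city
```

With the default quorum of 3, the same task with only two matching contributions would remain open (`consolidate` returns `None`) until another agreeing task run arrives.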
Specify the length of time for which the data will remain re-usable

Data will be archived for as long as the Zenodo platform remains open.

### 3\. Allocation of resources

Estimate the costs for making your data FAIR. Describe how you intend to cover these costs

Making our data FAIR in Zenodo is free of charge. The cost of the data infrastructure will be covered by the STARS4ALL foundation.

Clearly identify responsibilities for data management in your project

The data management process is managed by UPM (Universidad Politécnica de Madrid).

Describe costs and potential value of long term preservation

We use Zenodo for long term preservation. With these datasets, scientists will be able to measure the evolution of light pollution using images taken by astronauts on the ISS (International Space Station).

### 4\. Data security

Address data recovery as well as secure storage and transfer of sensitive data

There is only one field in the dataset that may be subject to security restrictions, because it may be considered personal data. This field, task_run_user_id, contains the id of the user who executes the action. Nevertheless, this field is an internal identifier (it does not contain the name of the user), and the full name or other personal information cannot be traced from this id. Data of the data portal are hosted on UPM's servers. Only authorised personnel can access the location where these servers are hosted, and periodical backups are made. All the data generated by the volunteers in the Crowdcrafting platform are, unless otherwise stated, under the _Open Data Commons License_ . Check the description on the project's page for further details about which license is going to be applied.

### 5\. Ethical aspects

To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables.
Include references and related technical aspects if not covered by the former

These datasets do not contain personal information, so there is no need to apply data protection measures.

### 6\. Other

Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)

No other issues reported.

## STARS4ALL - Community Health - Final review DMP

### 1\. Data summary

State the purpose of the data collection/generation

In order to carry out the community health analysis in WP3, we use data from the various LPIs to calculate a set of metrics, as set out in D3.4. We are primarily concerned with analysing the number of classifications or tasks carried out, how these are distributed among the community, and how many people in total are participating. Since we cover a range of project types in the LPIs, for now we focus on a set of data that is particular to the classification-based projects; however, much of this can easily be adapted to other types of project, too.

Explain the relation to the objectives of the project

One of the objectives of WP3 is to analyse the effectiveness of community awareness activities, in our case, LPIs. Community engagement needs to be continuously scrutinized and monitored, and relevant activities and strategies need to be adjusted accordingly. The aim here is to set up the analysis tools that will allow us to take informed and well-balanced decisions regarding our activities.

Specify the types and formats of data generated/collected

Contribution data are exported from individual projects, such as Dark Skies, Globe at Night, etc., and include the number of contributions over time, the number of users who are contributing, and task metrics.
| Field Name | Type | Units | Optional | Description |
| --- | --- | --- | --- | --- |
| day | datetime | - | mandatory | Analyzed day |
| rows | integer | - | mandatory | Number of entries processed |
| contributions | integer | - | mandatory | Number of contributions made |
| users | integer | - | mandatory | Number of users who have made a contribution |
| starters | integer | - | mandatory | Number of starters |

Specify if existing data is being re-used (if any)

Not for the moment.

Specify the origin of the data

These data are generated by several scripts that run periodically on our servers. The source code is here: https://github.com/STARS4ALL/STARS4ALL-social-dashboard

State the expected size of the data (if known)

The total amount of data collected in these datasets is 8.7MB (with data collected over the past two years). The size will depend on the number of contributions, but we do not expect more than 10MB of data per year.

Outline the data utility: to whom will it be useful

These data are targeted at the initiative/project coordinator. With this dataset, the coordinator will be able to measure the health of the community behind the project. These datasets can also be valuable for researchers who are working with online communities.

#### 2.1 Making data findable, including provisions for metadata [FAIR data]

Outline the discoverability of data (metadata provision)

As with the rest of the data, all data generated in this project have been uploaded and indexed in our data portal (http://data.stars4all.eu), adding the metadata defined by CKAN. This includes:

1. Title
2. Description
3. Tags
4. License
5. Visibility
6. Source
7. Version
8. Author / Author email
9. Maintainer / email

The datasets of our project can be found using any of the metadata defined previously from a search field.
They are also accessible in our data portal and in Zenodo, and re-usable, since they are published with the CC-BY 4.0 license. The fields defined inside the datasets are described in the description field of the dataset.

Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?

Community health indicators are uploaded to our portal and to Zenodo ( https://zenodo.org/communities/stars4all ). Zenodo is used by our project to generate DOIs and to archive (make persistent) our datasets; this also allows our data to be indexed in OpenAire.

Outline naming conventions used

For naming, the following convention has been used: social-name_of_the_project

Outline the approach towards search keyword

Our datasets are published both in our data portal and in Zenodo. A set of tags has been defined to tag each dataset, facilitating their search. In addition, in our data portal, based on CKAN, users can search on the common metadata defined for all datasets, as described previously in this section.

Outline the approach for clear versioning

Both Zenodo and our data portal have a versioning system included in the metadata of the datasets. The version changes when a modification has been made to a dataset. A change of version does not imply a request for a new DOI.

Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how

There is no entry in the Metadata Standards Directory for the datasets generated in our project. The convention followed to describe the fields of the dataset is described in D3.6 Models and methods for community health monitoring (final release).

#### 2.2 Making data openly accessible [FAIR data]

Specify which data will be made openly available?
If some data is kept closed provide rationale for doing so

All data generated have been made openly available.

Specify how the data will be made available

Data are available from our data portal (one social dataset per project) and from our Zenodo community ( _http://www.zenodo.org/communities/stars4all_ ).

Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?

You only need a browser to access the data. In addition, you can use the API of the data portal. You can find more information here: _https://docs.ckan.org/en/2.7/api/index.html_

Specify where the data and associated metadata, documentation and code are deposited

The data are deposited in:

* our data portal, inside each project ( _http://data.stars4all.eu_ )
* our Zenodo community ( _http://www.zenodo.org/communities/stars4all_ )

You can access the visual components here: _http://dashboards.stars4all.eu/social/_ The source code is here: _https://github.com/STARS4ALL/STARS4ALL-social-dashboard_

Specify how access will be provided in case there are any restrictions

You can use the contact email for data issues of our project: [email protected]

#### 2.3 Making data interoperable [FAIR data]

Assess the interoperability of your data.

Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.

To facilitate interoperability, we have used the same metadata for all datasets generated in the project. It is based on the metadata vocabulary given by CKAN. Standards and methodologies are described in D3.6 Models and methods for community health monitoring.

Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
The vocabulary is described in deliverable D3.6 Models and methods for community health monitoring.

#### 2.4 Increase data re-use (through clarifying licenses) [FAIR data]

Specify how the data will be licenced to permit the widest reuse possible

Data are licensed with CC-BY 4.0, which allows users to:

* Share — copy and redistribute the material in any medium or format
* Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed

There is no embargo period. Datasets are published periodically.

Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why

These data are very important for the coordinators of the LPIs (Light Pollution Initiatives), because they can easily see the impact of their policies and the health of their community. They can also be used by social researchers to measure different social communities.

Describe data quality assurance processes

The process to publish a dataset is supervised. A curator checks whether the dataset has been generated properly before it is published.

Specify the length of time for which the data will remain re-usable

Data will be archived for as long as the Zenodo platform remains open.

### 3\. Allocation of resources

Estimate the costs for making your data FAIR. Describe how you intend to cover these costs

Making our data FAIR in Zenodo is free of charge. The cost of the data infrastructure will be covered by the STARS4ALL foundation.

Clearly identify responsibilities for data management in your project

The data management process is managed by UPM (Universidad Politécnica de Madrid).

Describe costs and potential value of long term preservation

We use Zenodo for long term preservation.
With these datasets, scientists will be able to measure the evolution of the health of social communities.

### 4\. Data security

Address data recovery as well as secure storage and transfer of sensitive data

These data do not have security restrictions, because they do not contain sensitive values. They also do not contain personal information, so there is no need to apply protection measures. Data, with the exception of the datasets in Zenodo, are hosted on UPM's servers. Only authorised personnel can access the location where these servers are hosted, and periodical backups are made.

### 5\. Ethical aspects

To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former

These datasets do not contain personal information, so there is no need to apply data protection measures.

### 6\. Other

Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)

No other issues reported.
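Both DMPs above note that the CKAN-based data portal can be queried programmatically through its API. The sketch below builds a `package_search` request URL; the endpoint path follows the CKAN Action API (linked in the sections above), while the query string is only an example.

```python
from urllib.parse import urlencode

PORTAL = "http://data.stars4all.eu"  # STARS4ALL data portal (CKAN-based)

def search_url(query, rows=10):
    """Build a CKAN Action API `package_search` URL for the portal."""
    params = urlencode({"q": query, "rows": rows})
    return f"{PORTAL}/api/3/action/package_search?{params}"

url = search_url("community health")
print(url)
# Fetching this URL (e.g. with urllib or requests) returns JSON whose
# result["results"] list holds the matching dataset records.
```

Searches can use any of the common metadata fields (title, tags, author, etc.) described earlier in this section.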
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1076_STARS4ALL_688135.md
1\. Introduction

2\. Datasets

2.1. Photometers’ Network (Initial Version)

2.2. Photometers’ Network (Detailed Version)

2.3. Community Health (Initial Version)

2.4. Games - NightKnights (Initial Version)

# Introduction

This document is the intermediate version of the Data Management Plan (DMP). We have used the tool DMPOnline 1 for generating the partial DMPs for each dataset. They have been written with the template specially created for H2020 projects. Compared with the previous version of the document (D4.2), some modifications have been made:

* In the dataset generated by the **photometer network** , we have included in each measurement the photometer's coordinates (latitude, longitude).
* We have removed the crowdsourcing dataset, integrating its data into the **community health** and **games** datasets.

We have also added the following datasets:

* A **detailed version** of the photometers’ dataset.
* **Games dataset (initial version)** : In STARS4ALL, games with a purpose will be developed to acquire and process data from LPIs. These data will generate two datasets: i) one with the results of the task executions, and ii) another with the gamification process results, such as points, badges, etc. The plan for the first dataset is equivalent to the plan generated for the toolbox (annex). The plan for the second dataset will be ready in the next version.
* **Community Health dataset (initial version)** : STARS4ALL is a project oriented to creating awareness among citizens, so it is very important for us to measure the impact of CAs on the community. It goes without saying that these data must be open and shared with the community in order to engage newcomers and increase their participation.

# Datasets

## Photometers’ Network (Initial Version)

### Data set description

Light pollution samples will be continuously taken by the photometer network deployed in the project.
The specification of the photometers is provided in deliverable D4.1 (https://figshare.com/s/76f01dec468f8286f781). Each photometer will send data to a message broker, which will forward it to the subscribers. In our case, the subscribers will insert data into our data portal (also described in deliverable D5.4). Raw data will be acquired every 5 minutes. This default data acquisition interval may be reduced to 1 minute if necessary. Raw data will be generated as a 10-row CSV in ASCII format. This means that each photometer will generate 288KB per day in the worst scenario (1 sample / minute). Following the STARS4ALL Key Performance Indicators, which are described in the Document of Action (Section 2.1.1.2), at the end of the project 250 photometers will be deployed. This amounts to a data volume of 72MB per day, 2.16GB per month and 25.96GB per year. This size has been calculated considering uncompressed data.

### Standards and metadata

Night sky brightness data will be archived using the "NSBM Community Standards for Reporting Skyglow Observations", which was officially adopted at the 12th European Symposium for the Protection of the Night Sky and endorsed by the International Dark Sky Association (IDA) and by the International Astronomical Union (IAU) in Beijing 2012 (SpS17: "Light Pollution: Protecting Astronomical Sites and Increasing Global Awareness through Education"). More information about the header files can be found at the following link: _http://darksky.org/light-pollution/measuring-light-pollution/_

The fields present in the dataset are:

* **name**
  * Name of the device
* **mag**
  * Magnitude measured by the photometer
* **tsky**
  * Temperature of the sky measured by an infrared sensor placed in the photometer.
* **tamb**
  * Ambient temperature
* **latitude**
  * Latitude of the photometer
* **longitude**
  * Longitude of the photometer
* **tstamp**
  * Timestamp of the measurement in ISO-8601 format

### Data sharing

Users can access the datasets in three different ways:

* From our data portal ( _http://ckan.stars4all.eu_ ). A description of this data portal can be found in deliverable D5.4.
* Using our data API (D4.9), which is used by our dashboards to visualize data.
* From our Zenodo Community 2 . Monthly datasets will be published, and a Digital Object Identifier will be generated for each of them.

We are using UPM's own servers for data storage, and in the long term we will hire hosting services with the funds raised by the foundation. All data generated by this network will be open access, and we do not consider any restriction, including embargo periods.

### Archiving and preservation (including storage and backup)

As discussed in the data sharing section, data will be archived and preserved using Zenodo. The amount and preservation of the data will depend on the policy applied by the Zenodo consortium. According to its current policy, there is no limitation on the public space or on the preservation time. Data files are also backed up nightly.

## Photometers’ Network (Detailed Version)

Scientific research data should be easily:

### Discoverable

Datasets with the measurements generated by photometers are accessible through two portals:

* STARS4ALL data portal based on CKAN ( _http://ckan.stars4all.eu_ )
* STARS4ALL community in Zenodo (https://zenodo.org/communities/stars4all/)

Datasets in Zenodo have their own DOI.

**Figure 1: DOI of the dataset with April’s measurements**

### Accessible

**Are the data and associated software produced and/or used in the project accessible and in what modalities, scope, licenses?**

Apart from using the data portals, the last 30 days of measurements from photometers are accessible through an API (see http://docs.photometer.apiary.io).
The associated software is available and accessible at:

* Data extraction: _https://github.com/STARS4ALL/tess-adapter_
* API: _https://github.com/STARS4ALL/photometer-api_
* CKAN image (forked from the CKAN project): https://github.com/STARS4ALL/ckan

The license of the datasets and the associated software is Creative Commons Attribution 4.0 (CC-BY 4.0). See http://opendefinition.org/licenses/cc-by/

### Assessable and intelligible

**Are the data and associated software produced and/or used in the project assessable for and intelligible to third parties in contexts such as scientific scrutiny and peer review?**

Each dataset is generated with a DOI, and the devices that take the measurements are conveniently identified (id & location). The interpretation of each measure, as well as the technical components of the device, are open and available to the general public at the following link: _https://figshare.com/s/66d11c0d5a1c9ac81160_ The software is available on GitHub (see previous point) and the API is properly documented in Apiary: http://docs.photometer.apiary.io

### Usable beyond the original purpose for which it was collected

**Are the data and associated software produced and/or used in the project useable by third parties even long time after the collection of the data?**

Yes, these data can be used to generate future mathematical models that can predict, or allow understanding of, light pollution. Many domains can benefit:

* Urban planners, to measure the impact of new light sources or to analyse urban growth.
* Biologists, to study the impact of light pollution on animals or plants.
* Medical doctors, to deepen the study of light pollution in human health.

These photometers and their measurements will provide the community with a historical dataset that shows the evolution of light pollution. This will be a valuable resource for scientists and public institutions.
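The per-reading fields listed under Standards and metadata above can be turned into typed records with a few lines of code. This is an illustrative sketch only: the column order and the sample row are assumptions, not the actual photometer output format specified in D4.1.

```python
import csv
import io
from datetime import datetime

# Column order assumed to match the field list given above.
FIELDS = ["name", "mag", "tsky", "tamb", "latitude", "longitude", "tstamp"]

def parse_readings(raw_csv):
    """Parse raw photometer CSV rows into typed records."""
    records = []
    for row in csv.DictReader(io.StringIO(raw_csv), fieldnames=FIELDS):
        records.append({
            "name": row["name"],
            "mag": float(row["mag"]),            # night sky brightness
            "tsky": float(row["tsky"]),          # IR sky temperature
            "tamb": float(row["tamb"]),          # ambient temperature
            "latitude": float(row["latitude"]),
            "longitude": float(row["longitude"]),
            "tstamp": datetime.fromisoformat(row["tstamp"]),  # ISO-8601
        })
    return records

# Hypothetical reading from a device named "stars1":
sample = "stars1,21.4,-15.2,12.8,40.45,-3.73,2017-04-01T22:05:00"
print(parse_readings(sample)[0]["mag"])  # → 21.4
```

Typed records of this shape are what a subscriber would insert into the data portal after receiving a message from the broker.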
### Interoperable to specific quality standards

**Are the data and associated software produced and/or used in the project interoperable allowing data exchange between researchers, institutions, organisations, countries, etc?**

Night sky brightness data will be archived using the "NSBM Community Standards for Reporting Skyglow Observations", which was officially adopted at the 12th European Symposium for the Protection of the Night Sky and endorsed by the International Dark Sky Association (IDA) and by the International Astronomical Union (IAU) in Beijing 2012.

## Community Health (Initial Version)

### Data set description

In order to carry out the community health analysis in WP3, we use data from the various LPIs to calculate a set of metrics, as set out in D3.4. We are primarily concerned with analysing the number of classifications or tasks carried out, how these are distributed among the community, and how many people in total are participating. Since we cover a range of project types in the LPIs, for now we focus on a set of data that is particular to the classification-based projects; however, much of this can easily be adapted to other types of project, too. The dataset will continuously evolve throughout the project, both as the current LPIs continue to produce data and as new LPIs are run. This will allow us to compare the metrics from various projects to determine what types attract ‘healthier’ communities, while also allowing us to manage those communities better by being aware of their current health at a particular moment in time. The dataset consists of the results of a range of metrics used to measure various aspects of community health. It is therefore derivative data from the raw data obtained by each LPI. We are generating four metrics per project per day.
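A sketch of how per-day community health rows could be derived from raw task runs follows. The output keys mirror the fields published in the community health datasets (day, rows, contributions, users, starters); the input record format and the definition of "starter" as a user whose first-ever contribution falls on that day are assumptions for illustration, not the project's actual metric code.

```python
from collections import defaultdict
from datetime import date

def daily_metrics(task_runs):
    """Aggregate task runs into per-day community-health rows:
    entries processed, contributions, distinct users, and starters
    (users whose first-ever contribution falls on that day)."""
    first_seen = {}
    by_day = defaultdict(list)
    for run in sorted(task_runs, key=lambda r: r["day"]):
        first_seen.setdefault(run["player_id"], run["day"])
        by_day[run["day"]].append(run)
    rows = []
    for day in sorted(by_day):
        users = {r["player_id"] for r in by_day[day]}
        rows.append({
            "day": day,
            "rows": len(by_day[day]),
            "contributions": len(by_day[day]),
            "users": len(users),
            "starters": sum(1 for u in users if first_seen[u] == day),
        })
    return rows

# Hypothetical task runs from one LPI:
runs = [
    {"player_id": 1, "day": date(2017, 5, 1)},
    {"player_id": 2, "day": date(2017, 5, 1)},
    {"player_id": 1, "day": date(2017, 5, 2)},
    {"player_id": 3, "day": date(2017, 5, 2)},
]
print(daily_metrics(runs)[1]["starters"])  # → 1
```

In the example, 2017-05-02 has two active users, but only one starter: player 1 already contributed the day before.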
### Standards and metadata

As the data come from numerous different projects, we do not put restrictions on the format of the data itself, but instead require that certain attributes are contained. Using these, we then carry out a range of calculations on the data to make our own secondary dataset, consisting of the metrics that we require to measure community health on the dashboard discussed in D3.4. The most critical fields required in the data provided to us by an LPI concern ‘task runs’, or records of which player has completed which tasks, as follows:

* PlayerID
  * The unique identifier of the player/volunteer carrying out a specific task. This is so we can determine how many contributions each player is making, how these are distributed among the whole community, and the lifetime of a player’s activity. This ID will not include reference to any kind of personal data, to maintain the user’s anonymity.
* TaskID
  * The unique identifier of a specific task, e.g. an image that is to be classified. This allows us to determine which task was completed by the player on this particular task run.
* Timestamp
  * The time at which the task run was completed. This means we can calculate metrics such as the user’s lifetime (time from first task run to their last task run), and the active period of the project and each task.

### Data sharing

Data are shared to the community health analysis dashboard from each individual LPI, as discussed above. In some cases, this is a download of the most recent data dump available from the project (such as from the Crowdcrafting platform for Dark Skies, Lost at Night and Night Cities), or through APIs (such as for the recently released NightKnights game). We share the results of the community health metrics via the dashboard, which will position projects on a two-dimensional matrix and show their path over time to indicate their changing health.
Where possible, we will release project-specific snapshots of these metrics as datasets that can be downloaded for additional study. Once the dashboard is stable, we will aim to release a monthly snapshot of the metrics calculated as part of this analysis. We will ensure that datasets will not include personal data from users, maintaining their anonymity.

### Archiving and preservation (including storage and backup)

The data for the community health analysis is currently hosted at the University of Southampton in the UK, where it will be backed up to allow recovery as necessary. The computed metrics data is stored in a database, and may therefore be moved easily should we deem that it requires hosting elsewhere during the project. We will ensure that the data will remain available throughout the project duration (until December 2018). We will archive the data regarding the computed community health metrics, which will then be stored and preserved on the STARS4ALL foundation servers. Furthermore, monthly datasets will be accessible from our data portal and our Zenodo community.

## Games - NightKnights (Initial Version)

### Data set description

Night Knights ( _https://www.nightknights.eu/_ ) is a Game With A Purpose (GWAP) aimed at data linking; more specifically, by playing the game, players help in classifying images taken from the International Space Station (ISS) with respect to a given taxonomy (hence, links are created between images and categories). In short, all actions by all players are weighted and aggregated by the game, producing a “crowdsourced” classification of ISS pictures that can be used for subsequent analysis of the light pollution phenomenon. Details on the Night Knights application and its internal functioning are given in deliverables D4.4, D4.6 and D4.10. The data set consists of the output data of the Night Knights game.
The data are stored in a MySQL database securely accessible by means of a dedicated API, as described in the following.

### Standards and metadata

The format of the dataset was defined in a way that facilitates its subsequent reuse, i.e. the game output contains all information needed for further processing and evaluation of the data. Night Knights data give information about three main themes:

* Classification results (classified photo, assigned classification, number of players needed to come to a classification agreement)
* Players’ actions (“log” of all classification actions done by anonymized game users)
* Game evaluation data (including KPIs like number of tasks available/started/completed, number of players, total played time, throughput, average life play, etc.)

No personal information about game players is given, in order to respect user privacy, in line with the application privacy policy available here: _https://www.nightknights.eu/#/privacy_ . We are also evaluating the possibility of making the classification results (first bullet point of the above list) available in RDF format according to the Human Computation Ontology ( _http://swa.cefriel.it/ontologies/hc#_ ), which preserves the provenance of the collected information. In this case, each “consolidated information” consists of an RDF triple (with subject=ISS photo, object=aggregated classification and predicate=link) enriched with metadata about the consolidation process. Finally, it is worth noting that the ISS images are provided by NASA’s Gateway to Astronaut Photography of Earth (courtesy of the Earth Science and Remote Sensing Unit, NASA Johnson Space Center, _https://eol.jsc.nasa.gov/_ ), free of any copyright restrictions. The Night Knights dataset contains links to the original photo location.

### Data sharing

Data sharing is implemented through a secure Web API exposed on the same server where the game is running.
The full documentation to access the API is written according to the API Blueprint format and available online at _http://docs.crowdtaskmanagement.apiary.io_ . All API responses are described, including their output JSON format, which contains the specified information. The actual URL to access the API is not public, given the access restrictions explained below. Access is regulated through token-based authentication. A token can be obtained at the administrator authentication endpoint by providing a username and password; once obtained, the token can be used to access the API through the Authorization header field with the ‘Bearer’ scheme. Tokens expire after 60 minutes; the administrator credentials can be requested by writing to the following e-mail address: _[email protected]_ .

Data access is currently restricted to project partners, also to prevent potential malicious use during the online competition based on the use of the Night Knights game. Decisions on actual access by third parties will be evaluated by STARS4ALL partners on a case-by-case basis. Nevertheless, datasets with the solved tasks will be uploaded periodically to our data portal and to Zenodo. Currently the data are used by the University of Southampton, in order to analyse metrics around community health, so that the engagement levels with NightKnights may be compared with other LPIs, allowing us to notice when intervention is required to re-engage with users. Details about the community health dashboard are available in D3.4, and the dataset used in this is discussed in the ‘ _Community Health data model_ ’. The photos classified as “city”, “stars” and “black” will also be used to improve research on light pollution, in order to extend the coverage of the analysis of the phenomenon. The “city” photos can be used to map the light pollution effect, and the “stars” and “black” photos are used for calibration in order to measure that effect.
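The token-based access described above can be sketched in a few lines. Note that the base URL and the endpoint path below are placeholders (the real API URL is not public), and the token value is invented; only the Bearer scheme and the 60-minute expiry come from the text.

```python
import urllib.request

API_BASE = "https://example.org/api"  # placeholder: the real URL is not public

def authorized_request(path, token):
    """Build a request carrying the access token in the Authorization
    header with the 'Bearer' scheme; tokens expire after 60 minutes
    and must then be obtained again from the authentication endpoint."""
    req = urllib.request.Request(API_BASE + path)
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = authorized_request("/tasks", "abc123")
print(req.get_header("Authorization"))  # → Bearer abc123
```

Sending the request (e.g. with `urllib.request.urlopen`) would return the JSON responses documented in the API Blueprint linked above.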
### Archiving and preservation (including storage and backup)

Universidad Politécnica de Madrid is responsible for backup and recovery of the dataset. Night Knights data are stored in a MySQL database hosted in an Aurora DB instance (Amazon Web Services). The instance class is db.t2.medium (2 virtual CPUs and 4GB of RAM). The dataset will remain available and accessible through the aforementioned API for as long as the game is available online, which means at least until the end of the STARS4ALL project (December 2018). As previously mentioned, datasets will be preserved in our data portal and our Zenodo community. After the end of the project, the STARS4ALL foundation will be responsible for the maintenance of the data servers.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1081_MuMMER_688147.md
_transfers/adequacy/index_en.htm_ ) in accordance with Article 25 of Directive 95/46/EC, and so the transfer may take place. We will ask for an ethical opinion from relevant institutional ethical committees, e.g. the Ethical committee of social sciences of the Tampere area (Tampereen alueen ihmistieteiden eettinen toimikunta, _http://www.uta.fi/tutkimus/etiikka/arviointitmk/kokoonpano.html_ , in Finnish only), to ethically evaluate the planned research or relevant parts of it. VTT is represented in the Ethical committee.

# 2.3 Storage and access provision

All data created and collected during MuMMER will be stored internally on a private gitlab server hosted at Softbank Robotics, to which only project partners will have access. All data in gitlab will be private by default; where appropriately anonymized and processed data is to be made openly available, we will make use of the University of Glasgow’s Enlighten Research Data repository at _http://researchdata.gla.ac.uk/_ .

# 2.4 File formats

Wherever possible, we will use open and/or archival formats for all MuMMER data. This includes TIFF for images, FLAC for audio files, MPEG-4 for video, RTF for text, and CSV for spreadsheet data.

# 3 Description of MuMMER Datasets

## 3.1 Dataset Naming Conventions

With regard to the naming convention for MuMMER datasets, each name will be created as follows:

1. Dataset number, consisting of the prefix “DS” followed by a unique identifier number, e.g. DS1
2. Short name of the partner who will be managing the dataset, e.g. VTT
3. Short title of the dataset summarising the data contained within it, e.g. Consumer Research

E.g. “DS1.VTT.Consumer Research”

## 3.2 Summary of Datasets

The following table provides the name and a short description of each dataset. In total we envisage three high-level datasets to be collated during the MuMMER project, encompassing a wide variety of data files. <table> <tr> <th> **Name** </th> <th> **Description** </th> </tr> <tr> <td> DS1.VTT. 
Consumer Research </td> <td> All data collected as part of the co-design activities carried out by VTT throughout the project. </td> </tr> <tr> <td> DS2.Partner. Interaction with Robot </td> <td> All data collected during interactions with the MuMMER robot, at all partner sites. </td> </tr> <tr> <td> DS3.Partner. Software Development </td> <td> All data collected for the purposes of developing and training the software models included in the MuMMER system. </td> </tr> </table> In the remainder of this document, we provide an initial DMP for each of these datasets. Note that the canonical versions of the DMPs are stored in DMPonline at _https://dmponline.dcc.ac.uk/_ ; please contact the MuMMER administrator to be given access to the online DMPs.

# 4 MuMMER Datasets

## 4.1 Consumer Research DMP

The canonical version of this DMP is stored in DMPonline ( _https://dmponline.dcc.ac.uk/_ ). Please contact the MuMMER administrator if you require access to the online DMP.

### 4.1.1 Data set description

WP1 (Use scenarios, acceptance and success metrics) will be led by VTT. The objective of this WP is to ensure that the MuMMER robot and its implemented and foreseen future applications will be user-driven in terms of Human-Robot Interaction (HRI), socially and ethically accepted, and interesting to commercial end users. This goal will be achieved through an intensively applied co-design approach that engages consumer users and other relevant stakeholders in the technical development throughout the design. Consumers will be engaged using several forms of market research including demonstrations, workshops, discussion events, interviews and surveys. The co-design activities will produce use scenarios to guide the development, increase user acceptance toward robotic applications in consumer markets and develop success metrics for human-interactive mobile robots. This DMP aims to address the storage of MuMMER consumer research data in line with H2020 guidelines. 
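The dataset naming convention of Section 3.1 can be expressed as a tiny helper. This is an illustrative sketch, not part of the MuMMER tooling; only the `DS<number>.<partner>.<title>` pattern comes from the text above.

```python
def dataset_name(number: int, partner: str, title: str) -> str:
    """Compose a MuMMER dataset name per Section 3.1: the prefix 'DS'
    plus a unique identifier number, the partner short name, and a
    short title, joined by dots. (Hypothetical helper function.)"""
    return f"DS{number}.{partner}.{title}"
```

For example, `dataset_name(1, "VTT", "Consumer Research")` yields `"DS1.VTT.Consumer Research"`, matching the example in the text.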
### Origin of data

Data collected by VTT at the following events:

* Demonstrations
* Workshops with consumers
* Discussion events
* Interviews

In addition, VTT will collect data from the following:

* Surveys/questionnaires
* Consent forms

### Nature and scale of data

* Demonstrations – photos and video recordings in Finnish. A summary of results in English will be shared with partners
* Workshops with consumers – video recordings in the local language. A summary of results in English will be shared with partners
* Interviews – video/audio recordings in Finnish. A summary of results in English will be shared with partners
* Surveys and questionnaires – Internet questionnaires as well as paper questionnaires in Word format or via digital feedback screens
* Consent forms – paper questionnaires in RTF format

### To whom the dataset could be useful

Raw data in the local language will be assessed by VTT, and a summary of results (in English) will be shared with the consortium. Some pictures and videos may be used for dissemination purposes.

### Related scientific publications

The summary data derived from these studies will be used as the basis for scientific publications. Pictures may also be used as part of scientific publications, where appropriate consent has been obtained.

### 4.1.2 Standards and metadata

The following metadata will be recorded regarding the consumer research data:

* Demonstrations – metadata required: demonstration host, group size, location, date and time, duration
* Workshops with consumers – metadata required: workshop host, workshop attendees, location, date and time, duration
* Interviews – metadata required: interviewer, interviewee, location, date and time, duration
* Surveys/questionnaires – metadata required: location, date and time
* Photos – saved in TIFF format with location, description and date in the title

### 4.1.3 Data sharing

Data will be shared internally with project partners using the MuMMER gitlab server. 
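The per-event metadata fields listed in Section 4.1.2 could be captured as simple structured records, for example for interviews. The field names mirror the list above; the record structure, example values and identifiers are illustrative assumptions, not project conventions.

```python
from dataclasses import dataclass, asdict


@dataclass
class InterviewMetadata:
    # Fields mirror the interview metadata required by Section 4.1.2;
    # the record layout itself is an illustrative assumption.
    interviewer: str
    interviewee: str
    location: str
    date_time: str        # e.g. an ISO 8601 date and time
    duration_minutes: int


record = InterviewMetadata(
    interviewer="VTT researcher",      # hypothetical values
    interviewee="participant-01",
    location="Tampere",
    date_time="2017-03-01T10:00",
    duration_minutes=45,
)
row = asdict(record)  # a flat dict, ready for CSV-style export
```

Keeping the metadata in a flat record like this makes it straightforward to export to CSV, one of the project's chosen archival formats.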
Where appropriate, reusable data such as log files or (where permission has been obtained) video data will be shared using the University of Glasgow's Enlighten Research Data server. ### 4.1.4 Archiving and preservation (including storage and backup) During the course of the MuMMER project, all data will be stored on the MuMMER project gitlab server, which is backed up regularly. Public data will be released through the University of Glasgow's Enlighten Research Data server, where it will persist after the end of the project. ## 4.2 Human-Robot Interaction DMP The canonical version of this DMP is stored in DMPonline ( _https://dmponline.dcc.ac.uk/_ ). Please contact the MuMMER administrator if you require access to the online DMP. ### 4.2.1 Data set description This data set will encompass all system logs, video recordings, and user questionnaire responses arising from users interacting with the MuMMER robot, both in lab settings and in the public deployment locations (including but not necessarily limited to Ideapark). It will also include any after-the-fact annotations of the video data that is gathered. Within the project, this data will be useful for analysing the success of all robot deployments throughout the project: this information will then be used to inform the development and refinement of the robot scenarios, as well as to help the developers to enhance the system for future deployments. The data will also be used as the basis for scientific publications describing the robot deployments. Outside of the MuMMER project, this data could also be useful to guide the work of other developers of similar public-space robot systems. 
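A minimal sketch of the kind of timestamped, synchronisable log entry this dataset would contain. The JSON layout and field names are assumptions for illustration — the project only specifies that entries are timestamped and carry a synchronisation signal so logs can be matched with video data.

```python
import json
import time


def log_entry(component: str, event: str, sync_signal: str) -> str:
    # One timestamped log line per system event; the sync signal allows
    # the entry to be matched up later with the video recordings.
    return json.dumps({
        "timestamp": time.time(),
        "component": component,
        "event": event,
        "sync": sync_signal,
    })


# Hypothetical component and session names, for illustration only.
entry = json.loads(log_entry("dialogue_manager", "state_update", "session-042"))
```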
**4.2.2 Standards and metadata** System logs: * Timestamped entries for every system event, internal state update, and message exchanged among the system components * Every log includes a synchronisation signal to allow it to be matched up later with video data Videos: * High-quality HD videos, compressed with a useful codec such as H.264 * If multiple videos are included of a session, they will be synchronised with each other using a synchronisation signal Questionnaires: * Every questionnaire annotated to indicate the date, time, and location of the session it relates to, in order to allow questionnaire responses to be correlated with videos and/or logs Annotations: * Any annotated data will be stored alongside the raw data. ### 4.2.3 Data sharing Data will be shared internally with project partners using the MuMMER gitlab server. Where appropriate, reusable data such as log files or (where permission has been obtained) video data will be shared using the University of Glasgow's Enlighten Research Data server. ### 4.2.4 Archiving and preservation (including storage and backup) During the course of the MuMMER project, all data will be stored on the MuMMER project gitlab server, which is backed up regularly. Public data will be released through the University of Glasgow's Enlighten Research Data server, where it will persist after the end of the project. ## 4.3 Software Development Data DMP The canonical version of this DMP is stored in DMPonline ( _https://dmponline.dcc.ac.uk/_ ). Please contact the MuMMER administrator if you require access to the online DMP. ### 4.3.1 Data set description This data set will encompass all audiovisual recordings specifically designed to help in the development of the technical components of MuMMER. 
Note that this dataset is distinct from the Human-Robot Interaction dataset: while the data in that set is related to interactions with the deployed robot system, the data in this set is specifically gathered and designed to help in developing and training the software components of the system. Within the project, this data will be useful for ensuring that the components of the robot system function appropriately in the target environment: for example, this will include recordings made with the robot's own sensors of users interacting in the target deployment locations within Ideapark. The data will also be used as the basis for scientific publications describing the technical development. Outside of the MuMMER project, this data could also be useful to guide the work of other developers of similar technical components, both as training and as test data. **4.3.2 Standards and metadata** Videos: * High-quality HD videos, compressed with a useful codec such as H.264 * If multiple videos are included of a session, they will be synchronised with each other using a synchronisation signal ### 4.3.3 Data sharing Data will be shared internally with project partners using the MuMMER gitlab server. Where appropriate, reusable data such as log files or (where permission has been obtained) video data will be shared using the University of Glasgow's Enlighten Research Data server. ### 4.3.4 Archiving and preservation (including storage and backup) During the course of the MuMMER project, all data will be stored on the MuMMER project gitlab server, which is backed up regularly. Public data will be released through the University of Glasgow's Enlighten Research Data server, where it will persist after the end of the project.
1085_SPARK_688417.md
1\. Executive Summary
2\. Introduction
  2.1. Scope of the activities and of the deliverable
3\. The Data in SPARK
4\. Focus on each WP
  4.1. WP1: Characterization of users’ needs and expectations
  4.2. WP2: Development of SPARK modules
  4.3. WP3: Development and tests of SPARK platform
  4.4. WP4: Test and validation in relevant environment
  4.5. WP5: Validation and demonstrations in real operational environment
5\. Archiving and Storage of the Data
  5.1. Public data
  5.2. Private data
  5.3. Format of the data to be archived and published in ZENODO
6\. Proposed Policy
7\. Conclusion

1. **EXECUTIVE SUMMARY**

The present deliverable is the result of the initial consultation among the partners concerning the Data Management Plan. The activities started at the beginning of the project, and the issues related to the storage and sharing of the project data (generated, processed and collected) have been carefully analysed. The plan presented in the following gives a general overview for the project as a whole and for each WP. 
The Consortium is aware that, as the project evolves, the data management policy will also have to be updated and/or revised.

2. **INTRODUCTION**

### 2.1. SCOPE OF THE ACTIVITIES AND OF THE DELIVERABLE

This document is an initial overview of the approach the SPARK Consortium will use for the management of data. It is important to highlight here that this approach will be further detailed and improved as the project evolves; two updates of the deliverable are foreseen at M21 and at M36. The deliverable falls under the headings of WP6, and in particular within the context of T6.1. The aim is to manage all the issues related to the availability of the data collected, generated and processed during the SPARK project lifecycle.

# **3\. THE DATA IN SPARK**

SPARK will deal with two main types of data:

* (Raw) Gathered Data;
* Processed/Generated Data.

Moreover, two main perspectives have been considered in order to define a plan for the management of data. On the one hand, data can be characterized in terms of sensitivity whenever they involve personal data (of a person or an entity external to the consortium – e.g. an End User’s customer participating in a creative design session). On the other hand, data can also be classified according to the purpose they serve in the project activities. These two complementary classifications are not mutually exclusive and provide additional detail on how to deal with the data generated along the project. This organization allows defining a data management policy that appropriately balances the need to keep confidential what is critical for the development of the SPARK platform against the publication of data in adequate repositories (e.g. it supports decisions concerning the embargo duration before making data publicly available). 
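The two classifications just described (sensitivity and project strategy) ultimately determine whether data are confidential, temporarily confidential, or not confidential. A minimal sketch of such a mapping is given below; the decision rules are an illustrative reading of this section, not the consortium's formal policy.

```python
from enum import Enum


class Sensitivity(Enum):
    THIRD_PARTY = "confidentiality issues beyond the consortium"
    ETHICAL = "ethical issues"
    NONE = "not sensitive"


class Confidentiality(Enum):
    CONFIDENTIAL = "confidential"
    TEMPORARY = "temporarily confidential"
    OPEN = "not confidential"


def classify(sensitivity: Sensitivity, anonymised: bool, nda_expired: bool) -> Confidentiality:
    # Illustrative mapping: non-sensitive data are not confidential;
    # sensitive data remain temporarily confidential until the NDAs
    # expire (3rd-party rights) or anonymisation is done (ethical issues).
    if sensitivity is Sensitivity.NONE:
        return Confidentiality.OPEN
    if sensitivity is Sensitivity.THIRD_PARTY:
        return Confidentiality.OPEN if nda_expired else Confidentiality.TEMPORARY
    return Confidentiality.OPEN if anonymised else Confidentiality.TEMPORARY
```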
The two above-mentioned classes can be detailed as:

* **Sensitivity**
  * data that are sensitive because of confidentiality issues that go beyond the consortium (because of 3rd parties’ rights);
  * data that are sensitive because of ethical issues;
  * data that are not sensitive.
* **Project strategy**
  * data that are relevant for the business/market exploitation of the SPARK platform;
  * data that are relevant for the scientific objectives of the project. In particular:
    * Analysis of the dynamics of co-creative processes of teams dealing with digital and physical prototypes;
    * Development of the SAR-based responsive ICT platform, i.e. the SPARK platform;
    * Study and analysis of how and to what extent SAR technology can stimulate and enhance design creativity through a comparison against pre-defined metrics in real operational design environments.

In the following, a matrix is provided to show how the WP leader will have to cluster the different data gathered or processed in the relevant WP, once agreed by the consortium.

Figure 1. The three dimensions to determine the management of data (sensitivity: confidentiality issues beyond the consortium / ethical issues / non-sensitive; elaboration: recorded / processed; strategy)

In general, whatever the specific classification of data according to the above categories, in the end they can be classified as:

* Confidential;
* Temporarily confidential (pending the expiration of the NDAs or anonymization);
* Not confidential.

A suggested policy for the dissemination plan has been initially defined in this document for each outcome of the WP activities. This policy suggests how the publication of the data will be managed during the project. Other descriptors have finally been identified to characterise the data from the technical point of view. 
These descriptors are:

* Format: type and corresponding extension that identify the file containing the data
* Medium of data: physical / virtual
* Projected volume: hypothesis made on the number of each data item
* Data reading: the tool used to read the data in a specific format
* Metadata: information that further identifies the data and that can be used to manage (store and search) the data within a database

Table 1 provides examples of the possible data readings, while Table 2 provides the structure of the metadata (Id and Format) and some application examples related to the files used so far. Data reading and metadata should be the same for all work packages, since they are independent of the origin of the data.

Table 1: Examples of possible data readings <table> <tr> <th> </th> <th> **DATA READING** </th> <th> </th> </tr> <tr> <td> **FORMAT** </td> <td> </td> <td> **MOST USED TOOL(S)** </td> <td> **SIZE RANGE** </td> </tr> <tr> <td> Text / .doc </td> <td> </td> <td> MS Word, Open Office Writer </td> <td> 0..10 MB </td> </tr> <tr> <td> Spreadsheet / .xls </td> <td> </td> <td> MS Excel, Open Office Calc </td> <td> 0..2 MB </td> </tr> <tr> <td> Recordings </td> <td> </td> <td> Player: VLC </td> <td> 0..N GB </td> </tr> <tr> <td> Translation / .srt </td> <td> </td> <td> Subtitles editor: notepad++, notepad </td> <td> 0..2 MB </td> </tr> </table> Table 2: Structure of the metadata and examples <table> <tr> <th> **DATA FORMAT** </th> <th> </th> <th> **METADATA** </th> <th> </th> </tr> <tr> <th> **ID** </th> <th> **FORMAT** </th> <th> **EXAMPLE** </th> <th> **COMMENT** </th> </tr> <tr> <td> Text / .doc & Spreadsheet / .xls </td> <td> File / Title </td> <td> SPARK_[wkp]_[doc]_[version] </td> <td> SPARK_WKP3_SPARKPlatform- Architecture_v1.2 </td> <td> \- </td> </tr> <tr> <td> Subject </td> <td> Text </td> <td> Spark Platform Architecture </td> <td> \- </td> </tr> <tr> <td> Author </td> <td> [company]_name_surname </td> <td> VISEO_DURAND_Pierre </td> <td> \- </td> 
</tr> <tr> <td> Responsible </td> <td> [company]_name_surname </td> <td> VISEO_MARTIN_Arthur </td> <td> Actor in charge of the document production </td> </tr> <tr> <td> Company </td> <td> Text </td> <td> VISEO </td> <td> \- </td> </tr> <tr> <td> Diffusion </td> <td> yyyy-mm-dd </td> <td> 2016-06-23 </td> <td> Aimed date for data diffusion </td> </tr> <tr> <td> Recordings </td> <td> File </td> <td> SPARK_[wkp]_[doc]_[yyyymm_dd] </td> <td> SPARK_WKP1_Co- creative-design_2016- 07-15 </td> <td> \- </td> </tr> <tr> <td> Translation / .srt </td> <td> File </td> <td> SPARK_[wkp]_[doc]_[yyyymm_dd] </td> <td> \- </td> <td> Same as aimed record </td> </tr> </table>

Finally, Figure 2 shows a diagram that summarises the expected outcomes deriving from all the tasks of the project. The diagram makes it easier to identify and cluster the type of data for each WP.

Figure 2: Summary of the outcomes for each Task (WP1-WP5).

**4\. FOCUS ON EACH WP**

This section proposes the characterisation of the data for each WP. In particular, the following paragraphs report the part of the diagram related to the specific WP, as shown in the corresponding figure, and a table where the data are classified according to the criteria described in the previous section, i.e.:

* Origin of data (type specified)
  * Gathered data
  * Processed / generated data
* Sensitivity
  * Because of 3rd parties’ rights
  * Because of ethical issues
  * Non-sensitive data
* Project strategy
  * Business / Commercialization
  * Scientifically relevant
* Suggested policy
* Description
  * Format
  * Medium of data
  * Projected volume
  * Data reading (see below)
  * Metadata (see below)

## 4.1. WP1: CHARACTERIZATION OF USERS’ NEEDS AND EXPECTATIONS

Figure 3: Expected outcomes of WP1 (diagram: Tasks T1.1–T1.5 and Deliverable 1.1, defining the WP1 case studies and the metrics for creative sessions)

Table 3: Type of data and classification for WP1 <table> <tr> <th> </th> <th> **Origin of data (Type specified)** </th> <th> **Sensitivity** </th> <th> **Project strategy** </th> <th> **Suggested Policy** </th> <th> **Description** </th> </tr> <tr> <th> **Gathered data** </th> <th> **Processed / Generated data** </th> <th> **Because of 3rd** **parties rights** </th> <th> **Because of** **ethical issues** </th> <th> **Non** **sensitive data** </th> <th> **Business / commercialization** </th> <th> **Scientifically relevant** </th> <th> **Format** </th> <th> **Medium of data** </th> <th> **Projected volume** </th> </tr> <tr> <td> **WP1** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Task 1.1 </td> <td> </td> <td> Case studies </td> <td> x </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Data to be made public according to the indication of responsible partners </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 1.2 </td> <td> Existing metrics </td> <td> </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.2 </td> <td> </td> <td> Co-creative design metrics </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> </table> <table> <tr> <th> Task 1.3 </th> <th> Recordings </th> <th> </th> <th> x </th> <th> x </th> <th> </th> <th> </th> <th> x </th> <th> Data to be made public according to the indication of responsible partners </th> <th> Video files </th> <th> Virtual </th> <th> 1 per case study </th> </tr> <tr> <td> Task 1.3 </td> <td> 
</td> <td> Transcripts/translation </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 1.3 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.3 </td> <td> </td> <td> Analysis of design protocols </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.4 </td> <td> Recordings </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> Data to be made public according to the indication of responsible partners </td> <td> Video files </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 1.4 </td> <td> </td> <td> Transcripts/translation </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 1.4 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.4 </td> <td> </td> <td> Analysis of design protocols </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.5 </td> <td> Interviews </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / 
.doc </td> <td> Virtual </td> <td> 1 per interview </td> </tr> <tr> <td> Task 1.5 </td> <td> </td> <td> Summary of the results </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.6 </td> <td> </td> <td> Combined analysis of needs and expectations of End Users </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first paper (the deliverable D1.2 is, however, public) </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 1.7 </td> <td> </td> <td> SPARK design specification </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> Ready for Open Access being the deliverable D1.3 public </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> </table>

## 4.2. WP2 - DEVELOPMENT OF SPARK MODULES

Figure 4: Expected outcomes of WP2 (diagram: Tasks T2.1–T2.6 and Deliverables 2.1–2.4, from the review of the state of the art for SAR technologies to the validated SPARK modules and their definitive set of features, feeding WP3 (T3.1))

Table 5: Type of data and classification for WP2 <table> <tr> <th> </th> <th> **Origin of data (Type specified)** </th> <th> **Sensitivity** </th> <th> </th> <th> **Project strategy** </th> <th> **Suggested Policy** </th> <th> </th> <th> **Description** </th> <th> </th> </tr> <tr> <th> **Gathered data** </th> <th> **Processed /** **Generated data** </th> <th> **Because of 3rd** **parties rights** </th> <th> **Because of** **ethical issues** </th> <th> **Non** **sensitive data** </th> <th> **Business / commercialization** </th> <th> **Scientifically relevant** </th> <th> **Format** </th> <th> **Medium of data** </th> <th> **Projected volume** </th> </tr> <tr> <td> **WP2** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Task 2.1 </td> <td> Literature review </td> <td> </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> Ready for Open Access being the deliverable D2.1 public </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1..N </td> </tr> <tr> <td> Task 2.1 </td> <td> Hardware specifications </td> <td> </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> Ready for Open Access being the deliverable D2.1 public </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1..N </td> </tr> </table> <table> <tr> <th> Task 2.2 </th> <th> </th> <th> Modules evaluation metrics </th> <th> </th> <th> </th> <th> x </th> <th> </th> <th> x </th> <th> To be published after the first paper related to the tests in T2.5 </th> <th> Spreadsheet / .xls </th> <th> Virtual </th> <th> 1 </th> </tr> <tr> <td> Task 2.3 </td> <td> </td> <td> Hardware selection </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> To be published at the beginning of the T5.3/5.4 </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 2.4 </td> <td> </td> <td> Modules prototype description </td> <td> </td> <td> 
</td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy benefits of them, whatever comes later </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 2.5 </td> <td> Set of answers to questionnaires </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after anonymization after the first paper </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 2.5 </td> <td> Modules evaluation data </td> <td> </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 2.5 </td> <td> </td> <td> Technological benchmark </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy benefits of them, whatever comes later </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 2.6 </td> <td> </td> <td> SPARK platform features </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> To be published after the first release of the integrated SPARK platform </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> </table> ## 4.3. 
WP3 – DEVELOPMENT AND TESTS OF SPARK PLATFORM Figure 5: Expected outcomes of WP3 Table 6: Type of data and classification for WP3 <table> <tr> <th> </th> <th> **Origin of data (Type specified)** </th> <th> </th> <th> **Sensitivity** </th> <th> </th> <th> **Project strategy** </th> <th> **Suggested Policy** </th> <th> </th> <th> **Description** </th> <th> </th> </tr> <tr> <th> **Gathered data** </th> <th> **Processed /** **Generated data** </th> <th> **Because of 3rd** **parties rights** </th> <th> **Because of** **ethical issues** </th> <th> **Non** **sensitive data** </th> <th> **Business / commercialization** </th> <th> **Scientifically relevant** </th> <th> **Format** </th> <th> **Medium of data** </th> <th> **Projected volume** </th> </tr> <tr> <td> **WP3** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Task 3.1 </td> <td> </td> <td> SPARK platform architecture </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> To be published after the release of the Deliverable D3.1 </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 3.2 </td> <td> </td> <td> Description of SPARK platform versions </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the end of the project </td> <td> Source code </td> <td> Virtual </td> <td> 1..N </td> </tr> <tr> <td> Task 3.3 </td> <td> Set of answers to questionnaires </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 3.3 </td> <td> SPARK platform evaluation data </td> <td> </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc and / or Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> </table> ## 4.4. 
WP4 – TEST AND VALIDATION IN RELEVANT ENVIRONMENT

Figure 6: Expected outcomes of WP4 (diagram: Tasks T4.1–T4.5; Deliverable 4.1 defines the WP4 case studies and the experimental protocol; design sessions are recorded with control groups and a test group, with ICT tools and with the SPARK platform; the analysed data from T4.3/T4.4 feed Deliverable 4.2)

Table 7: Type of data and classification for WP4 <table> <tr> <th> </th> <th> **Origin of data (Type specified)** </th> <th> </th> <th> **Sensitivity** </th> <th> </th> <th> **Project strategy** </th> <th> **Suggested Policy** </th> <th> </th> <th> **Description** </th> <th> </th> </tr> <tr> <th> **Gathered data** </th> <th> **Processed /** **Generated data** </th> <th> **Because of 3rd** **parties rights** </th> <th> **Because of ethical issues** </th> <th> **Non** **sensitive data** </th> <th> **Business / commercialization** </th> <th> **Scientifically relevant** </th> <th> **Format** </th> <th> **Medium of data** </th> <th> **Projected volume** </th> </tr> <tr> <td> **WP4** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Task 4.1 </td> <td> </td> <td> Case studies </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> To be published after the end of the embargo period defined in the NDAs </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 4.2 </td> <td> </td> <td> Testing protocol </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> </table> <table> <tr> <th> Task 4.3 </th> <th> Recordings 
</th> <th> </th> <th> x </th> <th> x </th> <th> </th> <th> </th> <th> x </th> <th> Data to be made public according to the indication of responsible partners </th> <th> Video files </th> <th> Virtual </th> <th> 1 per case study </th> </tr> <tr> <td> Task 4.3 </td> <td> </td> <td> Transcripts </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 4.3 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 4.3 </td> <td> </td> <td> Analysis of design protocols </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 4.4 </td> <td> Recordings </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> Data to be made public according to the indication of responsible partners </td> <td> Video files </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 4.4 </td> <td> </td> <td> Transcripts </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 4.4 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 4.4 </td> <td> </td> <td> Analysis of design protocols </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be 
published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> </table> <table> <tr> <th> Task 4.4 </th> <th> </th> <th> SAR module performances of the SPARK platform </th> <th> </th> <th> </th> <th> x </th> <th> x </th> <th> x </th> <th> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </th> <th> Spreadsheet / .xls </th> <th> Virtual </th> <th> 1 </th> </tr> <tr> <td> Task 4.5 </td> <td> </td> <td> Information MGMT system performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 4.5 </td> <td> </td> <td> Comparison of outcomes of design sessions run with and without the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> </table> ### 4.5.
WP5 – VALIDATION AND DEMONSTRATIONS IN REAL OPERATIONAL ENVIRONMENT Figure 8: Expected outcomes of WP5 Table 9: Type of data and classification for WP5 <table> <tr> <th> </th> <th> **Origin of data (Type specified)** </th> <th> </th> <th> **Sensitivity** </th> <th> **Project strategy** </th> <th> **Suggested Policy** </th> <th> </th> <th> **Description** </th> <th> </th> </tr> <tr> <th> **Gathered data** </th> <th> **Processed /** **Generated data** </th> <th> **Because of 3rd** **parties rights** </th> <th> **Because of ethical issues** </th> <th> **Non** **sensitive data** </th> <th> **Business / commercialization** </th> <th> **Scientifically relevant** </th> <th> **Format** </th> <th> **Medium of data** </th> <th> **Projected volume** </th> </tr> <tr> <td> **WP5** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Task 5.1 </td> <td> Recordings </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the end of the embargo period defined in the NDAs </td> <td> Video files </td> <td> Virtual </td> <td> 1 per case study </td> </tr> </table> <table> <tr> <th> Task 5.1 </th> <th> </th> <th> Transcripts </th> <th> x </th> <th> x </th> <th> </th> <th> </th> <th> x </th> <th> To be published open access after anonymization, after the first paper </th> <th> Text / .doc / .srt </th> <th> Virtual </th> <th> 1 per case study </th> </tr> <tr> <td> Task 5.1 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.1 </td> <td> </td> <td> Analysis of design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.1 </td> <td> </td> <td> SAR module performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.1 </td> <td> </td> <td> Information MGMT system performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.2 </td> <td> Recordings </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Video files </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 5.2 </td> <td> </td> <td> Transcripts </td> <td> x </td> <td> x </td> <td> </td> <td> <td>
</td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 5.2 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> </table> <table> <tr> <th> Task 5.2 </th> <th> </th> <th> Analysis of design protocols </th> <th> </th> <th> </th> <th> x </th> <th> </th> <th> x </th> <th> To be published after the first paper </th> <th> Text / .doc </th> <th> Virtual </th> <th> 1 </th> </tr> <tr> <td> Task 5.2 </td> <td> </td> <td> SAR module performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.2 </td> <td> </td> <td> Information MGMT system performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.3 </td> <td> Recordings </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> Data to be made public according to the indication of responsible partners </td> <td> Video files </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 5.3 </td> <td> </td> <td> Transcripts </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper has been published </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td>
Task 5.3 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.3 </td> <td> </td> <td> Analysis of design protocols </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.3 </td> <td> </td> <td> SAR module performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> </table> <table> <tr> <th> Task 5.3 </th> <th> </th> <th> Information MGMT system performances of the SPARK platform </th> <th> </th> <th> </th> <th> x </th> <th> x </th> <th> x </th> <th> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </th> <th> Spreadsheet / .xls </th> <th> Virtual </th> <th> 1 </th> </tr> <tr> <td> Task 5.4 </td> <td> Recordings </td> <td> </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> Data to be made public according to the indication of responsible partners </td> <td> Video files </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 5.4 </td> <td> </td> <td> Transcripts </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published open access after anonymization, after the first paper </td> <td> Text / .doc / .srt </td> <td> Virtual </td> <td> 1 per case study </td> </tr> <tr> <td> Task 5.4 </td> <td> </td> <td> Coded design protocols </td> <td> x </td> <td> x </td> <td> </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td>
<td> 1 </td> </tr> <tr> <td> Task 5.4 </td> <td> </td> <td> Analysis of design protocols </td> <td> </td> <td> </td> <td> x </td> <td> </td> <td> x </td> <td> To be published after the first paper </td> <td> Text / .doc </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.4 </td> <td> </td> <td> SAR module performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> <tr> <td> Task 5.4 </td> <td> </td> <td> Information MGMT system performances of the SPARK platform </td> <td> </td> <td> </td> <td> x </td> <td> x </td> <td> x </td> <td> To be published after the first publication or after the exploitation strategy has benefited from them, whichever comes later </td> <td> Spreadsheet / .xls </td> <td> Virtual </td> <td> 1 </td> </tr> </table> **5\. ARCHIVING AND STORAGE OF THE DATA** ### 5.1. PUBLIC DATA The public data will be made available through the ZENODO platform, which will ensure the publication of such data also on OPENAIRE (http://zenodo.org/about). ### 5.2. PRIVATE DATA All the data and documents gathered, generated and processed in the project will be archived on a web repository managed by Viseo: CODENDI, which will ensure the secure and safe storage of the data. Codendi is an ALM (Application Lifecycle Management) platform that permits fine tuning of access rights. However, video recordings of the design sessions are likely to produce very large files (several TBs per session) that can hardly be transferred over a standard Internet connection. Therefore, all the original video recordings will be stored on appropriate data storage devices, and at least two copies of each file will be kept in at least two different locations (at PoliMI and at GINP) in order to reduce the risk of losing data.
All the devices shall be kept by the involved organizations with the same care they dedicate to the storage of sensitive data. All private data will be stored for at least 5 years. ### 5.3. FORMAT OF THE DATA TO BE ARCHIVED AND PUBLISHED IN ZENODO The managed data will comprise recordings of design sessions (with downsized image resolution to reduce file sizes), documents (text files, spreadsheets, presentations), as well as data acquired to assess the technical performance of the system. All these data will be archived using standard file formats, i.e. formats that the most widely used editors, readers and office suites can handle. **6\. PROPOSED POLICY** All the data that are relevant and strategic from a scientific point of view will be made available after the completion of the analysis and the issue of the first scientific publications. The data gathered during the testing activities will first be anonymised, so that the related ethical requirements are respected. Data that are neither confidential nor strategic for business and market purposes will be made available at the latest 3 years after their collection or processing. As the project is not a fixed element, and changes and evolutions can occur during the whole project lifecycle, the Steering Committee can modify the current classification whenever necessary in order to avoid confidentiality problems. The modifications will be recorded in the minutes of the meeting and integrated into the next versions of the DMP. **7\. CONCLUSION** This document is the first version of the Data Management Plan as conceived by the SPARK Consortium; it defines the path that will be followed for the management of the data collected, generated and/or processed during the project lifecycle.
This deliverable is the result of the activities of task 6.1 (WP6), which carefully analysed all the aspects related to the confidentiality of the data and their strategic importance. In addition, ethical issues related to the sensitivity of the data collected during the testing activities have been taken into account. Finally, the modalities for archiving, storing and safeguarding the data have been addressed. All the data that can be made openly accessible will be uploaded via Zenodo. The approach described in this deliverable will be further detailed and improved as the project evolves, specifically at M21 and at M36.
# Executive Summary Horizon 2020 projects can participate in a limited and flexible pilot action on open access to research data. Participating projects must develop a Data Management Plan (DMP) specifying which data will be openly accessible. This deliverable contains that data management plan. As per the _Guidelines on Data Management in Horizon 2020_ (Version 2.1, 15 February 2016) 1 , this document covers: * The handling of research data during & after the project (Sections 2 and 3) * What data will be collected, processed or generated (see Section 4) * What methodology & standards will be applied (Section 2) * Whether data will be shared/made open access & how (Sections 2 and 3.3) * How data will be curated & preserved (Section 3.4) # Introduction One of the primary objectives of the MAMI project is the collection and investigation of data about middlebox manipulation of packets on the Internet. To this end, MAMI stores data in a _Middlebox Observatory_ (or ‘Observatory’ for short). This Observatory is meant primarily for data generated in the MAMI project itself, but may store or cache data from other measurement projects as well. Data being offered by the MAMI data Observatory falls into three categories. **Data Generated by MAMI.** These are data generated by MAMI’s own probes and experiments. For these data sets, MAMI Observatory is the authoritative source. One example of such data is Pathspider data (see Section 4.1). **Data Held by MAMI.** These are data that were not generated by MAMI, but which are held within the MAMI Observatory, for example because they are otherwise not available on demand. For these data, MAMI can also be viewed as an authoritative source. **Data Cached by MAMI.** These are data that are not generated by MAMI, which may be available online, for which MAMI is not the authoritative source, and which are held by the MAMI Observatory only for convenience. 
At the time of writing, it is not at all clear that there will be any data of the second category (held authoritatively by MAMI, but not generated by it). We include this category here for completeness’ sake in case data of that category should indeed exist. It is also not certain that data of the third category (data cached by MAMI) will be used in MAMI except as interesting background data, e.g., to calibrate MAMI’s own measurements. Since MAMI data contains IP addresses and other potentially Personally Identifiable Information (PII), MAMI cannot give out these raw data sets to everyone (see Section 3.3). Instead, MAMI runs a query system on top of the Observatory that aggregates MAMI data into so-called _Observations_ and thereby only returns data that is free of PII (Section 2). Researchers can, however, ask for an agreement with the MAMI consortium that would allow them access to the raw data files. Figure 1: Management Architecture # MAMI Observatory Architecture and Implementation The management architecture is depicted in Figure 1. All access to the data management infrastructure is mediated through a web server, nginx in this case. The data itself is stored in two databases. One stores observations in the format specified by the MAMI project, and is used to drive the publicly-accessible observatory website as well as exploratory analysis. The other database holds the raw data from which these observations are derived. These data might be available in different formats depending on the tool(s) and methodologies used to collect them (see Section 4). Each data set might have its own analysis module to derive observations. The raw measurement data are stored in a Hadoop filesystem (HDFS) node. Observations are stored in a MongoDB NoSQL database.
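As an illustration of the aggregation step that keeps query results free of PII, the following Python sketch truncates host addresses in an observation's path to network prefixes. The field names and the /24 and /48 prefix lengths are assumptions for illustration only, not the Observatory's actual query logic:

```python
from ipaddress import ip_address, ip_network

def scrub_observation(obs: dict) -> dict:
    """Return a copy of an observation with address-level PII removed.

    IPv4 addresses in the path are truncated to their /24 prefix and
    IPv6 addresses to their /48 prefix, so that no single host remains
    identifiable in query results (illustrative policy).
    """
    def truncate(element: str) -> str:
        try:
            addr = ip_address(element)
        except ValueError:
            return element  # AS numbers, pseudonyms, etc. pass through
        plen = 24 if addr.version == 4 else 48
        return str(ip_network(f"{addr}/{plen}", strict=False))

    scrubbed = dict(obs)
    scrubbed["path"] = [truncate(e) for e in obs["path"]]
    return scrubbed

raw = {"time": "2016-04-01T12:34:45Z",
       "path": ["192.0.2.17", "AS64496", "2001:db8::dead:beef"],
       "condition": "ecn.connectivity.works"}
print(scrub_observation(raw)["path"])
# → ['192.0.2.0/24', 'AS64496', '2001:db8::/48']
```

A real deployment would apply such scrubbing inside the query system, so raw HDFS data never reaches the public-facing frontend.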
Access to these data falls into three categories: * **Measurement Campaigns.** These may be people or machines. They are equipped with tokens that allow them to upload raw measurements to the measurement infrastructure. * **Analysts.** These are people who write analysis modules that turn measurements and existing observations into further observations, for exploratory analysis of both measurements and observations. * **General Public.** Members of the general public can access observations through a web frontend. The analysis modules will be written in a variety of languages, with a preference for Python. Whenever new measurements arrive in the measurement database, appropriate analysis modules are triggered that transform measurements into observations; these observations can in turn trigger other analysis modules that derive observations from existing observations. The analysis consoles are implemented using JupyterHub, a multi-user Jupyter server which enables the use of Python facilities for data analysis such as Pandas. The MAMI Observatory is managed by ZHAW; Dr. Stephan Neuhaus is the main contact. ## Software Management We use a software stack that consists exclusively of open-source software for the storage and evaluation of measurements and observations. This stack comprises Linux (Ubuntu 14.04 LTS), HDFS, JupyterHub, MongoDB, Python 3, and nginx. All of this software is also available in previous revisions, from a variety of sources on the Internet. The project prefers open-source releases of its products. Open-source software generated by the project, including the measurement tools described in Section 4 as well as any code developed for the Observatory itself for management and analysis purposes, is made available and archived on GitHub at https://github.com/mami-project. We commit to maintaining access to the software necessary for interacting with data in the Observatory as long as the Observatory is in operation.
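The analysis-module trigger mechanism described above can be sketched as a small dispatcher in Python. The registry, module name, and measurement fields here are illustrative assumptions, not the actual MAMI implementation:

```python
# Minimal sketch of the measurement-to-observation pipeline: analysis
# modules register for a raw-data type and are invoked whenever a new
# measurement of that type arrives.

ANALYSIS_MODULES = {}

def analysis_module(data_type):
    """Decorator registering a function as an analyzer for a data type."""
    def register(func):
        ANALYSIS_MODULES.setdefault(data_type, []).append(func)
        return func
    return register

@analysis_module("pathspider")
def ecn_connectivity(measurement):
    # Derive one observation from a raw A/B flow pair (hypothetical fields).
    return [{"time": measurement["time"],
             "path": [measurement["src"], "*", measurement["dst"]],
             "condition": ("ecn.connectivity.works"
                           if measurement["b_flow_ok"]
                           else "ecn.connectivity.broken")}]

def on_new_measurement(data_type, measurement):
    """Run every registered module; collect the derived observations."""
    observations = []
    for module in ANALYSIS_MODULES.get(data_type, []):
        observations.extend(module(measurement))
    return observations

obs = on_new_measurement("pathspider",
                         {"time": "2016-04-01T12:34:45Z",
                          "src": "192.0.2.17", "dst": "203.0.113.5",
                          "b_flow_ok": True})
```

Derived observations could be fed back through `on_new_measurement` to model the cascading triggers mentioned above.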
Other software that is not directly related to the operation of the Observatory may not be made generally available to the public; it is stored in the project’s repository at gitlab.mami-project.eu, where it is backed up daily, with backups transferred to off-site storage once a week. # Data Management Plan ## Data Set Description The measurement work in the MAMI project is largely about assessing and classifying the types and extent of impairments to path transparency in the Internet by middleboxes. The data generated and stored by MAMI therefore consists of empirical observations of paths that exist in the Internet, and the conditions that exist along with them. A typical observation will measure whether on a certain date a certain condition was true on a certain path; for example, “On 1 April 2016, at 12:34:45 UTC, it was possible to establish a TCP connection from 2001:db8::dead:beef to 2001:db8:abcd::1 on port 443.” Condition information is often derived from observation of packets and packet headers resulting from certain active Internet measurement activity; conditions may also contain additional data about measured parameters associated with the condition (e.g., round-trip time as measured by packets with given properties). More formally, an observation consists of: * one or more **timestamps** at which the observation is considered valid, either because it was directly measured or derived from a direct measurement taken at these times; * a description of the **path** for which the observation is considered valid, consisting of a sequence of one or more path elements (i.e.
addresses, prefixes, autonomous system numbers, or data-source-generated pseudonyms therefor); * a description of the **condition** observed along the path, as defined by the analysis module generating the observation; * any additional **values** associated with the condition; and * a reference to the **source** of the observation, both the raw data (from which metadata is available) as well as the version of the analysis module that generated it. Observations are either directly collected from vantage points around the Internet, or they are computed from raw measurement data. Data uploaded to MAMI comes from publicly available cloud servers as well as from Internet-connected testbeds. The measurement runs may be simple active measurements such as pings or traceroutes, or more elaborate exchanges of protocol messages. Some data are generated explicitly for MAMI. For example, PathSpider (see Section 4.1) is a tool developed within the project. PathSpider does active A/B testing of connectivity dependency and feature usability of optional transport features (e.g., Explicit Congestion Notification (ECN), and the risk of enabling ECN by default on the client side). In order to determine the parameters of optional transport features, it is necessary to keep track of certain IP and TCP header fields, something that most data sets do not do. PathSpider will in turn be deployed on vantage points including hosts from the Measuring Mobile Broadband Networks in Europe (MONROE) project. MAMI data are useful to all networking researchers interested in path-related issues. For MAMI itself, this is mainly path transparency, but it could also be about connectivity or even about certain protocols. For example, MAMI data might be useful to determine connectivity to and from countries whose governments aim to control or monitor their citizens’ use of the Internet.
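The components listed above could be captured in a simple record type along these lines. This is an illustrative sketch under assumed field types, not the project's actual schema (observations are stored as JSON in MongoDB):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    """One path-condition observation, mirroring the structure above."""
    timestamps: List[str]          # times at which the observation is valid
    path: List[str]                # addresses, prefixes, ASNs, or pseudonyms
    condition: str                 # condition defined by the analysis module
    value: Optional[float] = None  # optional measured parameter (e.g. RTT)
    source: str = ""               # raw-data reference + analysis-module version

obs = Observation(
    timestamps=["2016-04-01T12:34:45Z"],
    path=["2001:db8::dead:beef", "*", "2001:db8:abcd::1"],
    condition="tcp.connectivity.works",
    source="pathspider@abc1234",  # hypothetical source reference
)
```

Keeping the source reference alongside each observation makes reanalysis possible when an analysis module is revised.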
MAMI has not yet generated any data, but MAMI-like data have in the past been used for scientific publications; for example, on using path transparency observations to support protocol engineering [7], on middlebox cooperation [9], or on the Internet-wide deployment of ECN [8]. ## Standards and Metadata MAMI data consist of observations of path conditions, and raw data from which these observations can be derived. Raw data generated by the project or imported into the measurement database must contain at least the following metadata, derived from the metadata available from each data source. * A (low-precision) timestamp at which the measurement data was created * A (low-precision) timestamp at which the measurements were added to the Observatory * Information about the entity (organization, individual, etc.) supplying the data * Information about any licensing terms that may apply to the data * Any URL for automated retrieval / re-retrieval of the data Information about the source and target of active measurements, and timestamps for each part of the measurement, from which the path and timestamp information in the observations is derived, is stored in the data itself. Currently, raw data is stored as JSON [2], CSV [4], and IPFIX [1]; derived observations are stored as JSON. ## Data Sharing As described in Section 1, due to privacy concerns MAMI will not provide general access to raw data sets to external users. These raw data are stored inside MAMI with “copyright MAMI consortium, all rights reserved”. MAMI can license these data for use by other researchers on a case-by-case basis, after these researchers have come to an agreement with the MAMI consortium to access the raw data and not expose any PII in any derived results.
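The required raw-data metadata listed in the Standards and Metadata subsection above can be checked mechanically on upload. The concrete key names in this sketch are assumptions, since the document does not specify them:

```python
# Required metadata for raw data sets, per the list above (assumed keys).
REQUIRED_METADATA = {
    "created",   # low-precision timestamp of data creation
    "added",     # low-precision timestamp of addition to the Observatory
    "supplier",  # organization or individual supplying the data
    "license",   # licensing terms that apply to the data
}
OPTIONAL_METADATA = {"url"}  # URL for automated (re-)retrieval

def validate_metadata(meta: dict) -> list:
    """Return a list of problems; an empty list means the metadata is OK."""
    problems = [f"missing required field: {k}"
                for k in sorted(REQUIRED_METADATA - meta.keys())]
    unknown = meta.keys() - REQUIRED_METADATA - OPTIONAL_METADATA
    problems += [f"unknown field: {k}" for k in sorted(unknown)]
    return problems

meta = {"created": "2016-04-01", "added": "2016-04-02",
        "supplier": "ZHAW", "license": "CC BY 4.0"}
assert validate_metadata(meta) == []
```

Rejecting uploads with missing metadata at ingest time keeps the provenance of every data set in the Observatory traceable.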
The results from the MAMI query interface, which provides public access to the observations stored in the MAMI database, are licensed under the Creative Commons “Attribution 4.0 International” (CC BY 4.0) license (see https://creativecommons.org/licenses/by/4.0/). The MAMI Observatory is open to all data sets where MAMI query results involving such data sets can be shared using CC BY 4.0. However, storing MAMI-generated data in the Observatory always has priority over third-party data, and third-party data may be removed from the Observatory should space become an issue. This is obviously not a problem for cached data sets, but MAMI will even try to find a home for data sets that are not merely cached and that would otherwise be orphaned, on a best-effort basis. Data sets that are uploaded to MAMI, but which are later found not to be compatible with the MAMI data-sharing license, may be removed without notice. ## Archiving and Preservation (Incl. Storage and Backup) At the time of this first version of the DMP, the Observatory does not contain live data. Plans for archiving are thus preliminary and fluid. Only data that are originally generated by MAMI will be archived and curated. At the time of writing, this includes PathSpider data and certain Tracebox or Copycat data sets. For data set storage and backup, ZHAW will back up the HDFS and MongoDB onto external disk drives. Several drives will be used in rotation, and at least one drive will always be stored off-site. After the end of the project, ZHAW will prepare a final, unalterable version of the data. These data will then be made available to researchers on request. Support for the MAMI web site and public-facing repository query tools will be continued as long as funds are available to sustain this long-term curation.
# Data Sources The MAMI Observatory is managed as a single, unitary data set, containing both observations for querying, as well as raw data for analysis and reanalysis from which these observations are derived. In this section, we list the data sources we presently know will provide data to the MAMI Observatory. This list is not complete, and will expand in the future. Whenever measurement data from a new data source is added to the Observatory, it is further necessary to add new analysis modules to generate observations from this data. ## PathSpider PathSpider is a generalized tool for building connectivity and optional transport feature / transport protocol A/B functionality tests. A/B testing differentiates transient connectivity failures from connectivity failures due to the use of a particular transport protocol or feature. It currently supports testing of ECN connectivity and negotiation, but work is presently underway to add support for Multipath TCP, TCP Fast Open, TCP window scale negotiation, Stream Control Transmission Protocol (SCTP), and other protocols and protocol features. PathSpider functions by generating two simultaneous flows from the source to the target, and passively observing these flows at the source to determine the characteristics of the flow. The raw data generated by this observation process are essentially flow data, linking characteristics of the “A” flow (feature enabled) to those of the “B” flow (feature disabled, experimental control). These raw flow data are analyzed by the PathSpider tool itself into MAMI-native observation records before transmission to the Observatory. PathSpider’s output therefore consists of { _time,path,condition,value_ } tuples as in Section 3.1: **timestamps** Observation time of the first packet in the flow from which the condition was derived. **path** Path derived from the source and destination addresses of the measurement.
**condition** Condition observed along the path; for example “ECN negotiation successful”, “ECN negotiation causes connectivity failure”. **values** A value associated with the condition (not yet required by present conditions, for future use). **source reference** Version of PathSpider used (in terms of GitHub tag or commit hash). PathSpider was designed for large-scale testing of millions of targets (e.g. the Alexa top million webservers) from a set of active measurement agents; past measurement campaigns have used DigitalOcean cloud server instances as active measurement agents. However, for future verification of path support for uncommon features (e.g. SCTP), it may be necessary to operate targets with known support for these features, and passive observation of the traffic at the targets can also be used to generate observations. ## Tracebox Tracebox experiments attempt to contact a remote server from a vantage point and identify the used path. Tracebox data sets are in a binary format called warts, native to CAIDA’s Scamper tool, on which it is based. These data are translated into a Tracebox-specific JSON schema for analysis at the Observatory. Each data set contains necessary metadata about the measurement, as follows: **version** The version of the tool used to obtain the data set. **type** The type of data contained in the data set, e.g., tracebox. **userid** The (Unix) user ID under which the tool ran. Often 0. **method** The method used to gather data, e.g., ip4-tcp. **probe** The basic probe format, e.g., ip/tcp/mss(1460)/sackp (i.e., TCP segment, with Selective Acknowledgment and MSS of 1460 bytes). **src** The vantage point’s source address. **dst** The experiment’s destination address. **sport/dport** The experiment’s source and destination TCP ports (if TCP is being used).
**result** The overall result, e.g., success.

**start** The experiment’s start time, containing sec (seconds since the Epoch), usec (microseconds), and ftime (human-readable local time, e.g., 2016-03-23 11:57:10).

**Assorted TCP and IP options** These flags and values are often informative only, such as tcp_seq (the initial TCP sequence number), but may sometimes be important for the protocol, such as tcp_ack.

## Copycat

Copycat is a tool for detecting differential treatment of UDP and TCP traffic over an Internet path between two measurement agents. Copycat generates raw IPFIX [1] flow data using the QoF [6] flow meter, which adds information about TCP loss and latency. By comparing the characteristics of UDP traffic with those of TCP traffic along the same path at equivalent times, differential treatment can be detected. Raw Copycat data contain basic QoF flows as well as TCP performance metrics, i.e., the following IPFIX Information Elements (IEs) as well as the appropriate reverse counterparts [5].
* octetDeltaCount
* packetDeltaCount
* protocolIdentifier
* tcpControlBits
* sourceTransportPort
* sourceIPv4Address
* ingressInterface
* destinationTransportPort
* destinationIPv4Address
* egressInterface
* sourceIPv6Address
* destinationIPv6Address
* minimumTTL
* maximumTTL
* flowEndReason
* flowId
* flowStartMilliseconds
* flowEndMilliseconds
* transportOctetDeltaCount
* transportPacketDeltaCount
* initialTCPFlags (6871 / 14)
* unionTCPFlags (6871 / 15)
* reverseFlowDeltaMilliseconds (6871 / 21)
* reverseInitialTCPFlags (6871 / 16398)
* reverseUnionTCPFlags (6871 / 16399)
* tcpSequenceCount (35566 / 1024)
* tcpRetransmitCount (35566 / 1025)
* minTcpRttMilliseconds (35566 / 1029)
* ectMarkCount (35566 / 1031)
* ceMarkCount (35566 / 1032)
* tcpSequenceLossCount (35566 / 1035)
* tcpLossEventCount (35566 / 1038)
* qofTcpCharacteristics (35566 / 1039)
* tcpRttSampleCount (35566 / 1046)

See https://github.com/britram/qof/wiki and https://iana.org/assignments/ipfix for a full reference for the relevant IE definitions.

## Revelio

Revelio [3] is a tool for detecting IPv4 network address translation on access networks. Revelio produces CSV-formatted data with the following fields:

**boxid** Unique identifier of the device running the Revelio client (assigned based on MAC address).

**revelio type** The Revelio version and the platform used to deploy Revelio.

**timestamp** Time at the start of the measurement.

**local IP** IP address of the device running the Revelio client.

**IGD** IP address of the WAN-facing interface, if the device supports UPnP.

**STUN mapped** The publicly mapped address (the GRA).

**trace packetSize** The packet size of the traceroute probe.

**traceroute result** Output of a traceroute to a fixed target, examining the hops within the access network.
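To make the observation model concrete, the following is a minimal sketch of how an A/B measurement pair could be reduced to a { _time, path, condition, value_ } record of the kind described above. The function and record layout are illustrative assumptions, not PathSpider's actual API; the condition strings follow the examples given in the text.

```python
import time

# Illustrative sketch (not PathSpider's API): derive a condition from an
# A/B measurement pair, where the "A" flow has the feature (here: ECN)
# enabled and the "B" flow is the experimental control.

def derive_ecn_condition(a_connected: bool, b_connected: bool) -> str:
    """Compare the 'A' flow (ECN enabled) with the 'B' control flow."""
    if a_connected and b_connected:
        return "ECN negotiation successful"
    if not a_connected and b_connected:
        return "ECN negotiation causes connectivity failure"
    return "connectivity failure"  # transient: both flows failed

def make_observation(src: str, dst: str, a_ok: bool, b_ok: bool) -> dict:
    # {time, path, condition, value} tuple as described in section 3.1
    return {
        "time": int(time.time()),   # first packet of the flow
        "path": [src, "*", dst],    # derived from source/destination addresses
        "condition": derive_ecn_condition(a_ok, b_ok),
        "value": None,              # not required by present conditions
    }

obs = make_observation("192.0.2.1", "203.0.113.7", a_ok=False, b_ok=True)
print(obs["condition"])
```

This separation of raw A/B flow data from derived condition records mirrors the split between raw data kept for reanalysis and observations kept for querying.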
# Executive Summary

_«The VICINITY project will build and demonstrate a bottom-up ecosystem of decentralised interoperability of IoT infrastructures called virtual neighbourhood, where users can share the access to their smart objects without losing the control over them.»_

The present document is deliverable “D9.4 – Data Management Plan” of the VICINITY project (Grant Agreement No. 688467), funded by the European Commission’s Directorate-General for Research and Innovation (DG RTD) under its Horizon 2020 Research and Innovation Programme (H2020). This is the third version of the project Data Management Plan (DMP) and contains updated information about the datasets generated or collected through the project. This includes information on whether and how data will be exploited or made accessible for verification and re-use, and how it will be curated and preserved.

The purpose of the Data Management Plan is to present the data management policy that has been used by the consortium with regard to all the datasets generated by the project. The datasets referred to in this document were completed during the first stages of the project (completed 15th of January 2019). The document can only reflect the intentions of the project partners toward the final project datasets. This third and final version (D9.4) will be put into effect on 31st December 2019 and follows the H2020 guidelines on Data Management Plans, as stated in Grant Agreement 688467. Changes in this final document are caused by changes in the participating organisations and the datasets from the use cases, by additional legal considerations being presented, and by further information received from Open Call winners. As the project progressed and the partners gained insight into results, the datasets were elaborated on.
The detailed descriptions of all the specific datasets that were collected are described and made available under the relevant Data Management framework. All personal data, or data directly related to the residents, were collected only after the project received a signed informed consent form from the stakeholders participating in the pilots. The DMP will also remain in effect after the completion of the project, as it presents a structure that follows the guidelines. The DMP is not a fixed document and evolved during the lifespan of the project (Figure 1).

**Figure 1: Data Management Plan – deliverables 2016 – 2019**

Note: In order to assist the official project review process by the Commission for the first project period (M1-M18), a preliminary version of the updated DMP of D9.3 was delivered prior to M20 (August 2017) in order to enable a better assessment of the progress of the Data Management. The preliminary version was accepted and laid the groundwork for this third and final version.

# Introduction

The purpose of the Data Management Plan (DMP) deliverable is to provide relevant information concerning data collected and used by the partners of the VICINITY project. The project’s aim was to develop a solution defined as “Interoperability as a Service”. This concept became a part of the VICINITY Open Gateway (Figure 2) and was achieved by implementing a platform for harvesting, converting and sharing data from IoT devices on the service layer of the network.

**Figure 2: Domains and some of the functionalities the DMP covers**

This goal entails the need for good documentation and implementation of descriptors, lookup tables, privacy settings and intelligent conversion of data formats. The strength of having a cloud-based gateway is that it should be relatively simple to upgrade with new specifications and to implement conversion, distribution and privacy strategies.
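The conversion of data formats mentioned above can be illustrated with a minimal sketch: a proprietary CSV export from an IoT device is normalised into JSON. The field names and mapping here are illustrative assumptions, not the gateway's actual schema.

```python
import csv
import io
import json

# Hypothetical proprietary export from a device vendor (semicolon-delimited).
PROPRIETARY_CSV = "dev_id;temp_c;ts\nkitchen-01;21.5;2019-03-01T10:00:00Z\n"

def csv_to_json(raw: str) -> str:
    """Normalise a vendor-specific CSV export into a common JSON shape."""
    reader = csv.DictReader(io.StringIO(raw), delimiter=";")
    readings = [
        {
            "device": row["dev_id"],
            "measurement": "temperature",
            "value": float(row["temp_c"]),
            "timestamp": row["ts"],
        }
        for row in reader
    ]
    return json.dumps(readings)

print(csv_to_json(PROPRIETARY_CSV))
```

In practice the gateway would perform such conversions per vendor format, which is why the document stresses descriptors and lookup tables.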
In particular, the privacy part is considered an important aspect of the project, as VICINITY needs to follow and adhere to strict privacy policies. It will also be necessary to focus on possible ethical issues and access restrictions regarding personal data so that no regulations on sensitive information are violated.

<table> <tr> <th> **Purpose of access restrictions: Right of access, Art. 15:** * Purposes of processing * Affected categories of personal data * Recipients or categories of recipients * Expected storage time * All available information on where the personal data originated * The occurrence of automated decisions </th> </tr> </table>

**Table 1: GDPR on right of access**

The datasets collected belong to four main domains:

1. Smart energy and buildings, as demonstrated by ENERC in Portugal and tested at AAU in Denmark.
2. Mobility, as demonstrated by HITS in Norway and tested at AAU.
3. Smart home, as demonstrated by the Norwegian partners HITS and TINYM and tested at AAU.
4. eHealth, as demonstrated by GNOMON and CERTH in Greece (Figure 3: Example of potential data points in use cases that generate data).

Several standards and guidelines within each of these fields were taken into consideration. A number of different vendors and disciplines are involved, and much of the information that is available only exists in proprietary data formats. For this reason, VICINITY will target IoT units that follow the specifications defined by the oneM2M consortium, the ETSI standardization group and international groups and committees. The DMP has undergone some changes, in particular with regard to agreement concerns when storing and handling data after the completion of VICINITY. This version of the document is based on the knowledge generated through discussions, demonstrations and preparations for deployment at pilot sites.

**Figure 3: Example of potential data points in use cases that generate data.**

# General Principles

## 3.1.
IPR management and security

As a research and innovation action, VICINITY aims at developing an open framework and gateway – demonstrated through a subset of Value-Added Services and the related business models. The project consortium includes partners from the private sector, the public sector and end-users (Table 4). Some partners may have Intellectual Property Rights on their technologies and data. Consequently, the VICINITY consortium will protect that data and crosscheck with the concerned partners before data publication. <table> <tr> <th> **Partner** </th> <th> **Sector** </th> <th> **Domain** </th> <th> **IPR** </th> </tr> <tr> <td> ATOS Spain SA (ATOS) </td> <td> Private </td> <td> </td> <td> </td> </tr> <tr> <td> Aalborg University (AAU) </td> <td> Public </td> <td> </td> <td> </td> </tr> <tr> <td> Bavenir S.R.O. (BVR) </td> <td> Private </td> <td> </td> <td> </td> </tr> <tr> <td> Centre for Research and Technology Hellas </td> <td> Public </td> <td> Health </td> <td> </td> </tr> <tr> <td> Climate Associates Limited (CAL) </td> <td> Private </td> <td> Environmental consultancy </td> <td> </td> </tr> <tr> <td> Enercoutim (ENERC) </td> <td> Private </td> <td> Energy, Buildings O&M services </td> <td> </td> </tr> <tr> <td> Gnomon Informatics S.A. (GNOMON) </td> <td> Private </td> <td> </td> <td> </td> </tr> <tr> <td> Gorenje Gospodinjski Aparati D.D. (GRN) </td> <td> Private </td> <td> White goods and services </td> <td> </td> </tr> <tr> <td> Hafenstrom AS (HITS) </td> <td> Private </td> <td> Mobility </td> <td> </td> </tr> <tr> <td> Hellenic Telecommunications Organization S.A. (OTE) </td> <td> Private </td> <td> </td> <td> </td> </tr> <tr> <td> Intersoft A.S.
(IS) </td> <td> Private </td> <td> </td> <td> X </td> </tr> <tr> <td> Municipality of Pilea-Hortiatis (MPH) </td> <td> Public </td> <td> Health </td> <td> </td> </tr> <tr> <td> Technical University of Kaiserslautern (UNIKL) </td> <td> Public </td> <td> </td> <td> </td> </tr> <tr> <td> Tiny Mesh AS (TINYM) </td> <td> Private </td> <td> Building </td> <td> </td> </tr> <tr> <td> Universidad Politecnica de Madrid (UPM) </td> <td> Public </td> <td> Energy </td> <td> </td> </tr> </table>

**Table 4: The VICINITY consortium includes partners from different sectors with confidential data**

A holistic security approach has been followed in order to protect the pillars of information security (confidentiality, integrity, availability). The security approach consists of a methodical assessment of security risks followed by an analysis of their impact. This analysis is performed on the personal information and data processed by the proposed system, their flows and any risk associated with their processing. Security measures include secure protocols (HTTPS and SSL), login procedures, and protection against bots and other malicious attacks, for example through CAPTCHA technologies. Moreover, the VICINITY pilot sites apply procedures related to data collection, integrity and protection. The data protection and privacy measures for personal information include protective measures against infiltration as well as physical protection of core parts of the systems and access control measures.

## 3.2. Personal Data Protection

The VICINITY architecture does not expose, use or analyse data. However, some activities have involved human participants and utilized Value-Added Services (VAS) that may operate on datasets. The pilots have been conducted in real apartments and cover real use scenarios related to health monitoring, booking, home management, governance, energy consumption and other human activity and behaviour analysis related data gathering purposes.
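One of the security measures listed above, login procedures, can be sketched minimally: passwords are stored only as salted hashes and compared in constant time. This is a simplified illustration under assumed parameters, not the implementation used by the pilot sites.

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

# Illustrative sketch of a login procedure: never store the password itself,
# only a salted PBKDF2 hash; compare candidates in constant time.

ITERATIONS = 100_000  # assumed work factor for this sketch

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))
```

The constant-time comparison avoids leaking information through timing, which complements the transport-level protections (HTTPS/SSL) mentioned above.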
Some of the activities carried out by the project gathered basic personal data (e.g. name, background, contact details, interests, IoT units and assigned actions), even though the project avoided collecting such data unless really necessary for the application. Such data is protected in accordance with the EU's Data Protection Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 “on the protection of individuals with regard to the processing of personal data and on the free movement of such data”. Info windows containing relevant links and descriptions of GDPR Articles and sections as well as best practice are placed throughout this deliverable.

<table> <tr> <th> **General Data Protection Regulation is a common regulation for all EU/EEA countries. GDPR introduced several changes in privacy legislation and new privacy rules on May 25, 2018. It represents a significant tightening of the rules and the possibility of large fines in case of breach.** This also includes more stringent requirements for the documentation of IT systems and security solutions (Art. 28): • Fines up to EUR 20,000,000 or 4% of the company's turnover (Art. 79) • Inadequate data security for the processing of personal data will carry such high financial risk that it must be reported in annual and audit reports </th> </tr> </table>

**Table 2: General Data Protection Regulation description**

This document aims to address issues related to responsibilities, as there are considerations to be made when several partners and external stakeholders are involved. Examples of such issues are what actions are to be undertaken by the project coordinator, responsibilities assigned to partner organisations and the role of the ethics committee in handling data.
This is even more prominent with the number of Open Call winners that have access to the innovation results – but which also have their own legal contacts as well as third-party suppliers. Who can ensure the data is actually gone – and who is responsible if traces can still be found? Additionally, a project such as VICINITY also has providers of value-adding services that are either within or outside the ecosystem that the partners in the project have direct control over. This is just the beginning: who is responsible if personal data that violates the regulations has been stored? Or rather, what happens if personal data is leaked or used incorrectly?

That being said, GDPR does not represent a big issue for most of the partners in VICINITY, as it only applies to data where a person can be directly identified by the stored information. The labs are using test users, and therefore have other concerns to address. It is still a good habit to follow up and have proper data management strategies, regardless of what is being stored. Thus, a reminder will be sent to the partners at the end of the project, requesting partners and Open Call winners to remove or anonymize data in accordance with GDPR.

WP3 and WP4 activities dealing with the implementation and deployment of core components have been performed in Slovakia under the leadership of the local partners (BVR and IS). For this reason, the solution will be reviewed for compliance with Data Protection Act No. 122/2013 approved by the National Council of the Slovak Republic, together with its amendment No. 84/2014, which already reflects the EC directive proposal 2012/0011/COD. WP7 and WP8 activities have been performed in Greece, Portugal and Norway under the leadership of local partners. In the following, the consortium outlines the legislation for the countries involved in the trials:

1.
Greek trial in the Municipality of Pilea-Hortiatis, Thessaloniki: for Greece, legislation includes “Law 2472/1997 (and its amendment by Law 3471/2006) of the Hellenic Parliament”.
   * Regulatory authorities and ethical committees: Hellenic Data Protection Authority, http://www.dpa.gr/

2. Norwegian trials in the Teaterkvarteret healthcare assisted living home in Tromsø and offices in Oslo Sciencepark, Oslo, have to comply with the national legislation “Personal Data Act of 14 April No. 31” relating to the processing of personal data.
   * Each pilot demonstration has to notify the regulatory body Datatilsynet pursuant to section 31 of the Personal Data Act and section 29 of the Personal Health Data Filing System Act.

3. Portuguese trial in the Martim Longo microgrid pilot site in the Algarve region, Portugal. The Portuguese renewable energy legislative base dates back to 1988 and has been upgraded and reviewed multiple times since then. The most important legislative diplomas are: DL 189/88, DL 168/99, DL 312/2001, DL 68/2002, DL 29/2006 and DL 153/2014. The last on the list also refers to one of the most important legislative changes, being the legislative base for broad-based auto-consumption, with the possibility to inject excess energy into the grid under certain conditions.
   * The collection and use of personal data in Portugal are regulated by the following two laws: “Law 41/2004” (and its amendment “Law 46/2012”), and “Law 32/2008”.

Further information on how personal data collection and handling were approached in the VICINITY project has been described in D4.3: “VICINITY Security Services”, D5.2: “VICINITY value-added services implementation framework”, D6.4: “VICINITY security and privacy evaluation report”, and is also addressed in D8.2 – D8.5, which cover pilot results from smart energy, smart buildings, smart parking and eHealth.
All personal data collection efforts of the project partners have been established after giving subjects full details of the experiments to be conducted and obtaining from them a signed informed consent form (see Annex 2: VICINITY consent form template), following the respective guidelines set in VICINITY and as described in section 3.5: Ethics and security.

<table> <tr> <th> **Importance of GDPR:** * All legal companies are given new duties * Everyone should provide good information on how they process personal data * Everyone should consider risk and privacy consequences * Everyone should embed privacy in new solutions * Many companies must establish a privacy ombudsman/Privacy Advisor * The rules also apply to businesses outside Europe * All data processors are given new duties * Everyone has new requirements for non-conformity management * Everyone must be able to fulfil citizens' new rights </th> </tr> </table>

**Table 3: Importance of GDPR as applied in VICINITY**

Besides this, certain guidelines were implemented in order to limit the risk of data leaks:

* Keep anonymised data and personal data of respondents separate;
* Encrypt data if it is deemed necessary by the local researchers;
* Store data in at least two separate locations to avoid loss of data;
* Limit the use of USB flash drives;
* Save digital files in one of the preferred formats (see Annex 1); and
* Label files in a systematically structured way in order to ensure the coherence of the final dataset.

GDPR may be broad in its descriptions. To give some examples:

* All kinds of personal data can be stored, as long as the data cannot be combined and used to identify the person. For example, it is perfectly acceptable to save a date of birth; however, if combined with an address or apartment number, potential issues may arise.
* IP addresses, times and login data that can be correlated with member registers used for statistical purposes also represent a risk.
* Transactions related to a service or product linked to customer profiles are yet another example.
* Other things also come into play: whether the data sets are encrypted/hashed, how the tables are indexed, etc.

To avoid this situation, the DMP is now focusing on "security by design", not "security on demand". The recommended course of action requires a common approach from all EU-funded projects: a brief course which leads up to a written statement indicating that the partner has understood and adheres to GDPR as presented in the course. A more formal description of best practice principles can be found in Table 5: Best practice for use of production data.

<table> <tr> <th> **GDPR - Prominent Rights:** * Easier Access to Own Data (Art. 43) * Data Portability, Right to Access and Freely Transfer Data to Other Service Providers (Art. 18) * Right to Correction (Art. 16) * The Right to “Be Forgotten” (Art. 17) * The right to know if your information is subject to abuse * Service providers: only deal with one supervisor for the entire area (Art. 55-62) * “Data protection by design and by default” (Art. 25) </th> </tr> </table>

**Table 4: GDPR - Prominent Rights**

## 3.3. Participation in the Pilot on Open Research Data

VICINITY participates in the Pilot on Open Research Data launched by the European Commission along with the Horizon 2020 programme. The consortium believes firmly in the concepts of open science and the large potential benefits European innovation and the economy can draw from allowing data to be reused at a larger scale. Therefore, all data produced by the project may be published with open access – though this objective will need to be balanced with the other principles and sensitivities described below.

## 3.4. Production data

The consortium is aware that a number of privacy and data protection issues could be raised by the activities (use case demonstration and evaluation in WP7 and WP8) to be performed in the scope of the project.
The project involves the carrying out of data collection in all pilot applications on the virtual neighbourhood. For this reason, human participants were involved in certain aspects of the system development by contributing real-life data. During the development life cycle process, it was necessary to operate on datasets. None of the datasets were based on production data; a few were generated (synthetic), but these started out from a test environment and did not contain any personal data that will be used for future reference.

**Figure 5: The VICINITY architecture is decentralised by design**

The VICINITY architecture is decentralised by design (Figure 5). Production data were planned to be used for testing purposes, but this was deemed unnecessary, as relevant datasets were created for the project. Certain functionality, like the discovery function and the related search criteria, raises the need for a proper implementation of the Things Ecosystem Description (TED) – which describes the IoT assets that exist in the same environment. The public will have access to the VICINITY ontology alongside the VICINITY discovery function at the conclusion of the project. However, all data generated through the test phase and development process will be removed.

<table> <tr> <th> **BEST PRACTICE FOR PRODUCTION DATA ADOPTED BY VICINITY** The consortium will follow what is considered best practice for handling both copies of production data and live data. * **Data Obfuscation and security safeguards** Use obfuscation methods to remove/protect data or reduce the risk of personal information being harvested in a data breach, and encrypt data where appropriate. * **Data minimization** Minimize the size of datasets and the number of fields used. * **Physical/environmental protection and access control** Restrict and secure the environment where the data is used and stored, and limit the ability to remove live data in either physical or electronic format from the environment.
Also limit access to the data to authorized users with business needs who have received appropriate data protection training. * **Retention limits and data removal** Limit the time period for use of the data and dispose of live data at the end of the use period. Destroy physical and electronic live data used for training, testing, or research at the conclusion of the project. * **Use Limits** Limit, through controls and education, the likelihood that live data, whose integrity is not reliable, is re-introduced into production systems or transferred to others beyond its intended purpose. * **Watermarking** Include warning information on live data where possible to ensure users do not assume it is dummy data. This applies to all pilot sites where time-critical actions have to be taken, and where forecast analysis needs to be based on accurate data. * **Legal Controls** Implement Confidentiality and Non-Disclosure Agreements if applicable. This applies to all operators responsible for living labs that address eHealth and assisted living. * **Responsibility for accountability, training and awareness** Ensure that identified personnel (by role) are assigned responsibility for compliance with any conditions of the approval for the use of live data. The personnel responsible for the technical description of the dataset will also serve as the contact for the use of live data. This also applies to providing safety and training sessions for all persons having access to live data. The partners responsible for pilot sites handling real-time data from living labs will prepare information that is to be handed out to relevant stakeholders. </th> </tr> </table>

**Table 5: Best practice for use of production data**

How these best practice principles are being implemented is described in more detail in sections 3.5: Ethics and security and 3.9: Data sharing.

## 3.5.
Ethics and security

The consortium is aware that a number of privacy and data protection issues could be raised by the activities (use case demonstration and evaluation in WP7 and WP8) to be performed in the scope of the project. The project involves the carrying out of data collection in all pilot applications on the virtual neighbourhood. For this reason, human participants became involved in certain aspects of the project and of the data that was collected. This has been done in full compliance with all European and national legislation and directives relevant to the country where the data collections took place (INTERNATIONAL/EUROPEAN):

* The Universal Declaration of Human Rights and Convention 108 for the Protection of Individuals with Regard to Automatic Processing of Personal Data; and
* Directive 95/46/EC & Directive 2002/58/EC of the European Parliament regarding issues with privacy and protection of personal data and the free movement of such data.

<table> <tr> <th> **Data Protection Impact Assessment (DPIA):** The assessment shall include at least * a systematic description of the planned processing activities and the purposes of the processing, including, where relevant, the legitimate interest pursued by the controller; * an assessment of whether the processing activities are necessary and proportionate for the purposes; * an assessment of the risks to the data subjects' rights and freedoms referred to in paragraph 1; and * the planned measures to manage the risks, including guarantees, security measures and mechanisms for safeguarding personal data and demonstrating that this Regulation is respected, taking into account the rights and legitimate interests of the data subject and other persons concerned. </th> </tr> </table>

**Table 6: Data Protection Impact Assessment (DPIA)**

In addition to this, the project further ensures that the fundamental human rights and privacy needs of participants have been met whilst they take part in the project.
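The guideline stated earlier, keeping anonymised data and personal data of respondents separate, can be sketched as a keyed pseudonymisation step: identifiers are replaced by pseudonyms whose key stays with one responsible partner. The key, names and record fields below are illustrative assumptions, not the project's actual procedure.

```python
import hashlib
import hmac

# Illustrative sketch: replace a personal identifier with a keyed pseudonym
# before a record is shared. The key is an assumption and would be held only
# by the partner responsible for the pilot.

SECRET_KEY = b"held-by-the-responsible-partner-only"

def pseudonymise(participant_id: str) -> str:
    # HMAC-SHA256 keyed pseudonym; without the key, the mapping
    # from pseudonym back to identifier cannot be reproduced.
    return hmac.new(SECRET_KEY, participant_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

record = {"participant": "jane.doe@example.org", "energy_kwh": 3.2}
shared = {"participant": pseudonymise(record["participant"]),
          "energy_kwh": record["energy_kwh"]}
print(shared)
```

The measurement data can then be stored and shared separately from the identifier-to-pseudonym mapping, in line with the data-separation guideline.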
A dedicated section providing ethical and privacy guidelines for the execution of the Industrial Trials is presented in the Evaluation Plans. In order to protect the privacy rights of participants, a number of best practice principles were prepared. These include:

* No data will be collected without the explicit informed consent of the individuals under observation. This involves being open with participants about what they are involving themselves in and ensuring that they have agreed fully to the procedures/research being undertaken by giving their explicit consent.
* The owners of personal data are to be granted the right of inspection and the right to be removed from the registers.
* No data collected will be sold or used for any purposes other than the current project.
* A data minimisation policy will be adopted at all levels of the project and will be supervised by each Industrial Pilot Demonstration responsible. This will ensure that no data which is not strictly necessary to the completion of the current study will be collected.
* During the development life cycle process, it will be necessary to operate on datasets. Some of the datasets may be based on production data, while others may be generated (synthetic). These data will be removed by the end of the project.
* Any shadow (ancillary) personal data obtained during the course of the research will be immediately cancelled. However, the plan is to minimize this kind of ancillary data as much as possible.

Special attention will also be paid to complying with the Council of Europe’s Recommendation R(87)15 on the processing of personal data for police purposes, Art. 2: _“The collection of data on individuals solely on the basis that they have a particular racial origin, particular religious convictions, sexual behaviour or political opinions or belong to particular movements or organisations which are not proscribed by law should be prohibited.
The collection of data concerning these factors may only be carried out if absolutely necessary for the purposes of a particular inquiry.”_

* Compensation – if and when provided – will correspond to a simple reimbursement for working hours lost as a result of participating in the study; special attention will be paid to avoiding any form of unfair inducement.
* If employees of partner organizations are to be recruited, specific measures will be in place in order to protect them from a breach of privacy/confidentiality and any potential discrimination; in particular, their names will not be made public and their participation will not be communicated to their managers.
* Data should be pseudonymised and anonymised to allow privacy to be upheld even if an attacker gains access to the system.
* Furthermore, if data has been compromised or tampering is detected, the involved parties are to be notified immediately in order to reduce the risk of misuse of data gathered for research purposes.

The same concerns addressed here also apply to open calls (see section 3.7: Open Calls). These issues are exemplified when, for instance, health-related data is gathered as part of a service where information is exchanged within a third-party ecosystem. Why are these data gathered? How are the data processed? Where are data temporarily stored? How are aggregated data accessed? When are data further exchanged with other actors? VICINITY allows the interoperability to take place, but the architecture serves as a facilitator – it cannot determine what happens with the data afterward, even more so if the data are exchanged or processed within other partners’ ecosystems. Responsibilities cannot be outsourced. Personal responsibilities and the approach used in protecting privacy rights must be clarified.

<table> <tr> <th> **10 National Security Agency (NSM) Security Requirements:** 1.
An established information security and certification management system in accordance with international standards, such as ISO/IEC 27001:2017. 2. Insight into the security architecture used to deliver the service. 3. Development of security in service production and at the supplier, in line with developments in technology and the threat picture over time. 4. An overview of who should have access to the company's information, where and how it should be processed and stored, and the degree of mechanisms for segregation from other customers. 5. Access management, which includes encryption, activity logging, and physical and logical security. 6. Security monitoring suitable for detecting incidents, and actions in line with the business's threat picture and relevant threat actors. 7. Procedures for incident management, nonconformity and safety reporting. 8. Emergency and contingency plans that harmonize with the company's own plans. 9. That the use of subcontractors and their use of subcontractors must be approved before implementation. 10. What activities are to be performed upon termination of the contract, including the return/relocation/deletion of the company's information. </th> </tr> </table>

**Table 7: 10 Security Requirements**

## 3.6. The VICINITY Data Management Portal

VICINITY developed a data management portal as part of the project. This portal provides a description of each dataset along with a link to a download section in case the dataset has been made publicly available:

* The portal will be updated each time a new dataset has been collected and **is ready for public distribution**.
* The portal will however not contain any datasets that should **not become publicly available**.

The initial version of the portal became available during the 2nd year of the project, in parallel to the establishment of the first versions of project datasets that can be made publicly available.
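The portal's rule that only datasets cleared for public distribution appear with a download link can be sketched as a simple gate over dataset entries. The schema and URLs here are illustrative assumptions, not the portal's actual data model.

```python
# Illustrative sketch: a dataset registry where only entries flagged public
# are exposed with a download link. Field names and URLs are hypothetical.

DATASETS = [
    {"name": "pilot-energy-2018", "public": True,
     "download": "https://example.org/datasets/pilot-energy-2018"},
    {"name": "ehealth-raw", "public": False, "download": None},
]

def published(datasets):
    """Return only the datasets cleared for public distribution."""
    return [d for d in datasets if d["public"]]

for entry in published(DATASETS):
    print(entry["name"], entry["download"])
```

Keeping the public flag in the registry, rather than in each pilot's own tooling, matches the portal's role as a common infrastructure for all partners.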
The VICINITY data management portal will enable project partners to manage and distribute their public datasets through a common infrastructure as described in Table 8.

<table> <tr> <th> **Datasets for:** </th> <th> **Datasets (continued)** </th> <th> **Administrative tools** </th> </tr>
<tr> <td> each IoT unit </td> <td> Datasets from pilots (see section 4.2 for examples) </td> <td> List of sensor / grouping </td> </tr>
<tr> <td> personal information </td> <td> groups of devices </td> <td> List of actions / sequences </td> </tr>
<tr> <td> energy related domains </td> <td> each health device </td> <td> List of users </td> </tr>
<tr> <td> • each interface (energy) </td> <td> node/object </td> <td> List of contacts </td> </tr>
<tr> <td> • each measuring device (energy) </td> <td> messaging </td> <td> Balancing loads </td> </tr>
<tr> <td> • each routing device (energy) </td> <td> sequences / actions (combination tokens / nodes) </td> <td> Booking </td> </tr>
<tr> <td> mobility related domains </td> <td> biometric (fingerprint, retina) </td> <td> Messaging </td> </tr>
<tr> <td> • parking data (mobility) </td> <td> camera </td> <td> Criteria </td> </tr>
<tr> <td> • booking (mobility) </td> <td> access </td> <td> Priorities </td> </tr>
<tr> <td> • areas (mobility) _(list continues in next column)_ </td> <td> each smart home device (temperature, smoke, motion, sound) </td> <td> Evaluation / feedback </td> </tr> </table>

**Table 8: datasets stored in the VICINITY management portal**

## 3.7. Open Calls

The Open Call process of the VICINITY project involved third parties. System integrators (Figure 6) were one of the target groups for the calls. They were presented with opportunities to integrate IoT infrastructures based on the VICINITY framework, as well as to implement/integrate Value-Added Services. The calls adhered to the principles which govern Commission calls, and these were referred to in all papers.
These principles all include confidentiality: all proposals and related data, knowledge and documents are treated in confidence.

**Figure 6: Involving 3rd parties through open calls will provide** **VICINITY with valuable experience, and evolve interoperability**

The Project Coordinator presented a legal contract to the third parties that were granted open calls and specified control procedures compliant with the Grant Agreement and the Consortium Agreement. This was done in order to assure that their contributions are in line with the agreed-upon work plan, and that the third party allows the Commission and the Court of Auditors to exercise their power of control on documents and information stored on electronic media or on the final recipient's premises.

Proposals for open calls and the resulting deliverables included sections that describe how the data management principles have been implemented. The papers followed the outlines presented in the legal contract and adhered to the GDPR. This also applies to sharing ideas and intellectual property. Furthermore, the deliverables presented how the chosen architecture and methodologies will be handled by the stakeholders, integrators and SMEs. That being said, very few of the Open Call winners gathered data that would be affected by privacy concerns. According to the VICINITY concept, participants can decide with whom they wish to cooperate and to what extent. Participants are held responsible for ensuring that the partners they team up with follow the same guidelines as the main project and the open call project.

## 3.8. Standards and metadata

The data were generated and tested through different test automation technologies, e.g. TDL (Test Description Language), TTCN-3 (Testing and Test Control Notation), and UTP (UML Testing Profile). The profile should mimic the data communicated from IoT units following the oneM2M specifications.
The Systems Modeling Language 4 (SysML) is used for the collection, analysis and processing of requirements, as well as for the specification of message exchanges and overviews of architecture and behaviour specifications (Figure 7).

**Figure 7: Example of SysML model of Virtual Oslo Science City**

The project intends to share the datasets in an internally accessible disciplinary repository using descriptive metadata as required/provided by that repository. Additional metadata for example test datasets will be offered in separate XML files; these have also been made available in XML and JSON format. Keywords will be added as notations in SysML and modelled on the specifications defined by oneM2M. The content will be similar to relevant data from compatible IoT devices and network protocols. No network protocols have been defined yet, but several have been evaluated. Files and folders will be versioned and structured by using a naming convention consisting of project name, dataset name, date, version and ID.

## 3.9. Data sharing

The project prepared the API for internal testing through the VICINITY open gateway. The VICINITY open gateway is defined as Interoperability as a Service. In other words, it is a cloud-based service that assumes the data has already been gathered and transferred to the software running on the service layer. These data were made available for developers and researchers in a controlled environment, where login credentials are used to get access to the data in XML and JSON format (Figure 8).

**Figure 8: Data will only be provided to partners with proper login credentials**

The project focuses on developing a framework that allows for a scalable and future-proof platform upon which to invest in and develop IoT applications, without fear of vendor lock-in or the need to commit to one connectivity technology.
The researchers must therefore be committed to the requirements, architecture, application programming interface (API) specifications, security solutions and mapping to common industry protocols such as CoAP, MQTT and HTTP. Further analysis will be performed using freely available open source software tools. The data will also be made available as separate files.

<table> <tr> <th> **Five Key Concepts (Art. 4):** ❶ **Personal Information:** “any information about an identified or identifiable natural person (‘the data subject’); an identifiable natural person is a person who can be directly or indirectly identified, in particular by means of an identifier, e.g. a name, identification number, location information, an online identifier or one or more elements specific to the physical, physiological, genetic, psychological, economic, cultural or social identity of the said person” ❷ **Processing:** “any operation or sequence of operations done with personal data, whether automated or not”, e.g. collection, registration, organization, structuring, storage, adaptation or modification, retrieval, consultation, use, delivery by transmission, distribution or any other means of making available, assembling or merging, limiting, deleting or destroying ❸ **The controller:** “determines the purpose of the processing of personal data and the means to be used” </th> </tr> <tr> <td> ❹ </td> <td> **The data processor:** “processes personal data on behalf of the controller” </td> </tr> <tr> <td> ❺ </td> <td> **Consent:** “any voluntary, specific, informed and unambiguous expression of the data subject where the person in a declaration or clear affirmation gives his consent to the processing of personal data relating to the person” </td> </tr> </table>

**Table 9: Five Key Concepts in GDPR**

The goal is to ultimately support the Europe 2020 strategy 5 by offering the open data portal.
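When data are made available as separate files, the naming convention from section 3.8 – project name, dataset name, date, version and ID – could be applied as in the following sketch. The underscore separator, the ISO date form and the file extension are illustrative assumptions, since the deliverable does not fix the exact format:

```python
from datetime import date

def dataset_filename(project, dataset, day, version, uid, ext="json"):
    """Compose a versioned file name from project name, dataset name,
    date, version and ID, following the section 3.8 convention.
    Separator, date form and extension are illustrative assumptions."""
    return f"{project}_{dataset}_{day.isoformat()}_v{version}_{uid}.{ext}"

name = dataset_filename("VICINITY", "DS.AAU.01.GRID_Status",
                        date(2019, 5, 17), 2, "a1b2c3")
print(name)  # VICINITY_DS.AAU.01.GRID_Status_2019-05-17_v2_a1b2c3.json
```

Encoding the version and date in the name keeps older exports of the same dataset distinguishable without relying on file-system timestamps.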
The Digital Agenda proposes to better exploit the potential of Information and Communication Technologies (ICTs) in order to foster innovation, economic growth and progress. Thus, VICINITY will support the EU's efforts in exploiting the potential offered by using ICT in areas like climate change, managing an ageing population, and intelligent transport systems, to mention a few examples.

## 3.10. Archiving and preservation (including storage and backup)

As specified by the "rules of good scientific practice", the project aims to preserve data for at least ten years. The approximate end volume of the example test dataset is currently 10 GB, but this may be subject to change as the scope of the project may change. The VICINITY architecture itself neither generates nor stores any information. Data in this context refers to Value-Added Services, code and documentation produced as part of the project. Associated costs for dataset preparation for archiving are covered by the project itself, while long-term preservation will be provided, and its costs covered, by a selected disciplinary repository. During the project, data will be stored on the VICINITY web cloud as well as being replicated to a separate external server. The source code and descriptions are available from the project website and GitHub, but no actual data is stored at these locations:

* _https://vicinity2020.eu/vicinity/public-deliverables_
* _https://github.com/vicinityh2020_

# Datasets

Information that is collected or aggregated from a data source needs to adhere to a common format. This format is used to organise the data for further exchange, storage or manipulation. Datasets can be made up of several parts, and in VICINITY are based on the definition found in the W3C “RDF Data Cube Vocabulary 6 ”, section 5.1. Some of the pilots were deployed in “living labs” with actual residents or other human participants. Several of the activities carried out by the project depended on collecting some basic personal data (e.g.
name, background, contact details). VICINITY2020 avoided collecting this kind of data. However, when data was gathered, the project protected the data in accordance with “the EU's Data Protection Directive 95/46/EC 7 of the European Parliament” and of “the Council of 24 th of October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data”. National and local legislation applicable to the project will also be strictly applied (full list described in section 3.5: Ethics and security).

Data in a data set can be roughly described as belonging to one of the following kinds:

* **Observations** This is the actual data, the measured values. In a statistical table, the observations would be the values in the table cells.
* **Organizational structure** To locate an observation, spatial information or coordinates are necessary. This also applies to other kinds of information about origin, and to particular information on the sensors that are relevant to the observations.
* **Structural metadata** Metadata is necessary to interpret the observations. What is the unit of measurement? Is it a normal value or a series break? Is the value measured or estimated? These metadata are provided as attributes and can be attached to individual observations, or to higher levels.
* **Reference metadata** This is metadata that describes the dataset as a whole, such as categorization of the dataset, its publisher, and a SPARQL endpoint where it can be accessed.

At the time this deliverable was written, the datasets provided so far by partners and open call winners are presented in “Table 10: Datasets currently provided by partners” and “Table 11: Datasets currently provided by Open Call winners”.
<table> <tr> <th> **Partner** </th> <th> **Sector** </th> <th> **Role** </th> <th> **GDPR** </th> </tr>
<tr> <td> **Aalborg University (AAU)** DS.AAU.01.GRID_Status DS.AAU.02. Gorenje_Smart_Appliances_Sensor DS.AAU.03. PlacePod_Parking_Sensor DS.AAU.04. Tinymesh_Door_Sensor </td> <td> Smart grid </td> <td> Partner </td> <td> No </td> </tr>
<tr> <td> **Enercoutim (ENERC)** DS.ENERC.01.METEO_Station DS.ENERC.02.BUILDING_Status DS.ENERC.03.GRID_Status </td> <td> Smart energy </td> <td> Partner </td> <td> No </td> </tr>
<tr> <td> **GNOMON Informatics SA (GNOMON)** DS.GNOMON.01.Pressure_sensor DS.GNOMON.02.Weight_sensor DS.GNOMON.03.Fall_sensor DS.GNOMON.04.Wearable_Fitness_Tracker_Sensor DS.GNOMON.05.Beacon_Sensor DS.GNOMON_CERTH.06.Gorenje_Smart_Appliances_Sensor </td> <td> eHealth </td> <td> Partner </td> <td> Yes </td> </tr>
<tr> <td> **Centre for Research and Technology Hellas (CERTH)** DS.CERTH.01.Door_Sensor DS.CERTH.02.Motion_Sensor DS.CERTH.03.Pressure_Mat </td> <td> eHealth </td> <td> Partner </td> <td> Yes </td> </tr>
<tr> <td> **Hafenstrom AS (HITS)** DS.HITS.01.Parkingsensor DS.HITS.02.SmartLight DS.HITS.03.LaptopTeststation DS.HITS.04.Sensio_sensors_temperature_motion_lock DS.HITS.05.Gorenje_Smart_Appliances_Sensor </td> <td> Smart parking </td> <td> Partner </td> <td> Yes </td> </tr>
<tr> <td> **Tiny Mesh AS (TINYM)** DS. VITIR.01.Door_Sensor DS. VITIR.02.CO2 Sensor </td> <td> Smart building </td> <td> Partner </td> <td> No </td> </tr> </table>

**Table 10: Datasets currently provided by partners**

<table> <tr> <th> **Open Call** </th> <th> **Sector** </th> <th> **Role** </th> <th> **GDPR** </th> </tr>
<tr> <td> **PilotThings** DS. PilotThings.01.Building DS. PilotThings.02.OPA_Grid </td> <td> Smart building </td> <td> Open Call </td> <td> Yes </td> </tr>
<tr> <td> **WearHealth** DS. WearHealth.01.SmartShirt </td> <td> Smart building </td> <td> Open Call </td> <td> Yes </td> </tr>
<tr> <td> **SaMMY** DS.
SaMMY.Patras.Air_Temperature DS. SaMMY.Patras.Humidity DS. SaMMY.Patras.Water_Temperature DS. SaMMY.Patras.Water_pH DS. SaMMY.Patras.Water_ORP DS. SaMMY.Patras.BerthSpace(5-15)_Occupancy </td> <td> Marina </td> <td> Open Call </td> <td> No </td> </tr>
<tr> <td> **Thinkinside Srl** DS. ThinkInside.01.INCANT </td> <td> Positioning </td> <td> Open Call </td> <td> No </td> </tr>
<tr> <td> **Sensinov** DS.Sensinov.01.F2IS-VAS </td> <td> Smart building </td> <td> Open Call </td> <td> No </td> </tr> </table>

**Table 11: Datasets currently provided by Open Call winners**

For each dataset in VICINITY, the format in “Table 12: Format of dataset description” was specified:

<table> <tr> <th> **DS. PARTICIPANTName.##.Logical_sensorname** </th> </tr>
<tr> <td> **Data Identification** </td> <td> </td> </tr>
<tr> <td> Dataset description </td> <td> _Where are the sensor(s) installed? What are they monitoring/registering? What is the dataset comprised of? Will it contain future sub-datasets?_ </td> </tr>
<tr> <td> Source (e.g. which device?) </td> <td> _How will the dataset be collected? What kind of sensor is being used?_ </td> </tr>
<tr> <td> **Partners services and responsibilities** </td> </tr>
<tr> <td> Partner owner of the device </td> <td> _What is the name of the owner of the device?_ </td> </tr>
<tr> <td> Partner in charge of the data collection (if different) </td> <td> _What is the name of the partner in charge of the device? Are there several partners that are cooperating? What are their names?_ </td> </tr>
<tr> <td> Partner in charge of the data analysis (if different) </td> <td> _The name of the partner._ </td> </tr>
<tr> <td> Partner in charge of the data storage (if different) </td> <td> _The name of the partner._ </td> </tr>
<tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WPxx and WPxx._ </td> </tr>
<tr> <td> **Standards** </td> </tr>
<tr> <td> Info about metadata (Production and storage dates, places) and documentation?
</td> <td> _What is the status with the metadata so far? Has it been defined? What is the content of the metadata (e.g. datatypes like images portraying an action, textual messages, sequences, timestamps, etc.)?_ </td> </tr>
<tr> <td> Standards, Format, Estimated volume of data </td> <td> _Has the data format been decided on yet? What will it look like?_ </td> </tr>
<tr> <td> **Data exploitation and sharing** </td> </tr>
<tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Example text:_ _Production process recognition and help during the different production phases, avoiding mistakes_ </td> </tr>
<tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _Example text:_ _The full dataset will be confidential and only the members of the consortium will have access to it. Furthermore, if the dataset or specific portions of it (e.g. metadata, statistics, etc.) are to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination_ </td> </tr>
<tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _Have the data sharing policies been decided yet? What requirements exist for sharing data? How will the data be shared? Who will decide what is to be shared?_ </td> </tr>
<tr> <td> Embargo periods (if any) </td> <td> _-_ </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> Data storage (including backup): where? For how long? </td> <td> _Who will own the information that has been collected? How will it adhere to partner policies?
What kind of limitation is put on the archive?_ </td> </tr> </table>

**Table 12: Format of dataset description**

<table> <tr> <th> **Current legal requirements as described in General Data Protection Regulation OJ L 119, 04.05.2016:**

* Information systems and security measures must be documented (section 13 and cf. section 2-16)
* Documented internal control measures (section 14)
* Description of security objectives and strategy (section 2-3)
* Regular review of the use of IT systems, which must be documented (section 2-3)
* Establish criteria for acceptable risk (section 2-4)
* Conduct and document risk assessments (section 2-4)
* Implement and document security audits (section 2-5)
* Report to the Data Inspectorate for certain security breaches (section 2-6)
* Establish responsibility and authority structures for the use of the IT system (section 2-7)
* Configure IT systems to achieve satisfactory security (section 2-7)
* Establish access control and authorizations (section 2-8)
* Educate and train personnel in the proper use of IT systems (section 2-8)
* Physical measures against unauthorized access to IT systems (section 2-10)
* Document measures against unauthorized access to confidential information (section 2-11)
* Encryption solutions when transferring confidential information (section 2-11)
* Ensure necessary accessibility of information (section 2-12)
* Proper backup procedures (section 2-12)
* Security measures to prevent unauthorized use and measures to detect such attempts (section 2-14)
* Measures against unauthorized changes to information (section 2-13)
* Measures against malicious software (section 2-13)
* Proper use of cookies (Electronic Communications Act, section 2-7b)

For interactive navigation, please visit _https://gdpr-info.eu/_ _Table is based on material by Kjell Steffner, attorney-at-law, Hill & Co Advokat _ </th> </tr> </table>

**Table 13: Current legal requirements in GDPR**

## 4.1.
Description of methods for dataset description

Some example test datasets were generated by research teams from the participants in the project as XML files. Certain datasets were based on semantic analysis of data from test sensors, applied to an ontology, and made available in XML and JSON format (Figure 9). The collected datasets encompassed different methodological approaches and IoT standards defined by the global standard initiative oneM2M. The data were processed in different test environments like TDD (Test Driven Development), ATDD (Acceptance Test Driven Development), PBT (Property Based Testing) and BDD (Behaviour Driven Development). The project focused on using model-based test automation in processes with short release cycles.

**Figure 9: Datasets will be prepared and provided in XML and JSON format**

Apart from the research teams, these datasets will be useful for other research groups, Standard Development Organisations (SDOs) and technical integrators within the area of the Internet of Things (IoT). All datasets were shared between the participants during the lifecycle of the project. Feedback from other participants and test implementations would decide whether a dataset should be made publicly available, but no requests were made, and only the eHealth and smart parking pilots had data that could be of relevance. Datasets supporting the framework defined by the VICINITY ontology can be made public and presented in open-access publications.

The VICINITY partners can use a variety of methods for exploitation and dissemination of the data including:

* Using them in further research activities (outside the action)
* Developing, creating or marketing a product or process
* Creating and providing a service, or
* Using the data in standardisation activities

Restrictions: 1.
All national reports (which include data and information on the relevant topic) will be available to the public through the project website, a repository, or any other option that the consortium decides, after verification by the partners so as to ensure their quality and credibility. 2. After month 18, so that partners have the time to produce papers. 3. Open access to the research data itself is not applicable.

## 4.2. Datasets for smart grid from Aalborg University (AAU)

AAU will mainly deal with control design, energy management systems implementation and Information and Communication Technology (ICT) integration in small-scale energy systems. Intensive and iterative lab tests have been conducted at AAU by using the hardware-in-the-loop solution and experimental platform to:

1. restrict some features of the VICINITY Gateway API to define a stability and proper operating range for the VICINITY platform;
2. ensure that local infrastructure, to be deployed at pilot sites, operates with the VICINITY platform as expected.

The lessons learned from the lab trial will be forwarded to WP3 and WP4 for performance/function improvements, to WP7 to help ensure a correct deployment of the VICINITY platform at the pilot sites from a technical perspective, as well as to WP8 to help design the technical evaluation approach.

<table> <tr> <th> **DS.AAU.01.GRID_Status** </th> </tr>
<tr> <td> **Data Identification** </td> </tr>
<tr> <td> Dataset description </td> <td> _This dataset comprised different parameters characterising the electrical grid from the generation to the distribution sections. The cost of the electricity will also be considered in this dataset, so as to have full information that enables micro-trading actions._ </td> </tr>
<tr> <td> Source (e.g. which device?)
</td> <td> _The source that feeds this dataset is the hardware-in-the-loop simulation and experimental platform, with adapters based on Python 3._ </td> </tr>
<tr> <td> **Partners services and responsibilities** </td> </tr>
<tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr>
<tr> <td> Partner in charge of the data collection (if different) </td> <td> _AAU_ </td> </tr>
<tr> <td> Partner in charge of the data analysis (if different) </td> <td> _AAU_ </td> </tr>
<tr> <td> Partner in charge of the data storage (if different) </td> <td> _AAU_ </td> </tr>
<tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP6, WP7, and WP8._ </td> </tr>
<tr> <td> **Standards** </td> </tr>
<tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr>
<tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format. Regarding the volume of data, it depends on the motion/activity levels of the engaged devices._ </td> </tr>
<tr> <td> **Data exploitation and sharing** </td> </tr>
<tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr>
<tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of grid status deployed in the AAU IoT-microgrid testing lab, that is not sensitive, is accessible through a local experimental repository._ </td> </tr>
<tr> <td> Data sharing, re-use and distribution (How?)
</td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited, anonymized, by another European project. Datasets from sensors deployed at seniors’ houses provide added value and can be the basis for other research projects (e.g. statistical data). VICINITY could have an open portal / repository on its website, providing anonymized data information like timestamps and descriptions._ </td> </tr>
<tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> Data storage (including backup): where? For how long? </td> <td> _The full dataset of grid status deployed in the AAU IoT-microgrid testing lab, that is not sensitive, is accessible through a local experimental repository. Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr> </table>

**Table 14: Dataset description of the AAU GRID status**

<table> <tr> <th> **DS.AAU.02. Gorenje_Smart_Appliances_Sensor** </th> </tr>
<tr> <td> **Data Identification** </td> </tr>
<tr> <td> Dataset description </td> <td> _The sensors related to Gorenje smart appliances are sensors embedded in a Gorenje smart oven and a Gorenje smart fridge. The equipment is provided by the Gorenje partner and deployed in the AAU IoT-microgrid laboratory. The main goal of the sensors is to automatically detect when a resident opens the fridge or uses the oven in order to create behaviour profiles based on relevant criteria (e.g. frequency of use, etc.), produce notifications in case of deviation from the normal standards of use and inform the call centre._ </td> </tr>
<tr> <td> Source (e.g. which device?) </td> <td> _The data are collected by specific smart appliances (i.e.
oven, fridge) provided by the Gorenje partner and adjusted to VICINITY requirements._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _AAU, Gorenje_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _AAU, Gorenje_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _AAU, Gorenje_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP6, WP7, and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied with the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format. Regarding the volume of data, it depends on the motion/activity levels of the engaged devices._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of grid status deployed in AAU IoT-microgrid testing lab, that is not sensitive, is accessible through a local experimental repository._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) 
</td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited, anonymized, by another European project. Datasets from sensors deployed at seniors’ houses provide added value and can be the basis for other research projects (e.g. statistical data). VICINITY could have an open portal/repository on its website, providing anonymized data information like timestamps and descriptions._ </td> </tr>
<tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> Data storage (including backup): where? For how long? </td> <td> _The full dataset of grid status deployed in the AAU IoT-microgrid testing lab, that is not sensitive, is accessible through a local experimental repository. Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr> </table>

**Table 15: Dataset description of the Gorenje smart appliances sensor in AAU IoT-microgrid laboratory**

<table> <tr> <th> **DS.AAU.03. PlacePod_Parking_Sensor** </th> </tr>
<tr> <td> **Data Identification** </td> </tr>
<tr> <td> Dataset description </td> <td> _The PlacePod parking sensors deployed in the AAU IoT-microgrid laboratory are also installed at the Tromsø pilot site. The main goal of the sensors is to automatically detect when a car parks in a parking slot in order to create behaviour profiles based on relevant criteria (e.g. vacant parking slots, frequency of use, etc.) and produce notifications with the number of vacant parking slots._ </td> </tr>
<tr> <td> Source (e.g. which device?)
</td> <td> _The source that feeds this dataset is the PlacePod parking sensors and an adapter based on Python 3._ </td> </tr>
<tr> <td> **Partners services and responsibilities** </td> </tr>
<tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr>
<tr> <td> Partner in charge of the data collection (if different) </td> <td> _AAU_ </td> </tr>
<tr> <td> Partner in charge of the data analysis (if different) </td> <td> _AAU_ </td> </tr>
<tr> <td> Partner in charge of the data storage (if different) </td> <td> _AAU_ </td> </tr>
<tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP6, WP7, and WP8._ </td> </tr>
<tr> <td> **Standards** </td> </tr>
<tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr>
<tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format. Regarding the volume of data, it depends on the motion/activity levels of the engaged devices._ </td> </tr>
<tr> <td> **Data exploitation and sharing** </td> </tr>
<tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr>
<tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of grid status deployed in the AAU IoT-microgrid testing lab, that is not sensitive, is accessible through a local experimental repository._ </td> </tr>
<tr> <td> Data sharing, re-use and distribution (How?)
</td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited in anonymized form by other European projects. Datasets from sensors deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could host an open portal/repository on its website, providing anonymized data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _The full grid-status dataset collected in the AAU IoT-microgrid testing lab, which is not sensitive, is accessible through a local experimental repository. Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr> </table> **Table 16: Dataset description of the PlacePod smart parking sensor in AAU IoT-microgrid laboratory** <table> <tr> <th> **DS.AAU.04. Tinymesh_Door_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The Tinymesh door sensor deployed in the AAU IoT-microgrid laboratory is provided by the Tinymesh partner. The main goal of the sensors is to automatically detect when a door’s status changes, in order to create behaviour profiles based on relevant criteria (e.g. frequency of use, etc.) and produce notifications with the number of usages and cleaning requirements._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The data are collected by the Tinymesh door sensor provided by the Tinymesh partner and adjusted to VICINITY requirements._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _AAU, Tinymesh_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _AAU, Tinymesh_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _AAU, Tinymesh_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP6, WP7, and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format.
Regarding the volume of data, it depends on the motion/activity levels of the engaged devices._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full grid-status dataset collected in the AAU IoT-microgrid testing lab, which is not sensitive, is accessible through a local experimental repository._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited in anonymized form by other European projects. Datasets from sensors deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could host an open portal/repository on its website, providing anonymized data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _The full grid-status dataset collected in the AAU IoT-microgrid testing lab, which is not sensitive, is accessible through a local experimental repository. Data exploitation is foreseen to be achieved through testing value-added services, data analytics and statistical analysis._ </td> </tr> </table> **Table 17: Dataset description of the Tinymesh door sensor in AAU IoT-microgrid laboratory** ## 4.3. Datasets for smart energy from Enercoutim (ENERC) ENERC contributed by providing the facilities and the experience in implementing solar production integrated into municipal smart city efforts. To this end, ENERC actively participated in the deployment, management and evaluation of the “Smart Energy Microgrid Neighbourhood” Use Case. Its contribution focused on the energy resource potential and demand studies and on economic sustainability. ENERC’s expertise allowed ICT integration with smart city management focused on better serving citizens. The main aim of this project is the demonstration of a Solar Platform which provides a set of shared infrastructures, reduces the total cost per MW and improves the environmental impact compared to the stand-alone implementation of such projects. As its main responsibilities, ENERC will be in charge of strategic technology planning and integration coordination, designing potential models for municipal energy management, as well as identifying the optimal ownership structure of the microgrid system with a focus on delivering maximum social and economic benefit to the local community. <table> <tr> <th> **DS.ENERC.01.METEO_Station** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The weather conditions will influence the energy production, so it becomes critical to understand the current and foreseen scenarios. It is fundamental to constantly measure, with the meteo station equipment, the parameters that can influence both energy production and consumption over time._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The sensors that feed this dataset are: temperature, humidity, wind speed and wind direction, barometer, precipitation measurement and sun tracker._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format. Regarding the volume of data, it depends on the motion/activity levels of the engaged devices. However, it is estimated to be 4 KB/transmission._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Due to privacy issues, the collected data are stored at a secured database scheme at SOLAR LAB Facilities, allowing access to registered users. Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g.
facility managers), and for a broader use in an anonymised/aggregated manner for data analytics and statistical analysis._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be confidential and only the authorized ENERC personnel and related end-users will have access as defined. Specific consortium members involved in technical development and pilot deployment will further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset openly available in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data will be stored in a database scheme at the SOLAR LAB facilities, allowing only authorised access to external end-users. A backup will be stored in an external storage device, kept by ENERC in a secured place. Data will be kept indefinitely to allow statistical analysis._ </td> </tr> </table> **Table 18: Dataset description of the ENERC METEO station** <table> <tr> <th> **DS.ENERC.02.BUILDING_Status** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The information associated with the energy consumption in buildings will allow identifying the usage of resources at each measurement point._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The sensors that feed this dataset are: cooling energy demand, heating energy demand, hot water demand, and building equipment demand._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format. Regarding the volume of data, it depends on the motion/activity levels of the engaged devices. However, it is estimated to be 4 KB/transmission._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Due to privacy issues, the collected data are stored at a secured database scheme at SOLAR LAB Facilities, allowing access to registered users. Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g.
facility managers), and for a broader use in an anonymised/aggregated manner for data analytics and statistical analysis._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be confidential and only the authorized ENERC personnel and related end-users will have access as defined. Specific consortium members involved in technical development and pilot deployment will further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset openly available in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None._ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data will be stored in a database scheme at the SOLAR LAB facilities, allowing only authorised access to external end-users. A backup will be stored in an external storage device, kept by ENERC in a secured place. Data will be kept indefinitely to allow statistical analysis._ </td> </tr> </table> **Table 19: Dataset description of the ENERC building status** <table> <tr> <th> **DS.ENERC.03.GRID_Status** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _This dataset comprises the different parameters that characterise the electrical grid from the generation to the distribution sections.
Moreover, the cost of electricity will be considered in this dataset so as to provide the full information that enables micro-trading actions._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensors that feed this dataset are: electrical energy generated on-site from RES, thermal energy generated on-site, thermal energy consumed, grid electricity consumed, instant grid cost of energy consumed, and value of energy purchased from the grid._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _ENERC_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in JSON format. Regarding the volume of data, it depends on the motion/activity levels of the engaged devices. However, it is estimated to be 4 KB/transmission._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Due to privacy issues, the collected data are stored at a secured database scheme at SOLAR LAB Facilities, allowing access to registered users.
Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g. facility managers), and for a broader use in an anonymised/aggregated manner for data analytics and statistical analysis._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be confidential and only the authorized ENERC personnel and related end-users will have access as defined. Specific consortium members involved in technical development and pilot deployment will further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset openly available in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data will be stored in a database scheme at the SOLAR LAB facilities, allowing only authorised access to external end-users. A backup will be stored in an external storage device, kept by ENERC in a secured place. Data will be kept indefinitely to allow statistical analysis._ </td> </tr> </table> **Table 20: Dataset description of the ENERC grid status** ## 4.4. Datasets for eHealth from GNOMON Informatics SA (GNOMON) GNOMON provided its background knowledge in the specific field of assisted living and telecare in the context of social workers.
In addition, GNOMON actively contributes to the use case pilot setup, assessment and benchmarking. The company has developed and provided the integrated remote care and monitoring system for people with health problems, as well as the software applications that use information and communication technologies to support and organise the business operation of the HELP AT HOME program in the Municipality of Pilea-Hortiatis. This infrastructure is further exploited and extended for the scope of the VICINITY project, and specifically for the realisation of the eHealth Use Case. <table> <tr> <th> **DS.GNOMON.01.Pressure_sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The sensors are in the possession of patients in need of assisted living, identified by the corresponding municipality (MPH) health care services to ensure the validity of each case. The measurements are taken according to the schedule agreed with their doctor. The main task of the sensor is to monitor blood pressure (systolic/diastolic) and heart rate levels._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The dataset is collected via a combination of connected devices consisting of a Bluetooth Blood Pressure monitor and a Connectivity Gateway based on Raspberry Pi._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device is the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in XML format. In a later stage, they are converted to JSON format and stored in a database. Regarding the volume of data, it depends on the participation levels of the engaged patients. However, it is estimated to be 16 KB/measurement._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Due to privacy issues, the collected data are stored at a secured database scheme at MPH headquarters, allowing access to registered users (i.e. MPH health care services personnel and eHealth call centre). Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g.
doctors), and for a broader use in an anonymised/aggregated manner for creating behaviour profiles and clustering patients into different medical groups._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset is confidential and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way, with an audit mechanism triggered simultaneously to guarantee alignment with the relevant requirements of the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset openly available in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited in anonymized form by other European projects. Datasets from health devices deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could host an open portal/repository on its website, providing anonymized data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long?
</td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored in an external storage device, kept by MPH in a secured place. Data are kept indefinitely to allow statistical analysis._ </td> </tr> </table> **Table 21: Dataset description of the GNOMON pressure sensor** <table> <tr> <th> **DS.GNOMON.02.Weight_sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The sensors are in the possession of patients in need of assisted living, identified by the corresponding municipality (MPH) health care services to ensure the validity of each case. The measurements are scheduled according to the agreement between the patient and the doctor. The main task of the sensor is to keep track of weight measurements and body mass index (provided that the patient supplies an accurate value of his/her height)._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The dataset is collected via a combination of connected devices consisting of a Bluetooth Body Composition monitor and a Connectivity Gateway based on Raspberry Pi._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device is the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received in XML format. In a later stage, they are converted to JSON format and stored in a database. Regarding the volume of data, it depends on the participation levels of the engaged patients. However, it is estimated to be 48 KB/measurement._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Due to privacy issues, the collected data are stored at a secured database scheme at MPH headquarters, allowing access to registered users (i.e. MPH health care services personnel and eHealth call centre). Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g.
doctors), and for a broader use in an anonymised/aggregated manner for creating behaviour profiles and clustering patients into different medical groups._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset is confidential and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way, with an audit mechanism triggered simultaneously to guarantee alignment with the relevant requirements of the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset openly available in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited in anonymized form by other European projects. Datasets from health devices deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could host an open portal/repository on its website, providing anonymized data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long?
</td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored in an external storage device, kept by MPH in a secured place. Data are kept indefinitely to allow statistical analysis._ </td> </tr> </table> **Table 22: Dataset description of the GNOMON weight sensor** <table> <tr> <th> **DS.GNOMON.03.Fall_sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The fall sensor is a wearable sensor in the possession of patients in need of assisted living, identified by the corresponding municipality (MPH) health care services to ensure the validity of each case. The main goal of the sensor is to automatically detect when a patient falls, either due to an accident or in the case of a medical incident. The event is triggered automatically after a fall, but a similar event is also triggered by pressing the corresponding panic button (wearable actuator). In both cases, an automated emergency phone call is placed to the eHealth Call Centre._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The dataset is collected via a combination of devices consisting of a hub (Lifeline Vi) and a fall detector that are wirelessly connected._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device is the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _An audit log containing alerts (incl. false alarms) is stored._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Due to privacy issues, the collected data are stored at a secured database scheme at MPH headquarters, allowing access to registered users (i.e. MPH health care services personnel and eHealth call centre). Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g.
patients’ doctors), and for a broader use in an anonymised/aggregated manner for data analytics and statistical analysis._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset is confidential and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way, with an audit mechanism triggered simultaneously to guarantee alignment with the relevant requirements of the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset openly available in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Datasets from VICINITY could be used and exploited in anonymized form by other European projects. Datasets from fall sensors at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could host an open portal/repository on its website, providing anonymized data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users.
A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 23: Dataset description of the GNOMON fall sensor** <table> <tr> <th> **DS.GNOMON.04.Wearable_Fitness_Tracker_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The fitness sensors are sensors embedded in wearable fitness trackers such as activity wristbands. The latter equipment is in the possession of middle-aged citizens, either with a chronic health issue (e.g. obesity) or not, who are identified by the equivalent municipality (MPH). The municipality tries to promote fitness awareness and improve citizens’ health under the concept of a municipal-scale competition that is based on activity-related data coming from the sensors (e.g. step counting)._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The data are collected by wearable fitness trackers, mainly in the form of activity wristbands (e.g. Xiaomi MiBand)._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device is the property of the test subject, in this case the participating citizen._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, CERTH, MPH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents.
Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _Collection of data from wearable fitness tracker sensors is event-driven. New data are dispatched once they are produced._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g. doctors), and for a broader use in an anonymised/aggregated manner for data analytics and statistical analysis. Additionally, as one of the value-added services introduced is related to the concept of a municipal-scale competition, data analysis also serves the needs of calculating and providing a ranking among the competitors._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset is confidential and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way with an audit mechanism triggered simultaneously to guarantee the alignment with relevant requirements coming from the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset widely accessible in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?)
</td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Anonymised datasets from VICINITY could be used and exploited by other European projects. Datasets from wearable fitness trackers provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could have an open portal / repository on its website, providing anonymised data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 24: Dataset description of the GNOMON Wearable Fitness Tracker Sensor** <table> <tr> <th> **DS.GNOMON.05.Beacon_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The beacon sensors are sensors deployed in the municipality’s sport facilities, e.g. gym, pool, etc., and also tested at CERTH/ITI’s Smart Home. The municipality tries to promote fitness awareness and improve citizens’ health under the concept of a municipal-scale competition that is based on activity-related data gathered by the sensors and processed accordingly (e.g. translation of beacon signals to actual time spent in sport facilities)._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The data are collected by beacons deployed in the municipality’s sport facilities and at CERTH/ITI’s Smart Home._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices are the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The collection of data from beacons is event-driven. New data are dispatched once they are produced, for example when a middle-aged person visits a sport centre._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users (e.g. doctors), and for a broader use in an anonymised/aggregated manner for data analytics and statistical analysis.
Additionally, as one of the value-added services introduced is related to the concept of a municipal-scale competition, data analysis also serves the needs of calculating and providing a ranking among the competitors._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of beacons deployed in CERTH / ITI’s smart house, which is not sensitive, is accessible through a local experimental repository._ _The full dataset of beacons deployed in houses of elderly people is sensitive and therefore confidential, and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way with an audit mechanism triggered simultaneously to guarantee the alignment with relevant requirements coming from the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset widely accessible in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Anonymised datasets from VICINITY could be used and exploited by other European projects. Datasets from beacons at sport centres provide added value and can form the basis for other research projects (e.g. statistical data).
VICINITY could have an open portal / repository on its website, providing anonymised data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 25: Dataset description of the GNOMON Beacon Sensor** <table> <tr> <th> **DS.GNOMON_CERTH.06.Gorenje_Smart_Appliances_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The sensors related to Gorenje smart appliances are sensors embedded in specific house equipment such as ovens and fridges. The latter equipment is provided by the Gorenje partner and is in the possession of patients in need of assisted living, identified by the equivalent municipality (MPH) health care services to ensure the validity of each case. Similar equipment is also deployed in CERTH / ITI’s facilities. The main goal of the sensors is to automatically detect when a patient opens the fridge or uses the oven in order to create behaviour profiles based on relevant criteria (e.g. frequency of use, etc.), produce notifications in case of deviation from the normal standards of use and inform the call centre._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The data are collected by specific smart appliances (i.e.
oven, fridge) provided by the Gorenje partner and adjusted to VICINITY requirements._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices are the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _CERTH, GORENJE, GNOMON, MPH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _CERTH, GORENJE, GNOMON, MPH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _CERTH, GORENJE, GNOMON, MPH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP6, WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The collection of data from Gorenje devices is time-driven: data are dispatched every 15 minutes, depending on the standards that Gorenje provides._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services and for a broader use in an anonymised/aggregated manner for creating behaviour profiles and clustering patients into different medical groups.
Significant deviation from the latter profiles is expected to produce relevant notifications._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of Gorenje devices deployed in CERTH / ITI’s facilities, which is not sensitive, is accessible through Gorenje Cloud in a local experimental repository._ _The full dataset from Gorenje devices deployed in elderly people’s houses is confidential and only the authorized MPH personnel and related end-users have access as defined through Gorenje Cloud. The latter authorized groups of users access data in a tamper-proof way with an audit mechanism triggered simultaneously to guarantee the alignment with relevant requirements coming from the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset widely accessible in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Anonymised datasets from VICINITY could be used and exploited by other European projects. Datasets from Gorenje devices deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data).
VICINITY could have an open portal / repository on its website, providing anonymised data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 26: Dataset description of the GNOMON/CERTH Gorenje Smart Appliances Sensor**

## 4.5. Datasets for eHealth from Centre for Research and Technology Hellas (CERTH)

CERTH / ITI contributes to the use case pilot setup for houses at the Municipality of Pilea-Hortiatis and provides its background knowledge in the field of assisted living. It also provides its Smart House infrastructure for cross-domain implementation, including building sensors and devices which have also been integrated into houses at MPH. <table> <tr> <th> **DS.CERTH.01.Door_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _Door sensors are deployed, on the one hand, in houses of patients in need of assisted living, identified by the equivalent municipality (MPH) health care services to ensure the validity of each case, but also in CERTH’s smart house facilities for testing reasons. The main task of the sensor is to provide a 24/7 door status for the area of its responsibility. Data coming from this sensor are used to create behaviour profiles based on relevant criteria and produce notifications in case of deviation from the normal standards._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The dataset is collected via a combination of connected door sensors (Z-Wave) and a Connectivity Gateway based on Raspberry Pi._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices are the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The collection of data from door sensors is event-driven (e.g. through REST services, XML format, etc.)._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services and for a broader use in an anonymised/aggregated manner for creating behaviour profiles and clustering patients into different medical groups.
Significant deviation from the latter profiles is expected to produce relevant notifications._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of door sensors deployed in CERTH / ITI’s smart house, which is not sensitive, is accessible through a local experimental repository._ _The full dataset of sensors deployed in houses of elderly people is sensitive and therefore confidential, and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way with an audit mechanism triggered simultaneously to guarantee the alignment with relevant requirements coming from the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset widely accessible in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Anonymised datasets from VICINITY could be used and exploited by other European projects. Datasets from sensors deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data). VICINITY could have an open portal / repository on its website, providing anonymised data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where?
For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 27: Dataset description of the CERTH Door Sensor** <table> <tr> <th> **DS.CERTH.02.Motion_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _Motion sensors are deployed, on the one hand, in houses of patients in need of assisted living, identified by the equivalent municipality (MPH) health care services to ensure the validity of each case, but also in CERTH’s smart house facilities for testing reasons. The main task of the sensor is to provide the 24/7 motion levels for the area of its responsibility. Data coming from this sensor are used to create behaviour profiles based on relevant criteria (e.g. motion levels for a specific room and time period, etc.) and produce notifications in case of deviation from the normal standards._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The dataset is collected via a combination of connected motion sensors (Z-Wave) and a Connectivity Gateway based on Raspberry Pi._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices are the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _GNOMON, MPH, CERTH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP6, WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The collection of data from motion sensors is event-driven (e.g. through REST services, XML format, etc.)._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services and for a broader use in an anonymised/aggregated manner for creating behaviour profiles and clustering patients into different medical groups.
Significant deviation from the latter profiles is expected to produce relevant notifications._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of motion sensors deployed in CERTH / ITI’s smart house, which is not sensitive, is accessible through a local experimental repository._ _The full dataset of sensors deployed in houses of elderly people is sensitive and therefore confidential, and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way with an audit mechanism triggered simultaneously to guarantee the alignment with relevant requirements coming from the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset widely accessible in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Anonymised datasets from VICINITY could be used and exploited by other European projects. Datasets from sensors deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data).
VICINITY could have an open portal / repository on its website, providing anonymised data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 28: Dataset description of the CERTH Motion Sensor** <table> <tr> <th> **DS.CERTH.03.Pressure_Mat** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _Pressure mats are deployed, on the one hand, in houses of patients in need of assisted living, identified by the equivalent municipality (MPH) health care services to ensure the validity of each case, but also in CERTH’s smart house facilities for testing reasons. The main task of the device, which is deployed on beds, is to provide the sleeping hours of the users. Data coming from this sensor are used to create behaviour profiles based on relevant criteria and provide notifications in case of deviation from the normal standards._ </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> _The dataset is collected via a sensor triggered when a person lies on the bed, an Arduino which reads the pressure mat measurement and a Connectivity Gateway based on Raspberry Pi._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The devices are the property of the test site owners, where the data collection is being performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _MPH, CERTH_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _MPH, CERTH_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are being collected within activities of WP6, WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset is accompanied by the respective documentation of its contents. Indicative metadata include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The collection of data from pressure mats is event-driven (e.g. through REST services)._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services and for a broader use in an anonymised/aggregated manner for creating behaviour profiles.
Significant deviation from the latter profiles is expected to produce relevant notifications._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of pressure mats deployed in CERTH / ITI’s smart house, which is not sensitive, is accessible through a local experimental repository._ _The full dataset of devices deployed in houses of elderly people is sensitive and therefore confidential, and only the authorized MPH personnel and related end-users have access as defined. The latter authorized groups of users access data in a tamper-proof way with an audit mechanism triggered simultaneously to guarantee the alignment with relevant requirements coming from the recently introduced General Data Protection Regulation (GDPR). Specific consortium members involved in technical development and pilot deployment further have access under a detailed confidentiality framework._ _Furthermore, if it is decided to make the dataset widely accessible in an anonymised/aggregated form, a data management portal will be created that provides a description of the dataset and a link to a download section._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal. Anonymised datasets from VICINITY could be used and exploited by other European projects. Datasets from sensors deployed at seniors’ houses provide added value and can form the basis for other research projects (e.g. statistical data).
VICINITY could have an open portal / repository on its website, providing anonymised data information such as timestamps and descriptions._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Due to ethical and privacy issues, data are stored in a database scheme at the headquarters of MPH, allowing only authorised access to external end-users. A backup is stored on an external storage device, kept by MPH in a secure place. Data are kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 29: Dataset description of the CERTH Pressure Mat**

## 4.6. Datasets for intelligent mobility from Hafenstrom AS (HITS)

HITS provided the user requirements specifications and demonstration of the transport domain use case, while actively participating in the dissemination and exploitation activities of the project. By employing know-how within standardization bodies, mobility and smart city governance, HITS allowed municipalities and smart cities to better utilize internal resources and improve the services offered to citizens and agencies alike. Furthermore, HITS was responsible for the use cases “Virtual Neighbourhood of Buildings for Assisted Living integrated in a Smart Grid Energy Ecosystem” and “Virtual Neighbourhood of Intelligent (Transport) Parking Space”. To this end, it was the main partner to bring and arrange the required infrastructure, in collaboration with other Consortium partners (i.e. the TINYM partner), for the use case demonstration. <table> <tr> <th> **DS.HITS.01.Parkingsensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The sensors will be installed at a test site, and will register proximity of objects of a certain size.
Future subsets may contain information about temperature, humidity, noise, light and other temperature-, visual- and touch-related data. The sensors’ main task is to detect if the space is occupied. This information will later on be integrated with identification in order to verify that the vehicle/unit that occupies the space is licensed through either a booking or a ticketing action._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The dataset will be collected through a sensor that is mounted at the parking site._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata include: (a) description of the experimental setup (e.g. process system, date, etc.) and procedure which is related to the dataset (e.g. proactive maintenance action, unplanned event, nominal operation,
etc.), (b) scenario-related procedures, state of the monitored activity and involved workers, involved system, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored in XML format and are estimated to be 50-300 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Registering parking activity based upon availability, vehicle, ownership/licence, comparing with nearby infrastructure and surrounding ITS technology._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to participants in the project. If the dataset or specific portions of it (e.g. metadata, statistics, etc.) are decided to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device of the developed system (computer). A backup will be stored in an external storage device.
Data will be kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 30: Dataset description of the HITS parking sensor** <table> <tr> <th> **DS.HITS.02.SmartLight** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _Smart lights will be installed at the lab, and will demonstrate how light and colours can indicate the state of access and availability. Future subsets may contain information about proximity, movement, heat sensing (infrared), sound sensing and door contact sensors. The smart lights' main task is to visually inform about the state of the parking space. This information may later on be integrated with indicators for occupancy, time to availability and validity._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The dataset will be received from a laptop in the lab._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device will be the property of the test site owners, where the data collection is going to be performed_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata include: (a) description of the experimental setup (e.g. process system, date, etc.) and procedure which is related to the dataset (e.g. proactive maintenance action, unplanned event, nominal operation,
etc.), (b) scenario-related procedures, state of the monitored activity and involved workers, involved system, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored in XML format and are estimated to be 50-300 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Registering parking activity based upon availability, vehicle, ownership/licence, comparing with nearby infrastructure and surrounding ITS technology._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium. If the dataset or specific portions of it (e.g. metadata, statistics, etc.) are decided to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device of the developed system (computer). A backup will be stored in an external storage device.
Data will be kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 31: Dataset description of the HITS Smart lighting** <table> <tr> <th> **DS.HITS.03.LaptopTeststation** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The laptop test station will be installed at the workbench where the operator normally works, and will aggregate data and process information received wirelessly from other devices delivering data of relevance to the mobility domain and parking in particular. Future subsets may contain information about other domains – energy, and data packages from smart home and health devices. The test station's main task is to process data and to trigger, activate and log actions accordingly._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The dataset will be collected wirelessly and via USB ports._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device will be the property of the test site owners, where the data collection is going to be performed_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata include: (a) description of the experimental setup (e.g. process system, date, etc.) and procedure which is related to the dataset (e.g. proactive maintenance action, unplanned event, nominal operation,
etc.), (b) scenario-related procedures, state of the monitored activity and involved workers, involved system, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored in XML format and are estimated to be 50-300 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Registering parking activity based upon availability, vehicle, ownership/licence, comparing with nearby infrastructure and surrounding ITS technology._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium. If the dataset or specific portions of it (e.g. metadata, statistics, etc.) are decided to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device of the developed system (computer). A backup will be stored in an external storage device.
Data will be kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 32: Dataset description of the HITS laptop test station** <table> <tr> <th> **DS.HITS.04.Sensio_sensors_temperature_motion_lock** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _Sensors for measuring temperature, motion detection and identifying the status of door/window locks will be installed in apartments that are managed by caretakers employed by Tromsø municipality._ _The datasets will contain general information about activities, and offer insight that the building manager, caretakers and medical staff can utilize to offer better service and trigger messages should deviations occur._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The dataset will be received from a Sensio gateway that stores the data on an external server, and made available to a laptop at the pilot site through an API._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device will be the property of the test site owners, where the data collection is going to be performed_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will contain information on location, and be accompanied by the respective documentation of its contents.
Indicative metadata include: scenario-related procedures, state of the monitored activity and involved workers, involved system, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored in XML format and are estimated to be 30-50 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Identifying usage history used for resource planning and detecting unexpected activities based on activity or lack of activity, as well as measured values versus expected data._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium. Specific portions will be accessible to building managers and medical staff. Parts of the data will be anonymised, while others will be available through a two-pass data management portal. For privacy reasons, the data access will be limited, so configuration will be made in close cooperation with the service provider._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _Due to confidentiality, the created dataset will only be made accessible through a data management portal that is open to medical staff and managers._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device of the developed system (computer). A backup will be stored in an external storage device.
Data will be kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 33: Dataset description of the Sensio sensors** <table> <tr> <th> **DS.HITS.05.Gorenje_Smart_Appliances_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The Gorenje smart appliances installed at the Tromsø pilot site include a fridge and an oven. The appliances are managed by caretakers employed by Tromsø municipality, the tenants themselves and the building manager. The appliances contain sensors that, among other things, can measure timestamps and temperature._ _The data harvested will be used to identify usage history in order to offer better service, identify abnormal behaviour, and otherwise generate logs that can be used for statistical analysis._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The data will be collected by specific smart appliances (i.e. oven, fridge) provided by Gorenje and adjusted to VICINITY requirements. The data will be made available to a laptop at the pilot site through an API._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The device will be the property of the test site owners, where the data collection is going to be performed._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _HITS, GORENJE_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _HITS, GORENJE_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _HITS_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP6, WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents.
Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _A collection of data is dispatched every 15 minutes. The format is based on standards provided by Gorenje._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Usage data to identify behaviour patterns, and as a means of training disabled users to be more self-sufficient, are examples of value-added services that can be built on top of the platform. As the data pool increases, more services are expected to be included._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset of Gorenje devices deployed at the Tromsø pilot site “Teaterkvarteret 1. Akt” will be stored at the Gorenje Cloud in a local experimental repository._ _The full dataset will be available to selected members of the consortium. Specific portions will be accessible to building managers and medical staff. Parts of the data will be anonymised, while others will be available through a two-pass data management portal. For privacy reasons, the data access will be limited, so configuration will be made in close cooperation with the service provider._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _Anonymised parts of the dataset will be available for training and statistical purposes. Aggregated data that could be used to identify the user or other privacy-related information will be limited.
Due to confidentiality, the created dataset will only be made accessible through a data management portal that is open to medical staff and managers._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device of the developed system (computer). A backup will be stored in an external storage device. Data will be kept indefinitely, allowing statistical analysis._ </td> </tr> </table> **Table 34: Dataset description of the Gorenje smart appliances sensor** ## 4.7. Datasets for buildings from Tiny Mesh AS (TINYM) Tiny Mesh focuses on creating new products, services and business models as part of the Internet-of-Everything (IoE). New potential arises when IoE is used for connecting, integrating and controlling all kinds of meters, street lights, sensors, actuators, assets, devices, tags and other devices. The primary role of the Tiny Mesh company was as a developer and technology provider, with the company's IoT solution as the main enabling technology. The goal was to offer promising technology solutions through participation in use cases. TINYM contributed to the practical implementation through its work on the definition of use cases. TINYM took practical ownership of the various demo sites through its role as leader of WP7. <table> <tr> <th> **DS. VITIR.01.Door_Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The sensors will be installed in the door of a room where there is a need for monitoring usage._ _Data packets contain sensor data of movement._ </td> </tr> <tr> <td> Source (e.g. which device?) 
</td> <td> _Discrete digital input_ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The property owner, Tiny-Mesh_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _Tiny-Mesh_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _Tiny-Mesh_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _Tiny-Mesh_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _Metadata about the location of the sensor, network topology and network status will be available on VITIR’s MQTT server._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _Data is delivered as a discrete value indicating if the door has been opened or closed; the volume of data depends on the usage._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _The purpose of this collection is to provide input data on room usage for analysis by the building owner and facility manager._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _Data access is for the building manager and facility manager. Data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _Data access is confidential.
Only members of the consortium, the building manager and the facility manager will have access to it, for privacy reasons._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _-_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Unless specified otherwise by the client, the data will be stored in a Value-Added Service._ </td> </tr> </table> **Table 35: Dataset description of the VITIR Door Sensor** <table> <tr> <th> **DS. VITIR.02.CO2 Sensor** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _The sensors will be installed to measure CO2 levels. Data packets contain the measured CO2 values._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _Data is retrieved through industry-standard meters and communicated through Tiny-Mesh infrastructure before being made available to the consortium._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The property owner, Tiny-Mesh_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _Tiny-Mesh_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _Tiny-Mesh_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _Tiny-Mesh_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected within activities of WP7 and WP8._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _Metadata about the location of the sensor, network topology and network status will be available on VITIR’s MQTT server._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _Communication with the meter will be on a proprietary interface according to the meter vendor.
Data will be delivered in ppm_ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _The purpose is to give additional information about air quality inside various office spaces to the building management._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _Data access is restricted to the consortium, building manager and facility manager. Data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _Data access is confidential. Only members of the consortium, the building manager and the facility manager will have access to it, for privacy reasons._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _-_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will only be stored by the Value-Added Service._ </td> </tr> </table> **Table 36: Dataset description of VITIR’s sensor for CO2.** ## 4.8. Datasets for buildings from Open Call Winner PilotThings RTNRG – Real Time EneRGy management Pilot Things' focus is IoT data as a service: we transform real sensor data into human data. Our novel approach helps cities and industries to leverage IoT. Our software Pilot Things has been compliant with the worldwide oneM2M standard from the outset, thanks to a collaboration with the research laboratory LAAS-CNRS. oneM2M is based on a centralized architecture with interworking and semantic representation capabilities. oneM2M is very well suited to integrating multiple building protocols. Our project fits in the smart building and energy management use case. 
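To make the oneM2M concepts mentioned above more concrete, the sketch below models a simplified, in-memory oneM2M-style resource tree (Application Entity → container → contentInstance) in plain Python. It is only an illustration of the resource hierarchy, not an actual oneM2M client: the attribute keys (`con`, `ct`) and helper names are simplified assumptions, not the normative oneM2M short names or any Pilot Things API.

```python
# Hedged sketch: a simplified, in-memory model of a oneM2M-style
# resource tree (AE -> container -> contentInstance). Attribute keys
# are illustrative assumptions, not the normative oneM2M short names.

def make_ae(name):
    """Create an Application Entity (AE) holding named containers."""
    return {"name": name, "containers": {}}

def add_container(ae, name):
    """Create a container resource under the AE."""
    ae["containers"][name] = {"name": name, "instances": []}

def add_content_instance(ae, container, value, timestamp):
    """Append a measurement as a contentInstance-like record."""
    ae["containers"][container]["instances"].append(
        {"con": value, "ct": timestamp}
    )

def latest(ae, container):
    """Return the most recent contentInstance, or None if empty."""
    instances = ae["containers"][container]["instances"]
    return instances[-1] if instances else None

# Hypothetical smart-plug AE reporting power readings every 15 minutes
plug = make_ae("smartplug-01")
add_container(plug, "power")
add_content_instance(plug, "power", 42.5, "2019-05-01T10:00:00Z")
add_content_instance(plug, "power", 43.1, "2019-05-01T10:15:00Z")
print(latest(plug, "power"))
```

In a real deployment the same hierarchy would live on a oneM2M CSE and be accessed over HTTP/MQTT bindings; this sketch only mirrors the containment structure.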
The main objective is to build an infrastructure for facility management companies to help them deliver energy management services. The extension objective is to monitor an office building's energy consumption and compare it to grid production. The drawing below sums up the data flow: **Figure 10: PilotThings data flow** Here is the dataflow from the plug to VICINITY: **Figure 11: PilotThings global dataflow** The oneM2M ADN client connects to the oneM2M IN. Pilot Things is also a oneM2M client. The oneM2M ADN is notified of new plug data. It then exposes the data to the VICINITY cloud. To understand oneM2M terminology, see the oneM2M TS-0001 spec here: http://www.onem2m.org/technical/publisheddrafts/release-3 Documentation and software can be found on GitHub at: * oneM2M IPE VICINITY adapter: _https://github.com/vicinityh2020/adapter-pilot-things-one_ * This agent gets the OPA dataset and sends it to the VICINITY network: https://github.com/vicinityh2020/adapter-pilot-things-OPA The SPIE group asked us to connect this project to their IoT solution, called Colligo. Colligo is based on the Thingsworx platform, so we plan to develop an interface to Thingsworx to send VICINITY and Pilot Things data. This project will be used by SPIE operational business entities. We also introduced VICINITY to the Vinci Energies group, which is a big worldwide systems integrator. We won a smart city challenge at the Vivatechnology show. We will have more information on Vinci's objectives after the show (see _www.vivatechnology.com_). During the project we developed a oneM2M Z-Wave IPE (aka driver) allowing us to connect any commercial Z-Wave product. The oneM2M Pilot Things VICINITY adapter bridges Pilot Things and the VICINITY Neighbourhood (VN), thus allowing the connection of all the sensors we support, including Z-Wave. We now have more than 200 LoRaWAN and Sigfox sensors preintegrated in our catalog. <table> <tr> <th> **DS. 
PilotThings.01.Building** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _This dataset includes energy consumption measurement points for buildings to identify usage._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The dataset includes smartplug energy consumption and the building's global energy consumption from the building management system._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The smartplugs and the gateway are Pilot Things property._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _Pilot Things_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _Pilot Things_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _Pilot Things_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected during OC WP4._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data are received using the Z-Wave radio protocol and as CSV files from the building management system._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _The collected data are stored in the Pilot Things database and used to produce energy consumption dashboards.
The building facility manager will have access to the dashboard and thus the data._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be confidential and only the members of the consortium will have access to it. Furthermore, if the dataset or specific portions of it (e.g. metadata, statistics, etc.) are decided to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using Pilot Things APIs as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _-_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the Pilot Things database for a one-year period._ </td> </tr> </table> **Table 37: Description of the PilotThings building dataset.** <table> <tr> <th> **DS. PilotThings.02.OPA_Grid** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _This dataset includes solar energy production measurements._ </td> </tr> <tr> <td> Source (e.g. which device?) 
</td> <td> _The dataset includes the net AC production per minute._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _The solar premises are LAAS/CNRS property._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _The Pilot Things gateway retrieves the data._ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _Pilot Things_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _Pilot Things_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are going to be collected during OC WP4._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will be accompanied by the respective documentation of its contents. Indicative metadata may include device id, measurement date, device owner, state of the monitored activity, etc._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data format is CSV files from the OPA._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _The collected data are stored in the Pilot Things database and used to produce energy consumption dashboards. The building facility manager will have access to the dashboard and thus the data._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be confidential and only the members of the consortium will have access to it. Furthermore, if the dataset or specific portions of it (e.g. metadata, statistics, etc.) are decided to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. 
Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using Pilot Things APIs as well as a data management portal._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _-_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the Pilot Things database for a one-year period._ </td> </tr> </table> **Table 38: Description of the PilotThings OPA Grid dataset.** ## 4.9. Datasets for buildings from Open Call Winner WearHealth WearHealth – Safety and Health Intelligence **WearHealth is an intelligent decision support system that can use any 3rd-party wearables and IoT devices. 3rd-party hardware is integrated into our proprietary cognitive technologies to build an intelligent software platform that can identify markers of workers’ occupational risks, health, workload, and work efficiency. With these insights, businesses can detect and predict worker safety and health risks, thus preventing accidents and managing their workforce to improve productivity and operational efficiency.** IoT communication infrastructures and data stream integrations are not yet standardized in the industry and need to comply with the EU General Data Protection Regulation. The goal of this project is to further develop our technology and market WearHealth version 2.0, which includes a modularity approach to connect to wearables and process the data integrated into VICINITY following its decentralized interoperability concept. The key functionality of the adapter is to upload semantic data from the wearable to the Gateway. We have also implemented a Value-Added Service (VAS) to provide a real-time response to another service/adapter. 
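One concrete piece of processing such a VAS could perform on the smart-shirt stream is deriving heart rate from R-R intervals, both of which appear in the dataset described in this section. The function below is a hedged sketch of that standard conversion (heart rate in bpm = 60000 / mean R-R interval in ms); it is illustrative only and not WearHealth's actual implementation.

```python
def heart_rate_bpm(rr_intervals_ms):
    """Derive average heart rate (beats per minute) from a window of
    R-R intervals in milliseconds: HR = 60000 / mean(R-R)."""
    if not rr_intervals_ms:
        raise ValueError("empty R-R window")
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60000.0 / mean_rr

# e.g. a steady 800 ms R-R interval corresponds to 75 bpm
print(heart_rate_bpm([800, 800, 800]))  # → 75.0
```

A real-time VAS would apply this over a sliding window of the incoming BLE stream and raise feedback events when values leave a configured range.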
The semantic data for the service and adapter was stored in the VICINITY Cloud using the VICINITY Neighbourhood Manager. For this, we used the VICINITY gateway API and VICINITY agent API. The integration of wearables and our cloud services was planned together with the VICINITY technical team and the final system was deployed and tested in an industrial environment. The key results are: we have built an adapter for a wearable (smart shirt) and a value-added service, uploaded semantic data to the VICINITY cloud, improved data privacy and security for our solution, and validated the value of the integration of our cloud service with VICINITY. Key elements of the final WearHealth – VICINITY integration are the VICINITY gateway API and VICINITY agent API, used to communicate and deliver access to the wearable's semantic sensor data through the VICINITY Neighbourhood Manager. We have developed a custom adapter in order to access the semantic data from the wearable via BLE and a gateway. Moreover, a value-added service (VAS) was developed to provide real-time feedback based on the raw data from the wearable. The smart shirt has been successfully registered with the VICINITY Neighbourhood Manager by integrating our gateway. The data from the smart shirt was collected in an industrial environment. Through the VICINITY integration, we have validated the improvements in terms of data privacy and the value for potential customers. In this report, we show the final results of the integrations and evaluations.
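As a rough illustration of the adapter's role described above, the following Python sketch wraps raw smart-shirt readings in a semantic envelope and formats a gateway property endpoint. The URL template, object id, and property names are hypothetical placeholders, not the actual VICINITY gateway API routes or payload schema.

```python
import json
import time

# Hypothetical endpoint template; the real VICINITY gateway API routes
# and payload schema are defined by the project, not reproduced here.
GATEWAY_URL = "http://localhost:8181/agent/objects/{oid}/properties/{pid}"


def to_semantic_payload(heart_rate_bpm, rr_intervals_ms):
    """Wrap raw smart-shirt readings in a simple semantic envelope."""
    return {
        "timestamp": int(time.time() * 1000),
        "properties": {
            "heartRate": {"value": heart_rate_bpm, "unit": "bpm"},
            "rrIntervals": {"value": rr_intervals_ms, "unit": "ms"},
        },
    }


def publish(payload, oid="smart-shirt-01", pid="heartRate"):
    """Prepare an upload to the gateway (stub: no live gateway assumed here)."""
    url = GATEWAY_URL.format(oid=oid, pid=pid)
    body = json.dumps(payload)
    # an HTTP PUT/POST against a live gateway would go here
    return url, body


payload = to_semantic_payload(72, [820, 835, 810])
url, body = publish(payload)
print(url)
```

The separation between payload construction and transport mirrors the adapter/VAS split described in the text: the same semantic envelope can feed both the cloud upload and the real-time feedback service.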
WearHealth improves data privacy and transparency as follows: * The user has full control over his own data * Conclusions are drawn only at an aggregate level, never for an individual user * All data is anonymized and stored securely * No data is shared with third parties * No data is collected without the explicit and voluntary consent of the user * No data is used for purposes other than occupational safety and health Based on the results of this project, WearHealth will focus strongly on further improving its data privacy strategy and on access to additional data sets to enhance the service. <table> <tr> <th> **DS. WearHealth.01.SmartShirt** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _This dataset includes a one-channel ECG (Electrocardiogram) used to generate heart rate, R-R intervals, steps, activity and motion data of the users._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The main input device in the present use-case is the smart-shirt from Ambiotex using standard BLE (Bluetooth Low Energy) transmission._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _Privately owned_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _WearHealth_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _WearHealth_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _WearHealth_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _-_ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation?
</td> <td> _-_ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _-_ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _WearHealth provides analysis of the data from the smart shirt, such as heart rate and heart rate variability, to detect high worker workload and thereby reduce potential accidents and health-related problems._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The worker decides who has access to the data._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The owner of the data (user or worker) will be able to share his data with other solution providers. The data collected during the project can be made available in anonymized form to partners for additional services. Test users (volunteers) will sign specific documentation to allow the processing of their raw data._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _-_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the WearHealth cloud in compliance with the GDPR. Key features are:_ * _Personal data is stored anonymously_ * _Encrypted data transmission and storage_ * _Data is stored in a private European database_ * _The right to be forgotten_ * _Data is stored for the shortest time possible_ _The following standards are employed:_ * _SHA-256_ * _SSL_ * _AES-CCM at the BLE level_ * _Login information and raw data are stored in physically separated servers_ * _Passwords expire every 60 days_ </td> </tr> </table> **Table 39: Description of the WearHealth dataset.** ## 4.10.
Datasets for yachting marina from Open Call Winner SaMMY **SaMMY is an IoT cloud-based service solution for the Yachting Marina ecosystem, which enables the Marinas to efficiently manage their resources, to attract and efficiently service more yachts and support the local markets by linking them with the yachting community. Its main goal is to simplify Yachting Marinas’ operational processes, strengthening the yachters’ engagement with them as well as with the surrounding economies, and also to offer easy-to-use online services to yachters and skippers.** Taking advantage of its open architecture (FIWARE enablers and RESTful data model interfaces), SaMMY can contribute to any IoT ‘Smart City’ and/or ‘Smart Mobility’ ecosystem, enhancing the availability of Yachting-Marina sector data resources. The VICINITY adapter for SaMMY has been uploaded to the VICINITY GitHub repository at the URL: _https://github.com/vicinityh2020/vicinity-adapter-sammy_ The VICINITY SaMMY demo IoT dashboard has been uploaded to the VICINITY GitHub repository at the URL: _https://github.com/vicinityh2020/vicinity-sammy-demo_ A demo application illustrating the integration between the SaMMY IoT platform and the VICINITY infrastructure is publicly available at: https://vicinity.optionsnet.gr/demo/. The application is a web application developed with Spring Boot, Thymeleaf and other web technologies. The values are continuously refreshed every 10 seconds, an interval that is configurable in the application.properties file. However, the user can directly request an IoT value by pressing the “Query Now” link at the desired IoT device. The SaMMYacht IoT platform neither gathers nor handles any kind of personal data, as its infrastructure involves environmental and occupancy sensors. Sensors’ data are saved and kept for statistical/historical and evaluation reasons in the cloud database of the SaMMYacht platform.
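The refresh behaviour of the demo dashboard described above can be sketched in a few lines. This is an illustrative Python sketch rather than the actual Spring Boot/Thymeleaf code; the sensor payload, class names, and interval handling are assumptions made for the example.

```python
import time

REFRESH_SECONDS = 10  # mirrors the value configurable in application.properties


def read_sensor():
    # Placeholder for a read-only query against the SaMMY adapter; the
    # platform supports only READ operations, so nothing is ever actuated.
    return {"air_temperature": 21.4}


class Dashboard:
    """Keeps only the last set of sensor values, like the demo web app."""

    def __init__(self, refresh_seconds=REFRESH_SECONDS):
        self.refresh_seconds = refresh_seconds
        self.last_values = {}
        self._last_poll = float("-inf")

    def tick(self, now=None):
        """Refresh only if the configured interval has elapsed."""
        now = time.time() if now is None else now
        if now - self._last_poll >= self.refresh_seconds:
            self.last_values = read_sensor()
            self._last_poll = now

    def query_now(self):
        """'Query Now': bypass the timer and fetch immediately."""
        self.last_values = read_sensor()
        return self.last_values
```

Keeping only the latest values, as the demo UI does, minimizes querying of the SaMMY infrastructure while still letting a user force an immediate read.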
On the other hand, the SaMMY-VICINITY integration does not consider these historical data, since the VICINITY adapter implemented for this integration only handles the current values of the sensors, which in turn are not saved. Moreover, no aggregated data regarding the SaMMY-VICINITY integration are stored. Finally, the demonstration webapp only keeps the last set of sensors’ data in order to minimize querying the SaMMY infrastructure; these data are in turn served in the demo UI. Concluding, due to the nature of the SaMMY IoT platform sensors and data, only READ operations are allowed and supported, since the platform does not involve any kind of services or actuators at the IoT level. That being said, malicious actions won't have any effect. <table> <tr> <th> **DS. SaMMY.Patras.Air_Temperature** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _A sensor for measuring air temperature has been installed in Patras Port Mooring for Mega-yachts._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensor that feeds this dataset is a Libelium Smart City Plug & Sense (Air Temperature probe)._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _SaMMYacht (Patras Port Mooring / Patras Port Authority)._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are already collected._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation?
</td> <td> _The dataset will contain information for Environment Temperature on Patras Port Mega Yacht Mooring and geolocation info._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored to the CLOUD and are estimated to be 5 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium or public._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device on the Cloud. A back up will be stored in an external cloud storage device. Data will be kept indefinitely allowing statistical analysis._ </td> </tr> </table> **Table 40: Description of the Air Temperature dataset.** <table> <tr> <th> **DS. SaMMY.Patras.Humidity** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _A sensor for measuring humidity has been installed in Patras Port Mooring for Mega-yachts._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensor that feeds this dataset is a Libelium Smart City Plug & Sense (Humidity probe).
_ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _SaMMYacht (Patras Port Mooring / Patras Port Authority)._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are already collected._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will contain information for Environment Humidity on Patras Port Mega Yacht Mooring and geo-location info._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored to the CLOUD and are estimated to be 5 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium or public._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device on the Cloud. 
A back up will be stored in an external cloud storage device. Data will be kept indefinitely allowing statistical analysis._ </td> </tr> </table> **Table 41: Description of the Humidity dataset.** <table> <tr> <th> **DS. SaMMY.Patras.Water_Temperature** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _A sensor for measuring Water Temperature has been installed in Patras Port Mooring for Mega-yachts._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensor that feeds this dataset is a Libelium Smart Water Plug & Sense (Water Temperature probe)._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _SaMMYacht (Patras Port Mooring / Patras Port Authority)._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are already collected._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation?
</td> <td> _The dataset will contain information for Water Temperature on Patras Port Mega Yacht Mooring and geo-location info._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored to the CLOUD and are estimated to be 5 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium or public._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device on the Cloud. A back up will be stored in an external cloud storage device. Data will be kept indefinitely allowing statistical analysis._ </td> </tr> </table> **Table 42: Description of the Water Temperature dataset.** <table> <tr> <th> **DS. SaMMY.Patras.Water_pH** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _A sensor for measuring Water pH has been installed in Patras Port Mooring for Mega-yachts._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensor that feeds this dataset is a Libelium Smart Water Plug & Sense (Water pH probe).
_ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _SaMMYacht (Patras Port Mooring / Patras Port Authority)._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are already collected._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will contain information for Water pH on Patras Port Mega Yacht Mooring and geo-location info._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored to the CLOUD and are estimated to be 5 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium or public._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device on the Cloud. 
A back up will be stored in an external cloud storage device. Data will be kept indefinitely allowing statistical analysis._ </td> </tr> </table> **Table 43: Description of the Water pH dataset.** <table> <tr> <th> **DS. SaMMY.Patras.Water_ORP** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _A sensor for measuring Water Oxidation Reduction Potential has been installed in Patras Port Mooring for Mega-yachts._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensor that feeds this dataset is a Libelium Smart Water Plug & Sense (Water ORP probe)._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _SaMMYacht (Patras Port Mooring / Patras Port Authority)._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are already collected._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation?
</td> <td> _The dataset will contain information for Water ORP on Patras Port Mega Yacht Mooring and geo-location info._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored to the CLOUD and are estimated to be 5 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium or public._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device on the Cloud. A back up will be stored in an external cloud storage device. Data will be kept indefinitely allowing statistical analysis._ </td> </tr> </table> **Table 44: Description of the Water ORP dataset.** <table> <tr> <th> **DS. SaMMY.Patras.BerthSpace(5-15)_Occupancy** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _Sensors for monitoring berth space availability at Patras Port Mooring for Mega-yachts._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _The sensors that feed this dataset are Libelium Smart City Plug & Sense (UltraSound probes).
_ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _SaMMYacht (Patras Port Mooring / Patras Port Authority)._ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _SaMMYacht_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _The data are already collected._ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _The dataset will contain information for berth space availability at Patras Port Mega Yacht Mooring and geo-location info._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _The data will be stored to the CLOUD and are estimated to be 5 MB per month._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _Data exploitation is foreseen to be extended through envisioned value-added services, allowing full access to specific authorised users._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _The full dataset will be available to the members of the consortium or public._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> _The created dataset could be shared by using open APIs through the middleware._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _None_ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _Data will be stored in the storage device on the Cloud. 
A back up will be stored in an external cloud storage device. Data will be kept indefinitely allowing statistical analysis._ </td> </tr> </table> **Table 45: Description of the Berth Spaces occupancy dataset.** ## 4.11. Datasets for INCANT from Open Call Winner Thinkinside Srl **INCANT integrates IoT infrastructures providing indoor positioning data and the relevant supporting services into the VICINITY ecosystem.** Location is a first-class citizen in IoT: it can play a role in the aggregation/processing of IoT-generated information (e.g., spatial aggregation) or it can be the data point itself (e.g., tracking). In particular, we focus on indoor location, where the support for location-based services can enable and augment a rich set of application scenarios, ranging from retail marketing, interactive spaces, to healthcare medical portable equipment tracking, facility management and manufacturing safety of personnel, process monitoring. INCANT will provide a set of standard and actionable APIs, based upon the VICINITY ones, able to overcome such fragmentation of technologies and services in the indoor positioning area, enabling interoperability across indoor location technology, platforms and domains with the aim of unlocking the full potential of IoT applications. Break silos, avoid duplication of efforts and infrastructures) will be one of the primary outcomes of the INCANT proposal for the VICINITY ecosystem. During the initial months of the project, we have performed the initial design of the INCANT architecture and corresponding mapping to the existing VICINITY infrastructure. The Real Time Locating System (RTLS) is an infrastructure used to locate assets inside an indoor environment. It is typically based on a network of localisation antennas connected with each other, and able to localise a number of transmitting devices. A transmitting device can be either a TAG or a mobile device such as, e.g., a Smartphone. 
The output of an RTLS system is typically a timestamped series of location coordinates. All adapters have been implemented and tested. The adapters make it possible to introduce localisation into the IoT scenarios supported by VICINITY. The INCANT adapters have been released as open source and can be accessed in the VICINITY GitHub at the following link: https://github.com/vicinityh2020/vicinity-adapter-thinkinside The INCANT project aims at providing an indoor localisation service to the VICINITY infrastructure and ecosystem. This is based on the ThinkIN platform and technology, integrated with a set of VICINITY adapters for the collection and sharing of location data streams. This abstracts developers away from the creation and management of an RTLS infrastructure. No data will be stored in the INCANT infrastructure and no specific knowledge is present beforehand on the Object being localised and tracked. In terms of system architecture, the INCANT project will be running in (i) the ThinkIN cloud infrastructure, (ii) a local deployment managing the RTLS component and the integration with VICINITY, and (iii) the VICINITY infrastructure. The data exchanges between all components will occur over secure connections (HTTPS), thus ensuring the end-to-end confidentiality and security of the data being exchanged. Developers or system integrators will rely on INCANT to develop specific application scenarios. These will range from retail applications to smart manufacturing and smart buildings. In all cases, they will be in charge of defining the specific policy according to which data will be managed and, where needed, enforcing any privacy protection action that might be required to comply with the current GDPR regulation. For the INCANT pilot and demonstrator we have utilised a retail scenario, where we have anonymously tracked the shopping carts and baskets moving in a retail store.
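The RTLS output described above, a timestamped series of location coordinates keyed by an anonymous TAG identifier, can be sketched as a simple data structure. The field names and the dwell-time helper below are illustrative assumptions for this document, not the ThinkIN or INCANT schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LocationFix:
    """One RTLS sample: an opaque TAG id (never a personal identifier),
    a timestamp, and coordinates in the venue's local frame."""
    asset_id: str
    timestamp_ms: int
    x: float  # metres
    y: float  # metres
    floor: int = 0


def dwell_time_ms(track):
    """Total time spanned by one asset's track, e.g. for detecting
    queuing patterns at checkouts in the retail pilot."""
    stamps = [fix.timestamp_ms for fix in track]
    return max(stamps) - min(stamps) if stamps else 0


track = [
    LocationFix("tag-0042", 1_000, 3.1, 7.4),
    LocationFix("tag-0042", 2_000, 3.2, 7.5),
    LocationFix("tag-0042", 5_000, 3.2, 7.5),
]
print(dwell_time_ms(track))  # 4000
```

Because each fix carries only an opaque asset id, analyses like the queue-dwell computation stay at the level of carts and baskets, consistent with the anonymous tracking described for the pilot.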
Each asset is not linked to any personal information (e.g., shopper ID or other personal identifier) and data is utilised for store operational efficiency (e.g., detect queuing patterns at checkouts). <table> <tr> <th> **DS. ThinkInside.01.INCANT** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> _ThinkInside_ </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> _ThinkInside_ </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> _ThinkInside_ </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> _ThinkInside_ </td> </tr> <tr> <td> WPs and tasks </td> <td> _None_ </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) 
</td> <td> _No relevant information provided._ </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> _No relevant information provided._ </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> _No relevant information provided._ </td> </tr> </table> **Table 46: Description of the INCANT dataset.** ## 4.12. Datasets for F2IS-VAS from Open Call Winner Sensinov Ensuring accurate datasets through fault monitoring and isolation is crucial for operational Internet of Things (IoT) deployments. In fact, IoT devices and sensors can generate incorrect measurements which can be attributed to software and hardware issues. As an example, if an IoT system is used to perform predictive maintenance of a smart building, the collected IoT datasets must accurately reflect the status of the monitored system. Continually monitoring and isolating faults is an important feature in IoT. The objective of the F2I-VAS project is to experiment with and validate a Value-Added Service for fault detection in a smart building environment. F2IS-VAS relies on the IBM Watson IoT platform to capture incorrect measurements which can be attributed to software and hardware issues. The F2I-VAS objectives are articulated around the following aspects: * Integration of VICINITY with the IBM Watson IoT platform to offer fault detection as a service, providing insights to the dashboard application of smart building operators. Fault detection with IBM Watson IoT is based on both machine learning over data sets and detection models built with SPSS (Statistical Package for the Social Sciences). We refine SPSS models for data sets pertaining to the smart building. * To integrate the smart building infrastructure and the fault detection and isolation capabilities, we need to develop adapters that translate between the NGSI-LD model and the VICINITY Thing Description, and from the VICINITY ontology to the Watson IoT data model.
* To enhance the learning capabilities of our value-added service, we bring different dataset types from the smart building. Djane is available in the Sensinov GitHub. Link: _https://github.com/sensinov/djane_ The anomaly detection service is available at this link: _https://github.com/mbenalaya/VCNT_F2I_VAS_ <table> <tr> <th> **DS.Sensinov.01.F2IS-VAS** </th> </tr> <tr> <td> **Data Identification** </td> <td> </td> </tr> <tr> <td> Dataset description </td> <td> The sensors are deployed on the ADREAM building at LAAS-CNRS. They measure building occupants’ comfort, energy/water consumption, access, and lighting. Registered data types are as follows: * Temperature * Humidity * Luminosity * Energy consumption </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> Smart building data is collected via connectors (software) which get data in a device-specific data format and translate it into the NGSI-LD data model. Integrated device types are: * electricalCounter * waterCounter * temperature * humidity * luminosity </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> LAAS-CNRS </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> Sensinov </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> Sensinov </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> Sensinov </td> </tr> <tr> <td> WPs and tasks </td> <td> The data are going to be collected within activities of WP2 (System implementation, testing and documentation) and WP3 (System integration, exploitation plan and business model). </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> We provide metadata related to location and timestamp.
</td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> We follow these standards to specify the data model: SAREF4BLDG and NGSI-LD. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Data is used to train fault detection models and perform analysis to isolate faulty data. </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> Our solution does not deal with personal data. Regulations such as the free flow of (non-personal) data will apply in the context of the digital single market, and we will comply with applicable regulations from this perspective. The full dataset will be confidential and only the members of the consortium will have access to it. </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> The data will be processed for learning and anomaly detection and will be destroyed gradually. There is no storage or preservation of the VICINITY data after processing and insight generation. VICINITY infrastructure owners have complete control of their data. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> Sensinov will store data with backups. </td> </tr> </table> **Table 47: Description of the F2IS-VAS dataset.**

## 4.13. Datasets for drEVen from Open Call Winner Ubiwhere

drEVen allows any supplier of energy and owner of an OCPP-compliant Electric Vehicle (EV) charging station to easily become a real e-mobility operator, offering EV charging services with a proper, transparent billing and settlements layer.
The decentralised solution uses blockchain and smart-contract technology to process all monetary transactions and transparently manage energy tariffs and financial settlements. In a nutshell, it provides energy suppliers with a suite of software allowing for complete management of EV charging stations, users’ RFID tokens, transaction monitoring and transparent financial settlements. End users of the solution (EV drivers), on the other hand, use a dedicated mobile app that facilitates the discovery of available, geo-positioned charging stations, as well as the management of all monetary transactions: topping up different wallets, accepting energy tariffs and monitoring past transactions (amount paid to a specific provider, based on the consumed energy and the provider’s specified tariff).

Leveraging the project’s use of ontologies and abstraction layers, Ubiwhere was able to design and implement a VICINITY adapter for this specific EV charging station, exposing the services which effectively allow a partner like Enercoutim to offer e-mobility services using the VICINITY-enabled EV charging station. The set of services abstracts the complexities associated with the business layer (financial settlements and billing processes), also exposing the set of smart contracts deployed on the Ethereum blockchain via common RESTful interfaces. <table> <tr> <th> **DS. Ubiwhere.##.EVCharger** </th> </tr> <tr> <td> **Data Identification** </td> <td> </td> </tr> <tr> <td> Dataset description </td> <td> The EV charging station has been deployed at Enercoutim’s facilities, providing information about meter readings, real-time operation status, and information about the last start and stop timestamps of an EV charging session. The collected information references only data that has been collected during the experiment phase, from either Ubiwhere or Enercoutim. </td> </tr> <tr> <td> Source (e.g. which device?)
</td> <td> The information will be relayed from the platform managing the actual charging station which has been installed on premises. The device naturally sends this data to a platform which, in turn, relays it to our device adapter, for others (entities registered in the VICINITY Neighbourhood) to consume, after proper authorisation (device adapter) from Ubiwhere. </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> Ubiwhere </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> Ubiwhere and, partially, Enercoutim (the latter during the experiment phase only, when using the actual device) </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> Ubiwhere and Enercoutim </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> Ubiwhere </td> </tr> <tr> <td> WPs and tasks </td> <td> NA </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> Data collection during testing of this EV charging solution took place only during the official testing period with Enercoutim, on the 8th of November. This data has been temporarily stored on Ubiwhere’s controlled servers (cloud-level) for further analysis (more details afterwards) and reporting (this deliverable only). </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> Only a very small portion of data was stored, on the 8th of November, in an SQL database. No specific data standard or format has been followed; the data model has been entirely designed from scratch by Ubiwhere. As for the device adapter, the temporary collection of data followed the available ontologies, as already documented in previous sections.
</td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The small portion of data collected during the real pilot, which took place at Enercoutim’s facilities, reflected the bare minimum required for Ubiwhere alone to assess the viability of such a solution. This data allowed us to determine the outcome of different tests (section 4). </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> Access to data may be provided to the public, if required. Data which has been deployed in the distributed ledger (Ethereum testnet) will forever be publicly accessible. </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> Ubiwhere sees no value in sharing such data (besides that stored in Ethereum), nor in re-using or distributing it. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> NA </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> Collected data will be stored only until this deliverable has been officially deemed valid and accepted by VICINITY’s consortium. Afterwards, it shall be archived with an encryption key held by Ubiwhere only, for 6 months, temporarily stored in the company’s data center. </td> </tr> </table> **Table 48: Description of the EVcharger dataset.**

## 4.14. Datasets for MyBigFitnessTwin from Open Call Winner Nissatech

Using the VICINITY concept of interoperability as a service, this proposal has paved the way for a new generation of fitness monitoring and analytics services which enable improving the performance of a trainee, based on advanced real-time analytics on data integrated from various information silos in a sport club.
The main outcome is a novel service for deep analysis of personal fitness data, obtained from fitness wearables and smartwatches. The goal is to learn a model of the usual behaviour of a trainee and to apply that model in real time for the detection of unusual situations. The main advantage is the application of novel algorithms for the comparison of fitness data in order to obtain very precise modelling and, consequently, very accurate anomaly detection.

We support the GDPR through the right to be removed, the right to access one’s own data, and secure storage of data. All users will have to give their consent through a contract with terms of use in order to get access to data and use the system. The content of the consent is our policy, with information about the collected data and the way it is processed. The owner (the originator of the data) cannot have any control over how aggregated data is used for statistics or tailor-made offers. In order for data to be either removed or anonymized, concrete actions need to be taken. A request needs to be directed to a trainer, who will forward the deletion request to us, and within 30 days all the user’s data will be anonymized or completely removed. In the same way, to export data a user needs to send a request, which will be processed by us as soon as possible and within 30 days at the latest. The user data will be exported in CSV format.

We have implemented security protocols to avoid man-in-the-middle attacks; these measures include encrypting data transfers through a REST interface and authenticating them with JWT tokens. The data is currently physically stored on our server, but the plan is to move the database to the cloud. Various Value-Added Services can be prepared for the collected data, and we have already implemented anomaly detection services, as described in the report. Data are secured at rest and in transit with encryption.
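The baseline-and-deviation idea behind such an anomaly detection service can be sketched in a few lines. This is an illustrative example only: the function names, the per-trainee mean/standard-deviation model and the 3-sigma threshold are our assumptions, not the actual Nissatech implementation.

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple per-trainee baseline: mean and spread of heart rate."""
    return {"mean": mean(samples), "std": stdev(samples)}

def is_anomalous(value, baseline, k=3.0):
    """Flag a reading more than k standard deviations from the usual behaviour."""
    return abs(value - baseline["mean"]) > k * baseline["std"]

# Hypothetical heart-rate history (bpm) recorded during HIIT sessions.
history = [112, 118, 121, 115, 119, 117, 114, 120, 116, 118]
model = fit_baseline(history)

print(is_anomalous(117, model))  # a typical reading
print(is_anomalous(185, model))  # an unusually high reading
```

A production service would of course learn richer models over windows of multi-sensor data, but the comparison of real-time readings against a learned model of "usual behaviour" follows the same pattern.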
<table> <tr> <th> **DS. Nissatech.##.smartWatch** </th> </tr> <tr> <td> **Data Identification** </td> <td> </td> </tr> <tr> <td> Dataset description </td> <td> The sensors are wearables used by trainees in a fitness club. They monitor the psychophysiological parameters of a trainee during a training session (usually HIIT – High-Intensity Interval Training) </td> </tr> <tr> <td> Source (e.g. which device?) </td> <td> A Polar M600 smartwatch is used. Data is collected in real time and sent (BTE) for further processing. A Huawei smart scale and a Contour Plus One glucometer are used as contextual sensors </td> </tr> <tr> <td> **Partners services and responsibilities** </td> </tr> <tr> <td> Partner owner of the device </td> <td> Nissatech </td> </tr> <tr> <td> Partner in charge of the data collection (if different) </td> <td> Nissatech </td> </tr> <tr> <td> Partner in charge of the data analysis (if different) </td> <td> Nissatech </td> </tr> <tr> <td> Partner in charge of the data storage (if different) </td> <td> Nissatech </td> </tr> <tr> <td> WPs and tasks </td> <td> The data are going to be collected within activities of WP10. </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places) and documentation? </td> <td> Metadata is related to the fitness context of the training, such as the type of exercises. In addition, the data from the glucometer and smart scale are used as context for a fitness training session. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data </td> <td> The data format is defined based on the Smart4Fit data format.
</td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Data analysis enables the creation of models of normal behaviour (in a fitness training session) that are compared with real-time data during training in order to detect unusual/anomalous behaviour </td> </tr> <tr> <td> Data access policy / Dissemination level (Confidential, only for members of the Consortium and the Commission Services) / Public </td> <td> The first point here is authentication, using credentials (username and password). Second, all communication is SSL-encrypted, which means that, by using certificates and keys, trainees’ data is protected from outside attacks while being sent over the network </td> </tr> <tr> <td> Data sharing, re-use and distribution (How?) </td> <td> There is no data sharing planned </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> NA </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> Nissatech. The data belongs to the partner fitness club and will not be deleted (except on request from a user) </td> </tr> </table> **Table 49: Description of the MyBigFitnessTwin dataset.**

# Conclusions

This document is the third and final version of the Data Management Plan. It is based on knowledge harvested through describing requirements, preparing the VICINITY architecture, and planning the pilot sites. The updated datasets have been delivered by the participants responsible for the test labs and the living labs, and describe procedures and infrastructure that have been defined at this point in the project. The work on semantics and privacy issues has continued. It is the process of clarifying procedures that has led to many of the updates found in this document. Certain areas still need some attention.
This will in particular matter for the Open Calls, as these are still tentative and the documents and other material are still being worked out by the VICINITY consortium. Activities for a Data Management Portal have proceeded, and a demonstration has been held twice, presenting how the VICINITY architecture works, how it integrates, and how the concept of a virtual neighbourhood functions in practical terms. More updates are envisaged as the studies of the pilot sites proceed and the open calls are presented. Future versions may have updated consent forms as well, since the upcoming GDPR may lead to changes in how privacy and ethics issues are formulated.

A lesson learned from the work conducted in this period is that more IoT assets have been introduced that will be integrated within the ecosystems to be tested. There has been a fruitful discussion between project partners, which has increased the quality of this document. Ownership of data has become more important, and will receive special attention in the next part. The Data Management Portal is still under development, and the need for each project partner to contribute means that editing / access rights will need to be managed accordingly.

It must also be noted that the partners are unable to specify exactly what kinds of datasets will be relevant as the project proceeds. This is what they expect to learn from the pilot sites and the other tests conducted at the workbench. It is therefore expected that the datasets may change accordingly. The VICINITY Data Management Plan still puts a strong emphasis on the appropriate collection of metadata (and its publication, should the data be published), storing all the information necessary for the optimal use and reuse of those datasets. This metadata will be managed by each data producer and will be integrated in the Data Management Portal. This is considered even more important with the upcoming deployment of the General Data Protection Regulation (GDPR).
This is the final version of the DMP, from December 2019. It presents the final datasets and lessons learned, alongside plans for the further management of test data and production data. The document discusses information about the data, their integration and reuse, as well as data preservation, costs and a detailed management plan for each dataset.
# Executive Summary The WhoLoDancE Work Package 5 is responsible for the overall data management infrastructure to be built and deployed by ATHENA RC, with the objective to collect, store, pre-process and manage the multimodal data acquired in the project. Deliverable 5.1 provides technical information about the type of data that will be produced, managed and maintained by the data management platform, and the methodologies which will be applied for data integration and management in order to deliver the various applications of the project. The process of comprehending and drawing conclusions on data sources related to the project is an integral part of the WhoLoDancE data management approach and is presented in this report. For gathering information from partners, a special questionnaire has been designed and implemented by the project’s data management team and has been populated by the individual data-providing partners. More information on the questionnaire structure can be found in §3, “Recorded dataset information”, while the results are presented in §7, “Datasets”. A set of dataset management practices and tools that can be used for storing, delivering, preserving and licensing the data is then defined, evaluated for the needs of the project and complying with best practices and generally accepted paradigms in the context in which the project operates. The results of this work are presented in §4, “Data Management”. Subsequently, the data model of the project data is presented in §5, “Motion Capture Dataset Modeling”. Finally, the policies for ensuring data interoperability and integration across the project’s services, be they data management or end-user ones, are covered in §6, “Data Integration”. # Introduction The WhoLoDancE work package 5 is responsible for the overall data management infrastructure to be built and deployed by ATHENA RC with the objective to collect, store, pre-process and manage the multimodal data acquired.
The data management infrastructure will be able to deliver that data as input to various similarity search algorithms, multimodal analysis and modeling tools, and promote data exchange between different components and modalities of interaction to support the realization of a variety of learning scenarios. WP5 is responsible for integrating the “ground truth” data selected in WP2, indexing and annotating it based on the models and high-level descriptors prepared in WP3, and creating a learning content repository of movement data following the requirements and theoretical guidelines defined in WP1 and modeled in WP3. This deliverable outlines how the data “produced” (either generated or collected) during the WhoLoDancE project will be managed during the project and after its completion. In particular, it describes the practices characterizing research data handling during and after the project: what data will be collected, processed or generated, what methodology and standards will be applied, whether data will be shared / made open access and how, and how data will be curated and preserved. # Data management general approach In the WhoLoDancE project, the produced datasets are a first-class citizen for empowering the project’s research, technological development, usage and outreach activities. To facilitate this enhanced role of the data in the project, the management approach of WhoLoDancE is to lay the foundations for processes, technologies and policies related to data, so that their production and consumption by stakeholders is streamlined in a smooth and effective way. In this direction, it is important to view data elements from various perspectives, not as standalone artefacts but rather as an integral part of the project platform.
This requires that both the technological facets of data (model, manifestation, volume, protocols, etc.) and the policies around them (availability, preservation, volume, etc.) be handled when planning their incorporation in the project platform and work plan. The WhoLoDancE data management approach lays its foundations on the following sources:

* The Description of Action of the project, which describes the principles for data management as well as the baseline plan (be it explicit or implicit) for data collection and usage in the project’s lifetime.
* The availability and capacity of project partners to generate, describe and provide data, which relates to partner role, equipment, data sizing, etc.
* The learning scenarios and user needs defined in D1.4, which guide the production and consumption of data overall.
* Common practices regarding data management in the context of H2020 datasets, which relate to provenance, preservation, policies, etc.

The process of comprehending and drawing conclusions from those sources is herein called Conceptual Analysis of Datasets; it is an integral part of the WhoLoDancE data management approach and is presented in this report. Collecting and understanding the project’s datasets gathers information from two sources:

* Partner statements on dataset generation: i.e. a listing of the datasets that will become available in the course of the project, along with several data attributes that characterize their nature, use and policies.
* Specific dataset analysis: i.e. analysis of initial datasets gathered prior to the finalization of the project’s Conceptual Analysis of Datasets.

For gathering information from partners, a special questionnaire has been designed and implemented by the project’s data management team and has been populated by the individual data-providing partners. More information on the questionnaire structure can be found in §3, “Recorded dataset information”, while the results are presented in §7, “Datasets”.
The outcome of the conceptual analysis of datasets covers a number of topics and delivers the results presented below. First we define the **set of dataset management practices and tools** that can be used for storing, delivering, preserving and licensing the data, evaluated for the needs of the project and complying with best practices and generally accepted paradigms in the context in which the project operates. The results of this work are presented in §4, “Data Management”. Subsequently, the **data model** of the project motion capture dataset is presented in §5, “Motion Capture Dataset Modeling”; it covers:

* aspects of dataset internal models (i.e. how data is structured inside datasets)
* the WhoLoDancE data model, which covers:
  * element referencing approach (i.e. how dataset/metadata elements are cross-referenced)
  * descriptive metadata that allow datasets to be discovered and consumed by end users (and services)
  * structural metadata that allow datasets to be handled by the system and consumed and explored by services (and users)
  * semantic extensions: the project’s approach to handling semantic metadata

Finally, the policies for ensuring data interoperability and integration across the project’s services, be they data management or end-user ones, are covered in §6, “Data Integration”. # Recorded dataset information To record information on the datasets foreseen to be created during the project lifetime, a questionnaire has been created as an on-line form, and each partner has been asked to contribute by providing information on the datasets they foresee to generate. For each dataset, the classes of information that have been collected are presented in the remainder of this section. ## Dataset description The provided description should characterise the dataset, its origin (in case it is collected), its nature and scale, to whom it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities for integration and reuse. Requested information includes:

* _**Label:**_ Please provide a (tentative) label for the dataset;
* _**Description:**_ Please provide a brief description of the dataset;
* _**Generated/Collected:**_ Method of acquisition: Is the dataset genuinely generated within the project, or transformed or produced by aggregating content out of existing datasets / data sources?
* _**Origin:**_
  * In case the dataset is derived from existing datasets or imported, please indicate its origin(s). Add the dataset download URLs if available, otherwise the origin URL.
  * In case the dataset is derived from existing datasets or imported, please indicate the licenses of the original data. Add URLs if available.
* **_Nature_:**
  * What is the nature of the dataset content? The options provided include: optical motion capture data, audio data, biometric sensing data, 3D data, video data, programming and code data, documentation and reporting data.
  * Please provide details on the nature of the data (for example "Greek dance video data").
* **_Scale/Size_:** What is the estimated scale (size) of the dataset? (indicate the size of its constituents if applicable);
* _**Potential use:**_ What is the foreseen use of the dataset by different communities for specific applications or research purposes?
* **_Scientific publications / references_:** Please indicate publications that are related to the dataset;
* _**Availability:**_ When will the dataset be made available to the project (indicate month/year)?

## Content and Metadata Types * Is the dataset comprised of items of the same or different typologies? (e.g. video streams, audio streams, textual documents, tabular data, etc.) * What are the data formats used?
(For example: XML, CSV, FBX.) Please refer to possible standards used. * What are the metadata formats used? Please refer to possible standards used. ## Dataset use and sharing This section includes a description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination, necessary software and other tools for enabling re-use, and a definition of whether access will be widely open or restricted to specific groups. In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related). Identification of the repository where the data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). Requested information includes:

* What are the repositories the dataset or/and its metadata have been published in?
* Is there specific software required for consuming the dataset?
* Indicate the dissemination mechanisms through which the availability of the dataset will be announced (e.g. metadata catalogues).
* What is the license under which the dataset is provided (e.g. Creative Commons Attribution 4.0)?
* What are the policies governing access to the dataset (e.g. the dataset is “open”, the dataset is made available to authorised users only)?
* What is the procedure for a user to consume the dataset (e.g. standard content access API, web site download / view, behind a service API)?
* If this is a generated dataset, will there be an embargo period for providing (open) access to the dataset? If yes, how long (in months)?
* If creating or collecting data in the field, how will you ensure its safe transfer into the main data management system?

## Dataset handling and synchronization * What is the preservation strategy for the dataset? * What is the foreseen preservation period duration?
* What are the instruments (tools) put in place to implement the preservation strategy? * Please provide any additional information on issues related to the handling of the dataset (synchronization, referencing, etc.). # Data Management WhoLoDancE defines a series of practices and approaches for data storage, data access and data preservation. These practices are briefly described in the following sections. ## Data Storage and Repositories The WhoLoDancE preliminary data analysis shows that there is no single data manifestation that may cover all of the project’s pilot cases and data generation needs, and as such the project has to adopt a rather more generic approach to data and metadata storage. For example, datasets may contain motion capture data in various manifestations, video or audio streams, graphs, tables, etc. Furthermore, those datasets shall be described, in order to facilitate discoverability, and interconnected, in order to facilitate their consumption. As a result, the data storage elements and repositories presented in Figure 1 are included in the WhoLoDancE platform. More specifically, these include:

* **Storage Layer**: A file-based object store for depositing binary data objects. The repository is implemented over a redundant store with one delayed replica and is accessible via a number of standard protocols such as FTP and HTTP, while special protocols are also available depending on the data type (e.g. streams for media objects). Items in the repository obtain URLs that can be disseminated via standard web means, yet access may be provided only with granted credentials.
* **CKAN metadata repository**: tailor-made w.r.t. configuration and plugins to fit the WhoLoDancE project’s data and metadata servicing needs. It offers a full web UI for managing and accessing metadata and a rich set of REST web services for consuming/exploring the project’s datasets.
* **A relational database management system** (PostgreSQL) for managing dataset metadata, behind the CKAN repository and pilot-specific services.
* **An ontology management system** (Protégé) to provide management functionalities for the WhoLoDancE ontologies and access through appropriate APIs.

**Figure 1 WhoLoDancE data management infrastructure** ## Data Dissemination and Catalogues Data dissemination has two aspects, referring either to end users or to other external systems / catalogues. To support both, a data catalogue based on the CKAN 1 technology is available, offering rich user interfaces to end users for data search and discovery, as well as a REST Application Programming Interface (API 2 ) for machine interaction. In addition, aiming at being interoperable with external catalogues, the system is compatible with the best-known and most widely adopted interoperability standard, the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH 3 ). Interoperability is the ability of two or more information systems to exchange metadata with minimal loss of information. OAI-PMH is a protocol developed for harvesting metadata descriptions of records in an archive, so that services can be built using metadata from many archives. An implementation of OAI-PMH must support representing metadata in the Dublin Core 4 schema, but may also support additional representations depending on their nature and origin. In the current implementation, only the default schema is supported. Moreover, every data object residing in the WhoLoDancE platform is accessible via a URL, which allows accessing its payload via a standard web protocol. However, this does not preclude the involvement of an authentication/authorization mechanism before accessing the actual dataset. Due to the large size of the project’s datasets, it is required to protect infrastructure availability via several means, one of those being limiting access to the datasets.
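CKAN’s REST action API can be consumed with a few lines of standard Python. The sketch below builds a `package_search` query (a standard CKAN action) and unpacks the usual CKAN response envelope; the catalogue base URL in the comment is a placeholder, not the actual WhoLoDancE endpoint.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_search_url(base_url, query, rows=10):
    """Build a CKAN action-API package_search URL."""
    return base_url.rstrip("/") + "/api/3/action/package_search?" + urlencode(
        {"q": query, "rows": rows}
    )

def parse_search_response(payload):
    """Unpack the standard CKAN envelope: {"success": ..., "result": {...}}."""
    if not payload.get("success"):
        raise RuntimeError("CKAN query failed")
    result = payload["result"]
    return result["count"], [pkg["name"] for pkg in result["results"]]

def search_datasets(base_url, query, rows=10):
    """Query a CKAN catalogue and return (hit count, dataset names)."""
    with urlopen(build_search_url(base_url, query, rows)) as resp:
        return parse_search_response(json.load(resp))

# e.g. search_datasets("https://catalogue.example.org", "flamenco")
```

For authorised operations (dataset creation, restricted downloads) the same calls would additionally carry the user’s API key in a request header, consistent with the access-control remarks above.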
## Data Preservation In essence, all data are archived in a secure manner. This is done in two different yet complementary ways. All data are constantly copied to a backup area (the periodicity of the backup procedures varies from case to case, yet it is never longer than one day). Certain storage solutions automatically store the content in multiple copies; this is, for example, the case for the technologies behind the file-oriented storage. No format migration strategy or approach is in place; the data are managed in their native format. ## Licenses The library of motion capture data produced in the context of WhoLoDancE is a collection of high-quality dance movement content that would be a valuable asset for the researchers of the community. It will be decided in the consortium what approach will be taken to the access provided to external parties to this data. A common approach would be to make the data available through a Creative Commons 5 license, possibly CC BY-NC-SA or CC BY-NC-ND. Normally the data would be made available upon request to interested parties after registration. # Motion Capture Dataset Modeling Although WhoLoDancE will produce a variety of datasets, ranging from motion capture files to text reports, the main content outcome that will be the basis of the project’s research and development work is the dataset of motion capture dance segments, along with the multimodal data that accompany it. This dataset is presented in more detail in Section 7.1 - Table 5. This section presents the metadata modelling approach employed for it. The dataset is organized in the following structures: * **Motion capture recording**: contains resources relevant to a particular motion capture segment and its relevant multimodal data, more specifically: .c3d, .fbx, .json, .txt, .mp4, .mp3 * **Collection**: a conceptual grouping of recordings, included as a metadata field in each recording, for example the Ballos Collection.
A collection contains one to many recordings.
* **Dance Genre**: a set of collections relevant to a particular dance genre.
* **Dataset**: it contains all motion capture data captured in the motion capture sessions organized by Motek and UniGE, along with the supporting multimodal material (music, videos, etc.).

The motion capture recordings have been stored on the FTP server and annotated with metadata available through a CKAN metadata server 6 . The metadata server offers a view to the user that emphasizes the four dance genres (Figure 2).

**Figure 2 The CKAN view of the content organized by dance genre**

The first step for organizing the collected data was to create four recording schemas corresponding to the four dance genres represented in the project (Ballet, Contemporary, Greek Folk, Flamenco). The recording schemas have some common fields that describe both the dances and the files related to them (Table 1). Table 2 presents the recording fields that are specific to Greek folk dance. Each file (or resource) within a recording is described by a set of fields. There is a subset that is common among the four genres (Table 3), as well as a subset which is genre-specific and thus contains different fields for each of the dance genres; these genre-specific fields are presented in Table 4.
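To make the schema concrete, the dict below sketches the metadata one Greek folk recording might carry, combining the common recording fields (Table 1) with the Greek-folk-specific ones (Table 2). All values, including the FTP URL and partner names, are invented for illustration; only the field names and the allowed selections come from the tables.

```python
# Hypothetical recording metadata combining the common fields (Table 1)
# with the Greek folk - specific fields (Table 2). Every value here is
# invented; only the field names and allowed selections come from the text.
import json

recording = {
    # common recording fields (Table 1)
    "source": "ftp://ftp.example.org/wholodance/greek/ballos_01",
    "author": "WhoLoDancE consortium",
    "author_email": "contact@example.org",
    "maintainer": "Example partner",
    "maintainer_email": "support@example.org",
    "dance_genre": "Greek folk",
    "description": "Ballos, basic step sequence",
    # Greek folk - specific fields (Table 2)
    "dance_name": "Ballos",
    "local_name": "Ballos",
    "region": "Aegean islands",
    "dance_type": "face to face",   # allowed: circle, face to face
    "dance_gender": "mixed",        # allowed: male, female, mixed
    "time_signature": "2/4",
}

print(json.dumps(recording, indent=2))
```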
**Table 1 Fields that describe each recording** <table> <tr> <th> **Field name** </th> <th> **Field type** </th> </tr> <tr> <td> Source </td> <td> URL to the FTP server </td> </tr> <tr> <td> Author </td> <td> Text </td> </tr> <tr> <td> Author email </td> <td> Text </td> </tr> <tr> <td> Maintainer </td> <td> Text </td> </tr> <tr> <td> Maintainer Email </td> <td> Text </td> </tr> <tr> <td> Dance Genre </td> <td> Text </td> </tr> <tr> <td> Description </td> <td> Text </td> </tr> </table> **Table 2 Greek folk dance - specific recording fields** <table> <tr> <th> **Field name** </th> <th> **Field type** </th> <th> **Genre** </th> </tr> <tr> <td> Dance name </td> <td> Text </td> <td> Greek folk </td> </tr> <tr> <td> Local name </td> <td> Text </td> <td> Greek folk </td> </tr> <tr> <td> Region </td> <td> Text </td> <td> Greek folk </td> </tr> <tr> <td> Dance type </td> <td> Selection from the following values: circle, face to face </td> <td> Greek folk </td> </tr> <tr> <td> Dance gender </td> <td> Selection from the following values: male, female, mixed </td> <td> Greek folk </td> </tr> <tr> <td> Time signature </td> <td> Text </td> <td> Greek folk </td> </tr> </table> **Table 3 Common resource fields for all genres** <table> <tr> <th> **Field name** </th> <th> **Field type** </th> </tr> <tr> <td> Last updated </td> <td> Date </td> </tr> <tr> <td> Created </td> <td> Date </td> </tr> <tr> <td> Format </td> <td> Text </td> </tr> <tr> <td> License </td> <td> Text </td> </tr> <tr> <td> File Type (e.g. Motion Capture) </td> <td> Text </td> </tr> <tr> <td> Dance Genre </td> <td> Text </td> </tr> <tr> <td> Movement Principle </td> <td> Multiple selection from the following values: Alignment and Posture, Balance, Coordination, Directionality, Motion Through Space, Motorics, Rhythm and Phrasing, Stillness, Symmetry, Weight bearing vs. 
Gesturing </td> </tr> <tr> <td> Capture Venue </td> <td> Text </td> </tr> <tr> <td> Capture day </td> <td> Date </td> </tr> <tr> <td> Capture Quality </td> <td> Text </td> </tr> <tr> <td> Capture Comments </td> <td> Text </td> </tr> <tr> <td> Duration </td> <td> Text </td> </tr> <tr> <td> Frames </td> <td> Text </td> </tr> <tr> <td> Performer </td> <td> Text </td> </tr> <tr> <td> Company </td> <td> Text </td> </tr> <tr> <td> Dataset name </td> <td> Text </td> </tr> </table> **Table 4 Dance-genre specific resource fields** <table> <tr> <th> **Field name** </th> <th> **Field type** </th> <th> **Dataset /genre** </th> </tr> <tr> <td> Type of Segment </td> <td> Text </td> <td> Greek folk </td> </tr> <tr> <td> Actions in Segment </td> <td> Text </td> <td> Greek folk </td> </tr> <tr> <td> Actions in Segment </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Relation in space </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Orientation </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Body Parts Leading </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Planes </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Axis </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Other Characteristics in Motion </td> <td> Text </td> <td> Contemporary </td> </tr> <tr> <td> Actions in Segment </td> <td> Text </td> <td> Ballet </td> </tr> </table> **Figure 3 The metadata values for a motion capture resource (file)** Having defined the recording and resource schemas according to the dance types, it was decided to create a set of recordings per genre, to reflect a categorization which is meaningful for each genre. In total 111 recordings have been created with more than 1400 dance movement sequences in two different formats. 
For each dance genre:

* Ballet: 33 recordings, according to various ballet exercises, containing 185 .fbx resources and 216 .c3d ones
* Greek folk dance: 53 recordings, to reflect different dances, containing 189 .fbx resources and 302 .c3d ones
* Contemporary dance: 13 recordings, to reflect the movement principles and improvisations, containing 727 .fbx resources and 877 .c3d ones
* Flamenco: 12 recordings, to reflect movement principles, containing 99 .fbx resources and 65 .c3d ones

More information on the resources (files) per created recording can be found in Appendix A.

# Data Integration

Data integration will be based on a number of principles:

* Accessibility
* Discoverability
* Consumability

In line with the first principle, data access will be provided via standards-compliant methods. For instance, where adequate formal or de facto standards exist, those will be utilized. These may be baseline standards such as FTP, HTTP, XML, MPEG-4, etc., or higher-level protocols such as WebDAV, OAI-ORE, etc. In areas where no applicable standards exist, which may be the case in higher-complexity interactions, access to data will be provided by standard REST web services.

In the direction of the second principle, data need to be described adequately in order to support discoverability. A thorough investigation of metadata descriptors adequate to cover the needs of the project has been performed and the results are presented in Section 5. Yet, the required supporting mechanism is the delivery of a registry which allows data consumers to locate the dataset they need. Discoverability will be greatly supported by standards compliance; for example, OAI-PMH support will allow the diffusion of data descriptions in other registries, promoting their recognition and reuse.

Regarding consumability, the objective is twofold: (a) completeness of data access services and (b) facilitation of machine-to-machine exchange of data.
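Since the catalogue is CKAN-based, machine-to-machine discovery would typically go through the CKAN Action API. The sketch below composes a `package_search` request URL and filters a sample response; the base URL and the sample JSON are placeholders (no network call is made), and the request shape assumes the standard CKAN Action API.

```python
# Sketch of querying a CKAN catalogue for datasets, assuming the standard
# CKAN Action API (/api/3/action/package_search). The base URL and the
# sample response are invented placeholders; no network call is made here.
from urllib.parse import urlencode

BASE = "https://catalogue.example.org"  # placeholder host

def search_url(query, rows=10):
    """Build a package_search request URL for a free-text query."""
    qs = urlencode({"q": query, "rows": rows})
    return f"{BASE}/api/3/action/package_search?{qs}"

# A trimmed-down example of the JSON shape a CKAN server would return:
sample_response = {
    "success": True,
    "result": {"count": 2, "results": [
        {"name": "ballos-collection", "title": "Ballos Collection"},
        {"name": "tsamiko-collection", "title": "Tsamiko Collection"},
    ]},
}

def dataset_titles(response):
    """Pull the human-readable titles out of a package_search response."""
    return [d["title"] for d in response["result"]["results"]]

print(search_url("greek folk"))
```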
In this direction, full coverage of services for locating and obtaining the dataset element in need will be provided, in the preferred manifestation (if several exist) and, in particular cases, at the granularity of access required. Furthermore, the data model will be complete and will carry enough metadata to allow locating data via search or direct traversal of data linking and grouping descriptors, and will utilize common, machine-readable data formats that allow consumption of the dataset payload. Documentation will be provided on the data model, further facilitating its utilization.

# Datasets

The datasets WhoLoDancE is called to manage belong to the following categories:

* Motion Capture datasets, i.e., the results of the motion capture activities in the project, in .fbx, .c3d, .json and/or .csv format, as well as accompanying files with music or physiometric data;
* Videos, collected to either prepare for the motion capture process or record it;
* Questionnaire and interview results;
* Software, i.e., datasets resulting from the software enabling WhoLoDancE;
* Reports and deliverables.

## Motion capture related datasets

**Table 5 Motion capture data dataset description**

<table> <tr> <th> **Dataset name:** Motion capture data </th> </tr> <tr> <td> _**Dataset description:**_ Motion capture data produced in the motion capture sessions organized by Motek and UniGE, processed and rigged to an inverse kinematic skeleton, along with accompanying files with relevant music and/or physiometric data. The dataset includes motion capture data from the four dance genres: Greek folk, contemporary, ballet and flamenco. _**Generated/Collected:**_ Generated. _**Origin(s):Download URL:**_ N/A _**Origin(s):Licenses:**_ N/A _**Nature:**_ Optical motion capture data; 3D data _**Size/Scale:**_ 11 GB - More than 1400 dance movement sequences in two different formats.
For each dance genre:
* Ballet: 33 recordings, according to various ballet exercises, containing 185 .fbx resources and 216 .c3d ones
* Greek folk dance: 53 recordings, to reflect different dances, containing 189 .fbx resources and 302 .c3d ones
* Contemporary dance: 13 recordings, to reflect the movement principles and improvisations, containing 727 .fbx resources and 877 .c3d ones
* Flamenco: 12 recordings, to reflect movement principles, containing 99 .fbx resources and 65 .c3d ones

_**Potential use:**_ Use for the WhoLoDancE research purposes: design of dance learning material, HLF/LLF extraction, similarity search, etc. _**References:**_ None at the moment </td> </tr> <tr> <td> **Content and Metadata Types** </td> </tr> <tr> <td> _**Typologies:**_ Files in this dataset are of the same typology, motion capture dance segments in two different formats _**Data Formats:**_ .fbx, .c3d, .csv, .json motion capture formats and .mp3 and .mp4 relevant files _**Metadata formats:**_ Available in JSON format through the CKAN API and XML through the OAI publisher </td> </tr> <tr> <td> _**Availability date:**_ December 2016 </td> </tr> <tr> <td> **Dataset Use and Sharing** </td> </tr> <tr> <td> _**Repositories:**_ Internal WhoLoDancE repository _**Software and tools for re-use**_: MotionBuilder, Unity, fbx viewer, custom viewers _**Dissemination Mechanisms**_: Not available yet; will include the project blending engine and learning tools for parts of the dataset. _**License:**_ Currently internal use for the consortium research purposes. To be decided if the data will be released under a licensing schema, possibly Creative Commons CC BY-NC-SA or CC BY-NC-ND _**Access policies:**_ Currently accessible only to the members of the consortium for research purposes.
According to the selected licensing schema they could be available to external parties through a registration process. _**Access Procedure**_: Web access after a registration process _**Embargo Periods**_: To be decided. _**Transfer process:**_ Directly uploaded by Motek to the FTP server central storage system </td> </tr> <tr> <td> **Dataset handling and synchronization** _**Preservation strategy:**_ Backup at the WhoLoDancE repository and in the interested partners’ individual storage facilities _**Preservation period:**_ Indefinite _**Preservation implementation instruments:**_ Multiple backups in different locations, online and offline </td> </tr> </table>

**Table 6 Motion capture preparation videos dataset description**

<table> <tr> <th> **Dataset name:** Motion capture preparation audio and videos </th> </tr> <tr> <td> _**Dataset description:**_ Videos recorded by the dance partners prior to the motion capture sessions, in preparation for the motion capture of dance (movement principles, qualitative modules). For example, for the Greek dances, different videos have been recorded: full dance or parts, with and without costumes, a) danced by a group of dancers, men and/or women, b) danced by one or two dancers. The scope of the videos was focused on the variety of dances and kinetic patterns and on the relation with the dance principles. In some cases, audio files with the accompanying music have been provided. _**Generated/Collected:**_ Generated. _**Origin(s):Download URL:**_ N/A _**Origin(s):Licenses:**_ N/A _**Nature:**_ Audio data, video data _**Size/Scale:**_ Approx. 1.3 GB music audio files and 50 GB video files _**Potential use:**_ Use for the WhoLoDancE research purposes: to generate the short list for motion capture; for the Greek dances specifically, to be used as a repository of Greek traditional dances and their kinetic patterns for study by researchers of the field, teachers/students and use by choreographers of any genre of dance. _**References:**_ For Greek dances: “Improvisation in the Greek folk dances" by Lefteris Drandakis </td> </tr> <tr> <td> **Dataset Use and Sharing** </td> </tr> <tr> <td> _**Repositories:**_ Internal WhoLoDancE repository _**Software and tools for re-use**_: Video and audio players _**Dissemination Mechanisms**_: For the project internal use only. An exception is the Greek folk dances _**License:**_ For the project internal use only. _**Access policies:**_ For the project internal use only. For Greek dances, available as supplementary material in the learning scenarios _**Access Procedure**_: Through the WhoLoDancE web applications _**Embargo Periods**_: N/A </td> </tr> <tr> <td> _**Transfer process:**_ Directly uploaded by Motek to the FTP server central storage system </td> </tr> <tr> <td> **Dataset handling and synchronization** _**Preservation strategy:**_ Backup at the WhoLoDancE repository and in the interested partners’ individual storage facilities _**Preservation period:**_ To be decided _**Preservation implementation instruments:**_ Multiple backups in different locations, online and offline </td> </tr> </table>

**Table 7 Motion capture features dataset description**

<table> <tr> <th> **Dataset name:** Motion capture, video and audio features </th> </tr> <tr> <td> _**Dataset description:**_ The dataset will contain features extracted from audio, video and motion capture recordings, along with relevant motion capture segments.
Depending on the project needs, it will comprise around 5 to 50% of the original data from which the features are extracted _**Generated/Collected:**_ Generated. _**Origin(s):Download URL:**_ N/A _**Origin(s):Licenses:**_ N/A _**Nature:**_ Raw and structured textual and numerical data extracted from the computation of features from audio, video and motion capture recordings. It will include the original video, audio and motion capture content. _**Size/Scale:**_ 5 GB _**Potential use:**_ The dataset will be used for data analysis and machine learning purposes. _**References:**_ None at the moment </td> </tr> <tr> <td> **Dataset Use and Sharing** </td> </tr> <tr> <td> _**Repositories:**_ Internal WhoLoDancE repository _**Software and tools for re-use**_: Depending on the format, MATLAB or Python with the NumPy library might be needed _**Dissemination Mechanisms**_: Publications in conferences and journals, social networks and mailing lists _**License:**_ To be discussed, possibly available under a Creative Commons license _**Access policies:**_ To be discussed _**Access Procedure**_: Web site download or FTP access _**Embargo Periods**_: The dataset will be available to the partners from the moment of its production and will become publicly available upon publication of the related work in a conference/journal _**Transfer process:**_ Secure transfer protocol, such as SFTP </td> </tr> <tr> <td> **Dataset handling and synchronization** _**Preservation strategy:**_ Backup at the WhoLoDancE repository and in the interested partners’ individual storage facilities _**Preservation period:**_ Indefinite _**Preservation implementation instruments:**_ Multiple backups in different locations, online and offline </td> </tr> </table>

**Table 8 Motion capture videos dataset description**

<table> <tr> <th> **Dataset name:** Motion capture videos </th> </tr> <tr> <td> _**Dataset description:**_ Two camera views recording all motion capture takes
_**Generated/Collected:**_ Generated. _**Origin(s):Download URL:**_ N/A _**Origin(s):Licenses:**_ N/A _**Nature:**_ Video data _**Size/Scale:**_ 160 GB _**Potential use:**_ Use for the WhoLoDancE research purposes, for example, semi-automatic finger tracking on mocap _**References:**_ None at the moment </td> </tr> <tr> <td> **Dataset Use and Sharing** </td> </tr> <tr> <td> _**Repositories:**_ Internal WhoLoDancE repository _**Software and tools for re-use**_: Video players _**Dissemination Mechanisms**_: For project internal use only _**License:**_ For project internal use only _**Access policies:**_ For project internal use only _**Access Procedure**_: For project internal use only _**Embargo Periods**_: N/A. _**Transfer process:**_ Directly uploaded to the FTP server central storage system </td> </tr> <tr> <td> **Dataset handling and synchronization** _**Preservation strategy:**_ Backup at the WhoLoDancE repository and in the interested partners’ individual storage facilities _**Preservation period:**_ Indefinite _**Preservation implementation instruments:**_ Multiple backups in different locations, online and offline </td> </tr> </table>

## Survey results datasets

<table> <tr> <th> **Dataset name:** Movement principles interview and questionnaire data </th> </tr> <tr> <td> _**Dataset description:**_ The information will be collated from a number of interviews conducted by the COVUNI team and a series of paper questionnaires collected. All of the information gathered comes from professional dancers/teachers/choreographers from a diverse range of disciplines. The aim is to have an equal number of men and women. _**Generated/Collected:**_ Generated.
_**Origin(s): Download URL:**_ N/A _**Origin(s): Licenses:**_ N/A _**Nature:**_ Documentation and reporting data; questionnaire results data _**Size/Scale:**_ 50 MB _**Potential use:**_ For internal research purposes only _**References:**_ None at the moment </td> </tr> <tr> <td> **Dataset Use and Sharing** </td> </tr> <tr> <td> _**Repositories:**_ Internal university private shared drive. _**Software and tools for re-use**_: Document viewers and sound players _**Dissemination Mechanisms**_: For project internal use only </td> </tr> <tr> <td> _**License:**_ For project internal use only _**Access policies:**_ For project internal use only. Confidential and following COVUNI ethics guidelines. _**Access Procedure**_: For project internal use only. Data are confidential and only the COVUNI team has permission to hear the audio recordings. _**Embargo Periods**_: N/A. _**Transfer process:**_ N/A. </td> </tr> <tr> <td> **Dataset handling and synchronization** _**Preservation strategy:**_ Backup at the WhoLoDancE repository and in the interested partners’ individual storage facilities _**Preservation period:**_ To be decided _**Preservation implementation instruments:**_ N/A </td> </tr> </table>

<table> <tr> <th> **Dataset name:** Dance learning on-line survey results </th> </tr> <tr> <td> _**Dataset description:**_ The dataset includes results, in spreadsheet format, from the on-line surveys conducted by ATHENA and COVUNI _**Generated/Collected:**_ Generated. _**Origin(s):Download URL:**_ N/A _**Origin(s):Licenses:**_ N/A _**Nature:**_ Questionnaire results data _**Size/Scale:**_ 5 MB _**Potential use:**_ For internal research purposes only, to extract conclusions on user requirements and potential learning scenarios to be implemented within the project. _**References:**_ None at the moment </td> </tr> <tr> <td> **Dataset Use and Sharing** </td> </tr> <tr> <td> _**Repositories:**_ Internal university private shared drive.
_**Software and tools for re-use**_: Spreadsheet viewers _**Dissemination Mechanisms**_: For project internal use only _**License:**_ For project internal use only _**Access policies:**_ For project internal use only. _**Access Procedure**_: For project internal use only. _**Embargo Periods**_: N/A. _**Transfer process:**_ N/A. </td> </tr> <tr> <td> **Dataset handling and synchronization** _**Preservation strategy:**_ Backup at the WhoLoDancE repository and in the interested partners’ individual storage facilities _**Preservation period:**_ To be decided _**Preservation implementation instruments:**_ N/A </td> </tr> </table>

## Software and reports

The software to be developed within WhoLoDancE will be deposited in GitHub and published in repositories like Zenodo. For the day-to-day management of project reports and deliverables, the project coordinator has set up and maintains a Dropbox folder. All partners have access to the folder and use it to exchange project reports and disseminate the deliverables to the rest of the consortium.

# Conclusions

This deliverable contains a description of the data management processes that have been set up within the project, as well as a description of the datasets collected, processed or generated by the project and an initial plan on how sharing, archiving and preservation of these datasets will be guaranteed. The motion capture datasets generated within the project comprise a library of dance movements with high-quality, diverse content, valuable for a variety of research purposes. The project consortium will use this rich data through the data modeling and management infrastructure described in this document to perform research in different fields, from feature extraction, conceptual modeling and semantic analysis to advanced search algorithms and innovative presentation approaches for the content, in the general context of a dance learning system.
# Appendix A – Motion capture resources ## Ballet **Table 9 Ballet motion capture .fbx and .c3d files per recording** <table> <tr> <th> **Dataset name** </th> <th> **.fbx resources** </th> <th> **.c3d resources** </th> <th> **Total** </th> </tr> <tr> <td> Adagio </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Attitude </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Ballet Center Variations </td> <td> 5 </td> <td> 5 </td> <td> 10 </td> </tr> <tr> <td> Cou de pied </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Coupe </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Developpe </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Enchainement </td> <td> 5 </td> <td> 5 </td> <td> 10 </td> </tr> <tr> <td> Flic Flac </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Fondue </td> <td> 6 </td> <td> 8 </td> <td> 14 </td> </tr> <tr> <td> Fouette </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Frappe </td> <td> 8 </td> <td> 8 </td> <td> 16 </td> </tr> <tr> <td> Grand Battement </td> <td> 4 </td> <td> 6 </td> <td> 10 </td> </tr> <tr> <td> Grand Jete </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> </table> <table> <tr> <th> Grand Rond de Jambe </th> <th> 4 </th> <th> 4 </th> <th> 8 </th> </tr> <tr> <td> Improvisation </td> <td> 7 </td> <td> 7 </td> <td> 14 </td> </tr> <tr> <td> Jump </td> <td> 15 </td> <td> 15 </td> <td> 30 </td> </tr> <tr> <td> Other </td> <td> 3 </td> <td> 10 </td> <td> 13 </td> </tr> <tr> <td> Pad de deux </td> <td> 2 </td> <td> 3 </td> <td> 5 </td> </tr> <tr> <td> Pas de Cheval </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Pas Marche </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Passe </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Petit Battement </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Pied a la main </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Pirouette </td> <td> 18 </td> <td> 18 </td> <td> 
36 </td> </tr> <tr> <td> Plie </td> <td> 12 </td> <td> 16 </td> <td> 28 </td> </tr> <tr> <td> Port de Bras </td> <td> 3 </td> <td> 11 </td> <td> 14 </td> </tr> <tr> <td> Releve </td> <td> 6 </td> <td> 9 </td> <td> 15 </td> </tr> <tr> <td> Rond de Jambe </td> <td> 12 </td> <td> 12 </td> <td> 24 </td> </tr> <tr> <td> Rond de Jambe en l'air </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Soutenu </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Temps Lie </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Tendu </td> <td> 15 </td> <td> 18 </td> <td> 33 </td> </tr> <tr> <td> Tendu Jete </td> <td> 17 </td> <td> 18 </td> <td> 35 </td> </tr> <tr> <td> **Total** </td> <td> **185** </td> <td> **216** </td> <td> **401** </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table> ## Contemporary **Table 10 Contemporary dance motion capture .fbx and .c3d files per recording** <table> <tr> <th> **Dataset name** </th> <th> **.fbx resources** </th> <th> **.c3d resources** </th> <th> **Total** </th> </tr> <tr> <td> Alignment and Posture </td> <td> 60 </td> <td> 52 </td> <td> 112 </td> </tr> <tr> <td> Balance </td> <td> 73 </td> <td> 74 </td> <td> 147 </td> </tr> <tr> <td> Coordination </td> <td> 62 </td> <td> 62 </td> <td> 124 </td> </tr> <tr> <td> Directionality </td> <td> 237 </td> <td> 235 </td> <td> 472 </td> </tr> <tr> <td> Emotions </td> <td> 0 </td> <td> 11 </td> <td> 11 </td> </tr> <tr> <td> Motion Through Space </td> <td> 32 </td> <td> 31 </td> <td> 63 </td> </tr> <tr> <td> Motorics </td> <td> 76 </td> <td> 76 </td> <td> 152 </td> </tr> <tr> <td> Multumodal_Genoa </td> <td> 0 </td> <td> 110 </td> <td> 110 </td> </tr> <tr> <td> Other </td> <td> 9 </td> <td> 44 </td> <td> 53 </td> </tr> <tr> <td> Rhythm and Phrasing </td> <td> 19 </td> <td> 11 </td> <td> 30 </td> </tr> <tr> <td> Stillness </td> <td> 11 </td> <td> 11 </td> <td> 22 </td> </tr> <tr> <td> Symmetry </td> <td> 78 </td> <td> 90 </td> <td> 168 </td> </tr> <tr> <td> 
Weight bearing vs. Gesturing </td> <td> 70 </td> <td> 70 </td> <td> 140 </td> </tr> <tr> <td> **Total** </td> <td> **727** </td> <td> **877** </td> <td> **1604** </td> </tr> </table> ## Flamenco **Table 11 Flamenco motion capture .fbx and .c3d files per recording** <table> <tr> <th> **Dataset name** </th> <th> **.fbx resources** </th> <th> **.c3d resources** </th> <th> **Total** </th> </tr> <tr> <td> Alignment and Posture </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Asymmetry </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Balance </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Coordination </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Directionality </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Flamenco combinations </td> <td> 20 </td> <td> 36 </td> <td> 56 </td> </tr> <tr> <td> Motion Through Space </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Motorics </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Rhythm and Phrasing </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Stillness </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Symmetry </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Weight bearing vs. 
Gesturing </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> **Total** </td> <td> **49** </td> <td> **65** </td> <td> **114** </td> </tr> </table> ## Greek folk dance **Table 12 Greek folk dance motion capture .fbx and .c3d files per recording** <table> <tr> <th> **Dataset name** </th> <th> **.fbx resources** </th> <th> **.c3d resources** </th> <th> **Total** </th> </tr> <tr> <td> Baintouska </td> <td> 0 </td> <td> 5 </td> <td> 5 </td> </tr> </table> <table> <tr> <th> Ballos </th> <th> 12 </th> <th> 23 </th> <th> 35 </th> </tr> <tr> <td> Basic Steps </td> <td> 0 </td> <td> 9 </td> <td> 9 </td> </tr> <tr> <td> Chaniotikos </td> <td> 1 </td> <td> 6 </td> <td> 7 </td> </tr> <tr> <td> Chassapiko </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Enteka </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Forlana </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Gaida </td> <td> 3 </td> <td> 4 </td> <td> 7 </td> </tr> <tr> <td> Ikariotiko </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Issos </td> <td> 5 </td> <td> 5 </td> <td> 10 </td> </tr> <tr> <td> Kalamatianos </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Kaneloriza </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Karatzova </td> <td> 8 </td> <td> 11 </td> <td> 19 </td> </tr> <tr> <td> Karsilamas </td> <td> 0 </td> <td> 7 </td> <td> 7 </td> </tr> <tr> <td> Kastrinos </td> <td> 7 </td> <td> 16 </td> <td> 23 </td> </tr> <tr> <td> Katsivelikos </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Kotsari </td> <td> 2 </td> <td> 1 </td> <td> 3 </td> </tr> <tr> <td> Koutsos </td> <td> 0 </td> <td> 9 </td> <td> 9 </td> </tr> <tr> <td> Letsina </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Leventikos </td> <td> 11 </td> <td> 14 </td> <td> 25 </td> </tr> <tr> <td> Nisamikos </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Other greek dance </td> <td> 0 </td> <td> 13 </td> <td> 13 </td> </tr> <tr> <td> Papadia 
</td> <td> 13 </td> <td> 13 </td> <td> 26 </td> </tr> <tr> <td> Patima </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> </table> <table> <tr> <th> Patinada </th> <th> 3 </th> <th> 3 </th> <th> 6 </th> </tr> <tr> <td> Patrouninos </td> <td> 0 </td> <td> 4 </td> <td> 4 </td> </tr> <tr> <td> Pentozali </td> <td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Pidiktos </td> <td> 7 </td> <td> 7 </td> <td> 14 </td> </tr> <tr> <td> Pousnitsa </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Proskynitos </td> <td> 0 </td> <td> 6 </td> <td> 6 </td> </tr> <tr> <td> Pyrgousikos </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Raiko </td> <td> 1 </td> <td> 9 </td> <td> 10 </td> </tr> <tr> <td> Sera </td> <td> 6 </td> <td> 10 </td> <td> 16 </td> </tr> <tr> <td> Seranitsa </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Sfarlys </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Sousta </td> <td> 4 </td> <td> 4 </td> <td> 8 </td> </tr> <tr> <td> Sta Dio </td> <td> 1 </td> <td> 2 </td> <td> 3 </td> </tr> <tr> <td> Sta Tria </td> <td> 2 </td> <td> 3 </td> <td> 5 </td> </tr> <tr> <td> Streis </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Sygathistos </td> <td> 8 </td> <td> 8 </td> <td> 16 </td> </tr> <tr> <td> Syrtos </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> TikPal </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> TikTrom </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Trigona </td> <td> 6 </td> <td> 3 </td> <td> 9 </td> </tr> <tr> <td> Tritepati </td> <td> 0 </td> <td> 4 </td> <td> 4 </td> </tr> <tr> <td> Tsamiko </td> <td> 13 </td> <td> 13 </td> <td> 26 </td> </tr> <tr> <td> Vagelitsa </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> Vlaha Naxos </td> <td> 3 </td> <td> 3 </td> <td> 6 </td> </tr> <tr> <td> Zagorisio </td> <td> 5 </td> <td> 14 </td> <td> 19 </td> </tr> <tr> <td> Zervodexos </td> <td> 0 </td> <td> 5 </td> <td> 5 </td> </tr> <tr> <td> Zervos </td> 
<td> 2 </td> <td> 2 </td> <td> 4 </td> </tr> <tr> <td> Zonaradikos </td> <td> 6 </td> <td> 6 </td> <td> 12 </td> </tr> <tr> <td> Zorbas </td> <td> 1 </td> <td> 1 </td> <td> 2 </td> </tr> <tr> <td> **Total** </td> <td> **189** </td> <td> **302** </td> <td> **491** </td> </tr> </table>
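The totals reported in the appendix tables can be re-derived from the per-recording counts. The sketch below does this for the Ballet table (Table 9); the (.fbx, .c3d) pairs are transcribed from that table, and the helper reproduces its per-format and grand totals.

```python
# Consistency-check sketch for the Ballet counts in Table 9: the
# (.fbx, .c3d) pairs below are transcribed from the table, and the helper
# re-derives the per-format and grand totals reported there.
BALLET = {
    "Adagio": (4, 4), "Attitude": (2, 2), "Ballet Center Variations": (5, 5),
    "Cou de pied": (2, 2), "Coupe": (2, 2), "Developpe": (4, 4),
    "Enchainement": (5, 5), "Flic Flac": (2, 2), "Fondue": (6, 8),
    "Fouette": (1, 1), "Frappe": (8, 8), "Grand Battement": (4, 6),
    "Grand Jete": (2, 2), "Grand Rond de Jambe": (4, 4),
    "Improvisation": (7, 7), "Jump": (15, 15), "Other": (3, 10),
    "Pad de deux": (2, 3), "Pas de Cheval": (2, 2), "Pas Marche": (6, 6),
    "Passe": (2, 2), "Petit Battement": (6, 6), "Pied a la main": (2, 2),
    "Pirouette": (18, 18), "Plie": (12, 16), "Port de Bras": (3, 11),
    "Releve": (6, 9), "Rond de Jambe": (12, 12),
    "Rond de Jambe en l'air": (1, 1), "Soutenu": (3, 3),
    "Temps Lie": (2, 2), "Tendu": (15, 18), "Tendu Jete": (17, 18),
}

def totals(counts):
    """Sum (.fbx, .c3d) pairs and return (fbx, c3d, grand_total)."""
    fbx = sum(f for f, _ in counts.values())
    c3d = sum(c for _, c in counts.values())
    return fbx, c3d, fbx + c3d

print(totals(BALLET))  # matches Table 9: (185, 216, 401)
```

The 33 entries also match the 33 Ballet recordings stated in the dataset description.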
Horizon 2020
**Executive Summary**

This Deliverable is an update of Deliverable 10.2 ‘Data Management Plan’, and the new additions are introduced in **Chapter 1** along with the next steps and future updates, to be presented in the final version (M30) of this Deliverable. An overview of the ADAS&ME overall system (discussed in D2.1: ‘ADAS&ME Architectural Framework and System Specifications’) and its sub-systems is presented in **Chapter 2**, accompanied by a description of the project’s repository (discussed in D4.1: ‘Driver/rider state reference database and integration suite’) to highlight the data sources, profiles, distribution and storage destinations/access standards across the UCs and pilot sites. **Chapter 3** provides an overview of partners’ responsibilities in relation to data management and protection. **Chapter 4** describes the data handling according to FAIR. The different data management roles in the project are discussed in **Chapter 5**. Data security is re-visited in this version and discussed with consideration for GDPR compliance (**Chapter 6**). This update concludes with **Chapter 7**, with the next steps to be taken and the work to be performed and included in the final version of this Deliverable. The GDPR compliance questionnaire and the Data Privacy Impact Assessment (DPIA) template can be found in **Annex 1**; these will be distributed to partners to be completed by the end of the project. The DPIA will be circulated and completed twice by the end of the project: the first assessment will be included in the final version of the Deliverable (M30) and the final one in the final technical report. Additional data management plan templates were prepared in this version based on the data collected per Use Case, as presented in D7.1: ‘Evaluation Framework’. The consolidated filled-in template can be found in **Annex 2**.

1. **Introduction**

This document is an update of the version submitted in M6.
The Data Management Plan aims to define the processes of data handling during and after the end of the project. In the first version of this Deliverable, data generated by sensors and devices were annexed, the standards and methodologies were presented, and the data privacy protection procedure was defined, along with guidelines on how data can be openly shared in order to comply with ORDP requirements with regard to their storage, curation and preservation. This update addresses the following aspects: * Data collection per UC pilot, as presented within D7.1 (submitted in M18). Types of data, along with information about privacy, confidentiality and other characteristics, are presented in Table 1 (Annex 2), following the same format as the table annexed in the first version of this deliverable. * A short description of the ADAS&ME data repository, based on D4.1, submitted in M18. In the first version of this deliverable, Annex I included a very detailed table of all sensors and objective data collected during the pilots in order to develop the affective state algorithms and respective DSS. In the second version, the same table includes the data collected per UC in each pilot site.

2. **Data architecture and storage in ADAS&ME: an overview**

The general ADAS&ME system encompasses several subsystems that are further divided into modules, including both hardware and software components (Figure 1). There are five different subsystems: a) the **Sensor Subsystem (SS)**, b) the **Driver State Monitoring Subsystem (DSMS)**, c) the **Environmental Situation Awareness Subsystem (ESAS)**, d) the **ADAS&ME Core (ADAS&ME C)**, and e) the **Vehicle Automation Subsystem (VAS)**. Accordingly, five different types of data are collected: _soft sensor data, driver monitoring data, environmental/vehicle data, digital infrastructure data and HMI data_.
How these data are handled in the project is presented in the annexed tables in both this and the previous version of D10.2. Data collection, storage and analysis are always conducted in accordance with the project’s data privacy policy, as defined within the first version of this Deliverable and the Ethics Manual (D9.1).

Figure 1: The ADAS&ME subsystems (SS, DSMS, ESAS, ADAS&ME Core, VAS), their modules, and the five data types exchanged between them.

All these data will be collected and stored in the ADAS&ME repository for sharing datasets among partners. A data repository and a suite of optimization/machine learning algorithms were created within A4.1. The framework, the mechanisms and the repository are described in detail in Deliverable 4.1 – Driver/rider state reference database and integration suite. The repository allows for sharing the sets of data collected during the pilots, with different access rights per allocated partner to ensure alignment and confidentiality across the project. The repository is accessed through a GUI and an API. It also offers an optimization/machine learning suite for seamless prototyping and testing of detection algorithms, as well as a RESTful interface for the integration of customized algorithms, where partners can further use denoising algorithms. The repository is available (http://150.140.150.85:8000/repox/index.php) to registered users only (Figure 2) and is login protected, with accounts obtained from the administrator (UPatras).
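Once an account has been obtained from the administrator, the repository's RESTful interface can be used programmatically. The sketch below is an illustration only: the base URL is taken from the text, but the endpoint path and authentication scheme are hypothetical placeholders, not the real API (which is specified in D4.1).

```python
import base64

# Base URL taken from the deliverable; everything after it is a hypothetical
# placeholder, NOT the real ADAS&ME repository API (see D4.1 for that).
BASE_URL = "http://150.140.150.85:8000/repox"

def build_dataset_request(use_case: str, user: str, password: str) -> dict:
    """Assemble the URL and headers for fetching one UC's dataset listing."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "url": f"{BASE_URL}/api/datasets?uc={use_case}",  # hypothetical path
        "headers": {"Authorization": f"Basic {token}"},   # assumed auth scheme
    }

req = build_dataset_request("UC_A", "alice", "secret")
```

Because registered users only see their own data, any such client would receive only the datasets its account is entitled to.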
Registered users have access only to their own data, which are categorized per UC. Data are grouped under Use Cases and not necessarily per pilot site; there are occasions where certain data are collected across UCs and pilot sites (e.g. subjective forms and questionnaires). GDPR compliance for all the aforementioned parts of ADAS&ME is being monitored and will be reported in the final version of this Deliverable. Compliance is monitored through a dedicated questionnaire that can be found in Annex 1. Additionally, the impact of GDPR adherence on the project will be estimated through another questionnaire, the Data Privacy Impact Assessment (DPIA; Annex 1). This questionnaire will be distributed to ADAS&ME partners twice before the end of the project. The first Privacy Impact Assessment (PIA) will be reported in the final version of this deliverable in M30 and the final one in the final technical project reporting.

**3 Partners’ Responsibilities**

In order to face data management challenges efficiently, all ADAS&ME partners have to respect the policies set out in this DMP, and datasets have to be created, managed and stored appropriately. The **Data Controller role** within ADAS&ME Ethics will be undertaken by the Technical Manager of the project, who will directly report to the ADAS&ME Ethics Board. The **Data Controller** acts as the point of contact for data protection issues and will coordinate the actions required to liaise between different beneficiaries and their affiliates, as well as their respective Data Protection agencies, to ensure that data collection and processing within the scope of ADAS&ME will be carried out according to EU and national legislation. Regarding the ORDP, the Data Controller must ensure that data are shared and easily available. Each data producer and pilot site leader is responsible for the integrity and compatibility of its data during the project lifetime.
The data producer is responsible for sharing its datasets through open access repositories and is in charge of providing the latest version of its datasets. Regarding ethical issues, deliverable D9.2 details all the measures that ADAS&ME will use to comply with the H2020 Ethics requirements. The **Data Managers** within ADAS&ME will directly report to the coordination team. The **Data Managers** (CERTH and UPatras) will coordinate the actions related to data management and in particular compliance with the Open Research Data Pilot guidelines. The Data Manager is responsible for implementing the data management plan and for ensuring it is reviewed and revised.

**4 Findable, accessible, interoperable and reusable (FAIR) data**

The ADAS&ME project will in principle participate in the Open Research Data Pilot (ORDP), but data marked as “restricted” or under an “embargo” period will be excluded. To this end, the data that will be generated during and after the project and included in the ORDP should be ‘FAIR’, that is, **findable, accessible, interoperable and reusable**. These requirements do not affect implementation choices and do not necessarily suggest any specific technology, standard, or implementation solution. The FAIR principles were generated to improve the practices for data management and data curation; they describe principles applicable to a wide range of data management purposes, from data collection to the data management of larger research projects, regardless of scientific discipline. With the endorsement of the FAIR principles by H2020 and their implementation in the H2020 guidelines, they serve as a template for lifecycle data management and ensure that the most important components of the lifecycle are covered. This is intended as an implementation of the FAIR concept rather than a strict technical implementation of the FAIR principles.
**Making data findable, including provisions for metadata:** * The datasets will have rich metadata to facilitate findability. * All datasets will have a Digital Object Identifier provided by the (public part of the) ADAS&ME repository or another online data storage repository (e.g., ZENODO). * The reference used for each dataset will follow the format ADAS&ME_WPX_AX.X_XX, giving a clear indication of the related WP, activity and version of the dataset. * The standards for metadata will be defined in the “Standards and metadata” section of the dataset description table (see Table 1 for the current version of the template). **Making data openly accessible:** * Datasets openly available will be marked as “Open” in the “Data Sharing” section of the dataset description table (see Table 1). * The repository where each dataset is stored, including open access datasets, is mentioned in the “Archiving and Preservation” section of the dataset description table (see Table 1). Public repositories such as ZENODO will be one of the considered options. * The “Data Sharing” section of the dataset description table (see Table 1) will also include information on the methods or software used to access the data of each dataset. * Data and their associated metadata will be deposited either in a public repository or in an institutional repository. * The “Data Sharing” section of the dataset description table (see Table 1) will outline the rules for accessing the data if restrictions exist. **Making data interoperable:** * Metadata vocabularies, standards and methodologies will depend on the hosting repository (public, institutional, etc.) and will be provided in the “Standards and metadata” section of the dataset description table (see Table 1). **Increase data re-use (through clarifying licences):** * All data producers will license their data to allow the widest reuse possible.
More details about licence types and rules will be provided in the next version. The “Data Sharing” section of the dataset description table (see Table 1) is the field where the data sharing policy of each dataset is defined. By default, the data will be made available for reuse; if any constraints exist, an “embargo period” or “restricted” flag will be explicitly raised in this section of Table 1. * The data producers will make their data available to third parties within public repositories only for scientific publication validation purposes.

**5 Data management roles in the project**

In order to face the data management challenges efficiently, all ADAS&ME partners have to respect the policies set out in this DMP, and datasets have to be created, managed and stored appropriately. This Chapter identifies the ADAS&ME roles related to the management of the data and their responsibilities. These are: a) the data controller, b) the data producer and c) the data manager. The GDPR compliance questionnaire that can be found in Annex 1 will be used to define which partners will play these key roles. At this stage, the roles have provisionally been appointed to CERTH and UPatras (as discussed in the previous chapter); the data producers, however, are the pilot sites per se. The **data controller** acts as the point of contact for data protection issues and will coordinate the actions required to liaise between different beneficiaries and their affiliates, as well as their respective data protection agencies, in order to ensure that data collection and processing within the scope of ADAS&ME will be carried out according to EU and national legislation. The data controller must ensure that data are shared and easily available. Given the different pilot sites that are considered in ADAS&ME, the consortium is investigating which partner is most appropriate to take the role of the data controller.
This role may be undertaken either by the pilot leader of each specific pilot site, by the overall data repository administrator (UPatras), or by each data provider per se, depending on the data to be collected and/or generated. The name(s) of the data controller(s) will be announced in the final DMP version (M30). The **data producer** is any entity that produces data within the ADAS&ME scope. Each data producer is responsible for the integrity and compatibility of its data during the project lifetime. The data producer is responsible for sharing its anonymised datasets through open access repositories, according to the principles and mechanisms defined in the current document, and is in charge of providing the latest version. Last but not least, the **data manager** (to be announced in the final version of the DMP) will coordinate the actions related to data management and will be responsible for the actual implementation of the successive DMP versions and for compliance with the Open Research Data Pilot (ORDP) guidelines. As the ADAS&ME open data will be hosted either by institutional databases or by an open, free-of-charge platform (e.g. Zenodo), no additional costs will be required for hosting the data. As partners are reluctant to share data outside the consortium, due to the sensitive and innovative nature of the work performed within the project, aggregated or inferential data, metadata and descriptions, as well as open publications, might be shared instead. However, this decision will be made for the final version of this document. All research entities participating in the ADAS&ME project shall ensure that they have entered into an appropriate data sharing agreement prior to any personal data being shared.
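Since each data producer must always provide the latest version of a dataset, the versioned reference format introduced in the FAIR chapter (ADAS&ME_WPX_AX.X_XX: work package, activity and dataset version) lends itself to a mechanical check. The sketch below is only an illustration; the exact digit widths are assumptions, not part of the deliverable:

```python
import re

# Reference format from the FAIR chapter: ADAS&ME_WPX_AX.X_XX
# (WP number, activity number, dataset version). Digit widths are assumed.
REF_PATTERN = re.compile(r"^ADAS&ME_WP(\d+)_A(\d+\.\d+)_(\d{2})$")

def parse_reference(ref: str):
    """Return the WP, activity and version encoded in a dataset reference,
    or None if the reference does not follow the expected format."""
    m = REF_PATTERN.match(ref)
    if m is None:
        return None
    return {"wp": int(m.group(1)), "activity": m.group(2), "version": m.group(3)}
```

A data manager could use such a check when registering datasets, e.g. to reject `ADASME_WP4_A4.1` while accepting `ADAS&ME_WP4_A4.1_02`.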
**6 Data Security and GDPR**

ADAS&ME partners ensure that security mechanisms and management procedures are in place to: a) ensure personal (sensitive) data protection through a strict process of data collection, anonymization, harmonization and integration, and b) guarantee data integrity and reliability, ensuring the system’s high-performance operation through the exchange of the necessary information. The consortium research partners will fully comply at all times with all applicable data protection legislation and regulation during this project, to ensure the security and protection of individuals’ personal information in relation to this project. This includes compliance with the General Data Protection Regulation (GDPR), which came into effect on 25 May 2018. The consortium and research partners acknowledge the various new obligations and the new rights granted to data subjects under the GDPR and are aware of the significant fines that may be imposed should a data breach occur. In terms of **personal data protection**, personal data will be anonymised and strictly used for the project’s purposes. Before collecting any personal data, the Local Ethics Representative will be responsible for informing the involved pilot users/participants and collecting their informed consents, which will be maintained and stored based on the Grant Agreement rules and European/local laws. No personal data will be centrally stored without anonymisation or pseudonymisation. No personal information will be made available by the Local Ethics Representative to the pilot sites, i.e., the ADAS&ME partners participating in the pilots. Only one person per site (the Local Ethics Representative) will have access to the informed consent form containing the personal information, and only that person will be aware of the relation between the participant’s unique identifier code and their personal identity, in order to administer the tests.
In practice, the Local Ethics Representative will collect those data required for contacting the participants and arranging with them the sequence of the current or future tests. The Local Ethics Representative will then issue a single Test ID (unique identifier code) for each of them. This person will not participate in the evaluation and will not know how each user behaved. One month before the end of the project, this reference, i.e., the mapping between the Test ID and the real-life contact details of the participant, together with any other personal information held on the participant, will be deleted, thus safeguarding full anonymisation of the results. The stored data will refer to a user’s age, gender, nationality and other information relevant to the testing objectives, and will be safeguarded, stored and processed only in accordance with all applicable data protection laws and regulations. The stored data will not contain any identifier apart from the Test ID. In no circumstances will a participant be asked for information relating to their beliefs, or political or sexual preferences. User-related data will be securely and safely stored. Data will also be scrambled where possible and abstracted to permit their use in achieving project outcomes while ensuring data integrity and security. Any party which provides any data or information (the "Providing Party") to another party (the "Receiving Party") in connection with the project will not include any personal information relating to an identified or identifiable natural person or data subject. To this end, the Providing Party will anonymise or pseudonymise all data delivered to other parties to an extent sufficient to ensure that a person without prior knowledge of the original data and its collection cannot, from the anonymised or pseudonymised data and any other available information, deduce the personal identity of participants.
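The Test-ID procedure above is organisational rather than technical, but its core invariant — the ID-to-identity mapping exists in exactly one place (with the Local Ethics Representative) and is deleted before project end — can be sketched in a few lines. All class and method names here are invented for illustration:

```python
import uuid

class EthicsRegistry:
    """Sketch of the mapping held only by the Local Ethics Representative.
    Names are invented; the real procedure is organisational, not software."""

    def __init__(self):
        self._contacts = {}  # Test ID -> personal contact details (never shared)

    def enrol(self, contact_details: dict) -> str:
        test_id = f"TEST-{uuid.uuid4().hex[:8]}"  # unique identifier code
        self._contacts[test_id] = contact_details
        return test_id  # only this ID reaches the pilot evaluation teams

    def purge(self) -> None:
        """Delete the ID-to-identity mapping one month before project end,
        leaving the stored results fully anonymised."""
        self._contacts.clear()

registry = EthicsRegistry()
tid = registry.enrol({"name": "participant", "email": "x@example.org"})
registry.purge()
```

After `purge()`, results keyed by Test ID can no longer be linked back to any person, which is exactly the anonymisation guarantee the text describes.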
Each party shall be solely responsible for the selection of specific database vendors/data collectors/data providers, and for the performance (including any breach) of its contracts between it and such database vendors/data collectors, (to which no other project partner shall be a party, and under which no other partner assumes any obligation or liability) and shall further warrant that it has the authority to disclose the information, if any, which it provides to the other parties, and that where legally required and relevant, it has obtained appropriate informed consents from all the individuals involved. Partners supplying special data analysis tooling, shall have the right on written notice and without liability to terminate the license that it has granted for such tooling to be used in connection with the project, if the supplying partner knows or has reasonable cause to believe that the processing of particular data through such tooling infringes the rights (including without limitation privacy, publicity, reputation and intellectual property rights) of any third party, including of any individual. Each pilot site will have its own Ethics Committee and one person will be nominated per site as responsible for following the project’s Ethics Board recommendations and data protection. Moreover, a Data Privacy Impact Assessment (DPIA) will be used to support this DMP. In particular, the DPIA will assist with, and demonstrate, compliance with the GDPR. This will be carried out within the next 6 months and as the project evolves it will demonstrate compliance in the collection and processing of personal information. This DPIA will ensure that collection, handling, use and storage of personal information are reviewed to ensure compliance with all applicable privacy legislation and regulations. 
The DPIA will: * assess whether the proposed collection, handling, use and storage is legally permissible; * assess whether the proposal is justified and proportionate; * detail the actions necessary to ensure collection, handling, use and storage are compliant and to mitigate any risk to personal information; and * provide a record of the decisions that are made to introduce or change the way in which personal information is collected, handled, used or stored. An indicative template of the DPIA to be used for ADAS&ME can be found in Annex 1. Last but not least, whenever **authorisations for data collection**, processing and management have to be obtained from national bodies, those authorisations shall be considered as documents relevant to ADAS&ME. Copies of all relevant authorisations and approvals shall be submitted prior to commencement of the relevant part of the research work. The cases where such authorisations are needed are identified at the pilots by the pilot execution teams.

**7 Conclusion and next steps**

The data collected per UC and pilot site (Table 1) and the data management roles (Chapter 5) were defined in this version, accompanied by an overview of the ADAS&ME system and of the characteristics of the data management repository (Chapter 2) used for sharing the datasets created and stored per UC test. Data privacy was re-visited with consideration for the different data management roles, as required by GDPR. The overall plan related to data management to be followed during the lifecycle of the project was further refined with consideration for both pilot planning and the respective indicators (D7.1), as well as the ADAS&ME Ethics-related policy (D10.3). A decision upon data visibility remains to be made.
The next steps entail the following: * Circulate the GDPR questionnaire to further update the compliance processes followed by each pilot site and each data owner; * Complete the DPIA twice by the end of the project (M30 and M36, respectively); * Finalise decisions about data openness, the structure of shared data, and the place of sharing. In addition, this deliverable sets out the steps, roles and processes towards GDPR adoption within the project; complete adherence, however, will be described in the final version of this deliverable in M30. The final version will include a description of the datasets that will be shared outside the Consortium of the project (if any), the duration, and the sharing medium and/or technologies used. In addition, the steps taken within the project to comply with GDPR will be recorded and reported, as well as any issues, difficulties and lessons learnt from this process, as it is new for the project partners. The final version of the Data Management Plan will also clearly state which datasets will be shared with the public and which parts (or whole datasets) will not.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1092_DESTINATIONS_689031.md
1 EXECUTIVE SUMMARY
2 INTRODUCTION
2.1 OBJECTIVES OF DESTINATIONS PROJECT
2.2 ROLE OF PROJECT DMP IN DESTINATIONS
2.3 CROSS-RELATIONS BETWEEN PROJECT DMP AND ETHICS COMPLIANCE
3 DATA COLLECTED AND PROCESSED IN DESTINATIONS
4 DMP COVERED PERIOD AND UPDATING PROCESS
5 DETAIL OF DATA CATEGORIES
6 DATA MANAGEMENT PLAN
6.1 WP2-WP7
6.2 WP9
6.3 WP10

# Executive Summary

The DESTINATIONS project embeds the process of data management and the procedure of compliance with the ethical/privacy rules set out in the Ethics Compliance Report (D1.1) into its whole work programme, for the whole research and demonstration life cycle. The data management procedures within the DESTINATIONS project arise within the detail of the work, not with the overall raison d’être of the project itself, which is part of the EC Horizon 2020 programme, Mobility for Growth sub-programme. This document represents the first version of D1.2 Project Data Management Plan (PDMP), related to the data collected, handled and processed by the DESTINATIONS project over the first six-month period (M1-M6, September 2016 – February 2017). It will be updated regularly in order to integrate the different data typologies the project will manage period by period. This document specifies the overall approach to data management issues adopted by the DESTINATIONS project according to the guidelines and indications defined in the Ethics Compliance Report (D1.1), with respect to which the PDMP plays the role of “implementation monitoring tool”.
The Project Data Management Plan is structured as follows: * Section 1 is an introduction to the document, covering the objectives for its elaboration and delivery, the role of the Data Management Plan (PDMP+LDMP) within the whole DESTINATIONS project and the cross-relations between the Data Management Plan (DMP) and the Ethics Compliance Report (D1.1); * Section 2 identifies the different typologies of data managed by the whole DESTINATIONS project; * On the basis of the data typologies identified in section 2, section 3 details the specific data collected and generated by DESTINATIONS in the first six-month period (M1-M6, September 2016 – February 2017); * Section 4 covers a two-fold role: o Relating to demo WPs and data collected/managed at local level (project pilot sites), section 4 represents a guided template to be filled in by pilot site local partners in order to generate the contributions to the Local Data Management Plan (D1.3); o Relating to horizontal WPs and data managed/processed by expert partners supporting site local partners in demo WPs, section 4 provides the description of the data management procedures adopted (when applicable).

# Introduction

## Objectives of DESTINATIONS project

The DESTINATIONS project implements a set of mutually reinforcing and integrated innovative mobility solutions in six small-to-medium urban piloting areas in order to demonstrate how to address the lack of a seamless mobility offer in tourist destinations.
The overall objective of the DESTINATIONS project is articulated in the following operational goals: * Development of a Sustainable Urban Mobility Plan (SUMP) for residents and tourists, focusing on the integrated planning process that forms the basis of a successful urban mobility policy (WP2); * Development of a Sustainable Urban Logistics Plan (SULP) targeted on freight distribution processes, to be integrated into the SUMP (WP5); * Implementation and demonstration of pilot measures to improve mobility for tourists and residents (WP3-WP7); * Development of guidelines to sites for stakeholder engagement (WP2-WP8); * Development of guidelines to sites for the definition of business models to sustain the site pilot measures and the future implementation of any other mobility actions/initiatives designed in the SUMP (WP8); * Development of guidelines to sites for the design, contracting and operation of ITS (WP8); * Evaluation of results both at project level and at site level (WP9); * Cross-fertilization of knowledge and best practice replication, including cooperation with Chinese partners (WP10); * Communication and Dissemination (WP11).

## Role of Project DMP in DESTINATIONS

The role and positioning of the Project DMP within the whole DESTINATIONS project (in particular with respect to the Ethics Compliance Report, D1.1) is detailed in the following: * The Project DMP will specify the project data typologies managed in DESTINATIONS; * Based on the identified data typologies, the Project DMP will provide a guided template allowing project partners/site local partners (Site Managers specifically) to describe how data are collected, handled, accessed and, eventually, made openly available/published.
The Project DMP will be used to collect information from Site Managers on how data are collected, stored and managed, and to identify the data ownership, the access rights and possible use for dissemination and exploitation; * The Project DMP will be used by Site Managers for the Local DMP included in D1.3.

## Cross-relations between Project DMP and Ethics Compliance

The Project DMP represents the “monitoring” tool (a guided template to be filled in by project partners and, in particular, site local partners) to allow: * the partners (in particular Site Managers) to certify to the Project Data Manager (PDM) designated in the Ethics Compliance Report (D1.1) and to the Ethics Review Board (ERB) the conformity to the requirements set in the Ethics Report itself; * the ERB to check the compliance with the requirements set out by the project Ethics Compliance Report.

# Data collected and processed in DESTINATIONS

The DESTINATIONS project covers different activities (identified in section 2) and therefore deals with an extended range of possible data to be considered. The term “data” can mean different things and can be related to different kinds/sets of information (connected to the wide range of actions taking place during the project). A specification of “data” in DESTINATIONS is required, together with a first comprehensive classification of the main typologies involved. In particular, data in DESTINATIONS can be divided between the two following levels: 1. Data collected by the project; 2. Data processed/produced within the project. **Data collected** by the project can be classified in the following main categories: * Data for SUMP-SULP elaboration (i.e. baseline, current mobility offer, needs analysis, etc.); * Data for the design of mobility measures in demo WPs (i.e. baseline, current mobility offer, needs analysis, etc.); * Data produced in the operation of demo mobility measures (i.e.
user registrations to services, validation, transactions/payments, points for green credits, etc.); * Data collected to carry out the ex-ante and ex-post evaluation. Data collected by the DESTINATIONS project are mainly related to the local activities of demo measure design, setup and implementation, and this process falls mostly under the responsibilities of Site Managers. This will be reflected in the production of the Local DMP, to which each site will provide its contribution (filling in the guided template provided by the Project DMP). **Data processed/produced** by the project are mainly: * SUMP/SULP; * Outputs coming from WP8 (business principles and scenarios, ITS contracting documents, etc.), WP9 (evaluation), WP10 (transferability) and WP11 (communication). This level of data management (processing/elaboration) is related both to local activities (SUMP/SULP) and to horizontal WPs (WP8, 9, 10, 11). The data management process for the first group falls mostly under the responsibilities of Site Managers and will be described in the Local DMP. For the second group of data, the data management process falls mostly under the responsibilities of Horizontal WP Leaders/Task Leaders and will be described in the Project DMP.

# DMP covered period and updating process

The DESTINATIONS project includes a wide range of activities, spanning from user needs analysis of demo measures, including the SUMP/SULP (surveys for data collection; assessment of the current mobility offer, which could include the management of data coming from previous surveys and existing data sources; personal interviews; collection of requirements through focus groups and co-participative events; etc.), to demo measure operation (data of users registered to use the demo services, management of images for access control, management of video surveillance images in urban areas, infomobility, bookings of mobility services, payment/validation data, data on the use of services for promotion purposes: green credits, etc.)
but also from the elaboration of the SUMP/SULP to its circulation among the relevant stakeholders for its consolidation, and from data collection on the baseline and ex-ante evaluation to the ex-post evaluation. Data can be grouped in some main categories, but the details vary from WP to WP (in particular the demo ones) and from site to site. Due to the duration of the project, the data to be managed will also evolve during the project lifetime. For the abovementioned reasons, the approach used for the delivery of the Project DMP and Local DMP is to restrict each six-monthly version to the specific data collected, handled or processed in the reference period: this will also allow the project partners and, in particular, Site Managers to become familiar with the issues related to data management. This version of the Project DMP covers the first six months of project activities (M1-M6, September 2016 – February 2017). The DMP will be updated on a six-month basis and accordingly integrated with the new data collected, handled and produced by the project. The activities which have taken place in this six-month period of the DESTINATIONS project are the following: * WP2 – webinar on SUMP stakeholder and citizen engagement. Started coordination of local WP activities and provision of support to sites. Started data collection for the baseline in sites; * WP3, WP4, WP5, WP6, WP7 – started consolidation of the user needs analysis for the design of demo services and measures; * WP8 – webinar on the cross-relations within WP2, WP8 and WP9. Webinars on T8.1 to support the sites in the identification and mapping of stakeholders and on T8.3 to support the sites in the identification of key choices for the design of the “core” ITS 1 . Started identification of key principles.
* WP9 – identification of indicator categories for the ex-ante/ex-post evaluation;
* WP10 – identified possible topics for cross-fertilization among partners and outside the consortium, and related scheduled events;
* WP11 – established cooperation with SATELLITE, started developing communication and dissemination strategies and material, production of a local dissemination plan for each site.

The activities most affected by data management issues and related to this deliverable are:

* Data collection at site level for the demo WPs and WP9;
* Identification of indicator categories for the ex-ante/ex-post evaluation.

# Detail of data categories

In section 4 we identified the main activities of the DESTINATIONS project affected by data management issues and related to this deliverable. In the following, the typologies of data produced, handled or managed by these activities are identified 2 :

### WP2

_Task 2.2 – Task 2.3 Mobility context analysis and baseline_

Data collection/surveys for SUMP elaboration:

* Census/demographic data;
* Economic data;
* Tourist flows;
* Accessibility in/out;
* O/D matrix;
* Traffic flow;
* Network;
* Emissions;
* Pollution;
* Questionnaires on travel behaviour, attitudes and expectations;
* On-field measuring campaign carried out during the data collection phase.

_Task 2.6 Smart metering and crowdsourcing_

Automatic data collection supporting SUMP development:

* Traffic flow;
* Passenger counting.

### WP3

_Task 3.1 Users' needs analysis, requirements and design_

Data collection/surveys for the safety problem assessment at local level and the design of demo measures:

* Data about the network, cycling lanes, walking paths, intersections, crossing points, traffic lights;
* Road safety statistics (number of incidents on the network, etc.);
* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups;
* Statistics produced by the Traffic Management System, Traffic Supervisor or similar.
### WP4

_Task 4.1 Users' needs analysis, requirements and design_

Data collection/surveys for the extension/improvement of sharing services and the design of demo measures:

* Data on sharing/ridesharing service demand;
* Data on sharing/ridesharing service offer;
* Statistics produced by the management platform of the bike sharing already operated (registered users, O/D trips, etc.);
* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups.

Data collection/surveys for the take-up of electric vehicles and the design of demo measures:

* Data on the demand for electric vehicles;
* Data on the offer of electric vehicles;
* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups.

### WP5

_Task 5.1 Users' needs analysis, requirements and design_

Data collection/surveys for SULP elaboration:

* Data on shops, supply processes, logistics operators, etc.;
* On-field measuring campaign carried out during the data collection phase;
* Questionnaires/surveys on the supply/retail process;
* Reports coming from stakeholder and target-user focus groups.

Data collection/surveys for the demo logistics services:

* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups.

### WP6

_Task 6.1 Users' needs analysis, requirements and design_

Data collection for the design of demo measures for increasing awareness on sustainable mobility:

* Data on promotional initiatives already under operation;
* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups.

Data collection for the design of demo measures for mobility demand management:

* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups.
### WP7

_Task 7.1 Users' needs analysis, requirements and design_

Data collection for the design of demo measures for Public Transport services:

* Data on PT service demand;
* Data on PT service offer;
* Statistics produced by the systems already operated (i.e. ticketing);
* Surveys on users' needs and expectations;
* Reports coming from stakeholder and target-user focus groups.

### WP8

N/A – The data collected in this WP in the reference period are not included in the list of “sensitive” data identified in D1.1. The data generated by this WP over the reference period deal with the description of the site background. These data are outside the scope of the present deliverable.

### WP9

_Task 9.2 – Task 9.3 – Task 9.4 Evaluation Plan, Ex-ante/Ex-post evaluation_

* Baseline (BAU);
* Sustainable mobility;
* Energy consumption;
* Environmental impacts;
* Societal impacts;
* Health impacts.

### WP10

_Task 10.5 – International cooperation in research and innovation in China_

* Info about Chinese tourist flows in DESTINATIONS sites;
* Info about perspectives, approaches and strategies to strengthen the Chinese tourist presence in DESTINATIONS sites and make it more sustainable.

### WP11

N/A – The data collected in this WP in the reference period are not included in the list of “sensitive” data identified in D1.1.

# Data management plan

## WP2-WP7

For each of the data categories identified in section 5 (sub-sections related to WP2-WP7), Site Managers will provide the following information (to be included in the Local DMP):

<table> <tr> <th> **WP2** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 2.1.1 </td> <td> Which kind of data will be collected in your site?
</td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP2 in section 5)_ </td> </tr> <tr> <td> 2.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 2.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.2.3 </td> <td> Are data collected anonymously or not? If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> </table> <table> <tr> <th> **WP2** </th> </tr> <tr> <td> 2.3.1 </td> <td> How are the data stored?
Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 2.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes? Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> <tr> <td> 2.4.2 </td> <td> Are data planned to be published in an open format? If so, please describe the technological solution used and the metadata format.
</td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID2.1.1)_ </td> </tr> </table>

**Table 1: Template to be filled in by Site Managers to detail WP2 data management procedures over the reference period (included in the Local DMP)**

<table> <tr> <th> **WP3** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 3.1.1 </td> <td> Which kind of data will be collected in your site? </td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP3 in section 5)_ </td> </tr> <tr> <td> 3.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.2.3 </td> <td> Are data collected anonymously or not?
If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> </table> <table> <tr> <th> **WP3** </th> </tr> <tr> <td> 3.3.1 </td> <td> How are the data stored? Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 3.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes? Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> <tr> <td> 3.4.2 </td> <td> Are data planned to be published in an open format?
If so, please describe the technological solution used and the metadata format. </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID3.1.1)_ </td> </tr> </table>

**Table 2: Template to be filled in by Site Managers to detail WP3 data management procedures over the reference period (included in the Local DMP)**

<table> <tr> <th> **WP4** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 4.1.1 </td> <td> Which kind of data will be collected in your site? </td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP4 in section 5)_ </td> </tr> <tr> <td> 4.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 4.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.2.3 </td> <td> Are data collected anonymously or not?
If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> </table> <table> <tr> <th> **WP4** </th> </tr> <tr> <td> 4.3.1 </td> <td> How are the data stored? Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 4.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes? Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> <tr> <td> 4.4.2 </td> <td> Are data planned to be published in an open format?
If so, please describe the technological solution used and the metadata format. </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID4.1.1)_ </td> </tr> </table>

**Table 3: Template to be filled in by Site Managers to detail WP4 data management procedures over the reference period (included in the Local DMP)**

<table> <tr> <th> **WP5** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 5.1.1 </td> <td> Which kind of data will be collected in your site? </td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP5 in section 5)_ </td> </tr> <tr> <td> 5.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 5.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.2.3 </td> <td> Are data collected anonymously or not?
If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> </table> <table> <tr> <th> **WP5** </th> </tr> <tr> <td> 5.3.1 </td> <td> How are the data stored? Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 5.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes? Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> <tr> <td> 5.4.2 </td> <td> Are data planned to be published in an open format?
If so, please describe the technological solution used and the metadata format. </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID5.1.1)_ </td> </tr> </table>

**Table 4: Template to be filled in by Site Managers to detail WP5 data management procedures over the reference period (included in the Local DMP)**

<table> <tr> <th> **WP6** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.1.1 </td> <td> Which kind of data will be collected in your site? </td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP6 in section 5)_ </td> </tr> <tr> <td> 6.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.2.3 </td> <td> Are data collected anonymously or not?
If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> </table> <table> <tr> <th> **WP6** </th> </tr> <tr> <td> 6.3.1 </td> <td> How are the data stored? Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes? Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> <tr> <td> 6.4.2 </td> <td> Are data planned to be published in an open format?
If so, please describe the technological solution used and the metadata format. </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID6.1.1)_ </td> </tr> </table>

**Table 5: Template to be filled in by Site Managers to detail WP6 data management procedures over the reference period (included in the Local DMP)**

<table> <tr> <th> **WP7** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 7.1.1 </td> <td> Which kind of data will be collected in your site? </td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP7 in section 5)_ </td> </tr> <tr> <td> 7.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 7.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.2.3 </td> <td> Are data collected anonymously or not?
If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> </table> <table> <tr> <th> **WP7** </th> </tr> <tr> <td> 7.3.1 </td> <td> How are the data stored? Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes? Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> <tr> <td> 7.4.2 </td> <td> Are data planned to be published in an open format?
If so, please describe the technological solution used and the metadata format. </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID7.1.1)_ </td> </tr> </table>

**Table 6: Template to be filled in by Site Managers to detail WP7 data management procedures over the reference period (included in the Local DMP)**

## WP9

For each of the data categories identified in section 5 (sub-sections related to WP9), Site Managers will provide the following information (to be included in the Local DMP):

<table> <tr> <th> **WP9** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 9.1.1 </td> <td> Which kind of data will be collected in your site? </td> <td> _(to be filled in by Site Managers)_ _(please indicate each data type among the categories indicated for WP9 in section 5)_ </td> </tr> <tr> <td> 9.1.2 </td> <td> Please detail the data typology and structure/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.1.3 </td> <td> Please detail the data origin </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.1.4 </td> <td> Please provide some figures allowing the data volume to be estimated </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 9.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that allows statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td>
<td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> </table> <table> <tr> <th> **WP9** </th> </tr> <tr> <td> 9.2.3 </td> <td> Are data collected anonymously or not? If not, please confirm that data are collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 9.3.1 </td> <td> How are the data stored? Please detail where the data are stored and in which modality/format (if applicable) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.3.2 </td> <td> Who is the organization responsible for data storage and management? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.3.3 </td> <td> By whom (organization, responsible person) are the data accessible? </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.3.4 </td> <td> Which international regulations will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.3.5 </td> <td> Which national regulations and applicable ‘opinion statements’ will be applied for data storage and access? (for reference, please see D1.1) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data type indicated in row ID9.1.1)_ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 9.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purposes?
Please indicate the format (aggregated/not aggregated) </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data indicated in row ID9.1.1)_ </td> </tr> <tr> <td> 9.4.2 </td> <td> Are data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> _(to be filled in by Site Managers)_ _(please repeat for each data indicated in row ID9.1.1)_ </td> </tr> </table> **Table 7: Template to be filled in by Site Managers to detail WP9 data management procedures over the reference period (included in Local DMP)** For each of the data categories identified in section 5 (sub-sections related to WP9) the Project Evaluation Manager (PEM) provides the following information: <table> <tr> <th> **WP9** </th> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 9.5.1 </td> <td> How are data collected by sites related to ex-ante evaluation stored? </td> <td> Ex ante and ex post data collected by the Local Evaluation Managers (LEMs) and Site Managers are stored in an ad hoc Excel file according to a structured data collection template. </td> </tr> <tr> <td> 9.5.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> </tr> <tr> <td> 9.5.3 </td> <td> How will data be used? </td> <td> These data will then be transposed to the Measures Evaluation Report according to the format provided by the Satellite project. They will be used in an aggregated format. </td> </tr> <tr> <td> 9.5.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> ISINNOVA </td> </tr> <tr> <td> 9.5.5 </td> <td> By whom (organization, responsible) are data accessible? </td> <td> Data are accessible to the ISINNOVA evaluation manager (Mr. Stefano Faberi) and his colleagues. 
</td> </tr> </table> **Table 8: Description of WP9 data management procedures adopted by Project Evaluation Manager (PEM) over the reference period** ## WP10 <table> <tr> <th> **WP10** </th> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 10.1.1 </td> <td> How are data collected by sites related to ex-ante evaluation stored? </td> <td> Data collected from the sites are stored in an ad hoc file. These data will be used in an aggregated format in order to better design the promotion of DESTINATIONS sites towards possible Chinese investors and tourist stakeholders. </td> </tr> <tr> <td> 10.1.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> </tr> <tr> <td> 10.1.3 </td> <td> How will data be used? </td> </tr> <tr> <td> 10.1.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> GV21 </td> </tr> <tr> <td> 10.1.5 </td> <td> By whom (organization, responsible) are data accessible? </td> <td> Data are accessible to GV21 (Mrs. Julia Perez Cerezo) and her colleagues. </td> </tr> </table> **Table 9: Description of WP10 data management procedures over the reference period**
# Executive Summary This document represents the first version of D1.3 Local Data Management Plan (LDMP), relating to the data collected, handled and processed by DESTINATIONS sites over the first six-month period (M4-M6, December 2016 – February 2017). As the project is at the beginning of its activities (in particular, demo activities in the 6 sites were launched in December 2016) and this is the first edition of the document (with the data management procedure to be tuned according to the requirements of the Ethics Compliance Report, Deliverable 1.1), information is also provided, where relevant, on data collection procedures/campaigns that are planned, still running or yet to be started. This document will be updated regularly in order to integrate the different data typologies the project will manage period by period. This document follows the methodological approach adopted by the DESTINATIONS project and described in D1.2 (PDMP), according to the guidelines defined in the Ethics Compliance Report (D1.1). D1.2 (PDMP) represents the framework document of this deliverable, which can also be considered an integration of D1.2. This deliverable is structured as follows: * Section 2 introduces the document, covering the objectives for its elaboration and delivery, the role of the Local Data Management Plan (LDMP) within the whole DESTINATIONS project and the cross-relations with the Project Data Management Plan (PDMP); * Section 3 details the specific data collected and generated by DESTINATIONS sites in the first six-month period (M4-M6, December 2016 – February 2017). The data is presented with reference to each demo WP (WP2-WP7), and within each demo WP it is also presented per site. 
# Role of Project and Local DMPs in DESTINATIONS The PDMP (D1.2) defines the overall approach assumed by the project: it identifies the data typologies involved, describes the data collected/handled/processed by the horizontal WPs (WP8-WP11) and sets the framework for the LDMP. The LDMP details the data collected, under collection or planned for collection by DESTINATIONS sites over the first six-month period (M4-M6, December 2016 – February 2017). The choice to also include descriptions of data collection that is only planned reflects the initial stage of local project activities, in particular the design of demo measures: owing to the project timing, data collection has started for some measures but is still planned or on-going for others. Data have been collected through the contribution of Site Managers (SM) according to the template defined in the PDMP (D1.2). The LDMP can be considered an integration of the PDMP (D1.2), which sets the framework for approaching data management in the DESTINATIONS project. # Local Data Management Plan In the following sections the DESTINATIONS Local Data Management Plans are presented. To improve readability, each LDMP is presented per (demo) WP (first level) and per site (second level). ## WP2 <table> <tr> <th> **WP2 – MADEIRA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 2.1.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> * Census/demographic data (completed) * Tourists flow (planned) * Road network (planned) * Passengers counting (Public transport) (planned) * Questionnaires on travel behaviour, attitudes and expectations (planned) </td> </tr> <tr> <td> 2.1.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> Census/demographic data * _Number of residents, age, education level_ Tourists flow * _Number of tourist arrivals & staying by nationality_ Road network * _Counting traffic congestions_ Passengers counting (Public transport) * _Number of entries and exits of buses_ Questionnaires on travel behaviour, attitudes and expectations * _Paper questionnaires_ </td> </tr> <tr> <td> 2.1.1.3 </td> <td> Please detail the data origin </td> <td> Census/demographic data * _The data is available on the website of the Regional Government Statistic Department._ Tourists flow * _The data is available on the website of the Regional Government Statistic Department._ Road network * _Visual counting or implemented sensors._ Passengers counting (Public transport) * _Sensor system on buses and ticketing system_ Questionnaires on travel behaviour, attitudes and expectations * _The target of questionnaires will be residents and tourists._ </td> </tr> </table> <table> <tr> <th> 2.1.1.4 </th> <th> Please provide some figure allowing to estimate the data dimension </th> <th> Questionnaires on travel behaviour, attitudes and expectations * _500 questionnaires_ </th> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 2.1.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> Census/demographic data * _Questionnaires or interviews_ Tourists flow * Data from airport and Port of Funchal Road network * Data collection Passengers counting (Public transport) * _Data extraction from database_ Questionnaires on travel behaviour, attitudes and expectations * _Questionnaires or interviews_ </td> </tr> 
<tr> <td> 2.1.2.2 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Questionnaires on travel behaviour, attitudes and expectations * _A sampling of the target users will be selected to be provided with the questionnaires_ </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 2.1.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Census/demographic data * _Database is stored in the office of the Regional Government Statistic Department_ Tourists flow * _Database is stored in the office of the Regional Government Statistic Department_ Road network * _The database will be stored in the CMF or SRETC office_ Passengers counting (Public transport) * _Database in HF office_ Questionnaires on travel behaviour, attitudes and expectations * _Questionnaires will be stored in the SRETC office._ </td> </tr> </table> <table> <tr> <th> 2.1.3.2 </th> <th> Who is the organization responsible for data storing and management? </th> <th> Census/demographic data * _Regional Government Statistics Department_ Tourists flow * _Regional Government Statistics Department_ Road network * _CMF and SRETC_ Passengers counting (Public transport) * _HF_ Questionnaires on travel behaviour, attitudes and expectations * _SRETC_ </th> </tr> <tr> <td> 2.1.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Census/demographic data * _Regional Government Statistic Department_ Tourists flow * _Regional Government Statistic Department_ Road network * _CMF and SRETC_ Passengers counting (Public transport) * _HF_ Questionnaires on travel behaviour, attitudes and expectations * _SRETC_ </td> </tr> <tr> <td> 2.1.3.4 </td> <td> Which international regulation will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> _All the data is anonymous; there is no need to apply international regulation._ </td> </tr> <tr> <td> 2.1.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> _All the data is anonymous; there is no need to apply national regulation._ </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 2.1.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> _Data can be made available by DESTINATIONS under an aggregated form_ </td> </tr> <tr> <td> 2.1.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> _No_ </td> </tr> </table> **Table 1: Local Data Management Plan – WP2 (MADEIRA)** <table> <tr> <th> **WP2 – RETHYMNO** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 2.2.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> Data collection for SUMP elaboration: * Census / Demographic Statistics * Economics data * Statistics on Tourists flow * Statistics on accessibility incoming / outgoing; * Data about road network; * Statistics on traffic accidents, deaths and injuries _The data collection described is completed_ </td> </tr> <tr> <td> 2.2.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> **Census / Demographic Statistics** * Resident population size by sex and educational level for the municipality of Rethymno * Resident population size by age for the Regional Unit of Rethymno * Employment by sector annually in Rethymno Municipality **Economic data** * New business openings by sector (net balance with closures) * Secondary distribution of income account of households **Statistics on Tourists flow and distribution** ; * Tourist arrivals & staying by nationality in municipality level * Tourists distribution by accommodation (hotels, rented apartments, camping, other) in regional level **Statistics on accessibility incoming / outgoing;** * Number of ferry passengers disembarked/embarked in regional level * Availability of slots for incoming private boats * Number of cruise ships visitors by months in regional level * Number of flight passengers in & out by days in regional level **Data about road network** * Car, cycling, walking network **Statistics on traffic accidents, deaths and injuries** * Traffic Accidents, casualties and injuries (seriously injured and slightly injured) </td> </tr> </table> <table> <tr> <th> **WP2 – RETHYMNO** </th> </tr> <tr> <td> 2.2.1.3 </td> <td> Please detail the data origin </td> <td> Census / Demographic Statistics * _The data is stored in a database_ Economics data * _The data is stored in a database_ Statistics on Tourists flow and distribution * _The data is stored in a database or in paper archive_ Statistics on accessibility incoming / outgoing * _The data is stored in a database or in paper 
archive_ Data about road network * _The data is stored in a database_ Statistics on traffic accidents, deaths and injuries * _The data is stored in a database_ </td> </tr> </table> <table> <tr> <th> **WP2 – RETHYMNO** </th> </tr> <tr> <td> 2.2.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _All data is collected on an aggregated form_ **Census / Demographic Statistics** * Resident population size by sex and educational level for the municipality of Rethymno, Resident population size by age for the Regional Unit of Rethymno, Employment by sector annually in Rethymno Municipality: _Data provided for 2011_ **Economic data** * New business openings by sector (net balance with closures): _Annual data for the years 2014-2016_ * Secondary distribution of income account of households: _Annual data for 2014, income distribution regarding 5 main categories_ **Statistics on Tourists flow and distribution** ; * Tourist arrivals & staying by nationality in municipality level: _Number of overnight stays per month provided for the years 2013-2016_ * Tourists’ distribution by accommodation in regional level: _Annual data for the years 2014- 2015 on arrivals by 2 main categories (hotels/similar establishments and camping) divided in residents and non-residents._ **Statistics on accessibility incoming / outgoing;** * Number of ferry passengers disembarked/embarked in regional level _: Annual number of ferry passengers disembarked in each of the 2 main Cretan ports for 2014_ * Number of cruise ships visitors by months in regional level: _Annual number of cruise visitors for 2015_ * Number of flight passengers in & out by days in regional level: _Monthly data on the number of flights (arrivals/departures) and number of passengers by domestic and international flights, for the 2 main Cretan airports for 2016._ **Data about road network** * Car, cycling, walking network: _Data regarding the total length of each type of network was gathered._ 
**Statistics on traffic accidents, deaths and injuries** * Traffic Accidents, casualties and injuries (seriously injured and slightly injured): _Monthly data for the years 2015 -2016_ </td> </tr> </table> <table> <tr> <th> **WP2 – RETHYMNO** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 2.2.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> Census / Demographic Statistics * _Desk research-Data extraction from database_ Economic data * _Desk research-Data extraction from database_ Statistics on Tourists flow and distribution * _Desk research-Data extraction from database_ Statistics on accessibility incoming / outgoing * _Desk research-Data extraction from database_ Data about road network * _Data collected from the Municipality’s technical department_ Statistics on traffic accidents, deaths and injuries * _Desk research-Data extraction from database_ </td> </tr> <tr> <td> 2.2.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> All the above data were gathered from open databases without using any sampling process </td> </tr> <tr> <td> 2.2.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> All above data is on an aggregated form and collected anonymously </td> </tr> </table> <table> <tr> <th> **WP2 – RETHYMNO** </th> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 2.2.3.1 </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> _**Census / Demographic Statistics** _ Resident population size by sex and educational level for the municipality of Rethymno, Resident population size by age for the Regional Unit of Rethymno, Employment by sector annually in Rethymno Municipality: _Database is stored in the Hellenic Statistics Authority records_ _**Economic data** _ * New business openings by sector (net balance with closures): _Database is stored in the office of Rethymno Chamber of Commerce_ * Secondary distribution of income account of households: _Database is stored in the office of the Hellenic Statistics Authority_ _**Statistics on Tourists flow and distribution** _ * Tourist arrivals & staying by nationality: _Database is stored in the office of the Association of Greek Tourism Enterprises and the Hellenic Chamber of Hotels_ * Tourists distribution by accommodation (hotels, rented apartments, camping, other) in regional level: _Database is stored in the office of the Hellenic Statistics Authority_ _**Statistics on accessibility incoming / outgoing** _ * Number of ferry passengers disembarked/embarked in regional level: _Database is stored in the office of the Port of Heraklion and Port of Chania_ * Availability of slots for incoming private boats _: Database is stored in the office of the Port of Rethymno_ * Number of cruise ships visitors by months in regional level: _Database is stored in the office of the Union of Greek Ports_ * Number of flight passengers in & out by days in regional level: _Database is stored in the office of the Civil Aviation Authority_ _**Data about road network;** _ * Car, cycling, walking network: _Database is stored in the office of the technical department of Rethymno Municipality_ _**Statistics on traffic accidents, deaths and injuries** _ * Traffic Accidents, casualties and injuries (seriously injured and slightly injured): _Database is stored in the office of the 
Hellenic Statistics Authority_ </td> </tr> </table> <table> <tr> <th> **WP2 – RETHYMNO** </th> </tr> <tr> <td> 2.2.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> _**Census / Demographic Statistics** _ Resident population size by sex and educational level for the municipality of Rethymno, Resident population size by age for the Regional Unit of Rethymno, Employment by sector annually in Rethymno Municipality: Hellenic Statistics Authority _**Economic data** _ * New business openings by sector: _Rethymno Chamber of Commerce_ * Secondary distribution of income account of households _: Hellenic Statistics Authority_ _**Statistics on Tourists flow and distribution** _ * Tourist arrivals & staying by nationality: _Association of_ _Greek Tourism Enterprises, Hellenic Chamber of Hotels_ * Tourists distribution by accommodation: _Hellenic_ _Statistics Authority_ _**Statistics on accessibility incoming / outgoing;** _ * Number of ferry passengers disembarked/embarked _: Port of Heraklion, Port of Chania_ * Availability of slots for incoming private boats: _Port of Rethymno_ * Number of cruise ships visitors by months: _Union of Greek Ports_ * Number of flight passengers in & out by days: _Civil Aviation Authority_ _**Data about road network** _ * Car, cycling, walking network _: Rethymno Municipality_ _**Statistics on traffic accidents, deaths and injuries** _ * Traffic Accidents, casualties and injuries: _Hellenic Statistics Authority, local Police Department_ </td> </tr> <tr> <td> 2.2.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> As described in the previous section </td> </tr> <tr> <td> **WP2 – RETHYMNO** </td> </tr> <tr> <td> 2.2.3.4 </td> <td> Which international regulation will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> Regulations 2016/679; 2016/680; 2016/681 (EU) Regulations: 2009/136, 2006/24, 2002/58, 95/46 (EC) </td> </tr> <tr> <td> 2.2.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Law 2472/1997 Protection of Individuals with regard to the Processing of Personal Data Law 3471/2006 Protection of personal data and privacy in the electronic telecommunications sector and amendment of law 2472/1997 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 2.2.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used for DESTINATIONS dissemination purpose under an aggregated form </td> </tr> <tr> <td> 2.2.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 2: Local Data Management Plan – WP2 (RETHYMNO)** <table> <tr> <th> **WP2 – LIMASSOL** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 2.3.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> _**LIM2.1 Sustainable Mobility Tourist Action Plan** _ _**(SMTAP)** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ * _Questionnaires targeting the user group, for awareness level, needs and expectations_ _The data collection is on-going_ </td> </tr> <tr> <td> 2.3.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td>  _Questionnaires targeting the user group, for awareness level, needs and expectations_ \- _Paper questionnaires_ </td> </tr> <tr> <td> 2.3.1.3 </td> <td> Please detail the data origin </td> <td> * _Public Works Department database_ * _Limassol Municipality database_ * _LTC database_ * _Questionnaires filled by tourists and local citizens_ </td> </tr> <tr> <td> 2.3.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td>  _30 questionnaires_ </td> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 2.3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * _CO2 emissions_ * Data extraction from database or data gathered from the field * _Energy consumption_ * Data extraction from database or data gathered from the field * _Economy_ * Data extraction from database and data gathered from the field * _Questionnaires targeting the user group, for awareness level, needs and expectations_ * Data from questionnaires </td> </tr> </table> <table> <tr> <th> **WP2 – LIMASSOL** </th> </tr> <tr> <td> 2.3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> * Since already existing data from current surveys will also be used, the sampling will not be random and it might be enough for statistical analysis. 
* The only sampled data that will be random will be the questionnaires, since this survey will involve randomly selected tourists and local citizens for questioning </td> </tr> <tr> <td> 2.3.2.3 </td> <td> Are data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> The questionnaires will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 2.3.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * All the data extracted from the mentioned databases is stored in the involved partner’s database * The data from the questionnaires will be stored in the involved partner’s office </td> </tr> <tr> <td> 2.3.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> STRATAGEM </td> </tr> <tr> <td> 2.3.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> STRATAGEM </td> </tr> <tr> <td> **WP2 – LIMASSOL** </td> </tr> <tr> <td> 2.3.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> The Charter of Fundamental Rights of EU States </td> </tr> <tr> <td> 2.3.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> * The Market Research Society code of conduct * ISO 20252 </td> </tr> <tr> <td> **Data availability for dissemination** </td> <td> </td> </tr> <tr> <td> 2.3.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> * If data is referring to personal data collected from surveys/questionnaires and from the involved partners and stakeholders, then no. * 
If data is referring to the final publication of the measure to be shared with the public, then yes. </td> </tr> <tr> <td> 2.3.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> _No._ </td> </tr> </table> **Table 3: Local Data Management Plan – WP2 (LIMASSOL)** <table> <tr> <th> **WP2 – ELBA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 2.4.1.1 </td> <td> Which kind of data have been collected in your site? </td> <td> _Census/demographic data_ _Passengers, freight, vehicles flows by ferryboats_ _Car ownership rate_ _Accident rate_ _All data have been already collected_ </td> </tr> <tr> <td> 2.4.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Census/demographic data:_ * Residents in the 8 municipalities of Elba in 2016 divided by sex _Passengers, freight, vehicles flows by ferryboats:_ * Number of passengers from Piombino Harbour to Elba in 2016 _Car ownership rate in Elba_ * Cars owners in Elba in 2015 divided by municipality and by type of emission _Accident rate in Elba_ * accident rate in Elba in 2015 divided by each municipality and street network </td> </tr> <tr> <td> 2.4.1.3 </td> <td> Please detail the data origin </td> <td> _Census/demographic data:_ * The data is available on the website of ISTAT (Italian Institute of Statistics) _Passengers, freight, vehicles flows by ferryboats:_ * The data is available on Port Authority website _Car ownership rate in Elba, Accident rate in Elba_ * The data is available in ACI (Italian Private Car Association) website (www.aci.it) related to 2015 </td> </tr> </table> <table> <tr> <th> **WP2 – ELBA** </th> </tr> <tr> <td> 2.4.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Census/demographic data:_ Data related to one year (2016) divided by sex and municipality _Passengers, freight, vehicles 
flows by ferryboats:_ Data related to one year (2016) divided by month and category _Car ownership rate in Elba_ Data related to one year (2015) divided by municipality and type of emission (from euro 0 to euro 6) _Accidents rate in Elba_ Data related to one year (2015) divided by municipality and street network </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 2.4.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Census/demographic data:_ * From ISTAT database (website) _Passengers, freight, vehicles flows by ferryboats:_ * From Port Authority database (web site) _Car ownership rate in Elba, Accident rate in Elba_ * From ACI website (www.aci.it) database (website) </td> </tr> <tr> <td> 2.4.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> N/A </td> </tr> </table> <table> <tr> <th> **WP2 – ELBA** </th> </tr> <tr> <td> 2.4.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 2.4.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> See row 2.4.2.1 </td> </tr> <tr> <td> 2.4.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> See row 2.4.2.1 </td> </tr> <tr> <td> 2.4.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> All data is published as open data </td> </tr> <tr> <td> 2.4.3.4 </td> <td> Which international regulation will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> **WP2 – ELBA** </td> </tr> <tr> <td> 2.4.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **Data availability for dissemination** </td> <td> </td> </tr> <tr> <td> 2.4.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS under an aggregated form </td> </tr> <tr> <td> 2.4.4.2 </td> <td> Are data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 4: Local Data Management Plan – WP2 (ELBA)** <table> <tr> <th> **WP2 – MALTA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 2.5.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> N/A Awaiting data gap analysis and Strategic Objectives Framework Report to be compiled as part of WP2. </td> </tr> </table> **Table 5: Local Data Management Plan – WP2 (MALTA)** <table> <tr> <th> **WP2 – LAS PALMAS** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 2.6.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> * Census/demographic data * Economics data * O/D matrix * Traffic flow * Network * Emissions and Pollution * Questionnaires on travel behaviour, attitudes and expectations * Tourists flow </td> </tr> <tr> <td> 2.6.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Most of the data is collected in the SUMP drafted in 2012 * Tourists flow is collected online </td> </tr> </table> <table> <tr> <th> **WP2 – LAS PALMAS** </th> </tr> <tr> <td> 2.6.1.3 </td> <td> Please detail the data origin </td> <td> * Census/demographic data (National, regional and local statistical institute) * Economic data (National, regional and local statistical institute) * O/D matrix * Traffic flow (City council) * Network (City council, Guaguas Municipales, Sagulpa) * Questionnaires on travel behaviour, attitudes and expectations (City council, Guaguas Municipales, Sagulpa, Cinesi) (The SUMP already developed for Las Palmas de Gran Canaria in 2012 collected all these kinds of data. However, this data needs to be updated once the Mobility Office is implemented within the CIVITAS DESTINATIONS project) * Emissions and Pollution (Regional network of climate station. 
“Red de Control y Vigilancia de la Calidad del Aire de Canarias”) * Tourists number (Gran Canaria Tourism Board “Patronato del Turismo de Gran Canaria” and Observatory of Tourism of the City council “Observatorio de Turismo del Ayuntamiento de Las Palmas de Gran Canaria”) </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 2.6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * Census/demographic data (Official statistics) * Economic data (Official statistics) * O/D matrix (Traffic and passenger counts, surveys) * Traffic flow (Traffic and passenger counts) * Network * Emissions and Pollution (Official statistics, climate station networks) * Questionnaires on travel behaviour, attitudes and expectations (Surveys) * Tourists flow (Surveys, Official statistics) </td> </tr> </table> <table> <tr> <th> **WP2 – LAS PALMAS** </th> <th> </th> </tr> <tr> <td> 2.6.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> The sampling process is random. </td> <td> </td> </tr> <tr> <td> 2.6.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Anonymously </td> <td> </td> </tr> <tr> <td> **Data management and storing procedures** </td> <td> </td> </tr> <tr> <td> 2.6.3.1 </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * Census/demographic data (Online) * O/D matrix, Traffic flow, Network (SUMP documents) * Emissions and Pollution (Online) * Tourists flow (Online) </td> </tr> </table> <table> <tr> <th> **WP2 – LAS PALMAS** </th> <th> </th> </tr> <tr> <td> 2.6.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> * Census/demographic data (National, regional and local statistical institute) * Economic data (National, regional and local statistical institute) * Traffic flow (City council) * Network (City council, Guaguas Municipales, Sagulpa) * Questionnaires on travel behaviour, attitudes and expectations (City council, Guaguas Municipales, Sagulpa, Cinesi) * Emissions and Pollution (Regional network of climate station. “Red de Control y Vigilancia de la Calidad del Aire de Canarias”) * Tourists flow (Gran Canaria Tourism Board “Patronato del Turismo de Gran Canaria” and Observatory of Tourism of the City council “Observatorio de Turismo del Ayuntamiento de Las Palmas de Gran Canaria”) </td> </tr> <tr> <td> 2.6.3.3 </td> <td> Through whom (organization, responsible) is data accessible? 
</td> <td> * Census/demographic data (National, regional and local statistical institute) * Economics data (National, regional and local statistical institute) * Traffic flow (City council) * Network (City council, Guaguas Municipales, Sagulpa) * Questionnaires on travel behaviour, attitudes and expectations (City council, Guaguas Municipales, Sagulpa, Cinesi) * Emissions and Pollution (Regional network of climate station. “Red de Control y Vigilancia de la Calidad del Aire de Canarias”) * Tourists flow (Gran Canaria Tourism Board “Patronato del Turismo de Gran Canaria” and Observatory of Tourism of the City council “Observatorio de Turismo del Ayuntamiento de Las Palmas de Gran Canaria”) </td> </tr> <tr> <td> **WP2 – LAS PALMAS** </td> </tr> <tr> <td> 2.6.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> “Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal” </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 2.6.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> 2.6.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 6: Local Data Management Plan – WP2 (LAS PALMAS)** ## WP3 <table> <tr> <th> **WP3 - MADEIRA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 3.1.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> * _Statistics on incidents on the road network (to be collected) and inside the buses_ (planned) * _Survey to target user groups to collect needs and expectations_ (planned) * _Data about road network_ (planned) </td> </tr> </table> <table> <tr> <th> **WP3 - MADEIRA** </th> </tr> <tr> <td> 3.1.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Statistics on incidents on the road network and inside the buses_ * Number of traffic accidents * Number of accidents inside the buses _Survey to target user group to collect needs and expectations_ * Paper questionnaires _Data about road network_ * Counting pedestrians and identifying those using wheelchairs or who are mobility-impaired * Counting traffic congestion </td> </tr> <tr> <td> 3.1.1.3 </td> <td> Please detail the data origin </td> <td> _Statistics on incidents on the road network and inside the buses_ * The data is stored in a database or in a paper archive _Survey to target user group to collect needs and expectations_ * The target groups of the questionnaires will be school students, professors and public transport users. _Data about road network_ * Visual counting or installed sensors. </td> </tr> <tr> <td> 3.1.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Statistics on incidents on the road network and inside the buses_ * Data on road incidents is not available at this moment; 30 incidents inside the buses. _Survey to target user group to collect needs and expectations_ * 100 questionnaires _Data about road network_ * Not available at this moment. 
</td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 3.1.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Statistics on incidents on the road network and inside the buses_ * Data extraction from database _Survey to target user group to collect needs and expectations_ * Questionnaires or interviews _Data about road network_ * Data collection </td> </tr> </table> <table> <tr> <th> **WP3 - MADEIRA** </th> </tr> <tr> <td> 3.1.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _Survey to target user group to collect needs and expectations_  A sample of target users will be selected to be provided with the questionnaires </td> </tr> <tr> <td> 3.1.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _Statistics on incidents on the road network and inside the buses_ * Anonymously _Survey to target user group to collect needs and expectations_ * Anonymously _Data about Road network_ * Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 3.1.3.1 </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Statistics on incidents on the road network and inside the buses * The database is stored in the office of the local Police Department, and in the HF office Survey to target user group to collect needs and expectations * Questionnaires will be stored in the HF, CMF and AREAM offices Data about road network * The database will be stored in the CMF or AREAM office </td> </tr> <tr> <td> 3.1.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> Statistics on incidents on the road network and inside the buses * Police Department and HF Survey to target user group to collect needs and expectations * HF, CMF and AREAM Data about road network * CMF and AREAM </td> </tr> <tr> <td> **WP3 - MADEIRA** </td> </tr> <tr> <td> 3.1.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Statistics on incidents on the road network and inside the buses * Police Department and HF Survey to target user group to collect needs and expectations * HF, CMF and AREAM office Data about road network * CMF and AREAM </td> </tr> <tr> <td> 3.1.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> 3.1.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation. </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 3.1.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? 
Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS under an aggregated form </td> </tr> <tr> <td> 3.1.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 7: Local Data Management Plan – WP3 (MADEIRA)** <table> <tr> <th> **WP3 – RETHYMNO** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 3.2.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _As described in WP2 – RETHYMNO:_ * Data about road network; * Statistics on traffic accidents, deaths and injuries _The data collection is completed_ </td> </tr> </table> **Table 8: Local Data Management Plan – WP3 (RETHYMNO)** <table> <tr> <th> **WP3 - LIMASSOL** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 3.3.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _**LIM3.1 Increase cycling and walking in combination with special interest tourist activities as an integrated product:** _ * CO2 emissions * Energy consumption * Economy _**LIM3.2 Safe routes to school:** _  CO2 emissions * Energy consumption * Economy * Questionnaires _**LIM3.4 Attractive and accessible public places to promote intermodal leisure trips:** _ * CO2 emissions * Energy consumption * Economy _The data collection is on-going_ </td> </tr> </table> <table> <tr> <th> **WP3 - LIMASSOL** </th> </tr> <tr> <td> 3.3.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _**3.1 Increase cycling and walking in combination with special interest tourist activities as an integrated product:** _ * CO2 emissions * Kg/km CO2 x total distance covered by the persons * Energy consumption * kWh/lt x total litres of total distance * Economy * Average cost of fuel x fuel savings from energy consumption _**3.2 Safe routes to school:** _ * CO2 emissions * Kg/km CO2 x 
total distance covered by the students * Energy consumption * kWh/lt x total litre of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption * Questionnaires _**3.4 Attractive and accessible public places to promote intermodal leisure trips:** _ * CO2 emissions * Kg/km CO2 x total distance covered by the students * Energy consumption * kWh/lt x total litre of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption </td> </tr> <tr> <td> 3.3.1.3 </td> <td> Please detail the data origin </td> <td> * Public Works Department database * Limassol Municipality database * LTC database * Questionnaires filled by tourists and local citizens </td> </tr> <tr> <td> 3.3.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td>  30 questionnaires </td> </tr> </table> <table> <tr> <th> **WP3 - LIMASSOL** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 3.3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * CO2 emissions * Data extraction from database or data gathered from the field * Energy consumption * Data extraction from database or data gathered from the field * Economy * Data extraction from database and data gathered from the field * Questionnaires targeting the user group, for awareness level, needs and expectations * Data from questionnaires </td> </tr> <tr> <td> 3.3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> * Since already existing data from current surveys will also be used, the sampling will not be random and it might be enough for statistical analysis. 
* The only sampled data that will be random will be the questionnaires since this survey will involve randomly selected tourists and local citizens for questions </td> </tr> <tr> <td> 3.3.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td>  The questionnaires will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 3.3.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * All the data extracted from the mentioned databases is stored in the involved partner’s database * The data from the questionnaires will be stored in the involved partner’s office </td> </tr> <tr> <td> **WP3 - LIMASSOL** </td> </tr> <tr> <td> 3.3.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> * LTC * STRATAGEM * LIMA </td> </tr> <tr> <td> 3.3.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * LTC * STRATAGEM * LIMA </td> </tr> <tr> <td> 3.3.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> The Charter of Fundamental Rights of EU States </td> </tr> <tr> <td> 3.3.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> * The Market Research Society code of conduct * ISO 20252 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 3.3.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? 
Please indicate the format (aggregated/not aggregated) </td> <td> The data can be used for the dissemination of the project and will be aggregated </td> </tr> <tr> <td> 3.3.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 9: Local Data Management Plan – WP3 (LIMASSOL)** <table> <tr> <th> **WP3 - ELBA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 3.4.1.1 </td> <td> Which kind of data has been collected in your site? </td> <td> Users’ needs analysis has been carried out based on focus groups with citizens and the expertise/knowledge of Municipality technicians. N/A for the scope of this document </td> </tr> </table> **Table 10: Local Data Management Plan – WP3 (ELBA)** <table> <tr> <th> **WP3 – MALTA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 3.5.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> No demo measures planned in WP3 </td> </tr> </table> **Table 11: Local Data Management Plan – WP3 (MALTA)** <table> <tr> <th> **WP3 – LAS PALMAS** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 3.6.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> **LPA3.1. Attractive, safe and accessible public space at major attractions** * _Environment:_ CO2 emissions (not yet collected) * _Society:_ Physical accessibility towards transport (under collection) * _Transport:_ * Congestion levels (Average vehicle speed over total network) (not yet collected) * Traffic levels (Average vehicles per hour by vehicle type at peak hour) (not yet collected) * Opportunity for walking (not yet collected) * Opportunity for cycling (not yet collected) </td> </tr> <tr> <td> 3.6.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> **LPA3.1. 
Attractive, safe and accessible public spaces at major attractions** * CO2 emissions: Digital data * Physical accessibility towards transport: Survey to target user group to collect needs and expectations. Paper questionnaires * Congestion levels: Digital data * Traffic levels: Digital data * Opportunity for walking: GIS analysis * Opportunity for cycling: GIS analysis </td> </tr> <tr> <td> 3.6.1.3 </td> <td> Please detail the data origin </td> <td> **LPA3.1. Attractive, safe and accessible public space at major attractions** * CO2 emissions: Data from existing pollution stations * Physical accessibility towards transport: Paper questionnaires to tourists and public transport users * Congestion levels: Automatic car counting * Traffic levels: Automatic car counting * Opportunity for walking: GIS database * Opportunity for cycling: GIS database </td> </tr> <tr> <td> 3.6.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> No data available yet. Planned dimension: * CO2 emissions: 11 stations * Paper questionnaires to tourists and public transport users: 37 * Congestion levels: 21 measure points * Traffic levels: 19 measure points </td> </tr> </table> <table> <tr> <th> **WP3 – LAS PALMAS** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 3.6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * Data extraction from database or data gathering from survey to target user groups to collect needs and expectations * Questionnaires or interviews </td> </tr> <tr> <td> 3.6.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> For all indicators that might need data from a survey or questionnaire: * A sample of the target users has been selected to be 
provided with the questionnaires </td> </tr> <tr> <td> 3.6.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> All data is collected anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 3.6.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> All data will be stored in the Municipality’s databases </td> </tr> <tr> <td> 3.6.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> The organization responsible for data storing and management is the Municipality of Las Palmas de Gran Canaria </td> </tr> <tr> <td> **WP3 – LAS PALMAS** </td> </tr> <tr> <td> 3.6.3.3 </td> <td> Through whom (organization, responsible) is the data accessible? </td> <td> Municipality, Public bodies (Guaguas, Sagulpa, etc.), Local Police and CINESI </td> </tr> <tr> <td> 3.6.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> “Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal” </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 3.6.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> 3.6.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. 
</td> <td> No </td> </tr> </table> **Table 12: Local Data Management Plan – WP3 (LAS PALMAS)** ## WP4 <table> <tr> <th> **WP4 – MADEIRA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 4.1.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> Data on the demand of electric vehicles (planned) Survey of target group (planned) </td> </tr> <tr> <td> 4.1.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> Data on the demand of electric vehicles * Number of new electric vehicles in the region Survey of target group * Questionnaires to owners of new electric vehicles </td> </tr> </table> <table> <tr> <th> **WP4 – MADEIRA** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 4.1.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> Data on the demand of electric vehicles * Questionnaire Survey to target group * Questionnaires or interviews </td> </tr> <tr> <td> 4.1.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> Survey of target group * A sample of the target users will be selected to be provided with the questionnaires </td> </tr> <tr> <td> 4.1.2.3 </td> <td> Are data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Data on the demand of electric vehicles * Anonymously Survey to target group * Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 4.1.3.1 </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Data on the demand of electric vehicles * Local vehicle sellers Survey to target group * The database will be stored in the AREAM office </td> </tr> <tr> <td> 4.1.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> Data on the demand of electric vehicles * AREAM office Survey of target group * AREAM office </td> </tr> <tr> <td> **WP4 – MADEIRA** </td> </tr> <tr> <td> 4.1.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Data on the demand of electric vehicles * AREAM office Survey of target group * AREAM office </td> </tr> <tr> <td> 4.1.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> 4.1.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **Data availability for dissemination** </td> <td> </td> </tr> <tr> <td> 4.1.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS under an aggregated form </td> </tr> <tr> <td> 4.1.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 13: Local Data Management Plan – WP4 (MADEIRA)** <table> <tr> <th> **WP4 - RETHYMNO** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 4.2.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> Design activities of the measure are starting; no data collection to be reported </td> </tr> </table> **Table 14: Local Data Management Plan – WP4 (RETHYMNO)** <table> <tr> <th> **WP4 - LIMASSOL** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 4.3.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _**4.1 Electric car rentals connecting the Limassol area-airports-ports** _ * CO2 emissions * Energy consumption * Economy _**4.2 Expansion of bike sharing system. Add new bikes and e-bikes for rent** _ * CO2 emissions * Energy consumption * Economy _**4.3 Promote the uptake of electric vehicles. Campaign on electro-mobility** _ * CO2 emissions * Energy consumption * Economy * Revenues _The data collection is on-going_ </td> </tr> </table> <table> <tr> <th> **WP4 - LIMASSOL** </th> </tr> <tr> <td> 4.3.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _**4.1 Electric car rentals connecting the Limassol area-airports-ports** _ * CO2 emissions * Kg/km CO2 x total distance covered by the users * Energy consumption * kWh/lt x total litres of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption _**4.2 Expansion of bike sharing system. Add new bikes and e-bikes for rent** _ * CO2 emissions * Kg/km CO2 x total distance covered by the users * Energy consumption * kWh/lt x total litres of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption _**4.3 Promote the uptake of electric vehicles. 
Campaign on electro-mobility** _ * CO2 emissions * Kg/km CO2 x total distance covered by the students * Energy consumption * kWh/lt x total litre of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption * Revenues * Total rental price x total rental days </td> </tr> <tr> <td> 4.3.1.3 </td> <td> Please detail the data origin </td> <td> * LTC * LIMA </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> </table> <table> <tr> <th> **WP4 - LIMASSOL** </th> </tr> <tr> <td> 4.3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * CO2 emissions * Data extraction from database or data gathered from the field * Energy consumption * Data extraction from database or data gathered from the field * Economy * Data extraction from database and data gathered from the field </td> </tr> <tr> <td> 4.3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td>  Since already existing data from current surveys will also be used, the sampling will not be random and it might be enough for statistical analysis </td> </tr> <tr> <td> 4.3.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td>  The data collected will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 4.3.3.1 </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * All the data extracted from the mentioned databases is stored in the involved partner’s database * The data from the questionnaires will be stored in the involved partner’s office </td> </tr> <tr> <td> **WP4 - LIMASSOL** </td> </tr> <tr> <td> 4.3.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td>  LTC </td> </tr> <tr> <td> 4.3.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * LTC * STRATAGEM </td> </tr> <tr> <td> 4.3.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> The Charter of Fundamental Rights of EU States </td> </tr> <tr> <td> 4.3.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> * The Market Research Society code of conduct and * ISO 20252 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 4.3.4.1 </td> <td> Are data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> The data will be used for the dissemination of the project and will be aggregated </td> </tr> <tr> <td> 4.3.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 15: Local Data Management Plan – WP4 (LIMASSOL)** <table> <tr> <th> **WP4 - ELBA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 4.4.1.1 </td> <td> Which kind of data has been collected in your site? 
</td> <td> No data collection to be reported for the period </td> </tr> </table> **Table 16: Local Data Management Plan – WP4 (ELBA)** <table> <tr> <th> **WP4 - MALTA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 4.5.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) * Periodical survey data on the effectiveness of the campaign (to be done) * Statistics produced by the platform of management of bike and car sharing (once these start to be operated): registered users, O/D trips, etc. _Data collection procedure is planned_ </td> </tr> <tr> <td> 4.5.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – Report of findings (one-time report) * Survey data on the effectiveness of the campaign – Bar graph format to show awareness comparison * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.) – Excel format showing number of users per period (periodic) </td> </tr> </table> <table> <tr> <th> **WP4 - MALTA** </th> </tr> <tr> <td> 4.5.1.3 </td> <td> Please detail the data origin </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – contractor to compile research and report * Survey data on the effectiveness of the campaign – telephone surveys conducted by contractor * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.) 
– data provided by operators </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 4.5.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – desktop research * Survey data on the effectiveness of the campaign – telephone surveys done periodically (prior to launch of campaign; during the campaign; after end of campaign) * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.) – data provided by RFID cards, which show service usage </td> </tr> <tr> <td> 4.5.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> Sampling shall only be used for the survey on Campaign Awareness. It shall be ensured that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant sub-sample breakdowns </td> </tr> </table> <table> <tr> <th> **WP4 - MALTA** </th> </tr> <tr> <td> 4.5.2.3 </td> <td> Is data collected anonymously or not?
If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – not applicable * Survey data on the effectiveness of the campaign – telephone surveys done periodically (prior to launch of campaign, during the campaign, after end of campaign); data will be anonymous * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.) – No information regarding RFID users shall be provided, only the anonymous number of users per period </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 4.5.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – PDF * Survey data on the effectiveness of the campaign – excel * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.) – excel </td> </tr> <tr> <td> 4.5.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – Transport Malta * Survey data on the effectiveness of the campaign – Transport Malta * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.)
– The operators </td> </tr> <tr> <td> **WP4 - MALTA** </td> </tr> <tr> <td> 4.5.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * Data on average cost of owning a car and comparable data on costs of sharing transport/using public transport (to be used in dissemination of the information campaign) – Transport Malta * Survey data on the effectiveness of the campaign – Transport Malta * Statistics produced by the platform of management of bike and car sharing once these start to be operated (registered users, O/D trips, etc.) – The operators and Transport Malta </td> </tr> <tr> <td> 4.5.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679 which will become effective on 25 May 2018 </td> </tr> <tr> <td> 4.5.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 4.5.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Yes, in aggregate. </td> </tr> <tr> <td> 4.5.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 17: Local Data Management Plan – WP4 (MALTA)** <table> <tr> <th> **WP4 – LAS PALMAS** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 4.6.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> * Statistics on public bike systems (planned) * Statistics on fast charging EV and e-cars (planned) </td> </tr> <tr> <td> 4.6.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Users by week and month * Average users by day of the week * Average users by day of the week and by bike * Average users by hour in summer and winter * Source – Destination Matrix * Economic savings on fuel by month * Economic savings on mileage payments * Number of kWh consumed * Tons of CO2 saved </td> </tr> <tr> <td> 4.6.1.3 </td> <td> Please detail the data origin </td> <td> All the data will be stored in a database and the different statistics will be stored in electronic format </td> </tr> <tr> <td> 4.6.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> * 375 bikes, 20 e-bikes and 2 bikes for physically impaired people will be analysed * 6 fast charging EV * 3 electrical cars </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 4.6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * Data extraction from database * Data extraction from electrical counters </td> </tr> <tr> <td> 4.6.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> * Statistics on public bikes system – For the implementation of the public bikes system, a personal data collection will be done in order to manage a customer database system that of course will comply with national and international regulation regarding personal data storing, access and management * Statistics on fast charging EV and electrical cars – Anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 4.6.3.1 </td> <td> How is data stored?
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Statistics on public bike systems * Data is stored in NextBike server systems Statistics on fast charging EV and electrical cars * Data is stored in SAGULPA servers </td> </tr> <tr> <td> **WP4 – LAS PALMAS** </td> </tr> <tr> <td> 4.6.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> SAGULPA </td> </tr> <tr> <td> 4.6.3.3 </td> <td> Through whom (organization, responsible) is the data accessible? </td> <td> SAGULPA </td> </tr> <tr> <td> 4.6.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> “Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal” </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 4.6.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> 4.6.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 18: Local Data Management Plan – WP4 (LAS PALMAS)** ## WP5 <table> <tr> <th> **WP5 - MADEIRA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 5.1.1.1 </td> <td> Which kind of data has been/will be collected in your site?
</td> <td> * Questionnaires/survey on supply/retail process (planned) * Data about road network (planned) </td> </tr> <tr> <td> 5.1.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Questionnaires/survey on supply/retail process_ * Paper questionnaires _Data about road network_ * Counting traffic congestions </td> </tr> <tr> <td> 5.1.1.3 </td> <td> Please detail the data origin </td> <td> _Questionnaires/survey on supply/retail process_ * The target data of questionnaires will be the local commerce owners _Data about road network_ * Visual counting or implement sensors </td> </tr> <tr> <td> 5.1.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Questionnaires/survey on supply/retail process_ * 50 questionnaires _Data about road network_ * Not available at this moment </td> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 5.1.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Questionnaires/survey on supply/retail process_ * Questionnaires or interviews _Data about road network_ * Data collection </td> </tr> <tr> <td> 5.1.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _Questionnaires/survey on supply/retail process_  A sampling of the target users will be selected to be provided with the questionnaires. </td> </tr> </table> <table> <tr> <th> **WP5 - MADEIRA** </th> </tr> <tr> <td> 5.1.2.3 </td> <td> Is data collected anonymously or not? 
If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _Questionnaires/survey on supply/retail process_ * Anonymously _Data about Road network_ * Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 5.1.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> _Questionnaires/survey on supply/retail process_ * Questionnaires will be stored in CMF office _Data about road network_ * The database will be stored in CMF office </td> </tr> <tr> <td> 5.1.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> _Questionnaires/survey on supply/retail process_ * CMF _Data about Road network_ * CMF </td> </tr> <tr> <td> 5.1.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> _Questionnaires/survey on supply/retail process_ * CMF _Data about Road network_ * CMF </td> </tr> <tr> <td> 5.1.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> 5.1.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **WP5 - MADEIRA** </td> <td> </td> </tr> <tr> <td> **Data availability for dissemination** </td> <td> </td> </tr> <tr> <td> 5.1.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose?
Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS under an aggregated form </td> </tr> <tr> <td> 5.1.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 19: Local Data Management Plan – WP5 (MADEIRA)** <table> <tr> <th> **WP5 - RETHYMNO** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 5.2.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> Design activities of the measure are starting No data collection to be reported </td> </tr> </table> **Table 20: Local Data Management Plan – WP5 (RETHYMNO)** <table> <tr> <th> **WP5 - LIMASSOL** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 5.3.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _**5.1 Limassol city centre urban freight logistic action plan** _ * CO2 emissions * Energy consumption * Economy * Questionnaires targeting the user group, for awareness level, needs and expectations _**5.2 Promotion and creation of network for collecting of used cooking oil (UCO)** _ * Litres of cooking oil collected from hotels and restaurants </td> </tr> </table> <table> <tr> <th> **WP5 - LIMASSOL** </th> </tr> <tr> <td> 5.3.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _**5.1 Limassol city centre urban freight logistic action plan** _ * CO2 emissions * Kg/km CO2 x total distance covered by the students * Energy consumption * kWh/lt x total litre of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption * Questionnaires targeting the user group, for awareness level, needs and expectations * Paper questionnaires _**5.2 Promotion and creation of network for collecting of used cooking oil (UCO)** _ * Litres of used cooking oil collected _The data
collection is on-going_ </td> </tr> <tr> <td> 5.3.1.3 </td> <td> Please detail the data origin </td> <td> * LTC * LIMA * Public Works Department * Questionnaires (to be done) </td> </tr> <tr> <td> 5.3.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _**5.1 Limassol city centre urban freight logistic action plan** _ * 100 shops involved _**5.2 Promotion and creation of network for collecting of used cooking oil (UCO)** _ * 3 hotels * 10 restaurants </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 5.3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * _CO2 emissions_ * Data extraction from database or data gathered from the field * _Energy consumption_ * Data extraction from database or data gathered from the field * _Economy_ * Data extraction from database and data gathered from the field * _Questionnaires targeting the user group, for awareness level, needs and expectations_ * Data from questionnaires </td> </tr> </table> <table> <tr> <th> **WP5 - LIMASSOL** </th> </tr> <tr> <td> 5.3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> * When data already exist from current surveys, they will also be used, the sampling will not be random and it might be enough for statistical analysis * The only sampled data that will be random will be the questionnaires since this survey will involve randomly selected tourists and local citizens for questions </td> </tr> <tr> <td> 5.3.2.3 </td> <td> Is data collected anonymously or not? 
If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td>  The questionnaires will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 5.3.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * All the data extracted from the mentioned databases is stored in the relevant involved partner’s database * The data from the questionnaires will be stored in the involved partner’s office </td> </tr> <tr> <td> 5.3.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td>  STRATAGEM </td> </tr> <tr> <td> 5.3.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td>  STRATAGEM </td> </tr> </table> <table> <tr> <th> **WP5 - LIMASSOL** </th> </tr> <tr> <td> 5.3.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> The Charter of Fundamental Rights of the European Union </td> </tr> <tr> <td> 5.3.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> * The Market Research Society code of conduct * ISO 20252 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 5.3.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> The data will be used for the dissemination of the project and will be aggregated </td> </tr> <tr> <td> 5.3.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format.
</td> <td> No </td> </tr> </table> **Table 21: Local Data Management Plan – WP5 (LIMASSOL)** <table> <tr> <th> **WP5 – ELBA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 5.4.1.1 </td> <td> Which kind of data has been collected in your site? </td> <td> Surveys based on questionnaires for collecting data on goods distribution on the island (ongoing) Survey based on questionnaires targeting freight operators on the island in order to update/integrate the available data on freight flow to the island (ongoing) </td> </tr> </table> <table> <tr> <th> **WP5 – ELBA** </th> </tr> <tr> <td> 5.4.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Survey of freight distribution/operators_  Paper questionnaires </td> </tr> <tr> <td> 5.4.1.3 </td> <td> Please detail the data origin </td> <td> _Survey of freight distribution/operators_  The target data of questionnaires will be the local shop owners and freight operators </td> </tr> <tr> <td> 5.4.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Survey of freight distribution/operators_  About 50 questionnaires for stores and 50 for freight operators </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 5.4.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td>  Questionnaires are submitted through an interview </td> </tr> <tr> <td> 5.4.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td>  A sample of the target users will be selected to be provided with the questionnaires </td> </tr> <tr> <td> 5.4.2.3 </td> <td> Is data collected anonymously or not?
If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td>  Anonymously </td> </tr> </table> <table> <tr> <th> **WP5 – ELBA** </th> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 5.4.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td>  Questionnaires will be stored in Portoferraio Municipality </td> </tr> <tr> <td> 5.4.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td>  Portoferraio Municipality </td> </tr> <tr> <td> 5.4.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td>  Staff of Portoferraio and Rio Marina Municipalities working on the project </td> </tr> <tr> <td> 5.4.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> 5.4.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 5.4.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS under an aggregated form </td> </tr> <tr> <td> **WP5 – ELBA** </td> <td> </td> </tr> <tr> <td> 5.4.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format.
</td> <td> No </td> </tr> </table> **Table 22: Local Data Management Plan – WP5 (ELBA)** <table> <tr> <th> **WP5 - MALTA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 5.5.1.1 </td> <td> Which kind of data have been/will be collected in your site? </td> <td> * Data on shops, supply process, logistics operators, etc. * Survey on user needs and expectations * Reports coming from stakeholder and target users focus group _Data collection procedure is planned_ </td> </tr> <tr> <td> 5.5.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Data on shops, supply process, logistics operators, etc. – _survey data collected from target group; number of registered outlets_ * Survey on user needs and expectations – _questionnaire survey_ * Reports coming from stakeholder and target users focus group – _questionnaire survey_ </td> </tr> <tr> <td> 5.5.1.3 </td> <td> Please detail the data origin </td> <td> * Data on shops, supply process, logistics operators, etc. – _Valletta Local Council, Transport Malta operators’ licence database_ * Survey on user needs and expectations – _targeted_ _participants in pilot_ * Reports coming from stakeholder and target users focus group – _targeted participants in pilot_ </td> </tr> </table> <table> <tr> <th> **WP5 - MALTA** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 5.5.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * Data on shops, supply process, logistics operators, etc. 
– desktop research * Survey on users’ needs and expectations – questionnaire * Reports coming from stakeholder and target users focus group – questionnaire </td> </tr> <tr> <td> 5.5.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td>  No sampling will be done in this case </td> </tr> <tr> <td> 5.5.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Targeted participants, once selected, will not be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 5.5.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * Data on shops, supply process, logistics operators, etc. – Official databases currently used to store respective data shall continue to be used * Survey on users’ needs and expectations – excel * Reports coming from stakeholder and target users focus group – PDF </td> </tr> <tr> <td> **WP5 - MALTA** </td> </tr> <tr> <td> 5.5.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> * Data on shops, supply process, logistics operators, etc. – _Valletta Local Council; Transport Malta_ * Survey on users’ needs and expectations – _Transport Malta_ * Reports coming from stakeholder and target users focus group – _Transport Malta_ </td> </tr> <tr> <td> 5.5.3.3 </td> <td> Through whom (organization, responsible) is data accessible?
</td> <td> * Data on shops, supply process, logistics operators, etc. – _Valletta Local Council; Transport Malta_ * Survey on users’ needs and expectations – _Transport Malta_ * Reports coming from stakeholder and target users focus group – _Transport Malta_ </td> </tr> <tr> <td> 5.5.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679 which will become effective on 25 May 2018 </td> </tr> <tr> <td> 5.5.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 5.5.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Yes, non-aggregated </td> </tr> <tr> <td> 5.5.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 23: Local Data Management Plan – WP5 (MALTA)** <table> <tr> <th> **WP5 – LAS PALMAS** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 5.6.1.1 </td> <td> Which kind of data has been/will be collected in your site?
</td> <td> * _Number of incoming parcels on customs_ (planned) * _Data on movements on logistics operators_ (planned) </td> </tr> <tr> <td> 5.6.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Info on freight distribution related to:_ * Size * Price * Urgent or not * Final destination </td> </tr> <tr> <td> 5.6.1.3 </td> <td> Please detail the data origin </td> <td> Surveys on logistics operators and official data from customs </td> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 5.6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> Official data and surveys to logistic operators </td> </tr> <tr> <td> 5.6.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> No data extraction for statistical inferences is expected at the moment </td> </tr> <tr> <td> 5.6.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Anonymously </td> </tr> <tr> <td> **WP5 – LAS PALMAS** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 5.6.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Electronically </td> </tr> <tr> <td> 5.6.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> CTO </td> </tr> <tr> <td> 5.6.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> CTO </td> </tr> <tr> <td> 5.6.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> “Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal” </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 5.6.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> 5.6.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 24: Local Data Management Plan – WP5 (LAS PALMAS)** ## WP6 <table> <tr> <th> **WP6 -MADEIRA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 6.1.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> * Statistics on incidents on the road network (planned) * Survey of target user group to collect needs and expectations (planned) * Data about road network (planned) </td> </tr> <tr> <td> 6.1.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Statistics on incidents on the road network_ * Number of traffic accidents _Survey of target user group to collect needs and expectations_ * Paper questionnaires _Data about road network_ * Counting traffic congestions * Average speed of public transport </td> </tr> <tr> <td> 6.1.1.3 </td> <td> Please detail the data origin </td> <td> _Statistics on incidents on the road network_ * The data is stored in a database or in paper archive _Survey of target user group to collect needs and expectations_ * Public transport users, shop owners. _Data about road network_ * Visual counting or implement sensors. 
 Bus exploitation (AVM) system </td> </tr> <tr> <td> 6.1.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Survey of target user group to collect needs and expectations_  100 questionnaires </td> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 6.1.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Statistics on incidents on the road network_ * Data extraction from database _Survey of target user group to collect needs and expectations_ * Questionnaires or interviews _Data about road network_ * Data collection * Data extraction from database </td> </tr> </table> <table> <tr> <th> **WP6 -MADEIRA** </th> </tr> <tr> <td> 6.1.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> Survey of target user group to collect needs and expectations  A sampling of the target users will be selected to be provided with the questionnaires </td> </tr> <tr> <td> 6.1.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _Statistics on incidents on the road network_ * Anonymously _Survey of target user group to collect needs and expectations_ * Anonymously _Data about Road network_ * Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.1.3.1 </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> _Statistics on incidents on the road network_ * Database is stored in the office of local Police Department _Survey of target user group to collect needs and expectations_ * Questionnaires will be stored in HF, CMF, SRETC and ARDITI office _Data about Road network_ * The database will be stored in CMF or HF office </td> </tr> <tr> <td> 6.1.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> _Statistics on incidents on the road network_ * Police Department _Survey of target user group to collect needs and expectations_ * HF, CMF, SRETC and ARDITI _Data about Road network_  CMF and HF </td> </tr> <tr> <td> **WP6 -MADEIRA** </td> </tr> <tr> <td> 6.1.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> _Statistics on incidents on the road network_ * Police Department _Survey of target user group to collect needs and expectations_ * HF, CMF, SRETC and ARDITI _Data about Road network_  CMF and HF </td> </tr> <tr> <td> 6.1.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation. </td> </tr> <tr> <td> 6.1.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.1.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS under an aggregated form </td> </tr> <tr> <td> 6.1.4.2 </td> <td> Is data planned to be published as open format?
If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 25: Local Data Management Plan – WP6 (MADEIRA)** <table> <tr> <th> **WP6 - RETHYMNO** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.2.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> Design activities of the measure are starting; no data collection to be reported </td> </tr> </table> **Table 26: Local Data Management Plan – WP6 (RETHYMNO)** <table> <tr> <th> **WP6 – LIMASSOL** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.3.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _**6.1 Awareness on the use of sustainable mobility modes for leisure trips** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ * _Questionnaires targeting the user group, for awareness level, needs and expectations_ _**6.3 Bicycle challenge: competition between employees of companies** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ * _Participants’ logbook data_ _**6.4 Smart Parking Guidance System** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ _The data collection is ongoing_ </td> </tr> </table> <table> <tr> <th> **WP6 – LIMASSOL** </th> </tr> <tr> <td> 6.3.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _**6.1 Awareness on the use of sustainable mobility modes for leisure trips** _ * CO2 emissions * Kg/km CO2 x total distance covered by the students * Energy consumption * kWh/lt x total litres of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption * Questionnaires targeting the user group, for awareness level, needs and expectations * Paper questionnaires _**6.3 Bicycle challenge: competition between employees of companies** _ * CO2 emissions * Kg/km CO2 x total distance covered by the students * Energy consumption * kWh/lt x total 
litres of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption * Participants’ logbook data * Logbook _**6.4 Smart Parking Guidance System** _ * CO2 emissions * Kg/km CO2 x total distance covered by the students * Energy consumption * kWh/lt x total litres of the total distance covered * Economy * Average cost of fuel x fuel savings from energy consumption _The data collection is ongoing_ </td> </tr> <tr> <td> 6.3.1.3 </td> <td> Please detail the data origin </td> <td> * LTC * Kmeaters cycling club * LIMA * Public Works Department </td> </tr> </table> <table> <tr> <th> **WP6 – LIMASSOL** </th> </tr> <tr> <td> 6.3.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _**6.1 Awareness on the use of sustainable mobility modes for leisure trips** _ * _30 questionnaires_ _**6.3 Bicycle challenge: competition between employees of companies** _ * _100 employees_ _**6.4 Smart Parking Guidance System** _ * _2000 users_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * _CO2 emissions_ * Data extraction from database or data gathered from the field * _Energy consumption_ * Data extraction from database or data gathered from the field * _Economy_ * Data extraction from database and data gathered from the field * _Questionnaires targeting the user group, for awareness level, needs and expectations_ * Data from questionnaires </td> </tr> <tr> <td> 6.3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> * Since already existing data from current surveys will also be used, the sampling will not be random, but it may still be enough for statistical analysis * The 
only sampled data that will be random will be the questionnaires, since this survey will involve randomly selected tourists and local citizens for questioning </td> </tr> </table> <table> <tr> <th> **WP6 – LIMASSOL** </th> </tr> <tr> <td> 6.3.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> The questionnaires will be anonymous apart from the bicycle challenge, where the name of the participant needs to be known as there will be a prize for the winner. However, no other personal information that would intrude on the individual’s private life will be collected </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.3.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * All the data extracted from the mentioned databases is stored in the relevant involved partner’s database * The data from the questionnaires will be stored in the involved partner’s office </td> </tr> <tr> <td> 6.3.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> * STRATAGEM * LTC * LIMA </td> </tr> <tr> <td> 6.3.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * STRATAGEM * LTC * LIMA </td> </tr> <tr> <td> 6.3.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> The Charter of Fundamental Rights of the European Union </td> </tr> <tr> <td> 6.3.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> * The Market Research Society Code of Conduct * ISO 20252 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.3.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> The data will be used for the dissemination of the project and will be aggregated </td> </tr> <tr> <td> 6.3.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 27: Local Data Management Plan – WP6 (LIMASSOL)** <table> <tr> <th> **WP6 – ELBA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.4.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> Surveys based on questionnaires on mobility services offered by the accommodation facilities (hotels, camps ...): data about the integrated package (hospitality and mobility) offer (collection to start soon) </td> </tr> <tr> <td> 6.4.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td>  Paper questionnaires </td> </tr> <tr> <td> 6.4.1.3 </td> <td> Please detail the data origin </td> <td> Questionnaire data will originate from the owners of hotels, camps, bed and breakfasts or other types of accommodation facilities </td> </tr> <tr> <td> 6.4.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Questionnaires:_  About 50 questionnaires, according to the different types of accommodation facilities around the island </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.4.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Questionnaires about integrated stay-and-mobility packages_  Questionnaires compiled through interviews </td> </tr> </table> <table> 
<tr> <th> **WP6 – ELBA** </th> </tr> <tr> <td> 6.4.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> A sample of the target users will be selected to receive the questionnaires </td> </tr> <tr> <td> 6.4.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.4.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Questionnaires will be stored in Portoferraio Municipality </td> </tr> <tr> <td> 6.4.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> Portoferraio Municipality </td> </tr> <tr> <td> 6.4.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Staff of Portoferraio and Rio Marina Municipalities employed in the project </td> </tr> <tr> <td> 6.4.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> 6.4.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.4.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? 
Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS in aggregated form </td> </tr> <tr> <td> 6.4.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 28: Local Data Management Plan – WP6 (ELBA)** <table> <tr> <th> **WP6 – MALTA** </th> </tr> <tr> <td> **Data details- MAL6.1** </td> </tr> <tr> <td> 6.5.1.1 (a) </td> <td> Which kind of data has been/will be collected in your site? </td> <td> Data that needs to be collected includes: 1. Desk and field research/data conducted by the appointed economic operator: the data type can be quantitative or qualitative in nature, and the tools used may vary in line with the exigencies of each respective data set 2. Field research carried out to collect data about hotel operations in the field of green mobility. These data sets will be accounted for through the hotel audit process (Findings Reports, Recommendations list and Improvement Plans) 3. Data related to cost structures for the green mobility plan 4. Data collection which will feed into the dissemination and awareness-raising initiatives with stakeholders. Data collection procedure is planned. The above is an indicative list of the data required for Measure 6.1, and amendments might be considered in order to implement the measure. As such, the Ministry for Tourism has not yet compiled data, but research was compiled during the period under review, i.e. M1 (Sep 16) – M6 (Feb 17), to establish an appropriate framework for the implementation of the action: * Research for the draft preliminary award criteria for the Green Mobility Hotel Award * Statistical data provided by the MTA about the Chinese market * Technical specifications compiled to establish the model structure. 
This data will serve to guide the company that will be selected to carry out market testing, stakeholder consultations, launching and subsequent implementation of a pilot model structure with the hotel industry for the Green Mobility Award and labelling scheme </td> </tr> </table> <table> <tr> <th> **WP6 – MALTA** </th> </tr> <tr> <td> **Data details- MAL6.1** </td> </tr> <tr> <td> 6.5.1.2 (a) </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> The data typology used for points 1 to 4 (as detailed in section 6.5.1.1 (a) above) can be quantitative or qualitative in nature, and the tools used (i.e. surveys, questionnaires, interviews, statistical analysis, site visits, comparative assessments, branding techniques) may vary in line with the exigencies of each respective data set. Further information about the specific data typologies shall be communicated once the Ministry for Tourism (MOT) appoints the economic operator who will be responsible for carrying out market testing, stakeholder consultations, launching and subsequent implementation of the pilot model structure with the hotel industry for the Green Mobility Hotel Award and labelling scheme. The data typologies used for the tasks carried out during the months under review, M1 (Sep 16) – M6 (Feb 17), included: * Analysis of existing labelling/awards and case study analysis * Cross-tabulation and statistical analysis from market profile data </td> </tr> <tr> <td> 6.5.1.3 (a) </td> <td> Please detail the data origin </td> <td> Further information about specific data origins used for points 1 to 4 (as detailed in section 6.5.1.1 (a) above) shall be communicated once the Ministry for Tourism appoints the economic operator who will be responsible for carrying out market testing, stakeholder consultations, launching and subsequent implementation of the pilot model structure with the hotel industry for the Green Mobility Hotel Award and labelling scheme. 
The data origins used for the tasks carried out during the months under review, M1 (Sep 16) – M6 (Feb 17), included: * Online research papers * Market profile survey (MTA) * Project progression data * Policy analysis/statistical analysis and analysis of current frameworks * Data originating from stakeholder consultations </td> </tr> </table> <table> <tr> <th> **WP6 – MALTA** </th> </tr> <tr> <td> **Data details- MAL6.1** </td> </tr> <tr> <td> 6.5.1.4 (a) </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> Precise figures about the interest registered from the eligible hotels in relation to this initiative shall be communicated once the awarded economic operator conducts the necessary evaluation with the hotels and this is subsequently made available to MOT. Eligible hotels comprise those accommodation establishments situated in the Valletta region as per the demarcation communicated to MOT by Transport for Malta </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.5.2.1 (a) </td> <td> Please detail the procedure adopted for data collection </td> <td> Further information about the procedure to be adopted for data collection, specifically in relation to points 1 to 4 (as detailed in section 6.5.1.1 (a) above), shall be communicated once the Ministry for Tourism appoints the economic operator and once this data is made available by the awarded economic operator to MOT. The procedures adopted for data collection for the months under review, M1 (Sep 16) – M6 (Feb 17), included: * Data extraction from online research papers * Applicability analysis for the local tourism accommodation offering * Data extraction from statistical tools devised by the Malta Tourism Authority * Policy analysis/statistical analysis and analysis of current frameworks * Analysis of data originating from stakeholder consultations </td> </tr> <tr> <td> 6.5.2.2 (a) </td> <td> If a sampling process is used, please confirm that the sample is 
random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> The sample chosen for the testing phase should represent the eligible population of hotels/accommodation establishments in terms of the following criteria: * number of hotel establishments by sub-region making up the Valletta region * size of hotel establishments * hotel categories * years in operation, and * most recent refurbishment/extension. The approached collective accommodation establishments need to be located in the designated area, i.e. the Valletta region (as per the demarcation communicated to MOT by Transport for Malta (Site Manager)) </td> </tr> </table> <table> <tr> <th> **WP6 – MALTA** </th> </tr> <tr> <td> **Data details- MAL6.1** </td> </tr> <tr> <td> 6.5.2.3 (a) </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> The supply-side data collected is not anonymous, as the awarded economic operator has a list of eligible collective accommodation establishments which are situated in the Valletta region and from which the sampling process should be carried out. The findings, however, may be presented on a two-pronged level: * Findings per hotel (not anonymous) * Collective findings report (anonymous), summarising the findings stemming from the individual hotel audits. This report will be used for information dissemination at a public level. Since most of the data pertaining to the eligible collective accommodation establishments will be acquired following the implementation of the hotel audits by the awarded economic operator, the Ministry for Tourism will undertake discussion processes with the selected economic operator to define adequate dissemination levels. 
This applies mostly to the individual hotel findings. The most appropriate modality should be discussed in order to safeguard the confidentiality aspect when and if necessary </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.5.3.1 (a) </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Since most of the data pertaining to the eligible collective accommodation establishments will be acquired following the implementation of the hotel audits by the awarded economic operator, the Ministry for Tourism will undertake discussion processes with the awarded economic operator to define the storage modality and adequate dissemination levels. This applies mostly to the individual hotel findings. The most appropriate modality should be discussed in order to safeguard the confidentiality aspect when and if necessary </td> </tr> <tr> <td> 6.5.3.2 (a) </td> <td> Who is the organization responsible for data storing and management? </td> <td> Ministry for Tourism </td> </tr> <tr> <td> 6.5.3.3 (a) </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Ministry for Tourism </td> </tr> <tr> <td> 6.5.3.4 (a) </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC, which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679, effective from 25 May 2018 </td> </tr> <tr> <td> 6.5.3.5 (a) </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> Data Protection Act Chapter 440 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.5.4.1 (a) </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> The data which can be disseminated will be qualified at a later stage in line with 6.5.2.3 (a) and 6.5.3.1 (a) </td> </tr> <tr> <td> 6.5.4.2 (a) </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 29: Local Data Management Plan – WP6 (MALTA) – MEASURE MAL 6.1** <table> <tr> <th> **WP6: MAL 6.2** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.5.1.1 (b) </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _Low Emission Zone:_ * Number of entries by type of vehicle _SMS Alert_ * Number of reports * Number of downloads _Data collection procedure is planned_ </td> </tr> <tr> <td> 6.5.1.2 (b) </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Low Emission Zone:_ * Number 
of entries by type of vehicle: CVA database (excel) _SMS Alert_ * Number of reports: Excel * Number of downloads: Excel </td> </tr> <tr> <td> 6.5.1.3 (b) </td> <td> Please detail the data origin </td> <td> _Low Emission Zone:_ * Number of entries by type of vehicle: CVA Operator _SMS Alert_ * Number of reports: App server * Number of downloads: App server </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.5.2.1 (b) </td> <td> Please detail the procedure adopted for data collection </td> <td> _Low Emission Zone:_ * CVA Operator collects and presents data periodically to Transport for Malta as per ongoing contract _SMS Alert_ * Number of reports: App server download * Number of downloads: App server download </td> </tr> <tr> <td> 6.5.2.2 (b) </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> No sampling will be adopted in this case </td> </tr> </table> <table> <tr> <th> **WP6: MAL 6.2** </th> </tr> <tr> <td> 6.5.2.3 (b) </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Data made available for project use will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.5.3.1 (b) </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> _Low Emission Zone:_ * Number of entries by type of vehicle: CVA Operator server _SMS Alert_ * Number of reports: App server * Number of downloads: App server </td> </tr> <tr> <td> 6.5.3.2 (b) </td> <td> Who is the organization responsible for data storing and management? 
</td> <td> _Low Emission Zone:_ * Number of entries by type of vehicle: CVA Operator _SMS Alert_ * Number of reports: Transport for Malta * Number of downloads: Transport for Malta </td> </tr> <tr> <td> 6.5.3.3 (b) </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> _Low Emission Zone:_ * Number of entries by type of vehicle: CVA Operator and TM _SMS Alert_ * Number of reports: Transport for Malta and University of Malta * Number of downloads: Transport for Malta and University of Malta </td> </tr> <tr> <td> 6.5.3.4 (b) </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC, which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679, effective from 25 May 2018 </td> </tr> <tr> <td> 6.5.3.5 (b) </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.5.4.1 (b) </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Yes, in aggregate </td> </tr> <tr> <td> 6.5.4.2 (b) </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No, due to data protection </td> </tr> </table> **Table 30: Local Data Management Plan – WP6 (MALTA) – MEASURE MAL 6.2** <table> <tr> <th> **WP6: MAL 6.3** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.5.1.1 (c) </td> <td> Which kind of data has been/will be collected in your site? </td> <td> * Baseline surveys to ascertain tourist travel behaviour * Number of downloads * Number 
of users * Mode share _Data collection procedure is planned._ </td> </tr> <tr> <td> 6.5.1.2 (c) </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Baseline surveys to ascertain tourist travel behaviour - survey * Number of downloads - excel * Number of users - excel * Mode share - excel </td> </tr> <tr> <td> 6.5.1.3 (c) </td> <td> Please detail the data origin </td> <td> * Baseline surveys to ascertain tourist travel behaviour – field data * Number of downloads - App server * Number of users - App server * Mode share - App server </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.5.2.1 (c) </td> <td> Please detail the procedure adopted for data collection </td> <td> * Baseline surveys to ascertain tourist travel behaviour – field data compiled directly with tourists * Number of downloads - App server download * Number of users - App server download * Mode share - App server download </td> </tr> <tr> <td> 6.5.2.2 (c) </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> The sample of tourists participating in the survey will be random and statistically significant </td> </tr> </table> <table> <tr> <th> **WP6: MAL 6.3** </th> </tr> <tr> <td> 6.5.2.3 (c) </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Data made available for project use will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.5.3.1 (c) </td> <td> How is data stored? 
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * Baseline surveys to ascertain tourist travel behaviour – excel * Number of downloads - App server * Number of users - App server * Mode share - App server </td> </tr> <tr> <td> 6.5.3.2 (c) </td> <td> Who is the organization responsible for data storing and management? </td> <td> Transport for Malta </td> </tr> <tr> <td> 6.5.3.3 (c) </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Transport for Malta, University of Malta </td> </tr> <tr> <td> 6.5.3.4 (c) </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC, which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679, effective from 25 May 2018 </td> </tr> <tr> <td> 6.5.3.5 (c) </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.5.4.1 (c) </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Yes, in aggregate </td> </tr> <tr> <td> 6.5.4.2 (c) </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No, due to data protection. </td> </tr> </table> **Table 31: Local Data Management Plan – WP6 (MALTA) – MEASURE MAL 6.3** <table> <tr> <th> **WP6: MAL 6.4** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 6.5.1.1 (d) </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential (to be collected during the operation of the measure) * Number of daily entrants to the city (to be collected during the operation of the measure) _Data collection procedure is planned_ </td> </tr> <tr> <td> 6.5.1.2 (d) </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential – field data * Number of daily entrants to the city – number plate recognition </td> </tr> <tr> <td> 6.5.1.3 (d) </td> <td> Please detail the data origin </td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential – field data compiled by Local Council personnel. * Number of daily entrants to the city – CVA Operations (data collected by ANPR cameras) </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 6.5.2.1 (d) </td> <td> Please detail the procedure adopted for data collection </td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential – on-site data collection * Number of daily entrants to the city – real-time data collected by ANPR cameras </td> </tr> </table> <table> <tr> <th> **WP6: MAL 6.4** </th> </tr> <tr> <td> 6.5.2.2 (d) </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> No sampling will be adopted in this case </td> </tr> <tr> <td> 6.5.2.3 (d) </td> <td> Is data collected anonymously or not? 
If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Data made available for project use will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.5.3.1 (d) </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential – excel * Number of daily entrants to the city – data will be stored as per current processes already adopted by CVA Operator </td> </tr> <tr> <td> 6.5.3.2 (d) </td> <td> Who is the organization responsible for data storing and management? </td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential – Valletta Local Council * Number of daily entrants to the city – CVA Operator </td> </tr> <tr> <td> 6.5.3.3 (d) </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * Number of parking spaces in the city of Valletta, classified by residential and non-residential – Valletta Local Council and Transport Malta * Number of daily entrants to the city – CVA Operator and Transport Malta as per ongoing contract </td> </tr> <tr> <td> 6.5.3.4 (d) </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC, which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679, effective from 25 May 2018 </td> </tr> <tr> <td> 6.5.3.5 (d) </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.5.4.1 (d) </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Yes, in aggregate </td> </tr> <tr> <td> 6.5.4.2 (d) </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 32: Local Data Management Plan – WP6 (MALTA) – MEASURE MAL 6.4** <table> <tr> <th> **WP6 - LAS PALMAS** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 6.6.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> **LPA6.1 - Green Credits Scheme**  Statistics about urban public transport cards (ongoing) </td> </tr> <tr> <td> 6.6.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Statistics about urban public transport cards_  Number of monthly users of the contactless urban public transport smart card “BonoGuagua”. The data is stored in an electronic format </td> </tr> <tr> <td> 6.6.1.3 </td> <td> Please detail the data origin </td> <td> _Statistics about urban public transport cards._  The data is stored in a database and statistical tables are produced as needed </td> </tr> <tr> <td> 6.6.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Statistics about urban public transport cards._  40 different urban public transport routes and 15 urban public transport fares to be analysed. 
</td> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 6.6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> Data extraction from database </td> </tr> <tr> <td> 6.6.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> The data collected belong to the whole database regarding urban public transport cards. </td> </tr> <tr> <td> 6.6.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> In a first step (first six months of the project), in order to analyse the suitability of the chosen card for the Green Credits Scheme, the data collected are anonymous. However, for the implementation of the Green Credits Scheme (after business model development), personal data collection will probably be needed to manage a customer loyalty database system, which will of course comply with national and international regulations regarding personal data storage, access and management </td> </tr> <tr> <td> **WP6 - LAS PALMAS** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 6.6.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Data is stored in Guaguas Municipales server (Urban Public Transport Company) </td> </tr> <tr> <td> 6.6.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> Guaguas Municipales (Urban Public Transport Company) </td> </tr> <tr> <td> 6.6.3.3 </td> <td> Through whom (organization, responsible) is data accessible?
</td> <td> Guaguas Municipales (Urban Public Transport Company) </td> </tr> <tr> <td> 6.6.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> “Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal” </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 6.6.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> 6.6.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 33: Local Data Management Plan – WP6 (LAS PALMAS)** ## WP7 <table> <tr> <th> **WP7 - MADEIRA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 7.1.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> _Data on PT service demand_ (planned) _Statistics produced by ticketing systems_ (planned) _Survey of target user group to collect needs and expectations_ (planned) </td> </tr> <tr> <td> 7.1.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Data on PT service demand_ * Number of PT users Statistics produced by ticketing systems * Number of PT users per line and bus stop _Survey of target user group to collect needs and expectations_ * Paper and online questionnaires </td> </tr> <tr> <td> 7.1.1.3 </td> <td> Please detail the data origin </td> <td> _Data on PT service demand_ * The data is stored in a database Statistics produced by ticketing systems * Number of PT users per line and bus stop _Survey of target user group to collect needs and expectations_ * The questionnaires will target public transport users. </td> </tr> <tr> <td> 7.1.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Survey of target user group to collect needs and expectations_  100 questionnaires </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 7.1.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Data on PT service demand_ * Data extraction from database Statistics produced by ticketing systems * Data extraction from database _Survey of target user group to collect needs and expectations_ * Questionnaires or interviews </td> </tr> </table> <table> <tr> <th> **WP7 - MADEIRA** </th> </tr> <tr> <td> 7.1.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _Survey of target user group to collect needs and expectations_  A sampling of the target users will be selected to be provided with the
questionnaires </td> </tr> <tr> <td> 7.1.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _Data on PT service demand_ * _Anonymously_ Statistics produced by ticketing systems * Anonymously _Survey of target user group to collect needs and expectations_ * Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 7.1.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> _Data on PT service demand_ * Database is stored in Horario do Funchal (HF) office or in a cloud service owned by HF Statistics produced by ticketing systems * Database is stored in HF office or in a cloud service owned by HF _Survey of target user group to collect needs and expectations_ * Questionnaires will be stored in HF and AREAM office. </td> </tr> <tr> <td> 7.1.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> _Data on PT service demand_ * Horario do Funchal (HF) Statistics produced by ticketing systems * HF _Survey of target user group to collect needs and expectations_ * HF and AREAM </td> </tr> <tr> <td> **WP7 - MADEIRA** </td> </tr> <tr> <td> 7.1.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> _Data on PT service demand_ * Horario do Funchal (HF) Statistics produced by ticketing systems * _HF_ _Survey of target user group to collect needs and expectations_ * _HF and AREAM_ </td> </tr> <tr> <td> 7.1.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation.
</td> </tr> <tr> <td> 7.1.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation. </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.1.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used by DESTINATIONS in aggregated form </td> </tr> <tr> <td> 7.1.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 34: Local Data Management Plan – WP7 (MADEIRA)** <table> <tr> <th> **WP7 – RETHYMNO** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 7.2.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> Data collection for the design of demo measures for Public Transport services:  Data on PT service offer _The data collection described is completed_ </td> </tr> <tr> <td> 7.2.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Data on PT service offer:_  Number of available routes, bus lines </td> </tr> <tr> <td> 7.2.1.3 </td> <td> Please detail the data origin </td> <td> _Data on PT service offer_  Timetables from _PT operator_ </td> </tr> <tr> <td> 7.2.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Data on PT service offer;_  A total of 40 bus routes have been recorded </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 7.2.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Data on PT service offer;_  Data on PT service offer were provided by the PT operator </td> </tr> <tr> <td> 7.2.2.2 </td> <td> If a sampling process is used, please
confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> No sampling process was used </td> </tr> </table> <table> <tr> <th> **WP7 – RETHYMNO** </th> </tr> <tr> <td> 7.2.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _Data on PT service offer;_  Anonymously; No personal data were collected </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 7.2.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Data on PT service offer;  Database is stored in the offices of the PT operator </td> </tr> <tr> <td> 7.2.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> Data on PT service offer;  PT operator </td> </tr> <tr> <td> 7.2.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> Data on PT service offer;  PT operator </td> </tr> <tr> <td> 7.2.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Regulations 2016/679; 2016/680; 2016/681 (EU) Regulations: 2009/136, 2006/24, 2002/58, 95/46 (EC) </td> </tr> <tr> <td> 7.2.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> Law 2472/1997 Protection of Individuals with regard to the Processing of Personal Data Law 3471/2006 Protection of personal data and privacy in the electronic telecommunications sector and amendment of law 2472/1997 </td> </tr> <tr> <td> **WP7 – RETHYMNO** </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.2.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> Data can be used for DESTINATIONS dissemination purpose under an aggregated form </td> </tr> <tr> <td> 7.2.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 35: Local Data Management Plan – WP7 (RETHYMNO)** <table> <tr> <th> **WP7 - LIMASSOL** </th> </tr> <tr> <td> **Data details** </td> </tr> </table> <table> <tr> <th> **WP7 - LIMASSOL** </th> </tr> <tr> <td> 7.3.1.1 </td> <td> Which kind of data has been/will be collected in your site? 
</td> <td> _**7.1 Improvement of PT routes, time tables, ticket procedure and bike transportation on buses to make the service more attractive** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ _**7.2 Creation of an electric bus hop on hop off service in the old town** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ * _Revenues_ _**7.3 PT Traveller Information System** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ * _Revenues_ _**7.4 Mobility application and travel planner for smart phones to provide real time information** _ * _CO2 emissions_ * _Energy consumption_ * _Economy_ _The data collection is on-going_ </td> </tr> </table> <table> <tr> <th> **WP7 - LIMASSOL** </th> </tr> <tr> <td> 7.3.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _**7.1 Improvement of PT routes, time tables, ticket procedure and bike transportation on buses to make the service more attractive** _ _CO2 emissions_ * _Kg/km CO2 x total distance covered by the students_ _Energy consumption_ * _kWh/lt x total litre of the total distance covered_ _Economy_ * _Average cost of fuel x fuel savings from energy consumption_ _**7.2 Creation of an electric bus hop on hop off service in the old town** CO2 emissions _ * _Kg/km CO2 x total distance covered by the students_ _Energy consumption_ * _kWh/lt x total litre of the total distance covered_ _Economy_ * _Average cost of fuel x fuel savings from energy consumption_ _Revenues_ * _Total ticket price x passengers per year_ _**7.3 PT Traveller Information System** _ _CO2 emissions_ * _Kg/km CO2 x total distance covered by the students_ _Energy consumption_ * _kWh/lt x total litre of the total distance covered_ _Economy_ * _Average cost of fuel x fuel savings from energy consumption_ _Revenues_ * _Total ticket price x passenger per year_ _**7.4 Mobility application and travel planner for smart** _ _**phones to provide real time information** _ _CO2 emissions_ * _Kg/km CO2 x
total distance covered by the students_ _Energy consumption_ * _kWh/lt x total litre of the total distance covered_ _Economy_ \- _Average cost of fuel x fuel savings_ </td> </tr> <tr> <td> 7.3.1.3 </td> <td> Please detail the data origin </td> <td> * _LTC_ * _LIMA_ * _Public Works Department_ </td> </tr> </table> <table> <tr> <th> **WP7 - LIMASSOL** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 7.3.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * _CO2 emissions_ * Data extraction from database or data gathered from the field * _Energy consumption_ * Data extraction from database or data gathered from the field * _Economy_ * Data extraction from database and data gathered from the field </td> </tr> <tr> <td> 7.3.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> Since already existing data from current surveys will also be used, the sampling will not be random, but it may nevertheless be sufficient for statistical analysis </td> </tr> <tr> <td> 7.3.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> The data collected will be anonymous </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 7.3.3.1 </td> <td> How is data stored?
Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * All the data extracted from the mentioned databases is stored in the involved partner’s database * The data from the questionnaires will be stored in the involved partner’s office </td> </tr> <tr> <td> **WP7 - LIMASSOL** </td> </tr> <tr> <td> 7.3.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td>  LTC </td> </tr> <tr> <td> 7.3.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * LTC * STRATAGEM </td> </tr> <tr> <td> 7.3.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> The Charter of Fundamental Rights of EU States </td> </tr> <tr> <td> 7.3.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> * The Market Research Society code of conduct * ISO 20252 </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.3.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> The data will be used for the dissemination of the project and will be aggregated </td> </tr> <tr> <td> 7.3.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 36: Local Data Management Plan – WP7 (LIMASSOL)** <table> <tr> <th> **WP7 - ELBA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 7.4.1.1 </td> <td> Which kind of data has been/will be collected in your site?
</td> <td> Data collection will relate to: * Analysis of Public Transport (PT) network and offer * Analysis of data collected during the operation by AVM system (assessment of service performances) in order to identify any weakness and aspects to be improved (as feedback for planning) * Survey on satisfaction level of users on current PT offer * Survey on users’ needs collection Data collection for this demo WP has been planned but not operationally launched. </td> </tr> <tr> <td> 7.4.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Bus lines and stops * Coverage time * Travelling time per line, stop by stop * Interchange points * Comodality (between ferry timetable and bus lines to/from Piombino and from/to Portoferraio/Rio Marina) </td> </tr> <tr> <td> 7.4.1.3 </td> <td> Please detail the data origin </td> <td> Two Public Transport Operators operating the service in Elba (CTT Nord) and to/from Piombino, harbour to ELBA (Tiemme) </td> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 7.4.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> Service contracted from CTT Nord and Tiemme </td> </tr> <tr> <td> 7.4.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> No sampling procedure adopted </td> </tr> </table> <table> <tr> <th> **WP7 - ELBA** </th> </tr> <tr> <td> 7.4.2.3 </td> <td> Is data collected anonymously or not? 
If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Survey on satisfaction level of users on current PT offer Survey on users’ needs collection Both will be collected anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 7.4.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> Analysis of Public Transport (PT) network and offer * Scheduled service and timetable (planning SW, SQL database) Analysis of data collected during the operation of service and performances assessment * AVM system (SQL database) Survey * Paper questionnaires </td> </tr> <tr> <td> 7.4.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> CTT Nord and Tiemme </td> </tr> <tr> <td> 7.4.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> CTT Nord and Tiemme, Rio Marina and Portoferraio Municipality </td> </tr> <tr> <td> 7.4.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply international regulation </td> </tr> <tr> <td> 7.4.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> All the data is anonymous; there is no need to apply national regulation </td> </tr> <tr> <td> **WP7 - ELBA** </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.4.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> 7.4.4.2 </td> <td> Is data planned to be published as open format? 
If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 37: Local Data Management Plan – WP7 (ELBA)** <table> <tr> <th> **WP7 - MALTA** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 7.5.1.1 </td> <td> Which kind of data have been/will be collected in your site? </td> <td> * Data on PT service demand * Statistics produced by the systems already operated (i.e. ticketing) * Survey on users’ needs and expectations _Data collection procedure is planned._ </td> </tr> <tr> <td> 7.5.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> * Data on PT service demand – questionnaire with users * Statistics produced by the systems already operated Ferry ticketing data, user data during pilot * Survey on users’ needs and expectations – survey with users </td> </tr> <tr> <td> 7.5.1.3 </td> <td> Please detail the data origin </td> <td> * Data on PT service demand – ferry users * Statistics produced by the systems already operated ferry operator, PT operator * Survey on users’ needs and expectations – PT users during pilot </td> </tr> </table> <table> <tr> <th> **WP7 - MALTA** </th> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 7.5.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> * Data on PT service demand – questionnaire with users * Statistics produced by the systems already operated Ferry ticketing data, user data during pilot * Survey on users’ needs and expectations – survey with users </td> </tr> <tr> <td> 7.5.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> No sampling will be adopted in this case </td> </tr> <tr> <td> 7.5.2.3 </td> <td> Is data collected anonymously or 
not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 7.5.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> * Data on PT service demand – pdf * Statistics produced by the systems already operated Ferry ticketing data, user data during pilot - excel * Survey on users’ needs and expectations – pdf </td> </tr> <tr> <td> **WP7 - MALTA** </td> </tr> <tr> <td> 7.5.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> * Data on PT service demand – Transport Malta * Statistics produced by the systems already operated – ferry operator * Survey on users’ needs and expectations – Transport Malta </td> </tr> <tr> <td> 7.5.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> * Data on PT service demand – Transport Malta * Statistics produced by the systems already operated – ferry operator and Transport Malta as per ongoing contract * Survey on users’ needs and expectations – Transport Malta </td> </tr> <tr> <td> 7.5.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679 which will become effective on 25 May 2018. </td> </tr> <tr> <td> 7.5.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.5.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? 
Please indicate the format (aggregated/not aggregated) </td> <td> Yes, in aggregate </td> </tr> <tr> <td> 7.5.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 38: Local Data Management Plan – WP7 (MALTA)** <table> <tr> <th> **WP7 – LAS PALMAS** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 7.6.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> _**LPA 7.2 - Hybrid buses in the urban bus fleet** _ * _Urban Public transport buses features (planned)_ **LPA 7.3 - Real time mobility and tourism information services** * _Statistics about urban public transport trips at bus stops (carried out)_ </td> </tr> <tr> <td> 7.6.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> _Urban Public transport buses features_  _Average age, fuel consumption, size_ _Statistics about urban public transport trips at bus stops_  _Number of travellers at bus stops_ </td> </tr> <tr> <td> 7.6.1.3 </td> <td> Please detail the data origin </td> <td> _Urban Public transport buses features_ * _The data is stored in a database and statistics data tables are made depending on the needs_ _Statistics about urban public transport trips at bus stops_ * _The data is stored in a database and statistics data tables are made depending on the needs_ </td> </tr> <tr> <td> 7.6.1.4 </td> <td> Please provide some figure allowing to estimate the data dimension </td> <td> _Urban Public transport buses features_  _242 buses_ _Statistics about urban public transport trips at bus stops_  _784 bus stops, 2.8 M passengers monthly_ </td> </tr> <tr> <td> **Data collection procedures** </td> </tr> <tr> <td> 7.6.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> _Urban Public transport buses’ features_  Data extraction from database _Statistics about urban public
transport trips at bus stops_  Data extraction from database </td> </tr> <tr> <td> 7.6.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> _Urban Public transport buses features_ * The data collected belong to the whole database regarding urban public transport buses _Statistics about urban public transport trips at bus stops_ * The data collected belong to two representative months (May and October) </td> </tr> </table> <table> <tr> <th> **WP7 – LAS PALMAS** </th> </tr> <tr> <td> 7.6.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way preventing the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> _Urban Public transport buses features_ * Anonymously _Statistics about urban public transport trips at bus stops_ * Anonymously </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 7.6.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> _Statistics about urban public transport trips at bus stops._  _Data is stored in Guaguas Municipales server (Urban Public Transport Company)_ </td> </tr> <tr> <td> 7.6.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> _Statistics about urban public transport trips at bus stops._  _Guaguas Municipales (Urban Public Transport Company)_ </td> </tr> <tr> <td> 7.6.3.3 </td> <td> Through whom (organization, responsible) is data accessible? 
</td> <td> _Statistics about urban public transport trips at bus stops._  _Guaguas Municipales (Urban Public Transport Company)_ </td> </tr> <tr> <td> 7.6.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? (for reference, please see D1.1) </td> <td> _Statistics about urban public transport trips at bus stops._ In case a national regulation is needed, the national regulation applicable will be “Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal” </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 7.6.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> No </td> </tr> <tr> <td> **WP7 – LAS PALMAS** </td> <td> </td> </tr> <tr> <td> 7.6.4.2 </td> <td> Is data planned to be published as open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 39: Local Data Management Plan – WP7 (LAS PALMAS)** ## WP9 <table> <tr> <th> **WP9 - MADEIRA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 9.1.1.1 </td> <td> Which kind of data has been collected in your site? </td> <td> WP9 will deal with all the data described in WP2 to WP7 </td> </tr> </table> **Table 40: Local Data Management Plan – WP9 (MADEIRA)** <table> <tr> <th> **WP9 - RETHYMNO** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 9.2.1.1 </td> <td> Which kind of data has been collected in your site? 
</td> <td> _Baseline data (as described in WP2)_ * Data about road network * Statistics on traffic accidents, deaths and injuries </td> </tr> </table> **Table 41: Local Data Management Plan – WP9 (RETHYMNO)** <table> <tr> <th> **WP9 - LIMASSOL** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 9.3.1.1 </td> <td> Which kind of data has been collected in your site? </td> <td> WP9 will deal with all the data described in WP2 to WP7 </td> </tr> </table> **Table 42: Local Data Management Plan – WP9 (LIMASSOL)** <table> <tr> <th> **WP9 - ELBA** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 9.4.1.1 </td> <td> Which kind of data has been collected in your site? </td> <td> WP9 will deal with all the data described in WP2 to WP7 </td> </tr> </table> **Table 43: Local Data Management Plan – WP9 (ELBA)** <table> <tr> <th> **WP9** </th> </tr> <tr> <td> **Data details** </td> </tr> <tr> <td> 9.5.1.1 </td> <td> Which kind of data has been/will be collected in your site? </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level Transport - Traffic flow by vehicle (peak) -avg. vehicles per hour **MAL4.1** Society - Acceptance level / Awareness level **MAL5.1** Transport – Freight movements Transport – Service reliability Economy – Average Operating Costs Energy – Vehicle Fuel Efficiency - fuel use per vkm Environment – CO2 emission - CO2/vkm/type Transport – Goods carried - kg Society - Awareness level **MAL6.1** Society – Acceptance level / Awareness level Society – Satisfaction **MAL6.2** Health – Number of polluting vehicles reported - number Society – Acceptance level / Awareness level **MAL6.3** Social Interactions – No. 
of users Transport – Modal split of users **MAL6.4** Economy – Operating Transport – Use of space for parking Transport – Traffic levels - vehicles/hr Society – Satisfaction **MAL7.1** Society – Satisfaction Energy – Fuel Mix Society – Awareness level </td> </tr> </table> <table> <tr> <th> **WP9** </th> </tr> <tr> <td> 9.5.1.2 </td> <td> Please detail data typology and structure/format (if applicable) </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level – excel table Transport - Traffic flow by vehicle (peak) -avg. vehicles per hour – excel table **MAL4.1** Society - Acceptance level / Awareness level – excel table **MAL5.1** Transport – Freight movements – excel table Transport – Service reliability – excel table Economy – Average Operating Costs – excel table Energy – Vehicle Fuel Efficiency - fuel use per vkm – excel table Environment – CO2 emission - CO2/vkm/type – excel table Transport – Goods carried - kg – excel table Society - Awareness level – excel table **MAL6.1** Society – Acceptance level / Awareness level – excel table Society – Satisfaction – excel table **MAL6.2** Health – Number of polluting vehicles reported - number – excel table Society – Acceptance level / Awareness level – excel table **MAL6.3** Social Interactions – No. of users – excel table Transport – Modal split of users – excel table **MAL6.4** Economy – Operating – excel table Transport – Use of space for parking – excel table Transport – Traffic levels - vehicles/hr – excel table Society – Satisfaction – excel table **MAL7.1** Society – Satisfaction – excel table Energy – Fuel Mix – excel table Society – Awareness level – excel table </td> </tr> <tr> <td> 9.5.1.3 </td> <td> Please detail the data origin </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level - survey Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – CVA operator **MAL4.1** Society - Acceptance level / Awareness level - survey **MAL5.1** Transport – Freight movements - survey Transport – Service reliability - operator Economy – Average Operating Costs - operator Energy – Vehicle Fuel Efficiency - fuel use per vkm - operator Environment – CO2 emission - CO2/vkm/type - operator Transport – Goods carried – kg - operator Society - Awareness level - survey **MAL6.1** Society – Acceptance level / Awareness level -survey Society – Satisfaction - survey **MAL6.2** Health – Number of polluting vehicles reported – number – system server Society – Acceptance level / Awareness level - survey **MAL6.3** Social Interactions – No. of users – system server Transport – Modal split of users – survey / system server **MAL6.4** Economy – Operating - operator Transport – Use of space for parking – system server Transport – Traffic levels - vehicles/hr – CVA operator Society – Satisfaction - survey **MAL7.1** Society – Satisfaction - survey Energy – Fuel Mix - operator Society – Awareness level - survey </td> </tr> </table> <table> <tr> <th> **WP9** </th> <th> </th> </tr> <tr> <td> **Data collection procedures** </td> <td> </td> </tr> <tr> <td> 9.5.2.1 </td> <td> Please detail the procedure adopted for data collection </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level - survey Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – extracted from CVA operator **MAL4.1** Society - Acceptance level / Awareness level - survey **MAL5.1** Transport – Freight movements - survey Transport – Service reliability – provided by operator Economy – Average Operating Costs – provided by operator Energy – Vehicle Fuel Efficiency - fuel use per vkm – provided by operator Environment – CO2 emission - CO2/vkm/type – provided by operator Transport – Goods carried – kg – provided by operator Society - Awareness level - survey **MAL6.1** Society – Acceptance level / Awareness level -survey Society – Satisfaction - survey **MAL6.2** Health – Number of polluting vehicles reported – number – extracted from system server Society – Acceptance level / Awareness level - survey **MAL6.3** Social Interactions – No. of users – extracted from system server Transport – Modal split of users – survey / system server **MAL6.4** Economy – Operating – provided by operator Transport – Use of space for parking – extracted from system server Transport – Traffic levels - vehicles/hr – extracted from CVA operator Society – Satisfaction - survey **MAL7.1** Society – Satisfaction - survey Energy – Fuel Mix – provided by operator Society – Awareness level - survey </td> </tr> </table> <table> <tr> <th> **WP9** </th> </tr> <tr> <td> 9.5.2.2 </td> <td> If a sampling process is used, please confirm that the sample is random and of a size that can be analysed with the ability to make statistical inference for the overall sample and for the most significant subsample breakdowns (for reference, please see D1.1) </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level – survey (random) Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – CVA operator **MAL4.1** Society - Acceptance level / Awareness level – survey (random) **MAL5.1** Transport – Freight movements – survey (random) Transport – Service reliability - operator Economy – Average Operating Costs - operator Energy – Vehicle Fuel Efficiency - fuel use per vkm - operator Environment – CO2 emission - CO2/vkm/type - operator Transport – Goods carried – kg - operator Society - Awareness level – survey (random) **MAL6.1** Society – Acceptance level / Awareness level - survey Society – Satisfaction – survey (random) **MAL6.2** Health – Number of polluting vehicles reported – number – system server Society – Acceptance level / Awareness level – survey (random) **MAL6.3** Social Interactions – No. of users – system server Transport – Modal split of users – survey (random) / system server **MAL6.4** Economy – Operating - operator Transport – Use of space for parking – system server Transport – Traffic levels - vehicles/hr – CVA operator Society – Satisfaction – survey (random) **MAL7.1** Society – Satisfaction – survey (random) Energy – Fuel Mix - operator Society – Awareness level – survey (random) </td> </tr> <tr> <td> 9.5.2.3 </td> <td> Is data collected anonymously or not? If not, please confirm that data is collected in such a way as to prevent the tracking of personal habits or feelings (for reference, please see D1.1) </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level – survey (anonymous) Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – CVA operator (anonymous) **MAL4.1** Society - Acceptance level / Awareness level – survey (anonymous) **MAL5.1** Transport – Freight movements – survey (anonymous) Transport – Service reliability - operator Economy – Average Operating Costs - operator Energy – Vehicle Fuel Efficiency - fuel use per vkm - operator Environment – CO2 emission - CO2/vkm/type - operator Transport – Goods carried – kg - operator Society - Awareness level – survey (anonymous) **MAL6.1** Society – Acceptance level / Awareness level –survey (anonymous) Society – Satisfaction – survey (anonymous) **MAL6.2** Health – Number of polluting vehicles reported – number – system server Society – Acceptance level / Awareness level – survey (anonymous) **MAL6.3** Social Interactions – No. of users – system server (anonymous) Transport – Modal split of users – survey / system server (anonymous) **MAL6.4** Economy – Operating - operator Transport – Use of space for parking – system server Transport – Traffic levels - vehicles/hr – CVA operator (anonymous) Society – Satisfaction – survey (anonymous) **MAL7.1** Society – Satisfaction – survey (anonymous) Energy – Fuel Mix – operator Society – Awareness level – survey (anonymous) </td> </tr> </table> <table> <tr> <th> **WP9** </th> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 9.5.3.1 </td> <td> How is data stored? Please detail where the data is stored and in which modality/format (if applicable) </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level – excel table - TM Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – excel table - TM **MAL4.1** Society - Acceptance level / Awareness level – excel table - TM **MAL5.1** Transport – Freight movements – excel table - TM Transport – Service reliability – excel table - TM Economy – Average Operating Costs – excel table - TM Energy – Vehicle Fuel Efficiency - fuel use per vkm – excel table - TM Environment – CO2 emission - CO2/vkm/type – excel table - TM Transport – Goods carried - kg – excel table - TM Society - Awareness level – excel table - TM **MAL6.1** Society – Acceptance level / Awareness level – excel table - MOT Society – Satisfaction – excel table - MOT **MAL6.2** Health – Number of polluting vehicles reported - number – excel table – TM / UOM Society – Acceptance level / Awareness level – excel table - TM **MAL6.3** Social Interactions – No. of users – excel table – TM / UOM Transport – Modal split of users – excel table – TM / UOM **MAL6.4** Economy – Operating – excel table – TM / VLC Transport – Use of space for parking – excel table – TM / VLC Transport – Traffic levels - vehicles/hr – excel table – TM / VLC Society – Satisfaction – excel table – TM / VLC **MAL7.1** Society – Satisfaction – excel table TM Energy – Fuel Mix – excel table TM Society – Awareness level – excel table TM </td> </tr> </table> <table> <tr> <th> **WP9** </th> </tr> <tr> <td> 9.5.3.2 </td> <td> Who is the organization responsible for data storing and management? </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level – excel table - TM Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – excel table - TM **MAL4.1** Society - Acceptance level / Awareness level – excel table - TM **MAL5.1** Transport – Freight movements – excel table - TM Transport – Service reliability – excel table - TM Economy – Average Operating Costs – excel table - TM Energy – Vehicle Fuel Efficiency - fuel use per vkm – excel table - TM Environment – CO2 emission - CO2/vkm/type – excel table - TM Transport – Goods carried - kg – excel table - TM Society - Awareness level – excel table - TM **MAL6.1** Society – Acceptance level / Awareness level – excel table - MOT Society – Satisfaction – excel table - MOT **MAL6.2** Health – Number of polluting vehicles reported - number – excel table – TM / UOM Society – Acceptance level / Awareness level – excel table - TM **MAL6.3** Social Interactions – No. of users – excel table – TM / UOM Transport – Modal split of users – excel table – TM / UOM **MAL6.4** Economy – Operating – excel table – TM / VLC Transport – Use of space for parking – excel table – TM / VLC Transport – Traffic levels - vehicles/hr – excel table – TM / VLC Society – Satisfaction – excel table – TM / VLC **MAL7.1** Society – Satisfaction – excel table TM Energy – Fuel Mix – excel table TM Society – Awareness level – excel table TM </td> </tr> <tr> <td> 9.5.3.3 </td> <td> Through whom (organization, responsible) is data accessible? </td> <td> _**MAL 2.2** _ _Society - Acceptance level / Awareness level – excel table - TM_ _Transport - Traffic flow by vehicle (peak) -avg. 
vehicles per hour – excel table - TM_ **MAL4.1** Society - Acceptance level / Awareness level – excel table - TM **MAL5.1** Transport – Freight movements – excel table - TM Transport – Service reliability – excel table - TM Economy – Average Operating Costs – excel table - TM Energy – Vehicle Fuel Efficiency - fuel use per vkm – excel table - TM Environment – CO2 emission - CO2/vkm/type – excel table - TM Transport – Goods carried - kg – excel table - TM Society - Awareness level – excel table - TM **MAL6.1** Society – Acceptance level / Awareness level – excel table - MOT Society – Satisfaction – excel table - MOT **MAL6.2** Health – Number of polluting vehicles reported - number – excel table – TM / UOM Society – Acceptance level / Awareness level – excel table - TM **MAL6.3** Social Interactions – No. of users – excel table – TM / UOM Transport – Modal split of users – excel table – TM / UOM **MAL6.4** Economy – Operating – excel table – TM / VLC Transport – Use of space for parking – excel table – TM / VLC Transport – Traffic levels - vehicles/hr – excel table – TM / VLC Society – Satisfaction – excel table – TM / VLC **MAL7.1** Society – Satisfaction – excel table TM Energy – Fuel Mix – excel table TM Society – Awareness level – excel table TM </td> </tr> </table> <table> <tr> <th> **WP9** </th> </tr> <tr> <td> 9.5.3.4 </td> <td> Which international regulation will be applied for data storing and access? (for reference, please see D1.1) </td> <td> Directive 95/46/EC which is due to be replaced by the General Data Protection Regulation (GDPR) EU 2016/679 which will become effective on 25 May 2018. </td> </tr> <tr> <td> 9.5.3.5 </td> <td> Which national regulation and applicable ‘opinion statements’ will be applied for data storing and access? 
(for reference, please see D1.1) </td> <td> Chapter 440 Data Protection Act </td> </tr> <tr> <td> **WP9** </td> </tr> <tr> <td> **Data availability for dissemination** </td> </tr> <tr> <td> 9.5.4.1 </td> <td> Is data usable for DESTINATIONS dissemination purpose? Please indicate the format (aggregated/not aggregated) </td> <td> **MAL 2.2** Society - Acceptance level / Awareness level – excel table AGGREGATED Transport - Traffic flow by vehicle (peak) -avg. vehicles per hour – excel table – TM AGGREGATED **MAL4.1** Society - Acceptance level / Awareness level – excel table – TM AGGREGATED **MAL5.1** Transport – Freight movements – excel table – TM NOT AVAILABLE Transport – Service reliability – excel table – TM NOT AVAILABLE Economy – Average Operating Costs – excel table – TM NOT AVAILABLE Energy – Vehicle Fuel Efficiency - fuel use per vkm – excel table – TM NOT AVAILABLE Environment – CO2 emission - CO2/vkm/type – excel table – TM NOT AVAILABLE Transport – Goods carried - kg – excel table – TM NOT AVAILABLE Society - Awareness level – excel table – TM AGGREGATED **MAL6.1** Society – Acceptance level / Awareness level – excel table – MOT AGGREGATED Society – Satisfaction – excel table – MOT AGGREGATED **MAL6.2** Health – Number of polluting vehicles reported - number – excel table – TM / UOM NOT AVAILABLE Society – Acceptance level / Awareness level – excel table – TM AGGREGATED **MAL6.3** Social Interactions – No. 
of users – excel table – TM / UOM AGGREGATED Transport – Modal split of users – excel table – TM / UOM AGGREGATED **MAL6.4** Economy – Operating – excel table – TM / VLC NOT AVAILABLE Transport – Use of space for parking – excel table – TM / VLC NOT AVAILABLE Transport – Traffic levels - vehicles/hr – excel table – TM / VLC NOT AVAILABLE Society – Satisfaction – excel table – TM / VLC AGGREGATED **MAL7.1** Society – Satisfaction – excel table TM AGGREGATED Energy – Fuel Mix – excel table TM NOT AVAILABLE Society – Awareness level – excel table TM AGGREGATED </td> </tr> <tr> <td> 9.5.4.2 </td> <td> Is data planned to be published in an open format? If so, please describe the technological solution used and the metadata format. </td> <td> No </td> </tr> </table> **Table 44: Local Data Management Plan – WP9 (MALTA)** <table> <tr> <th> **WP9 – LAS PALMAS** </th> <th> </th> </tr> <tr> <td> **Data details** </td> <td> </td> </tr> <tr> <td> 9.6.1.1 </td> <td> Which kind of data has been collected in your site? </td> <td> WP9 will deal with all the data described in WP2 to WP7 </td> </tr> </table> **Table 45: Local Data Management Plan – WP9 (LAS PALMAS)** **4 Conclusions**

D1.3 details the data typologies collected, under collection, or planned for collection for the design of demo measures. For each data typology, the procedures adopted by sites for data collection, handling and storing are described. The description is provided per WP, per site and per data typology. The covered period for this description is M4-M6 (December 2016 – February 2017).
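Table 44 distinguishes between data disseminated in "aggregated" form and data that is not available. Purely as an illustration of that aggregation step, the sketch below collapses hypothetical anonymous survey records (measure identifier plus acceptance score, no respondent identifiers, matching the anonymous-collection requirement in 9.5.2.3) into one summary row per measure; all names and values are assumptions, not project data.

```python
from collections import defaultdict

# Hypothetical anonymous survey records: (measure, acceptance score 1-5).
responses = [
    ("MAL2.2", 4), ("MAL2.2", 3), ("MAL2.2", 5),
    ("MAL4.1", 2), ("MAL4.1", 4),
]

def aggregate(records):
    """Collapse raw responses into the aggregated dissemination form
    (one row per measure with respondent count and mean score)."""
    buckets = defaultdict(list)
    for measure, score in records:
        buckets[measure].append(score)
    return {m: {"n": len(s), "mean": sum(s) / len(s)}
            for m, s in buckets.items()}

print(aggregate(responses))
```

Only such summary rows, never the raw per-respondent records, would be handed over for dissemination.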
1094_SMARTool_689068.md
**1\. Executive Summary** The current deliverable is the second version of the SMARTool Data Management Plan (DMP) and describes the procedures of data generation and collection within the project. Specifically, the standards and the methodology followed for collection and processing are defined, and the methodology for making these data accessible for re-use is described [1]. The SMARTool project includes the gathering and analysis of numerous datasets, ranging from user requirements to clinical and other patient-related data. Specifically, datasets were collected in two ways: (i) data related to the users' personal information, demographics, clinical, imaging, molecular, omics and other health-related data, anonymized on premises, which assisted in the development of the SMARTool platform; (ii) data generated through the application of the multivariate analysis, the data mining and non-imaging classification algorithm for CAD stratification, the multiscale and multilevel site-specific models for plaque progression, the non-invasive FFR, the pharmacological therapy modulation algorithms using data mining techniques, and the virtual stent deployment approach. The latter constitutes the main output of the SMARTool platform. This second version of the deliverable incorporates the feedback provided by the SMARTool partners and the datasets they contributed to. More specifically, the deliverable was circulated to the project partners responsible for the different tasks, who defined the dataset descriptions, the procedures of data collection and analysis, and the methods and processes followed to ensure adherence to ethics requirements. Most of the SMARTool datasets involve data collected from CAD patients; the produced and processed data were handled with caution, following all the ethical and privacy requirements that apply to such datasets. 
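The "anonymized on premises" step mentioned above can be made concrete with a small sketch. This is not the actual SMARTool procedure: the record layout, the direct-identifier list, the site salt and the pseudonym length are all assumptions, and strictly speaking a salted one-way hash yields a pseudonym rather than full anonymisation.

```python
import hashlib

# Hypothetical record layout; field names are illustrative only.
record = {
    "patient_id": "UZ-000123",
    "name": "Jane Doe",
    "age": 64,
    "ccta_finding": "non-obstructive CAD",
}

DIRECT_IDENTIFIERS = {"name"}          # fields dropped entirely
SITE_SALT = b"per-site-secret-salt"    # kept on premises, never uploaded

def anonymize(rec):
    """Remove direct identifiers and replace the patient ID with a
    salted one-way hash, so the uploaded record cannot be linked back
    to the patient without the on-premises salt."""
    out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256(SITE_SALT + rec["patient_id"].encode()).hexdigest()
    out["patient_id"] = digest[:16]    # shortened pseudonym
    return out

print(anonymize(record))
```

The salted hash is deterministic per site, so follow-up records for the same patient still link together on the platform without revealing the original identifier.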
This is the second version of the DMP of SMARTool, and the datasets described reflect the collected and processed data. These data were enriched with additional data (collected and processed), as the content of the datasets is continuously updated during the project. However, the main principles and processes of data collection, processing and preservation remain those described in the previous version (version 1) of the document. The deliverable is composed of different chapters and sub-sections. **Chapter 2** consists of * an overview of the SMARTool project, its overall vision and objectives * the main scope of this deliverable, including the objectives of the DMP * the steps and actions involved in the data management cycle and * the background of the SMARTool DMP. **Chapter 3** presents * the principle of SMARTool FAIR data management * the main categories of the SMARTool data * the data access and storage processes * the available databases along with the role levels and permissions and * a detailed description of the datasets collected and processed per WP, the involved partners, the standards and metadata, the data exploitation and sharing, and the archiving and preservation (including storage and backup) processes. **Chapter 4** presents the types of data and their processes, as well as the visualization of data lineage. **In Chapter 5,** the specific methods and tools for ensuring data security, including data anonymization, encryption, storage and backups, are described. The chapter also covers the ethical considerations related to the protection of the enrolled patients, and the EU legislation, ethics documents, expert opinions and Ethical Committees that were taken into consideration during the lifecycle of the SMARTool project. **2\. 
Introduction** A new element that has been included in Horizon 2020 projects is the Data Management Plan (DMP), which describes the data generated in the framework of the project and how this data is made accessible [2]. The D4.2 – Data Management Plan v2 is the second version of the DMP of the SMARTool project, which has received funding from the European Union's Horizon 2020 Programme under Grant Agreement number 689068. This second version of the DMP provides an overview of the produced datasets and the specific conditions that relate to them. **About the project** Coronary artery disease (CAD) is a chronic disease with a high prevalence and epidemic proportions in the middle-aged and elderly population, accounting for about 50% of all deaths [3], [4]. Medical therapy, lifestyle and diet suggestions and clinical risk factors, including dyslipidemia, hypertension and diabetes, are among the key factors in patient-specific CAD management. Currently, an integrated site-specific and patient-specific comprehensive predictive model of plaque progression is lacking. Although several algorithms have been developed for primary and secondary prevention of CAD, a unified platform that incorporates all local and systemic risk factors into a personalized clinical decision support tool for stratification, diagnosis, prediction and treatment is not available. SMARTool provides a Cloud platform for enabling clinical decision support for prevention of CAD. This is achieved through the standardization and integration of health data from heterogeneous sources and existing patient/artery-specific multiscale and multilevel predictive models. Specifically, the SMARTool models rely on the extension of the already available multiscale and multilevel ARTreat models for coronary plaque assessment and progression over time. 
Moreover, it utilizes non-invasive imaging by coronary computed tomography angiography (CCTA) to provide functional site-specific assessment of arterial stenosis based on the non-invasive Fractional Flow Reserve (SmartFFR) and provides additional heterogeneous patient-specific models for risk stratification, stent deployment and medical treatment prediction. More specifically, SMARTool allows: * **Patient-specific CAD stratification.** Specifically, the already available clinical stratification models (e.g. Framingham, Genders, GES - Gene Expression Score) are extended with patient genotyping and phenotyping (cellular/molecular markers) and evaluated on retrospective and prospective clinical data (EVINCI project population), stratifying the patients into the following categories: non-obstructive CAD, obstructive CAD, and no CAD. * **Site-specific plaque progression prediction.** Existing multiscale and multilevel models for plaque progression prediction (ARTreat project) are updated and refined through the incorporation of additional genotyping and phenotyping features and evaluated on retrospective/prospective non-invasive CCTA imaging data plus non-imaging patient-specific information (follow-up - EVINCI population). * **Patient-specific CAD diagnosis and CAD-related CHD treatment.** Personalised, patient-specific therapeutic management (e.g. lifestyle changes, standard or high-intensity medical therapy) is provided. Additionally, the interventional cardiologists are able to select the optimal stent type(s) and site(s) for appropriate stent deployment through the utilization of the SMARTool virtual angioplasty tool. Additionally, the final Clinical Decision Support System (CDSS) includes a microfluidic lab-on-chip device for blood analysis of cellular/molecular inflammatory markers. The overall platform will be assessed in the participating clinical sites. 
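The three stratification categories described above can be made concrete with a toy example. The rule below uses only a maximum-stenosis threshold; the 50% cut-off is a common clinical convention for "obstructive" disease, not the SMARTool model, which combines clinical, genotype and phenotype features.

```python
def stratify(max_stenosis_pct):
    """Toy three-way CAD stratification from the maximum CCTA stenosis.
    Illustrative thresholds only; the real stratification models also
    use clinical, genetic and molecular markers."""
    if max_stenosis_pct == 0:
        return "no CAD"
    if max_stenosis_pct < 50:
        return "non-obstructive CAD"
    return "obstructive CAD"

for pct in (0, 30, 70):
    print(pct, "->", stratify(pct))
```

In the real platform such a rule would be one small component of a multivariate classifier rather than a stand-alone decision.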
_**2.1 Purpose of the SMARTool Data Management Plan** _ The DMP is a cornerstone of good data management. It covers the data management life cycle for the data which are collected, processed and created during the SMARTool project. More specifically, the DMP concerns the following activities: * research data management during / after the project end * description of the type of the collected, processed and generated data * definition of standards that are used for data storage, safety and security * description of the data which are shared or made openly accessible * process of data storage * assistance in streamlining the whole research process. The DMP defines _a priori_ the required data processes and the facilities to store data. The DMP: (i) addresses, in combination with ethical issues, the availability of research data, (ii) defines the measures and processes that were followed for proper data anonymization and assurance of their privacy, and (iii) describes the strategy that is followed for open data, which does not violate the conditions of the interlinked Research and Innovation projects. Figure 1 presents the steps and actions involved in the SMARTool data management cycle. _Figure 1: Overview of the SMARTool data management cycle._ As far as research data is concerned, SMARTool provides access to this data through the CRFA platform. The imaging data is stored in the cloud-based 3DnetMedical.com DICOM-compliant database in B3D's UK-based ISO 27001-accredited datacenter, providing security, redundancy, reliability and scalability. DICOM studies can be automatically uploaded from PACS systems through 3Dnet Gateway using 2048-bit encryption, or can be uploaded manually by the user using SSL encryption (Figure 2). Non-imaging clinical data is securely stored in a NoSQL MongoDB database, accessible only via a locally installed application server. Data redundancy, reliability and scalability are provided by the native MongoDB replica-set feature. 
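For illustration, a TLS-enabled MongoDB replica set of this kind is typically addressed through a single connection URI naming all members; the host names, database name and replica-set name below are assumptions, not the project's actual deployment.

```python
def replica_set_uri(hosts, db, rs_name):
    """Build a MongoDB connection URI for a TLS-enabled replica set.
    A client driver given this URI discovers the members, requires
    encrypted connections (tls=true) and retries writes on failover."""
    host_part = ",".join(hosts)
    return (f"mongodb://{host_part}/{db}"
            f"?replicaSet={rs_name}&tls=true&retryWrites=true")

# Hypothetical three-member replica set.
uri = replica_set_uri(
    ["db1.example:27017", "db2.example:27017", "db3.example:27017"],
    "smartool", "rs0")
print(uri)
```

The same URI would be used only by the locally installed application server; clients never connect to the database directly, matching the access model described above.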
MongoDB cluster members communicate (for continuous synchronization) via encrypted TLS/SSL channels. Data is uploaded via an https (secure HTTP) application interface and can be accessed from within the project infrastructure via a secure RESTful API (OAuth 2.0, OpenID session token). OAuth2 authentication and session token issuing, renewal and revocation are provided by a WSO2 Identity Server deployed in the project infrastructure. Clinical documents are acquired and stored in a Document Repository according to the IHE XDS.b (Cross-Enterprise Document Sharing) and XDS-I.b (Cross-Enterprise Document Sharing for Imaging) standards [5]. Data security and privacy for clinical documents are provided by implementing the IHE ATNA (Audit Trail and Node Authentication) [6] and IHE XUA (Cross-Enterprise User Assertion) profiles [7]. The purposes of the clinical data collection/generation in relation to the objectives of the project are: * To optimize the learning stage of SMARTool in order to extract and select discriminative markers (from any data source, ranging from history, genetics and circulating molecules to coronary anatomy by CT scan) of CAD severity and progression. * To provide a "smart" data storage to the final CDSS, which is the main outcome of the project, adequate for: (i) the external validation of the selected panel of markers by external cohorts, and (ii) CDSS application and exploitation in clinics. _**2.2 Background of the SMARTool Data Management Plan** _ The SMARTool DMP is in accordance with the following articles of the Grant Agreement: * **Article 36 - Confidentiality.** During the project duration and for four years after the period set out in Article 3, the Consortium will keep confidential any data, documents or other relevant material that is defined as confidential. * **Article 39.2 - Processing of personal data.** All personal data are processed in compliance with the applicable EU and national law on data protection. 
The Consortium provides adequate information and explanation to the personnel whose personal data are collected and processed and provides them with the specific privacy statement. 3. **SMARTool Data Management Plan** _**3.1. FAIR Data** _ The SMARTool Consortium bears ultimate responsibility for the DMP and makes the research and clinical data **findable, accessible, interoperable and reusable (FAIR)**, towards ensuring appropriate data management [8]. Research data management is not an objective in itself, but rather the way to achieve knowledge discovery and innovation, and to accomplish data extension, integration and reuse. More specifically, the FAIR principles promote the ability of machines to automatically discover and use the data, and support its reuse by individuals (Figure 3). To accomplish this, the data are available through well-defined APIs and the CRFA web-based user interface. The developed software tools, which are used to create and process the data, could be made available under the open-source Apache 2.0 license, whenever possible. To support the FAIR principles, the following practices are followed in the SMARTool platform: * Data discoverability and metadata provision * Unique identifiers that remain persistent for a long time * Data naming conventions * Keyword search * A clear versioning approach The collected, processed and generated clinical and research data are preserved and stored in specific formats so as to ensure long-term accessibility. To avoid file format obsolescence and the risk of losing useful information, specific actions are followed, such as selecting file formats with a high chance of remaining usable in the future. In addition, in this deliverable the following issues are addressed: * Specified / updated the data which are openly available. 
* Specified / updated how the data and associated metadata, documentation and code are stored * Specified / updated the access rights and restrictions To support data interoperability, specific actions are followed. In order to acquire data from the legacy data sources, a specific HL7-compliant integration layer using clinical data semantics is designed [9]. The HL7 integration layer is written using HAPI (HL7 application programming interface). The IHE Cross-Enterprise Document Sharing (XDS) Integration Profile is adopted to allow the registration, distribution and access across health enterprises of patient electronic health data. In any single hospital, the HIS (Hospital Information System) provides the enterprise-specific patient ID as well as the historical demographic data. For Patient Medical Records, the CDA Release 2 HL7 Standard (Clinical Reports) is used according to the IHE Cardiology Technical Framework (Volume 1 (CARD TF-1): Integration Profiles [10] and Volume 2 (CARD TF-2): Transactions [11]). For genomics data exchange we will adhere to HL7 IG CG_GENO, R1 Version 3 Genotype, Release 1 – January 2009, and HL7 IG LOINCGENVA, R1 Version 2 Implementation Guide: Clinical Genomics; Fully LOINC-Qualified Genetic Variation Model, Release 1. For medical images, the DICOM standard is adopted. For the refined models, standard markup language (ML) formats are used: ML for the multiscale and multilevel model, Predictive Model Markup Language (PMML) for the data mining and stratification models, and Portable Format for Analytics (PFA), an emerging standard for statistical models [12]. All the aforementioned standards and protocols ensure the required security in information exchange and anonymization, as well as interoperability. 
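To give a flavour of the HL7 v2 framing that such an integration layer handles (the project itself uses the HAPI library), the sketch below splits a message into segments and fields using the standard CR and `|` separators. The message content is fabricated, and the parsing is deliberately naive: it ignores escape sequences, repetitions and MSH's special field numbering.

```python
# Fabricated two-segment HL7 v2 message (MSH header + PID segment).
HL7_MSG = "\r".join([
    "MSH|^~\\&|HIS|HOSP|SMARTOOL|CNR|20170301||ADT^A01|MSG0001|P|2.5",
    "PID|1||PAT12345^^^HOSP||ANON^ANON||19530101|M",
])

def field(msg, segment, index):
    """Return field `index` (0 = segment name) of the first matching
    segment; HL7 v2 separates segments with CR and fields with '|'."""
    for seg in msg.split("\r"):
        parts = seg.split("|")
        if parts[0] == segment:
            return parts[index] if index < len(parts) else None
    return None

print(field(HL7_MSG, "PID", 3))  # PID-3, the patient identifier list
```

In practice the integration layer maps such fields onto the XDS patient identity feed rather than consuming them directly.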
To support data re-use, the following aspects were taken into consideration: * Specific data are licensed to permit their possible re-use * The period of data embargo is defined for the data which are available for re-use * The access to the data by third parties after the end of the project is defined * The data quality assurance processes are defined * The length of time for which the data is re-usable will be defined 2. _**Datasets to be gathered and processed** _ The data of the SMARTool project can be categorised in the following classes: **Cat1.** _Collected Data:_ Data that has not been subjected to quality assurance or control **Cat2.** _Validated Collected Data:_ Data that has been assessed in terms of completeness, correctness, integrity, credibility **Cat3.** _Analyzed Collected / Generated Data:_ Data which has been validated, analysed and processed _Table 1: Overview of data sets._ <table> <tr> <th> **Dataset** </th> <th> **Related WP** </th> <th> **Brief description** </th> </tr> <tr> <td> Dataset 1 </td> <td> WP1, WP2, WP3 </td> <td> This dataset contains the SMARTool patients' personal information, demographics, clinical, imaging, molecular, omics and other health related data (anonymized on the SMARTool platform). </td> </tr> <tr> <td> Dataset 2 </td> <td> WP3 </td> <td> This dataset contains the CAD stratification, which is extracted through the application of data mining techniques. </td> </tr> <tr> <td> Dataset 3 </td> <td> WP4 </td> <td> This dataset contains the reconstructed arteries and analysed DICOM files as well as the results of the multiscale and multilevel site-specific models for plaque progression prediction and the SmartFFR results, which are used in the prognostic and diagnostic CDSS. </td> </tr> <tr> <td> Dataset 4 </td> <td> WP4 </td> <td> This dataset contains the results of the pharmacological therapy modulation algorithms using data mining techniques. 
In addition, data from the virtual stent deployment approach are also included. </td> </tr> <tr> <td> Dataset 5 </td> <td> WP5 </td> <td> This dataset contains information on users’ requirements, use cases, questionnaires, specifications and system architecture. </td> </tr> <tr> <td> Dataset 6 </td> <td> WP6 </td> <td> The dataset contains the results of the research performed in the SMARTool project, which are communicated through public deliverables, journal and conference presentations, as well as other dissemination channels (website, social media, etc). </td> </tr> <tr> <td> Dataset 7 </td> <td> WP7 </td> <td> This dataset contains information related to the project management and coordination. </td> </tr> </table> 3. _**Data Access Procedures** _ **Public (PU).** For the data that are available to the public, the project web page [13] provides a description of the dataset and allows the user to download the relevant file. **Protected Data (PR).** Data indicated as protected may be communicated outside the consortium, as long as the interested parties request access from the SMARTool consortium, after explaining and providing evidence on how this data will be utilised, for instance for research or commercial purposes. **Confidential/Private (CO).** Data which are denoted as private/confidential are stored in a specific space, namely the databases, to which only selected partners have access. In order for other SMARTool partners to gain access to these data, a written application must be provided to the partner responsible for the data storage, accompanied by a justification of the need for access. **Controlled access (CA).** Raw sequencing data will, upon publication, be deposited in pseudonymized form in the European Genome-phenome Archive ( _https://www.ebi.ac.uk/ega/home_ ) under control of a data access committee (DAC) representing the consortium partners. 4. 
**SMARTool Open Data**

The SMARTool consortium has already made, or intends to make, the following data open to the public. No final decision has been made yet about which of the following will be made public.

* Collaboration, using the VPH Share tools 1 , to express the models as standardized workflows in ML format and to disseminate them in BiomedTown and the CellML repository, in collaboration with the VPH Institute, of which CNR is a member. The plan of the consortium was to provide, in open ML format, the refined site-specific multiscale and multilevel model of plaque progression as well as the risk stratification algorithms
* Open access to peer-reviewed scientific publications
* Statistical analysis metadata (descriptive statistics) of the SMARTool dataset for all the following categories (described in detail in D2.2):
  * Biohumoral
  * Molecular
  * Clinical
  * Omics
  * Lipidomics

  The tests performed during the metadata analysis were:

  * One-Way ANOVA test
  * Kruskal-Wallis test
  * Tukey-Kramer Multiple Comparison Test of the ANOVA test output
  * Tukey-Kramer Multiple Comparison Test of the Kruskal-Wallis test output
  * t-Test
  * Wilcoxon Test
  * Fisher Exact Test
  * Chi-square test
  * Shapiro-Wilk Normality Test
* Imaging (CCTA DICOM files) data (retrospective/baseline collection and acquisition at follow-up)
* 3D artery reconstruction model examples

1. _**Legislation for Data publication**_

Regarding publication of clinical data within the European Union, under Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 2 , all information from clinical trials can be made publicly accessible unless its confidentiality can be justified on the basis of the protection of commercially confidential information, the protection of personal data, the protection of confidential communication between EU countries, or the need to ensure effective supervision of the conduct of clinical trials by EU countries.
For each EU country, national legislation is compliant with EU legislation (for example, Greek legislation implementing Directive 2003/98/EC of the European Parliament and of the Council of 17 November 2003 on the re-use of public sector information 3 ).

As far as research data is concerned, particularly in the field of medicine, which often involves personal data, datasets in their raw form cannot be made openly available as required by the Open Research Data Pilot 4 due to conflicts with rules on the protection of personal data. The best way to fulfil the requirements of the Open Research Data Pilot and of data protection rules (GDPR 5 ) at the same time is to anonymise personal (research) data before making them openly available. Anonymised data are no longer personal data; consequently, data protection rules are no longer applicable. Effective anonymisation prevents third parties from re-identifying individuals in anonymised datasets, i.e., associating a record with a natural person by using other sources of information. Moreover, anonymisation provides further privacy guarantees that prevent third parties from inferring, with high probability, that a person is associated with a certain property, e.g., a particular health condition, or even from inferring the participation of a person in a published dataset. All data on the SMARTool platform are appropriately anonymised on premises before being uploaded.

**_Tools_**

Various tools are available for the publication and sharing of anonymised open data, such as OpenAIRE 5 and the Open Data Portal 6 . When possible, data anonymisation is the best solution to avert data protection risks. AMNESIA, a service currently being developed by OpenAIRE, will allow data curators to anonymise their data. Amnesia is a data anonymisation tool that removes identifying information from data.
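As a rough illustration of the generalisation step such anonymisation tools perform (this is a minimal sketch of the idea, not the Amnesia implementation; the records, field names and values below are invented), quasi-identifiers such as birth date and zip code are coarsened until every combination occurs at least *k* times:

```python
from collections import Counter

# Hypothetical patient records; all values are illustrative only.
records = [
    {"birth_date": "1958-03-14", "zip": "56124", "diagnosis": "CAD"},
    {"birth_date": "1958-11-02", "zip": "56127", "diagnosis": "no CAD"},
    {"birth_date": "1961-07-21", "zip": "56121", "diagnosis": "CAD"},
    {"birth_date": "1961-01-09", "zip": "56128", "diagnosis": "CAD"},
]

def generalize(record):
    """Coarsen quasi-identifiers: keep the birth year only, truncate the zip code."""
    return (record["birth_date"][:4], record["zip"][:3])

def is_k_anonymous(records, k):
    """True if every generalised quasi-identifier combination occurs at least k times."""
    groups = Counter(generalize(r) for r in records)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, 2))  # each (year, zip prefix) group holds 2 records -> True
```

Direct identifiers (names, SSNs) would simply be dropped; only the generalised quasi-identifiers and the payload attributes are published.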
Amnesia 7 not only removes direct identifiers such as names, SSNs, etc., but also transforms secondary identifiers such as birth date and zip code so that individuals cannot be identified in the data. Amnesia supports _k_ -anonymity and _km_ -anonymity.

2. _**Data Storage**_

During the project, the data have been collected and systematically stored in a complex repository based on the following three components:

* Imaging data is stored in the cloud-based 3DnetMedical.com DICOM-compliant repository. 3DnetMedical.com conforms to the DICOM SOP Classes of the Storage Service Class at Level 2 (Full). 3DnetMedical.com offers Vendor Neutral Archive (VNA) functionalities from a UK-based ISO 27001-accredited datacentre, providing security, redundancy, reliability and scalability through onshore outsourcing. 3DnetMedical.com follows specific privacy- and security-conscious policies applicable to all of its information handling practices. The 3Dnet DICOM store is also employed for genomic data storage, following the DICOM standard.
* Structured clinical data (data acquired during the project for the trial population as well as data employed by the final platform) are stored in the MongoDB database;
* Clinical data acquired by means of HL7/IHE integrations with hospital sources are stored in an XDS/XDSi repository.

User data (such as usernames and application privileges) are managed by the platform's Identity Server (WSO2IS) and stored in the LDAP server embedded in the Identity Server.

Raw sequencing data generated in WP3 will be made available upon publication in the European Genome-phenome Archive (EGA) under controlled access (DAC) to provide long-term storage and accessibility of the data to the research community.

**4.2.1. Databases**

_**Level roles and permissions in databases**_

To easily manage the permissions in the databases, several roles have been defined in the following three groups: Administrator, WP Leaders and Researchers, and Users (Table 2, Table 3, Table 4).
_Table 2: Level roles and permissions in the 3DnetMedical.com DICOM compliant Database._ <table> <tr> <th> **Description of permissions** </th> <th> **Administrator** </th> <th> **WP leaders & Researchers ** </th> <th> **User** </th> </tr> <tr> <td> Manage user accounts </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage user roles and access </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage folders, worklists and gateways. </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Upload studies </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Delete or assign a study* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Visualisation and manipulation of imaging data from studies in accessible worklists and folders* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Download data* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Report a study* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Access to patient information* </td> <td> </td> <td> </td> <td> </td> </tr> </table> *Depending on the user role _Table 3: Level roles and permissions in the CRFA NoSQL MongoDB Database._ <table> <tr> <th> **Description of permissions** </th> <th> **Administrator** </th> <th> **WP leaders & Researchers ** </th> <th> **User** </th> </tr> <tr> <td> Download anonymized non imaging clinical data </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Visualisation and editing of non-imaging data from studies </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Upload non imaging data for studies </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Delete and report a study* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Access to patient information* </td> <td> </td> <td> </td> <td> </td> </tr> </table> _Table 4: Level roles and permissions in the WSO2IS embedded LDAP User Database._ <table> <tr> <th> **Description of permissions** </th> <th> **Administrator** </th> <th> **WP leaders & Researchers ** </th> <th> **User** </th> 
</tr> <tr> <td> Manage user accounts </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage user roles and access </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage DB, Identity Server, Integration Bus and Application Servers </td> <td> </td> <td> </td> <td> </td> </tr> </table>

_**Process**_

3DnetMedical.com conforms to the DICOM SOP Classes of the Storage Service Class at Level 2 (Full). The DICOM Information Model is derived from the DICOM Model of the Real World, which identifies the relevant real-world objects and their relationships within the scope of the DICOM Standard. It provides a common framework to ensure consistency between the various Information Objects defined by the DICOM Standard.

Clinical documents will be made available to the production SMARTool platform by means of the IHE XDS.b (Cross-Enterprise Document Sharing) and XDS-I.b (Cross-Enterprise Document Sharing for Imaging) integration profiles. An XDS Affinity Domain will be defined for the platform, in order to ease clinical information sharing with the clinical organizations that will use the platform. The platform itself will act as a Document Consumer to query/retrieve documents containing data used to model the patient, and as a Document Producer for registering the documents generated by the platform.

Non-imaging data will be collected directly during or after a patient's visit via the platform's data entry application (CRFA). This data will be stored in the SMARTool MongoDB database for processing and kept for future encounters with the patient. In order to ease data entry activities, when non-imaging data is available in clinical documents stored in the XDS.b repository, the data entry application (CRFA) will pre-fill retrievable data in the corresponding form fields for user review. Non-imaging data retention and clinical document retention will be subject to the clinical organization's policy.

**4.2.2.
Datasets**

_**Origin of WP1, WP2 and WP3 datasets**_

_Table 5: Details of Dataset 1._

<table> <tr> <th> **Data identification: Dataset 1** </th> </tr> <tr> <td> **Description** This dataset contains the SMARTool users' personal information, demographics, clinical, imaging, molecular, omics and other health-related data. More specifically, the following categories of data are also included:

* Clinical data and risk factors
* Blood tests / biohumoral
* Imaging - CTA scan visual/quantitative analysis: plaque composition (calcified, mixed, non-calcified) and features, nominal categories
* Circulating soluble proteins: consolidated biomarkers (hsTn, hsCRP, BNP, ALT) and inflammatory markers (IL-6, IL-10, ICAM-1, VCAM, E-selectin), values of blood concentration.
* Genetics: associated selected SNPs, selected RNA genes from bioinformatics analysis of DNA/RNA sequencing.
* Lipids: selected lipid species and concentrations in blood; names and values of plasma concentration from bioinformatics analysis.
* Circulating MN surface proteins to quantify Mon1, Mon2 and Mon3 subpopulations: relative and absolute concentrations in blood </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat1, Cat2 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CNR, LUMC, ALACRIS </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> B3D, EHIT </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP1 Task 1.1 Clinical and imaging (CCTA) data collection (e.g. EVINCI baseline data) Task 1.2 CCTA analysis under standardized criteria WP2 Task 2.1 Biohumoral data collection and analysis at baseline and at follow-up Task 2.2 Patient-specific phenotyping (cellular and molecular data) at baseline and at follow-up WP3 Task 3.1 Genomics and transcriptomics Task 3.2 Omics data analysis </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> RAW, DICOM, XML, JSON, BSON Clinical data will not contain any metadata. Volume: to be calculated at the end of patient data collection </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy/ Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> Internal validation of discriminative markers by bootstrapping techniques </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines. Written consent received.
</td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> 3DnetMedical.com DICOM-compliant database NoSQL MongoDB Database Long-term archival </td> </tr> </table>

**Methodologies for data collection.** For the follow-up CT acquisition, in order to achieve optimal quality for the 3D reconstruction of the desired arterial segments, 128/256/320-slice MSCT scanners are used with a slice interval not exceeding 0.5 mm. To eliminate motion or other artifacts in the acquired images, the heart rate during the scan should be less than 65 beats/min and optimally less than 60 beats/min. Nitroglycerin is administered prior to the CTA acquisition. Also, multiple cardiac phases are captured so that different phases can be chosen for different coronary segments, if needed. The reconstructed field of view is reduced to maximize the number of pixels devoted to depiction of the heart, usually a field of view of 200–250 mm for coronary CTA studies of native coronary arteries.

For the blood sampling, the responsible clinical partners collect for each enrolled patient during venipuncture:

* N=8 (5 ml) EDTA tubes (VIOLET tube)
* N=2 (5 ml) Li-heparin tubes (GREEN tube)
* N=2 (5 ml) clot activator tubes (RED tube)

During blood sampling, details are recorded such as:

* Samples collected (blood: which tubes, how many and in which order)
* Date and time of blood sampling
* Last time of food/drink consumption and smoking; special attention is paid to 12 h fasting and 12 h refraining from smoking.

The separation of blood aliquots is described below:

_For the VIOLET tubes: EDTA, N=8 tubes_

* For 6 of the 8 tubes, EDTA plasma is separated by centrifugation at +4°C in a refrigerated centrifuge, for 10 minutes at 1500xg. The plasma is separated from the cell pellet within 1 hour of withdrawal, keeping the vial in an ice bath over this time.
Samples are subdivided into aliquots of about 1 ml each (18 aliquots), using small plastic vials (VIOLET CAPS) and keeping all the vials in an ice bath during the whole procedure.

* The remaining 2 of the 8 EDTA whole-blood tubes are stored at -20°C or -80°C without centrifugation and are NOT aliquoted.

_For the RED tubes: Clot Activator, N=2 tubes_

* Serum samples are separated by centrifugation, for 10 minutes at 1500xg. Samples are subdivided into aliquots of about 1 ml each (6 aliquots), using small plastic vials (RED CAPS) and keeping all the vials in an ice bath during the whole procedure.

_For the GREEN tubes: Li-HEPARIN, N=2 tubes_

* Heparin plasma is separated by centrifugation, for 10 minutes at 1500xg. Samples should be subdivided into aliquots of about 1 ml each (6 aliquots), using small plastic vials (GREEN CAPS) and keeping all the vials in an ice bath during the whole procedure.

_For the TEMPUS tubes: N=3 tubes_

* TEMPUS tubes (obtained by CNR) are stored at -20°C without centrifugation and are NOT aliquoted.

For blood storage, upon arrival in the lab, samples are maintained at room temperature (18-22°C) for 2-4 hours, in an upright position, before being transferred to the freezer. TEMPUS tubes can be left for up to 72 hours at room temperature prior to freezing. The TEMPUS tubes should then be frozen upright at -20°C and stored at -20°C or -80°C (if available) until shipping. The blood samples are packaged appropriately with sufficient dry ice to ensure that samples do not thaw, and in a manner that prevents breakage or leakage.
For mandatory safety reasons the blood samples are tested for:

* HIV
* Hepatitis B
* Hepatitis C

_Table 6: Details of Dataset 2._

<table> <tr> <th> **Data identification: Dataset 2** </th> </tr> <tr> <td> **Description:** This dataset contains the CAD stratification, which is extracted through the application of data mining techniques </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat3 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> FORTH </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> EHIT </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP3 Task 3.4 Multivariate analysis, data mining and non-imaging classification algorithm for CAD stratification </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> RAW, JSON, ML, XML, PMML, PFA Volume: ~5 GB </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy/ Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> No data sharing. Only the data mining and non-imaging classification algorithm will be publicly available in peer-reviewed journals and conferences </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long?
</td> <td> NoSQL MongoDB Database Long-term archival </td> </tr> </table>

_**Origin of WP4 datasets**_

_Table 7: Details of Dataset 3._

<table> <tr> <th> **Data identification: Dataset 3** </th> </tr> <tr> <td> **Description:** This dataset contains the results of the 3D artery reconstruction, of the multiscale and multilevel site-specific models for plaque progression, and of the non-invasive SmartFFR calculation, which are all used for the development of predictive models integrated in the prognostic and diagnostic CDSS </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat3 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> FORTH </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> EHIT </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 Task 4.1 Prognostic CDSS: Refinement of multiscale and multilevel site-specific models for plaque progression Task 4.2 Diagnostic CDSS: Refinement of non-invasive FFR computation Task 4.4 Validation of prognostic CDSS (multiscale-multilevel site-specific plaque progression models) Task 4.5 Validation of diagnostic CDSS (3D artery reconstruction, plaque characterization and non-invasive FFR) </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> DICOM, RAW, XML, STL, IGES A data volume estimation will be provided later. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy / Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> No data sharing.
Only the results from the plaque progression and the non-invasive SmartFFR will be publicly available in peer-reviewed journals and conferences </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> NoSQL MongoDB Database Long-term archival </td> </tr> </table>

_Table 8: Details of Dataset 4._

<table> <tr> <th> **Data identification: Dataset 4** </th> </tr> <tr> <td> **Description:** This dataset contains the results of the pharmacological therapy modulation algorithms (EVINCI database) using data mining techniques. In addition, data from the virtual stent deployment approach are also included. </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat3 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> FINK, FORTH </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> B3D, EHIT, FINK </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 Task 4.3 Treatment CDSS: Refinement of medical therapy and virtual angioplasty decision support methods Task 4.6 Validation of treatment CDSS (pharmacological therapy and virtual angioplasty tool) </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> DICOM, STL, DAT, UNV, LST, BMP, AVI A data volume estimation will be provided later.
</td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy / Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential; only some specific results will be public </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> Data sharing, publication in peer-reviewed journals </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> B3D, FINK </td> </tr> </table>

_**Origin of WP5 datasets**_

_Table 9: Details of Dataset 5._

<table> <tr> <th> **Data Identification: Dataset 5** </th> </tr> <tr> <td> Description: This dataset contains information related to Task 5.1 regarding users’ requirements, use cases, questionnaires, functional specifications and system architecture for the SMARTool Platform.
</td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat 3 </td> </tr> <tr> <td> Partners activities and responsibilities </td> </tr> <tr> <td> Partner owner of the data </td> <td> Technical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> B3D, FORTH, FINK, EHIT </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> B3D </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP5 Task 5.1 User requirements, functional specifications and architecture of the SMARTool platform </td> </tr> <tr> <td> Standards and metadata </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> RAW, XML, UML, PDF, CSV Volume: ~500 MB </td> </tr> <tr> <td> Data exploitation and sharing </td> </tr> <tr> <td> Data access policy / Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> No data sharing. </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> This dataset does not include personal data. Questionnaires are anonymized. </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> Data will be available through D5.1 and its annexes. As WP leader, B3D will store and back up the data. </td> </tr> </table>

_**Origin of WP6 datasets**_

Each SMARTool partner needs to disseminate its results in accordance with Article 29 of the Grant Agreement. The results of the research performed in the SMARTool project are communicated through public deliverables, journal and conference presentations, as well as other dissemination channels (website, social media, etc.).
On the other hand, exploitation is mainly achieved through the main SMARTool exploitable products:

* SMARTool 3DNet™ framework: a common, centralized, shareable platform that provides better control, management and streamlining of the CAD clinical workflow based on CDSS.
* SMARTool CDSS: an integrated system for supporting clinicians in the diagnosis, prognosis and treatment of CAD patients and subjects at risk.
* SMARTool Point-Of-Care Testing (POCT): a portable device for on-chip blood analysis and patient phenotyping, exploitable in the diagnostic CDSS.

_**Origin of WP7 datasets**_

_Table 10: Details of Dataset 7._

<table> <tr> <th> **Data identification: Dataset 7** </th> </tr> <tr> <td> Description: This dataset contains information related to the project management and coordination </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat 1, Cat 2, Cat 3 (by the Commission) </td> </tr> <tr> <td> Partners activities and responsibilities </td> </tr> <tr> <td> Partner owner of the data </td> <td> Joint Ownership </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CNR </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> CNR </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP7 T7.1 Project Management </td> </tr> <tr> <td> Standards and metadata </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> No specific standards for this data. Files will be in Microsoft Office formats (.doc, .docx, .xls, .xlsx) and .pdf. Volume: ~1 GB </td> </tr> <tr> <td> Data exploitation and sharing </td> </tr> <tr> <td> Data access policy/ Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential: information shared only among the Consortium and between the Consortium and the Commission </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?)
</td> <td> Effort (quantified in person months, PMs) and financial data of each partner are collected by CNR and compiled for monitoring purposes. Data is entered/uploaded into the EC system SyGMa in order to allow the Commission to oversee and assess the use of resources by the consortium. </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> No personal data collected </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> The data is collected for internal use in the project. Standard daily offsite backup on CNR systems. Length: 10 years </td> </tr> </table>

_**4.3. Visualization of Data Lineage**_

Data lineage refers to the data life cycle, including the data's origins and where the data moves over time. It can be represented visually to show the data flow/movement from source to destination via the various changes and hops on its way in the enterprise environment: how the data gets transformed along the way, how the representation and parameters change, and how the data splits or converges after each hop. Data lineage answers a wide range of questions about how data is being used. More specifically, it is useful for tracking all the calculations and transformations actually defined.

**4.3.1. Data used in CRFA**

**_Data stored in MongoDB database_**

**MongoDB** is a free and open-source, cross-platform, document-oriented database program; it is classified as a NoSQL database. It is used within the project as the database engine for clinical/molecular data and data-model storage. The MongoDB SMARTool database consists of six collections:

1. _**Studies**_ collection: contains all SMARTool patients;
2. _**Fragments**_ collection: holds the data shown on each CRFA page;
3.
_**Hfragments**_ collection: stores all historical documents of the _fragments_ collection (previous versions of data entry);
4. _**Images**_ collection: contains all image references of the patients;
5. _**Institutions**_ collection: provides the name of each organization involved in the project, distinguishing between clinical and non-clinical;
6. _**Types**_ collection: describes the structure of each page of CRFA.

Unlike SQL databases, where a table’s schema must be determined before inserting data, MongoDB’s collections, by default, do not require their documents to have the same schema. This implies that the documents in a single collection do not need to have the same set of fields, and the data type for a field can differ across documents within a collection. Furthermore, it is possible to change the structure of the documents in a collection (add new fields, remove existing fields, change the field values to a new type, update the documents to a new structure, etc.). As a result, MongoDB provides more flexibility in mapping documents to an entity or an object.

Figure 4 below describes the data model of the Case Report Form (CRF) Repository. There is a 1:N relationship between _studies/fragments_, _studies/images_ and _institutions/studies_ : each patient case is linked to at least one document in _fragments_ and _images_, and several study cases can be related to a single institution. Similarly, there is a 1:N relationship between _fragments/types_, i.e., there are several forms of a single type. Finally, the behaviour of _hfragments_ is the same as that of the _fragments_ collection; however, its documents are no longer linked to the corresponding study, since they refer to historical versions.

An important difference should be clarified about data relationships: they can be references or embeddings. References between documents are described as a “normalized” data model.
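The "normalized" (reference-based) model and the 1:N studies/fragments relationship described above can be sketched as follows. This is a minimal illustration with invented documents and an invented reference field name (`study_id`); resolving a reference is simply a lookup of the stored `_id` in the other collection:

```python
# Hypothetical "studies" and "fragments" documents; all values are invented.
studies = [
    {"_id": "S1", "studyCode": "SMT-001", "subjectCode": "P-042"},
]
fragments = [
    {"_id": "F1", "typeCode": "clinical", "study_id": "S1",
     "payload": {"heart_rate": 62}},
    {"_id": "F2", "typeCode": "lipid_profile", "study_id": "S1",
     "payload": {"ldl_mg_dl": 130}},
]

def fragments_for_study(study_id):
    """Resolve the 1:N studies/fragments relationship by following the
    reference (the study's _id) stored in each fragment document."""
    return [f for f in fragments if f["study_id"] == study_id]

print([f["_id"] for f in fragments_for_study("S1")])
# -> ['F1', 'F2']
```

Note also that the two fragment documents carry different payload fields, which is exactly the schema flexibility discussed above: the collection imposes no common field set.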
Two methods for relating documents are used in MongoDB:

* Manual references, where the __id_ field of one document is saved in another document as a reference. This appears between the _studies/images_ and _fragments/types_ collections.
* DBRefs, which are references from one document to another using the value of the first document’s __id_ field, the collection name and, optionally, the database name. This is the case for the _studies/institutions_ and _studies/fragments_ collections.

Embedding allows related data to be included in a single structure or document. These schemas are generally known as “denormalized” models, and take advantage of MongoDB’s rich documents by storing related pieces of information in the same database record. Embedding occurs between _fragments/types_ and _studies/images_.

Some examples of the data structures stored in the MongoDB database:

1. **Studies:** new study by Clinical Partner direct input

<table> <tr> <th> _id </th> <th> ObjectId </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> studyCode </td> <td> String </td> </tr> <tr> <td> subjectCode </td> <td> String </td> </tr> <tr> <td> progressiveStudyNumber </td> <td> Int32 </td> </tr> <tr> <td> creationDate </td> <td> Date </td> </tr> <tr> <td> Enabled </td> <td> Boolean </td> </tr> <tr> <td> fragmentsDisabled </td> <td> Array </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> Institution </td> <td> Object </td> </tr> <tr> <td> fragments </td> <td> Array </td> </tr> </table>

2. **Institutions**

<table> <tr> <th> **Institutions** </th> <th> </th> </tr> <tr> <td> _id </td> <td> String </td> </tr> <tr> <td> Name </td> <td> String </td> </tr> <tr> <td> clinical </td> <td> Boolean </td> </tr> </table>

3.
**Images** : Clinical Partner direct upload <table> <tr> <th> **Images** </th> <th> </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> fileName </td> <td> String </td> </tr> <tr> <td> fileSize </td> <td> Int64 </td> </tr> <tr> <td> contentType </td> <td> String </td> </tr> <tr> <td> sourcePath </td> <td> String </td> </tr> <tr> <td> studyCode </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> 4. **Fragments** : 1. **Exclusion Criteria** : Partner direct input <table> <tr> <th> _id </th> <th> ObjectId </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> Locked </td> <td> Boolean </td> </tr> <tr> <td> Payload </td> <td> Object </td> </tr> <tr> <td> Status </td> <td> String </td> </tr> <tr> <td> Valid </td> <td> Boolean </td> </tr> <tr> <td> Historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> Study </td> <td> Object </td> </tr> </table> 2.
**Clinical** : Clinical Partner direct input **CTA Report CL** : Image Analysis measures by LUMC <table> <tr> <th> _id </th> <th> ObjectId </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> Payload </td> <td> Object </td> </tr> <tr> <td> Status </td> <td> String </td> </tr> <tr> <td> Valid </td> <td> Boolean </td> </tr> <tr> <td> Historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> </table> 3. **Lipid Profile / Biohumoral Markers / Inflammatory Markers / QCTA / Exome / RNA** : Clinical Partner direct upload <table> <tr> <th> _id </th> <th> ObjectId </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> Payload </td> <td> Object </td> </tr> <tr> <td> Status </td> <td> String </td> </tr> <tr> <td> Valid </td> <td> Boolean </td> </tr> <tr> <td> Historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> </table> 4. 
**Blood Tests** : Clinical Partner direct input <table> <tr> <th> _id </th> <th> ObjectId </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> Locked </td> <td> Boolean </td> </tr> <tr> <td> Payload </td> <td> Object </td> </tr> <tr> <td> Status </td> <td> String </td> </tr> <tr> <td> Valid </td> <td> Boolean </td> </tr> <tr> <td> Historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> </table> 5. **Types** : <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> Display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> Order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> Definition </td> <td> Object </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table>

**Data produced from CRFA**

_Data processing via Apache Zeppelin:_

* LUMC CTA score function definition
* Stenosis function definition
* Queries for Data Exports (.csv)

_Clinical Characteristics_

* Body Mass Index (BMI): function of height and weight
* BMI Class: function of BMI, with classes Underweight (BMI < 18.50), Normal (18.50 ≤ BMI ≤ 24.99), Overweight (25.00 ≤ BMI ≤ 29.99) and Obese (BMI ≥ 30.00)

_**CTA Report CL**_

* Segment Weight Factor (seg_wf): function of System Dominance (sd)
* Stenosis Weight Factor 1 (ste_wf_1): piecewise function of % of Lumen Diameter 1 (prox_lumen_1), taking the values 1, 1.2 or 1.4 over stenosis ranges delimited by the 30%, 50%, 70% and 90% thresholds, and 0 otherwise
* Stenosis Weight Factor 2 (ste_wf_2): the analogous piecewise function of % of Lumen Diameter 2 (prox_lumen_2)
* Stenosis Weight Factor (ste_wf): ste_wf = max(ste_wf_1, ste_wf_2)
* Plaque Weight Factor 1 (pla_wf_1): piecewise function of Plaque Type 1 (prox_type_1), taking the values 1.2, 1.5, 1.6 or 1.7 depending on the plaque type, and 0 otherwise
* Plaque Weight Factor 2 (pla_wf_2): the analogous piecewise function of Plaque Type 2 (prox_type_2)
* Plaque Weight Factor (pla_wf): pla_wf = max(pla_wf_1, pla_wf_2)
* Segment Score (seg_score): seg_score = seg_wf × ste_wf × pla_wf
* CAD Score (cad_score): the sum of seg_score over all segments

**4.3.2. Data used and produced at risk stratification algorithm** The risk stratification algorithm uses as input all the collected non-imaging data, either in the machine-learning algorithms or in the statistical analyses. The metadata concern the classification tables of each model, and the output is the probability of CAD depending on the outcome of the algorithm. More specifically, the algorithm may provide the classification into each class, or even a score which defines one of the three classes. The data lineage is shown in Figure 5. **4.3.3. Data used and produced at diagnostic CDSS** The diagnostic CDSS is based on the calculation of SmartFFR.
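The weight-factor and score definitions above combine per-lesion maxima, a per-segment product and a sum over all segments. A minimal sketch of that aggregation follows; the numeric factor values fed in are illustrative inputs, not values derived from the source's piecewise threshold conditions.

```python
# Sketch of the CAD score aggregation: max() over the two per-lesion
# factors, product per segment, sum over segments. Input weights below
# are invented examples.

def ste_wf(ste_wf_1, ste_wf_2):
    """Stenosis weight factor of a segment: max of the two lesion factors."""
    return max(ste_wf_1, ste_wf_2)

def pla_wf(pla_wf_1, pla_wf_2):
    """Plaque weight factor of a segment: max of the two lesion factors."""
    return max(pla_wf_1, pla_wf_2)

def seg_score(seg_wf, ste, pla):
    """Per-segment score: product of segment, stenosis and plaque factors."""
    return seg_wf * ste * pla

def cad_score(segments):
    """CAD score: sum of seg_score over all segments."""
    return sum(
        seg_score(
            s["seg_wf"],
            ste_wf(s["ste_wf_1"], s["ste_wf_2"]),
            pla_wf(s["pla_wf_1"], s["pla_wf_2"]),
        )
        for s in segments
    )

segments = [
    {"seg_wf": 1.0, "ste_wf_1": 1.4, "ste_wf_2": 1.0,
     "pla_wf_1": 1.5, "pla_wf_2": 1.2},
    {"seg_wf": 1.0, "ste_wf_1": 1.2, "ste_wf_2": 1.2,
     "pla_wf_1": 1.7, "pla_wf_2": 0.0},
]
print(round(cad_score(segments), 2))
```

Taking the maximum per segment means the most severe lesion drives that segment's contribution, while the final sum accumulates severity across the coronary tree.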
However, the diagnostic CDSS will also use imaging data and metrics in order to improve the predictive value of the system. The data used are imaging metrics and morphological characteristics such as the lumen area and volume, the plaque area and volume, the volume of calcified and non-calcified plaques, the degree of stenosis and others. These data are stored in txt files during the run of the 3D reconstruction algorithm. The FEM solver takes the boundary conditions and the material properties in order to provide the pressures required for the calculation of SmartFFR, and provides the simulation results as well as the SmartFFR per reconstructed branch, stored in files. These data are used as input into two different approaches to building predictive models: one simple approach based on statistics and one based on machine learning. Both approaches provide the predictive value for the diagnosis of CAD. Figures 6, 7 and 8 present the data lineage for the diagnostic CDSS of SMARTool. **4.3.4. Data used and produced at prognostic CDSS** The prognostic CDSS provides the prediction of site-specific plaque progression by integrating the plaque growth model with other SMARTool modules in a CDSS. More specifically, the plaque growth model requires the reconstructed geometries of the lumen and the arterial wall, some clinical and biohumoral data (LDL and HDL serum concentration, monocyte count, blood pressure) and the boundary conditions. The predictive models additionally use as input all clinical and patients’ data and the SmartFFR. Figures 9, 10 and 11 present the data lineage of the prognostic CDSS. **4.3.5. Data used and produced at treatment CDSS** The treatment CDSS consists of the medical therapy prediction and the stent deployment module.
The medical treatment prediction combines in a machine learning model all clinical non-imaging data, including also the medical therapy of each patient, in order to provide as output the optimal medical therapy (Figure 12). The stent deployment module uses as input to the FEM solver the arterial geometries of the lumen and the outer wall, the material properties of the stent and the arterial wall, and the boundary conditions. The output is the deformed geometries and the distribution of forces and stresses, both in VTK format. Figure 13 presents the data lineage of the stent deployment module. 5. **Data security and ethical considerations** _**5.1. Data security** _ The SMARTool project utilises and adopts specific methods and tools for ensuring adequate field access and extended contact and communication between the participants, who are sensitive to the ethical concerns related to the research activities. The following instructions have been used to ensure data security: * Perform anonymization of personal data * Encrypt data, if necessary by the local researchers * Store data in two separate locations to avoid data loss * Perform frequent backups (every 24 hours) * Ensure the coherence of the final dataset by marking up files in a structured way SMARTool databases for imaging and for non-imaging clinical data are hosted in B3D’s secure data centre, collocated in dedicated spaces at a top-tier datacentre accredited with ISO 27001 for Information Security Management, ISO 9001 for Quality Management and ISO 14001 for Environmental Management.
This facility provides carrier-level support, including: * Access control and physical security * Environmental controls * Power * Network * Fire detection and suppression * Network protection * Disaster Recovery * Security Monitoring B3D’s Information Governance aims to ensure: (i) the confidentiality of patient information, and (ii) that adherence to information governance is built into the design of the 3DnetMedical service and derived products provided to healthcare professionals. Information governance and security underpin 3DnetMedical and, as an organisation, B3D strives to achieve excellence in the services provided. B3D operates under ISO 13485:2012 (quality management system). B3D complies with IEC 62304 (medical software development) and incorporates ISO 14971 (risk analysis) into the product life cycle. All B3D products and operations conform to industry standards including CE Annex II of directive 93/42/EEC, DICOM, HL7 and IHE. SMARTool non-imaging clinical data and clinical documents have been encrypted at rest by means of a transparent filesystem encryption strategy with an AES-256 symmetric-key cipher. Disk encryption keys have been managed in a keystore (vaultproject.io), and are themselves encrypted with a master key kept separate from the data and the databases. Network traffic carrying non-imaging data will be encrypted via TLS/SSL (HTTPS), with public key certificates issued and renewed by a root certificate authority. EHIT is an ISO 13485:2013 and ISO/IEC 27001 certified organization. This ensures a consistent information security management and product development life cycle. Moreover, EHIT is certified for 21 IHE integration profiles. _**5.2. Ethical issues** _ SMARTool supports the protection of the enrolled patients and makes their rights more visible by adopting the national laws and EU legislation and by complying with the Directives described in the following section.
**EU legislation and Ethics documents** The Directives, Ethics Documents, experts’ opinions and Ethical Committees are presented in the following table, accompanied by a short reference note on the general matter as well as the stated principles. All the European Union countries participating in SMARTool have adopted the European Directives. Additionally, the partners of the SMARTool project follow the European Charter for Researchers. _**Table 11:** Ethical considerations in the SMARTool project. _ <table> <tr> <th> **Directive 1995/46/EC of the European Parliament (October 1995):** </th> </tr> <tr> <td> Protection of individuals with regard to personal data processing and free movement. It concerns the protection of individuals’ privacy and the free movement of personal data within the European Union (EU). The Directive defines specific criteria for the collection and utilisation of personal data. Furthermore, the Directive stresses the obligation of each Member State to set up an independent national authority for monitoring the application of the Directive. </td> </tr> <tr> <td> **Charter of Fundamental Rights of the EU, which became legally binding on the EU institutions and the national governments in 2009 with the entry into force of the Treaty of Lisbon** </td> </tr> <tr> <td> The Charter includes the rights and freedoms in terms of Dignity, Freedoms, Equality, Solidarity, Citizens' Rights, and Justice. </td> </tr> <tr> <td> **Opinion of the European Group on Ethics in Science and New Technologies No. 13 (July 1999)** </td> </tr> <tr> <td> Protection of recognisable personal health data. </td> </tr> <tr> <td> **Opinion of the European Group on Ethics in Science and New Technologies No. 26 (February 2012)** </td> </tr> <tr> <td> Ethics of information and communication technologies. </td> </tr> <tr> <td> **Opinion of the European Group on Ethics in Science and New Technologies No. 29 (13 October 2015)** </td> </tr> <tr> <td> Examination of the principal health technologies and definition of a set of recommendations for EU and national-level policymakers, industry and other stakeholders, towards maximising the benefits and minimising the issues associated with new health technologies and citizen participation in health policy, research and practice. </td> </tr> <tr> <td> **Directive 2001/20/EC of the European Parliament (April 2001)** </td> </tr> <tr> <td> Performance of clinical trials on medicinal products under good clinical practice. </td> </tr> <tr> <td> **Directive 2005/28/EC or Good Clinical Practice Directive of the European Parliament and of the Council (8 April 2005)** </td> </tr> <tr> <td> Presents the principles and detailed guidelines for good clinical practice with regard to investigational medicinal products for human use, as well as the requirements for authorisation of the manufacturing of such products. </td> </tr> <tr> <td> **Universal Declaration on the Human Genome and Human Rights adopted by UNESCO (1997)** </td> </tr> <tr> <td> Refers to national and regional legislation on medicine, privacy and genetic research. </td> </tr> <tr> <td> **Clinical Trials Regulation (CTR) EU No 536/2014** </td> </tr> <tr> <td> Ensures that the rules for conducting clinical trials are the same throughout the EU. It is imperative to ensure that all Member States, in authorising and supervising the conduct of a clinical trial, rely on identical rules. </td> </tr> <tr> <td> **Italy** </td> <td> A number of ministerial decrees cover this area. A key one is Legislative Decree no. 211 of June 24, 2003 “Transposition of Directive 2001/20/EC relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for clinical use”.
</td> </tr> <tr> <td> Law n. 675 of 31 December 1996, Tutela delle persone e di altri soggetti rispetto al trattamento dei dati personali (published on the G.U. n. 5 of 8 January 1996) </td> </tr> <tr> <td> C. M. n. 6 of 2 September 2002, Attività dei comitati etici istituiti ai sensi del decreto ministeriale 18 marzo 1998 (published on the G.U. n. 214 of 12 September 2002) </td> </tr> <tr> <td> **Finland** </td> <td> Medical Research Act (L 488/1999) </td> </tr> <tr> <td> Act on the Medical Use of Human Organs and Tissues (L 101/2001) </td> </tr> <tr> <td> Act on the Status and Rights of Patients (L 785/1992) </td> </tr> <tr> <td> Personal Data Act (L 523/1999) </td> </tr> <tr> <td> Good Clinical Practice (GCP) guidelines in accordance with the International </td> </tr> <tr> <td> Conference on Harmonization (www.ich.org) </td> </tr> <tr> <td> **Switzerland** </td> <td> Loi fédérale sur les médicaments et les dispositifs médicaux 812.21 du 15 décembre 2000 (Chapitre 4 Section 2 Essais cliniques, Section 4 Obligation de garder le secret et la communication de données) </td> </tr> <tr> <td> Loi fédérale sur l’analyse génétique humaine 810.12 du 8 octobre 2004 (Section 2; Art. 5 Consentement, Art. 7 Protection des données génétiques, Art. 20.
Réutilisation du matériel biologique) </td> </tr> <tr> <td> Local: Approval by the IRB Bundesgesetz 812.21 über Arzneimittel und Medizinprodukte (Heilmittelgesetz, HMG) as of December 15, 2000 </td> </tr> <tr> <td> Verordnung 812.214.2 über klinische Versuche mit Heilmitteln (VKlin) as of October 17, 2001 </td> </tr> <tr> <td> Patientinnen- und Patientengesetz 813.13 as of April 5, 2004 </td> </tr> <tr> <td> Heilmittelverordnung (HMV) 812.1 as of May 21, 2008 </td> </tr> <tr> <td> **UK** </td> <td> Integrated Research Application System (IRAS) </td> </tr> <tr> <td> </td> <td> National Research Ethics Service (NRES) </td> </tr> <tr> <td> Department of Health’s Research Governance Framework for Health and Social Care (2nd Edition, 2005) </td> </tr> <tr> <td> **France** </td> <td> Public Health Code (articles L. 1121-1 et seq.) </td> </tr> <tr> <td> The bioethics law of 2004 creating the French Biomedicine Agency </td> </tr> <tr> <td> The Advisory Committee on the Treatment of Research Information in the Health </td> </tr> <tr> <td> Field, created by law n°94-548 of 1 July 1994 </td> </tr> <tr> <td> **Germany** </td> <td> Das Deutsche Referenzzentrum für Ethik in den Biowissenschaften - DRZE </td> </tr> <tr> <td> Deutscher Ethikrat </td> </tr> <tr> <td> Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag - TAB </td> </tr> <tr> <td> **Spain** </td> <td> Comités de Ética en Investigación Clínica, CEIC </td> </tr> <tr> <td> Royal Decree 561/1993 </td> </tr> <tr> <td> Royal Decree 223/2004 </td> </tr> <tr> <td> Law of Biomedical Research (2007) </td> </tr> <tr> <td> Asociación Nacional de Comités de Ética de la Investigación, ANCEI </td> </tr> <tr> <td> **Poland** </td> <td> First Polish Code of Ethics (1977) </td> </tr> <tr> <td> Extraordinary National Assembly of Delegates of the Polish Society of Medicine in Szczecin on 22 June 1984 </td> </tr> <tr> <td> Code of Medical Ethics (1991) </td> </tr> <tr> <td> First Polish Research Ethics Committee (1979) </td>
</tr> <tr> <td> Bioethics Committee at the Ministry of Health </td> </tr> <tr> <td> Zbiór zasad i wytycznych pt. "Dobre obyczaje w nauce" (a set of principles and guidelines, “Good Manners in Science”) </td> </tr> <tr> <td> April 7, 2005 Order of the Minister of Health concerning the nature and extent of inspection of clinical trials. Legislation Journal of the Republic of Poland (2005) 69: pos. 623 </td> </tr> <tr> <td> January 3, 2007 Order of the Minister of Health concerning the application form for authorization of clinical trials, payments for authorization and the final report of the clinical trial. Legislation Journal of the Republic of Poland (2010) 222: pos. 1453 </td> </tr> <tr> <td> February 11, 2011 Order of the Minister of Health concerning requirements related to the basic documentation of clinical trials. Legislation Journal of the Republic of Poland (2011) 40: pos. 210 </td> </tr> <tr> <td> May 12, 2012 Order of the Minister of Health concerning Good Clinical Practice. Legislation Journal of the Republic of Poland (2012) 0: pos. 489 </td> </tr> <tr> <td> **Netherlands** </td> <td> Personal Data Protection Act of 6 July 2000 (Wet bescherming persoonsgegevens) </td> </tr> </table> **6\. References** 1. “H2020 Programme Guidelines on FAIR Data Management in Horizon 2020.” 2. “OpenAIRE | How to create a DMP Plan | Open Research Data Pilot,” _{SITE_NAME}_. 3. “Cardiovascular diseases statistics - Statistics Explained.” 4. E. J. Benjamin _et al._ , “Heart Disease and Stroke Statistics—2017 Update: A Report From the American Heart Association,” _Circulation_ , p. CIR.0000000000000485, Jan. 2017. 5. “Cross-Enterprise Document Sharing - IHE Wiki.” 6. “Audit Trail and Node Authentication - IHE Wiki.” 7. “Cross-Enterprise User Assertion (XUA) - IHE Wiki.” 8. M. D. Wilkinson _et al._ , “The FAIR Guiding Principles for scientific data management and stewardship,” _Sci Data_ , vol. 3, p. sdata201618, Mar. 2016. 9. “Introduction to HL7 Standards.” 10. “IHE Volume 1 (CARD TF-1): Integration Profiles.” 11.
“IHE Volume 2 (CARD TF-2): Transactions.” 12. “Predictive Model Markup Language.” 13. “SMARTool - Simulation Modeling of coronary ARTery disease: a tool for clinical decision support.” 7. **Certifications** **B3D** **EHIT** **END OF DOCUMENT** 8. **ANNEX** **Data Protection** The General Data Protection Regulation (GDPR) is the EU’s new regulation repealing the previous EU Data Protection Directive (DPD), designed to protect and empower the data privacy of every subject located in the EU regardless of where the processing takes place. In addition, the GDPR applies to every organisation located in the EU that processes personal data regardless of the data subject’s nationality, and covers several activities and aspects including data collection, processing, transfer, storage, security and the data subject's rights. In terms of substance, the basic principles of the GDPR are similar to those in the DPD, as are many of its definitions. Thus, the GDPR again requires that data processing adheres to the principles of lawfulness and fairness: in this regard, the key fairness principles of the DPD are reiterated and amplified, and include transparency; purpose limitation; data minimisation; storage limitation; accuracy and integrity 8 . Moreover, like the DPD, the GDPR sets out further restrictions when it comes to the processing of sensitive data, including health data 10 . Data protection by design is implemented by data controllers and processors taking into account the nature, scope, context and purposes of processing. This also includes potential risks that can affect the rights and freedoms of the patients participating in the processing. All principles of the GDPR were taken into account 11 . **Key factors and definitions - Data controllers, subjects, and other actors** First, it is useful to clarify the definitions of the various actors involved in the processing of personal data.
As per Article 4(1), the person that the data relate to is referred to as the ‘data subject’. The protection of data subjects against unlawful processing of their data is the main objective of the EU data protection framework. Secondly, ‘the natural or legal person which, alone or jointly with others, determines the purposes and means of the processing of personal data’ is referred to as the ‘data controller’ 9 . Sometimes, however, the controller may have recourse to a third-party service provider; this entity, which processes personal data on behalf of the controller, is referred to as the ‘data processor’ 10 . The controller is able to determine the purpose himself (or jointly with others). By contrast, the processor is subordinated to the controller and subject to his directions concerning the processing of personal data. **General requirements for the processing of personal data** **Data Processing principles** As previously noted, the basic principles of the GDPR are very similar to those in the DPD. In this regard, Article 5(1)(a) continues the requirement from the DPD that data processing should adhere to the principles of lawfulness and fairness; however, it also adds, thirdly, the need for ‘transparency’ – a key concept that resurfaces in many subsequent, more detailed provisions of the GDPR, and places a new emphasis on the need for data controllers and processors to demonstrate the purpose and ambit of their operations, as well as their compliance with the rules of the GDPR. First, as to the requirement of ‘lawfulness’, this means that any data processing must rest on a legal basis, in the form either of the data subject’s explicit consent or of another basis defined by law. These conditions for lawfulness are further specified in Article 6, and – in the case of processing of special categories of ‘sensitive’ data (where greater restrictions apply) – Article 9 of the GDPR.
**Lawful processing basis** As noted above, the first data processing principle (under Article 5(1)(a)) requires that the processing of personal data is lawful, fair and transparent. As regards the lawfulness aspect, the general legal bases a controller may rely on to justify the processing of personal data are further specified under Article 6 GDPR; however, as regards the processing of certain ‘special categories’ of personal data, the controller needs to show the existence of an additional, more restrictively drawn, processing basis under Article 9. Such special categories (commonly referred to as ‘sensitive data’) comprise data “revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation”. **Data Subject Rights** In Articles 12-22, the GDPR enumerates a number of key rights enjoyed by the data subject in relation to the processing of their personal data. Transparency encompasses various rights of the data subject. A major aspect relates to the data subject's right to information about, and access to, the information processed about them, thereby contributing to the ‘transparency’ which is mentioned in Article 5(a) – together with lawfulness and fairness – as a key principle of data processing. This is also reflected in Recital 39 of the GDPR, where it is stated: “The principle of transparency … concerns, in particular, information to the data subjects on the identity of the controller and the purposes of the processing and further information to ensure fair and transparent processing in respect of the natural persons concerned and their right to obtain confirmation and communication of personal data concerning them which are being processed”.
Here, the GDPR largely restates rights that were already granted to data subjects under the Data Protection Directive, but it also introduces a couple of new rights. However (as was also the position under the Directive), the various rights are prima facie rather than absolute in character: that is to say, they are subject to exemptions in certain situations. **Obligations on data controllers** In Article 24 and following, the GDPR imposes a number of further obligations on data controllers (and in some cases on data processors, who store and/or process data on the controller’s behalf). The ensuing obligations include some that ‘concretise’, or spell out in more detail, how the controller should comply with the principles of fair processing outlined under Articles 5(b)-(f) of the GDPR. We consider these various obligations, so far as relevant to data controllers (and processors) dealing with health data for research, below. a. Data Protection by Design and Default Under Article 25, the controller is required to implement data protection by design and by default, requiring a focus on the organisation and technical security of the relevant processing operations. In particular, these have as their objective the practical implementation of the data minimisation principle under Article 5(b). b. Data security The processing of personal data requires security measures to prevent any accidental disclosure of, or unauthorised access to, the data. Data security is an integral part of data protection that focuses on maintaining the confidentiality, integrity and availability of information. **Data protection in SMARTool** Most of the SMARTool data processing in the framework of the project happened before the Regulation's entry into force (ARTreat/EVINCI data), and that processing was compliant with the provisions of EU law applicable at the time (the repealed Directive 95/46/EC), as well as any relevant national data protection laws.
The personal data of the project have been anonymised from the beginning of the project and, therefore, the GDPR does not apply (see Recital 26 of the Regulation). In a broader sense, SMARTool complies with the GDPR because: * we can assume that data handled in SMARTool can be qualified as personal data, and that the requirements broadly pertaining to the processing of such data under the GDPR are addressed; * Recital 33 of the GDPR states that, because it is not always possible to fully identify the purpose of processing for scientific research at the time the data are collected, data subjects should be allowed to give a (broader) consent (which is our case for ARTreat and EVINCI). Clinical partners also ensure that their process is transparent by creating and posting a Privacy Policy that outlines: * What data they collect * How they store the information * How they use the information * Whom they share the data with * Whether they share the data with third parties * When and how they delete the data All of these have been defined in SMARTool, clearly specifying the use of data, their storage in the various databases and modules of SMARTool, as well as their use in the decision support tools. Sharing is only permitted between the clinical and technical partners inside the consortium. **APPENDIX** **_Data structure_ ** Metadata refers to a set of data that describes and gives information about other data. One option for storing metadata is to store that information within the object itself. The following table illustrates the structure of a typical document stored in each collection. For each document, the collection and type of data entry (Clinical Partner direct input, Clinical Partner direct upload) have been specified.
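The "metadata stored within the object itself" option described above can be sketched with a plain Python dictionary: the audit metadata travels in the same document as the clinical payload. The field names follow the fragment tables in this appendix; the helper name `new_fragment` and all concrete values are invented for illustration.

```python
from datetime import datetime, timezone

# Sketch: a fragment document whose audit metadata (version, timestamps,
# authorship) is embedded alongside the clinical payload it describes.

def new_fragment(type_code, payload, user):
    now = datetime.now(timezone.utc).isoformat()
    return {
        "typeCode": type_code,
        "payload": payload,            # the clinical data itself
        # --- embedded metadata describing the data ---
        "status": "DRAFT",
        "valid": True,
        "historical": False,
        "versionNumber": 1,
        "createAt": now,
        "lastModified": now,
        "createBy": user,
        "lastModifiedBy": user,
    }

frag = new_fragment("CLINICAL", {"height": "178", "weight": "80"}, "clinician01")
print(frag["versionNumber"], frag["createBy"])
```

Because the metadata lives inside the document, a later edit only has to bump `versionNumber` and update `lastModified`/`lastModifiedBy`, while the superseded copy can be moved to _hfragments_ with `historical` set to true.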
**Clinical Characteristics, “fragments” / “hfragments” structures (Clinical Partner direct input)** <table> <tr> <th> **Clinical Characteristics** </th> <th> </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> payload </td> <td> Object </td> </tr> <tr> <td> gender </td> <td> String </td> </tr> <tr> <td> yearOfBirth </td> <td> String </td> </tr> <tr> <td> height </td> <td> String </td> </tr> <tr> <td> weight </td> <td> String </td> </tr> </table> <table> <tr> <th> bmi </th> <th> String </th> </tr> <tr> <td> bmiClass </td> <td> String </td> </tr> <tr> <td> sbp </td> <td> String </td> </tr> <tr> <td> dbp </td> <td> String </td> </tr> <tr> <td> waist_circumference </td> <td> Int32 </td> </tr> <tr> <td> hip_circumference </td> <td> Int32 </td> </tr> <tr> <td> familyHistoryCHD </td> <td> Boolean </td> </tr> <tr> <td> familiyHistoryCHDClass </td> <td> String </td> </tr> <tr> <td> currentSmoking </td> <td> Boolean </td> </tr> <tr> <td> currentSmokingNCigaretteDie </td> <td> Int32 </td> </tr> <tr> <td> pastSmoking </td> <td> Boolean </td> </tr> <tr> <td> smokingInterruptedSince </td> <td> Int32 </td> </tr> <tr> <td> diabetesMellitus </td> <td> Boolean </td> </tr> <tr> <td> metabolicSindrome </td> <td> Boolean </td> </tr> <tr> <td> dyslipidemia </td> <td> Boolean </td> </tr> <tr> <td> hypertension </td> <td> Boolean </td> </tr> <tr> <td> obesity </td> <td> Boolean </td> </tr> <tr> <td> noRiskFactors </td> <td> Boolean </td> </tr> <tr> <td> alcohol </td> <td> Boolean </td> </tr> <tr> <td> alcoholAmount </td> <td> Boolean </td> </tr> <tr> <td> dietVegetable </td> <td> Boolean </td> </tr> <tr> <td> dietVegetableAmount </td> <td> Int32 </td> </tr> <tr> <td> physicalActivity </td> <td> Boolean </td> </tr> <tr> <td> physicalActivityAmount </td> <td> Int32 </td> </tr> <tr> <td> environmentHome </td> <td> String </td> </tr> <tr> <td> expositionPollutants </td> <td> String 
</td> </tr> <tr> <td> not applicable </td> <td> Boolean </td> </tr> <tr> <td> AMI1_date </td> <td> String </td> </tr> <tr> <td> AMI1_vessel_site </td> <td> String </td> </tr> <tr> <td> AMI2_date </td> <td> String </td> </tr> <tr> <td> AMI2_vessel_site </td> <td> String </td> </tr> <tr> <td> CAGB1_date </td> <td> String </td> </tr> <tr> <td> CABG2_date </td> <td> String </td> </tr> <tr> <td> NonSTEMIUnstableAngina1 </td> <td> String </td> </tr> <tr> <td> NonSTEMIUnstableAngina2 </td> <td> String </td> </tr> <tr> <td> PCIStenting1_date </td> <td> String </td> </tr> <tr> <td> PCIStenting1_site </td> <td> String </td> </tr> <tr> <td> PCIStenting2_indication_for_revascularization </td> <td> String </td> </tr> <tr> <td> PCIStenting2_date </td> <td> String </td> </tr> <tr> <td> PCIStenting2_site </td> <td> String </td> </tr> <tr> <td> PCIStenting2_indication_for_revascularization </td> <td> String </td> </tr> <tr> <td> NonCardiacHosp1_date </td> <td> String </td> </tr> <tr> <td> NonCardiacHosp1_spec </td> <td> String </td> </tr> <tr> <td> NonCardiacHosp2_date </td> <td> String </td> </tr> <tr> <td> NonCardiacHosp2_spec </td> <td> String </td> </tr> <tr> <td> pastSymptoms </td> <td> String </td> </tr> </table> <table> <tr> <th> currentSymptoms </th> <th> String </th> </tr> <tr> <td> oralAntidiabetics </td> <td> Boolean </td> </tr> <tr> <td> insulin </td> <td> Boolean </td> </tr> <tr> <td> statins </td> <td> Boolean </td> </tr> <tr> <td> statinsMgDie </td> <td> Int32 </td> </tr> <tr> <td> statinsType </td> <td> String </td> </tr> <tr> <td> aCEInhibitors </td> <td> Boolean </td> </tr> <tr> <td> diuretics </td> <td> Boolean </td> </tr> <tr> <td> ARB </td> <td> Boolean </td> </tr> <tr> <td> bETABlockers </td> <td> Boolean </td> </tr> <tr> <td> calciumAntagonists </td> <td> Boolean </td> </tr> <tr> <td> aspirin </td> <td> Boolean </td> </tr> <tr> <td> novelAnticoagulants </td> <td> Boolean </td> </tr> <tr> <td> traditionalAnticoagulants </td> <td> Boolean </td> </tr> <tr> <td> nitrates </td> <td> Boolean </td> </tr> <tr> <td> DAPT </td> <td> Boolean </td> </tr> <tr> <td> otherDrugs </td> <td> Boolean </td> </tr> <tr> <td> otherDrugsText </td> <td> String </td> </tr> <tr> <td> noTreatment </td> <td> Boolean </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **CTA Report Core Lab, “fragments” / “hfragments” structures (Clinical Partner direct input)** <table> <tr> <th> **CTA Report Core Lab** </th> <th> </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> payload </td> <td> Object </td> </tr> <tr> <td> lumc.ctaAnalysesCompleted </td> <td> Boolean </td> </tr> <tr> <td> lumc.imageQuality </td> <td> String </td> </tr> <tr> <td> ccsNotApplicable </td> <td> Boolean </td> </tr> <tr> <td> TCS_HU </td> <td> </td> </tr> <tr> <td> TCScore </td> <td> String </td> </tr> <tr> <td> RCA_HU </td> <td> </td> </tr> <tr> <td> RCAScore </td> <td> String </td> </tr> <tr> <td> LM_HU </td> <td> </td> </tr> <tr> <td> LMScore </td> <td> String </td> </tr> <tr> <td> LAD_HU </td> <td> </td> </tr> <tr> <td> LADScore </td> <td> String </td> </tr> <tr> <td> LCX_HU </td> <td> </td> </tr> <tr> <td> LCXScore </td> <td> String </td> </tr> <tr> <td> nonCoronaryCardiacFindings </td> <td> Boolean </td> </tr> <tr> <td> systemDominance </td> <td> String </td> </tr> <tr> <td> noPlaquesInAnySegment </td> <td> Boolean </td> </tr> <tr> <td> noVisibleSegmentsExist </td> <td> Boolean </td> </tr> <tr> <td> noVisibleSegmentDescription </td> <td> String </td> </tr> <tr> <td> plaque1
</td> <td> Boolean </td> </tr> <tr> <td> p1_Prox_type </td> <td> String </td> </tr> <tr> <td> p1_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p1_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p1_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque1_2 </td> <td> Boolean </td> </tr> <tr> <td> p1_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p1_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p1_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p1_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque2 </td> <td> Boolean </td> </tr> </table> **CTA Report CL,** **“fragments” /** **“hfragments” structures (Clinical Partner direct input)** <table> <tr> <th> p2_Prox_type </th> <th> String </th> </tr> <tr> <td> p2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque2_2 </td> <td> Boolean </td> </tr> <tr> <td> p2_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p2_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p2_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p2_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque3 </td> <td> Boolean </td> </tr> <tr> <td> p3_Prox_type </td> <td> String </td> </tr> <tr> <td> p3_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p3_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p3_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque3_2 </td> <td> Boolean </td> </tr> <tr> <td> p3_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p3_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p3_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p3_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque4 </td> <td> Boolean </td> </tr> <tr> <td> p4_Prox_type </td> <td> String </td> </tr> <tr> <td> p4_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p4_Prox_lumen </td> <td> String </td> </tr> <tr> <td> 
p4_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque4_2 </td> <td> Boolean </td> </tr> <tr> <td> p4_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p4_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p4_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p4_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque5 </td> <td> Boolean </td> </tr> <tr> <td> p5_Prox_type </td> <td> String </td> </tr> <tr> <td> p5_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p5_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p5_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque5_2 </td> <td> Boolean </td> </tr> <tr> <td> p5_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p5_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p5_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p5_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque6 </td> <td> Boolean </td> </tr> <tr> <td> p6_Prox_type </td> <td> String </td> </tr> <tr> <td> p6_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p6_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p6_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque6_2 </td> <td> Boolean </td> </tr> <tr> <td> p6_2_Prox_type </td> <td> String </td> </tr> </table> <table> <tr> <th> p6_2_Prox_stent </th> <th> Boolean </th> </tr> <tr> <td> p6_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p6_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque7 </td> <td> Boolean </td> </tr> <tr> <td> p7_Prox_type </td> <td> String </td> </tr> <tr> <td> p7_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p7_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p7_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque7_2 </td> <td> Boolean </td> </tr> <tr> <td> p7_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p7_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p7_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p7_2_Plaque_progression 
</td> <td> Boolean </td> </tr> <tr> <td> plaque8 </td> <td> Boolean </td> </tr> <tr> <td> p8_Prox_type </td> <td> String </td> </tr> <tr> <td> p8_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p8_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p8_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque8_2 </td> <td> Boolean </td> </tr> <tr> <td> p8_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p8_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p8_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p8_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque9 </td> <td> Boolean </td> </tr> <tr> <td> p9_Prox_type </td> <td> String </td> </tr> <tr> <td> p9_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p9_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p9_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque9_2 </td> <td> Boolean </td> </tr> <tr> <td> p9_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p9_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p9_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p9_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque10 </td> <td> Boolean </td> </tr> <tr> <td> p10_Prox_type </td> <td> String </td> </tr> <tr> <td> p10_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p10_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p10_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque10_2 </td> <td> Boolean </td> </tr> <tr> <td> p10_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p10_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p10_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p10_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque11 </td> <td> Boolean </td> </tr> <tr> <td> p11_Prox_type </td> <td> String </td> </tr> <tr> <td> p11_Prox_stent </td> <td> Boolean </td> </tr> </table> <table> <tr> <th> p11_Prox_lumen </th> <th> String </th> </tr> <tr> <td> p11_Plaque_progression </td> <td> Boolean 
</td> </tr> <tr> <td> plaque11_2 </td> <td> Boolean </td> </tr> <tr> <td> p11_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p11_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p11_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p11_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque12 </td> <td> Boolean </td> </tr> <tr> <td> p12_Prox_type </td> <td> String </td> </tr> <tr> <td> p12_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p12_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p12_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque12_2 </td> <td> Boolean </td> </tr> <tr> <td> p12_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p12_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p12_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p12_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque13 </td> <td> Boolean </td> </tr> <tr> <td> p13_Prox_type </td> <td> String </td> </tr> <tr> <td> p13_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p13_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p13_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque13_2 </td> <td> Boolean </td> </tr> <tr> <td> p13_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p13_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p13_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p13_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque14 </td> <td> Boolean </td> </tr> <tr> <td> p14_Prox_type </td> <td> String </td> </tr> <tr> <td> p14_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p14_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p14_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque14_2 </td> <td> Boolean </td> </tr> <tr> <td> p14_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p14_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p14_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p14_2_Plaque_progression </td> <td> Boolean </td> 
</tr> <tr> <td> plaque15 </td> <td> Boolean </td> </tr> <tr> <td> p15_Prox_type </td> <td> String </td> </tr> <tr> <td> p15_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p15_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p15_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque15_2 </td> <td> Boolean </td> </tr> <tr> <td> p15_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p15_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p15_2_Prox_lumen </td> <td> String </td> </tr> </table> <table> <tr> <th> p15_2_Plaque_progression </th> <th> Boolean </th> </tr> <tr> <td> plaque16 </td> <td> Boolean </td> </tr> <tr> <td> p16_Prox_type </td> <td> String </td> </tr> <tr> <td> p16_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p16_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p16_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque16_2 </td> <td> Boolean </td> </tr> <tr> <td> p16_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p16_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p16_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p16_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque17 </td> <td> Boolean </td> </tr> <tr> <td> p17_Prox_type </td> <td> String </td> </tr> <tr> <td> p17_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p17_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p17_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> plaque17_2 </td> <td> Boolean </td> </tr> <tr> <td> p17_2_Prox_type </td> <td> String </td> </tr> <tr> <td> p17_2_Prox_stent </td> <td> Boolean </td> </tr> <tr> <td> p17_2_Prox_lumen </td> <td> String </td> </tr> <tr> <td> p17_2_Plaque_progression </td> <td> Boolean </td> </tr> <tr> <td> p1_seg_wf </td> <td> Double </td> </tr> <tr> <td> p1_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p1_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p1_ste_wf </td> <td> Double </td> </tr> <tr> <td> p1_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> 
p1_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p1_pla_wf </td> <td> Double </td> </tr> <tr> <td> p1_score </td> <td> Double </td> </tr> <tr> <td> p2_seg_wf </td> <td> Double </td> </tr> <tr> <td> p2_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p2_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p2_ste_wf </td> <td> Double </td> </tr> <tr> <td> p2_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p2_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p2_pla_wf </td> <td> Double </td> </tr> <tr> <td> p2_score </td> <td> Double </td> </tr> <tr> <td> p3_seg_wf </td> <td> Double </td> </tr> <tr> <td> p3_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p3_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p3_ste_wf </td> <td> Double </td> </tr> <tr> <td> p3_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p3_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p3_pla_wf </td> <td> Double </td> </tr> <tr> <td> p3_score </td> <td> Double </td> </tr> <tr> <td> p4_seg_wf </td> <td> Double </td> </tr> </table> <table> <tr> <th> p4_ste_wf_1 </th> <th> Double </th> </tr> <tr> <td> p4_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p4_ste_wf </td> <td> Double </td> </tr> <tr> <td> p4_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p4_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p4_pla_wf </td> <td> Double </td> </tr> <tr> <td> p4_score </td> <td> Double </td> </tr> <tr> <td> p5_seg_wf </td> <td> Double </td> </tr> <tr> <td> p5_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p5_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p5_ste_wf </td> <td> Double </td> </tr> <tr> <td> p5_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p5_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p5_pla_wf </td> <td> Double </td> </tr> <tr> <td> p5_score </td> <td> Double </td> </tr> <tr> <td> p6_seg_wf </td> <td> Double </td> </tr> <tr> <td> p6_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p6_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p6_ste_wf </td> <td> Double </td> </tr> <tr> <td> 
p6_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p6_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p6_pla_wf </td> <td> Double </td> </tr> <tr> <td> p6_score </td> <td> Double </td> </tr> <tr> <td> p7_seg_wf </td> <td> Double </td> </tr> <tr> <td> p7_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p7_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p7_ste_wf </td> <td> Double </td> </tr> <tr> <td> p7_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p7_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p7_pla_wf </td> <td> Double </td> </tr> <tr> <td> p7_score </td> <td> Double </td> </tr> <tr> <td> p8_seg_wf </td> <td> Double </td> </tr> <tr> <td> p8_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p8_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p8_ste_wf </td> <td> Double </td> </tr> <tr> <td> p8_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p8_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p8_pla_wf </td> <td> Double </td> </tr> <tr> <td> p8_score </td> <td> Double </td> </tr> <tr> <td> p9_seg_wf </td> <td> Double </td> </tr> <tr> <td> p9_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p9_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p9_ste_wf </td> <td> Double </td> </tr> <tr> <td> p9_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p9_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p9_pla_wf </td> <td> Double </td> </tr> </table> <table> <tr> <th> p9_score </th> <th> Double </th> </tr> <tr> <td> p10_seg_wf </td> <td> Double </td> </tr> <tr> <td> p10_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p10_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p10_ste_wf </td> <td> Double </td> </tr> <tr> <td> p10_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p10_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p10_pla_wf </td> <td> Double </td> </tr> <tr> <td> p10_score </td> <td> Double </td> </tr> <tr> <td> p11_seg_wf </td> <td> Double </td> </tr> <tr> <td> p11_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p11_ste_wf_2 </td> <td> Double </td> 
</tr> <tr> <td> p11_ste_wf </td> <td> Double </td> </tr> <tr> <td> p11_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p11_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p11_pla_wf </td> <td> Double </td> </tr> <tr> <td> p11_score </td> <td> Double </td> </tr> <tr> <td> p12_seg_wf </td> <td> Double </td> </tr> <tr> <td> p12_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p12_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p12_ste_wf </td> <td> Double </td> </tr> <tr> <td> p12_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p12_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p12_pla_wf </td> <td> Double </td> </tr> <tr> <td> p12_score </td> <td> Double </td> </tr> <tr> <td> p13_seg_wf </td> <td> Double </td> </tr> <tr> <td> p13_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p13_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p13_ste_wf </td> <td> Double </td> </tr> <tr> <td> p13_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p13_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p13_pla_wf </td> <td> Double </td> </tr> <tr> <td> p13_score </td> <td> Double </td> </tr> <tr> <td> p14_seg_wf </td> <td> Double </td> </tr> <tr> <td> p14_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p14_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p14_ste_wf </td> <td> Double </td> </tr> <tr> <td> p14_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p14_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p14_pla_wf </td> <td> Double </td> </tr> <tr> <td> p14_score </td> <td> Double </td> </tr> <tr> <td> p15_seg_wf </td> <td> Double </td> </tr> <tr> <td> p15_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p15_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p15_ste_wf </td> <td> Double </td> </tr> <tr> <td> p15_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p15_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p15_pla_wf </td> <td> Double </td> </tr> <tr> <td> p15_score </td> <td> Double </td> </tr> <tr> <td> p16_seg_wf </td> <td> Double </td> </tr> <tr> <td> p16_ste_wf_1 
</td> <td> Double </td> </tr> <tr> <td> p16_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p16_ste_wf </td> <td> Double </td> </tr> <tr> <td> p16_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p16_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p16_pla_wf </td> <td> Double </td> </tr> <tr> <td> p16_score </td> <td> Double </td> </tr> <tr> <td> p17_seg_wf </td> <td> Double </td> </tr> <tr> <td> p17_ste_wf_1 </td> <td> Double </td> </tr> <tr> <td> p17_ste_wf_2 </td> <td> Double </td> </tr> <tr> <td> p17_ste_wf </td> <td> Double </td> </tr> <tr> <td> p17_pla_wf_1 </td> <td> Double </td> </tr> <tr> <td> p17_pla_wf_2 </td> <td> Double </td> </tr> <tr> <td> p17_pla_wf </td> <td> Double </td> </tr> <tr> <td> p17_score </td> <td> Double </td> </tr> <tr> <td> CAD_score </td> <td> Double </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **Blood Tests, “fragments” / “hfragments” structures (Clinical Partner direct input)** <table> <tr> <th> **Blood Tests** </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> loked </td> <td> Boolean </td> </tr> <tr> <td> payload </td> <td> Object </td> </tr> <tr> <td> FastingGlucose </td> <td> Double </td> </tr> <tr> <td> FastingGlucoseMetric </td> <td> String </td> </tr> <tr> <td> TotChol </td> <td> Double </td> </tr> <tr> <td> TotCholMetric </td> <td> String </td> </tr> <tr> <td> LDL </td> <td> Double </td> </tr> <tr> <td>
LDLMetric </td> <td> String </td> </tr> <tr> <td> HDL </td> <td> Double </td> </tr> <tr> <td> HDLMetric </td> <td> String </td> </tr> <tr> <td> Triglycerides </td> <td> Double </td> </tr> <tr> <td> TriglyceridesMetric </td> <td> String </td> </tr> <tr> <td> UricAcidMetric </td> <td> Double </td> </tr> <tr> <td> UricAcidMetricMetric </td> <td> String </td> </tr> <tr> <td> Creatinine </td> <td> Double </td> </tr> <tr> <td> CreatinineMetric </td> <td> String </td> </tr> <tr> <td> Leukocytes </td> <td> Double </td> </tr> <tr> <td> LeukocytesMetric </td> <td> String </td> </tr> <tr> <td> Erythrocytes </td> <td> Double </td> </tr> <tr> <td> ErythrocytesMetric </td> <td> String </td> </tr> <tr> <td> Platelets </td> <td> Double </td> </tr> <tr> <td> PlateletsMetric </td> <td> String </td> </tr> <tr> <td> Hemoglobin </td> <td> Double </td> </tr> <tr> <td> HemoglobinMetric </td> <td> String </td> </tr> <tr> <td> Fibrinogen </td> <td> Double </td> </tr> <tr> <td> FibrinogenMetric </td> <td> String </td> </tr> <tr> <td> HTC </td> <td> Double </td> </tr> <tr> <td> MCV </td> <td> Double </td> </tr> <tr> <td> MCH </td> <td> Double </td> </tr> <tr> <td> INR </td> <td> Double </td> </tr> <tr> <td> aPTT </td> <td> Double </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **Inflammatory markers / Lipid profile / Biohumoral markers, “fragments” / “hfragments” structures (Clinical Partner direct upload, data extraction)** <table> <tr> <th> **Inflammatory markers** </th> </tr> <tr> <td> 
_id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> Payload </td> <td> Object </td> </tr> <tr> <td> Content </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> 
</table> <table> <tr> <th> **Lipid profile** </th> <th> </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> Payload </td> <td> Object </td> </tr> <tr> <td> Content </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> </table> <table> <tr> <th> value </th> <th> String </th> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [9] </td> <td> 
Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [17] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> <tr> <td> [18] </td> <td> Object </td> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> string </td> </tr> </table> <table> <tr> <th> [19] </th> <th> Object </th> </tr> <tr> <td> param </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> metric </td> <td> 
string </td> </tr>
<tr> <td> [20] … [59] </td> <td> Object, each with param (String), value (String) and metric (string) </td> </tr>
<tr> <td> status </td> <td> String </td> </tr>
<tr> <td> valid </td> <td> Boolean </td> </tr>
<tr> <td> historical </td> <td> Boolean </td> </tr>
<tr> <td> versionNumber </td> <td> Int64 </td> </tr>
<tr> <td> createAt </td> <td> Date </td> </tr>
<tr> <td> lastModified </td> <td> Date </td> </tr>
<tr> <td> createBy </td> <td> String </td> </tr>
<tr> <td> lastModifiedBy </td> <td> String </td> </tr>
<tr> <td> study </td> <td> Object </td> </tr>
<tr> <td> $ref </td> <td> String </td> </tr>
<tr> <td> $id </td> <td> ObjectId </td> </tr>
</table> <table> <tr>
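</tr> </table>

The “fragments” / “hfragments” documents listed in these tables share a common envelope: a typed payload whose content array holds param/value/metric triples, plus audit fields (status, valid, historical, versionNumber, createAt, lastModified, createBy, lastModifiedBy) and a DBRef back to the study. As an illustration only (the concrete values, the typeCode string, and the plain-dict representation are assumptions, not taken from this deliverable), a Python sketch of one such document:

```python
from datetime import datetime

# Hypothetical "fragments" document following the envelope in the tables.
# Field names come from the schema listing; all values are invented.
# "_id" (ObjectId) is assigned by the database and omitted here.
fragment = {
    "_class": "fragment",                  # String
    "typeCode": "BIOHUMORAL_MARKERS",      # String (assumed code)
    "payload": {
        "content": [
            # each content entry is a param / value / metric triple
            {"param": "glucose", "value": "5.4", "metric": "mmol/L"},
            {"param": "creatinine", "value": "78", "metric": "umol/L"},
        ]
    },
    "status": "FINAL",                     # String
    "valid": True,                         # Boolean
    "historical": False,                   # Boolean
    "versionNumber": 1,                    # Int64
    "createAt": datetime(2016, 1, 1),      # Date
    "lastModified": datetime(2016, 1, 1),  # Date
    "createBy": "clinical-partner",        # String
    "lastModifiedBy": "clinical-partner",  # String
    "study": {"$ref": "studies", "$id": "(study ObjectId)"},  # DBRef
}

# Minimal consistency check: every content entry carries the full triple.
assert all({"param", "value", "metric"} <= set(entry)
           for entry in fragment["payload"]["content"])
```

Measured values are stored as strings, with the unit carried in metric, which matches the String/string typing shown in the tables.

<table> <tr>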
<th> **Biohumoral markers** </th> </tr>
<tr> <td> _id </td> <td> ObjectId </td> </tr>
<tr> <td> _class </td> <td> String </td> </tr>
<tr> <td> typeCode </td> <td> String </td> </tr>
<tr> <td> Payload </td> <td> Object </td> </tr>
<tr> <td> Content </td> <td> Array </td> </tr>
<tr> <td> [0] … [22] </td> <td> Object, each with param (String), value (String) and metric (string) </td> </tr>
<tr> <td> status </td> <td> String </td> </tr>
<tr> <td> valid </td> <td> Boolean </td> </tr>
<tr> <td> historical </td> <td> Boolean </td> </tr>
<tr> <td> versionNumber </td> <td> Int64 </td> </tr>
<tr> <td> createAt </td> <td> Date </td> </tr>
<tr> <td> lastModified </td> <td> Date </td> </tr>
<tr> <td> createBy </td> <td> String </td> </tr>
<tr> <td> lastModifiedBy </td> <td> String </td> </tr>
<tr> <td> study </td> <td> Object </td> </tr>
<tr> <td> $ref </td> <td> String </td> </tr>
<tr> <td> $id </td> <td> ObjectId </td> </tr>
</table>

**Monocytes, “fragments” / “hfragments” structure (Clinical Partner direct upload)**

<table> <tr> <th> **Monocytes** </th> </tr>
<tr> <td> _id </td> <td> ObjectId </td> </tr>
<tr> <td> _class </td> <td> String </td> </tr>
<tr> <td> typeCode </td> <td> String </td> </tr>
<tr> <td> payload </td> <td> Object </td> </tr>
<tr> <td> content </td> <td> Array </td> </tr>
<tr> <td> [0] </td> <td> object </td> </tr>
<tr> <td> row1_cell0 </td> <td> String </td> </tr>
<tr> <td> prefix1 </td> <td> String </td> </tr>
<tr> <td> row1_cell2 </td> <td> String </td> </tr>
<tr> <td> row1_cell3 </td> <td> String </td> </tr>
<tr> <td> prefix2 </td> <td> String </td> </tr>
<tr> <td> row1_cell5 </td> <td> String </td> </tr>
<tr> <td> row1_cell6 </td> <td> String </td> </tr>
<tr> <td> prefix3 </td> <td> String </td> </tr>
<tr> <td> row1_cell8 </td> <td> String </td> </tr>
<tr> <td> row1_cell9 </td> <td> String </td>
</tr>
<tr> <td> prefix4 </td> <td> String </td> </tr>
<tr> <td> row1_cell11 </td> <td> String </td> </tr>
<tr> <td> row1_cell12 </td> <td> String </td> </tr>
<tr> <td> [1] </td> <td> Object, with param (String) and metric1 … metric12 (String) </td> </tr>
<tr> <td> [2] … [6] </td> <td> Object, each with param (String) and value1 … value12 (String) </td> </tr>
<tr> <td> [7] </td> <td> Object, with Param (String), param (String) and value1 … value11 (String) </td> </tr>
<tr> <td> [8] … [12] </td> <td> Object, each with param (String) and value1 … value12 (String) </td> </tr>
<tr> <td> status </td> <td> String </td> </tr>
<tr> <td> valid </td> <td> Boolean </td> </tr>
<tr> <td> historical </td> <td> Boolean </td> </tr>
<tr> <td> versionNumber </td> <td> Int64 </td> </tr>
<tr> <td> createAt </td> <td> Date </td> </tr>
<tr> <td> lastModified </td> <td> Date </td> </tr>
<tr> <td> createBy </td> <td> String </td> </tr>
<tr> <td> lastModifiedBy </td> <td> String </td> </tr>
<tr> <td> study </td> <td> Object </td> </tr>
<tr> <td> $ref </td> <td> String </td> </tr>
<tr> <td> $id </td> <td> ObjectId </td> </tr>
</table>

**QCTA Analysis, “fragments” / “hfragments” structures (Clinical Partner direct upload, no data extraction)**

<table> <tr> <th> **QCTA Analysis** </th> </tr>
<tr> <td> _id </td> <td> ObjectId </td> </tr>
<tr> <td> _class </td> <td> String </td> </tr>
<tr> <td> typeCode </td> <td> String </td> </tr>
<tr> <td> payload </td> <td> Object </td> </tr>
<tr> <td> status </td> <td> String </td> </tr>
<tr> <td> valid </td> <td> Boolean </td> </tr>
<tr> <td> historical </td> <td> Boolean </td> </tr>
<tr> <td> versionNumber </td> <td> Int64 </td> </tr>
<tr> <td> createAt </td> <td> Date </td> </tr>
<tr> <td> lastModified </td> <td> Date </td> </tr>
<tr> <td> createBy </td> <td> String </td> </tr>
<tr> <td> lastModifiedBy </td> <td> String </td> </tr>
<tr> <td> study </td> <td> Object </td> </tr>
<tr> <td> $ref </td> <td> String </td> </tr>
<tr> <td> $id </td> <td> ObjectId </td> </tr>
</table>

**Exclusion criteria, “fragments” / “hfragments” structures (Clinical Partner direct input)**

<table> <tr> <th> **Exclusion Criteria** </th> </tr>
<tr> <td> _id </td> <td> ObjectId </td> </tr>
<tr> <td> _class </td> <td> String </td> </tr>
<tr> <td> typeCode </td> <td> String </td> </tr>
<tr> <td> loked </td> <td> Boolean </td> </tr>
<tr> <td> payload </td> <td> Object </td> </tr>
<tr> <td> acuteCoronarySyndrome </td> <td>
Boolean </td> </tr> <tr> <td> inabilityProvideConsent </td> <td> Boolean </td> </tr> <tr> <td> Pregnancy </td> <td> Boolean </td> </tr> <tr> <td> lvDysfunction </td> <td> Boolean </td> </tr> <tr> <td> AtrialFibrillation </td> <td> Boolean </td> </tr> <tr> <td> CerebralIschemicAttack </td> <td> Boolean </td> </tr> <tr> <td> ActiveCancerNeoplasticSurgery </td> <td> Boolean </td> </tr> <tr> <td> Cardiomyopathy </td> <td> Boolean </td> </tr> <tr> <td> CongenitalHeartDisease </td> <td> Boolean </td> </tr> <tr> <td> SignificantValvularDisease </td> <td> Boolean </td> </tr> <tr> <td> CoronaryRevascularization </td> <td> Boolean </td> </tr> <tr> <td> CABG </td> <td> Boolean </td> </tr> <tr> <td> CarotidSurgeryHistory </td> <td> Boolean </td> </tr> <tr> <td> Creatinine </td> <td> Boolean </td> </tr> <tr> <td> ChronicKidneyDisease </td> <td> Boolean </td> </tr> <tr> <td> ActiveAutoimmuneAcuteInflammatoryDisease </td> <td> Boolean </td> </tr> <tr> <td> HIV </td> <td> Boolean </td> </tr> <tr> <td> HepatitisB </td> <td> Boolean </td> </tr> <tr> <td> HepatitisC </td> <td> Boolean </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **CTA Report page, “fragments” / “hfragments” structure (Clinical Partner direct input)** <table> <tr> <th> **CTA Report** </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> payload </td> <td> Object </td> </tr> <tr> <td> ccsNotApplicable </td> 
<td> Boolean </td> </tr>
<tr> <td> TCS_HU </td> <td> String </td> </tr>
<tr> <td> TCScore </td> <td> String </td> </tr>
<tr> <td> RCA_HU </td> <td> String </td> </tr>
<tr> <td> RCAScore </td> <td> String </td> </tr>
<tr> <td> LM_HU </td> <td> String </td> </tr>
<tr> <td> LMScore </td> <td> String </td> </tr>
<tr> <td> LAD_HU </td> <td> String </td> </tr>
<tr> <td> LADScore </td> <td> String </td> </tr>
<tr> <td> LCX_HU </td> <td> String </td> </tr>
<tr> <td> LCXScore </td> <td> String </td> </tr>
<tr> <td> nonCoronaryCardiacFindings </td> <td> Boolean </td> </tr>
<tr> <td> systemDominance </td> <td> String </td> </tr>
<tr> <td> noPlaquesInAnySegment </td> <td> Boolean </td> </tr>
<tr> <td> noVisibleSegmentsExist </td> <td> Boolean </td> </tr>
<tr> <td> noVisibleSegmentDescription </td> <td> String </td> </tr>
<tr> <td> plaque1 … plaque17 </td> <td> the ten rows below repeat for each plaque N = 1 … 17 (pN stands for p1 … p17) </td> </tr>
<tr> <td> plaqueN </td> <td> Boolean </td> </tr>
<tr> <td> pN_Prox_type </td> <td> String </td> </tr>
<tr> <td> pN_Prox_stent </td> <td> Boolean </td> </tr>
<tr> <td> pN_Prox_lumen </td> <td> String </td> </tr>
<tr> <td> pN_Plaque_progression </td> <td> Boolean </td> </tr>
<tr> <td> plaqueN_2 </td> <td> Boolean </td> </tr>
<tr> <td> pN_2_Prox_type </td> <td> String </td> </tr>
<tr> <td> pN_2_Prox_stent </td> <td> Boolean </td> </tr>
<tr> <td> pN_2_Prox_lumen </td> <td> String </td> </tr>
<tr> <td> pN_2_Plaque_progression </td> <td> Boolean </td> </tr>
<tr> <td> status </td> <td> String </td> </tr>
<tr> <td> valid </td> <td> Boolean </td> </tr>
<tr> <td> historical </td> <td> Boolean </td> </tr>
<tr> <td> versionNumber </td> <td> Int64 </td> </tr>
<tr> <td> createAt </td> <td> Date </td> </tr>
<tr> <td> lastModified </td> <td> Date </td> </tr>
<tr> <td> createBy </td> <td> String </td> </tr>
<tr> <td> lastModifiedBy </td> <td> String </td> </tr>
<tr> <td> study </td> <td> Object </td> </tr>
<tr> <td> $ref </td> <td> String </td> </tr>
<tr> <td> $id </td> <td> ObjectId </td> </tr>
</table>

**CTA Acquisition page, “fragments” structure (Clinical Partner direct input)**

<table> <tr> <th> **CTA Acquisition** </th> </tr>
<tr> <td> _id </td> <td> ObjectId </td> </tr>
<tr> <td> _class </td> <td> String </td> </tr>
<tr> <td> typeCode </td> <td> String </td> </tr>
<tr> <td> payload </td> <td> Object </td> </tr>
<tr> <td> ccta_date </td> <td> String </td> </tr>
<tr> <td> scanner_manufacturer </td> <td> String </td> </tr>
<tr> <td> scanner_type </td> <td> String </td> </tr>
<tr> <td> standardCStubeVoltage </td> <td> Int32 </td> </tr>
<tr> <td> standardCStubeCurrent </td> <td> Int32 </td> </tr>
<tr> <td> standardCSscanSliceThickness </td> <td> Double </td> </tr>
<tr> <td> standardCSscanIncrementThickness </td> <td> Double </td> </tr>
<tr> <td> hr_stable </td> <td> Boolean </td> </tr>
<tr> <td> hr_stable_value </td> <td> Int32 </td> </tr>
<tr> <td> beta_blocking_used </td> <td> Boolean </td> </tr>
<tr> <td> beta_blocking_type </td> <td> String </td> </tr>
<tr> <td>
additional_beta_blocking_used </td> <td> Boolean </td> </tr> <tr> <td> additional_beta_blocking_type </td> <td> String </td> </tr> <tr> <td> additional_beta_blocking_amount </td> <td> String </td> </tr> <tr> <td> additional_beta_blocking_roa </td> <td> String </td> </tr> <tr> <td> oral </td> <td> String </td> </tr> <tr> <td> IV </td> <td> String </td> </tr> <tr> <td> OtherMedicationAdministered </td> <td> Boolean </td> </tr> <tr> <td> OtherMedicationAdministeredType </td> <td> String </td> </tr> <tr> <td> Nitroglycerin </td> <td> Boolean </td> </tr> <tr> <td> NitroglycerinAmount </td> <td> String </td> </tr> <tr> <td> NitroglycerinRoa </td> <td> String </td> </tr> <tr> <td> sublingualSpray </td> <td> String </td> </tr> <tr> <td> tablet </td> <td> String </td> </tr> <tr> <td> Native scan CTDIvol </td> <td> Double </td> </tr> <tr> <td> Native scan DLP </td> <td> Double </td> </tr> <tr> <td> Contrast scan CTDIvol </td> <td> Double </td> </tr> <tr> <td> Contrast scan DLP </td> <td> Double </td> </tr> <tr> <td> Total scan CTDIvol </td> <td> Double </td> </tr> <tr> <td> Total scan DLP </td> <td> Double </td> </tr> <tr> <td> Heart rate contrast scan </td> <td> Int32 </td> </tr> <tr> <td> Reconstruction % of R-R </td> <td> Int32 </td> </tr> <tr> <td> Slice Thickness </td> <td> Double </td> </tr> <tr> <td> Increment thickness </td> <td> Double </td> </tr> <tr> <td> kernel </td> <td> Boolean </td> </tr> <tr> <td> Collimation </td> <td> Int32 </td> </tr> <tr> <td> Tube voltage </td> <td> Int32 </td> </tr> <tr> <td> Tube current </td> <td> Int32 </td> </tr> <tr> <td> Prospective </td> <td> Boolean </td> </tr> <tr> <td> ProspectiveAmount </td> <td> String </td> </tr> <tr> <td> Retrospective </td> <td> Boolean </td> </tr> <tr> <td> ECGmodulation </td> <td> Boolean </td> </tr> <tr> <td> ECGmodulationFrom </td> <td> Int32 </td> </tr> <tr> <td> ECGmodulationTo </td> <td> Int32 </td> </tr> <tr> <td> CCollimation </td> <td> Int32 </td> </tr> <tr> <td> CTube voltage </td> <td> Int32 
</td> </tr> <tr> <td> CTube current </td> <td> Int32 </td> </tr> <tr> <td> CProspective </td> <td> Boolean </td> </tr> <tr> <td> CProspectiveAmount </td> <td> Int32 </td> </tr> <tr> <td> CRetrospective </td> <td> Boolean </td> </tr> <tr> <td> CECGmodulation </td> <td> Boolean </td> </tr> <tr> <td> CECGmodulationFrom </td> <td> Int32 </td> </tr> <tr> <td> CECGmodulationTo </td> <td> Int32 </td> </tr> <tr> <td> Automatic Exposure Control </td> <td> Boolean </td> </tr> <tr> <td> ContrastAgent </td> <td> Boolean </td> </tr> <tr> <td> ContrastAgentName </td> <td> String </td> </tr> <tr> <td> ContrastAgentVolume </td> <td> Double </td> </tr> <tr> <td> ContrastAdministration </td> <td> Double </td> </tr> <tr> <td> SalineFlush </td> <td> Boolean </td> </tr> <tr> <td> SalineFlushAmount </td> <td> Double </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **Clinical Decision page, “fragments” structure (Clinical Partner direct input)** <table> <tr> <th> **Clinical Decision** </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> payload </td> <td> Object </td> </tr> <tr> <td> plaqueProgression </td> <td> String </td> </tr> <tr> <td> plaqueProgressionType </td> <td> String </td> </tr> <tr> <td> nonInvasiveTesting </td> <td> Boolean </td> </tr> <tr> <td> nonInvasiveTestingType </td> <td> String </td> </tr> <tr> <td> angiography </td> <td> Boolean </td> </tr> <tr> <td> 
angiographyType </td> <td> String </td> </tr> <tr> <td> revascularization </td> <td> Boolean </td> </tr> <tr> <td> revascularizationType </td> <td> String </td> </tr> <tr> <td> oralAntidiabetics </td> <td> Boolean </td> </tr> <tr> <td> oralAntidiabeticsDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> Insulin </td> <td> Boolean </td> </tr> <tr> <td> InsulinDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> statins </td> <td> Boolean </td> </tr> <tr> <td> StatinsMgDie </td> <td> Boolean </td> </tr> <tr> <td> StatinsType </td> <td> Boolean </td> </tr> <tr> <td> StatinsDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> aCEInhibitors </td> <td> Boolean </td> </tr> <tr> <td> aCEInhibitorsDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> Diuretics </td> <td> Boolean </td> </tr> <tr> <td> DiureticsDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> ARB </td> <td> Boolean </td> </tr> <tr> <td> ARBDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> bETABlockers </td> <td> Boolean </td> </tr> <tr> <td> bETABlockersDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> calciumAntagonists </td> <td> Boolean </td> </tr> <tr> <td> calciumAntagonistsDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> Aspirin </td> <td> Boolean </td> </tr> <tr> <td> AspirinDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> novelAnticoagulants </td> <td> Boolean </td> </tr> <tr> <td> novelAnticoagulantDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> traditionalAnticoagulants </td> <td> Boolean </td> </tr> <tr> <td> traditionalAnticoagulantDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> Nitrates </td> <td> Boolean </td> </tr> <tr> <td> NitratesDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> DAPT </td> <td> Boolean </td> </tr> <tr> <td> DAPTDoseIncrease </td> <td> Boolean </td> </tr> <tr> <td> otherDrugs </td> <td> </td> </tr> <tr> <td> otherDrugsText </td> <td> </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> 
</tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **Quality Control page, “fragments” / “hfragments” structure (Clinical Partner direct input)** <table> <tr> <th> **Quality Control** </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> typeCode </td> <td> String </td> </tr> <tr> <td> payload </td> <td> Object </td> </tr> <tr> <td> clinical_issue </td> <td> Boolean </td> </tr> <tr> <td> clinical_notes </td> <td> String </td> </tr> <tr> <td> clinical_check_completed </td> <td> Boolean </td> </tr> <tr> <td> clinical_data_corrected </td> <td> Boolean </td> </tr> <tr> <td> cta_report_issue </td> <td> Boolean </td> </tr> <tr> <td> cta_report_notes </td> <td> String </td> </tr> <tr> <td> cta_report_check_completed </td> <td> Boolean </td> </tr> <tr> <td> cta_report_data_corrected </td> <td> Boolean </td> </tr> <tr> <td> blood_tests_issue </td> <td> Boolean </td> </tr> <tr> <td> blood_tests_notes </td> <td> String </td> </tr> <tr> <td> blood_tests_check_completed </td> <td> Boolean </td> </tr> <tr> <td> blood_tests_data_corrected </td> <td> Boolean </td> </tr> <tr> <td> decision_making_issue </td> <td> Boolean </td> </tr> <tr> <td> decision_making_notes </td> <td> String </td> </tr> <tr> <td> decision_making_check_completed </td> <td> Boolean </td> </tr> <tr> <td> decision_making_data_corrected </td> <td> Boolean </td> </tr> <tr> <td> status </td> <td> String </td> </tr> <tr> <td> valid </td> <td> Boolean </td> </tr> <tr> <td> historical </td> <td> Boolean </td> </tr> <tr> <td> versionNumber </td> <td> Int64 
</td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> <tr> <td> study </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> ObjectId </td> </tr> </table> **“Institutions” structure** <table> <tr> <th> **Institutions** </th> <th> </th> </tr> <tr> <td> _id </td> <td> String </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> clinical </td> <td> Boolean </td> </tr> </table> **“Images” structure (Clinical Partner direct input)** <table> <tr> <th> **Images** </th> <th> </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> fileName </td> <td> String </td> </tr> <tr> <td> fileSize </td> <td> Int64 </td> </tr> <tr> <td> contentType </td> <td> String </td> </tr> <tr> <td> sourcePath </td> <td> String </td> </tr> <tr> <td> studyCode </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **“studies” structure (Clinical Partner direct input)** <table> <tr> <th> **Studies** </th> <th> </th> </tr> <tr> <td> _id </td> <td> ObjectId </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> studyCode </td> <td> String </td> </tr> <tr> <td> subjectCode </td> <td> String </td> </tr> <tr> <td> progressiveStudyNumber </td> <td> Int32 </td> </tr> <tr> <td> creationDate </td> <td> Date </td> </tr> <tr> <td> enabled </td> <td> Boolean </td> </tr> <tr> <td> fragmentsDisabled </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> 
<td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> <table> <tr> <th> Institution </th> <th> Object </th> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> fragments </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [12] 
</td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> $ref </td> <td> String </td> </tr> <tr> <td> $id </td> <td> String </td> </tr> </table> **Blood Tests page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_1 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> 
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_2 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> classname </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> 
<td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_3 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> </table> <table> <tr> <th> name </th> <th> String </th> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_4 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String 
</td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_5 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> 
String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_6 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> description </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_7 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> 
Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object 
</td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> options </th> <th> Array </th> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> 
String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_8 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> 
String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> allowInvalid </th> <th> Boolean </th> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> uom2x_9 </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> 
<td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> </table> <table> <tr> <th> [3] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> 
<tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Quality Control page, “types” structure** <table> <tr> <th> **Quality Control** </th> <th> </th> </tr> <tr> <td> _id </td> <td> String </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object 
</td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String
</td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr>
<td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td>
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td>
<td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified
</td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **RNA Sequencing page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Exclusion Criteria page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> Fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> 
</tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td>
</tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr>
<tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions
</td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [12] </td> <td> 
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object
</td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [17] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td>
</tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [18] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Lipid Profile page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr>
<tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> importerClass </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Biohumoral markers page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> importerClass </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Exome Sequencing page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> 
shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Monocytes page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> importerClass </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Inflammatory markers page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> 
<td> Object </td> </tr> <tr> <td> importerClass </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **QCTA Analysis page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Blood Sampling page, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> 
definition </td> <td> Object </td> </tr> <tr> <td> Fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td>
Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> 
<tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> template </th> <th> String </th> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> 
<td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String 
</td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **CTA Report Core Lab, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> Fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td>
[2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> fieldGroup </th> <th> Array </th> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> 
String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> 
String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> depends </th> <th> String </th> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> 
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> </table> <table> <tr> <th> [0] </th> <th> Object </th> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> 
Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> id </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> 
String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> 
</tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className 
</td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> dictionary </th> <th> 
String </th> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> 
<td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> [4] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String 
</td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> [1] </th> <th> String </th> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> 
templateOptions </td> <td> Object </td> </tr> </table>

The listing above is a field-by-field type dump of the JSON configuration object that defines the data-entry forms. The definition is an array of indexed field-group entries (e.g. [0]–[14] in this excerpt), and every entry repeats the same small set of attributes, so the attribute schema is summarized once in the following table:

<table> <tr> <th> Attribute </th> <th> Type </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object (contains label: String and, for dictionary-backed fields, dictionary: String) </td> </tr> <tr> <td> fieldGroup </td> <td> Array (nested field objects with the same attributes) </td> </tr> </table>

<table> <tr> <td>
className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup 
</td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> </table> <table> <tr> <th> templateOptions </th> <th> Object </th> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object 
</td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> 
[1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> hideExpression </th> <th> String </th> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> 
String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> </table> <table> <tr> <th> [4] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> 
<td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> </table> <table> <tr> <th> expression </th> <th> String </th> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> 
templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> </table> <table> <tr> <th> 
templateOptions </th> <th> Object </th> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> 
<td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> 
String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> </table> <table> <tr> <th> expression </th> <th> String </th> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> 
String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> </table> <table> <tr> <th> templateOptions </th> <th> Object </th> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup 
</td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> 
String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> </table> <table> <tr> <th> [1] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> 
String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> </table> <table> <tr> <th> templateOptions </th> <th> Object </th> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> 
<td> String </td> </tr>
<tr> <td> fieldGroup </td> <td> Array </td> </tr>
<tr> <td> [0] </td> <td> Object (className: String, template: String) </td> </tr>
<tr> <td> [1] – [8] </td> <td> Object, each with className (String), key (String), type (String), templateOptions (Object), depends (String) and expression (String) </td> </tr>
<tr> <td> [8] – [17] </td> <td> Object, subsequent entries of the enclosing field array, each with className (String) and fieldGroup (Array); each fieldGroup repeats the pattern above: entry [0] with className and template, entries [1] – [8] with className, key, type, templateOptions, depends and expression </td> </tr>
<tr> <td> [18] </td> <td> Object (className: String, fieldGroup: Array); fieldGroup entry [0] with className and template, entry [1] with className, key, type, templateOptions (Object, including label: String), depends and expression </td> </tr>
<tr> <td> canReadRoles </td> <td> Array </td> </tr>
<tr> <td> [0] </td> <td> String </td> </tr>
<tr> <td> [1] </td> <td> String </td> </tr>
<tr> <td> [2] </td> <td> String </td>
</tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Clinical Characteristics, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> field </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String 
</td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> </table> <table> <tr> <th> templateOptions </th> <th> Object </th> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> max </td> <td> Int32 </td> </tr> <tr> <td> min </td> <td> Int32 </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> 
<td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> descrption </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> descrption </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> descrption </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> descrption </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td>
</tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> descrption </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> descrption </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td>
</tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td>
<td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [5] </td> <td>
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> 
<td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td>
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [17] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String
</td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [18] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [19] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [20] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td>
className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [21] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [22] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [23] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [24] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object
</td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [25] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [26] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [27] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td>
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [28] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [29] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [30] </td> <td>
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [31] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [32] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [4]
</td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [33] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> [34] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [35] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td>
<td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> [36] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [37] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [38] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [39] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr>
<tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [40] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [41] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [42] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [43] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> 
String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [44] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [45] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [46] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [47] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td>
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [48] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [49] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [50] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [51] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td>
<td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **CTA Acquisition, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> 
<td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> datePickerPopup </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> required </td> <td>
Boolean </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td>
</tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr>
<td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td>
String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> 
<tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr>
<tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [15] </td>
<td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [17] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object
</td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [18] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [19] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [20] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [21] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr>
<tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [22] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [23] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className
</td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [24] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [25] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [26] </td> <td>
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [27] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **CTA Report, “types” structure** <table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> 
order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr>
<td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr>
<td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> depends </td> <td> String </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td>
template </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [10]
</td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> label </th> <th> String </th> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> 
Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> 
<tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> 
<td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> 
<td> fieldGroup </td> <td> Array </td> </tr> </table> <table> <tr> <th> [0] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions 
</td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String 
</td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> </table> <table> <tr> <th> templateOptions </th> <th> Object </th> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> 
<td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> 
String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> hideExpression </th> <th> String </th> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> 
</tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> label </th> <th> String </th> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> 
<td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String 
</td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> 
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> id </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> 
<td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> 
<td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> 
<td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> 
</tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> </table> <table> <tr> <th> [3] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> 
label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> 
Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> </table> <table> <tr> <th> hideExpression </th> <th> String </th> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> 
<tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label 
</td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> 
</tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> label </th> <th> String </th> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type 
</td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> [5] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup 
</td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> 
key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> 
Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> [2] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> 
<td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> </table> <table> <tr> <th> [3] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> 
<td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> 
</tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type 
</td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> id </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> 
String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> dictionary </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> 
String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table>

**Lab-On-Chip page, “types” structure**

<table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> description </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr>
<td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table>

**Quality Control structure, “types” structure**

<table> <tr> <th> _id </th> <th> String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> </table> <table> <tr> <th> controllerType </th> <th> String </th> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions
</td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> 
<td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> label </th> <th> String </th> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [9] </td> 
<td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> </table> <table> <tr> <th> notChecked </th> <th> Object </th> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> 
Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> placeholder </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> </table> <table> <tr> <th> modelOptions </th> <th> Object </th> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String 
</td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> modelOptions </td> <td> Object </td> </tr> <tr> <td> allowInvalid </td> <td> Boolean </td> </tr> <tr> <td> validators </td> <td> Object </td> </tr> <tr> <td> notChecked </td> <td> Object </td> </tr> <tr> <td> expression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> required </td> <td> Boolean </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Clinical Decision page, “types” structure** <table> <tr> <th> **Clinical Decision** </th> </tr> <tr> <td> _id </td> <td> String </td> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> 
definition </td> <td> Object </td> </tr> <tr> <td> fields </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> </table> <table> <tr> <th> [2] </th> <th> Object </th> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String 
</td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> key </th> <th> String </th> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> 
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> options </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> name </td> <td> String </td> </tr> <tr> <td> value </td> <td> String </td> </tr> <tr> <td> [6] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> template </td> <td> String </td> </tr> <tr> <td> [7] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> 
templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> </table> <table> <tr> <th> [1] </th> <th> Object </th> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [8] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [9] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> 
label </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> </table> <table> <tr> <th> label </th> <th> String </th> </tr> <tr> <td> [3] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [10] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [11] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className 
</td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [12] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> </table> <table> <tr> <th> type </th> <th> String </th> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [13] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [14] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> 
<td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [15] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> </table> <table> <tr> <th> className </th> <th> String </th> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [16] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> 
</tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [17] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [18] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> </table> <table> <tr> <th> fieldGroup </th> <th> Array </th> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [19] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> 
</tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [20] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> fieldGroup </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> Object </td> </tr> <tr> <td> className </td> <td> String </td> </tr> <tr> <td> key </td> <td> String </td> </tr> <tr> <td> type </td> <td> String </td> </tr> <tr> <td> hideExpression </td> <td> String </td> </tr> <tr> <td> templateOptions </td> <td> Object </td> </tr> <tr> <td> label </td> <td> String </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> [3] </td> <td> String </td> </tr> <tr> <td> [4] </td> <td> String </td> </tr> <tr> <td> [5] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> [1] </td> <td> String </td> </tr> <tr> <td> [2] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **Dummy page, “types” structure** <table> <tr> <th> _id </th> <th> 
String </th> </tr> <tr> <td> _class </td> <td> String </td> </tr> <tr> <td> display </td> <td> String </td> </tr> <tr> <td> shortName </td> <td> String </td> </tr> <tr> <td> order </td> <td> Int32 </td> </tr> <tr> <td> controllerType </td> <td> String </td> </tr> <tr> <td> definition </td> <td> Object </td> </tr> <tr> <td> canReadRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> canWriteRoles </td> <td> Array </td> </tr> <tr> <td> [0] </td> <td> String </td> </tr> <tr> <td> versionNumber </td> <td> Int64 </td> </tr> <tr> <td> createdAt </td> <td> Date </td> </tr> <tr> <td> lastModified </td> <td> Date </td> </tr> <tr> <td> createBy </td> <td> String </td> </tr> <tr> <td> lastModifiedBy </td> <td> String </td> </tr> </table> **_Data processing via Apache Zeppelin:_ ** Data exported from CRFA is processed and analysed through Apache Zeppelin, a collaborative data analytics and visualization tool for distributed, general-purpose data processing systems. In CRFA we use it to perform a fine-grained data export. We also reported the LUMC CTA score function definition and the stenosis function definition, calculated by CRFA, for data export and further processing. The following formulas refer to data derived by CRFA, whose values are functions of the stored data.

**__Clinical Characteristics_ Page _ **

* Body Mass Index (BMI): function of height and weight, computed as weight divided by the square of height.
* BMI Class: function of BMI — underweight for BMI below 18.50, normal for 18.50–24.99, overweight for 25.00–29.99, obese for BMI of 30.00 and above.

**__CTA Report CL Page_ ** _

* TCScore: function of TCS_HU — class 1 for values of 130–199, class 2 for 200–299, class 3 for 300–399, class 4 for 400 and above.
* RCAScore: function of RCA_HU, using the same 130/200/300/400 class thresholds.
* LMScore: function of LM_HU, using the same thresholds.
* LADScore: function of LAD_HU, using the same thresholds.
* LCXScore: function of LCX_HU, using the same thresholds.
* Segment Weight Factor (seg_wf): function of System Dominance (sd) — 1 if the segment belongs to the dominant system, 0 otherwise.
* Stenosis Weight Factor 1 (ste_wf_1): function of % of Lumen Diameter 1 (prox_lumen_1) — 1 for stenoses below 50% (the <30% and 30–50% ranges), 1.4 for stenoses of 50% and above (the 50–70%, 70–90% and >90% ranges), 1.2 in the remaining special case, 0 otherwise.
* Stenosis Weight Factor 2 (ste_wf_2): function of % of Lumen Diameter 2 (prox_lumen_2), defined analogously to ste_wf_1.
* Stenosis Weight Factor (ste_wf): function of _ste_wf_1_ and _ste_wf_2_ — ste_wf = max(ste_wf_1, ste_wf_2).
* Plaque Weight Factor 1 (pla_wf_1): function of Plaque type 1 (prox_type_1) — 1.2 for type 1, 1.5/1.6 for types 2 and 3, 1.7 for type 4, 0 otherwise.
* Plaque Weight Factor 2 (pla_wf_2): function of Plaque type 2 (prox_type_2), defined analogously to pla_wf_1.
* Plaque Weight Factor (pla_wf): function of _pla_wf_1_ and _pla_wf_2_ — pla_wf = max(pla_wf_1, pla_wf_2).
* Segment Score (seg_score): function of seg_wf, ste_wf and pla_wf — seg_score = seg_wf × ste_wf × pla_wf.
* CAD Score (cad_score): function of _seg_score_ — cad_score = Σ seg_score over all segments of a patient.

The following parameters have been produced through Zeppelin and used for data analysis:

* Degree of Max Stenosis (max_stenosis): function of % of Lumen Diameter. It is the maximum % of lumen diameter among all plaques for each patient.
* Count of stents (count_stent): function of stent. It is the total number of stents for each patient.
* Count of plaques (count_plaques): function of plaques. It is the total number of plaques for each patient.
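The derived-score pipeline (calcium class, stenosis and plaque weight factors, segment score, CAD score) can be sketched in Python. This is a minimal illustration under stated assumptions, not the CRFA implementation: the segment field names (`in_dominance`, `prox_lumen_1`, `prox_type_1`, …) are hypothetical, the 1.5/1.6 split between plaque types 2 and 3 is an assumption, and the ambiguous 1.2 stenosis branch is omitted.

```python
# Minimal sketch of the derived-score pipeline; NOT the CRFA implementation.
# Thresholds follow the DMP text; field names and the 2/3 plaque-type split
# are assumptions, and the 1.2 stenosis branch is omitted.

def calcium_class(hu_score):
    """1-4 class used for TCScore/RCAScore/LMScore/LADScore/LCXScore."""
    if hu_score >= 400:
        return 4
    if hu_score >= 300:
        return 3
    if hu_score >= 200:
        return 2
    if hu_score >= 130:
        return 1
    return 0  # below the 130 threshold (assumption)

def stenosis_wf(pct_lumen):
    """Stenosis weight factor from % lumen diameter reduction."""
    if pct_lumen is None:
        return 0.0  # no stenosis recorded
    return 1.4 if pct_lumen >= 50 else 1.0

def plaque_wf(plaque_type):
    """Plaque weight factor from plaque type 1-4 (type 2/3 split assumed)."""
    return {1: 1.2, 2: 1.5, 3: 1.6, 4: 1.7}.get(plaque_type, 0.0)

def segment_score(segment):
    """seg_score = seg_wf * ste_wf * pla_wf, taking the max over the two
    proximal stenosis/plaque measurements as in the definitions above."""
    seg_wf = 1.0 if segment["in_dominance"] else 0.0
    ste_wf = max(stenosis_wf(segment.get("prox_lumen_1")),
                 stenosis_wf(segment.get("prox_lumen_2")))
    pla_wf = max(plaque_wf(segment.get("prox_type_1", 0)),
                 plaque_wf(segment.get("prox_type_2", 0)))
    return seg_wf * ste_wf * pla_wf

def cad_score(segments):
    """cad_score = sum of seg_score over all segments of one patient."""
    return sum(segment_score(s) for s in segments)

patient = [
    {"in_dominance": True, "prox_lumen_1": 60, "prox_type_1": 4},
    {"in_dominance": True, "prox_lumen_1": 30, "prox_type_1": 1},
    {"in_dominance": False, "prox_lumen_1": 90, "prox_type_1": 4},
]
# Segment scores: 1*1.4*1.7 = 2.38, 1*1.0*1.2 = 1.2, 0 (not dominant).
print(round(cad_score(patient), 2))  # 3.58
```

The same structure extends naturally to the Zeppelin aggregates: `max_stenosis` is `max()` over the per-plaque lumen percentages, and `count_stent`/`count_plaques` are simple counts per patient.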
1095_SMARTool_689068.md
# 1\. Executive Summary

This deliverable provides the first version of the SMARTool Data Management Plan (DMP) and describes how the data collected or created within the project will be handled, which standards and methodology are followed for collection and processing, whether and how these data will be accessible for re-use, and whether they will be shared [1]. At its present stage, the SMARTool project is foreseen to collect and analyse a series of datasets, ranging from user requirements to clinical and other patient-related data. Specifically, datasets are collected in two ways: (i) the collection of users' personal information, demographics, clinical, imaging, molecular, omics and other health-related data; (ii) the data produced through the application of the multivariate analysis, the data mining and the non-imaging classification algorithm for CAD stratification, the multiscale and multilevel site-specific models for plaque progression and the non-invasive FFR, the pharmacological therapy modulation algorithms (EVINCI database) using data mining techniques, and the virtual stent deployment approach. The former datasets will assist in the development of the SMARTool platform, while the latter are foreseen to be the main output of the SMARTool platform. This deliverable consolidates the feedback provided by the SMARTool partners on the datasets they contribute to. More specifically, the deliverable was circulated to the project partners responsible for the different tasks, who defined the dataset descriptions according to the plans for data collection and analysis as well as the methods and processes followed to ensure adherence to the ethics requirements.
Given that the majority of SMARTool datasets involve data collected from human participants, the produced and processed data are handled with caution, observing all the ethical and privacy requirements that such datasets entail. In addition, data are not shared or made accessible if there is a risk of compromising the participants’ privacy. This is the initial version of the SMARTool DMP, and the datasets described reflect the data collected and processed until M18. These data will be enriched with further collected and processed data; therefore, as the project evolves, the content of the datasets is expected to be continuously updated. However, the main principles, as well as the processes of data collection, processing and preservation, are expected to remain as described in the current document.

The deliverable is composed of different chapters and sub-sections. **Chapter 2** presents: (i) an overview of the SMARTool project, its overall vision and objectives, (ii) the main scope of this deliverable including the objectives of the DMP, (iii) the steps and actions involved in the data management cycle and (iv) the background of the SMARTool DMP. **Chapter 3** presents: (i) the principle of SMARTool FAIR data management, (ii) the main categories of the project data, (iii) the data access and storage processes, (iv) the available databases accompanied with the level roles and permissions, (v) a detailed description of the datasets collected and processed per WP, the involved partners, the standards and metadata, the data exploitation and sharing, and the archiving and preservation (including storage and backup) processes. **Chapter 4** presents the specific methods and tools for ensuring data security, including data anonymization, encryption, storage, backups, etc.
The chapter also refers to the ethical considerations related to the protection of the enrolled patients, the EU legislation, Ethics documents, experts’ opinions and Ethical Committees that are taken into consideration during the lifecycle of the SMARTool project.

# 2\. Introduction

A new element that has been included in the Horizon 2020 projects is the DMP, which relates to the data generated in the framework of the project and how this data will be made accessible [2]. D4.1 – Data Management Plan v1 is the first version of the DMP of the SMARTool project, which has received funding from the European Union’s Horizon 2020 Programme under Grant Agreement number 689068. The DMP is not a fixed process; it is an activity that will evolve during the lifespan of the SMARTool project. This first version of the DMP provides an overview of the produced datasets and the specific conditions related to them.

## About the project

Coronary artery disease (CAD) is a chronic disease with a high prevalence and epidemic proportions in the middle-aged and elderly population, accounting for about 50% of all deaths [3], [4]. Medical therapy, lifestyle and diet suggestions and clinical risk factors, including dyslipidemia, hypertension and diabetes, are among the key factors in patient-specific CAD management. Currently, an integrated site-specific and patient-specific comprehensive predictive model of plaque progression is still lacking. Although several algorithms have been developed for primary and secondary prevention of CAD, a unified platform that incorporates all local and systemic risk factors into a personalized clinical decision support tool for stratification, diagnosis, prediction and treatment is not available. SMARTool aims at the development of a Cloud platform for enabling clinical decision support for prevention of CAD and of associated major adverse cardiovascular events (MACE).
This is achieved through the standardization and integration of health data from heterogeneous sources and existing patient/artery-specific multiscale and multilevel predictive models. Specifically, the SMARTool models rely on the extension of the already available multiscale and multilevel ARTreat models for coronary plaque assessment and progression over time through the utilization of non-invasive imaging by coronary computed tomography angiography (CCTA), and are extended with functional site-specific assessment (hemodynamically significant plaques by non-invasive Fractional Flow Reserve (FFR) computation) and additional heterogeneous patient-specific non-imaging data, such as patient history, lifestyle, exposome and biohumoral data, phenotyping and genotyping. SMARTool allows:

* **Patient-specific CAD stratification.** Specifically, the already available clinical stratification models (e.g. Framingham, Genders, GES - Gene Expression Score) are complemented by patient genotyping and phenotyping (cellular/molecular markers) and examined on retrospective and prospective clinical data (EVINCI project population) towards stratifying the patients into the following categories: non-obstructive CAD, obstructive CAD and no CAD.
* **Site-specific plaque progression prediction.** Existing multiscale and multilevel models for plaque progression prediction (ARTreat project) are updated and refined through the incorporation of additional genotyping and phenotyping features and examined on retrospective/prospective non-invasive CCTA imaging data plus non-imaging patient-specific information (follow-up - EVINCI population).
* **Patient-specific CAD diagnosis and CAD-related CHD treatment.** Personalised and patient-specific therapeutic management (e.g. lifestyle changes, standard or high-intensity medical therapy) is provided.
Additionally, the interventional cardiologists are able to select the optimal stent type(s) and site(s) for appropriate stent deployment through the utilization of the SMARTool virtual angioplasty tool. Moreover, the final Clinical Decision Support System (CDSS) includes a microfluidic lab-on-chip device for blood analysis of cellular/molecular inflammatory markers. The overall platform will be assessed in the participating clinical sites.

### _**2.1 Purpose of the SMARTool Data Management Plan**_

The DMP is a cornerstone for ensuring good data management. The DMP addresses the data management life cycle for the data that will be collected, processed and created during the SMARTool project. More specifically, the DMP concerns the following activities:

* research data management during/after the project end
* description of the type of the collected, processed and generated data
* definition of standards that are used for data storage, safety and security
* description of the data which are shared/enable open access
* process of data storage
* assistance in streamlining the whole research process.

The DMP provides _a priori_ the required data processes and facilities to store data. The DMP: (i) includes, in combination with ethical issues (section 4.2), the availability of research data, (ii) defines the measures and processes for the data to be properly anonymized and their privacy ensured, and (iii) concerns the strategy that is followed for the open data, which does not violate the conditions of the interlinked Research and Innovation projects. Figure 1 presents the steps and actions involved in the SMARTool data management cycle.

_**Figure 1:** Overview of the SMARTool data management cycle._

As far as research data is concerned, SMARTool provides access to this data through the CRFA platform.
The imaging data is stored in the cloud-based 3DnetMedical.com DICOM-compliant database in B3D’s UK-based, ISO27001-accredited datacenter, providing security, redundancy, reliability and scalability. DICOM studies can be automatically uploaded from PACS systems through the 3Dnet Gateway using 2048-bit encryption, or can be uploaded manually by the user using SSL encryption (Figure 2). Non-imaging clinical data is securely stored in a NoSQL MongoDB database, accessible only via a locally installed application server. Data redundancy, reliability and scalability are provided by the native MongoDB replica-set feature. MongoDB cluster members communicate (for continuous synchronization) via encrypted TLS/SSL channels. Data is uploaded via an https (secure http) application interface and can be accessed from within the project infrastructure via a secure RESTful API (OAuth 2.0, OpenID session token). OAuth2 authentication and session token issuing, renewal and revocation are provided by a WSO2 Identity Server deployed in the project infrastructure. Clinical documents are acquired and stored in a Document Repository according to the IHE XDS.b (Cross-Enterprise Document Sharing) and XDS-I.b (Cross-Enterprise Document Sharing for Imaging) standards [5]. Data security and privacy for clinical documents is provided by implementing the IHE ATNA (Audit Trail and Node Authentication) [6] and IHE XUA (Cross-Enterprise User Assertion) profiles [7].

_**Figure 2:** Uploading DICOM studies to 3DnetMedical.com._

The purposes of the clinical data collection/generation in relation to the objectives of the project are:

* To optimise the learning stage of SMARTool in order to extract and select discriminative markers (from any data source, ranging from history, genetics, circulating molecules and coronary anatomy by CT scan) of CAD severity/progression/association with MACE.
* To provide a "smart" data storage to the final CDSS, which is the main outcome of the project, adequate for: (i) the external validation of the selected panel of markers by external cohorts, and (ii) CDSS application and exploitation in clinics.

### _**2.2 Background of the SMARTool Data Management Plan**_

The SMARTool DMP will be in accordance with the following articles of the Grant Agreement:

* **Article 36 - Confidentiality.** During the project duration and for four years after the period set out in Article 3, the Consortium will keep confidential any data, documents or other relevant material that is defined as confidential.
* **Article 39.2 - Processing of personal data.** All personal data are processed in compliance with the applicable EU and national law on data protection. The Consortium provides adequate information and explanation to the personnel whose personal data are collected and processed and provides them with the specific privacy statement.

# 3\. SMARTool Data Management Plan

## 3.1. FAIR Data

The SMARTool Consortium will be end-responsible for the DMP and will make the research and clinical data findable, accessible, interoperable and reusable (FAIR), towards ensuring appropriate data management [8]. Research data management is not an objective in itself, but rather the way to achieve knowledge discovery and innovation, and to accomplish data extension, integration and reuse. More specifically, the FAIR principles promote the ability of machines to automatically discover and use the data, and support its reuse by individuals (Figure 3).

_**Figure 3:** SMARTool FAIR data._

To accomplish this, the data are available through well-defined APIs and the CRFA web-based user interface. The software tools developed within the project, which are used to create and process the data, could be made available under the open-source Apache 2.0 license, whenever possible.
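To illustrate the discoverability, identifier, keyword and versioning practices that the FAIR approach entails, the following minimal sketch builds a dataset metadata record. The field names and the `make_metadata_record` helper are illustrative assumptions, not the project's actual metadata schema.

```python
import json
import uuid

def make_metadata_record(name, keywords, version, access="Confidential"):
    """Build a minimal, FAIR-style metadata record for a dataset.

    Hypothetical schema for illustration only: the real SMARTool
    metadata documentation is to be produced in a later DMP update.
    """
    return {
        "identifier": str(uuid.uuid4()),  # unique, persistent identifier
        "name": name,                     # data naming convention
        "keywords": sorted(keywords),     # enables keyword search
        "version": version,               # clear versioning approach
        "accessLevel": access,            # PU / PR / CO (Section 3.3)
    }

record = make_metadata_record(
    "Dataset 2 - WP3 CAD stratification",
    ["CAD", "stratification", "data mining"],
    "1.0",
)
print(json.dumps(record, indent=2))
```

Such a record could accompany each dataset listed in Table 1 so that both humans and machines can locate and interpret it.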
To support the FAIR principles, the following practices are followed in the SMARTool platform:

* Data discoverability and metadata provision
* Unique identifiers that remain persistent for a long time
* Data naming conventions
* Keyword search
* A clear versioning approach
* Standards for metadata: documentation for future metadata will be created

The collected, processed and generated clinical and research data will be preserved and stored in a specific format so as to ensure long-term accessibility. To avoid file format obsolescence and the risk of losing useful information, specific actions are followed, such as selecting file formats with a high chance of being usable in the future. In addition, it is foreseen that the next update of this deliverable will address the following issues:

* Specify/update the data which will be openly available; in case of closed data a rationale will be provided
* Specify/update how the data and associated metadata, documentation and code will be stored
* Specify/update the access rights and restrictions

To support data interoperability, specific actions are followed. In order to acquire data from the legacy data sources, a specific HL7-compliant integration layer using clinical data semantics is designed [9]. The HL7 integration layer is written using HAPI (HL7 application programming interface). The IHE Cross-Enterprise Document Sharing (XDS) Integration Profile is adopted to allow the registration, distribution and access of patient electronic health data across health enterprises. In any single hospital, the HIS (Hospital Information System) provides the enterprise-specific patient ID as well as the historical demographic data. For patient medical records, the CDA Release 2 HL7 standard (Clinical Reports) is used according to the IHE Cardiology Technical Framework (Volume 1 (CARD TF-1): Integration Profiles [10] and Volume 2 (CARD TF-2): Transactions [11]).
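The HL7 integration layer described above is written in Java using HAPI; purely to give a flavour of the pipe-delimited HL7 v2 message structure it handles, the stand-alone Python sketch below parses a hypothetical admission (ADT) message. The sample message, its field values and the helper functions are invented for illustration and are not part of the project's integration layer.

```python
# Minimal illustration of HL7 v2 pipe-delimited parsing. Segments are
# separated by carriage returns, fields by '|', components by '^'.
SAMPLE_MSG = "\r".join([
    "MSH|^~\\&|HIS|HOSPITAL|SMARTOOL|PLATFORM|20170101||ADT^A01|123|P|2.5",
    "PID|1||PAT-0042^^^HIS||DOE^JOHN||19600101|M",
])

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [fields]}."""
    segments = {}
    for raw in message.split("\r"):
        fields = raw.split("|")
        segments[fields[0]] = fields
    return segments

def patient_id(segments: dict) -> str:
    """PID-3 carries the enterprise-specific patient identifier."""
    return segments["PID"][3].split("^")[0]

segs = parse_hl7(SAMPLE_MSG)
print(patient_id(segs))  # PAT-0042
```

In the real platform this role is played by HAPI's message model, which also validates segment grammar against the HL7 specification rather than naively splitting strings.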
For genomics data exchange, the project will adhere to HL7 IG CG_GENO, R1 Version 3 Genotype, Release 1 (January 2009), and HL7 IG LOINCGENVA, R1 Version 2 Implementation Guide: Clinical Genomics; Fully LOINC-Qualified Genetic Variation Model, Release 1. For medical images, the DICOM standard is adopted. For the refined models, standard markup language (ML) formats are used: ML for the multiscale and multilevel model, Predictive Model Markup Language (PMML) for the data mining and stratification models, and Portable Format for Analytics (PFA), which is an emerging standard for statistical models [12]. All the aforementioned standards and protocols ensure the required security in information exchange and anonymization, as well as interoperability.

To support data re-use, the following aspects will be taken into consideration:

* Specific data will be licensed to permit its possible re-use
* The period of data embargo will be defined for the data which is available for re-use
* The access to the data by third parties after the end of the project will be defined
* The data quality assurance processes will be defined
* The length of time for which the data is re-usable will be defined

## 3.2. Datasets to be gathered and processed

The data of the SMARTool project can be categorised in the following classes:

**Cat1.** Collected Data: Data that has not been subjected to quality assurance or control

**Cat2.** Validated Collected Data: Data that has been assessed in terms of completeness, correctness, integrity, credibility

**Cat3.** Analyzed Collected/Generated Data: Data which has been validated, analysed and processed

_**Table 1**: Overview of data sets._

<table> <tr> <th> **Dataset** </th> <th> **Related WP** </th> <th> **Brief description** </th> </tr> <tr> <td> Dataset 1 </td> <td> WP1, WP2, WP3 </td> <td> This dataset contains the SMARTool users' personal information, demographics, clinical, imaging, molecular, omics and other health related data.
</td> </tr> <tr> <td> Dataset 2 </td> <td> WP3 </td> <td> This dataset contains the CAD stratification which is extracted through the application of data mining techniques </td> </tr> <tr> <td> Dataset 3 </td> <td> WP4 </td> <td> This dataset contains the results of the multiscale and multilevel site specific models for plaque progression and the non-invasive FFR to be used in the prognostic and diagnostic CDSS </td> </tr> <tr> <td> Dataset 4 </td> <td> WP4 </td> <td> This dataset contains the results of the pharmacological therapy modulation algorithms (EVINCI database) using data mining techniques. In addition, data from the virtual stent deployment approach are also included. </td> </tr> <tr> <td> Dataset 5 </td> <td> WP5 </td> <td> This dataset contains information on users’ requirements, use cases, questionnaires, specifications and system architecture. </td> </tr> <tr> <td> Dataset 6 </td> <td> WP6 </td> <td> The dataset contains the results of the research performed in SMARTool project which are communicated through public deliverables, journal and conference presentations, as well as other dissemination channels (website, social media, etc). </td> </tr> <tr> <td> Dataset 7 </td> <td> WP7 </td> <td> This dataset contains information related to the project management and coordination. </td> </tr> </table>

## 3.3. Data Access Procedures

**Public (PU).** For the data that will be publicly available, the project web page [13] will provide a description of the dataset and will allow the user to download the relevant file.

**Protected Data (PR).** Data indicated as protected may be communicated outside the consortium, as long as the interested parties request access from the SMARTool consortium, after explaining and providing evidence on how this data will be utilised, for instance for research or commercial purposes.
**Confidential/Private (CO).** Data denoted as private/confidential will be stored in a specific space, namely the databases, to which only selected partners will have access. In order for other SMARTool partners to have access to these data, a proper written application must be submitted to the partner responsible for the data storage, accompanied by a justification of the need for access.

## 3.4. Data Storage

During the lifespan of the project, the data will be collected and systematically stored in a complex repository based on the following three components:

* Imaging data is stored in the cloud-based 3DnetMedical.com DICOM-compliant repository. 3DnetMedical.com conforms to the DICOM SOPs of the Storage Service Class at Level 2 (Full). 3DnetMedical.com offers Vendor Neutral Archive (VNA) functionalities from a UK-based, ISO27001-accredited datacentre, providing security, redundancy, reliability and scalability through onshore outsourcing. 3DnetMedical.com follows specific privacy- and security-conscious policies applicable to all of its information handling practices. The 3Dnet DICOM store is also employed for genomic data storage, following the DICOM standard.
* Structured clinical data (data acquired during the project for the trial population as well as data employed by the final platform) are stored in the MongoDB database;
* Clinical data acquired by means of HL7/IHE integrations with hospital sources are stored in an XDS/XDSi repository.

User data (such as usernames and application privileges) will be managed by the platform's Identity Server (WSO2IS) and stored in the LDAP server embedded in the Identity Server.

### **3.4.1. Databases**

**3.4.1.1 Level roles and permissions in databases**

To easily manage the permissions in the databases, several roles have been defined in the following three groups: Administrator, WP Leaders and Researchers, and Users (Table 2, Table 3, Table 4).
_**Table 2:** Level roles and permissions in the 3DnetMedical.com DICOM compliant Database. _ <table> <tr> <th> **Description of permissions** </th> <th> **Administrator** </th> <th> **WP leaders & Researchers ** </th> <th> **User** </th> </tr> <tr> <td> Manage user accounts </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage user roles and access </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage folders, worklists and gateways. </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Upload studies </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Delete or assign a study* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Visualisation and manipulation of imaging data from studies in accessible worklists and folders* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Download data* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Report a study* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Access to patient information* </td> <td> </td> <td> </td> <td> </td> </tr> </table> *Depending on the user role _**Table 3** : Level roles and permissions in the CRFA NoSQL MongoDB Database. _ <table> <tr> <th> **Description of permissions** </th> <th> **Administrator** </th> <th> **WP leaders & Researchers ** </th> <th> **User** </th> </tr> <tr> <td> Download anonymized non imaging clinical data </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Visualisation and editing of non-imaging data from studies </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Upload non imaging data for studies </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Delete and report a study* </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Access to patient information* </td> <td> </td> <td> </td> <td> </td> </tr> </table> _**Table 4:** Level roles and permissions in the WSO2IS embedded LDAP User Database. 
_ <table> <tr> <th> **Description of permissions** </th> <th> **Administrator** </th> <th> **WP leaders & Researchers ** </th> <th> **User** </th> </tr> <tr> <td> Manage user accounts </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage user roles and access </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Manage DB, Identity Server, Integration Bus and Application Servers </td> <td> </td> <td> </td> <td> </td> </tr> </table>

**3.4.1.2 Process**

3DnetMedical.com conforms to the DICOM SOPs of the Storage Service Class at Level 2 (Full). The DICOM Information Model is derived from the DICOM Model of the Real World, which identifies the relevant Real-World Objects and their relationships within the scope of the DICOM Standard. It provides a common framework to ensure consistency between the various Information Objects defined by the DICOM Standard.

Clinical documents will be made available to the production SMARTool platform by means of the IHE XDS.b (Cross-Enterprise Document Sharing) and XDS-I.b (Cross-Enterprise Document Sharing for Imaging) integration profiles. An XDS Affinity Domain will be defined for the platform, in order to ease clinical information sharing with the clinical organizations that will use the platform. The platform itself will act as a Document Consumer to query/retrieve documents containing data used to model the patient, and as a Document Producer for registering the documents generated by the platform.

Non-imaging data will be collected directly during or after a patient's visit via the platform's data entry application (CRFA). This data will be stored in the SMARTool MongoDB database for processing and kept for future encounters with the patient. In order to ease data entry activities, when non-imaging data is available in clinical documents stored in the XDS.b repository, the data entry application (CRFA) will pre-fill retrievable data in the corresponding form fields for user review.
Non-imaging data retention and clinical document retention will be subject to the clinical organization policy.

### **3.4.2. Datasets**

**3.4.2.1 Origin of WP1, WP2 and WP3 datasets**

_**Table 5:** Details of Dataset 1._

<table> <tr> <th> **Data identification: Dataset 1** </th> </tr> <tr> <td> **Description** This dataset contains the SMARTool users' personal information, demographics, clinical, imaging, molecular, omics and other health related data. More specifically, the following categories of data are also included:

* Blood tests/Biohumoral
* Imaging - CTA scan visual/quantitative analysis: plaque composition (calcified, mixed, non-calcified) and features, nominal categories
* Circulating soluble proteins: consolidated biomarkers (hsTN, hsCRP, BNP, ALT) and inflammatory markers (IL6, IL10, ICAM1, VCAM, e-selectin), values of blood concentration.
* Genetics: associated selected SNPs, selected RNA genes from bioinformatics analysis of DNA/RNA sequencing.
* Lipids: selected lipid species and concentrations in blood; names and values of plasma concentration from bioinformatics analysis.
* Circulating MN surface proteins to quantify Mon1, Mon2 and Mon3 subpopulations: relative and absolute concentrations in blood </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat1, Cat2 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CNR, LUMC, ALACRIS </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> B3D, EHIT </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP1 Task 1.1 Clinical and imaging (CCTA) data collection (e.g. EVINCI baseline data) Task 1.2 CCTA analysis under standardized criteria WP2 Task 2.1 Biohumoral data collection and analysis at baseline and at follow-up Task 2.2 Patient-specific phenotyping (cellular and molecular data) at baseline and at follow-up WP3 Task 3.1 Genomics and transcriptomics Task 3.2 Omics data analysis </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> RAW, DICOM, XML, JSON, BSON Clinical data will not contain any metadata. Volume: to be calculated at the end of patient data collection </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy/ Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> Internal validation of discriminative markers by bootstrapping techniques </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines. Written consent received.
</td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> 3DnetMedical.com DICOM compliant database NoSQL MongoDB Database Long term archival </td> </tr> </table>

**Methodologies for data collection.** For the follow-up CT acquisition, in order to achieve optimal quality for the 3D reconstruction of the desired arterial segments, the 128/256/320 MSCT scanners are used with an interval between slices not exceeding the threshold of 0.5 mm. To eliminate motion or other artifacts in the acquired images, the heart rate during the scan should be less than 65 beats/min, and optimally less than 60 beats/min. Nitroglycerin is administered prior to the CTA acquisition. Also, multiple cardiac phases are captured so that different phases can be chosen for different coronary segments, if needed. The reconstructed field of view is reduced to maximize the number of pixels devoted to depiction of the heart, usually a field of view of 200–250 mm for coronary CTA studies of native coronary arteries.

For the blood sampling (_samples to be sent to CNR_), the responsible clinical partners collect for each enrolled patient during venipuncture:

* N=8/5 ml EDTA tubes (VIOLET tube)
* N=2/5 ml Li-heparin tubes (GREEN tube)
* N=2/5 ml Clot activator tubes (RED tube)

During blood sampling, details are recorded such as:

* Samples collected (blood [which tubes, how many and in which order])
* Date and time of blood sampling
* Last time of food/drink consumption and smoking. Special attention for 12h fasting and 12h refraining from smoking.

The separation of blood aliquots is described below:

### For the VIOLET tubes: EDTA N=8 tubes

* N=6 tubes/8: EDTA plasma is separated by centrifugation at +4°C in a refrigerated centrifuge, 10 minutes at 1500xg. The plasma is separated from the cell pellet within 1 hour of the withdrawal, keeping the vial over this time in an ice bath.
Samples are subdivided into aliquots of about 1 ml each (18 aliquots), using small plastic vials (VIOLET CAPS) and keeping all the vials in an ice-bath during the whole procedure.

* N=2 tubes/8: EDTA whole-blood tubes are stored at -20°C or -80°C without centrifugation and NOT aliquoted.

### For the RED tubes: Clot Activator N=2 tubes

* Serum samples are separated by centrifugation for 10 minutes at 1500xg. Samples are subdivided into aliquots of about 1 ml each (6 aliquots), using small plastic vials (RED CAPS) and keeping all the vials in an ice-bath during the whole procedure.

### For the GREEN tubes: Li-HEPARIN N=2 tubes

* Heparin plasma is separated by centrifugation for 10 minutes at 1500xg. Samples should be subdivided into aliquots of about 1 ml each (6 aliquots), using small plastic vials (GREEN CAPS) and keeping all the vials in an ice-bath during the whole procedure.

### For the TEMPUS tubes: N=3 tubes

* TEMPUS tubes (obtained by CNR) are stored at -20°C without centrifugation and NOT aliquoted.

For the blood storage, upon arrival in the lab, samples are maintained at room temperature (18-22 °C) for 2-4 hours before transfer to the freezer, in an upright position. TEMPUS tubes can be left for up to 72 hours at room temperature prior to freezing. The TEMPUS tubes should then be kept upright and frozen at -20°C, with storage at -20°C or -80°C (if available) until shipping. The blood samples are packaged appropriately with sufficient dry ice to ensure that samples do not thaw, and packaged in a manner that prevents breakage or leakage. For mandatory safety reasons the blood samples are tested for:

* HIV
* HEPATITIS B
* HEPATITIS C

_**Table 6:** Details of Dataset 2.
_ <table> <tr> <th> **Data identification: Dataset 2** </th> </tr> <tr> <td> **Description:** This dataset contains the CAD stratification which is extracted through the application of data mining techniques </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat3 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> FORTH </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> EHIT </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP3 Task 3.4 Multivariate analysis, data mining and non-imaging classification algorithm for CAD stratification </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> RAW, JSON, ML, XML, PMML, PFA Volume: ~5GB </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy/ Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> No data sharing. Only the data mining and non-imaging classification algorithm will be publicly available in peer-reviewed journals and conferences </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> NoSQL MongoDB Database Long term archival </td> </tr> </table>

**3.4.2.2 Origin of WP4 datasets**

_**Table 7:** Details of Dataset 3.
_ <table> <tr> <th> **Data identification: Dataset 3** </th> </tr> <tr> <td> **Description:** This dataset contains the results of the multiscale and multilevel site specific models for plaque progression and the non-invasive FFR to be used in the prognostic and diagnostic CDSS </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat3 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> FORTH </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> EHIT </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 Task 4.1 Prognostic CDSS: Refinement of multiscale and multilevel site specific models for plaque progression Task 4.2 Diagnostic CDSS: Refinement of noninvasive FFR computation Task 4.4 Validation of prognostic CDSS (Multiscale-multilevel site specific plaque progression models) Task 4.5 Validation of diagnostic CDSS (3D artery reconstruction, plaque characterization and noninvasive FFR) </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> DICOM, RAW, XML, STL, IGES A data volume estimation will be later provided. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy / Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> No data sharing. Only the results from the plaque progression and the non-invasive FFR will be publicly available in peer-reviewed journals and conferences </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information?
</td> <td> Personal data protection according to the project ethical and legal guidelines </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> NoSQL MongoDB Database Long-term archival </td> </tr> </table> _**Table 8:** Details of Dataset 4. _ <table> <tr> <th> **Data identification: Dataset 4** </th> </tr> <tr> <td> **Description:** This dataset contains the results of the pharmacological therapy modulation algorithms (EVINCI database) using data mining techniques. In addition, data from the virtual stent deployment approach are also included. </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat3 </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> Clinical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> FINK, FORTH </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> B3D, EHIT, FINK </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 Task 4.3 Treatment CDSS: Refinement of medical therapy and virtual angioplasty decision support methods Task 4.6 Validation of treatment CDSS (Pharmacological therapy and virtual angioplasty tool) </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> DICOM, STL, DAT, UNV, LST, BMP, AVI A data volume estimation will be provided later. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data access policy / Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential; only some specific results will be public </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> Data sharing and publication in peer-reviewed journals </td> </tr> <tr> <td> Personal data protection: are they personal data? 
If so, have you gained (written) consent from data subjects to collect this information? </td> <td> Personal data protection according to the project ethical and legal guidelines </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> B3D, FINK </td> </tr> </table> **3.4.2.3 Origin of WP5 datasets** _**Table 9:** Details of Dataset 5. _ <table> <tr> <th> **Data Identification: Dataset 5** </th> </tr> <tr> <td> Description: This dataset contains information related to T5.1 regarding users’ requirements, use cases, questionnaires, functional specifications and system architecture for the SMARTool Platform. </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat 3 </td> </tr> <tr> <td> Partners activities and responsibilities </td> </tr> <tr> <td> Partner owner of the data </td> <td> Technical partners </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> B3D, FORTH, FINK, EHIT </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> B3D </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP5 Task 5.1 User requirements, functional specifications and architecture of the SMARTool platform </td> </tr> <tr> <td> Standards and metadata </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> RAW, XML, UML, PDF, CSV Volume: ~500 MB </td> </tr> <tr> <td> Data exploitation and sharing </td> </tr> <tr> <td> Data access policy / Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> No data sharing. </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> This dataset does not include personal data. 
Questionnaires are anonymized. </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> </tr> <tr> <td> Data storage (including backup): where? For how long? </td> <td> Data will be available through D5.1 and its annexes. As WP leader, B3D will store and back up the data. </td> </tr> </table> **3.4.2.4 Origin of WP6 datasets** Each SMARTool partner should disseminate its results in accordance with Article 29 of the Grant Agreement. The results of the research performed in the SMARTool project are communicated through public deliverables, journal and conference presentations, as well as other dissemination channels (website, social media, etc.). On the other hand, exploitation is mainly achieved through the main SMARTool exploitable products: * SMARTool 3DNet™-framework: a common, centralized shareable platform that provides better control, management and streamlining of the CAD clinical workflow based on CDSS. * SMARTool CDSS: an integrated system for supporting the clinicians in diagnosis, prognosis and treatment of CAD patients and subjects at risk. * SMARTool Point-Of-Care-Testing (POCT): a portable device for on-chip blood analysis and patient phenotyping exploitable in diagnostic CDSS. **3.4.2.5 Origin of WP7 datasets** _**Table 10:** Details of Dataset 7. 
_ <table> <tr> <th> **Data identification: Dataset 7.1** </th> </tr> <tr> <td> Description: This dataset contains information related to the project management and coordination </td> </tr> <tr> <td> Category (Cat1, Cat2, Cat3) </td> <td> Cat 1, Cat 2, Cat 3 (by the Commission) </td> </tr> <tr> <td> Partners activities and responsibilities </td> </tr> <tr> <td> Partner owner of the data </td> <td> Joint Ownership </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CNR </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> CNR </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP7 T7.1 Project Management </td> </tr> <tr> <td> Standards and metadata </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> No specific standards for this data. Files will be in Microsoft Office .doc, .docx, .xls, .xlsx, and .pdf formats. Volume: ~1GB </td> </tr> <tr> <td> Data exploitation and sharing </td> </tr> <tr> <td> Data access policy/ Dissemination level: confidential (only for members of the Consortium and the Commission Services) or Public </td> <td> Confidential: information shared only among the Consortium and between the Consortium and the Commission </td> </tr> <tr> <td> Data sharing, re-use, distribution, publication (How?) </td> <td> Effort (quantified in person months – PMs) and financial data of each partner are collected by CNR and compiled for monitoring purposes. Data is entered/uploaded in the EC system SyGMa in order to allow the Commission to oversee and assess the use of resources by the consortium. </td> </tr> <tr> <td> Personal data protection: are they personal data? If so, have you gained (written) consent from data subjects to collect this information? </td> <td> No personal data collected </td> </tr> <tr> <td> Archiving and preservation (including storage and backup) </td> </tr> <tr> <td> Data storage (including backup): where? For how long? 
</td> <td> The data is collected for internal use in the project. Standard daily offsite backup on CNR systems. Length: 10 years </td> </tr> </table>

# 4\. Data security and ethical considerations

## 4.1. Data security

The SMARTool project adopts specific methods and tools for ensuring adequate field access and extended contact and communication between the participants, who remain attentive to activities that raise ethical concerns. The following instructions will be used to ensure data security: * Perform anonymization of personal data * Encrypt data, if necessary by the local researchers * Store data in two separate locations to avoid data loss * Perform frequent backups (every 24 hours) * Ensure the coherence of the final dataset by marking up files in a structured way SMARTool databases for imaging and for non-imaging clinical data are hosted in B3D’s secure data centre, collocated in dedicated spaces at a top-tier datacentre accredited with ISO 27001 for Information Security Management, ISO 9001 for Quality Management and ISO 14001 for Environment Management. This facility provides carrier-level support, including: * Access control and physical security * Environmental controls * Power * Network * Fire detection and suppression * Network protection * Disaster Recovery * Security Monitoring B3D’s Information Governance aims to ensure: (i) the confidentiality of patient information, and (ii) that adherence to information governance is built into the design of the 3DnetMedical service and derived products provided to healthcare professionals. Information governance and security underpin 3DnetMedical and, as an organisation, B3D strives to achieve excellence in the services provided. B3D operates under ISO 13485:2012 (quality management system). B3D complies with ISO 62304 (medical software development) and incorporates ISO 14971 (risk analysis) into the product life cycle. 
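The anonymization instruction listed above can be made concrete with a minimal sketch. This is not the SMARTool implementation; the record fields, the keyed-hash approach and the in-source secret are hypothetical, shown only to illustrate how a direct identifier can be replaced by an irreversible pseudonym before storage.

```python
import hashlib
import hmac

# Hypothetical project secret; in practice this would live in a keystore,
# never in source code.
PSEUDONYM_KEY = b"smartool-demo-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible pseudonym.

    HMAC-SHA256 is used so the same patient always maps to the same
    pseudonym, but the mapping cannot be inverted without the key.
    """
    digest = hmac.new(PSEUDONYM_KEY, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Return a copy of a clinical record with direct identifiers removed."""
    direct_identifiers = {"name", "patient_id", "date_of_birth"}
    cleaned = {k: v for k, v in record.items() if k not in direct_identifiers}
    cleaned["pseudonym"] = pseudonymize(record["patient_id"])
    return cleaned

# Example with a hypothetical record:
record = {"patient_id": "PX-001", "name": "Jane Doe",
          "date_of_birth": "1960-01-01", "ldl_mg_dl": 131}
print(anonymize_record(record))
```

The keyed hash keeps pseudonyms stable across uploads (so longitudinal records still link) while the key itself stays with the data controller, in line with the two-step coding described for the project.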
All B3D products and operations conform to industry standards including CE Annex II of directive 93/42/EEC, DICOM, HL7 and IHE. SMARTool non-imaging clinical data and clinical documents will be encrypted at rest by means of a transparent filesystem encryption strategy with an AES-256 symmetric key cipher. Disk encryption keys will be managed in a keystore (vaultproject.io), and will themselves be encrypted with a master key which will be kept separate from the data and the databases. Network traffic carrying non-imaging data will be encrypted via TLS/SSL (HTTPS), with public key certificates issued and renewed by a root certificate authority. EHIT is an ISO 13485:2013 and ISO/IEC 27001 certified organization. This ensures consistent information security management and product development life cycle. Moreover, EHIT is certified for 21 IHE integration profiles.

## 4.2. Ethical issues

SMARTool will support the protection of the enrolled patients and make their rights more visible by adopting the national laws and EU legislation and by complying with the Directives described in the following section. **EU legislation and Ethics documents** The Directives, Ethics Documents, experts’ opinions and Ethical Committees are presented in the following table, accompanied by a short reference note on the general matter as well as the stated principles. All the European Union countries of SMARTool adopted the European Directives. Additionally, the partners of the SMARTool project follow the European Charter for Researchers. _**Table 11:** Ethical considerations in SMARTool project. _ <table> <tr> <th> **Directive 1995/46/EC of the European Parliament (October 1995):** </th> </tr> <tr> <td> Protection of individuals related to personal data processing and free movement. It concerns the protection of individuals’ privacy and the free movement of personal data within the European Union (EU). 
The Directive defines specific criteria for the collection and utilisation of personal data. Furthermore, the Directive stresses the obligation of each Member State to set up an independent national authority for monitoring the application of the Directive. </td> </tr> <tr> <td> **Charter of Fundamental Rights of the EU, which became legally binding on the EU institutions and the national governments in 2009 with the entry into force of the Treaty of Lisbon** </td> </tr> <tr> <td> The Charter includes the rights and freedoms in terms of Dignity, Freedoms, Equality, Solidarity, Citizens' Rights, and Justice. </td> </tr> <tr> <td> **Opinion of the European Group on Ethics in Science and New Technologies No. 13 (July 1999)** </td> </tr> <tr> <td> Protection of recognisable personal health data. </td> </tr> <tr> <td> **Opinion of the European Group on Ethics in Science and New Technologies No. 26 (February 2012)** </td> </tr> <tr> <td> Ethics of information and communication technologies. </td> </tr> <tr> <td> **Opinion of the European Group on Ethics in Science and New Technologies No. 29 (13 October 2015)** </td> </tr> <tr> <td> Examination of the principal health technologies and definition of a set of recommendations for EU- and national-level policymakers, industry and other stakeholders, towards maximising the benefits and minimising the issues associated with new health technologies and citizen participation in health policy, research and practice </td> </tr> <tr> <td> **Directive 2001/20/EC of the European Parliament (April 2001)** </td> </tr> <tr> <td> Performance of clinical trials and medicine under good clinical practice. 
</td> </tr> <tr> <td> **Directive 2005/28/EC, or Good Clinical Practice Directive, of the European Parliament and of the Council (8 April 2005)** </td> </tr> <tr> <td> Presents the principles and detailed guidelines for good clinical practice with regard to investigational medicinal products for human use, as well as the requirements for authorisation of the manufacturing of such products. </td> </tr> <tr> <td> **Universal Declaration on the human genome and human rights adopted by UNESCO (1997)** </td> </tr> <tr> <td> Refers to national and regional legislation on medicine, privacy and genetic research. </td> </tr> <tr> <td> **Clinical Trials Regulation (CTR) EU No 536/2014** </td> </tr> <tr> <td> Ensures that the rules for conducting clinical trials are the same throughout the EU. It is imperative to ensure that all Member States, in authorising and supervising the conduct of a clinical trial, are based on identical rules. </td> </tr> <tr> <td> **Italy** </td> <td> A number of ministerial decrees cover this area. A key one is Legislative Decree no. 211 of June 24, 2003 “Transposition of Directive 2001/20/EC relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for clinical use”. </td> </tr> <tr> <td> Law n.675 of 31 December 1996 Tutela delle persone e di altri soggetti rispetto al trattamento dei dati personali (published on the G.U. n.5 of 8 January 1996); </td> </tr> <tr> <td> C. M. n. 6 of 2 September 2002, Attività dei comitati etici istituiti ai sensi del decreto ministeriale 18 marzo 1998. (published on the G.U. n. 
214 of 12 September 2002) </td> </tr> <tr> <td> **Finland** </td> <td> Medical Research Act (L 488/1999); </td> </tr> <tr> <td> Act of the Medical Use of Human Organs and Tissues (L 101/2001), </td> </tr> <tr> <td> Act on the Status and Rights of Patients (L 785/1992), </td> </tr> <tr> <td> Personal Data Act (L 523/1999); </td> </tr> <tr> <td> Good Clinical Practice (GCP) guidelines in accordance with the International </td> </tr> <tr> <td> Conference on Harmonisation (www.ich.org) </td> </tr> <tr> <td> **Switzerland** </td> <td> Loi fédérale sur les médicaments et les dispositifs médicaux 812.21 du 15 décembre 2000 (Chapitre 4 Section 2 Essais cliniques, Section 4 Obligation de garder le secret et la communication de données) </td> </tr> <tr> <td> Loi fédérale sur l’analyse génétique humaine 810.12 du 8 octobre 2004 (Section 2; Art. 5 Consentement, Art. 7 Protection des données génétiques, Art. 20 Réutilisation du matériel biologique) </td> </tr> <tr> <td> Local: Approval by the IRB Bundesgesetz 812.21 über Arzneimittel und Medizinprodukte (Heilmittelgesetz, HMG) as of December 15, 2000 </td> </tr> <tr> <td> Verordnung 812.214.2 über klinische Versuche mit Heilmitteln (VKlin) as of October 17, 2001 </td> </tr> <tr> <td> Patientinnen- und Patientengesetz 813.13 as of April 5, 2004 </td> </tr> <tr> <td> Heilmittelverordnung (HMV) 812.1 as of May 21, 2008 </td> </tr> <tr> <td> **UK** </td> <td> Integrated Research Application Systems (IRAS) </td> </tr> <tr> <td> National Research Ethics Service (NRES) </td> </tr> <tr> <td> Department of Health’s Research Governance Framework for Health and Social Care (2nd Edition, 2005) </td> </tr> <tr> <td> **France** </td> <td> Public Health Code (articles L. 
1121-1 et seq.). </td> </tr> <tr> <td> The bioethics law of 2004 creating the French Biomedicine Agency </td> </tr> <tr> <td> The Advisory Committee on the Treatment of Research Information in the Health </td> </tr> <tr> <td> Field was created by law n°94-548 of 1 July 1994 </td> </tr> <tr> <td> **Germany** </td> <td> Das Deutsche Referenzzentrum für Ethik in den Biowissenschaften - DRZE </td> </tr> <tr> <td> Deutscher Ethikrat </td> </tr> <tr> <td> Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag - TAB </td> </tr> <tr> <td> **Spain** </td> <td> Comités de Ética en Investigación Clínica, CEIC </td> </tr> <tr> <td> Royal Decree 561/1993 </td> </tr> <tr> <td> Royal Decree 223/2004 </td> </tr> <tr> <td> Law of Biomedical Research (2007) </td> </tr> <tr> <td> Asociación Nacional de Comités de Ética de la Investigación, ANCEI </td> </tr> <tr> <td> **Poland** </td> <td> First Polish Code of Ethics (1977) </td> </tr> <tr> <td> Extraordinary National Assembly of Delegates of the Polish Society of Medicine in Szczecin on 22 June 1984 </td> </tr> <tr> <td> Code of Medical Ethics (1991) </td> </tr> <tr> <td> First Polish Research Ethics Committee (1979) </td> </tr> <tr> <td> Bioethics Committee at the Ministry of Health </td> </tr> <tr> <td> Zbiór zasad i wytycznych pt. "Dobre obyczaje w nauce" (collection of principles and guidelines "Good manners in science") </td> </tr> <tr> <td> April 7, 2005 Order of the Minister of Health concerning the nature and extent of inspection of clinical trials. Legislation Journal of the Republic of Poland (2005) 69: pos. 623 </td> </tr> <tr> <td> January 3, 2007 Order of the Minister of Health concerning the application form for authorization of clinical trials, payments for authorization and the final report of the clinical trial. Legislation Journal of the Republic of Poland (2010) 222: pos. 1453 </td> </tr> <tr> <td> February 11, 2011 Order of the Minister of Health concerning requirements related to the basic documentation of clinical trials. 
Legislation Journal of the Republic of Poland (2011) 40: pos. 210 </td> </tr> <tr> <td> May 12, 2012 Order of the Minister of Health concerning Good Clinical Practice. Legislation Journal of the Republic of Poland (2012) 0: pos. 489 </td> </tr> <tr> <td> **Netherlands** </td> <td> Personal Data Protection Act of 6 July 2000 (Wet bescherming persoonsgegevens) </td> </tr> </table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1096_SIM4NEXUS_689150.md
Executive Summary

This document presents the Data Management Plan (DMP) on open access data handling (see Box 1) by SIM4NEXUS. The aim is to consider the many aspects of data management, data and metadata generation, data preservation, maintenance and analysis, whilst ensuring that data is well managed at present and prepared for preservation in the future. This Data Management Plan is compiled according to the _Guidelines on FAIR Data Management in H2020_ 1 , and the Guidelines to the Rules on the _Open Access to Scientific Publications and Open Data Access to Research Data in H2020_ 2 . Thus, the sections below present the lifecycle, responsibilities and review processes and management policies of research data produced by SIM4NEXUS. The DMP reflects the status of discussion within the Consortium on data to be produced. It is a _Living Document_ with iterations (M12, M30 and M48) as SIM4NEXUS evolves. An updated version of this document will be delivered together with each reporting period, and whenever significant changes related to the DMP occur. This document is the update of the second release of the DMP. In the future, it will be delivered together with the progress reports as it provides an overview of available research data, access and the data management terms of use at this stage.

<table> <tr> <th> Box 1: Open Access </th> </tr> <tr> <td> Open access (OA) refers to the practice of providing online access to scientific information that is free of charge to the end-user and reusable. 'Scientific' refers to all academic disciplines. In the context of research & innovation, 'scientific information' can mean: (1) peer-reviewed scientific research articles (published in scholarly journals) or (2) research data (data underlying publications, curated and raw data) </td> </tr> </table>
For SIM4NEXUS, the DMP is defined as “the development, execution and supervision of plans, policies, programmes and practices that control, protect, deliver and enhance the value of data and information assets” obtained. In this regard, at the start of the project, the following data management processes and procedures are established: * Data governance, such as standards management and guidelines * Data architecture, analysis, and design including data modelling * Data maintenance, administration, and data mapping across building blocks and solution modules * Data security management including data access, archiving, privacy, and security * Data quality management including query management, data integrity, data quality, and quality assurance * Reference and master data management including data integration, external data transfer, master data management, reference data * Document, record, and content management * Metadata management, i.e., metadata definition, discovery, publishing, metrics, and standardization. Readers are the project members and research institutions using the data collected and produced during the project period.

Changes with respect to the DoA

No changes with respect to the DoA.

Dissemination and uptake

The deliverable is publicly available, based on the participation of SIM4NEXUS in the Pilot on Open Research Data in Horizon 2020 3 . Special attention will be paid to how personal data will be properly handled alongside other important data and/or scientific information.

Short Summary of results (<250 words)

As SIM4NEXUS participates in the Pilot on Open Research Data in Horizon 2020, it is required to submit the Data Management Plan within the first 6 months of the project. This document aims to improve and maximise access to and re-use of research data generated by project actions. Participating in the Open Research Data Pilot does not necessarily mean opening up all research data. 
In a sense, the document determines and explains which of the research data generated and/or collected will be made open. Several iterations of this document will be released as the project evolves.

Evidence of accomplishment

The deliverable itself can act as the evidence of accomplishment. Also, communication (teleconferences, emails) between EURECAT, EPSILON and the project Coordinator (WUR) can serve as evidence.

1. Introduction

1.1 Scope

This document describes the SIM4NEXUS Data Management Plan (DMP, see Box 2) that corresponds to Deliverable D4.2 of the SIM4NEXUS Technical Annex. The DMP: * Provides a description of how the research data collected, processed, and generated will be handled during and after SIM4NEXUS. * Describes which standards and methodology for data collection and generation will be followed, and how data will be shared, curated and preserved. The document follows the template provided by the European Commission on DMP 4 . The DMP is a prerequisite for SIM4NEXUS as it participates in the Open Research Data Pilot 5 , thus the first version was delivered at an early stage of the project (Month 6). An updated version of this document has been, and will be, provided together with the first two progress reports (M12, M30) and whenever significant changes occur. At month 48 the final version will be provided as Appendix to deliverable D4.7 Data Management Report.

<table> <tr> <th> Box 2: Data Management Plan </th> </tr> <tr> <td> A Data Management Plan (DMP) is a _key element_ of good data management; it describes the data management life cycle for the data to be collected, processed, and generated by a Horizon 2020 project. As part of making research data findable, accessible, interoperable, and re-usable (FAIR), a DMP should include information on: (i) the handling of research data during and after the end of the project, (ii) what data will be collected, processed, and generated, (iii) which methodology and standards will be applied, (iv) whether data will be shared/made open access, and (v) how data will be curated and preserved (including after the end of the project). A DMP is required for all projects participating in the extended ORD pilot unless they opt out of the ORD pilot; however, projects that opt out are encouraged to submit a DMP on a voluntary basis. </td> </tr> </table>

1.2 Structure of the document

The DMP deliverable is organized as follows: * Section 1 is the introductory chapter, which provides the scope of the deliverable * Section 2 presents the key questions that the DMP addresses as tailored for SIM4NEXUS * Section 3 contains information on digital data sets generated or collected in SIM4NEXUS for each Work Package. It will be updated per reporting period until the end of the project * Section 4 contains a data summary for all Work Package data sets. It will be updated per reporting period until the end of the project * Section 5 contains information on FAIR data for SIM4NEXUS and will be updated as the project evolves * Section 6 addresses issues related to data security & ethical aspects * Section 7 answers FAIR data key questions related to all datasets produced or gathered.

2. Background on SIM4NEXUS DMP

1. General

It is a well-known phenomenon that data is increasing exponentially, while the use and re-use of data that derives new scientific findings is stable. This does not imply that data currently being unused is useless, as it can be of high value in future. A prerequisite for meaningful use, re-use or recombination of data is that it be well documented according to accepted and trusted standards. These standards form a key pillar of science because they enable the recognition of suitable data. To ensure this, agreements on standards, quality level and sharing practices are to be discussed. Strategies should be fixed to preserve and store data over a defined period to ensure its availability and reusability after the end of SIM4NEXUS. Based on the EU guidelines and because SIM4NEXUS utilizes data for various pilots, the H2020 Programme requires such projects to participate in the Open Research Data Pilot 6 (ORD Pilot).

<table> <tr> <th> Box 3: Research Data </th> </tr> <tr> <td> Research data refers to information, facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form. Users can normally access, mine, exploit, reproduce and disseminate openly accessible research data free of charge (Figure 1: Open Access and dissemination of scientific data (EC, 2017)). </td> </tr> </table>

Figure 1: Open Access and dissemination of scientific data (EC, 2017)

2. Why is a Data Management Plan needed?

SIM4NEXUS participates in the Open Access and the Open Research Data Pilot of the European Research Council (ERC). The DMP specifies the implementation of the pilot for: data generated and collected, standards in use, the workflow to make data accessible for use, reuse and verification by the community, and the definition of a strategy for curation and preservation of the data. 
Therefore, we refer to the SIM4NEXUS Grant Agreement (GA), Article 29.3 on “Open Access to research data”: _'Regarding the digital research data generated in the action (‘data’), the beneficiaries must: (a) deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce, and disseminate - free of charge for any user - the following: (i) the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible; (ii) other data, including associated metadata, as specified and within the deadlines laid down in the data management plan;_ _(b) provide information - via the repository - about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and - where possible – provide the tools and instruments themselves)._ The data management policy described in this document reflects the current state of consortium agreements 7 on data management. Collecting data sets on environmental systems at scale can be an expensive and time-consuming process. By making a large part of them available, this project will make a long-lasting and significant contribution to the research and industrial communities. All datasets will be released in open formats (e.g., JSON, XML) respecting available standards, with proper documentation supporting their use by other researchers. Since we will engage end-users in the development of the SIM4NEXUS platform, it is imperative to carefully address data management issues connected to the use of external data; whilst we will also deal with highly sensitive data, which may be confidential. In the development of the SIM4NEXUS platform we will explicitly deal with security issues from the technical perspective, and in the DMP, delivered in WP4, with security and privacy issues from the management perspective. 
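As an illustration of the open-format release described above, each published dataset could be accompanied by a small machine-readable metadata record. The sketch below is hypothetical: the field names loosely follow the headings of the H2020 DMP template (reference, description, standards, sharing, preservation) and are not a prescribed SIM4NEXUS schema.

```python
import json

# Hypothetical metadata record for one released dataset; field names are
# illustrative only, loosely mirroring the DMP template headings.
dataset_metadata = {
    "reference": "SIM4NEXUS-WP4-DS01",
    "description": "Example water-energy nexus indicators for a case study",
    "standards": {"format": "JSON", "encoding": "UTF-8"},
    "sharing": {"access": "open", "licence": "CC-BY-4.0"},
    "preservation": {"repository": "project server", "retention_years": 10},
}

# Serialise to an open, self-describing format for publication alongside
# the dataset itself.
serialised = json.dumps(dataset_metadata, indent=2, sort_keys=True)
print(serialised)

# A consumer can round-trip the record without loss.
assert json.loads(serialised) == dataset_metadata
```

Shipping such a record with every dataset is one simple way to meet the requirement that third parties can find, understand and re-use the data without contacting the consortium.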
The following categories of data are considered within the project: * personal status (age, gender, etc.), * socio-economic (city of residence, social status, marital status, and income category), * social network, and * domain related. Data will be stored in a project database developed by EPSILON and managed by the project coordinator. All data, and especially personal data, will be securely stored on a project server. The final server specifications and the system architecture may change during the execution of the project. However, a first definition made by EURECAT and DHI (Serious Game needs) can be checked in “D4.3 Game land systems requirements”. EPSILON is the responsible partner to provide the cloud server, which will be scalable when required in terms of processing power and storage space. SIM4NEXUS will use a cloud server solution for maximum performance (e.g. quick access for the user to the content, quick processing and communication with SIM4NEXUS services/modules). Project participants will have secured web access to the previously anonymized data, which will have been automatically checked for consistency, homogeneity and completeness, whilst manual audits will be performed as per the standard operating procedures previously defined. After project completion, and in case of no objection by project partners and users, anonymization is preserved (i.e., a user cannot be identified from their data) and data may be published in an Open Data portal (e.g., _http://open-data.europa.eu_ ) for future research, always consistent with exploitation and Intellectual Property Rights (IPR) requirements.

3. Who is responsible for the implementation of the DMP?

The responsible partner is EPSILON, co-Leader in WP4, led by EURECAT, though all WP Leaders and co-Leaders shall be involved in the compliance with the DMP. 
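The automated checks for consistency, homogeneity and completeness mentioned in the previous subsection could be implemented as simple per-record validation rules. The sketch below is illustrative only; the required fields, category values and age range are hypothetical, not taken from the SIM4NEXUS specifications.

```python
# Minimal sketch of automated completeness/consistency checks applied to
# incoming anonymized records before storage; fields and ranges are
# illustrative, not the project's actual schema.
REQUIRED_FIELDS = {"pseudonym", "age", "income_category"}
VALID_INCOME_CATEGORIES = {"low", "medium", "high"}

def validate_record(record: dict) -> list:
    """Return a list of problems found; an empty list means the record passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        problems.append(f"age out of range: {age}")
    income = record.get("income_category")
    if income is not None and income not in VALID_INCOME_CATEGORIES:
        problems.append(f"unknown income category: {income}")
    return problems

# Example usage with hypothetical records:
ok = {"pseudonym": "a1b2c3", "age": 42, "income_category": "medium"}
bad = {"pseudonym": "d4e5f6", "age": 230}
print(validate_record(ok))
print(validate_record(bad))
```

Records that fail such checks would be flagged for the manual audits foreseen in the standard operating procedures rather than silently stored.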
The partners agreed to deliver datasets and metadata produced or collected in SIM4NEXUS according to the signed GA (Article 29.3), which is in line with the rules described in the DMP. The Project Officer and the Scientific Officer are also central players in the implementation of the DMP and will track compliance with the rules agreed.

4. What kind of data will be affected by the DMP?

The main purpose of a DMP is to describe Research Data with the metadata attached to make them discoverable, accessible, assessable, usable beyond the original purpose and exchangeable between researchers. Thus, SIM4NEXUS focuses more on the production process and tools than on the production of research or observation data, and so the amount of Research Data which SIM4NEXUS intends to produce is limited, at least at this stage of the project. As already presented, the Open Research Data Pilot applies to two types of data: 1. “the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible.” 2. “other data, including associated metadata, as specified and within the deadlines laid down in the data management plan – that is, per the individual judgement by each project.” According to the “Guidelines on Data Management in Horizon 2020” (2015) the DMP describes the handling of _numerical datasets_ processed or collected during the SIM4NEXUS lifetime. _“Research data refers to information, in particular facts or numbers, collected to be examined and considered and as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form.”_ The DMP includes clear descriptions and rationale for the access regimes that are foreseen for collected data sets. 
Therefore, the DMP explicitly leaves open the handling, use and curation of products such as tools, software and written documents. Thus, we restrict the focus of our DMP to numerical data products such as produced model data or observation data.

3 Digital data sets generated or collected in SIM4NEXUS

The intention of the DMP is to present the data management plans of the Work Packages. The information listed below reflects the conception and design of the individual Work Packages at the beginning of the project, updated according to the current status. As the operational phase of the project started in June 2016, data collection and generation is at a very early stage and is largely restricted to the ongoing fast track for the Sardinia area. The fast track aims to:
1. Identify difficulties related to collecting datasets from the different stakeholders and projects, in order to run models and obtain a comprehensive view of all nexus components in the area
2. Identify obstacles and find solutions to harmonize data in terms of scale and spatial distribution

Thus, the data register will deliver information according to Annex 1 of the H2020 guidelines (2015):
1. Data set reference and name: Identifier for the data set to be produced.
2. Data set description: Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
3. Standards and metadata: Reference to existing suitable standards; in case they do not exist, an outline of how and what metadata will be created.
4.
Data sharing: Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination, necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).
5. Archiving and preservation (including storage and backup): Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.

FAIR data, in the context of the ORD Pilot, stands for Findable, Accessible, Interoperable, and Re-usable data. The following table presents the template to be used to report the datasets related to each WP.

Table 1. WPXX datasets
<table> <tr> <th> WPXX – Dataset YY </th> <th> Description </th> </tr> <tr> <td> Data set reference and name </td> <td> </td> </tr> <tr> <td> Data set description </td> <td> </td> </tr> <tr> <td> Standards and metadata </td> <td> </td> </tr> <tr> <td> Data sharing </td> <td> </td> </tr> <tr> <td> Archiving and Preservation (including storage and back-up) </td> <td> </td> </tr> <tr> <td> Reported by </td> <td> (WP leader) / (WP co-leader) </td> </tr> </table>
EPSILON is responsible for communicating with the WP Leaders and co-Leaders and collecting the required information; the people assigned to reporting and updating the above template are shown in Table 2.

Table 2. Assign persons
<table> <tr> <th> WP </th> <th> Assigned person </th> </tr> <tr> <td> WP1 </td> <td> Chrysi Laspidou (WP leader) / Mark Howells (WP co-leader) </td> </tr> <tr> <td> WP2 </td> <td> Maria Witmer (WP leader) / Janez Sušnik (WP co-leader) </td> </tr> <tr> <td> WP3 </td> <td> Dragan Savic (WP leader) / Maria Blanco (WP co-leader) </td> </tr> <tr> <td> WP4 </td> <td> Xavier Domingo (WP leader) / Marc Bonazountas (WP co-leader) </td> </tr> <tr> <td> WP5 </td> <td> Floor Brouwer (WP leader) / Maïté Fournier (WP co-leader) </td> </tr> <tr> <td> WP6 </td> <td> Alexandre Bredimas (WP leader) / Chengzi Chew (WP co-leader) </td> </tr> <tr> <td> WP7 </td> <td> Guido Schmidt (WP leader) / Frank Wechsung (WP co-leader) </td> </tr> <tr> <td> WP8 </td> <td> George Beers (WP leader) </td> </tr> </table>
The time plan (Figure 2: DMP Time plan) for the next 6 months calls for input from both the Sardinia pilot case and from each WP. As soon as simulations are completed, more input will become available in terms of data and data ontologies, which will be stored in the database and the semantic repository. Additionally, bilateral teleconferences between EPSILON and the WP Leaders take place regularly to discuss available and created data and information, and the terms of use (i.e. whether data will be publicly available or not).

3.1 Data set reference and name

Data set reference and naming will be implemented to employ a standard identification mechanism for each data set, according to the metadata standard implemented. Zenodo 8 (a popular repository for research data, which will be extensively exploited throughout the project) assigns every publicly available upload a Digital Object Identifier (DOI) to make the upload easily and uniquely citable. Zenodo supports harvesting of all content via the OAI-PMH protocol 9 .
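As an illustration of that harvesting route, the sketch below builds a standard OAI-PMH `ListRecords` request URL against Zenodo's harvesting endpoint and extracts DOIs from an abbreviated Dublin Core response. The verb and parameter names are defined by the OAI-PMH protocol; the inlined response is hand-written for the example, not real Zenodo data:

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Standard OAI-PMH request against Zenodo's harvesting endpoint.
base = "https://zenodo.org/oai2d"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
request_url = base + "?" + urllib.parse.urlencode(params)
# -> https://zenodo.org/oai2d?verb=ListRecords&metadataPrefix=oai_dc

# Abbreviated, hand-written sample of an oai_dc response (illustrative DOIs).
sample = """<OAI-PMH xmlns:dc="http://purl.org/dc/elements/1.1/">
  <record><dc:identifier>https://doi.org/10.5281/zenodo.00001</dc:identifier></record>
  <record><dc:identifier>https://doi.org/10.5281/zenodo.00002</dc:identifier></record>
</OAI-PMH>"""

DC = "{http://purl.org/dc/elements/1.1/}"
dois = [e.text for e in ET.fromstring(sample).iter(DC + "identifier")]
```

A real harvester would fetch `request_url` over HTTP and follow OAI-PMH resumption tokens; the parsing step stays the same.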
3.2 Data set description

Two kinds of data sets relate to the project's DMP: (i) data sets referred to in project publications (deliverables and papers), and (ii) curated and/or raw data collected or produced during the project. SIM4NEXUS data set collection and production is mainly linked to WP3, which applies the thematic models selected for the project (i.e. E3ME-FTT, MAGNET, CAPRI, IMAGE-GLOBIO, OSeMOSYS, SWIM, MAgPIE) within the individual Case Studies (as specified in Task 3.3 of WP3) to realise a partial simulation of the Nexus components under different scenarios, feeding into the development of the Serious Game. Based on this testing, the partners will then collect and organize the data into a semantic database that houses the complexity science tools (WP1). They will also review and select the most suitable integration methodologies for the Case Studies and the Serious Game (for WP4). Thereby, integrated complexity science models will be developed for all the Case Studies; these models will then be used to run many scenarios. Thus, SIM4NEXUS produces raw data, parts of which are summarized in deliverables and scientific publications. This raw data, underpinning the published work, constitutes the main research data sets that will be made publicly available if the WP leaders give their permission. In cases where the release of complete raw data sets is impossible due to, for example, privacy or personal data concerns (such as packet traces involving networking usage of trial participants), we will strive to find _data sanitation_ and _anonymization_ approaches that enable publishing as large a part of the data as possible. Any scripts used for post-processing the raw data will also be shared.

_Data layer_

The data layer is responsible for storing the SIM4NEXUS data according to different strategies:
* As files: For example, raster data can be stored in raster file formats such as JPEG, TIFF or GeoTIFF.
Vector data can be stored in various formats such as ESRI Shapefile, MapInfo, DXF, etc.
* Within a database: This is managed by a (spatial) database management system such as Oracle Spatial, ESRI ArcSDE or Postgres/PostGIS. Spatial databases are used for higher data volumes, when data must be accessed and updated by many users, when a security policy is required, when complex spatial and non-spatial operations must be performed on the database, and when the data must be integrated with non-spatial data.

3.3 Standards and Metadata

As mentioned, data will be shared in relation to (i) publications (deliverables and papers) and (ii) curated and/or raw data. For data linked to a scientific publication, the publication itself will serve as the main piece of metadata documentation for the shared data. When this is not adequate for the comprehension of the raw data, a report will be shared along with the data, explaining their meaning and methods of acquisition. For both data categories, however, the metadata will follow the standard structure of the Zenodo repository.

_For Public Availability of Data,_ data will be shared when the related deliverable, paper or data set has been made available at an open access (OA) repository by the responsible partner/owner of the data. It is expected that data related to a publication will be openly shared. However, to allow the exploitation of any opportunities arising from the raw data and tools, data sharing will proceed only if all co-authors of the related publication agree. The Lead Author is responsible for getting approvals and then sharing the data and metadata on Zenodo. The Lead Author will also create an entry on OpenAIRE 10 to link the publication to the data; OpenAIRE is a service built to offer this functionality and may be used to reference both the publication and the data. A link to the OpenAIRE entry will then be submitted to the SIM4NEXUS Website Administrator (FT) by the Lead Author.
In view of the precautions for the protection of personal data, it is explicitly confirmed that the data collected will be made publicly available after care is taken with regard to the rules of confidentiality, anonymity, and protection. Anonymized final data sets will be open access, and procedures are set out for how data will be preserved and archived in the repository. We are aware of post-publication risks to local researchers and end-users in our research sites and will mitigate all reasonable risk before publication, according to the ethical and IPR requirements set. However, "Opting Out" remains a choice for data owners: even when comprehensive measures are taken to ensure the safety of participants, researchers and their environment, it is only after a SIM4NEXUS report or peer-reviewed article is published and data sets are generated that the question of open access arises. Open access does not entail an absolute obligation to publish all data, and it is up to researchers and their associated organizations to decide whether data is suitable and ethical to publish or not.

3.4 Archiving and Preservation

To ensure archiving and preservation of long-tail research data during the project, a repository with a web catalogue service will be built and maintained after the project's completion. The Web Catalogue Service provides the system with a smarter interface to the SIM4NEXUS repository (geo-database). There are many technologies that can be exploited and adopted to perform this function.

3.4.1 Web catalogue service description

The CSW interface is based on the OGC™ Catalog Services Specification Version 2.0.2. The interaction between a client and a server is accomplished using a standard request-response model of the HTTP protocol. That is, a client sends a request to the server using HTTP and expects to receive a response to the request or an exception message.
Repository service access is based upon the HTTP protocol, with client and server requests and responses using XML or JSON. Client applications can use this interface for executing service repository queries and receiving service repository metadata results. Essentially, the purpose of a Catalog Service is to enable a user to locate, access, and make use of resources in an open, distributed system by providing facilities for retrieving, storing, and managing many kinds of resource descriptions. The metadata repository managed by the catalog can store a multitude of resource descriptions that conform to any standard Internet media type, such as:
* XML schemas
* Audio annotations
* Specification documents
* Style sheets for generating detailed topographic maps

Furthermore, arbitrary relationships among catalog items can be expressed by creating links between any two resource descriptions. For example, a service offer may be associated with descriptions of the data sets that can be acquired using the service. A catalogue can function as a stand-alone service, or it can interact with other affiliated catalogues within a federation that spans multiple administrative domains. The federation then effectively enlarges the total search space within which resource descriptions may be discovered. When a catalog is linked to a peer catalog, it makes the resource descriptions managed by the peer implicitly available to its own clients. Each catalog client connects to a single catalog service as its main point of contact with the federation. This is the agent node; the propagation of request messages to neighboring nodes is invisible to the client, who need not know where the metadata repositories are located or how they are accessed. The CSW catalogue profile is intended to provide a flexible, general-purpose catalogue service that can be adapted to meet the needs of diverse communities of practice within the geospatial domain.
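A concrete CSW 2.0.2 exchange can be sketched as follows. The key-value-pair parameters (`service`, `version`, `request`) are defined by the OGC specification; the catalogue URL is a hypothetical SIM4NEXUS endpoint, and the inlined response is hand-written rather than taken from a live server:

```python
import urllib.parse
import xml.etree.ElementTree as ET

# KVP encoding of a CSW 2.0.2 GetRecords request (parameters per the OGC spec).
catalogue = "https://catalogue.example.org/csw"   # hypothetical endpoint
params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "resultType": "results",
}
url = catalogue + "?" + urllib.parse.urlencode(params)

# Abbreviated, hand-written GetRecordsResponse for the example.
sample = """<csw:GetRecordsResponse xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
  <csw:SearchResults numberOfRecordsMatched="2"/>
</csw:GetRecordsResponse>"""

CSW_NS = "{http://www.opengis.net/cat/csw/2.0.2}"
results = ET.fromstring(sample).find(CSW_NS + "SearchResults")
matched = int(results.get("numberOfRecordsMatched"))
```

An exception from the server would instead arrive as an OWS `ExceptionReport` document, which a robust client should also handle.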
In the SIM4NEXUS framework we try to respect the following rules:
1. Communicate information adopting standard protocols (e.g. XML/JSON, OGC standards, etc.).
2. Adopt a solution that allows for maximum interoperability among the actors who will process the stored data.

State-of-the-art technologies that will be used in the context of the Web Catalogue Service include:
* pyCSW ( _http://pycsw.org/_ )
* GeoNetwork ( _http://geonetwork-opensource.org/_ )
* Micka ( _http://micka.bnhelp.cz/_ )
* CKAN ( _https://ckan.org/_ )

pyCSW

pyCSW is an OGC-compliant Python implementation that permits clients to search metadata. The CSW interface is based upon the OGC™ Catalog Services Specification Version 2.0.2; the interaction between a client and a server is accomplished using a standard request-response model of the HTTP protocol.

GeoNetwork

GeoNetwork open source is a standards-based and decentralized spatial information management system, designed to enable access to geo-referenced databases and cartographic products from a variety of data providers through descriptive metadata, enhancing spatial information exchange and sharing between organizations and their audience using the capacities and the power of the Internet. The system provides a broad community of users with easy and timely access to available spatial data and thematic maps from multidisciplinary sources, which may in the end support informed decision making. The main goal of the software is to increase collaboration within and between organizations, reducing duplication and enhancing information consistency and quality, and to improve the accessibility of a wide variety of geographic information along with the associated information, organized and documented in a standard and consistent way.
Main features:
* Instant search on local and distributed geospatial catalogues
* Uploading and downloading of data, documents, PDFs, and any other content
* An interactive Web map viewer that combines Web Map Services from distributed servers around the world
* Online map layout generation and export in PDF format
* Online editing of metadata with a powerful template system
* Scheduled harvesting and synchronization of metadata between distributed catalogues
* Group and user management
* Fine-grained access control

From a technical point of view, GeoNetwork has been developed following the principles of Free and Open Source Software (FOSS) and is based on international and open standards for services and protocols, such as the ISO TC211 and Open Geospatial Consortium (OGC™) specifications. The architecture is largely compatible with the OGC™ Portal Reference Architecture, i.e. the OGC™ guide for implementing standardized geospatial portals. Indeed, the structure relies on the same three main modules identified by the OGC Portal Reference Architecture, focused on spatial data, metadata, and interactive map visualization. The system:
* Is fully compliant with the OGC™ specifications for querying and retrieving information from a Web Catalog (CS-W)
* Supports the most common standards to specifically describe geographic data (ISO 19139 and FGDC) and the international standard for general documents (Dublin Core)
* Uses standards (OGC WMS) also for visualizing maps through the Internet

A common use case is the harvesting of geospatial data in a shared environment. Within the geographic information environment, the increased collaboration between data providers and their efforts to reduce duplication have stimulated the development of tools and systems that significantly improve information sharing and guarantee easier and quicker access to data from a variety of sources without undermining the ownership of the information.
The harvesting functionality in GeoNetwork is a mechanism of data collection in accordance with both rights of data access and data ownership protection. Through the harvesting functionality it is possible to collect public information from the different GeoNetwork nodes installed around the world and to copy and store this information locally on a periodic basis. In this way, a user can, from a single entry point, also get information from distributed catalogues.

MICKA

MICKA is a meta-information catalogue that fully complies with the ISO 19115 standard and with the INSPIRE principles. It can be integrated with map applications, and it is multilingual. The web catalogue service uses OGC specifications. MICKA is a complex system for metadata management used for building Spatial Data Infrastructure (SDI) and geo-portal solutions. It contains tools for editing and managing metadata for spatial information, web services and other sources (documents, web sites, etc.). It includes an online metadata search engine, portrayal of spatial information, and download of spatial data to a local computer. MICKA is compatible with the obligatory standards for European SDI building (INSPIRE). Therefore, it is ready to be connected with other nodes of the prepared network of metadata catalogues (its compatibility with the pilot European geo-portal is continuously being tested). Functions include:
* Spatial data metadata (ISO 19115)
* Spatial services metadata (ISO 19119)
* Dublin Core metadata (ISO 15836)
* Feature catalogue support (ISO 19110)
* OGC CSW 2.0.2 support (catalogue service)
* User-defined metadata profiles
* INSPIRE metadata profile
* Web interface for metadata editing

It is multilingual (both the user interface and metadata records); currently 16 languages are supported, and the system can be dynamically extended with other languages.
Context help is also multilingual. Import from the following metadata formats is supported:
* ESRI ArcCatalog
* ISO 19139
* OGC services (WMS, WFS, WCS, CSW)
* Feature catalogue XML

Further features:
* Export – ISO 19139, GeoRSS
* Support of thesauri and gazetteers
* Display of changes with GeoRSS

A template-based interface, adaptable to user requirements, is available and can cooperate closely with any map client for the display of online map services. MICKA stores metadata in a relational database, edited through dynamically generated forms; it is therefore possible to amend other standards or profiles, and to switch between profiles while editing. Individual profiles can be distributed into sections. With the help of control elements, it is possible to duplicate individual items, select from code lists or connect to supporting applications. Checking of mandatory items is enabled while editing. The integrated MICKA application is divided into 3 independent components:
* Metadata creation
* Metadata importing
* Metadata management

CKAN

CKAN is a powerful data management system that makes data accessible by providing tools to streamline publishing, sharing, finding, and using data. It is aimed at data publishers (national and regional governments, companies, and organizations) wanting to make their data open and available. CKAN implements several of the administrative services that are described in the data management plan and provides both an attractive end-user client and a Web Service API. CKAN is built with Python on the backend and JavaScript on the frontend, and uses the Pylons web framework and SQLAlchemy as its ORM. Its database engine is PostgreSQL and its search is powered by Solr. It has a modular architecture that allows extensions to be developed to provide additional features such as harvesting or data upload.
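To illustrate the Web Service API mentioned above, the sketch below targets CKAN's action API, whose `package_search` action is a real, documented endpoint; the portal URL and the inlined JSON response body are illustrative assumptions:

```python
import json
import urllib.parse

# CKAN's action API exposes dataset search as /api/3/action/package_search.
portal = "https://ckan.example.org"          # hypothetical portal URL
query = urllib.parse.urlencode({"q": "nexus", "rows": 5})
url = f"{portal}/api/3/action/package_search?{query}"

# Abbreviated, hand-written sample of a package_search response body.
body = json.loads("""{
  "success": true,
  "result": {"count": 1,
             "results": [{"name": "sardinia-climate", "num_resources": 3}]}
}""")

assert body["success"]  # CKAN signals action failure via "success": false
datasets = {d["name"]: d["num_resources"] for d in body["result"]["results"]}
```

A live client would fetch `url` over HTTP and feed the response text to `json.loads` in exactly the same way.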
CKAN uses its internal model to store metadata about the different records and presents it on a web interface that allows users to browse and search this metadata. It also offers a powerful API that allows third-party applications and services to be built.

3.4.2 Zenodo Repository

After the completion of the project, the final dataset will be transferred to the Zenodo repository, which ensures sustainable archiving of the final research data sets and publications produced. Zenodo is built and developed by researchers in the context of the OpenAIRE project which, in the vanguard of the open access and open data movements in Europe, was commissioned by the EC to support its nascent Open Data policy by providing a catch-all repository for EC-funded research. One of its major advantages is that it works closely with GitHub 11, enabling users to make the work they share on GitHub citable by archiving a GitHub repository and assigning it a DOI with the data archiving tool Zenodo.

4 Data Summary – Specifics

This section must be understood as a living section that will be further updated in future iterations as the project evolves and more input in terms of data and data reporting comes from the SIM4NEXUS partners.
The purpose of this section is to provide an executive summary of the different SIM4NEXUS data, addressing the following issues:
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom it will be useful
* Define how these data are going to be accessible, both for internal and public use

4.1 System Dynamics models data sets

EPSILON has implemented the SIM4NEXUS repository (currently using Dropbox infrastructure) for the efficient management of the SIM4NEXUS datasets. The service supports cloud storage, file sharing, synchronization, and client software. The file organization structure adopted in the context of the project supports easy identification and revision of the datasets provided by each partner. For each of the SIM4NEXUS cases, a specific folder has been created in the file hosting service. The folder contains the following sub-folders: 01-ModelData, 02-ArbitraryData, 03-ThematicData, 04-ClimateData. The first sub-folder provides the outputs from the selected models applied in each case study; the file type of the outputs is either Microsoft Excel Open XML Spreadsheet (.XLSX file extension) or Comma-Separated Values (.CSV file extension). The sub-folder 02-ArbitraryData contains information about the relevant Case Study, including the conceptual model of the case, the concept harmonization process, etc. The third sub-folder contains the thematic datasets of each case study along with their metadata.
Finally, the sub-folder 04-ClimateData contains the climate datasets, such as precipitation, relative humidity, long-wave downward solar radiation at the ground, daily maximum air temperature, daily minimum air temperature, and wind speed at 10 m height. The datasets in this folder are of a generic file type with a .DAT file extension; each dataset of this type may contain data in binary or text format. A standardised name has been assigned to each dataset in the following format: _Country code_Earth System Model_Simulation Method_Period_Time frequency.dat_ In this way, all the necessary elements of each dataset are provided: the way the dataset was produced (i.e. model and simulation method), the addressed area, and the duration and time frequency. Each sub-folder in the file hosting service contains files with descriptive information about the available datasets. At the moment, none of these datasets is intended for public availability, as they are only useful in the context of building System Dynamics Models and are specific to the different scenarios of the case studies (only Sardinia so far). It has been decided internally that only people involved in each case study will have access to the specific case study folder. This is a default initial status, which can be revoked if the Case Study leader considers it convenient. Access is read-only by default; likewise, if edit access is needed, it can be requested from EPSILON by the Case Study leader. As these datasets are only useful for generating the specific case studies' System Dynamics Models, it has been decided not to make them publicly available. At this stage of the project, this decision only affects Sardinia's Case Study. However, baselines for some case studies are being developed and may be of interest to the scientific community.
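The naming convention above lends itself to mechanical parsing, which helps keep the folder contents auditable. A minimal sketch follows; the concrete field values in the example filename are invented for illustration:

```python
# Pattern: Country code_Earth System Model_Simulation Method_Period_Time frequency.dat
def parse_climate_name(filename: str) -> dict:
    """Split a 04-ClimateData filename into its five documented components."""
    if not filename.endswith(".dat"):
        raise ValueError("climate datasets use the .dat extension")
    stem = filename[: -len(".dat")]
    country, esm, method, period, freq = stem.split("_")
    return {"country": country, "model": esm, "method": method,
            "period": period, "frequency": freq}

# Invented example values following the documented pattern.
meta = parse_climate_name("IT_HadGEM2-ES_bias-corrected_1981-2010_daily.dat")
```

Note that the scheme only works if the individual components themselves contain no underscores, which is a constraint worth stating in the folder's descriptive files.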
In that case, the SIM4NEXUS project will make these baselines available under Open Access.

4.2 Semantic repository

A Semantic Repository is being developed to store information related to the concepts, properties and restrictions of the Nexus procedures, to improve the integration of data from diverse sources and, finally, to provide better analytical power. This repository, currently focused on supporting the information flow between the System Dynamics Models and the Serious Game user interface, will also allow for storage of knowledge related to the Nexus, policies, etc., coming from WP1 and WP2. A triple-store is used as the base of the repository, with an ontology semantically representing the stored data. The ontology is still under development and represents the SIM4NEXUS knowledge and scope. Currently, the top concept is the 'Session', which represents a game or session in the Serious Game. Linked to the 'Session' are a 'User', representing the player, and a 'StudyCase', which in turn is related to a specific 'Model', an SDM. In order to represent the Nexus state through the game, the 'Session' has a list of 'StateEvolution' elements, each representing the path from one state ('State') to a new state ('State') by applying certain policies ('Policy'). The 'State' is defined by 'NexusComponent' elements (for instance 'Climate and environment' or 'Water'), and these components have specific parameters ('Parameter'). The ontology is defined using the Web Ontology Language (OWL), a Semantic Web language designed to represent rich and complex knowledge about things and the relations between things. Some existing ontologies related to the Nexus have been analysed for involvement in the SIM4NEXUS context:
* WatERP ontology, which reflects the water manager's expertise in managing water supply and demand.
The novelty of the WatERP ontology lies in including human interactions with the natural water paths as a mechanism to understand how they affect water resources management, with the objective of matching supply with demand; these interactions can range from infrastructures to management decisions.
* WEFNexus ontology, which concerns Water, Energy and Food as defined by the European Directives: Article 2 of EU Directive 98/83/EC, which defines water intended for human consumption; Article 2 of EU Directive 2003/30/EC, which defines bio-fuels; and Article 2 of EU Regulation 178/2002/EC, which defines food.

At the moment, the ontology is not publicly available, as it is in an initial state. As soon as a stable version is reached, it will be made broadly available.

4.3 3D map and terrain data in serious game

The 3D map available in the current Serious Game user interface prototype is rendered from two sources, a height map and a texture overlay:
* The height map data comes from the NASA Shuttle Radar Topographic Mission (SRTM) v4.1 and is distributed freely by USGS. The SRTM data is available at 90 m resolution. Not all SRTM data is used in the serious game; only data related to the geographical regions of the 12 case studies is used. This data is downloaded from the USGS web site and then stored as mesh elements within the serious game client.
* The texture overlay is from Google Earth; it is downloaded from Google and then stored within the serious game client.

Consequently, we are not generating this data but reusing existing publicly available datasets.

4.4 Other considerations

As stated in D9.1, the SIM4NEXUS project foresees providing both short- and long-term benefits for the involved decision-makers and their associated networks. Given the above, « _the only ethics issues involved in the SIM4NEXUS study concern general ethical issues of informed consent, anonymity and confidentiality associated with the voluntary involvement of human participants in the European Union_ ».
SIM4NEXUS will not involve types of data related to sensitive topics (well described in D9.1) which might generate uncomfortable situations, such as psychological stress, any kind of anxiety or humiliation, deception, or any potential increased danger to participants, nor the gathering of personal data from participants. Thus, SIM4NEXUS has defined approaches for the following issues:
1. Collection and processing of personal data (described in D9.1)
2. General ethics commitments (described in D9.1)
3. Storing and sharing information (described in D9.1)
4. Accessing and using of information (described in D9.1)
5. Protection of information (described in D9.1)

In addition, deliverable D9.2 provides templates of the informed consent forms and information sheets. This information is confidential, and only for the consortium members (including the Commission Services). Also, the SIM4NEXUS consortium agreed on signing an ethics agreement based on the European Code of Conduct for Research Integrity, published by the European Science Foundation (http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegrity.pdf), and the ethical principles for conducting community-based participatory research, as defined by the National Co-ordinating Centre for Public Engagement of Durham University, UK (www.publicengagement.ac.uk). This will ensure fair and equal power relationships between researchers and participants. Last but not least, Deliverable 9.3 stipulates a keen awareness of most ethical issues, as they were presented to the Social Sciences Ethics Committee (SEC): « _The SEC is convinced that fair and respectful treatment in terms of inconvenience, consent and privacy is assured_ ».

5 FAIR Data – Specifics

This section must be understood as a living section that will be further updated in future iterations as the project evolves and more input in terms of data and data reporting comes from the SIM4NEXUS partners.
Intellectual property rights (IPR) management in the SIM4NEXUS project is a substantial part of its data management plan. Usually data content and the database system are treated as one, but when it comes to IPR, a distinction between the databases and the data content is of utmost importance. It is imperative for other users to know how they can reuse both the data collected, assembled or generated and the databases in which these are included. The Open Data Commons group (http://opendatacommons.org) developed the following tools to govern the use of data sets. The three ODC licenses are:
* Public Domain Dedication and License (PDDL): This dedicates the database and its content to the public domain, free for everyone to use.
* Attribution License (ODC-By): Users can make use of the database and its content in new and different ways, but they need to provide an attribution to the source of the data and/or the database.
* Open Database License (ODC-ODbL): ODbL stipulates that any use of the database must provide attribution, and any new outcomes must use the same terms of licensing (an unrestricted version of the new product must also always be accessible).

In addition, it is acceptable to articulate for the SIM4NEXUS project a set of "community norms" that can be used complementarily to the formal licenses. In this version of D4.2 this section is briefly answered in Section 9, Questions & Answers on FAIR data, and is thus structured here to give the outline of future DMP iterations.
5.1 Making data findable, including provisions for metadata

This section will be updated in the next iterations to provide detailed information on how data will be made discoverable, and more specifically:

* Discoverability of data (metadata provision)
* Identifiability of data and reference to a standard identification mechanism
* Use of persistent and unique identifiers such as Digital Object Identifiers
* Naming conventions used
* Approach towards search keywords
* Approach for clear versioning
* Standards for metadata creation
* Type of metadata created, and how

As detailed in section 4, all data, information, and knowledge considered relevant for the scientific community will be made accessible under Open Access. At the moment, all datasets and other means of information storage are in draft status, and this information has not been defined for each dataset. When a dataset is set to be publicly accessible, this information will be filled in and the DMP updated accordingly.

5.2 Making data openly accessible

This section will be updated in the next iterations to provide detailed information on how data will be made accessible, assessable and intelligible, and more specifically:

* Which data will be made openly available
* Which data is kept closed, and the rationale for this
* How the data will be made available
* What methods and software tools are used to access the data
* Documentation of the software needed to access the data
* Inclusion of relevant software
* Deposit of data and associated metadata, documentation and code
* Provision of access in case of restrictions

As detailed in sections 4 and 5.1, all data, information, and knowledge considered relevant for the scientific community will be made accessible under Open Access. At the moment, all datasets and other means of information storage are in draft status, and this information has not been defined for each dataset.
When a dataset is set to be publicly accessible, this information will be filled in and the DMP updated accordingly.

5.3 Data interoperability

This section will be updated in the next iterations to provide detailed information on how data will be made interoperable to specific quality standards, and in more detail:

* Assessment of the interoperability of project data
* Specifics on data/metadata vocabularies, standards, and methodologies followed
* Use of standard vocabularies for all data types present, to allow inter-disciplinary interoperability
* Provision of mappings to more commonly used ontologies

To assure data interoperability, the SIM4NEXUS project will follow state-of-the-art ontologies and standards. The two main elements that will publicly store and make available information and services in SIM4NEXUS are the Knowledge Elicitation Engine and the Semantic Repository. The Knowledge Elicitation Engine is being implemented under Service Oriented Architecture principles, following the OGC standards and services defined for information publication, discovery, exchange, etc. (WPS, WFS, GML, …). Please refer to deliverable D4.3, section 3.2, for more information. Regarding the Semantic Repository, please refer to section 4.2 of this document.

5.4 Increased data re-use

This section will be updated in the next iterations to provide detailed information on how data will be made usable beyond the original purpose for which it was collected, and in more detail:

* Data licensing to permit the widest reuse possible
* Data availability for re-use
* Why and for what period a data embargo is imposed
* Usability of data by third parties after the end of the project
* Restrictions on re-use of some data
* Data quality assurance processes
* Length of time for which the data will remain re-usable

As detailed in sections 4, 5.1, and 5.2, all data, information, and knowledge considered relevant for the scientific community will be made accessible under Open Access.
At the moment, all datasets and other means of information storage are in draft status, and this information has not been defined for each dataset. When a dataset is set to be publicly accessible, this information will be filled in and the DMP updated accordingly.

6 Ethical & Security aspects

6.1 General

The SIM4NEXUS study only raises general ethical issues, such as informed consent, anonymity and confidentiality associated with the voluntary involvement of human participants in the European Union. The types of such data collected in SIM4NEXUS are various user interviews, opinions and reviews associated with the project's components. A non-exhaustive list is as follows:

* Stored involvement of Serious Game users, to gain insight into the decisions and behaviours of the players and to allow further analysis
* The visualization and interaction tool, which collects information from users so that the Knowledge Elicitation Engine (KEE) can learn from user decisions
* A series of interviews with key stakeholders and decision makers, in particular those who might be affected most by a Nexus-compliant implementation of policies, or whose behavioural change is central to the achievement of a resource-efficient Europe
* Planned contacts with representatives of the targeted users. Interviews should be carried out by phone or face-to-face when convenient. Interviews should help define the expected functionalities/services to be offered, test the price that could be acceptable, and identify distribution channels to access these clients
* The opportunity for end-users, potential developers, partners, etc. to test and review the latest products and services
* Methodology and procedures for processing and storing sensitive data, which will be specified as part of the ethics Deliverable 9.1

It is important to emphasize that special efforts will be devoted to anonymizing information and securing accessibility.
Mechanisms to delete personal data will be provided in an easy and usable manner.

6.2 Intellectual Property Rights (IPR)

Intellectual Property Rights (IPR) will receive special attention from the beginning. All rules regarding the management of knowledge and IPR are governed by the Consortium Agreement (CA). The SIM4NEXUS CA is based on the DESCA (Consortium Agreement Model) H2020 model. SIM4NEXUS will not act in contradiction with the rules laid down in Annex II of the Grant Agreement. The CA addresses background and foreground knowledge, ownership, protected third-party components of the products, and the protection, use and dissemination of results and access rights. The following principles will be applied:

* Confidentiality: during the project duration and beyond (Section 10 of the GA: non-disclosure of information for a period of 4 years after the end of the project), the contractors shall treat any information designated as proprietary by the disclosing contractors as confidential. They shall also impose the same obligations on their employees and suppliers.
* Pre-existing know-how: each contractor is and remains the sole owner of the IPR over its pre-existing know-how. The contractors will identify and list the pre-existing know-how over which they may grant access rights for the project. The contractors agree that access rights to the pre-existing know-how needed for carrying out their own work under the project shall be granted on a royalty-free basis.
* Ownership and protection of knowledge: the ownership of the knowledge developed within the project will be governed by an open source license.
* Open data: data and results obtained during the project that are based on open public-sector data will be made available free of charge.

7 Questions & Answers on FAIR data

A further goal of this document is to clarify a series of questions related to all datasets produced or gathered in the SIM4NEXUS project.
In this concluding section, we describe how the DMP answers these questions. In the following table, we report the characteristics of the datasets together with the questions to which the DMP should answer.

Table 3. SIM4NEXUS data characteristic description.

<table> <tr> <th> Data Characteristic </th> <th> Question </th> <th> SIM4NEXUS Answer </th> </tr> <tr> <td> Discoverable </td> <td> Are data and associated software produced and/or used in SIM4NEXUS discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier)? </td> <td> Data produced within the project can be discovered in the SIM4NEXUS database and will be uniquely identified, most probably by a RESTful (Representational State Transfer) service pointing to the database resource. Third-party datasets will be referenced and identified with data name and version, and are thus discoverable in their original data repositories. Existing software that is used should be background information of a given partner and as such may be documented but not discoverable. This may also apply to modifications made as part of the SIM4NEXUS case. </td> </tr> <tr> <td> Accessible </td> <td> Are data and associated software produced and/or used in SIM4NEXUS accessible, and in what modalities, scope, and licenses (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.)? </td> <td> Intermediate data (i.e. non-final data produced during the processing-chain elaboration) will be stored in the database, but only Consortium Members will have access to them. Final product data will instead be stored in the database and also be freely accessible by external parties through a dedicated web service obtaining data from the database. </td> </tr> <tr> <td> Assessable and intelligible </td> <td> Are data and associated software produced and/or used in the project accessible for and intelligible to third parties in contexts such as scientific scrutiny and peer review (e.g.
are the minimal datasets handled together with the scientific papers for peer review; is data provided in a way that judgments can be made about its reliability and the competence of those who created it)? </td> <td> Final product data will be freely accessible for and intelligible to third parties. </td> </tr> <tr> <td> Useable beyond the original purpose for which it was collected </td> <td> Are data and associated software produced and/or used in the SIM4NEXUS project useable by third parties even a long time after the collection of the data (e.g. is the data safely stored in certified repositories for long-term preservation and curation; is it stored together with the minimum software, metadata, and documentation to make it useful; is the data useful for wider public needs and usable for the likely purposes of non-specialists)? </td> <td> Final product data will be useable by third parties even a long time after its production. Thanks to the infrastructure behind the database (IaaS, Infrastructure as a Service, which will be applied to SIM4NEXUS), data retention will have no physical limit in size or period of validity (only economic limits related to maintenance). Historical data will be accessible through the same interface and methods as the most recent data, thanks also to dedicated storage methodologies. </td> </tr> <tr> <td> Interoperable to specific quality standards </td> <td> Are data and associated software produced and/or used in the project interoperable, allowing data exchange between researchers, institutions, organizations, countries, etc. (e.g. adhering to standards for data annotation and data exchange, compliant with available software applications, and allowing re-combination with different datasets from different origins)? </td> <td> The web service (providing data) will be based on well-known protocols (i.e.
Data Access Protocol – DAP 2.0, RESTful/HTTP), so all data can be accessed in a standardized way through a compliant HTTP/DAP client. Moreover, data will be stored, where possible, using recognized state-of-the-art standards and protocols, thus assuring interoperability and maximizing the exploitation of results. </td> </tr> </table>

References

* Article 43.2 of Regulation (EU) No 1290/2013 of the European Parliament and of the Council, of 11 December 2013, laying down the rules for participation and dissemination in "Horizon 2020 – the Framework Programme for Research and Innovation (2014-2020)" and repealing Regulation (EC) No 1906/2006.
* Guidelines on Data Management in Horizon 2020, http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
* Open Access to Scientific Publications and Research Data in Horizon 2020 Guidelines, https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf
* Open Research Data Pilot (ORD pilot): https://www.openaire.eu/opendatapilot
* SIM4NEXUS Grant Agreement & SIM4NEXUS Consortium Agreement
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1097_DECISIVE_689229.md
<table> <tr> <th> **1.** </th> <th> **Executive summary** </th> </tr> </table>

As stated in the proposal, WP 3.3 takes part in the "Pilot on Open Research Data" 1 suggested in the H2020 guidelines. It aims at making the final results and, when possible, the intermediate results of this WP (mainly in the form of spatial datasets) available for free to access, reuse, repurpose and redistribute. This deliverable defines the specific data management plan (DMP) for WP 3.3, a key element in complying with the Open Research Data requirement. The document follows the guidelines on "FAIR Data Management in Horizon 2020" 2. It describes the data that will be produced by WP 3.3, the file formats and metadata standards used, and the sharing and archiving strategy. The DMP for the overall project is described in D8.4; the DMP described here is specific to WP 3.3. This document is the first version of the DMP. It will be updated over the course of the project to add details on the data that will be published, or if there are changes in the consortium policies or consortium composition.

<table> <tr> <th> **2.** </th> <th> **ADMIN DETAILS** </th> </tr> </table>

**Project Name:** DMP for work package 3.3 of the DECISIVE project, funded by Horizon 2020 ("spatial approach for designing the decentralized urban biowaste valorization network")

**Grant Title:** 689229

**Principal Investigator / Researcher:** Irstea

**Description:** The growing attractiveness of cities leads to increasing population, and thus rising energy and food demands in urban areas. This makes urban waste management increasingly challenging, both in terms of logistics and of environmental or health impacts. To decrease cities' environmental impacts and to contribute to a better resilience of urban areas towards energy or food supply crises, waste management systems have to be improved to increase the recycling of resources and local valorization.
In this context, the DECISIVE project proposes to change the present urban metabolism for organic matter (foods, plants, etc.), energy and biowaste to a more circular economy, and to assess the impacts of these changes on the whole waste management cycle. The challenge will be to shift from an urban "grey box", implying mainly importation of goods and extra-urban waste management, to a cooperative organization of intra- and peri-urban networks enabling circular, local and decentralized valorization of biowaste through the production of energy and bio-products. Such a new waste management paradigm is expected to increase the sustainability of urban development by:

1. promoting citizens' awareness of waste costs and values;
2. promoting renewable energy production and use in the city;
3. developing an industrial ecology approach that can promote the integration between urban and peri-urban areas, by providing valuable agronomic by-products for the development of urban agriculture and so improving the balance of organic products and waste in the city;
4. developing new business opportunities and jobs.

In order to achieve these objectives, the DECISIVE project will develop and demonstrate eco-innovative solutions addressed to waste operators and public services. Work package 3.3 of the project, handled by Irstea, is focused on _a spatial approach for designing the decentralized urban biowaste valorization network_. It also takes part in the "_Pilot on Open Research Data_" suggested in the H2020 guidelines. Therefore the final results and, when possible, the intermediate results of work package 3.3 (mainly in the form of spatial datasets) will be free to access, reuse, repurpose and redistribute.
**Funder:** European Commission (Horizon 2020)

<table> <tr> <th> **3.** </th> <th> **DATA SUMMARY** </th> </tr> </table>

The DECISIVE project proposes to change the present urban metabolism for organic matter to a more circular economy (food waste, partly in connection with further digestible biowastes, converted to heat and power from biogas and to further material products resulting from SSF processes) and to assess the impacts of these changes on the whole waste management cycle. One objective of the project, addressed by work package 3.3, is to **design the decentralized urban biowaste valorization network** based on **a spatial analysis**.

In a first step, **data will be gathered** about:

* Locations and, if possible, characteristics of the sources of organic matter
* Locations and characteristics of the potential outlets for the products coming out of the process (energy, digestate, by-products)
* Location of rural, peri-urban and urban farms in the target areas
* Location of the current biowaste treatment systems (incineration, landfilling, biological, etc.)

Information will either be collected from existing databases (OpenStreetMap, official databases like BD Topo or SIREN in France, etc.) or created based on surveys.

In a second step, those data, completed with other information (AD 3 or SSF 4 characteristics, rules and legislation, social and economic factors, infrastructures), will be incorporated in a **multi-criteria analysis tool** to optimize the location of a network of AD and SSF sites.

The spatial data are saved in **Shapefile** format, an open and interoperable format recognized as the de facto standard for GIS vector data, or in **GeoTIFF** for raster data. The final sizes of the different datasets remain unknown at this stage of the project, but they should be small enough to remain easily manageable. The project data may be useful from an operational and methodological point of view.
In the cities used as case studies, the results may already be exploited as decision tools by local authorities to design new biowaste networks, but the data may also serve as guidelines for the replication of the method in other contexts.

<table> <tr> <th> **4.** </th> <th> **FAIR DATA** </th> </tr> </table>

**4.1. Making data findable, including provisions for metadata:**

All the data published in the framework of work package 3.3 of the DECISIVE project and included in the "_Pilot on Open Research Data_" procedure will comply with the **INSPIRE Directive** 5. Therefore, all data will be provided with **standard XML metadata**. The metadata will be published in metadata catalogs such as the _European Union Open Data Portal_ to ensure their good discoverability. The metadata will contain in particular:

* Clear information about the content of the data (title, abstract, etc.) to ensure a good understanding of its content
* A set of keywords, including keywords from official thesauri (GEMET 6, etc.) and tailor-made keywords, to optimize the searching process
* Date of creation and revision, and contacts of the creators

The metadata will be created manually for each dataset with GIS software (ESRI ArcGIS or QSphere, a plugin for the QGIS software). In the case of versioning, the version of the data will be clearly expressed in the file name as defined by the file naming schema of the DECISIVE project (Deliverable 8.4, e.g. _DECISIVE-WP3.3-title-partner_name-Vxx_, where **Vxx** indicates the version number: V01, V02, etc.), but also in the title and the abstract of the metadata. All datasets will have a **unique resource identifier** (DOI) provided by the repository _Zenodo_ 7.

# 4.2. Making data openly accessible

The final results of work package 3.3 will be openly accessible in the selected repository (Zenodo).
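As an illustration of the file naming and versioning schema defined in section 4.1, the following sketch (our own, not project tooling; the example title and partner values are assumptions) builds and checks such file names:

```python
import re

# Illustrative sketch only (not official DECISIVE tooling): build and validate
# file names following the Deliverable 8.4 schema,
#   DECISIVE-WP3.3-title-partner_name-Vxx
# where Vxx is the two-digit version number (V01, V02, ...).
NAME_PATTERN = re.compile(
    r"^DECISIVE-WP3\.3-(?P<title>[^-]+)-(?P<partner>[^-]+)-V(?P<version>\d{2})$"
)

def build_name(title: str, partner: str, version: int) -> str:
    """Assemble a dataset file name with a zero-padded version number."""
    return f"DECISIVE-WP3.3-{title}-{partner}-V{version:02d}"

def parse_name(name: str) -> dict:
    """Return the schema fields of a file name, or raise ValueError."""
    match = NAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"name does not follow the WP3.3 schema: {name!r}")
    fields = match.groupdict()
    fields["version"] = int(fields["version"])
    return fields
```

For instance, `build_name("biowaste_sources", "Irstea", 1)` yields `DECISIVE-WP3.3-biowaste_sources-Irstea-V01`, and `parse_name` recovers the title, partner and version fields from such a name.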
The intermediate data will also be published in the same repository, except if:

* The owners of the data sources or raw data request to keep them closed
* The data contain some personal information

Note: the data processing will heavily modify the raw data; therefore, the final results will show only anonymized information.

Method for data sharing and data repository: the data and the metadata will be available through Zenodo (www.zenodo.org), a general-purpose open access repository created by OpenAIRE. All data will be published in open and standard formats to allow good interoperability.

# 4.3. Making data interoperable

All data and metadata will be published in open and standard file formats:

* GIS vector data are saved in Shapefile format. It is recognized as the standard format for spatial vector data and is based on open specifications ensuring data interoperability among most GIS software.
* GIS raster data are saved in GeoTIFF format.
* Metadata files are saved in XML.

The data vocabulary used in attribute tables will be specific to the themes of the project (waste management, energy, network, etc.). Therefore, a definition of the technical names and concepts will be included in the metadata or in a side document (PDF). Moreover, to comply with the INSPIRE Directive, some of the keywords used in the metadata will come from official thesauri like GEMET.

# 4.4. Increase data re-use (through clarifying licenses)

## \- Timeframe for data sharing

Data will be shared as soon as the _Communication and Dissemination_ and the _Work package 3_ leaders have given their agreement. The data will be maintained during the project timeframe and will remain available after the project on the repository (Zenodo).

## \- Expected reuse

The project data may be useful from an operational and methodological point of view. In the case study areas, the results may already be exploited as decision tools by local authorities to design new biowaste networks.
The data may also serve as guidelines for the replication of the method in other contexts.

## \- Ownership and Licensing

The data will be owned by Irstea and will be published under a _Creative Commons license_ 8 with at least attribution (CC **BY**) or attribution and non-commercial (CC **BY-NC**). However, the final selection of the license is still under discussion and may differ according to the dataset. In the case of information based on third-party data with a copyleft license, the license will be adapted accordingly.

<table> <tr> <th> **5.** </th> <th> **ALLOCATION OF RESOURCES** </th> </tr> </table>

## \- Responsible for data management

The data management of WP 3.3 is handled by the GIS service of Irstea. This service is responsible for:

* the data capture or gathering (if other partners are involved)
* the quality control
* the metadata creation or edition
* the storage and backup
* the publication in the selected repository

## \- Resourcing

To comply with the FAIR requirement 9, the DECISIVE project will have to cover the costs of data management, storage, backup and publication:

<table> <tr> <th> **Resources** </th> <th> **Description** </th> <th> **Cost** </th> </tr> <tr> <td> Human resources </td> <td> _Format the data, create the metadata, publication in the repository, etc._ </td> <td> 20 man-days (estimate) </td> </tr> <tr> <td> Storage and backup </td> <td> _Running cost of Irstea IT services_ </td> <td> Not assessed yet </td> </tr> <tr> <td> Repository </td> <td> _The repository Zenodo is free of charge_ </td> <td> Free of charge </td> </tr> </table>

After the end of the project, the data will remain available on the repository (Zenodo) without added cost, as the service is free of charge.

<table> <tr> <th> **6.** </th> <th> **DATA SECURITY** </th> </tr> </table>

\- Data security

No sensitive data will be published.
## \- Storage and backup

The data and metadata will be stored in the external repository _Zenodo_, which will ensure their long-term preservation. Moreover, the data will additionally be stored on Irstea's own platform, with a secure NetApp® data storage system. The data will be saved daily and replicated to an external data center, which guarantees the long-term preservation of the data as described in D8.4.

<table> <tr> <th> **7.** </th> <th> **SUMMARY OF DMP TECHNICAL CHOICES** </th> </tr> <tr> <td> **Topics** </td> <td> **Description** </td> </tr> <tr> <td> _GIS data_ </td> <td> Shapefile format for vector data; GeoTIFF format for raster data </td> </tr> <tr> <td> _Metadata_ </td> <td> XML format, compliant with the INSPIRE Directive </td> </tr> <tr> <td> _Repository_ </td> <td> Zenodo (www.zenodo.org) </td> </tr> <tr> <td> _Additional backup_ </td> <td> Irstea platform </td> </tr> </table>

<table> <tr> <th> **8.** </th> <th> **ETHICAL ASPECTS** </th> </tr> </table>

Until now, no ethical or legal issues have been identified.

* No sensitive or personal data will be published.
* If sensitive or personal information is used, from a survey for example, a _Consent Form_ will be required as defined by Deliverable 8.4. Moreover, the spatial analysis will make the data anonymous and impossible to link to any raw data.

<table> <tr> <th> **9.** </th> <th> **OTHER** </th> </tr> </table>

Data will be compliant with the INSPIRE Directive 10.
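As a minimal illustration of the kind of XML metadata record described in section 4.1, the sketch below assembles one with the standard library. The element names, example values and placeholder DOI are simplified assumptions for the sketch; real DECISIVE metadata must follow the INSPIRE metadata schema.

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative XML metadata record (NOT the INSPIRE schema):
# title, abstract, keywords, creation date, contact, and a DOI identifier,
# mirroring the fields listed in section 4.1.
def make_metadata(title, abstract, keywords, created, contact, doi):
    root = ET.Element("metadata")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "abstract").text = abstract
    kw = ET.SubElement(root, "keywords")
    for word in keywords:
        ET.SubElement(kw, "keyword").text = word
    ET.SubElement(root, "dateOfCreation").text = created
    ET.SubElement(root, "contact").text = contact
    ET.SubElement(root, "identifier", type="DOI").text = doi
    return ET.tostring(root, encoding="unicode")

record = make_metadata(
    title="DECISIVE-WP3.3-biowaste_sources-Irstea-V01",  # example name, assumption
    abstract="Locations of organic matter sources (illustrative example).",
    keywords=["waste management", "biowaste"],  # e.g. GEMET terms plus tailor-made ones
    created="2017-01-01",
    contact="GIS service, Irstea",
    doi="10.5281/zenodo.0000000",  # placeholder, not a real DOI
)
```

In practice such records are written with GIS software (ArcGIS or QSphere), as stated above; the sketch only shows the shape of the information carried.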
1099_Quaco_689359.md
# Executive summary

_The QUACO Data Management Plan (DMP) describes the tools and the life cycle of the data generated and used during and after the project. All data generated in this project (both on the PCP process and on the technical development) will be centrally stored at CERN, which will also act as the central repository for archiving after the end of the project. The DMP also outlines the accessibility of the data, in line with the project's Open Access policy._

# INTRODUCTION

The main deliverable of the QUACO PCP is a potential component of the HL-LHC project. For this reason, the HL-LHC Data Management Plan concepts are the basis for this document. Data management is a process that provides an efficient way of sharing knowledge, information and thinking among the project's participants and stakeholders. In this document, we define:

* The types of data that will be generated and managed;
* The life cycle of the data, and in particular the data control process;
* The tools used to manage the data.

The process of communication and dissemination of the data is part of Deliverable 8.1.

# DATA HANDLED BY THE PROJECT

## OVERVIEW

The QUACO project will produce two different types of documents and data: on the one hand, the information exchanged in the PCP process; on the other, the information related to the design and fabrication of the MQYY first-of-a-kind magnet. The types of documents and data, and the way they are treated, will be very different.

## DATA LINKED TO THE PCP PROCESS

QUACO is a collaborative acquisition process. There will be documents and data related to the exchange of information with and among QUACO partners and the interaction with:

* Partner labs interested in the use of the PCP instrument in the future;
* Industrial suppliers;
* The public;
* Stakeholders.

There will also be data created by the analysis of the interaction of these four groups with QUACO.
## DATA LINKED TO THE PRODUCTION OF THE FIRST-OF-A-KIND MAGNET

The QUACO technical specification [1] gives a detailed list of the technical and managerial documents and data that will be produced during the three phases, among them:

* Technical documents, such as the Conceptual Design Report of the magnet;
* Managerial documents, such as the Development and Manufacturing Plan;
* Acquisition documents, such as the technical specifications for the tendering of tooling and components;
* 2D and 3D models, such as the as-built 2D and 3D CAD manufacturing drawings;
* Data, such as the control parameters during the winding process or the dimensional checks;
* Contract follow-up documents, such as minutes and visit reports.

There will also be data created by the internal exchange among QUACO partners on the progress made by the suppliers.

# LIFE CYCLE AND DATA CONTROL PROCESS

## OVERVIEW

The PCP project has a Data Management Plan because it needs to ensure that:

* Data required for the project is identified, traced and stored;
* Documents are approved for adequacy prior to issue;
* Documents are reviewed and updated as necessary;
* Changes to data and documents are identified;
* Relevant versions of applicable documents are available at points of use;
* Documents remain legible and readily identifiable;
* Documents of external origin are identified and their distribution controlled.

To manage and control a document we shall establish several sub-processes:

1. **Identification**: what kind of document shall be managed.
2. **Labeling**: how the document shall be named.
3. **Lifecycle**: how the adequacy of the document shall be ensured before distribution.
4. **Availability**: how it shall be ensured that the document reaches the right people, who can access it as long as required.
5. **Traceability**: how the changes to and location of the document are recorded.

Except for the labeling process, the same sub-processes are applicable to the data handled by the project.
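The five sub-processes above can be pictured as attributes of a controlled document. The sketch below is our own illustration (not a CERN or QUACO tool); the field names and example values are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a controlled document modeled after the five
# sub-processes listed above (identification, labeling, lifecycle,
# availability, traceability). Field names are assumptions for the example.
@dataclass
class ControlledDocument:
    kind: str                 # identification: what kind of document is managed
    label: str                # labeling: e.g. EDMS number / document name
    version: str = "0.1"      # lifecycle: current revision index
    approved: bool = False    # lifecycle: adequacy ensured before distribution
    visibility: str = "QUACO Partners"           # availability: who may access it
    history: list = field(default_factory=list)  # traceability: recorded actions

    def record(self, action: str) -> None:
        """Trace an action (approval, new version, ...) in the document history."""
        self.history.append(action)

doc = ControlledDocument(kind="Technical", label="Conceptual Design Report")
doc.record("created by author")
doc.approved = True
doc.record("approved by Project Coordinator")
```

The point of the model is that traceability (the `history` list) accumulates every lifecycle action, which mirrors the requirement that baseline documents be fully traced.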
## IDENTIFICATION OF DATA MANAGED

The QUACO project shall manage and control all documents required to guarantee the full life cycle of the project. Sections 2.2 and 2.3 list the types of data and documents identified for following the PCP process and the production of the first-of-a-kind MQYY. Among those documents we can distinguish two classes:

* Baseline documents: documents that will have to be stored after the end of QUACO.
* Non-baseline documents: documents that are required for the well-functioning of the project but whose storage will not be considered critical after Phase 3.

The management and control sub-processes for these two types of documents will be handled differently. The HL-LHC Configuration, Quality and Resource Officer ensures the training of the different QUACO partners [2] on the identification of the data to be managed.

## LABELING

Baseline documents shall follow the HL-LHC Quality Plan naming convention [3] and shall be labeled with an EDMS number and a document name. Non-baseline documents do not need to have a document name.

## LIFECYCLE

The lifecycle of a document includes publishing (proofreading, peer review, authorization, printing), versioning, and the workflow involved in these two processes. Concerning peer review, as a general rule:

* Baseline documents shall be peer reviewed (verification process) by a group of people knowledgeable on the subject and by those interfacing with the system/process described in the document. By default the peer review is done by the QUACO PMT or one of its members.
* The peer review process for non-baseline documents is generally managed by the author.
In particular, for the tender documents (baseline documents):

* The JTEC reviews the Prior Information Notice, the draft Invitation to Tender and the draft Subcontracts, as prepared by the Lead Procurer in accordance with the laws applicable to it and the specific PCP requirements, and submits all the above documents to the STC for approval.

Concerning authorization, the process is adapted to the type of document. As a general rule:

* For baseline documents, the STC gives the final approval of the technical and contractual specifications for the PCP tender, the approval of the tender selection, and the approval of the management of knowledge (IPR), dissemination & exploitation documents. The Project Coordinator always approves all baseline documents.
* For non-baseline documents, the process depends mainly on the type of document; they are mainly approved by the WP Leader.

Every time there is a change in the lifecycle of a document, a new version of the document shall be created. Changes are traced by the revision index. The revision index increases by 0.1 for minor changes; in case of major changes, the first digit is moved to the next integer.

## AVAILABILITY

Table 1 and Table 2 give the general guidelines for the visibility and storage of documents in the project. The process of communication and dissemination of the data is part of Deliverable 8.1.
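The revision-index rule described above can be sketched as a small helper (our own illustration, not QUACO tooling; the rollover behaviour at x.9 is our assumption):

```python
# Illustrative sketch of the revision-index rule: minor changes increase the
# index by 0.1; major changes move the first digit to the next integer
# (e.g. 1.3 -> 2.0). The rollover at x.9 -> (x+1).0 is our assumption.
def bump_revision(index: str, major: bool = False) -> str:
    whole, _, tenth = index.partition(".")
    if major:
        return f"{int(whole) + 1}.0"
    whole, tenth = int(whole), int(tenth or 0) + 1
    if tenth == 10:  # assumed rollover: 1.9 + 0.1 becomes 2.0
        whole, tenth = whole + 1, 0
    return f"{whole}.{tenth}"
```

For example, a minor change takes a document from revision 1.0 to 1.1, while a major change takes it from 1.3 to 2.0.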
_Table 1: Visibility guidelines_ <table> <tr> <th> **Document class** </th> <th> **Document type** </th> <th> **Visibility** </th> </tr> <tr> <td rowspan="3"> Baseline documents </td> <td> Financial, resource-oriented and with sensitive information </td> <td> QUACO Partners and EU Stakeholders </td> </tr> <tr> <td> Commercial </td> <td> QUACO Partners, EU Stakeholders and Partner labs interested in the use of the PCP instrument in the future </td> </tr> <tr> <td> Technical </td> <td> QUACO Partners, EU Stakeholders and Partner labs; in some cases Industrial partners* </td> </tr> <tr> <td rowspan="3"> Non-Baseline documents </td> <td> Technical documents </td> <td> QUACO Partners, EU Stakeholders and Partner labs; in some cases Industrial partners* </td> </tr> <tr> <td> Scientific publications </td> <td> Worldwide </td> </tr> <tr> <td> Outreach </td> <td> Worldwide </td> </tr> </table> (*) Follows the IP rules described in the Tendering Documentation and in the Grant Agreement _Table 2: Storage time and format requirements_ <table> <tr> <th> **Document class** </th> <th> **Storage time** </th> <th> **Format** </th> </tr> <tr> <td> Baseline documents </td> <td> Forever </td> <td> Native format and at least a long-term readable format </td> </tr> <tr> <td> Non-Baseline documents </td> <td> Limited time </td> <td> Native format or long-term readable format </td> </tr> </table> ## TRACEABILITY Traceability includes the record of the lifecycle of the document and the metadata that describes the document. A document is fully traced if we can retrieve: * the label, including the version number, * the properties (author, creation date, title, description), * the life cycle information, * the storage location, * the list of actions and comments, with their author, linked to changes in the life cycle. Baseline documents shall be fully traced. For Non-Baseline documents, complete traceability of the actions and comments, with their author, linked to changes in the life cycle is not required. # TOOLS CERN has two documentation management systems: EDMS and CDS. 
EDMS is the tool used for the control of engineering documents and presentations. CDS is the tool used for the control of scientific documents, meetings documentation and graphic records. To ensure the long-term storage of Baseline documents, they shall be stored in EDMS. Non-Baseline documents can be stored in another documentation management system that can ensure the correct level of approval, availability and traceability. _Table 3: Recommended tools_ <table> <tr> <th> **Document class** </th> <th> **Tool** </th> </tr> <tr> <td> Baseline documents </td> <td> EDMS </td> </tr> <tr> <td> Non-Baseline documents </td> <td> Meetings: Indico, EDMS, SharePoint Technical: EDMS (requiring approval process), SharePoint Scientific: CDS Commercial: CFU, EDMS Outreach: WWW </td> </tr> <tr> <td> Data </td> <td> Technical: MTF Non-Technical: SharePoint Outreach: Twitter, LinkedIn </td> </tr> </table> # LINKS TO THE TOOLS CDS: https://cds.cern.ch/ EDMS: https://edms.cern.ch/project/CERN-0000154893 Indico: https://indico.cern.ch/category/7138/ LinkedIn: _https://www.linkedin.com/in/quaco/en_ MTF: https://edms5.cern.ch/asbuilt/plsql/mtf_eqp_sel.adv_list_top Twitter: https://twitter.com/HL_LHC_QUACO SharePoint: https://espace.cern.ch/project-HL-LHC-Technical-coordination/QUACO/_layouts/15/start.aspx#/ WWW: https://quaco.web.cern.ch/ # TEMPLATES The HL-LHC Quality Support Unit maintains a series of templates that are accessible in EDMS [4]. # CONCLUSIONS The QUACO Project has identified the different types of documents and data that have to be managed to ensure its full life cycle. The different sub-processes, such as labelling, publishing and traceability, have been analyzed and adapted to the project. Finally, different tools have been identified and deployed to support these sub-processes.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1101_STARR_689947.md
# Deliverable Description This document is a deliverable of the STARR project, which is funded by the European Union’s Horizon 2020 Program under Grant Agreement #689947. It is a follow-up to Deliverables 1.6 and 1.7, which described what data the project would generate and how they would be produced and analyzed. This deliverable also aims to detail how the data related to the STARR project will be disseminated and afterwards shared and preserved. # 1\. Scope of the document The STARR project Data Management Plan (DMP) primarily lists the different datasets produced by the project, the main exploitation perspectives for each of those datasets, and the major management principles the project implemented to handle those datasets. The purpose of the DMP is to provide an analysis of the main elements of the data management policy that were used by the consortium with regard to all the datasets that were generated by the project. As mentioned in the previous versions of this deliverable, the data management plan covers the whole data life cycle. _Figure 1: Steps in the data life cycle. Source: University of Virginia Library, Research Data Services_ # 3\. Responsibility for the data <table> <tr> <th> **Person in charge of the data management during the project** </th> <th> Margarita Anastassova (personal and non-personal data) [email protected]_ Franziska Boehm (specific focus on EU regulations about personal data management) [email protected]_ </th> </tr> <tr> <td> **Partners’ responsibility** </td> <td> Every partner is responsible for the data they are collecting. This is valid for both personal and non-personal data. The data has been collected, combined, stored and transmitted according to the relevant national, European and institutional regulations. As far as the management of personal data is concerned, the partners were supported by FIZ Karlsruhe, providing guidance on personal data management in an EU perspective. 
</td> </tr> <tr> <td> **Data management policy** </td> <td> All personal data collection in STARR was done within the remit of formal ethics clearances obtained at our testing sites and granted by the relevant university and/or local health officials. Thus, any patient-related data, such as data from pre-existing health records and behavioural data captured, for example, by keyboard presses, video, audio or motion tracking, falls under the ethics clearance. Some non-patient data, such as requirements from stakeholders in WP3, in the form of anonymous questionnaire responses and focus group opinions, can be gathered more informally. The legal basis for the personal data processing is the participant’s consent, obtained in accordance with the rules to which the collecting partner is subject. The most relevant standards regarding data handling, in this experimental context with patients, concern the areas of ethics, data protection and privacy. They are listed below: * Directive 95/46/EC: Protection of individuals with regard to the processing of personal data and on the free movement of such data. </td> </tr> </table> <table> <tr> <th> </th> <th> * Directive 99/5/EC: Radio equipment and telecommunications terminal equipment and the mutual recognition of their conformity. * Directive 98/44/EC: Legal protection of biotechnological inventions. * Art. 7, 8 Charter of Fundamental Rights * Art. 8 European Convention on Human Rights * Case law of the European Court of Human Rights * Documents of the Article 29 Working Party * Ilves report on E-health * GDPR regulation (2016/679). * Draft data protection regulation. All of our patient data is reported as anonymized group summaries in the project deliverables and in peer-reviewed publications. Anonymous individual quotes describing requirements are also used in these reports. During system evaluation, user performance data (e.g. 
task execution time and number of errors) is transient and context-sensitive (to the particular task, sensors and user). There is no public value in the data and hence no foreseen need for public access beyond that of the therapist. Nonetheless, we have no objection in principle to releasing this data in the spirit of Open Access if requested. </th> </tr> <tr> <td> **Ownership and access to data** </td> <td> A consortium agreement was negotiated and signed by all the parties in order, inter alia, to specify the terms and conditions pertaining to ownership, access rights, exploitation of background and results, and dissemination of results, in compliance with the grant agreement and Regulation n°1290/2013 of December 11th, 2013. The consortium agreement was based on the DESCA Horizon 2020 Model Consortium Agreement with the necessary adaptations considering the specific context and the parties involved in the project. Its basic principles are as follows: * The parties exhaustively identified the background intellectual property they brought to the project, and assessed its availability for access rights as regards potential third parties’ rights over such background; </td> </tr> </table> <table> <tr> <th> </th> <th> * Ownership of results, including joint results generated by two or more parties, goes to the party(ies) having generated such results; * The owning parties take all appropriate measures for the protection of the results capable of commercial or industrial exploitation, notably through intellectual property rights when relevant; * The parties use their best efforts to exploit and disseminate the results, either directly or indirectly, for instance by out-licensing said results; * Each party gives access rights (through licenses) to their background and results to the other parties for the implementation of the project and/or for the exploitation of those other parties’ own results (under fair and reasonable conditions). 
Knowledge management follows the strategy presented in the Figure below. The IPR activities are organized according to the different project phases. In principle, Foreground is managed according to the provisions of the European Commission, and the access to the foreground created throughout the project lifetime is specified by the Consortium Agreement. As a general rule, the foreground is considered the property of the Contractor generating it, and in this sense the originator is entitled to use and to license such rights without any financial compensation to, or the consent of, the other Contributors. In case of licensing to third parties, the Contributors shall be informed in advance and appropriate financial compensation shall be given to them. </th> </tr> </table> <table> <tr> <th> </th> <th> Starting from these basic rules, other particular situations have been treated in WP9 (Dissemination & Exploitation): * If the features of a joint invention, design or work are such that it is not possible to separate them, the Contributors could agree that they may jointly apply to obtain and/or maintain the relevant rights, and shall strive to set up a joint exploitation agreement in order to do so; * An originator of the foreground could decide not to seek protection of its Foreground. In this case, another contractor interested in such protection might apply for it, advising the other Contractors. In case several Contractors are interested, an agreement is necessary between them. Concerning access rights, each Contractor shall take appropriate measures to ensure that it can grant Access Rights and fulfill the obligations under the EU Contract. The Contractors agree that Access Rights are granted on a non-exclusive basis and that, if not otherwise provided in the Consortium Agreement or granted by the owner of the Foreground or Background, the Access Rights do not include the right to grant sub-licenses. 
The Consortium Agreement has one section or one appendix to define which access rights to the background may be granted. Access rights to foreground and background needed for the execution of the project are granted on a royalty-free basis to the project partners. Publication and dissemination of foreground are subject to the approval of the Consortium, making sure that the period of secrecy needed for IP protection is respected. Contractors have to inform the Consortium and the Commission of their intention to publish on their foreground. Adequate references to the Contract with the EC shall be given in the publication. Publication can be impeded if another contractor can show that the secrecy of the foreground is not guaranteed. </th> </tr> <tr> <td> **Publication of research data** </td> <td> Since in STARR certain research activities rely on the processing of personal data of stroke survivors, the privacy and the protection of their personal data should be ensured. However, this does not exclude the publication of aggregated, properly anonymized data, e.g. in statistical form. In this way both the privacy of the stroke survivors is respected and scientific results can be disseminated. </td> </tr> </table> # 4\. Data set description The project partners have identified the datasets that were produced during the different phases of the project. The list is provided below, while the nature and details for each dataset are given in the subsequent sections. 
<table> <tr> <th> **Data set name** </th> <th> **Personally identifiable data** </th> </tr> <tr> <td> **Type of data** </td> <td> Name, gender, age, marital status, deprivation index, hand dominance, educational level, type of job, hobbies, socio-familiar support </td> </tr> <tr> <td> **Format** </td> <td> Documents (paper and digital such as Word documents or Excel sheets), pictures, audio or video recordings, inputs to the STARR application </td> </tr> <tr> <td> **Source** </td> <td> This data comes from: * Questionnaires (paper originals, responses captured in an Excel spreadsheet), analysis (e.g. charts) within the spreadsheet * Interviews transcribed to Word documents (anonymized), analyzed data from the interviews * Focus groups: discussions transcribed to Word documents (anonymized), synthesis of themes captured in Word documents * Consent forms: signed paper documents * Inputs to the application * Video or audio recordings of user testing and studies * Pictures from user testing and studies </td> </tr> <tr> <td> **Reuse and sharing** </td> <td> Only relevant anonymized and aggregated data are transmitted and reused by the partners not collecting the data. </td> </tr> <tr> <td> **Diffusion principles** </td> <td> Within the consortium only, respecting the limitations mentioned above. </td> </tr> <tr> <td> **Scientific publications** </td> <td> No personally identifiable data was used in scientific publications. </td> </tr> <tr> <td> **Relevant documentation** </td> <td> Personally identifiable data was not used in any documentation. </td> </tr> <tr> <td> **Archiving and preservation** **(including storage and backup)** </td> <td> The data is stored by the partner collecting it (on their own computers and/or institutional servers). 
</td> </tr> </table> <table> <tr> <th> **Data set name** </th> <th> **Sensitive data (except Health data)** </th> </tr> <tr> <td> **Type of data** </td> <td> Ethnic group, sex life or orientation, political opinions, religious beliefs or other beliefs of a similar nature </td> </tr> <tr> <td> **Format** </td> <td> Documents (paper and digital such as Word documents or Excel sheets), audio recordings or videos, inputs to the STARR application </td> </tr> <tr> <td> **Source** </td> <td> This data comes from: * Questionnaires (paper originals, responses captured in an Excel spreadsheet), analysis (e.g. charts) within the spreadsheet * Interviews transcribed to Word documents (anonymized), analyzed data from the interviews * Focus groups: discussions transcribed to Word documents (anonymized), synthesis of themes captured in Word documents * Inputs to the application * Audio or video recordings of user testing and studies </td> </tr> <tr> <td> **Reuse and sharing** </td> <td> Only anonymized and aggregated relevant data are transmitted and reused by the partners not collecting the data. </td> </tr> <tr> <td> **Diffusion principles** </td> <td> Within the consortium only, respecting the limitations mentioned above. </td> </tr> <tr> <td> **Scientific publications** </td> <td> No sensitive data was used in scientific publications. </td> </tr> <tr> <td> **Relevant documentation** </td> <td> Sensitive data was not included in any documentation. </td> </tr> <tr> <td> **Archiving and preservation** **(including storage and backup)** </td> <td> The data is stored by the partner collecting it (on their own computers and/or institutional servers). </td> </tr> </table> Sensitive data also includes health data; however, we chose to treat health data as a separate category, as it is the major type of data that was collected and analyzed during the project. It is further detailed in the table below. 
<table> <tr> <th> **Data set name** </th> <th> **Health data (physical + mental)** </th> </tr> <tr> <td> **Type of data** </td> <td> Medical history, blood pressure, lipidic profile, glycaemia, heart rate, diet, toxic habits, disabilities, weight, depression, stress, pain, motor function, physical activity, cane and wheelchair use, gait analysis, skeleton movements in the game, emotions, behavior, </td> </tr> <tr> <td> </td> <td> adherence to treatment, information needed and obtained from the psychological model analysis </td> </tr> <tr> <td> **Format** </td> <td> Documents (paper and digital such as Word documents or Excel sheets), videos, inputs to the STARR application, visual sensing (Kinect or equivalent) </td> </tr> <tr> <td> **Source** </td> <td> This data comes from: * Questionnaires (paper originals, responses captured in an Excel spreadsheet), analysis (e.g. charts) within the spreadsheet * Interviews / Focus groups transcribed to Word documents (anonymized), analyzed data from the interviews * Inputs to the application via questions or sensing * Video or audio recordings of user testing and studies * Movement recordings by the games and training applications * Movement recordings by the wearables </td> </tr> <tr> <td> **Reuse and sharing** </td> <td> Only relevant anonymized and aggregated data are transmitted and reused by the partners not collecting the data. </td> </tr> <tr> <td> **Diffusion principles** </td> <td> Within the consortium only, respecting the limitations mentioned above. </td> </tr> <tr> <td> **Scientific publications** </td> <td> Health data was used in scientific publications in the form of anonymized, statistical data or anonymized, aggregated patient descriptions. </td> </tr> <tr> <td> **Relevant documentation** </td> <td> Health data was only included in health-related files in OSA and HOP (the medical partners). 
</td> </tr> <tr> <td> **Archiving and preservation** **(including storage and backup)** </td> <td> The data is stored by the partner collecting it (on their own computers and/or institutional servers). </td> </tr> </table> <table> <tr> <th> **Data set name** </th> <th> **Website traffic** </th> </tr> <tr> <td> **Type of data** </td> <td> Email addresses, Google Analytics data, HTTP cookies </td> </tr> <tr> <td> **Format** </td> <td> CSV files, HTTP cookies </td> </tr> <tr> <td> **Source** </td> <td> This data comes from user activity on the My Stroke Guide website. </td> </tr> <tr> <td> **Reuse and sharing** </td> <td> The My Stroke Guide website collects email addresses, which are only used by the Stroke Association, with the consent of the users, and are not shared with any third parties. The Stroke Association collects Google Analytics data from the website, which is used anonymously to assess site use and is not shared with any third parties. </td> </tr> <tr> <td> **Diffusion principles** </td> <td> This data is not diffused. </td> </tr> <tr> <td> **Scientific publications** </td> <td> This data is not included in any scientific publications. </td> </tr> <tr> <td> **Archiving and preservation** **(including storage and backup)** </td> <td> The data is stored by the Stroke Association (on their own computers and/or institutional servers). </td> </tr> </table> # 5\. Data volume The volume of data collected by the STARR platform is limited by the time during which the pilot was run by each patient, as well as by the total number of participating patients and the quantity of information that was sent from the client agents to the platform. The total amount of information gathered from telemonitoring, without including the smart-space, was under 100 Gbytes (N = 20, effective pilot time = 2 months, daily system usage = 12 hours, sensor data rate = 1 Kbyte/s, which gives an estimated total of 52 Gbytes). This information is duly protected as described in section 7. # 7\. 
Data security The STARR project uses methods that emphasize good field access and extended contact and trust building with participants. Due to the sensitive nature of some of the topics that were discussed in interviews and focus groups, data security is of vital importance. The following guidelines have been followed throughout the project in order to ensure the security of the data: * Keep anonymized data and personal data of respondents separate; * Encrypt data if it is deemed necessary by the local researchers; * Store data in at least two separate locations to avoid loss of data; * Do not store personal data on USB drives; * Save digital files in one of the preferred formats (see table above); * Label files in a systematically structured way in order to ensure the coherence of the final dataset; * Whenever possible and according to the ICT policies of each organization, encrypt the data on all devices (desktops, laptops, external hard disks, USB drives, etc.) that are used for archiving, storage and data transfer. Security of long-term preservation: long-term preservation of the data will be carried out by three partners: * CEA – preservation of the data for two years after the end of the project, for publishing purposes; * Hopale and Osakidetza – the data on the health record is to be stored at OSA for 30 years, which is a requirement under Spanish law. In HOP, this data will be stored for 20 years, as required by French law.
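The volume estimate quoted in Section 5 can be checked with a few lines of arithmetic. The sketch below reproduces the stated ~52 Gbytes from the stated parameters; 30-day months and decimal units (1 Gbyte = 10^9 bytes) are our assumptions, since the DMP does not spell them out.

```python
# Reproduce the STARR telemonitoring volume estimate from Section 5.
# Assumptions (not stated explicitly in the DMP): 30-day months and
# decimal units (1 Gbyte = 10**9 bytes).
patients = 20                       # N = 20
pilot_days = 2 * 30                 # effective pilot time: 2 months
usage_seconds_per_day = 12 * 3600   # daily system usage: 12 hours
rate_bytes_per_s = 1000             # sensor data rate: 1 Kbyte/s

total_bytes = patients * pilot_days * usage_seconds_per_day * rate_bytes_per_s
total_gbytes = total_bytes / 10**9
print(total_gbytes)  # 51.84, matching the ~52 Gbytes quoted above
```

The result stays well under the 100 Gbytes bound given for the whole telemonitoring dataset.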
1104_SecureCloud_690111.md
# Building a DMP in the Context Of H2020 ## Purpose of the SecureCloud Data Management Plan (DMP) SecureCloud is a Horizon 2020 project participating in the Open Research Data Pilot. This pilot is part of the Open Access to Scientific Publications and Research Data programme in H2020 [1]. The goal of the programme is to foster access to data generated in H2020 projects. Open Access refers to the practice of giving end-users free online access to information from all scholarly disciplines. In this way data becomes re-usable, and the benefit of the public investment in the research is improved. The EC provided a document with guidelines for project participants in the pilot. The guidelines address aspects like research data quality, sharing and security. According to the guidelines, participating projects need to develop a DMP. The DMP describes the types of data that will be generated or gathered during the project, the standards that will be used, how the data will be exploited and shared for verification or reuse, and how the data will be preserved. This document has been produced following these guidelines and aims to provide a consolidated description of the data management policy that the SecureCloud partners will follow. The document is the first version of the DMP, delivered in M6 of the project. 
## Background Of The SecureCloud DMP The SecureCloud DMP is written in reference to Article 29.3 of the Model Grant Agreement, called Open access to research data (research data management). Project participants must deposit their data in a research data repository and take measures to make the data available to third parties. The third parties should be able to access, mine, exploit, reproduce and disseminate the data. This should also help to validate the results presented in scientific publications. In addition, Article 29.3 states that participants will have to provide information, via the repository, about tools and instruments needed for the validation of project outcomes. The DMP will be important for tracking all data produced during the SecureCloud project. Article 29 [2] states that project beneficiaries do not have to ensure access to parts of the research data if such access would lead to a risk for the project’s goals. In such cases, the DMP must contain the reasons for not providing access. # SecureCloud Data Context ## The Smart Grid Use Case Traditionally, the term ”grid” is used for an electricity system that may support all or some of the following four operations: electricity generation, electricity transmission, electricity distribution, and electricity control. The Smart Grid uses two-way flows of electricity and information to create an automated and distributed advanced energy delivery network. The Smart Grid can be regarded as an electric system that uses information, two-way, cyber-secure communication technologies, and computational intelligence in an integrated fashion across electricity generation, transmission, substations, distribution and consumption to achieve a system that is clean, safe, secure, reliable, resilient, efficient, and sustainable. This description covers the entire spectrum of the energy system, from generation to the end points of electricity consumption. The ultimate Smart Grid is a vision. 
It is a loose integration of complementary components, subsystems, functions, and services under the pervasive control of highly intelligent management-and-control systems. Smart technologies, however, improve the observability and/or the controllability of the power system. Smart Grid technologies thereby help to convert the power grid from a static infrastructure, operated as designed, into a flexible, living infrastructure that is operated proactively. The Smart Grid integrates the electrical and information technologies between any point of generation and any point of consumption. Examples: * Smart metering could significantly improve knowledge of what is happening in the distribution grid, which nowadays is operated rather blindly. * The controllability of the distribution grid is improved by load control and automated distribution switches. * Common to most of the Smart Grid technologies is an increased use of communication and IT technologies, including an increased interaction and integration of formerly separated systems. The European Technology Platform Smart Grid defines the smart grid as follows: A Smart Grid is an electricity network that can intelligently integrate the actions of all users connected to it (generators, consumers and those that do both) in order to efficiently deliver sustainable, economic and secure electricity supplies. A Smart Grid employs innovative products and services together with intelligent monitoring, control, communication, and self-healing technologies to: * better facilitate the connection and operation of generators of all sizes and technologies; * allow consumers to play a part in optimizing the operation of the system; * provide consumers with greater information and choice of supply; * significantly reduce the environmental impact of the whole electricity supply system; * deliver enhanced levels of reliability and security of supply. 
Smart Grid deployment must include not only technology, market and commercial considerations, environmental impact, regulatory framework, standardization usage, ICT (Information and Communication Technology) and migration strategy, but also societal requirements and governmental edicts. ### Existing Standards The IEC 62357 Reference Architecture [3] addresses the communication requirements of applications in the power utility domain. Its scope is the convergence of data models, services and protocols for efficient and future-proof system integration for all applications. This framework comprises communication standards, including semantic data models, services and protocols for the above-mentioned intersystem and subsystem communication. ABNT NBR 14522 The Brazilian standard is ABNT NBR 14522, Data Exchange for electricity metering systems. It defines the standard for the exchange of information in the electricity metering system in order to achieve compatibility between systems and electricity metering equipment from different sources. This standard consists of the following items: * conventional communication reader-meter, * directional communication reader-meter, * synchronous remote communication, * user exits, * communication-computer reader, * public format, * expanded public format, * FK7 format, * operational program format. Load format parameters: * 1/2 magnetic tape format, * display codes, * standardized readings. ### Data Model Power supply companies today face the urgent task of optimizing their core processes. This is the only way that they can survive in this competitive environment. The vital step here is to combine the large number of autonomous IT systems into a homogeneous IT landscape. However, conventional network control systems can only be integrated with considerable effort because they do not use uniform data standards. 
Network control systems with a standardized data format for source data, based on the standardized Common Information Model (CIM) in accordance with IEC 61970, offer the best basis for IT integration. The CIM defines a common language and data modeling with the objective of simplifying the exchange of information between the participating systems and applications via direct interfaces. The CIM was adopted by IEC TC 57 and fast-tracked for international standardization. The standardized CIM data model offers a very large number of advantages for power suppliers and manufacturers: * Simple data exchange for companies that are near each other. * Standardized CIM data remains stable, and data model expansions are simple to implement. This makes upgrading energy management systems simpler, faster and less risky. * The CIM application program interface creates an open application interface. The aim is to use this to interconnect the application packages of all kinds of different suppliers using Plug and Play to create an EMS. The CIM forms the basis for the definition of important standard interfaces to other IT systems. The working group in IEC TC 57 plays a leading role in the further development and international standardization of IEC 61970 and the CIM. Working group WG14 (IEC 61968 standards) in TC 57 is responsible for the standardization of interfaces between systems, especially for the power distribution area. Standardization in the outstation area is defined in IEC 61850. With the extension of IEC 61850 for communication to the control centre, there are overlaps in the object model between IEC 61970 and IEC 61850. The CIM data model describes the electrical network, the connected electrical components, the additional elements and the data needed for network operation, as well as the relations between these elements. 
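To make the idea of modeling network elements, measurement values and their relations concrete, the sketch below uses simplified, made-up class names (not the normative IEC 61970 CIM definitions) to show how such a model hangs together:

```python
from dataclasses import dataclass, field

# Illustrative CIM-style model. The class names are simplified stand-ins,
# not the normative IEC 61970 classes; they mirror the kinds of packages
# described above (topology, connected components, measurement values).

@dataclass
class Measurement:          # "measurement values" package
    kind: str               # e.g. "voltage", "active power"
    value: float
    unit: str

@dataclass
class Breaker:              # a connected electrical component
    name: str
    is_open: bool = False

@dataclass
class Substation:           # part of the network topology
    name: str
    breakers: list = field(default_factory=list)
    measurements: list = field(default_factory=list)

# Relations between elements are plain object references.
sub = Substation("S1")
sub.breakers.append(Breaker("B1"))
sub.measurements.append(Measurement("voltage", 110.0, "kV"))
```

The point of the standardized model is that two systems agreeing on such class definitions and relations can exchange network data without bespoke per-vendor mappings.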
The Unified Modeling Language (UML), a standardized, object-oriented method that is supported by various software tools, is used as the descriptive language. CIM is used primarily to define a common language for exchanging information via direct interfaces or an integration bus and for accessing data from various sources. The CIM model is subdivided into packages such as basic elements, topology, generation, load model, measurement values and protection. The sole purpose of these packages is to make the model more transparent. Relations between classes may extend beyond the boundaries of packages. The ABNT NBR 14522 data definition can be found in deliverable 5.1. ### Protocols Communication technology has continued to develop rapidly over the past few years, and the TCP/IP protocol has become the established network protocol standard in the power supply sector. The modern communication standards that are part of the IEC 62357 reference architecture (e.g. IEC 61850) are based on TCP/IP and provide full technological benefits for the user. The protocol used by Copel is defined in ABNT NBR 14522. IEC 61850 Communication networks and systems in substations Since being published in 2004, the IEC 61850 communication standard has gained more and more relevance in the field of substation automation. It provides an effective response to the needs of the open, deregulated energy market, which requires both reliable networks and extremely flexible technology, flexible enough to adapt to the substation challenges of the next twenty years. IEC 61850 has not only taken over the drive of the communication technology of the office networking sector, but it has also adopted the best possible protocols and configurations for high functionality and reliable data transmission. 
Industrial Ethernet, which has been hardened for substation purposes and provides a speed of 100 Mbit/s, offers enough bandwidth to ensure reliable information exchange between IEDs (Intelligent Electronic Devices), as well as reliable communication from an IED to a substation controller. The definition of an effective process bus offers a standardized way to digitally connect conventional as well as intelligent CTs and VTs to relays. More than just a protocol, IEC 61850 also provides benefits in the areas of engineering and maintenance, especially with respect to combining devices from different vendors.

Telecontrol IEC 60870-5

IEC 60870-5 provides a communication profile for sending basic telecontrol messages between two systems over permanent, directly connected data circuits. IEC Technical Committee 57 (Working Group 03) has developed a protocol standard for telecontrol, teleprotection and associated telecommunications for electric power systems. The result of this work is IEC 60870-5, Telecontrol equipment and systems. The base IEC 60870-5 standard is specified in five documents:

* IEC 60870-5-1, Transmission frame formats
* IEC 60870-5-2, Link transmission procedures
* IEC 60870-5-3, General structure of application data
* IEC 60870-5-4, Definition and coding of application information elements
* IEC 60870-5-5, Basic application functions

IEC TC 57 has also produced companion standards:

* IEC 60870-5-101, Transmission protocols, companion standard for basic telecontrol tasks
* IEC 60870-5-102, Companion standard for the transmission of integrated totals in electric power systems (this standard is not widely used)
* IEC 60870-5-103, Transmission protocols, companion standard for the informative interface of protection equipment
* IEC 60870-5-104, Transmission protocols, network access for IEC 60870-5-101 using standard transport profiles

## Smart Grids in SecureCloud

The SecureCloud project will consider use cases in the area of smart grids.
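To make the telecontrol profile above concrete: an IEC 60870-5-104 APDU starts with a 0x68 start byte, followed by a length octet and four control-field octets whose low bits distinguish I-, S- and U-format frames. The following is a minimal, illustrative sketch (not a full implementation; a real parser must also validate the APDU length field and sequence numbers):

```python
def classify_apci(frame: bytes) -> str:
    """Classify an IEC 60870-5-104 APCI frame as I-, S- or U-format.

    Illustrative sketch only: checks the start byte and the two low bits
    of the first control octet, as defined by the 104 companion standard.
    """
    if len(frame) < 6 or frame[0] != 0x68:
        raise ValueError("not a valid 104 APDU (missing 0x68 start byte)")
    ctrl1 = frame[2]
    if ctrl1 & 0x01 == 0:
        return "I"   # information transfer format (carries an ASDU)
    if ctrl1 & 0x03 == 0x01:
        return "S"   # numbered supervisory format (acknowledgement)
    return "U"       # unnumbered control format (STARTDT/STOPDT/TESTFR)
```

A STARTDT activation frame, for instance, has the control octets 0x07 0x00 0x00 0x00 and is classified as U-format.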
Smart grid applications offer the opportunity to exercise many of the requirements that a sensitive big data application may have when executing in the cloud. First, smart grid applications deal with a growing volume of data. Smart meters and sensors for monitoring distribution and transmission grids are being deployed and are capable of continuously collecting and transmitting data. Adequate use of this data enables energy distributors not only to optimise their infrastructure, but also to reduce the environmental impact of supplying power to a given load or region. Second, these promising data analysis opportunities require access to detailed information about energy consumption. In the first use case we consider, smart meters collect detailed power consumption data from residential or industrial consumers. Collecting data at granularities of minutes, or even seconds, enables sophisticated applications that prevent power theft, detect power quality issues in order to calculate and prevent penalties for fault duration, and trigger adaptation actions that increase the efficiency or robustness of the power grid. Currently, these applications are deployed on dedicated servers maintained by utilities and system integrators, and therefore cannot be offered to all customers, because this would require a large data storage and processing infrastructure. Cloud computing can help to provide such an infrastructure. Nevertheless, once this data is under the control of an energy provider, an adversary who compromises the provider's infrastructure, e.g. a malicious employee or an oppressive government, could gain access to it. The data therefore needs to be processed in a secure fashion and must never be readable in non-encrypted form outside the secure container.
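As a toy illustration of the kind of per-meter screening such applications perform (not a project algorithm), fine-grained consumption readings can be compared against a rolling median to flag suspicious drops or spikes; the window and threshold below are made-up values:

```python
from statistics import median

def flag_anomalies(readings, window=5, threshold=0.5):
    """Flag readings deviating strongly from a rolling median baseline.

    Purely illustrative of theft/power-quality screening on smart meter
    data; `window` and `threshold` are hypothetical parameters.
    """
    flagged = []
    for i, value in enumerate(readings):
        lo = max(0, i - window)
        baseline = median(readings[lo:i] or [value])
        if baseline and abs(value - baseline) / baseline > threshold:
            flagged.append(i)
    return flagged
```

With minute-level data, a sudden drop to near zero while neighbouring meters stay stable would be flagged for inspection rather than treated as proof of tampering.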
### COPEL

Copel (Companhia Paranaense de Energia), the largest company of the State of Paraná, was founded on October 26, 1954, with ownership control held by the State of Paraná. The Company went public in April 1994 (BM&FBovespa) and, in 1997, was the first company of the Brazilian electricity sector to be listed on the New York Stock Exchange. Since June 2002, the brand is also present in the European Economic Community, having been listed at Latibex, the Latin American arm of the Madrid Stock Exchange. As of May 7, 2008, Copel's shares were ranked at Level 1 of the São Paulo Stock Exchange (BM&FBovespa) Corporate Governance. The Company directly serves 4,391,313 consuming units across 395 cities and 1,113 locations (districts, villages and settlements) in the State of Paraná. This network comprises 3.5 million homes, 89 thousand plants, 373 thousand commercial establishments and 369 thousand rural properties. The staff is composed of 8,653 employees. Copel's structure comprises the operation of:

* Its own generating complex composed of 20 power plants (18 hydroelectric plants, 1 thermal plant and 1 wind plant), whose installed capacity totals 4,754 MW;
* The transmission system, totaling 2,302 km of lines and 33 substations (all of them automated);
* The distribution system, which consists of 192,508 km of lines and networks of up to 230 kV (enough to span the Earth four times around the Equator) and 362 substations (100% automated);
* The optical telecommunication system (Paraná's Infoway), which has 9,793 km of OPGW cables installed between the main ring and urban radials (self-sustained cables), totaling 18,212 km and reaching 41,153 clients distributed across 399 cities of the State of Paraná and 3 cities of the State of Santa Catarina.

COPEL, as a large utility, will contribute to the requirements collection from an end-user's point of view and will provide access to real-world smart meter measurements.
The data provided by Copel are:

* Historic data of consumers (Ref. 3.1.1): smart meter data covering consumption data (i.e. energy usage as well as historical consumption) and production data.

### Israel Electric Corporation (IEC)

Israel Electric Corporation is the main supplier of electrical power in Israel. IEC builds, maintains and operates power generation stations and substations, as well as the transmission and distribution networks. The company is the sole integrated electric utility in the State of Israel and generates, transmits and distributes substantially all the electricity used in the State of Israel. The State of Israel owns approximately 99.85% of the Company. Since its establishment and up to today, IEC builds infrastructure and generates, transmits and supplies electricity to 2.6 million customers. The Company's main activities take place within the State of Israel. It generates, transmits, distributes and supplies most of the electricity used in the Israeli economy according to licenses granted by virtue of the Electricity Sector Law, 1996. In addition, the Company acts as administrator of the country's electricity system.

The data provided by Israel Electric Corporation (IEC) concern (Ref. 3.2.1):

* Distribution Management System
* Transmission System
* Smart Home

# Data Description

## COPEL

Copel collects data from Group A (high-voltage customers) for at least 13 months. The data is useful to application developers, and the data analysis or the application can feed a research paper.

### Data Set Description

Data set reference and name

Consumers Group A: Historic Data.

Standards and metadata

1. Data capture methods:
* Data is exported from an Oracle database;
* There is no standard or methodology associated with the kind of data that Copel exports;
* Copel can name folders and files according to the customers' numbers or the meters.

Additionally, all data are obtained based on the ABNT NBR 14522 standard and all captured data have an associated time stamp.
The data definition can be found in deliverable 5.1.
* The dataset will be exported only once.

2. Metadata:
* Copel is not able to define/describe metadata issues; some examples may help to better understand this need.

Data sharing

Data sharing criteria have not been defined yet. They will depend on the anonymization process, in order not to expose customers' personal information.
* Data consists of historic values of voltage, current, consumption, power usage, power factor and geographic location. The export formats are CSV and XLSX. Copel expects no more than 100 GB of data;
* Data must be anonymized before sharing. COPEL will adopt the Zenodo repository (Appendix A) to make the data available to the research community.

Archiving and preservation (including storage and backup)

Copel recommends matching archiving with the application requirements.

## Israel Electric Corporation (IEC)

### Data Set Description

The data will be generated by 3 simulators in the IEC lab:

* Energy generation simulator (EGS)
* Energy transmission and distribution simulator (ETDS)
* Smart homes simulator (SHS)

The simulated data contains the momentary state of each sub-system during the simulator execution and the data of simulated IEDs (Intelligent Electronic Devices of the kind usually installed in the grid). The method for data capture is: when a value is changed (updated), it is written to the database.

Note: the user has to be aware that this is mostly simulated data generated by simulated processes, not data generated by real field equipment.

Standards and metadata

The data of the above-mentioned simulators is stored in an SQL server by the SCADA system according to SCADA standards. A procedure that runs automatically every hour (or another time slot to be decided) extracts the data to text files. The folders in the repository have a hierarchic structure:

1. IEC
   a) EGS
      a) CONFIGURATION-ID
         a) MODEL-DESCRIPTION OR DATE
2. ETDS
3. SHS

Note that CONFIGURATION-ID is updated upon a change of configuration, either in the subsystem itself or in a subsystem related to it. The data of the different files in the IEC dataset will be synchronized by the date of generation, which appears in the file name. Some of the metadata can be created manually and some cannot: for example, the model-description file is a free-text file describing the model and the data, whereas the metadata that describes the data items (the column names) will be generated automatically.

Data sharing

IEC has started to store data in a repository for some of the simulators. The repository will be available on a server in the IEC lab network. This server can also be accessed from the internet, with IEC providing access rights to the lab. The files can be copied and processed by each partner in the project. There will be no restrictions on using the data. IEC will publish the dataset in a disciplinary repository. Analysis will be performed using freely available open source software tools. IEC will adopt the Zenodo repository (Appendix A) to make the data available to the research community.

Archiving and preservation (including storage and backup)

The data of the simulators is stored in an SQL database. It will be extracted from the database to text files in CSV format. The current estimate of the data volume is 10 MB/day; this amount could increase, as some of the simulators are still in development and will produce more data. IEC preserves data for a long-term period of at least 7 years. The approximate final volume of the dataset is 100 GB. The associated cost of preparing the dataset for archiving will be covered by the project itself.

Test dataset

The IEC test dataset will be generated in a test laboratory by the IEC team. The dataset could be useful to other research groups working on similar research questions in the area of energy generation and distribution.
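The hierarchic folder layout and date-based file synchronization described above can be sketched as follows; the exact folder names and the ISO date format are assumptions based on this section, not a fixed IEC specification:

```python
from datetime import date
from pathlib import PurePosixPath

# Simulators named in this section; an assumption that these strings
# are used verbatim as folder names.
SIMULATORS = {"EGS", "ETDS", "SHS"}

def dataset_path(subsystem: str, configuration_id: str, run_date: date) -> PurePosixPath:
    """Build the hierarchical repository path IEC/<subsystem>/<CONFIGURATION-ID>/<date>.

    The date component is what synchronizes files from different
    simulators, since it also appears in the exported file names.
    """
    if subsystem not in SIMULATORS:
        raise ValueError(f"unknown simulator: {subsystem}")
    return PurePosixPath("IEC") / subsystem / configuration_id / run_date.isoformat()
```

For example, an EGS run under a hypothetical configuration "CFG-042" on 4 February 2017 would land in `IEC/EGS/CFG-042/2017-02-04`.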
IEC plans to make the dataset publicly accessible according to project needs.

Example of Data Set Description

<table> <tr> <th> Electricity Chain Systems </th> <th> Data set and reference name </th> <th> Data set description </th> <th> Standards and Metadata </th> <th> Data Sharing </th> <th> Archiving and preservation </th> </tr> <tr> <td> Distribution Management System (DMS) operational mode normal and under attack </td> <td> Modbus/TCP normal </td> <td> network behavior of normal DMS operation </td> <td> Standard protocol TCP/IP </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Modbus/TCP attack </td> <td> network behavior of DMS under attack </td> <td> Standard protocol TCP/IP </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Attacker action </td> <td> Specific attacker activity </td> <td> Standard protocol TCP/IP </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Transmission System </td> <td> IEC-60870-5-104 IEC-61850-9 normal </td> <td> traffic between national communication center and substation in normal operation </td> <td> IEC-60870-5-104 IEC-61850-9 </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> IEC-60870-5-104 IEC-61850-9 attack </td> <td> traffic between national communication center and substation under attack </td> <td> IEC-60870-5-104 IEC-61850-9 </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Attacker action </td> <td> network attacker activity </td> <td> network specific attacker activity </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Smart Home </td> <td> Zigbee
normal </td> <td> common set of actuators and sensors in normal operation </td> <td> Zigbee and Modbus/TCP </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Zigbee attack </td> <td> common set of actuators and sensors under attack </td> <td> Zigbee and Modbus/TCP </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> <tr> <td> Attacker action </td> <td> network attacker activity </td> <td> Specific attacker activity </td> <td> pcap file available on file server </td> <td> Files saved on external storage for a year </td> </tr> </table>

Table 3.1: Example of Data Set Description

# Appendices

# A Zenodo Repository

The SecureCloud project has chosen the Zenodo repository for data sharing. Zenodo builds and operates a simple and innovative service that enables researchers, scientists, EU projects and institutions to share, preserve and showcase multidisciplinary research results (data and publications) that are not part of the existing institutional or subject-based repositories of the research communities. Zenodo enables researchers, scientists, EU projects and institutions to:

* easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video, and images, across all fields of science;
* display their research results and receive credit by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission;
* easily access and reuse shared research results.

Deliverables:

* An open digital repository for everyone and everything not served by a dedicated service; the so-called long tail of research results.
* Integration with the OpenAIRE infrastructure and assured inclusion in the OpenAIRE corpus.
* Easy upload and semi-automatic metadata completion by communication with existing online services, such as DropBox for upload and Mendeley/ORCID/CrossRef/OpenAIRE for upload and pre-filling of metadata.
* Easy access to research results via an innovative viewing option, open APIs, integration with existing online services, and the preservation of community-independent data formats.
* A safe and trusted service, combining community-based curation with short- and long-term archival and digital preservation strategies in accordance with best practices.
* Persistent identifiers, Digital Object Identifiers (DOIs), for sharing research results.
* Service hosting according to industry best practices in CERN's professional data centres.
* An easy way to link research results with other results and products, funding sources, institutions, and licenses.

# Bibliography

1. Guidelines on Data Management in Horizon 2020, Version 2.1, 15 February 2016, http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
2. Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020, Version 2.1, 15 February 2016, http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf
3. IEC TR 62357-1:2012 Power systems management and associated information exchange - Part 1: Reference architecture, https://webstore.iec.ch/publication/6918
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1105_DAFNE_690268.md
**Abbreviations**

CA: Consortium Agreement
GA: Grant Agreement
DoA: Description of Action (Annex I of the Grant Agreement)
GAs: General Assembly
MB: Management Board
PAB: Project Advisory Board
WP: Workpackage
QM: Quality Management
CS: Case Study
EC: European Commission
PO: Project Officer
PR: Project Review
DM: Deliverable Manager
DDP: Deliverable Development Plan
RP: Reporting Period

# 1\. DATA SUMMARY

DAFNE advocates an integrated water resources management approach, which addresses the water-energy-food (WEF) nexus explicitly and from a novel perspective. The project's core is the development of a decision-analytic framework (DAF) that will enable the extensive, quantitative analysis of the anticipated effects of alternative planning options on the broad range of heterogeneous and often competing interests in transboundary river basins of fast-developing countries in Africa. Data in DAFNE will be:

* Collected from multiple sources (e.g. ground stations, remote sensing) to characterize the hydrologic regime, engineering developments, social and economic systems, and terrestrial and aquatic ecosystems. A baseline scenario will be defined considering the present situation of the system;
* Generated from models, to build future scenarios and assess the effects of changes in the system.

Existing data, available in public repositories or from institutional partners, will be re-used whenever possible, while localized field surveys will be carried out to inspect local-scale issues or to compensate for any lack of data.

# 2\.
FAIR DATA

DAFNE will adopt three different tools for data management:

* A project **intranet**, composed of shared directories, managed by ETHZ (ETH Zurich) and built using both institutional storage facilities (the ETH Zurich _polybox_ cloud storage), to store collected data, and external services (_Dropbox Business Solution_), for filing generated data in a password-protected way with access restricted to project partners;
* The **DAFNE Geoportal**, publishing all relevant project data, both public and with restricted access, targeted at project stakeholders during the project. The Geoportal will be powered by a webGIS platform and equipped with customizable user and role permission tools, providing a subset of common metadata for each dataset, as well as data viewing and download functionalities. The DAFNE Geoportal will be online and working during the lifetime of the project, while the Summer School and the MOOC course will provide the knowledge to ensure its maintenance after the project's completion as well;
* The **Zenodo** 1 repository, to publish and maintain final project outcomes, deliverables and scientific publications, with all the data produced or needed for verification. Project information and related publications and data uploaded to Zenodo will automatically become visible on the OpenAIRE portal. 2

As per the H2020 guidelines on Data Management, this document describes the data management life cycle in DAFNE and will be updated over the course of the project whenever significant changes to its contents occur.

## 2.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA

DAFNE will adopt the Zenodo repository to publish project outcomes, including datasets. Using this repository, all public data of the project will be provided with a Digital Object Identifier (DOI) and a common set of metadata (based on _Dublin Core_ 3). Keywords will be sourced from standard dictionaries, such as the USGS water dictionary 4.
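A Dublin Core record of the kind referred to above can be sketched using the standard element-set namespace; the wrapping `<metadata>` element and the field values below are illustrative, not project-defined:

```python
import xml.etree.ElementTree as ET

# Namespace of the Dublin Core Metadata Element Set (version 1.1).
DC_NS = "http://purl.org/dc/elements/1.1/"

def dublin_core_record(fields: dict) -> bytes:
    """Serialise a minimal Dublin Core metadata record as XML.

    `fields` maps DC element names (title, creator, date, ...) to values;
    the enclosing <metadata> wrapper is an assumption for illustration.
    """
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("metadata")
    for name, value in fields.items():
        ET.SubElement(root, f"{{{DC_NS}}}{name}").text = value
    return ET.tostring(root, encoding="utf-8")
```

Calling it with `{"title": "Zambezi discharge", "creator": "DAFNE"}` yields a record with `dc:title` and `dc:creator` elements, which is the kind of common metadata subset the Geoportal and Zenodo entries would carry.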
Versions of each dataset will report main and minor changes with dot notation. A main version change will occur after significant changes in the data (e.g. a change in the data structure, a massive correction or update, or changes in the procedure for data collection or generation), while minor version changes will occur after data updates or limited corrections. Any changes will also be mentioned in the description metadata field. The naming convention is reported, whenever applicable, in the attached data catalogue (Annex 1 5).

## 2.2 MAKING DATA OPENLY ACCESSIBLE

Data generated in DAFNE, if relevant for project deliverables or scientific publications, will be published in the Zenodo repository, together with the associated metadata. Collected data will be published in the same repository, subject to the limitations specified in the attached data catalogue (Annex 1). Data with restricted access will be published on the DAFNE Geoportal, which will be made accessible through user identification to all project partners and stakeholders. Data will be stored using the standard formats specified in Annex 1 for each dataset, and any open source software and tools developed within DAFNE that are needed to access the data will be published on Zenodo or on other public software code repositories, such as GitHub 6.

## 2.3 MAKING DATA INTEROPERABLE

DAFNE strives to integrate data and information from different disciplines and domains: in order to provide a common understanding of data within the project itself, the _Dublin Core Metadata Element Set_ vocabulary will be adopted. Whenever possible and useful, more discipline-specific metadata will also be adopted, such as those defined by the OGC 7 for geospatial data.

**2.4 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENCES)**

Generated data will be licenced under the CC-BY-SA licence 8 9.
Collected data will generally be subject to the same licence and published in the Zenodo repository, so as to be accessible to third parties without any time restriction, subject to the limitations reported in Annex 1.

The quality assurance process will be performed through the following steps:

* Collected data
1. Storage of the raw datasets, without any further processing, in a dedicated folder on the intranet;
2. Data checking and editing to assure positional, attribute and temporal quality, completeness and consistency, under the responsibility of the project partner listed as reference for each dataset in Annex 1;
3. Compilation of metadata (both generic and domain-specific where applicable), reporting a brief summary of the editing done;
4. Storage of the final version of the datasets in a dedicated folder on the intranet;
5. Upload of the datasets to the DAFNE Geoportal, if relevant for sharing with project stakeholders, and/or to the Zenodo repository, if compliant with the limitations detailed in Annex 1 and relevant for maintenance after the project lifetime.

* Generated data
1. Compilation of metadata (both generic and domain-specific where applicable), reporting a brief summary specifying the generation process (lineage);
2. Storage of the final version of the generated datasets in a dedicated folder on the intranet;
3. Upload of the datasets to the DAFNE Geoportal, if relevant for sharing with project stakeholders, and/or to the Zenodo repository, if relevant for maintenance after the project lifetime.

# 3\. ALLOCATION OF RESOURCES

The data catalogue reported in Annex 1 identifies, for each dataset, the responsible project partner. Costs are included in the tasks related to data collection and generation and cannot be listed separately. The DAFNE Geoportal will be hosted on the Politecnico di Milano (POLIMI) servers using internal resources. The costs of the project intranet on _polybox_ are covered by internal ETHZ resources.
As already mentioned in Section 2, _Dropbox Business_ accounts will also be activated in order to have more space in which to store and share raw datasets and simulation outputs. The costs related to these accounts will vary according to the number of accounts and the disk space needed for each of them, and will be covered by the project budget.

# 4\. DATA SECURITY

The tools mentioned in Section 2 of the present document are hosted partially on external services (e.g. Zenodo) and partially on ETHZ and POLIMI servers protected by firewalls and institutional security policies. More precisely:

* The **intranet** relies both on ETHZ storage facilities and on external storage services; accessibility is in any case reserved to registered users, via the _HTTP Secure protocol_ (HTTPS), for both upload and download functionalities;
* The **DAFNE Geoportal** will be hosted at POLIMI: data upload will be performed by registered users through the _Secure Shell Protocol_ (SSH), while download functionalities will be available to registered project partners and stakeholders, unless otherwise specified in Annex 1;
* The **Zenodo** repository is hosted at CERN and is subject to its rules for data security, as reported at _https://zenodo.org/policies_.

All datasets maintained on the ETH Zurich _polybox_ intranet and the DAFNE Geoportal will periodically undergo incremental backup in order to avoid data loss.

**5\. ETHICAL ASPECTS**

Ethical aspects related to data management have already been addressed in Deliverable D8.2.

# 6\. OTHER ISSUES

The Zambezi Watercourse Commission (ZAMCOM) has implemented ZAMWIS 9, a system to support Riparian States with an efficient and timely means of sharing data and information on water resources in the Zambezi River basin.
ZAMCOM is involved in DAFNE as a key stakeholder for the Zambezi: data collected by ZAMWIS will be used as a primary source of data for DAFNE, while data produced in the DAFNE project will be made available for integration into ZAMWIS.

9 Zambezi Water Resources Information System (http://zamwis.wris.info/)

4 February 2017
1106_i-PROGNOSIS_690494.md
1 EXECUTIVE SUMMARY
2 INTRODUCTION
3 PRINCIPLES
3.1 PARTICIPATION IN THE PILOT ON OPEN RESEARCH DATA
3.2 THE I-PROGNOSIS DATA MANAGEMENT PORTAL
3.3 COMMON PROCEDURES
3.3.1 Data Collection
3.3.2 Data Access Procedures
4 I-PROGNOSIS DATASETS
4.1 DATASETS NAMING
4.2 SUMMARY OF THE I-PROGNOSIS DATASETS
4.3 DATASETS BREAKDOWN
4.3.1 Personal & Clinical Data
4.3.2 Sensed and Captured GData/SData
4.3.3 Intervention Data
4.3.4 Requirements Data
APPENDIX I – DATASET DESCRIPTION TEMPLATE

# LIST OF MAIN ABBREVIATIONS

API Application Programming Interface
BDI Beck Depression Inventory
CSV Comma Separated Values
DMP Data Management Plan
DoA Description of the Action
EC European Commission
EDF European Data Format
GData Generic Data
HDF5 Hierarchical Data Format 5
HRA Health Research Authority
JSON JavaScript Object Notation
MoCA Montreal Cognitive Assessment
NMSQuest Parkinson's Disease Non Motor Symptoms Questionnaire
PDSS Parkinson's Disease Sleep Scale
PGS Personalised Game Suite
RBD-SQ REM Sleep Behaviour Questionnaire
SData Specific Data
UPDRS Unified Parkinson Disease Rating Scale
XML Extensible Markup Language

# EXECUTIVE SUMMARY

This deliverable is the initial version of the Data Management Plan (DMP) of the i-PROGNOSIS project, in accordance with the regulations of the Pilot action on Open Access to Research Data of the Horizon 2020 programme (H2020). It contains provisional information about the data that will be produced and collected within the project, whether and how it will be made accessible for re-use and further exploitation, and how it will be curated and preserved. Based on the guidelines on Data Management in Horizon 2020 1, a dataset description template was initially drafted to provide the main pillar for the dataset descriptions (see Appendix I).
All relevant datasets were identified and a detailed list was produced, based on the datasets that are described in the DoA as being produced within the life span of the project. The present deliverable consolidates all the partners' feedback and provisions for the datasets they contribute to. At its present stage, the project is foreseen to develop a series of datasets, related to issues ranging from user requirements to intervention data and sensor-captured data stemming from patients with PD as well as healthy participants. Specifically, datasets are planned to be collected in two phases: the development data collection and the data collection during the deployment of the i-PROGNOSIS system. The former datasets will help the development and improvement of the algorithms and systems of i-PROGNOSIS, while the latter will constitute the actual datasets that are foreseen to be the main input of the i-PROGNOSIS system. Given that the majority of the i-PROGNOSIS datasets involve data collection from human participants, the respective data produced, either raw or processed, should be carefully handled, under thorough consideration of the ethical and privacy issues involved in such datasets. For all the identified i-PROGNOSIS datasets, the specific parts that can be made publicly available have been identified in this first version of the project's DMP. The public datasets of the i-PROGNOSIS project will become available through a common repository that will be formulated on the basis of the i-PROGNOSIS "data management portal", based on the CKAN open source platform 2 for open datasets or a similar system. This is an initial version of the Data Management Plan of the i-PROGNOSIS system. That said, the datasets described at this stage represent an early reflection on the data that we foresee to be collected. During the evolution of the project, we expect that there will be some changes either to the content of the datasets or to the information classification.
However, the main principles, as described within this deliverable, are expected to remain intact until the end of the project, thus forming the main strategic axes of the overall Data Management Plan. The overall DMP will be delivered at the end of the i-PROGNOSIS project (M48) within the deliverable "D5.7 Report on open research data management".

# INTRODUCTION

During the lifetime of i-PROGNOSIS, data of different natures will be generated and collected. These data are user-related and thus require a clear plan on how they are to be managed, i.e., stored, accessed, protected against unauthorized or improper use, etc. Thus, the main goals of the i-PROGNOSIS Data Management Plan (DMP) are:

1. To outline the types of data foreseen for generation at this stage of the project, including the context and procedures of this generation, as well as the degree of privacy and confidentiality of the data.
2. To outline the protocols that will be followed to assess the generated/collected data with respect to their sensitivity.
3. To outline the data acquisition plan for the duration of the project.
4. To outline the measures that are foreseen for the adequate management of the data from the ethical and security points of view.

The remainder of the deliverable is structured as follows. In Section 3, the common data collection procedures are outlined, a short description of the specifications of the data management portal is provided, and the different types of classified information are presented. Finally, Section 4 elaborates on each dataset, based on the template provided in Appendix I.

# PRINCIPLES

## PARTICIPATION IN THE PILOT ON OPEN RESEARCH DATA

The Data Management Plan will guide all activities regarding the anonymisation, exchange and release of data gathered during the project, as required for participating in the Open Research Data Pilot of the H2020 framework.
Datasets that are to be produced within the i-PROGNOSIS project span from users' demographic and clinical data to sensor data, intervention data and outcome measures of several interventions. These data will allow the research community to benchmark algorithms with respect to several aspects of Parkinson's disease (PD) detection and treatment, providing a common basis for augmented policy decision making. Since i-PROGNOSIS data collection phases involve human participants, the data collected will contain sensitive, personal information, and, as a result, the focus is also placed on possible ethical issues and access restrictions regarding personal data, so that no regulations on sensitive information are violated (see also D1.2 "Ethics and safety manual"). In this scope, this version of the deliverable describes in detail the datasets that are to be collected in each data collection phase, by each different technical or medical partner. Individual data collection mechanisms are treated as individual cases and therefore individual data management plans are foreseen, including the following information - as described in the available data management plan template and guidelines of the European Commission (EC) 1 : i) dataset reference and name, ii) dataset description, iii) standards and metadata, iv) data sharing, v) archiving and preservation (including storage and backup). As part of the Data Management Plan, the i-PROGNOSIS consortium aims at developing a central database, where data and evaluation outcomes of all relevant data collection phases will be deposited. Furthermore, a central Data Management Portal will be developed, to serve as the main access point to the publicly available datasets. The i-PROGNOSIS Data Management Portal will be based on open-source data portal platforms, such as CKAN and/or Zenodo 3 , allowing each partner to publish its datasets to the research community.
## THE i-PROGNOSIS DATA MANAGEMENT PORTAL

The i-PROGNOSIS consortium will explore the possibility to develop its data management portal based on the popular open source software CKAN 2 , and it will be accessible through a portal (endpoint) at the following address: "ckan.i-prognosis.eu". CKAN may offer better control over the stored data, as it can be installed and run completely under the control of the research institution. CKAN is a powerful data management system that makes data accessible by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organisations) wanting to make their data open and available. The software supports strong integration with third-party content management systems (CMSs), such as Drupal and WordPress. The portal’s features include, among others:

* A complete catalogue system with an easy-to-use web interface and a powerful API.
* Integration with the profile database system.
* Data visualisation and analytics.
* Fine-grained access controls.
* Storage of raw data and metadata.
* Search by keyword or filtering by tags, with dataset information visible at a glance.
* A rich application programming interface (API), and over 60 extensions including link checking, comments, and analytics.
* Recording of datasets’ change logs and versioning information.
* Deep customisation of its features.
* Support for unrestricted (non-standards-compliant) metadata.

In addition, i-PROGNOSIS will investigate the possibility to publish a subset of the data within the Zenodo repository service. Zenodo builds and operates a simple and innovative service that enables researchers, scientists, EU projects and institutions to share, preserve and showcase multidisciplinary research results (data and publications) that are not part of the existing institutional or subject-based repositories of the research communities.
Zenodo enables researchers, scientists, EU projects and institutions to:

* Easily share the long tail of small research results in a wide variety of formats including text, spreadsheets, audio, video, and images across all fields of science.
* Display their research results and receive credit by making the research results citable and integrating them into existing reporting lines to funding agencies like the EU.
* Easily access and reuse shared research results.

Some of the features provided by the Zenodo service are:

* An open digital repository for everyone and everything not served by a dedicated service; the so-called “long tail” of research results.
* A modern look and feel in line with current trends in state-of-the-art online services.
* Integration with the OpenAIRE infrastructure and assured inclusion in the OpenAIRE corpus.
* Easy upload and semi-automatic metadata completion through communication with existing online services, such as Dropbox for upload and Mendeley/ORCID/CrossRef/OpenAIRE for pre-filling metadata.
* Easy access to research results via an innovative viewing option, open APIs, integration with existing online services, and the preservation of community-independent data formats.
* A safe and trusted service, combining community-based curation with short- and long-term archival and digital preservation strategies in accordance with best practices.
* Persistent identifiers, Digital Object Identifiers (DOIs), for sharing research results.
* Service hosting according to industry best practices in CERN’s professional data centres.
* An easy way to link research results with other results and products, funding sources, institutions, and licenses.
* Support for Dublin Core, MARC and MARCXML for metadata exporting.
* Compliance with OAI-PMH for data dissemination.
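Both CKAN and Zenodo expose public REST interfaces, so third parties could locate the open i-PROGNOSIS datasets programmatically. The sketch below is illustrative only: the "ckan.i-prognosis.eu" endpoint is the address planned earlier in this section (not yet operational), while CKAN's `package_search` action and Zenodo's `records` API are standard features of those platforms; the helper functions themselves are hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical portal endpoint planned in this section; CKAN's action API
# (/api/3/action/...) and Zenodo's records API are standard interfaces.
CKAN_BASE = "https://ckan.i-prognosis.eu/api/3/action"
ZENODO_BASE = "https://zenodo.org/api"

def ckan_search_url(query, rows=10):
    """Build a CKAN package_search request URL (keyword-based dataset discovery)."""
    return f"{CKAN_BASE}/package_search?{urlencode({'q': query, 'rows': rows})}"

def zenodo_records_url(query):
    """Build a Zenodo REST API query URL over published records."""
    return f"{ZENODO_BASE}/records?{urlencode({'q': query})}"

print(ckan_search_url("tremor"))
# https://ckan.i-prognosis.eu/api/3/action/package_search?q=tremor&rows=10
```

A client would issue an HTTP GET against such a URL and receive a JSON catalogue response; only the URL construction is shown here, since the portal itself is still to be deployed.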
## COMMON PROCEDURES

### Data Collection

3.3.1.1 Development Data Collection

The newly introduced development data acquisition period is the first and earliest collection period, and it is closely tied to software development and implementation. The existence of non-artificial data is mandatory for the effective development of software related, but not limited, to minor tremor or dysphonia detection. Moreover, the need for data is further augmented by the fact that only limited publicly available corpora exist, and the existing ones incorporate different data capturing methods than i-PROGNOSIS, rendering them unsuitable for our needs. Furthermore, it is crucial for software implementation to capture and have access to raw data (e.g., unprocessed speech signals). The above is one of the main differences from the rest of the data collection periods, where sensitive data will be handled with appropriate methods.

_Goals of the development data collection period:_

* Aid the implementation of the processing algorithms (machine learning approaches require data during their training phase).
* Evaluate the effectiveness of various data descriptors.

The latter is essential in order to successfully analyse, understand and translate the raw signals into meaningful information, as well as to initially train the machine learning algorithms. Development datasets will only be publicly available in the form of processed data. Raw signal data will remain confidential and will be available within the consortium only after the data collectors provide their approval to share the data.

3.3.1.2 Data to be collected during the deployment of i-PROGNOSIS

The goal of the data collection at this stage of the project is to build the predictive algorithm for PD detection and evaluate it against the medical gold standard.
In addition, the efficacy of the interventions regimen will be measured by the medical partners, in terms of the specified medical evaluation protocol (see D2.2 "Data collection and medical evaluation protocol"). Data collected during the deployment of i-PROGNOSIS will be fully anonymised and will be offered to third parties in a processed format. Opening data during this stage of the project in a raw format will not be an option, due to privacy issues. Clinical data will be used to label the processed data, thus forming the required “semantic ground truth” for other researchers to apply their own machine learning techniques and statistical analyses and to compare their findings.

### Data Access Procedures

3.3.2.1 Public

For the sections of the dataset that will be made publicly available, a respective Web page will be created on the data management portal that will provide a description of the dataset. External researchers will have unrestricted access, following the steps indicated by the data management portal of the i-PROGNOSIS project. In addition, any third-party stakeholder will be explicitly informed about the publications that need to be cited if part of the dataset is used in their own publications. All public datasets should be strictly anonymised.

3.3.2.2 Protected

Data denoted as protected can be shared outside the consortium, as long as interested parties _a priori_ request access from the consortium, explaining how these datasets will be used, e.g., for research or commercial purposes. Appropriate forms will be created and made available through the data management portal, through which interested parties can request explicit access to certain datasets. Upon approval by the i-PROGNOSIS partners, interested parties will be provided with credentials, in order to download the requested datasets.
3.3.2.3 Confidential/Private

The private part of the datasets will be stored at a specific and designated private space of the partner responsible for the dataset, namely the data collector, on dedicated hard disk drives, to which only those members of the data collector whose work directly relates to these data will have access. In order for other i-PROGNOSIS partners to obtain access to these data, each partner must provide a proper written request to the data collector, including a justification of the need for access to the particular data. Once the need is confirmed, the data collector will provide the respective data to the partner.

# i-PROGNOSIS DATASETS

## DATASETS NAMING

Concerning the convention followed for naming the project datasets, it should be noted that the name of each dataset comprises:

1. A prefix " **DS** " indicating a dataset.
2. Its unique identification number depending on the dataset category (see next Section), e.g., " **DS1** " for datasets belonging to the category of personal and clinical data, " **DS2** " for sensed and captured GData or SData, " **DS3** " for interventions data and " **DS4** " for requirements data.
3. Since sensed and captured GData or SData or interventions data contain different types of data, such as physiological recordings, audio, and visual features, a further distinction takes place as follows: " **DS2.1** " for physiological features, " **DS2.2** " for visual features, " **DS3.1** " for serious games’ metrics, etc.
4. A short name indicative of its content and purpose.
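For illustration only, the four-part convention can be expressed as a small, hypothetical helper (the function and pattern names below are not part of the i-PROGNOSIS system; the example identifiers are taken from the dataset summary):

```python
import re

# Convention sketched above: "DS" + category number (1-4) + optional ".type"
# discriminator + "-" + indicative CamelCase name, e.g. DS3.1-BodyAndGesture.
NAME_PATTERN = re.compile(r"^DS[1-4](\.\d+)?-[A-Za-z]+$")

def dataset_name(category, name, subtype=None):
    """Compose a dataset identifier following the naming convention above."""
    middle = f"{category}.{subtype}" if subtype is not None else str(category)
    return f"DS{middle}-{name}"

print(dataset_name(3, "BodyAndGesture", subtype=1))            # DS3.1-BodyAndGesture
print(bool(NAME_PATTERN.match("DS2.1-VoiceQualityAnalysis")))  # True
```

Such a validator could be used, for instance, when registering datasets on the data management portal, to reject identifiers that do not follow the convention.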
For example, body and gesture-related features collected during the interventions data collection phase form a dataset named: <table> <tr> <th> _Prefix_ </th> <th> _Category identification number_ </th> <th> _Type of data discriminator_ </th> <th> _Indicative Name_ </th> </tr> <tr> <td> " **DS** "+ </td> <td> " **3.** "+ </td> <td> " **1-** "+ </td> <td> " **BodyAndGesture** " </td> </tr> </table> i.e., " **DS3.1-BodyAndGesture** "

## SUMMARY OF THE i-PROGNOSIS DATASETS

The i-PROGNOSIS datasets are divided into the following categories:

1. Personal and clinical data.
2. Sensed and captured GData or SData.
3. Interventions data.
4. Requirements data.

Personal and clinical data will accompany and annotate Sensed and captured GData or SData and Interventions data, as essential metadata, thus allowing for the semantic annotation of the collected datasets. **TABLE 1** presents cumulatively all datasets that are initially planned to be collected within the i-PROGNOSIS project with respect to each aforementioned category.

**TABLE 1** Indicative datasets to be collected within the i-PROGNOSIS project

<table> <tr> <th> **Dataset** **#** </th> <th> **Dataset Name** </th> <th> **Description** </th> </tr> <tr> <td> </td> <td> **PERSONAL & CLINICAL DATA ** </td> </tr> <tr> <td> 1.1 </td> <td> **DS1.1-** **ElectronicHealthRecordData** </td> <td> It contains information related to medical evaluation protocols, such as health screening results and interventions’ outcome measures. </td> </tr> <tr> <td> </td> <td> **GDATA / SDATA** </td> </tr> <tr> <td> 2.1 </td> <td> **DS2.1-VoiceQualityAnalysis** </td> <td> It contains voice data and vocal features captured using the smartphone microphone. </td> </tr> <tr> <td> 2.2 </td> <td> **DS2.2-PhotosFacialAnalysis** </td> <td> It contains facial features extracted from photos taken with the mobile phone camera (selfies for masked face etc.).
</td> </tr> <tr> <td> 2.3 </td> <td> **DS2.3-ActivityAnalysis** </td> <td> It contains activity-related (steps, heart rate, skin temperature) features derived from the smartwatch. </td> </tr> <tr> <td> 2.4 </td> <td> **DS2.4-PhysioSignalAnalysis** </td> <td> It contains features derived from physiological data acquisition, such as electrocardiography derived from the Smart TV remote. </td> </tr> <tr> <td> 2.5 </td> <td> **DS2.5-TypingPatternAnalysis** </td> <td> It contains keystroke dynamics-related features collected during typing on a virtual keyboard of a touch screen-enabled smartphone. </td> </tr> <tr> <td> 2.6 </td> <td> **DS2.6-** **ExploratoryWalkabilityAnalysis** </td> <td> It contains de-personalised location information and derived features from GPS, Wi-Fi, Mobile Network and IMU originating from the users’ smartphone and smartwatch. </td> </tr> <tr> <td> 2.7 </td> <td> **DS2.7-TextSentimentAnalysis** </td> <td> It contains SMS text and/or tweets collected from the users’ smartphone along with the associated sentiment classification. </td> </tr> <tr> <td> 2.8 </td> <td> **DS2.8-FoodIntakeAnalysis** </td> <td> The dataset is composed of objective quantification of meal mechanics as well as derivative information about the user’s eating behavioural elements (meal total intake and duration, number of bites, eating rate and eating rate changes across the meal). </td> </tr> <tr> <td> 2.9 </td> <td> **DS2.9-BowelSoundsAnalysis** </td> <td> It contains pre-processed bowel sound signals captured from the smart belt. </td> </tr> <tr> <td> 2.10 </td> <td> **DS2.10-TremorAnalysis** </td> <td> It contains IMU-based (accelerometer, gyroscope and magnetometer) derived features, originating from the user’s smartphone and smartwatch.
</td> </tr> <tr> <td> **INTERVENTIONS** </td> </tr> <tr> <td> 3.1 </td> <td> **DS3.1-SeriousGamesMetrics** </td> <td> In-game metrics and performance, such as scores, achievements, difficulty level, etc., of all the gaming sessions (exergames, dietary, handwriting, emotion and voice games). </td> </tr> <tr> <td> 3.2 </td> <td> **DS3.2-BodyGestureAnalysis** </td> <td> The dataset consists of postures and gestures tracking experiments, as well as balance and gait features, especially during the Exergames where Kinect will be the main sensor. </td> </tr> <tr> <td> 3.3 </td> <td> **DS3.3-SleepStageAnalysis** </td> <td> It comprises sleep stage data based on accelerometer, heart rate, and skin temperature measurements of users, captured by the smartwatch during their sleep (provided they are wearing the smartwatch on their wrist). </td> </tr> <tr> <td> 3.4 </td> <td> **DS3.4-GaitAnalysis** </td> <td> It contains accelerometer, gyroscope and pedometer data captured from the smart watch/band with respect to rhythm guidance in terms of sound cues. </td> </tr> <tr> <td> 3.5 </td> <td> **DS3.5-VoiceEnhancementAnalysis** </td> <td> It contains voice data captured using the smartphone microphone. </td> </tr> <tr> <td> **REQUIREMENTS** </td> </tr> <tr> <td> 4.1 </td> <td> **DS4.1-FocusGroupsDataset** </td> <td> It contains data gathered within focus groups during the requirements elicitation phase. </td> </tr> <tr> <td> 4.2 </td> <td> **DS4.2-WebSurveyDataset** </td> <td> It contains the questions and the qualitative answers to the questions of the i-PROGNOSIS Web survey, conducted within the context of the identification of user requirements and system specifications, from ~2000 anonymous survey participants. </td> </tr> </table>

## DATASETS BREAKDOWN

In the following, datasets are described based on the template of Appendix I.
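Several of the dataset descriptions that follow quote indicative audio storage volumes (e.g., roughly 5.3 MB per minute for unencoded single-channel 16-bit, 44.1 kHz speech and 1.44 MB per minute for 192 kbps MP3, as stated for DS2.1). These figures follow directly from the encoding parameters; the minimal sanity check below uses hypothetical helper names:

```python
def pcm_mb_per_min(sample_rate_hz, bit_depth, channels=1):
    """Uncompressed PCM audio volume per minute, in MB (1 MB = 10**6 bytes)."""
    bytes_per_second = sample_rate_hz * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 1e6

def cbr_mb_per_min(bitrate_kbps):
    """Constant-bitrate (e.g., MP3) audio volume per minute, in MB."""
    return bitrate_kbps * 1000 / 8 * 60 / 1e6

print(round(pcm_mb_per_min(44_100, 16), 2))  # 5.29 -> quoted as ~5.3 MB/min
print(round(cbr_mb_per_min(192), 2))         # 1.44 MB/min
```

Scaling such per-minute figures by the expected recording time per participant and the cohort size gives the end-volume estimates quoted in the archiving sections of the individual datasets.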
### Personal & Clinical Data

<table> <tr> <th> **DATA SET REFERENCE** **NAME** </th> <th> **DS1.1-ElectronicHealthRecordData** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will contain the i-PROGNOSIS users' personal information, demographics and health-related data (assessment batteries’ scores, comorbidities, medication). This dataset will facilitate subsequent analysis with respect to the wide GData and SData collection, as well as to clinical assessment tests, towards research on the stealth assessment and screening of early signs of PD. In addition, the results of the medical evaluation, as represented within these datasets, will allow evaluating the effectiveness of the user participation in the interventions (i.e., their involvement in the Serious Games). </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> This dataset a) will be collected by the i-PROGNOSIS App and b) will be entered into the i-PROGNOSIS system by the clinical sites. The data collected through the mobile App will be obtained during the first use of the mobile phone app; the user will be asked to provide short personal information, demographics and health-related data via multiple-choice questions. During the SData collection and intervention phases, the relevant clinical data for each user will be entered into the system by the recruiting clinical site (AUTH, KCL or TUD). In the case of participants who will follow all the project’s phases, multiple instances of follow-up assessment tests’ scores will be available. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The dataset will be collected during all phases of the project: the GData/SData collection and the intervention phases. The questions delivered through the i-PROGNOSIS App will be designed in a multiple-choice format, so that the obtained information will be in a numeric format.
Health-related data at this stage will include: 1. Diagnosis of Parkinson's disease: yes / no, 2. Physical handicap: yes / no, 3. Family history of Parkinson's disease: yes / no. The process of clinical assessment will take place in three countries (the UK, Germany, and Greece), where the health-centric partners are based, in order to facilitate the medical monitoring of the users and the subsequent clinical validation of PD risk. The clinical data will include all the relevant medical information collected by the recruiting clinical centres. Some of the clinical examinations include, among others: the Unified Parkinson's Disease Rating Scale (UPDRS), Parkinson's Disease Non-Motor Symptoms Questionnaire (NMSQuest), Montreal Cognitive Assessment (MoCA), Parkinson's Disease Sleep Scale (PDSS), REM Sleep Behaviour Disorder Screening Questionnaire (RBDSQ), Beck Depression Inventory (BDI), Parkinson Fatigue Scale, Senior Fitness Test and Berg Balance Scale. During the initial pilot phase, more than 5000 older adults (above 55 years of age) are expected to participate, whereas some 80 and 60 participants will take part in the SData and the intervention phases, respectively. The data will be stored as records of a profile database, part of the i-PROGNOSIS repository. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset will be used to classify and rate the obtained sensed and captured GData and SData, intervention data and requirements data with respect to age-, gender- and health-state-dependent values. In addition, clinical data will be used for ground truth purposes, to train the machine learning algorithms responsible for the second-stage early PD detection. Also, part of this dataset will be each intervention user's baseline and follow-up data, for the evaluation of their health state progress through the use of the serious games Personalised Game Suite.
</td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> This kind of dataset is collected in every clinical trial in the field of PD and beyond. Example: Martinez-Martin et al. EuroInf: A Multicenter Comparative Observational Study of Apomorphine and Levodopa Infusion in Parkinson's Disease. Mov Disord. 2015 Apr;30(4):510-6. </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> Not applicable within the application context of the novel technologies used in i-PROGNOSIS. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> Clinical data will not contain any metadata; instead, the name of the examination and the measurement score will be provided. There would be a possibility to support electronic health data exchange via partial implementation of the CEN/ISO 13606 standard, declared as the base standard of the European Interoperability Framework for Health. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Personal identification as stated in the consent forms will be kept separate from any research and health-related data, which will be pseudonymised (only initials and date of birth will be kept). Furthermore, no personal data that could directly identify an individual person (e.g., full name) are obtained. The data for personal identification of the users will be kept locally at each recruiting clinical centre, outside this dataset. This dataset will only include personal and clinical data relevant to the i-PROGNOSIS protocols. Due to ethical reasons, only averaged group data could become publicly available, while individual data will be private to serve the i-PROGNOSIS R&D objectives.
</td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> For the parts of the dataset that will be made publicly available, a respective Web page will be created on the data management portal that will provide a description of the dataset and links to a download section. The private part of this dataset will be stored at a specifically designated private space of the clinical partners, in dedicated hard disk drives, to which only members of the clinical research teams will have access. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project, to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be in numeric form and is therefore designed to allow easy reuse with commonly available tools. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at the data management portal of the project Website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The dataset will be preserved as long as there are regular downloads. After that, it will be made accessible by request and preserved by AUTH at least until the end of the project.
Locally, the personal and clinical data will be preserved based on the national and departmental practices for scientific data handling. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> Some MBs. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> There are no costs associated with data preservation on institutional servers. However, data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data** **Collector** </td> <td> **TUD, KCL, AUTH, MICROSOFT** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **Both clinical and technical partners** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **TUD, KCL, AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP4, WP6 and WP7, to mainly serve the research efforts of T4.1 - T4.5, T6.1 - T6.5, T7.1, T7.3 and T7.4. </td> </tr> </table>

### Sensed and Captured GData/SData

<table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.1-VoiceQualityAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> Speech dataset for the detection and identification of voice-related PD symptoms for early PD detection. The database will contain speech data collected from phone calls recorded by the i-PROGNOSIS smartphone dialler application.
The data set includes extracted voice features and several annotations about the speech data and the underlying speakers. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected by recording conversations using the i-PROGNOSIS smartphone dialler application during the GData and SData collection phases. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The goal of this dataset is to enable the detection and identification of voice-related PD symptoms for early PD detection. The dataset will contain the raw speech signals recorded by the i-PROGNOSIS smartphone application during the GData and SData collection phases, as well as extracted features. Annotations including the date and time of the recordings, along with speaker information (e.g., ID, gender, PD scale), will provide the ground truth and will facilitate the subsequent analysis of the speech recordings. The dataset will contain raw speech data (e.g., 44.1 kHz sampling rate, unencoded WAV) and text files for the annotations. _Data Format:_ Encoded (MP3, AAC) or unencoded (WAV, raw) audio waveforms; XML or plain text files for annotations/metadata; HDF5 for feature files. The dataset will be on the order of 0.5 MB to 5 MB per minute of speech, depending on the encoding and sampling rate. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset will be used within the project for the development and evaluation of automatic signal processing and pattern recognition algorithms for the extraction of discriminative features for the detection and identification of voice-related PD symptoms for the early detection of PD. This data will be used in WP3 and WP6, supporting the early PD symptoms detection and the overall assessment of the i-PROGNOSIS system. This dataset may also be useful in the future for other researchers who want to explore voice-related PD symptoms.
</td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> This database will build the foundation of our research and development of algorithms towards the identification and detection of voice-related PD symptoms for early PD detection. We plan to present our findings at ICASSP, Interspeech or other voice- and biomedical-related conferences. </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, no long-term speech database covering voice recordings of healthy and PD subjects is (publicly) available. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied with detailed documentation of its contents. Indicative metadata include: (a) a description of the experimental setup and procedure that led to the generation of the dataset and (b) documentation of the variables recorded in the dataset. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical reasons, only the raw data captured from normal healthy control subjects (during the development data collection phase), as well as a subset of the extracted features of the collected datasets, could become publicly available, while the rest will remain private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> For the parts of the dataset that will be made publicly available, a respective web page will be created on the data management portal that will provide a description of the dataset and links to a download section. The private part of this dataset will be stored at a specifically designated private space of FRAUNHOFER, in dedicated hard disk drives, to which only members of the FRAUNHOFER research team will have access. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project, to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse and access with commonly available tools (e.g., Matlab, Python, VLC, GVIM) and software libraries (e.g., TensorFlow, FFmpeg, HDF5 C++ API, C++ STL), because the data will be stored primarily in common file formats and generic data containers. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public data will be hosted on the central database (Microsoft Azure-based i-PROGNOSIS Cloud infrastructure) that will serve the needs of the Data Management Portal of the i-PROGNOSIS project. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of the dataset will be preserved as long as there are regular downloads. After that, it will be made accessible by request and preserved by AUTH at least until the end of the project. The private part of the dataset will be preserved by FRAUNHOFER at least until the end of the project.
</td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to consume up to 100 GB, depending on the encoding and the length and quantity of the speech signals (e.g., a single-channel 16-bit, 44.1 kHz audio waveform is of size 5.3 MB/min; a single-channel 192 kbps MP3 is of size 1.44 MB/min). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> There are no costs associated with data preservation on institutional servers. However, data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **TUD, KI, FRAUNHOFER** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **FRAUNHOFER** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **FRAUNHOFER** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP3 and WP6, to mainly serve the research efforts of T3.3, T6.1, T6.2 and T6.3. </td> </tr> </table>

<table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.2-FacialAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> Facial image dataset for the detection and identification of masked face symptoms for early PD detection. The database will contain selfie images collected from the frontal camera of the user’s smartphone.
The data set will include annotations about a) the bounding box of the recognized face of the user and b) the self-assessment of the emotional state of the user. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected from the user's smartphone by gathering the (selfie) images obtained during the GData and SData collection phases and detecting/recognizing the face of the user. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The goal of this dataset is to enable the detection and identification of masked face symptoms for early PD detection. The dataset will contain the (selfie) images obtained during the GData and SData collection phases. Annotations, including the date and time of the recordings along with the bounding box of the identified user face in the selfie, will provide the ground truth and will facilitate the subsequent face expression analysis task. The dataset will contain JPEG selfie images and XML (or text) files for the annotations. Data Format: JPEG files for selfie images from the frontal camera of the smartphone, XML or plain text files for annotations/metadata. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset will be used within the project for the development and evaluation of automatic facial expression recognition algorithms, which will then be used to detect masked face symptoms for the early detection of PD. This data will be used in WP3 and WP6, supporting early PD symptom detection and the overall assessment of the i-PROGNOSIS system. This dataset may also be useful in the future for other researchers who want to explore masked face PD symptoms. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> </table> <table> <tr> <th> This database will form the foundation of our research and development of algorithms towards masked face PD symptoms for early PD detection. 
We plan to present our findings at computer vision/image processing- and biomedical-related conferences and journals (e.g., the ICIP conference). </th> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, no image database covering face expressions of healthy and PD subjects is (publicly) available. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: a) description of the procedures that led to the generation of the dataset and b) documentation of the contents of the dataset. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical reasons, only the data captured from a subset of the patients during the GData and SData collection phases, as well as from normal healthy control subjects, could become publicly available, while the rest will remain private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> For the portions of the dataset that will be made publicly available, a respective web page will be created on the data management portal that will provide a description of the dataset and links to a download section. The private part of this dataset will be stored at a specifically designated private space of CERTH, in dedicated hard disk drives, to which only members of the CERTH research team will have access. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. 
</td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and its annotations will be produced. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse and access with commonly available tools and software libraries. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at the data management portal of the project Website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of the dataset will be preserved on the Data Management Portal of the i-PROGNOSIS project at least until the end of the project. The private part of the dataset will be preserved by CERTH at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> Each selfie image in JPEG format has a size of 1–2 MB, so a dataset of, e.g., 1,000 images is expected to consume up to 2 GB. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> There are no costs associated with data preservation in institutional servers. However, data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. 
</td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **CERTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **CERTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **CERTH, AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP3 and WP6, to mainly serve the research efforts of T3.4, T6.1 and T6.2. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.3-ActivityAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will include accelerometer, gyroscope, pedometer, estimated calories, distance travelled, motion type (walking, jogging or running), heart rate and skin temperature data, as well as altimeter and barometer data when available, to identify stair climbing. The data will originate from a smart watch/band used for activity analysis. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be captured by the accelerometer, gyroscope, pedometer, heart rate and skin temperature sensors of the smart watch that will be worn by the participants during all the i-PROGNOSIS data collection phases. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The first part of the dataset is going to be collected during the development data collection phase, where the subjects will follow specific activity scenarios. 
The second part of the dataset will be collected mainly during the SData and Interventions phases, where smart watches/bands will be provided to 80 and 60 participants, respectively. Moreover, the GData phase will contribute to the dataset; however, the contribution is expected to be limited since the smart watch/band is optional at GData. At these phases, i.e., GData, SData and Interventions, the subjects will not perform specific activity tasks; instead, the data will be captured throughout their daily routine activities. The dataset will include: i) the measures (double precision) of the acceleration force in g units (9.81 m/s²) that is applied to the smart watch/band on all three physical axes (x, y, and z), ii) the measures (double precision) of the smart watch/band’s rotation in rad/s around each of the three physical axes, and iii) the absolute number of steps taken, the steps ascended and descended (based on the smart watch’s altimeter), the number of kilocalories (kcals) burned, the distance travelled (in cm) along with the speed (cm/s), the pace (ms/m) and the motion type (walking, jogging, running, etc.), the heart rate (beats/min) and the skin temperature. The minimum sampling frequency for each sensor will be based on the outcome of the analysis of the development data (where the sampling frequency will be the highest possible). Data format: TXT or CSV file. The dataset will be in the order of ~5 MB per participant per hour of usage. Usually, smart watch/band owners use it for at least 12 hours per day. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of the activity analysis of the i-PROGNOSIS project. The dataset could be useful in analysing the activity levels and patterns over the course of the day. Moreover, it may be possible to identify symptoms and signs related to Parkinson’s Disease, such as tremor. 
</td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany the research results in the field of smart watch/band-based activity analysis of people with PD. At least one publication is intended to be made in the IEEE Biomedical Engineering journal (or similar). The following publication will be taken into account: Sharma, Vinod, et al. "SPARK: personalized Parkinson disease interventions through synergy between a smartphone and a smartwatch." International Conference of Design, User Experience, and Usability. Springer International Publishing, 2014. </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, there is no smart watch/band-based activity dataset available for PD-related research. Related datasets include data captured from smartwatches (some of them along with datasets originating from smartphones). For example: </td> </tr> </table> <table> <tr> <th> _Crowdsignals.io_ ( _http://crowdsignals.io_ ) . CrowdSignals.io creates the largest set of rich, longitudinal mobile and sensor data recorded from smartphones and smartwatches available to the community. _Dbworld_ ( _http://permalink.gmane.org/gmane.comp.db.dbworld/54394_ ) : The Dbworld dataset is a freely downloadable dataset of smartphone and smartwatch mobile, sensor, and human activity data. It is part of the CrowdSignals.io data collection campaign. _Heterogeneity Activity Recognition Dataset_ ( _https://archive.ics.uci.edu/ml/datasets/Heterogeneity+Activity+Recognition_ ) . The Heterogeneity Human Activity Recognition (HHAR) dataset from smartphones and smartwatches is a dataset devised to benchmark human activity recognition algorithms (classification, automatic data segmentation, sensor fusion, feature extraction, etc.) 
in real-world contexts; specifically, the dataset is gathered with a variety of different device models and use scenarios, in order to reflect sensing heterogeneities to be expected in real deployments. </th> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) health status of the subject, and c) documentation of the variables recorded in the dataset. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical issues, only part of the dataset will be publicly available. More specifically, the data that correspond to a subset of the PD patients, as well as the healthy control subjects, captured at the development phase of the i-PROGNOSIS project will become publicly available. The rest of the data will be private to serve the i-PROGNOSIS R&D objectives. The inclusion of a subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> The access procedures for the publicly available sub-dataset and the private sub-dataset that will serve the project’s R&D objectives are described in Sections 3.3.2.1 and 3.3.2.3, respectively. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a respective link will be provided through the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. 
A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse with commonly available tools and software libraries, such as Excel, Matlab, .NET, etc. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The dataset will be accommodated at the data management portal of the project Website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads. After that, it will be made accessible on request. The private part of the dataset will be preserved by AUTH at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to be several gigabytes, given that each participant is expected to produce at least 20 MB per day and the duration is at least 22 months (SData and Interventions phases). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> There are no costs associated with data preservation in institutional servers. However, data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. 
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within the activities of WP3, WP4 and WP6, to mainly serve the research efforts of T3.2, T4.3, T6.1, T6.2, T6.3 and T6.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.4-PhysioSignalAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will contain features derived from physiological data acquisition (e.g. electrocardiography), as well as raw data. Whenever suitable, annotations will be associated with the data to provide ground truth and facilitate the subsequent analysis of pre-recorded data. Annotations may be generated by the users in real time and recorded simultaneously with the biosignal data (e.g., by means of a manual trigger) or produced by a human expert upon revision of pre-recorded data. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected using the i-PROGNOSIS physiological data acquisition devices, such as the Smart TV remote and/or the Microsoft Band. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The datasets will be collected in guided and controlled testing scenarios, producing SData, and include data acquired from interaction between the elder and everyday living sensorial artefacts. The goal is to identify changes in the cardiac and motion (i.e. tremor) patterns, which may relate to changes in health status, behaviours or other aspects relevant to PD. 
The dataset will contain raw data, as well as temporal, spectral and non-linear features extracted from the raw time series. Examples of such features are the Very Low Frequency (VLF), Low Frequency (LF), High Frequency (HF) and related spectral indicators associated with the Heart Rate Variability (HRV), instant heart rate, interbeat intervals and heart rate histogram, amongst others. _Data Format:_ Comma-Separated Values (CSV) or Hierarchical Data Format 5 (HDF5). The dataset is expected to include individualized records per user per device usage session, possibly segmented in files of duration compatible with easily manageable post-processing (e.g. 1 hour segments). </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used within the project for the development and evaluation of data mining and signal processing algorithms that incorporate cardiac and motion data in the identification of early indicators of PD. Physiological data can also be useful to follow up the interventions, and also to assess the progress and status of the disease (if possible). This data will primarily feed WP4 and WP7, supporting the design of the interventions and the overall assessment of the i-PROGNOSIS system. Together with the annotations, it may also be useful in the future for other researchers and practitioners working not only in PD but in other medical specialties as well. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will contribute to extending our research results in the field of physiological data sensing to the specific case of people with PD. We envision part of the dataset to include data collected from healthy controls. Recent publications from the project team concerning multimodal physiological datasets include: H. P. da Silva, C. Carreiras, A. Lourenço, A. Fred, R. C. das Neves, and R. 
Ferreira, “Off-the-person electrocardiography: Performance assessment and clinical correlation,” Health and Technology, vol. 4, no. 4, pp. 309–318, 2015. _http://link.springer.com/article/10.1007/s12553-015-0098-y_ H. P. da Silva, A. Lourenço, A. Fred, N. Raposo, and M. A. de Sousa, “Check Your Biosignals Here: A new dataset for off-the-person ECG biometrics,” Computer Methods and Programs in Biomedicine, vol. 113, no. 2, pp. 503–514, 2014. _http://www.sciencedirect.com/science/article/pii/S0169260713003891_ H. Gamboa, H. P. da Silva, and A. Fred, “HiMotion: a new research resource for the study of behavior, cognition, and emotion,” Multimedia Tools and Applications, vol. 73, no. 1, pp. 345–375, 2014. _http://link.springer.com/article/10.1007%2Fs11042-013-1602-x_ </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, there are no datasets available (at least publicly) that simultaneously target the specific case of PD and contain the physiological data sources envisioned by the project. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) type of activity performed, c) documentation of the variables recorded in the dataset, d) manual annotations provided by the subjects, and e) manual annotations provided by human experts. 
</td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical reasons, only the raw data captured from normal healthy control subjects (during the development data collection phase), as well as a subset of the extracted features of the collected datasets, could become publicly available, while the rest will remain private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> The recorded data will be primarily stored as CSV or HDF5 files containing the raw data streams and the features derived from the data. The files will be hosted on the private i-PROGNOSIS central database that will serve the needs of the Data Management Portal of the project, or in a suitable analogous digital space, with protected access reserved only to the relevant members of the i-PROGNOSIS project team or people for whom the consortium partners decide that access to the data is of relevant interest to the execution of the project. Only anonymised data will be provided, unless otherwise deemed necessary for the adequate pursuit of the project goals. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be available two years after the end of the project to allow the consortium to prepare and submit scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> A technical publication describing the dataset and acquisition procedure is expected. Dissemination will be mostly performed through post-processed data and result analysis by means of relevant i-PROGNOSIS publications. 
</td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse and access with commonly available tools and software libraries. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The data will be hosted on the central database (Microsoft Azure-based i-PROGNOSIS Cloud infrastructure) that will serve the needs of the Data Management Portal of the i-PROGNOSIS project. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The private part of the dataset will be preserved on the Data Management Portal of the i-PROGNOSIS project at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to be several hundred MB, considering that the devices can produce up to 5 KB of data per second. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> There are no costs associated with data preservation in institutional servers. However, data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. 
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH, PLUX, MICROSOFT** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH, PLUX** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH, PLUX, MICROSOFT** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP4, WP6 and WP7, to mainly serve the research efforts of T4.1, T4.5, T6.1, T6.4, T7.3 and T7.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.5-TypingPatternAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will include typing patterns in the form of keystroke dynamics-related features, extracted during the users' typing on a virtual keyboard (of a touch screen-enabled smartphone), so as to be used as indicators towards the formulation of early PD detection tests. Features will be accompanied by metadata in order to facilitate the analysis within the i-PROGNOSIS project, as well as by other researchers outside the project. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected as part of the development of the i-PROGNOSIS first stage of early PD detection. It will comprise a development sub-dataset and corresponding metadata collected by a small number of users, as well as a deployment sub-dataset and corresponding metadata collected by a larger number of users (in the range of hundreds). Both data collection procedures will be accompanied by ethics approval. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The dataset will be collected during the interaction of users with the virtual keyboard of a touch screen-enabled smartphone, i.e., when they type on the keyboard. 
The development sub-dataset will be collected when development users type specific text excerpts in a controlled environment using specific smartphones. Indicative features that will comprise the dataset are the hold time (the time a key remains pressed), the press latency (the time between pressing a key and pressing the next key), the flight time (the time between releasing a key and pressing the next key) and the release latency (the time between releasing a key and releasing the next key), all measured in milliseconds (ms). The deployment sub-dataset will comprise the same features as the development sub-dataset, but it will be collected through the i-PROGNOSIS PD detection application [via the **typing pattern service** (see Section 6 of D2.1)] that will be released to the public for download. As a result, it is expected that the dataset will be formulated through a significantly larger number of users typing on their smartphone keyboard. In the case of the deployment dataset, keystroke dynamics will be collected from each typing session, i.e., starting when the keyboard becomes active and finishing when the keyboard is suppressed. Moreover, _keys pressed will not be recorded_ in order to comply with personal data privacy and protection regulations. The dataset is expected to be composed of 40 records for the development sub-dataset, while the deployment sub-dataset scale cannot be defined, as it depends on the number of users that will download the i-PROGNOSIS PD detection application, the number of those that will allow the respective service to collect typing pattern data, and their frequency of actions that require typing (e.g., composing SMSs/Tweets or e-mails). _Data Format:_ For each text excerpt typed (development sub-dataset) a CSV file will be generated, including the values of features and metadata. 
For each typing session (deployment sub-dataset) a structured XML file will be generated, including values of features and metadata. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The development sub-dataset will be used to initialise the i-PROGNOSIS machine learning algorithms for typing pattern inference, which will be further trained with the deployment sub-dataset. The deployment sub-dataset will be used for training the i-PROGNOSIS machine learning algorithms in order for the i-PROGNOSIS investigators to examine whether it is useful to employ this type of information towards the realisation of early PD detection tests based on the users' interaction with their everyday digital devices, such as the smartphone. Moreover, the dataset could be useful for biomedical researchers who are interested in studying the relationship between the interfacing (requiring motor skills) of people with digital devices and psychomotor impairments, such as PD. The latter constitutes a relatively new research subject. Adequate use of these data presupposes at least basic background regarding PD symptomatology, as well as basic knowledge of data analysis methodology and experience in the use of statistical software packages. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The development sub-dataset will accompany the research results regarding the feasibility of early PD detection tests based on users' interaction with everyday digital devices. Research results are planned to be published initially in the indicative journals and/or conferences provided below: * IEEE Transactions on Biomedical Engineering (Journal) * Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Conference) * Human Computer Interaction International (Conference) The respective research field is relatively new. 
A recent publication targets psychomotor impairment detection based on users' natural typing (see Giancardo et al. (2015) under _Indicative existing similar data sets_ ). </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> A similar dataset has been made available by Giancardo et al. (2015), linked to the publication: Giancardo, L., Sánchez-Ferro, A., Butterworth, I., Mendoza, C. S., & Hooker, J. M. (2015). Psychomotor impairment detection via finger interactions with a computer keyboard during natural typing. Scientific Reports, 5, article no. 9678, doi:10.1038/srep09678. The dataset includes keystroke dynamics; it is part of the supplementary information accompanying the publication and can be found at: _http://www.nature.com/articleassets/npg/srep/2015/150409/srep09678/extref/srep09678-s1.pdf_ i-PROGNOSIS investigators plan to use this dataset and the respective experiment protocol in order to configure the framework for the collection and analysis of the development sub-dataset. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The European Data Format+ (EDF+) ( _http://www.edfplus.info/specs/edfplus.html_ ) will be exploited for the production of the metadata files. The development sub-dataset will be accompanied by a detailed description of its contents. Indicative metadata include: a) description of the experiment set-up and the procedure that led to the generation of the dataset, b) demographics, anthropometrics and basic health-record data of the experiment participants, c) description of the keystroke dynamics-related features, and d) annotation of features based on (b). The deployment sub-dataset will be annotated based on the following indicative metadata: a) basic health record data of the users and b) smartphone/smartwatch device model. 
</td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> The development sub-dataset will be publicly available (see Section 3.3.2.1), accompanied by the respective ethics approval. Due to ethical compliance, the deployment sub-dataset will be confidential, and only i-PROGNOSIS partners will be able to access it after following the appropriate procedure (see Section 3.3.2.3). </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> The deployment sub-dataset, which will be confidential, will be handled according to the framework reported in Section 3.3.2.3. The development sub-dataset, which will be publicly available, will be open for third-party stakeholders to download based on the procedure described in Section 3.3.2.1. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The development sub-dataset is planned to be publicly available after month 24 of the project, to allow i-PROGNOSIS investigators to prepare and submit the respective scientific publications, but also to allow third-party researchers to conduct further analysis and comment on the i-PROGNOSIS results before the end of the project. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> The development sub-dataset will be available through the i-PROGNOSIS data management portal (see Section 3.2). Links redirecting to the portal and the available dataset will be provided through a dedicated page of the i-PROGNOSIS project website ( _www.i-prognosis.eu_ ) . A technical description providing information on the experiment protocol through which the development sub-dataset was captured will accompany the development sub-dataset. 
</td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> As the publicly available compressed dataset (development sub-dataset) will comprise standard CSV files, no specialised software or tools will be required for the dataset to be parsed and reused, other than (de-)compression software and a basic text editor (the minimum requirement to access the content of the CSV files). </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The development sub-dataset will be stored on AUTH secure servers, as well as in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure (data centres are located in Ireland) where the data management portal will be installed. The deployment sub-datasets will also be stored in the i-PROGNOSIS Cloud infrastructure, but access will be restricted (Section 3.3.2.3). </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads, and for at least one year after the end of the project. After that, it will be made accessible by direct request to the data collector. The confidential part of the dataset will be preserved by the owners for at least one year after the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The development sub-dataset is expected to be approximately 4 MB (including metadata), based on the number of participants (40) and the approximate size of features recorded during a 10 min typing session (~ 70 KB).
The deployment dataset is expected to be in the range of hundreds of Gigabytes (GB), based on the expected number of i-PROGNOSIS App users (in the range of thousands), an average of 10 typing activities per day for at least six months, an average typing activity duration of 5 min, and a 7 KB size of typing pattern features per one minute of typing. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The development sub-dataset will be stored on AUTH servers. There are no costs associated with this means of archiving and preservation. The development and deployment sub-datasets will also be archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. The indicative cost of the latter means of archiving and preservation is 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The development sub-dataset will be collected within WP3 and WP6, to mainly serve the research efforts of T3.2, T3.7, T6.2 and T6.3.
</td> </tr> </table> <table> <tr> <th> **DATA SET** **REFERENCE NAME** </th> <th> **DS2.6-ExploratoryWalkabilityAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> <td> </td> </tr> <tr> <td> **Generic description** </td> <td> </td> </tr> </table> <table> <tr> <th> Dataset for building an anonymous and depersonalised user sociability profile, based on information (derived features) originating from location sensors/modules (e.g. Wi-Fi, GPS). </th> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The derived information will originate from data generated by the smartphone and smartwatch embedded location and IMU (GPS, GSM, Wi-Fi, accelerometer, gyroscope, and magnetometer) sensors and modules during the GData and SData collection periods. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The collected data will be captured while the user carries her/his smartphone or wears her/his smartwatch while commuting. More specifically, the dataset will contain derived de-personalised features based on the fusion of multiple sensors and modules such as: Wi-Fi, GSM, GPS and IMU from the user’s smartphone and GPS and IMU from the user’s smartwatch device. Since anonymisation is of utmost importance when dealing with location information, all derived features will be free of any absolute world coordinates or any way to derive them by processing. _Data format:_ JSON/XML for data representation as well as for any relevant annotations. It is expected that the dataset will be composed of an equal number of PD patients and healthy subjects; however, the number of subjects is heavily dependent on the overall user participation. The data volume is anticipated to be on the order of ~10-15 MB per subject. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of the location and the physical activity service (i.e.
background services, see D2.1 for additional information) as well as the module responsible for building the user’s behavioural profile. Furthermore, the dataset will be of particular interest to researchers and health professionals who are willing to explore the underlying information contained in location data regarding the sociability of potential PD patients. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> A paper describing the dataset is expected to be published. Furthermore, a paper describing the approach of de-personalising location information and fusing different location data sources is also targeted for publication. More information on the latter can be found in D8.3 (Dissemination plan; “A paper presenting an approach for generating user profiles based on de-personalized location information”). </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To this day, publicly available location-based PD datasets are non-existent. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by a complete documentation containing: </td> </tr> </table> <table> <tr> <th> 1. A description of the overall procedure regarding the collection of the data. 2. A brief definition of how the derived location-based features were generated. 3. A brief definition of the de-personalisation processes. 4. An annotation file, indicating if the subject (indicated solely by a subject ID) is a diagnosed PD patient or a healthy individual. If deemed informative or necessary, additional non-identifiable information may be included in this section.
</th> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Since all identifiable information, such as any correlation to absolute world coordinates, will be stripped from the dataset, the access type can be classified as publicly available (i.e. can be publicly shared). </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> A respective webpage will be created on the data management portal that will include a brief description of the dataset as well as the publication(s) that need to be cited in case any part of the dataset is used. </td> </tr> <tr> <td> **Embargo periods** </td> </tr> <tr> <td> The applicable dataset will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> The data management link leading to the dataset will be included in all relevant publications. A complete technical publication describing in-depth the dataset as well as the data acquisition procedures will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> A typical JSON/XML (e.g. libxerces, libJSON, JSON-java) parsing library is the only requirement for accessing the content of the dataset. Such libraries are available in most (if not all) programming languages. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The entirety of the dataset will be accommodated at the data management portal of the project’s website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The dataset will be preserved online for as long as there are regular downloads. After that, it will be made accessible by request.
</td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> Depending on the magnitude of user participation, the end volume of data is expected to be in the range of ~15-25 GB. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> There are no costs associated with data preservation in institutional servers. However, data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The entire “ExploratoryWalkabilityAnalysis” dataset will be collected during WP3, to mainly serve the research efforts of T3.2, T3.7 and T4.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE** **NAME** </th> <th> **DS2.7-TextSentimentAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> SMS/Tweet (Twitter message) datasets in different languages will be collected from the users’ smartphones. These messages will be manually annotated by a value indicating the sentiment classification (at minimum three classes will be used: positive, negative, neutral, or n/a).
</td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected from i-PROGNOSIS users’ smartphone SMS or Twitter messages (user consent will be obtained). </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The goal of this dataset is to enable the detection of depression symptoms for early PD detection. Data Format: plain text files of SMS/Tweets with annotations/metadata. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset will be used within the project for the development and evaluation of depression recognition algorithms, depression being a symptom relevant for the early detection of PD. These data will be used in WP3 and WP6, supporting the early PD symptoms detection and the overall assessment of the i-PROGNOSIS system. This dataset may also be useful in the future for other researchers who want to explore depression symptoms from short messages. </td> </tr> </table> <table> <tr> <th> **Related scientific publication(s)** </th> </tr> <tr> <td> This database will build the foundation of our research and development of algorithms towards detecting depression symptoms from short messages for early PD detection. We plan to publish our findings in natural language processing-related conferences and journals. </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> Many manually annotated databases are already available for SMS/Twitter sentiment analysis.
To name a few: * Stanford Twitter Corpus: _http://help.sentiment140.com/for-students_ * HCR and OMD datasets: _https://bitbucket.org/speriosu/updown_ * Sentiment Strength Corpora: _http://sentistrength.wlv.ac.uk/_ * Sanders: _http://www.sananalytics.com/lab/twitter-sentiment/_ * SemEval: _http://www.cs.york.ac.uk/semeval-2013/task2/_ </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> SMS/tweet messages are ASCII text with a length of 160/140 characters respectively. The available metadata will be the available sentiment labels (classes). </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical reasons, only the SMS/Tweets collected during the initial phases from a subset of the patients and from normal healthy control subjects could become publicly available, while the rest of them will be private to serve the i-PROGNOSIS R&D objectives. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> For the portions of the dataset that will be made publicly available, a respective web page will be created on the data management portal that will provide a description of the dataset and links to a download section. The private part of this dataset will be stored at a specifically designated private space of CERTH, in dedicated hard disk drives, to which only members of the CERTH research team will have access. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and its annotations will be produced.
</td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> No specialised software or tools will be required for the dataset to be parsed and reused, other than a basic text editor. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public data will be hosted on the central database that will serve the needs of the Data Management Portal of the i-PROGNOSIS project, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of the dataset will be preserved on the Data Management Portal of the i-PROGNOSIS project at least until the end of the project. The private part of the dataset will be preserved by CERTH at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> Each text/SMS message is less than 500 bytes (assuming UTF-8 encoding for tweets), so a corpus of 10000 SMS/Tweets is expected to consume up to 5 MB (per language). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The private part of the dataset will be stored on a dedicated hard drive, with very small costs for its preservation. Data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget.
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **CERTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **CERTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **CERTH, AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP3 and WP6, to mainly serve the research efforts of T3.4, T6.1 and T6.2. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.8-FoodIntakeAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> Dataset for the objective quantification of meal mechanics, based on the real-time recording of the cumulative intake curve during a meal, extrapolated from the weight-loss recording of a personal weighing scale placed under the user’s plate. This dataset will also include derivative information about the specific values of the user’s eating behavioural elements, e.g.: meal total intake and duration, number of bites, </td> </tr> </table> <table> <tr> <th> eating rate and eating rate changes across the meal. </th> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be produced by use of Mandometer electronic plate scales during the SData collection phase. It is possible that the dataset will be complemented by data collected during the intervention phase if/when the Mandometer is used. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> It is expected that the datasets will be collected mainly during the SData phase, and potentially during the intervention phase. The users will be asked to use the Mandometers during their main (breakfast, lunch, dinner) meals across a pre-set number of days (e.g., 5 times/week). The Mandometers are supposed to be used throughout the duration of a main meal.
The dataset will contain the raw weight-loss data series generated by the removal of food from a plate during the meal, the corrected cumulative intake curve of the user for each meal, and the extracted meal behavioural indicators (e.g., the meal total intake and duration, the recorded number of bites, the total eating rate and the modelled eating rate changes across the meal). Potentially, the dataset will also include averaged values per user across multiple recorded meals. Additionally, the dataset will contain around-the-meal self-rated subjective information concerning the meal, e.g., perceived fullness before/after, food taste evaluation etc. _Data Format:_ XML files for all the described measures. The dataset will be on the order of 0.5 MB per recorded meal. The size will be significantly higher if the food pictures are also included in the dataset. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset will facilitate evaluation of the progress of each individual and their comparison with healthy, age-matched control populations. This comparison can be valuable for health professionals who care for Parkinson patients, the patients themselves and researchers who want to explore the underlying shifts that happen in eating behaviour due to Parkinson’s disease. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will complement our research and clinical results in the field of eating behaviour quantification. A subset of the investigated population participating in the SData phase, without a Parkinson’s disease diagnosis, will be the reference control population. Corresponding publications describing and validating different components of the described methodology are: Ioakimidis I, Modjtaba Z, Eriksson-Marklund L, Bergh C, Grigoriadis A, & Södersten P. (2011).
“Description of Chewing and Food Intake over the Course of a Meal.” Physiology & Behavior, 104(5): 761–769. Papapanagiotou V, Diou C, Langlet B, Ioakimidis I, & Delopoulos A. (2015). “A parametric Probabilistic Context-Free Grammar for food intake analysis based on continuous meal weight measurements.” Conf Proc IEEE Eng Med Biol Soc., 7853–7856. </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> </table> <table> <tr> <th> While there are many age-stratified datasets about qualitative measures pertaining to the nutritional profile of national or local populations, to the best of our knowledge there exists no comparable dataset for Parkinson patients or age-matched control groups. Available national (Sweden) dataset example: Riksmaten 2010 ( _http://www.livsmedelsverket.se/matvanor-halsa--miljo/kostrad-ochmatvanor/matvanor---undersokningar/riksmaten-2010-11---vuxna/_ ) : In this dataset, a representative sample of 1800 individuals between 18–80 years, living in Sweden, were invited to participate in the survey. The data collection took place between May 2010 and July 2011. </th> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied with detailed documentation of its contents. Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) type of consumed food (probably identified through mobile-recorded pictures), c) documentation of the variables recorded in the dataset, and d) the relative “positioning” of an individual in the corresponding population distribution.
</td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical reasons, only the data captured by a subset of the patients and normal healthy control subjects could become publicly available, while the rest of them will be private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. It will be investigated further whether the complementary food pictures can also become publicly available. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> For the portions of the dataset that will be made publicly available, a respective web page will be created on the data management portal that will provide a description of the dataset. The private part of this dataset will be stored at a specifically designated private space of KI, in encrypted hard disk drives, to which only members of the KI research team whose work directly relates to these data will have access. For further i-PROGNOSIS partners to obtain access to these data, they should provide a proper request to the primarily responsible person at KI, including a justification of the need to have access to these data. Once deemed necessary, KI will provide the respective data portions to the partner. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published.
</td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> A typical XML parsing library is the only requirement for accessing the content of the dataset. Such libraries are available in most (if not all) programming languages. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at the data management portal of the project website, hosted within the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads. After that, it will be made accessible by request. The private part of the dataset will be preserved by KI for at least 10 years after the publication of the scientific results. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset will be on the order of 0.5 MB per recorded meal. The size will be significantly higher if the food pictures are also included in the dataset. A first estimation of the dataset, including food pictures, would be in the range of ~2-7 GB. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> Parts of three hard disk drives in the KI lab (primary, backup 1 and backup 2) will likely be allocated for the dataset. There are no costs associated with its preservation. Data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget.
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **KI** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **KI** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **KI** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP3, WP4 and WP6, to mainly serve the research efforts of T3.2, T3.7, T4.1, T4.5, T6.1 and T6.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.9-BowelSoundsAnalysis** </th> </tr> </table> <table> <tr> <th> **DATA SET DESCRIPTION** </th> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will include bowel sound data captured from a smart belt, towards the detection of bowel immobility related to increased constipation, since constipation is one of the earliest non-motor symptoms in PD patients. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be captured by a custom-made smart belt that will carry distributed microphones covering the abdomen area and will be worn either directly on the skin or above clothes. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The first part of the dataset is expected to be collected during the development data collection phase, where the subjects (healthy controls and PD patients) will wear the smart belt for at least 1 hour in a laboratory environment. The second part of the dataset will be collected mainly during the SData and Interventions phases, where smart belts will be given to 80 (potential PD patients and healthy controls) and 60 (identified PD patients) subjects, respectively. The recording will take place at each subject’s indoor/outdoor environment.
Since bowel sounds, according to the literature, exist in the range of 100–1000/1500 Hz, the raw signal will be filtered (high-pass filtering at 80 Hz, in order to eliminate the influence of cardiac and pulmonary sounds). Data format: TXT or CSV file. Considering three recording channels, an average usage of 4 hours per day and a 3 kHz sampling frequency, the dataset is estimated to be on the order of ~500 MB per subject per day. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of constipation detection algorithms (through sound-based intestinal motility assessment). Moreover, the dataset may be useful to identify other gastrointestinal disorders, such as obstructions, ascites, infections and trauma. The fact that no datasets of this kind are currently available makes this dataset valuable to the research community, especially to gastroenterologists. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany the research results in the field of bowel sounds analysis for constipation detection of people with PD. One publication is intended for the World Journal of Gastroenterology (or a similar journal). </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, there is no available bowel sounds dataset. To this end, the dataset that will be collected during the i-PROGNOSIS project is expected to have a major impact. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied with detailed documentation of its contents.
</td> </tr> </table> <table> <tr> <th> Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) health status of the subject, and c) documentation of the variables recorded in the dataset. </th> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical issues, only part of the dataset will be publicly available. More specifically, the data that correspond to a subset of the PD patients as well as the healthy control subjects, captured at the initial phase of the i-PROGNOSIS project, will become publicly available. The rest of the data will be private to serve the i-PROGNOSIS R&D objectives. The inclusion of a subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> The access procedures for the publicly available sub-dataset and the private sub-dataset that will serve the project’s R&D objectives are described in Sections 3.3.2.1 and 3.3.2.3, respectively. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse with commonly available tools and software libraries, such as Excel, MATLAB, .NET, etc.
</td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at the data management portal of the project website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads. After that, it will be made accessible by request. The private part of the dataset will be preserved by AUTH at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to be several Terabytes, given that each participant is expected to produce at least 500 MB per day and the duration is at least 22 months (SData and Interventions phase). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> Four dedicated hard disk drives will likely be allocated for the private dataset. There are no costs associated with its preservation. Data publicly archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget.
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **PLUX, AUTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **PLUX, AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **PLUX, AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within the activities of WP3 and WP6, to mainly serve the research efforts of T3.6, T6.1, T6.3 and T6.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS2.10-TremorAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> Dataset for detecting the presence of PD minor tremor in the upper limbs. The dataset will include derived features from 3-axis acceleration, orientation as well as magnetic data streams. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The data will originate from the smartphone and smartwatch embedded IMU (gyroscope, accelerometer and magnetometer) sensors during the GData and SData collection periods. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The collected data will be captured during the user’s interaction with the smartphone/smartwatch devices. More specifically, smartphone-generated data will be collected while the user performs certain actions while holding the device (e.g., during a voice call). On the other hand, data originating from the smartwatch device will be captured while the user performs everyday activities while wearing the smartwatch. _Data format:_ JSON/XML for data representation as well as for any relevant annotations. </td> </tr> </table> <table> <tr> <th> It is expected that the dataset will be composed of an equal number of PD patients and healthy subjects; however, the number of subjects is heavily dependent on the overall user participation.
The data volume is anticipated to be on the order of ~5-10 MB per subject. </th> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of the minor tremor detection module (i.e. handling service, see D2.1 for more information) of the i-PROGNOSIS smartphone detection application. Furthermore, the dataset will be of particular interest to researchers and health professionals who wish to explore the underlying information contained in IMU data regarding PD tremor. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> A paper describing the dataset is expected to be published. Furthermore, a paper describing the approach for detecting PD minor tremor is also targeted for publication. More information on the latter can be found in D8.3 (Dissemination plan; “A paper presenting a method for detecting the presence of minor-tremor, based on IMU data collected by the smartphone/smartwatch”). </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To date, publicly available IMU-based PD tremor datasets are non-existent. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by complete documentation containing: a) a description of the overall procedure regarding the collection of the data, b) a brief definition of how the derived IMU-based features were generated, and c) an annotation file, indicating if the subject (indicated solely by a subject ID) is a diagnosed PD patient or a healthy individual. If deemed informative or necessary, additional non-identifiable information may be included in this section.
</td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Since all identifiable information will be stripped from the dataset, the access type can be classified as publicly available (i.e. it can be publicly shared). </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> A respective webpage will be created on the data management portal that will include a brief description of the dataset as well as the publication(s) that need to be cited in case any part of the dataset is used. </td> </tr> <tr> <td> **Embargo periods** </td> </tr> <tr> <td> The applicable dataset will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> The data management portal will provide a link leading to the dataset, which will be included in all relevant publications. A complete technical publication describing in depth the dataset as well as the data acquisition procedures will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> A typical JSON/XML (e.g., libxerces, libJSON, JSON-java) parsing library is the only requirement for accessing the content of the dataset. Such libraries are available in most (if not all) programming languages. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The entirety of the dataset will be accommodated at the data management portal of the project’s website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The dataset will be preserved online for as long as there are regular downloads.
</td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> Depending on the magnitude of user participation, the end volume of data is expected to be in the range of ~15-25 GB. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> Data archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The entire dataset will be collected during WP3, to mainly serve the research efforts of T3.2, T3.7 and T4.4. </td> </tr> </table> ### Intervention Data <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS3.1-SeriousGames** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will include in-game metrics and game performance of all the sessions </td> </tr> </table> <table> <tr> <th> in a semantically annotated way (by adopting proper ontologies for each specific domain) so as to facilitate subsequent analysis with respect to the clinical assessment tests towards research on stealth assessment and screening of PD early signs and disease progress through the suite (linear correlation of in-game metrics with clinical assessment tests).
</th> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected using the i-PROGNOSIS Personalised Game Suite (PGS), which will accommodate different types of serious gaming interventions. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The datasets will be collected during the serious gaming interventions of i-PROGNOSIS. The serious games will ask the user to undergo specific exercises or gaming tests, while mechanisms in the background will track and collect the user’s performance. In the case of Exergames, the user will be asked to perform specific exercises, which will be captured by the Kinect sensor. The dataset will contain metrics such as reaction time, player’s path / optimum path, goal time, movement range, balance, min and max angles of movements, wrong choices, and further in-game metrics that will arise from the design requirements of the serious games. _Data Format:_ RDF triples or JSON. The dataset is expected to be composed of 180 records per serious gaming session. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of the Personalized Game Suite in terms of usability, acceptance, effectiveness and user assessment. The different parts of the (semantically annotated where applicable) dataset could be useful for benchmarking a series of serious games, focusing both on the effectiveness axis as the primary role of the interventions, and on detecting and assessing, if possible, the disease’s progress and status. The latter will feed T4.5, which is intended to dynamically recommend game adaptations for personalised and optimised use, as well as to keep the users in the “flow zone”, i.e. the feeling of complete and energised focus in an activity with a high level of enjoyment and fulfilment, towards increased adherence.
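As an illustration of the JSON record format mentioned above, a single in-game metrics record could look roughly as follows. This is a hypothetical sketch: the field names are assumptions, and the RDF-triples alternative would instead follow the exergame ontology.

```python
import json

# Hypothetical in-game metrics record for one serious gaming session.
# Field names are illustrative only; the actual schema will be fixed
# during the serious games' design phase.
record = {
    "session_id": "PGS-0001",
    "game": "Exergame",
    "reaction_time_ms": 640,
    "goal_time_s": 12.5,
    "movement_range_deg": {"min": 15.0, "max": 95.0},
    "wrong_choices": 2,
}

encoded = json.dumps(record)   # what would be stored per session
decoded = json.loads(encoded)  # what an analysis tool would read back
```

Plain JSON keeps the per-session records trivially parseable by the analysis partners, while the RDF serialisation would additionally enable SPARQL queries over the same metrics.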
Finally, this dataset will be part of the overall evaluation of the i-PROGNOSIS project, contributing to the validation of the pilot applications and interventions. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany our research results in the field of human activity monitoring of people with Parkinson’s. A subset of a similar dataset, including recordings from elderly people without Parkinson’s performing Exergames, will be used as initial input to this dataset. Corresponding publications are: Bamparopoulos, G., Konstantinidis, E., Bratsas, C., & Bamidis, P. D. (2016). Towards exergaming commons: composing the exergame ontology for publishing open game data. Journal of Biomedical Semantics, 7(1), Article nr 4. http://doi.org/10.1186/s13326-016-0046-4 Konstantinidis, E., Bamparopoulos, G., & Bamidis, P. (2016). Moving Real Exergaming </td> </tr> </table> <table> <tr> <th> Engines on the Web: The webFitForAll case study in an active and healthy ageing living lab environment. IEEE Journal of Biomedical and Health Informatics, 1–1. http://doi.org/10.1109/JBHI.2016.2559787 </th> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> It should be noted that serious games datasets are scarcely available online. To the best of our knowledge, the only open dataset regarding serious games, and more specifically Exergames performed by elderly people, is the one that AUTH published earlier, described in: Bamparopoulos, G., Konstantinidis, E., Bratsas, C., & Bamidis, P. D. (2016). Towards exergaming commons: composing the exergame ontology for publishing open game data. Journal of Biomedical Semantics, 7(1), Article nr 4. http://doi.org/10.1186/s13326-016-0046-4 </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents.
Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) type of exercise or game, c) documentation of the variables recorded in the dataset, and d) semantic annotation based on existing ontologies. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> For ethical reasons, only the data captured during the initial phases from a subset of the patients and the normal healthy control subjects could become publicly available, while the rest will remain private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> An ontology that describes Exergames using the Web Ontology Language (OWL) is available at _http://purl.org/net/exergame/ns#_ . The acquired game results will be automatically converted to RDF triples and published on the web as open data, accessible through a SPARQL endpoint available at _http://www.fitforall.gr/sparql_ , where queries can be made using the GET or POST method. In order to facilitate access, links to a download section, where the datasets can be downloaded as JSON files, will be provided if required. The private part of this dataset will be stored in a specifically designated private space of AUTH, on dedicated hard disk drives, to which only members of the AUTH research team whose work directly relates to these data will have access. For other i-PROGNOSIS partners to obtain access to these data, they should submit a formal request to the responsible AUTH member, including a justification of the need to access these data. Once deemed necessary, AUTH will provide the respective data portions to the partner.
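For illustration, a query against the SPARQL endpoint mentioned above could be composed as follows. This is a minimal sketch assuming a generic SELECT query; real queries would use the terms of the exergame ontology, and long queries would typically be sent via POST instead of GET.

```python
from urllib.parse import urlencode

# SPARQL endpoint for the public exergame data (see Access Procedures above).
ENDPOINT = "http://www.fitforall.gr/sparql"

# Generic example query; not tied to the exergame ontology's actual terms.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

# Encode the query as a GET request URL, asking for JSON result bindings.
params = urlencode({"query": query, "format": "application/sparql-results+json"})
request_url = ENDPOINT + "?" + params

# In practice the URL would be fetched with, e.g., urllib.request.urlopen(request_url),
# and the result bindings parsed from the JSON response body.
```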
Another option is to provide a public link to download the dataset through the i-PROGNOSIS data management portal. </td> </tr> </table> <table> <tr> <th> **Embargo periods** (if any) </th> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link to it, as well as to the SPARQL endpoint _http://www.fitforall.gr/sparql_ , will be provided from the Data management portal. The link and the SPARQL endpoint will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse with commonly available tools and software libraries. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at AUTH servers, accessible via the SPARQL endpoint _http://www.fitforall.gr/sparql_ . In addition, a subset of these data will be hosted in the central database that will serve the needs of the Data Management Portal of the i-PROGNOSIS project. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads. After that, it will be made accessible on request. The private part of the dataset will be preserved by AUTH at least until the end of the project.
</td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to be several hundreds of MB, given that each session is expected to produce a volume of ~80 KB of data. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The dataset will be stored in the serious gaming server hosted by AUTH. There are no costs associated with its preservation. Should Microsoft Azure accommodate the serious gaming server, there will be an additional cost of approximately 20 Euros (€) per month for one year. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP4, WP6 and WP7, to mainly serve the research efforts of T4.1, T4.5, T6.1, T6.4, T7.3 and T7.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS3.2-BodyAndGesture** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> Dataset for human identification and posture and gesture tracking experiments, especially during the Exergames, where Kinect will be the main sensor. The dataset is also planned to include balance and gait features. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected using a Kinect2 sensor during the exergaming intervention.
</td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> It is expected that the datasets will be collected mainly during the exergaming interventions. The Exergames will ask the user to perform specific exercises, which will be captured by Kinect. The approximate duration of each session is expected to be close to 45 minutes. The dataset will contain the user’s silhouette as this is provided by the Kinect SDK (skeleton with bones and joints). It will be investigated further whether the RGB and depth images will be collected. _Data Format:_ PNG/JPG for images (both RGB and depth), JSON for the coordinates of the joints and bones, XML or TXT for annotations. The dataset will be on the order of ~2-5 GB per recording hour. </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of the human activity monitoring and the Exergames intervention of the i-PROGNOSIS project. The different parts of the dataset could be useful in the benchmarking of a series of human tracking methods, focusing on human identification or on posture and gesture analysis and tracking, as well as on detecting, if possible, symptoms and signs that appear in people with Parkinson’s. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany our research results in the field of human activity monitoring of people with Parkinson’s. A subset of a similar dataset, including recordings from elderly people without Parkinson’s performing Exergames, will be used as initial input to this dataset. Corresponding publications are: Konstantinidis, E. I., Antoniou, P. E., Bamparopoulos, G., & Bamidis, P. D. (2014). A lightweight framework for transparent cross platform communication of controller data in ambient assisted living environments. Information Sciences, 300, 124–139. http://doi.org/10.1016/j.ins.2014.10.070 Konstantinidis, E. I., Billis, A.
S., Bratsas, C., & Bamidis, P. D. (2016). Active and Healthy </td> </tr> </table> <table> <tr> <th> Ageing Big Dataset streaming on demand. In Proceedings of the 18th International Conference on Human-Computer Interaction. Toronto, Canada. </th> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> It should be noted that although several RGB-D datasets stemming from the Kinect sensor and dealing with human activity analysis are publicly available (see datasets below), to the best of our knowledge, no Parkinson monitoring dataset is available yet. _G3D_ ( _http://dipersec.king.ac.uk/G3D/G3D.html_ ) : The G3D dataset contains a range of gaming actions captured with Microsoft Kinect. The Kinect enabled the authors to record synchronised video, depth and skeleton data. The dataset contains 10 subjects performing 20 gaming actions: _punch right, punch left, kick right, kick left, defend, golf swing, tennis swing forehand, tennis swing backhand, tennis serve, throw bowling ball, aim and fire gun, walk, run, jump, climb, crouch, steer a car, wave, flap and clap_ . _MSRC-12 Kinect gesture dataset_ ( _http://research.microsoft.com/enus/um/cambridge/projects/msrc12/_ ) : The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system. _RGB-D Person Re-identification Dataset_ ( _http://old.iit.it/en/datasets- andcode/datasets/rgbdid.html_ ) : A new dataset for person re-identification using depth information. The main motivation is that the standard techniques (such as _SDALF_ ) fail when the individuals change their clothing, therefore they cannot be used for long-term video surveillance. Depth information is the solution to deal with this problem because it stays constant for a longer period of time.
_DGait Database_ ( _http://www.cvc.uab.es/DGaitDB/Summary.html_ ) : DGait is a new gait database acquired with a depth camera. This database contains videos from 53 subjects walking in different directions. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) type of exercise in case the dataset was produced during Exergames, c) documentation of the variables recorded in the dataset, and d) annotated posture, action and activity. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> For ethical reasons, only the data captured during the initial phases from a subset of the patients and the normal healthy control subjects could become publicly available, while the rest will remain private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. It will be investigated further whether the silhouette (coordinates of joints and bones) subset of the dataset could be made publicly available. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> </table> <table> <tr> <th> For the portions of the dataset that will be made publicly available, a respective web page will be created on the data management portal (the use of the CAC-playback manager will be assessed) that will provide a description of the dataset, links to a download section and a playback possibility in case the playback manager approach is followed. The private part of this dataset will be stored in a specifically designated private space of AUTH, on dedicated hard disk drives, to which only members of the AUTH and CERTH research teams whose work directly relates to these data will have access.
For other i-PROGNOSIS partners to obtain access to these data, they should submit a formal request to the responsible AUTH/CERTH member, including a justification of the need to access these data. Once deemed necessary, AUTH/CERTH will provide the respective data portions to the partner. CAC-playback manager: Konstantinidis, E. I., Billis, A. S., Bratsas, C., & Bamidis, P. D. (2016). Active and Healthy Ageing Big Dataset streaming on demand. In Proceedings of the 18th International Conference on Human-Computer Interaction. Toronto, Canada. </th> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link to it will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse with commonly available tools and software libraries. If online playback of the datasets is supported, libraries for a variety of programming languages will be released (e.g. _http://www.cacframework.com/_ ). </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at the data management portal of the project website. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads.
After that, it will be made accessible on request. The private part of the dataset will be preserved by AUTH at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to be several gigabytes, given that each recording hour is expected to amount to ~2-5 GB. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> Probably two dedicated hard disk drives will be allocated for the dataset; one for the public part and one for the private. In this case, the costs associated with its preservation will correspond to the hardware cost (hard disk drives). Azure will also be considered as a candidate data archiving means, although the volume of the data (1-2 TB) would make this choice very expensive. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> Small one-time costs will be covered by i-PROGNOSIS. In the case of Azure, the cost will be higher. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH, CERTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH, CERTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within activities of WP3, WP4 and WP6, to mainly serve the research efforts of T3.2, T3.7, T4.1, T4.5, T6.1 and T6.4.
</td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS3.3-SleepStageAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will include physiological (heart rate and skin temperature) data and IMU (accelerometer and gyroscope) data, captured by the smartwatch (Microsoft Band 2) during the users' sleep, to be i) used for the development of, or ii) collected during, the targeted nocturnal intervention (TNI), i.e., an i-PROGNOSIS intervention that will use pacifying sounds in order to reinstate a satisfactory sleep stage when a sleep disturbance episode is detected. Features will be accompanied by metadata in order to facilitate the analysis within the i-PROGNOSIS project, as well as by other researchers outside the project. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected as part of the development of the i-PROGNOSIS supportive interventions and the respective data collection phase. It will comprise a development sub-dataset and corresponding metadata, collected by experiment participants during a short period of time (~2 days), as well as a deployment sub-dataset and corresponding metadata, collected by users over a significantly longer period (in the range of months) during the interventions data collection period. Both data collection procedures will be accompanied by ethics approval. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The data collection leading to the formulation of the dataset will take place during the users' sleep. The development sub-dataset will be collected based on an </td> </tr> </table> <table> <tr> <th> experiment protocol, i.e., the participants (PD patients) will be asked to wear the smartwatch during their sleep for ~2 consecutive nights and afterwards report on their sleep quality.
Indicative features that will comprise the dataset are the heart rate (beats per minute - 1 Hz sampling frequency), the skin temperature (degrees Celsius - 1 Hz sampling frequency), accelerometer data (X, Y, and Z acceleration in g units or meter per second squared, 62 Hz sampling frequency) and gyroscope data (X, Y, Z angular velocity in degrees per second, 62 Hz sampling frequency), as captured and output by the Microsoft Band 2. The deployment sub-dataset will comprise the same features as the development sub-dataset, but it will be collected through the i-PROGNOSIS PD interventions application [via the **targeted nocturnal intervention (TNI) service** (see Section 6 of D2.1)] that will be provided to a subset of interventions users (~10 PD patients out of 60 interventions users) based on a pre-interventions medical evaluation. The dataset is expected to be composed of 60 (30 participants × 2 nights) records for the development sub-dataset, while the deployment sub-dataset will comprise approximately 1800 (~6 months interventions period × ~30 nights × ~10 interventions users) records. _Data Format:_ For each sleep session (development sub-dataset), four CSV files, one per feature (heart rate, skin temperature, accelerometer and gyroscope data), will be generated, including the values of features and timestamps, as well as a CSV or EDF+ file including metadata. For each sleep session (deployment sub-dataset), a structured XML file will be generated, including all the aforementioned features and metadata. </th> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The development sub-dataset will be used to develop a sleep stage/disturbances recognition model focusing on sleep disturbances that PD patients experience. The latter model will be used as part of the targeted nocturnal intervention (TNI), and the triggering of pacifying sounds will occur based on the model outputs.
The deployment sub-dataset will be used for validation and fine-tuning of the model produced during development, as well as for evaluating the effectiveness of the TNI in real-life scenarios. Moreover, the dataset could be useful for biomedical researchers who are interested in studying the inference of sleep disturbances, experienced by PD patients, based on data captured by commercially available wearable sensors. Adequate use of these data presupposes at least basic background regarding PD symptomatology, as well as basic knowledge of data analysis methodology and experience in the use of statistical software packages. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The development sub-dataset will accompany the research results regarding the feasibility of a sleep stage/disturbances recognition model based on data captured by commercially available wearable sensors, such as smartwatches. Research results are planned to be published initially in the indicative journals and/or conferences provided below: * IEEE Transactions on Biomedical Engineering (Journal) * Elsevier International Journal of Medical Informatics (Journal) </td> </tr> </table> <table> <tr> <th> * Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Conference) </th> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the researchers' knowledge, there are no similar datasets publicly available at the moment. However, it is expected that such a dataset will become available in the near future through the Michael J. Fox Foundation for Parkinson's Research and Intel collaborative "Fox Insight" clinical study ( _https://foxinsight.michaeljfox.org/_ ) that started in 2015 and aims at collecting big data arising from users' (healthy and PD patients) interaction with smartphones and smartwatches.
The latter interaction will also lead to the generation of smartwatch data during the users' sleep. i-PROGNOSIS investigators will take all the necessary actions to access the latter dataset (when available) and exploit it for the development of the sleep stage/disturbance model of the TNI in conjunction with the development sub-dataset. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The European Data Format+ (EDF+) ( _http://www.edfplus.info/specs/edfplus.html_ ) will be exploited for the production of the metadata files. The development sub-dataset will be accompanied by a detailed description of its contents. Indicative metadata include: a) description of the experiment set-up and the procedure that led to the generation of the dataset, b) anthropometrics and basic health-record data of the experiment participants, c) description of the sleep-stage-related features, d) annotation of features based on (b), and e) ground truth based on self-reported sleep quality by participants and sleep stage evaluation as provided by the Microsoft Health mobile application. The deployment sub-dataset will be annotated based on the following indicative metadata: a) basic health record data and anthropometrics of the interventions users, b) timestamps and duration of sleep disturbances detected, c) timestamps of starting/stopping the playback of pacifying sounds, and d) statistics on the usage of the TNI. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> The development sub-dataset will be protected (see Section 3.3.2.2), accompanied by the respective ethics approval.
Due to ethical compliance, the deployment sub-dataset will be confidential and only i-PROGNOSIS partners will be able to access it after following the appropriate procedure (see Section 3.3.2.3). </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> The deployment sub-dataset, which will be confidential, will be handled according to the framework reported in Section 3.3.2.3. The development sub-dataset, which will be protected, will be available for third-party stakeholders to download based on the procedure described in Section 3.3.2.1. </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The development sub-dataset is planned to be available as protected after month 32 of the project to allow i-PROGNOSIS investigators to prepare and submit the respective scientific publications, but also to enable third-party researchers to conduct further analysis and comment on the i-PROGNOSIS results before the end of the project. </td> </tr> </table> <table> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> The development sub-dataset will be available through the CKAN-based i-PROGNOSIS data management portal (see Section 3.2). Links redirecting to the portal and the available dataset will be provided through a dedicated page of the i-PROGNOSIS project website ( _www.i-prognosis.eu_ ) . A technical description providing information on the experiment protocol through which the development sub-dataset was captured will accompany the development sub-dataset. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> As the publicly available compressed dataset (development sub-dataset) will comprise standard CSV files, no specialised software or tools will be required for the dataset to be parsed and reused, other than (de-)compression software and a basic text editor (minimum requirement to access the content of the CSV file).
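To illustrate the low re-use barrier, a per-feature CSV file of this kind can be parsed with nothing beyond a language's standard library; the column names below are hypothetical, as the exact headers are not fixed in this plan:

```python
import csv
import io

# Hypothetical excerpt of a per-feature CSV file (heart rate at 1 Hz);
# the column names are assumptions for illustration only.
sample = (
    "timestamp,heart_rate_bpm\n"
    "2017-01-10T23:00:00,62\n"
    "2017-01-10T23:00:01,61\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
bpm = [int(row["heart_rate_bpm"]) for row in rows]
print(bpm)  # [62, 61]
```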
</td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The development sub-dataset will be stored on AUTH secure servers, as well as in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure (European data centres are located in Ireland) where the CKAN-based data management portal will be installed. The deployment sub-datasets will also be stored in the i-PROGNOSIS Cloud infrastructure, but access will be restricted (Section 3.3.2.3). </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads and at least one year after the end of the project. After that it will be made accessible by direct request to the data collector. The confidential part of the dataset will be preserved by the owners for at least one year after the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The development sub-dataset is expected to be approximately 600 MB (including metadata), based on the number of experiment participants (30) and the approximate size of features recorded during an average 7-hour sleep (~10 MB), for 2 nights. The deployment dataset is expected to be in the range of tens of gigabytes (GB), based on the expected number of the i-PROGNOSIS interventions users that will test the TNI (~10 users), an average 7-hour sleep per night for ~6 months (~30 days per month), and an average size of features of ~10 MB per 7-hour sleep. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The development sub-dataset will be stored on AUTH servers. There are no costs associated with the latter means of archiving and preservation.
The development and deployment sub-datasets will also be archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. The indicative cost of the latter means of archiving and preservation is 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The development sub-dataset will be collected within WP4 and WP6, to mainly serve the research efforts of T4.2 and T6.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS3.4-GaitAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset will include accelerometer, gyroscope and pedometer data captured from a smart watch/band regarding gait rhythm identification and analysis, so as to provide personalised cueing and guidance in case of gait freezing episodes. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be captured by the accelerometer, gyroscope and pedometer sensors of the smart watch/band that will be worn by the participants mainly during the GData, SData and Interventions phases. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The first part of the dataset is going to be collected during the development data collection phase, where the subjects will follow specific walking scenarios.
The second part of the dataset will be collected mainly during the SData and Interventions phases, where smart watches/bands will be provided to 80 and 10 participants, respectively. Moreover, the GData phase will contribute to the dataset; however, the contribution is expected to be limited since the smart watch/band is optional at GData. At these phases, i.e., GData, SData and Interventions, the subjects will not perform specific walking tasks; instead, the data will be captured throughout their daily routine activities. The dataset will include: i) the measures (double precision) of the acceleration force in g units (9.81 m/s²) that is applied to the smart watch/band on all three physical axes (x, y, and z), ii) the measures (double precision) of the smart watch/band’s rotation in rad/s around each of the three physical axes, and iii) the absolute number of steps taken. Since the pedometer sensor provides the total number of steps the wearer has taken since the last factory-reset of the smart watch/band, the absolute number of steps contained in the dataset will be determined by taking the difference between consecutive sensor readings. The above data will be annotated with the health status of the subject, i.e., healthy control or Parkinson’s disease patient. The minimum sampling frequency for accelerometer and gyroscope data is 8 Hz. However, this value is unnecessarily high for pedometer data; consequently, pedometer data will be recorded every 15 seconds. Data format: TXT or CSV file. The dataset will be on the order of ~1.3 MB per participant per hour of usage, and smart watch/band owners usually wear the device for at least 12 hours per day. </td> </tr> </table> <table> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The collected data will be used for the development and evaluation of the personalised gait rhythmic guidance intervention of the i-PROGNOSIS project.
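The cumulative-to-absolute step conversion described above amounts to differencing consecutive pedometer readings; a minimal sketch with hypothetical values:

```python
# Hypothetical cumulative pedometer readings, one every 15 seconds,
# as reported by the smart watch/band (total steps since factory reset).
cumulative = [10250, 10250, 10271, 10298]

# Absolute steps per interval = difference between consecutive readings.
steps_per_interval = [b - a for a, b in zip(cumulative, cumulative[1:])]
print(steps_per_interval)  # [0, 21, 27]
```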
The dataset could be useful in analysing gait patterns, speed, periodicity, complexity and habits, as well as in detecting sudden freezing episodes and falls. Moreover, it may be possible to identify symptoms and signs that appear in people with Parkinson’s Disease, such as tremor. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany the research results in the field of gait rhythm analysis and the identification of freezing episodes in people with PD. At least one publication is intended to be made in the IEEE Transactions on Biomedical Engineering journal (or similar). </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, there is no available smart watch/band-based accelerometer/gyroscope and pedometer dataset for PD-related gait freezing detection. Related datasets include data captured from standalone inertial sensors attached to specific parts of the body, and/or the accelerometer/gyroscope of a smartphone. For example: _Daphnet_ ( _https://archive.ics.uci.edu/ml/datasets/Daphnet+Freezing+of+Gait_ ) : The Daphnet dataset contains the annotated readings of 3 acceleration sensors at the hip, thigh and ankle of 10 Parkinson's disease patients that experience freezing of gait during walking tasks. Users performed three kinds of tasks: straight line walking, walking with numerous turns, and finally a more realistic activity of daily living (ADL) task, where users went into different rooms while fetching coffee, opening doors, etc. _OU-ISIR_ ( _http://www.am.sanken.osaka-u.ac.jp/BiometricDB/InertialGait.html_ ) : The OU-ISIR database contains accelerometer and gyroscope data captured from: i) 3 inertial sensors located around the user’s waist and ii) a smartphone, which includes only a triaxial accelerometer, located at the back waist of the users. There were three walking scenarios: flat level, slope up and slope down.
_ZJU-GaitAcc_ ( _http://www.cs.zju.edu.cn/~gpan/database/gaitacc.html_ ) : The ZJU-GaitAcc dataset contains the gait acceleration series of 175 subjects, who were equipped with 5 Wii remotes, acting as inertial sensors, fastened at 5 body locations: the left upper arm, the right wrist, the right side of the pelvis, the left thigh, and the right ankle. _Gait Dataset_ ( _http://www.cs.mcgill.ca/~jfrank8/data/gait-dataset.html_ ) : This dataset was collected at McGill University using the HumanSense open-source Android data collection platform. It contains the raw sensor data collected from a mobile phone in the pocket of 20 individuals, performing two separate 15-minute walks. </td> </tr> </table> <table> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: a) description of the experimental setup and procedure that led to the generation of the dataset, b) health status of the subject, and c) documentation of the variables recorded in the dataset. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical issues, only part of the dataset will be publicly available. More specifically, the data that correspond to a subset of the Parkinson’s Disease patients, as well as the healthy control subjects, captured at the initial phase of the i-PROGNOSIS project, will become publicly available. The rest of the data will be private to serve the i-PROGNOSIS R&D objectives. The inclusion of a subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> See Section 3.3.2.1 and Section 3.3.2.3.
</td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available 2 years after the end of the project to allow the consortium to prepare and submit the scientific publications. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse with commonly available tools and software libraries, such as Excel, Matlab, .NET, etc. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public part of this dataset will be accommodated at the data management portal of the project website, hosted by the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of this dataset will be preserved online for as long as there are regular downloads. After that it will be made accessible by request. The private part of the dataset will be preserved by AUTH at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to be several gigabytes, given that each participant is expected to produce at least 6 MB per day and the duration is at least 22 months (SData and Interventions phases). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> Probably two dedicated hard disk drives will be allocated for the dataset; one for the public part and one for the private.
There are no costs associated with its preservation. Azure will also be considered as a candidate data archiving means. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH, MICROSOFT** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within the activities of WP3, WP4 and WP6, to mainly serve the research efforts of T3.2, T4.3, T6.1, T6.2, T6.3 and T6.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS3.5-VoiceEnhancementAnalysis** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> Speech database for the design and implementation of the voice enhancement algorithms within the assistive interventions software. The database will contain speech data collected from phone calls recorded through the i-PROGNOSIS smartphone dialler application. The data set includes several annotations about the speech data and the underlying speakers. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset will be collected by recording conversations using the i-PROGNOSIS smartphone dialler application during the SData and Intervention data collection phases. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The goal of this dataset is to enable the development and implementation of voice enhancement algorithms for speech from PD patients.
The dataset will contain the raw speech signals recorded by the i-PROGNOSIS smartphone dialler application during the SData and Intervention data collection phases. Annotations including the date and time of the recordings, along with speaker information (e.g. ID, gender, PD scale), will provide the necessary information and will facilitate the development and evaluation of the speech enhancement algorithms. The dataset will contain raw speech data (e.g. 44.1 kHz sampling rate, wav unencoded) and text files for the annotations. _Data Format:_ Encoded (mp3, aac) or unencoded (wav, raw) audio waveforms; XML or plain text files for annotations/metadata. The dataset will be on the order of 0.5 MB to 5 MB per minute of speech depending on the encoding and sampling rate. </td> </tr> </table> <table> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset will be used within the project for the development and evaluation of automatic signal processing speech enhancement algorithms for speech from PD patients. This data will be used in WP4 and WP6, supporting the assistive intervention software and the overall assessment of the i-PROGNOSIS system. This dataset may also be useful in the future for other researchers who want to explore PD patients’ speech and develop speech enhancement algorithms. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> This database will form the foundation of our research and development of algorithms for the automatic enhancement of the speech of PD patients within the assistive intervention software. We plan to present our findings at ICASSP, Interspeech or other voice- and biomedical-related conferences.
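The per-minute size range quoted above follows directly from the encoding parameters; a back-of-the-envelope check:

```python
# Uncompressed single-channel PCM: sample_rate × bytes_per_sample × 60 s.
wav_mb_per_min = 44100 * 2 * 60 / 1e6  # 16-bit samples = 2 bytes

# Constant-bitrate MP3: bitrate (bits/s) ÷ 8 × 60 s.
mp3_mb_per_min = 192_000 / 8 * 60 / 1e6

print(round(wav_mb_per_min, 2), round(mp3_mb_per_min, 2))  # 5.29 1.44
```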
</td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> To the best of our knowledge, no speech database for the development and evaluation of speech enhancement algorithms for the speech of PD patients is available. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: (a) description of the experimental setup and procedure that led to the generation of the dataset and (b) documentation of the variables recorded in the dataset. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Due to ethical reasons, only the data captured during the initial phases from a subset of the patients and the normal healthy control subjects could become publicly available, while the rest will remain private to serve the i-PROGNOSIS R&D objectives. The inclusion of a (normal healthy control) subject’s data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> For the portions of the dataset that will be made publicly available, a respective web page will be created on the data management portal that will provide a description of the dataset, links to a download section and a playback possibility in case the playback manager approach is followed. The private part of this dataset will be stored at a specifically designated private space of FRAUNHOFER, on dedicated hard disk drives, to which only members of the FRAUNHOFER research team will have access. </td> </tr> </table> <table> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> The applicable datasets will be publicly available two years after the end of the project to allow the consortium to prepare and submit the scientific publications.
</td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> For the public part of the dataset, a link will be provided from the Data management portal. The link will be provided in all relevant i-PROGNOSIS publications. A technical publication describing the dataset and acquisition procedure will be published. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be designed to allow easy reuse and access with commonly available tools (e.g. Matlab, Python, VLC, GVIM) and software libraries (e.g. Tensorflow, FFMPEG, HDF5 C++ API, C++ STL), because the data will be stored primarily in common file formats and generic data containers. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The public data will be hosted on the central database that will serve the needs of the Data Management Portal of the i-PROGNOSIS project. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The public part of the dataset will be preserved on the Data Management Portal of the i-PROGNOSIS project. The private part of the dataset will be preserved by FRAUNHOFER at least until the end of the project. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The dataset is expected to consume up to 10 GB, depending on the encoding and the length and quantity of the speech signals (e.g. a single-channel 16-bit, 44.1 kHz audio waveform is 5.3 MB/min; a single-channel 192 kbps mp3 is 1.44 MB/min). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The public data will be archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure, which will cost approximately 20 Euros (€) per month for one year (at least) after the end of the project.
The private dataset will be stored on a dedicated hard drive. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **TUD, KI** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **FRAUNHOFER** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **FRAUNHOFER** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within the activities of WP4 and WP6, to mainly serve the research efforts of T4.3, T6.3 and T6.4. </td> </tr> </table> ### Requirements Data <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS4.1-FocusGroupsDataset** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> This dataset aims at providing a non-exhaustive list of end-user requirements, as these have been elicited through a series of focus groups and personal interviews with experts (both patient groups and clinicians) in PD. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> KCL and relevant partners gathered pertinent data and established the focus groups dataset, in order to design a final paper-based user requirement questionnaire that was used as the acquisition means in four focus groups. This dataset contains information from all stakeholders (developers and clinicians) involved in the development of the i-PROGNOSIS system and its potential users (healthcare professionals and patients). The main components of the focus groups questionnaire included: a) demographics, e.g.
gender, age, health problem related to PD, b) technology adoption, c) i-PROGNOSIS App design-oriented questions, d) i-PROGNOSIS App delivery through existing smartphones, e) usability aspects of the specialized gadgets, such as a smartwatch (Microsoft Band was provided as an example), smart belt, Mandometer and an ECG-enabled TV remote control, f) general questions about the PGS interventions and finally, g) some specialized questions about specific interventions, such as the sleep and gait interventions. For a more detailed description of the questions and the results produced from the analyses, one is referred to D2.1 _\- First version of user requirements analysis_ . </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The focus groups were conducted among health professionals such as nurses, therapists, clinicians and researchers, as well as patients and carers. The topics of the focus groups were the design of the i-PROGNOSIS project and expectations of it, and the design of the application and special gadgets that will be used in i-PROGNOSIS (smart belt, Mandometer, TV smart remote control, etc.). Finally, the questions focused on the i-PROGNOSIS interventions. The participants were carefully recruited for the face-to-face focus groups. The dataset contains the output of the systematic analysis. Data format: The dataset will be made available as a single Microsoft Excel file (.xlsx) by merging the CSV files of all languages. All questions and answers will be translated into English. </td> </tr> </table> <table> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset and the respective outcomes and findings will be used to shape the user requirements and the technical specifications of the i-PROGNOSIS system.
In addition, this kind of dataset could be exploited by researchers investigating new self-care technologies for patients with PD, as well as by developers, system architects and user experts interested in developing an ICT-based health management system similar to i-PROGNOSIS. </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany the research results regarding the prospective users' attitude towards ICT-based self-care solutions and the i-PROGNOSIS system against PD. Research results and findings are planned to be published initially in one of the indicative conferences provided below: * International Conference on e-Health (Conference) * Human Computer Interaction International (Conference) </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> There are no similar datasets with open access. Results of relevant end-user focus groups have targeted the participatory design of the symptomatic domains to be measured, as well as the design of interventions (through serious games), e.g.: Serrano, J. A., Larsen, F., Isaacs, T., Matthews, H., Duffen, J., Riggare, S., ... & Graessner, H. (2015). Participatory design in Parkinson's research with focus on the symptomatic domains to be measured. Journal of Parkinson's Disease, 5(1), 187-196. McNaney, R., Balaam, M., Holden, A., Schofield, G., Jackson, D., Webster, M., ... & Olivier, P. (2015, April). Designing for and with People with Parkinson's: A Focus on Exergaming. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 501-510). ACM.
</td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> For this dataset, the metadata correspond to the participant profile as well as the settings of the focus group, and mainly include: a) occupation and expertise of the participants (nurse, therapist, clinician, researcher, patient, carer), b) the country where the focus group was conducted, c) number of participants and d) health status relating to PD. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> Personal identification as stated in the consent forms will be kept separate from any research and health-related data, which will be pseudonymised (only initials and date of birth will be kept). All paper-based data will be stored in an access-restricted, locked building with access for the research team. Electronic data will be saved on a
Links redirecting to the portal and the available dataset will be provided through a dedicated page of the i-PROGNOSIS project website ( _www.i-prognosis.eu_ ) . </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> The dataset will be kept in a numeric manner and therefore designed to allow easy reuse with commonly available tools. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The data will be hosted to the central database that will serve the needs of the Data Management Portal of the i-PROGNOSIS project and part of them in a secured NHS server. </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The dataset will be preserved as long as there are regular downloads. After that it would be made accessible by request and preserved by AUTH at least until the end of the project. Data set will be stored as per HRA regulations and requirements. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The approximate size of the dataset is a couple of tens of MB. </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The downloaded CSV files and the cumulative Microsoft Excel file will be stored in an AUTH server, both not imposing any additional costs. The Microsoft Excel file that will be made publicly available will be also archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. The indicative cost of the latter means of archiving and preservation is 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the i-PROGNOSIS project budget. 
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **TUD, KCL, AUTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **All** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within the activities of WP2, mainly to serve the research efforts of T2.1 and T2.4. </td> </tr> </table> <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS4.2-WebSurveyDataset** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> The dataset contains the questions and the qualitative answers to the questions of the i-PROGNOSIS Web-based questionnaire (Web survey), conducted within the context of the identification of user requirements and system specifications, from ~2000 anonymous survey participants. </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> The dataset is populated by the answers of participants to the questions of the i-PROGNOSIS Web survey. The Web survey consists of three sections corresponding to three groups of survey participants, i.e., healthy adults, PD patients, and experts in PD (including physicians and carers). Each participant answers the questions of her/his corresponding section based on the group s/he belongs to. Each section is further divided into three parts, i.e. demographics, PD detection and interventions. The section corresponding to the group of healthy adults does not include the interventions part. The participant is redirected to the corresponding section based on a question allowing her/him to denote the group s/he belongs to. If no option is selected, the participant is redirected to the survey section corresponding to healthy adults. 
There are six versions of the Web survey corresponding to six languages, i.e., English, Greek, French, German, Portuguese and Swedish. </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> The Web survey is conducted electronically and is available online. The questionnaires were designed using the Google Forms online software. The answers to the questions are qualitative and are stored in Google Sheets files (one per language) that are updated with new entries (an entry corresponds to a participant in the survey) automatically. The current dataset consists of six Google Sheets files with a total of ~2000 entries, but it is updated frequently as the Web survey is planned to be online until the updated user requirements are produced (month 28 of the project). The dataset (Google Sheets files) can be downloaded manually as CSV files. _Data format:_ The dataset will be made available as a single Microsoft Excel file (.xlsx) by merging the CSV files of all languages. All questions and answers will be translated </td> </tr> </table> <table> <tr> <th> in English. </th> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> The dataset and the respective results will be used to shape the identified user requirements and the technical specifications of the i-PROGNOSIS system. The dataset could be of interest to researchers who investigate the relationship of similar groups of users with new technology relating to self-care, as well as developers, system architects and user experts interested in developing an ICT-based system for health management similar to i-PROGNOSIS. Adequate use of these data presupposes at least basic knowledge in data analysis methodology and experience in the use of statistical software packages. 
</td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> The dataset will accompany the research results regarding the prospective users' attitude towards ICT-based self-care solutions and the i-PROGNOSIS system against PD. Research results are planned to be published initially in one of the indicative journals and/or conferences provided below: * Elsevier International Journal of Medical Informatics (Journal) * IOS Journal of Parkinson's Disease (Journal) * International Conference on e-Health (Conference) * Human Computer Interaction International (Conference) </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> There are no similar datasets with open access. Results of similar surveys are available through publications and may be used for shaping the i-PROGNOSIS user requirements, e.g., Zhao, Y., Heida, T., van Wegen, E. E., Bloem, B. R., & van Wezel, R. J. (2015). E-health support in people with Parkinson’s disease with smart glasses: a survey of user requirements and expectations in the Netherlands. Journal of Parkinson's disease, 5(2), 369-378. </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> For this dataset, the metadata correspond to the participant-provided information via the first part of each web survey section, i.e., the demographics part, and mainly include: a) the age of the participant, b) the country of residence, c) their occupational category, d) health status relating to PD, e) expertise (in case of health care professionals), and f) familiarity with new technology and new means of communication. Thus, no standard is followed. </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> The dataset will be publicly available (see Section 3.3.2.1). 
</td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> The publicly available dataset will be open for third-party stakeholders to download based on the procedure described in Section 3.3.2.1. </td> </tr> </table> <table> <tr> <th> **Embargo periods** (if any) </th> </tr> <tr> <td> The dataset is planned to be publicly available after month 28 of the project to allow i-PROGNOSIS investigators to prepare and submit the respective scientific publications regarding the final version of user requirements. </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> The dataset will be available through the CKAN-based i-PROGNOSIS data management portal (see Section 3.2). Links redirecting to the portal and the available dataset will be provided through a dedicated page of the i-PROGNOSIS project website ( _www.i-prognosis.eu_ ). A technical description providing information on how the web-survey was structured and conducted will accompany the dataset. </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> As the compressed dataset will consist of a Microsoft Excel file (.xlsx), only a (de)compression software and the Microsoft Excel software (or other software compatible with this file type) will be required for data parsing and re-use. </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> The dataset will be stored in AUTH secure servers, as well as in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure (data centres are located in Ireland) where the CKAN-based data management portal will be installed. External repositories will also be considered as alternatives, to increase visibility and dissemination efforts. 
</td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> The dataset will be preserved online for as long as there are regular downloads and at least for one year after the end of the project. After that, it will be made accessible on request. </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> The approximate size of the dataset is ~ 3 MB based on the size of each entry (~ 1.2 KB) and the expected final number of entries (participants in the survey) (~ 2500 to 3000 participants - entries). </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> The original Google Sheets files comprising the dataset are stored in the Google Drive account of the i-PROGNOSIS project, which is free of charge. The downloaded CSV files and the cumulative Microsoft Excel file are stored on an AUTH server, neither imposing any additional costs. The Microsoft Excel file that will be made publicly available will also be archived and preserved in the Microsoft Azure-based i-PROGNOSIS Cloud infrastructure. The indicative cost of the latter means of archiving and preservation is 20 Euros (€) per month for one year (at least) after the end of the project. </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> The costs associated with the data archiving and preservation will be covered by the </td> </tr> <tr> <td> i-PROGNOSIS project budget. 
</td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> **AUTH** </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> **AUTH** </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> The data are going to be collected within the activities of WP2, to mainly serve the research efforts of T2.1 and T2.4. </td> </tr> </table> # APPENDIX I – DATASET DESCRIPTION TEMPLATE The table template used to describe each dataset along with clarifications on each field: <table> <tr> <th> **DATA SET REFERENCE NAME** </th> <th> **DS#.#- <DatasetName> ** </th> </tr> <tr> <td> **DATA SET DESCRIPTION** </td> </tr> <tr> <td> **Generic description** </td> </tr> <tr> <td> _ <Provide a summary of data to be collected> _ </td> </tr> <tr> <td> **Origin of data** </td> </tr> <tr> <td> _ <How will the dataset be produced, e.g. mobile phone app logs or Microsoft Band and a sampling rate of xx Hz etc.> _ </td> </tr> <tr> <td> **Nature and scale of data** </td> </tr> <tr> <td> _ <Description of file format, e.g. TXT files, estimated size, number of participants> _ </td> </tr> <tr> <td> **To whom the dataset could be useful** </td> </tr> <tr> <td> _ <What research purposes will the dataset facilitate, e.g. The dataset will be valuable for benchmarking algorithms for activity analysis etc.> _ </td> </tr> <tr> <td> **Related scientific publication(s)** </td> </tr> <tr> <td> _ <Either existing ones in case we get open data from external to the project repositories, or future publications that we intend to make, e.g. 
description of indicative research area like neuroscience or name a few scientific conferences or journals that will be targeted> _ </td> </tr> <tr> <td> **Indicative existing similar data sets** (including possibilities for integration and reuse) </td> </tr> <tr> <td> _ <List other already existing datasets or repositories> _ </td> </tr> <tr> <td> **STANDARDS AND METADATA** </td> </tr> <tr> <td> _ <Reference to existing suitable standards of the discipline. Format, e.g. EDF format for biosignals. _Metadata_ should contain information such as: (a) description of the experimental setup and procedure that led to the generation of the dataset, (b) documentation of the variables recorded in the dataset and (c) annotated experiment state of the monitored person per time interval. If these do not exist, provide an outline on how and what metadata will be created.> _ </td> </tr> <tr> <td> **DATA SHARING** </td> </tr> <tr> <td> **Access type** </td> </tr> <tr> <td> _ <E.g., open - can be publicly shared, protected - can be shared but the participants have to provide their consent, or confidential - cannot be shared outside the project> _ </td> </tr> <tr> <td> **Access Procedures** </td> </tr> <tr> <td> _ <Explain how you will handle private (if any) and publicly available datasets, e.g., depending on the access type of each dataset, a relevant access procedure should be defined. 
Access procedure should contain information about the developed _ </td> </tr> <tr> <td> _area within the portal that will let third-party users download files > _ </td> </tr> <tr> <td> **Embargo periods** (if any) </td> </tr> <tr> <td> _ <When will the data be published and access provided> _ </td> </tr> <tr> <td> **Technical mechanisms for dissemination** </td> </tr> <tr> <td> _ <Accompany datasets with a technical description of the dataset and the way data were captured> _ </td> </tr> <tr> <td> **Necessary S/W and other tools for enabling re-use** </td> </tr> <tr> <td> _ <Code/libraries/open source software to read and process data so as to allow for reproducible research> _ </td> </tr> <tr> <td> **Repository where data will be stored** </td> </tr> <tr> <td> _ <i-PROGNOSIS portal dedicated download section, institutional portals or standard repository for the discipline> _ </td> </tr> <tr> <td> **ARCHIVING AND PRESERVATION** (including storage and backup) </td> </tr> <tr> <td> **Data preservation period** </td> </tr> <tr> <td> _ <For how long should the data be preserved? 
We should take into account any national regulations as well> _ </td> </tr> <tr> <td> **Approximated end volume of data** </td> </tr> <tr> <td> _ <E.g., ### GB raw signal> _ </td> </tr> <tr> <td> **Indicative associated costs for data archiving and preservation** </td> </tr> <tr> <td> _ <Hard disk drives, servers> _ </td> </tr> <tr> <td> **Indicative plan for covering the above costs** </td> </tr> <tr> <td> _ <Within project, whole consortium, local partner, centrally> _ </td> </tr> <tr> <td> **PARTNERS ACTIVITIES AND RESPONSIBILITIES** </td> </tr> <tr> <td> **Partner Owner / Data Collector** </td> <td> _ <Partner Acronym> _ </td> </tr> <tr> <td> **Partner in charge of the data analysis** </td> <td> _ <Partner Acronym> _ </td> </tr> <tr> <td> **Partner in charge of the data storage** </td> <td> _ <Partner Acronym> _ </td> </tr> <tr> <td> **WPs and Tasks** </td> </tr> <tr> <td> _ <During which stage of the project will data be captured?> _ </td> </tr> </table>
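As an illustration of the DS4.2 preparation step described above (per-language CSV exports of the web survey merged into a single file before publication), a minimal sketch is given below. The file contents, column labels and the added language tag are hypothetical; the actual exports come from Google Sheets and the final conversion to a Microsoft Excel (.xlsx) file is a separate step.

```python
import csv
import io

def merge_survey_exports(named_csvs):
    """Merge per-language CSV exports into one table, tagging each
    entry (row) with its source language, as sketched for DS4.2."""
    merged = []
    for language, text in named_csvs.items():
        reader = csv.DictReader(io.StringIO(text))
        for row in reader:
            row["language"] = language  # hypothetical extra column
            merged.append(row)
    return merged

# Hypothetical two-language example (real exports have the full
# demographics / PD detection / interventions columns).
english = "group,age\nPD patient,67\n"
greek = "group,age\nhealthy adult,54\n"
rows = merge_survey_exports({"English": english, "Greek": greek})
```

The merged rows would then be translated into English where needed and written out as the single cumulative Excel file mentioned in the dataset description.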
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1107_AEOLIX_690797.md
**Executive Summary** According to the "Guidelines on Data Management in Horizon 2020" 1 , the aim of the Data Management Plan is to produce data so that researchers may benefit from their use directly, and/or apply their methods to data generated by research in Horizon 2020. The AEOLIX Data Management Plan governs all data generated and collected during the project, the standards that will be used, how the research data will be preserved and what parts of the datasets will be shared for verification or reuse. The documentation of this plan is part of work package 4 (WP4) Collaborative data exchange certification framework. The data relevant in the AEOLIX project can be roughly divided into three categories: 1. Data retrieved from project participants, LL actors and other stakeholders by non-technical means, e.g. by participation in surveys, through interviews etc. 2. Data used in the technical implementation of the AEOLIX platform, e.g. position data of vehicles or cargo, scheduling data (ETA), customs or billing information. 3. Project internal administration data such as meeting agendas and minutes, cost statements, internal reports. The category 1 data will be mainly used in WP2 (Lessons learned, strategic needs and requirements), which is focused on experiences and future needs of potential AEOLIX users, and WP6 (AEOLIX evaluation framework), which concentrates on the results achieved in the LL operations. Category 2 data is used in the application of the AEOLIX platform at the different living labs and is of relevance to WP3 (AEOLIX IT Ecosystem), dealing with the development of the AEOLIX system, and WP5 (Verification with Living Labs), which aims at the implementation of AEOLIX in real-world situations, i.e. the Living Labs (LL). The handling of category 2 data is considered highly complex and can only be partly described in the present document. 
The details will be presented in the following deliverables of WP4: * D4.1: Best practices for AEOLIX collaborative data sharing * D4.2: AEOLIX high level architecture guideline for collaborative data exchange * D4.3: Guidelines for an AEOLIX framework for legal, trusted, interoperable and high quality data exchange Category 3 data is AEOLIX consortium internal and may have different levels of confidentiality. The agenda of AEOLIX general assemblies may for example be available to the public whereas cost statements of individual partners will not be disclosed outside the consortium. The format of the plan follows the Horizon 2020 template, in order: * To comply with the Horizon 2020 Open Data Access Guidelines * To embed the AEOLIX project in the EU policy on data management which is increasingly geared towards providing open access to data that is gathered with funds from the EU. * To ensure that all possible data will be accessible to other researchers, helping to streamline the research process from start to finish. **1\. Introduction** ##### 1.1 Purpose of Document The purpose of the present document is the provision of a Data Management Plan (DMP). The AEOLIX Data Management Plan (DMP) is defined within the context of the Grant Agreement. The full text of this Deliverable addresses the “Open Access to Research Data” Objective. The document describes the level of confidentiality of different data sets, defines data governance and privacy principles, and the strategy towards contributing to the Open Data Research Pilot. It has been prepared by taking into account the template of the “Guidelines on Data Management in Horizon 2020”. In addition, the project has dedicated a specific Work Package called “Data governance” with the aim of addressing all issues related to data within the project. The intention of the AEOLIX project is to publish all non-confidential results under Open Access, regarding all scientific publications produced along the project lifecycle. 
Although it is preferable to publish in online publications free of charge, where this is not possible the partner concerned will cover the associated costs. The AEOLIX Open Access strategy relates to the EU “Open” paradigm for publishing project results, which foresees two possible ways of accessing the published results: Gold Open Access, which grants immediate access through a publisher, and Green Open Access. The consortium will decide which data will be part of an open data scheme to become available for the Open Access Data initiative. Various mechanisms to protect identities and sensitive information will be enforced as a part of the data management actions. The DMP is Deliverable D4.6 due on month 6 of the project. After that due date the DMP will remain a living document throughout the project's lifetime. The present first version may evolve during the project according to the progress of project activities. ### 1.2 Procedures to update the Data Management Plan The strategy for updating the present document has not been fully developed at this early stage of the AEOLIX project. However, certain factors have been identified that may trigger the publication of new versions of the DMP, such as: * Change of the scope of a Work Package (WP); * Addition/Deletion of deliverables from WPs; * Input from Living Labs (LL) operation; * Inputs from work on deliverables D4.1, D4.2 and/or D4.3. ##### 1.3 Intended audience This deliverable is aimed at the AEOLIX Consortium as a reference document with the objective of covering all aspects of data management for all WPs. However, it may be of special importance to the LL within the project consortium as they may need to exchange data with the partners active in the development of the AEOLIX architecture in WP3. This is a public document which can be downloaded by external stakeholders who might be interested in cooperating with the AEOLIX consortium. **2\. 
Literature Review of Data Management** ### 2.1 Definition The Data Management Plan will describe the data management life cycle for the data to be collected, processed and/or generated by the AEOLIX project, during the project lifetime itself, and once it is completed. This will help to better manage the data, meet funder requirements, and help others use these data when they are shared, ensuring that these data are well managed in the present, and prepared for preservation in the future. ### 2.2 Personal Data Personal Data refers to any information relating to an identified or identifiable natural person, meaning by identifiable person one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity. Data is considered personal when someone is able to connect information to a specific person, even when the person or entity that is holding the personal data cannot make the connection directly (e.g. name, address, e-mail), but has or may have access to information allowing such identification (e.g. through telephone numbers, credit card numbers, license plate numbers, etc.). #### 2.2.1 Personal Data Protection The fundamental right to the protection of personal data is explicitly recognised in Article 8 of the Charter of Fundamental Rights of the European Union, and in Article 16 of the Treaty on the Functioning of the European Union, according to which _everybody has the right to the protection of personal data concerning them_ . Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. 
#### 2.2.2 Personal Data Processing and movement The European Parliament and Council Directive 95/46/EC _on the protection of individuals with regard to the processing of personal data and on the free movement of such data_ establishes in its Article 6 that Personal Data must be processed fairly and lawfully, and collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes. These data must also be adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed, and also accurate (and, where necessary, kept up to date), keeping them in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed. Article 7 continues to determine the rules for processing data, allowing this only if the data subject has unambiguously given his consent, meaning by this (as per Article 2) “ _any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed”._ Another circumstance required for processing personal data is that it is necessary: * for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract; * for the compliance with a legal obligation to which the controller is subject; * in order to protect the vital interests of the data subject; * for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller or in a third party to whom the data are disclosed; * for the purposes of the legitimate interests pursued by the controller or by the third party or parties to whom the data are disclosed, except where such interests are overridden by the interests for fundamental rights and freedoms 
of the data subject which require protection. #### 2.2.3 Information given to the data subject According to Article 10 of the Directive, the data subject from whom data relating to him are collected must be informed by the controller or his representative, specifically about: * the identity of the controller and of his representative, if any; * the purposes of the processing for which the data are intended; * any further information such as the recipients or categories of recipients of the data, whether replies to the questions are obligatory or voluntary, as well as the possible consequences of failure to reply, the existence of the right of access to and the right to rectify the data concerning him, in so far as such further information is necessary, having regard to the specific circumstances in which the data are collected, to guarantee fair processing in respect of the data subject. Article 11 establishes that the same information will be given to the data subject where the data have not been obtained from him, at the time of undertaking the recording of personal data or, if a disclosure to a third party is envisaged, no later than the time when the data are first disclosed, except for the cases where, in particular for processing for statistical purposes or for the purposes of historical or scientific research, the provision of such information proves impossible or would involve a disproportionate effort or if recording or disclosure is expressly laid down by law. ## 3\. AEOLIX data ##### 3.1 Data summary The data relevant in the AEOLIX project can be roughly divided into three categories: 1. Data retrieved from project participants, LL actors and other stakeholders by non-technical means, e.g. by participation in surveys, through interviews etc. 2. Data used in the technical implementation of the AEOLIX platform (supply chain data coming from the living labs, other open data, e.g. 
statistics, GPS, metadata from standards or information exchanges) 3. Project internal administration data such as meeting agendas and minutes, cost statements, internal reports. The category 1 data will be mainly used in WP2 (Lessons learned, strategic needs and requirements) and WP6 (AEOLIX evaluation framework). WP2 is focused on experiences and future needs of potential AEOLIX users and will gather its data by a set of measures: 1. Web-based survey to establish the state-of-the-art and the problem awareness (lessons learned); 2. Analysis of information obtained in other WPs; 3. Individual interviews based on a defined questionnaire; 4. Delphi analysis in physical meetings. Delphi is used in complex situations and its aim is to reach the correct response through consensus. Whereas in the original technique question rounds are administered in writing, WP2 of the AEOLIX project has focused on physical meetings where the selected experts have been invited to express their opinions about the proposed issues, answering several questions and reaching a consensus together. WP6 concentrates on the results achieved in the LL operations. As the methodology on how to obtain the results is part of the work of WP6 (Cost-Benefit Analysis evaluation tool, Multi-Criteria Analysis and Analytic Hierarchy Process), the exact nature of the resulting data cannot be described at the time of the writing of the present data management plan. Category 2 data is used in the application of the AEOLIX platform at the different living labs and is of relevance to WP3 and WP5. This data will carry a wide variety of information (supply chain data, geographic positions, schedules, legal data, price information, etc.) of different levels of confidentiality. Furthermore, the data will be present in many different formats that will be processed by the AEOLIX connectivity engine. 
The concept of the AEOLIX platform does not imply the long-term storage of the data under processing; however, short-term storage of real-time data may not be completely avoidable. The handling of this data is highly complex and not completely within the scope of the present document; some considerations are made in clause 3.2 of this DMP. Further details will be presented in the following deliverables of WP4: 1. D4.1: Best practices for AEOLIX collaborative data sharing 2. D4.2: AEOLIX high level architecture guideline for collaborative data exchange 3. D4.3: Guidelines for an AEOLIX framework for legal, trusted, interoperable and high quality data exchange Category 3 data is AEOLIX consortium internal and may have different levels of confidentiality. The agenda of AEOLIX general assemblies may for example be available to the public whereas cost statements of individual partners will not be disclosed outside the consortium. ##### 3.2 Accessibility of data The raw category 1 data (e.g. results from the web survey, qualitative KPIs from LL evaluations) will be kept in restricted storage areas as they are considered private and/or confidential. However, the aggregated results derived from the different data gathering methods will be made available through publicly available AEOLIX project deliverables, mainly of WP2 and WP6. They will be available through the AEOLIX website. Category 2 data is mostly owned data that the data owners only want to disclose to partners with which they have a business relationship. The nature of the data exchanges between AEOLIX partners, e.g. in the context of a LL, may be manifold and aspects such as confidentiality or data security are usually covered by mutual agreements between the partners that exchange and process data. Consequently, category 2 data is not publicly available to any party outside of the AEOLIX consortium and even within the consortium strict rules for data access have to be established (see D4.3). 
This is of special importance as data from many partners will run through the AEOLIX platform, where it may be manipulated (e.g. data conversion) and exchanged between different data handling/displaying entities. The concept of the AEOLIX platform does not imply the long-term storage of the data under processing; however, short-term storage of real-time data may not be completely avoidable. During the operation of the AEOLIX project, partners of the consortium will have access to the web-based project management tool Projectplace. This tool allows the exchange and sharing of any kind of information. Within the AEOLIX context, it will be mainly used to exchange and archive category 3 data. ##### 3.3 Interoperability of data Data interoperability only applies to category 2 data and is at the heart of the AEOLIX project. The whole AEOLIX concept aims at achieving data interoperability between all players in the technical area of logistics. The AEOLIX Platform represents a critical way forward towards supply chain interoperability through decentralized information sharing. AEOLIX is established via cloud services where data, applications, and on-premises and cloud-based processes and services from multiple actors can be connected - enhancing collaboration and interoperability, potentially across the entire freight transportation system. Data governance in AEOLIX will ensure the proper Data Management of important and sensitive data, including customer information and product designs, to be appropriately managed, anonymized, encrypted and sanitized, managing risks which could arise upon their access by third parties. The details of achieving data interoperability will be presented in the deliverables of WP3. ##### 3.4 Reusability of data The concepts of the reuse of data apply to category 2 data and will be investigated throughout the operation of the LLs in WP5 (AEOLIX Verification with Living Labs). 
Within WP8 (Collaborative business models and AEOLIX exploitation) the principles for the exploitation of the AEOLIX platform will be developed. This will also include aspects of data reusability. The following aspects will be covered by WP8: * How the data will be licensed to permit the widest reuse possible * How the data produced and/or used in the project is usable by third parties, in particular after the end of the project * How long the data will remain re-usable ##### 3.5 Allocation of resources to Data Management The AEOLIX project has a whole WP dedicated to data management issues. WP4 (Collaborative data exchange certification framework) is considered complementary to the technical approach for a collaborative architecture as set up in WP3 (AEOLIX IT Ecosystem) and aims to establish a sustained flow of information across modes, entities and countries enabled by interoperable systems and standards, allowing the re-use of data and trustworthiness of the data content. AEOLIX has assigned the WP leader PTV to provide a mechanism for controlling changes related to the data, data processes, and the data architecture of AEOLIX and to act as data protection officer. WP4 addresses the structural problems and barriers of data exchange and sharing and develops solutions to overcome them. The AEOLIX governance model addresses the different levels of data exchange, ranging from strategic through tactical to operational issues. Key stakeholders in industry and administrations are involved. More specifically WP4 aims to: * Identify and assess key AEOLIX practices for collaborative data sharing (liaise with LL) * Define an architecture for cross entity data exchange and sharing * Develop a framework for trusted, interoperable and high quality data exchange including legal aspects * Develop a certification framework to certify companies that want to join the AEOLIX platform In total, a resource of 112 man months has been allocated to WP4. 
However, this also includes the development of certification schemes and the organisation and execution of two TESTFEST events.

##### 3.6 Data security

The AEOLIX consortium will use the web-based tool Projectplace to store and exchange category 3 data within the partnership. Access to Projectplace is not public, and the ERTICO project management team will assign and manage the access rights of the project partners to the tool. AEOLIX stakeholders should stay in control of who has access to their data. Privacy and data security are very important strategic requirements. The Conceptual Security and Privacy Taxonomy will be applied. It contains four main Big Data security & privacy principles:

* Data confidentiality topic: safeguarding the confidentiality of personal data.
* Data provenance topic: safeguarding the integrity and validation of personal data.
* System health topic: safeguarding the availability of personal data.
* Public policy, social, and cross-organizational topics: safeguarding the specific Big Data privacy and data protection requirements.

Project participants are already covered by a Consortium Agreement, which encompasses Non-Disclosure clauses, as well as by their Contract with the EC, which itself includes clauses on the treatment of data. In addition, NDAs will be implemented to cover future users, complemented by security measures in their processing systems aimed at a level of protection which, according to the principle of proportionality, should be at least the same as that provided to their own confidential information.

##### 4\. Data confidentiality

The data to be considered confidential is not so much personal data or information, as these are protected under basic data protection legislation, as analysed in the previous section. Here it is particularly technical and commercial information that is placed within the scope of protection.
This information may have intellectual property rights protection over it, but it may well be the case that it is unprotected raw data that nevertheless possesses business value for the disclosing party. The preferred means of protecting confidential information exchanged during execution of the AEOLIX project are Non-Disclosure Agreements (NDAs), and these are included in the present Consortium Agreement, which has been signed by all partners. However, after the end of the research project, when results will be rolled out to organizations which are not partners in AEOLIX and therefore not covered by the present Consortium Agreement, it is essential that comprehensive NDAs are signed prior to any disclosure of information. Such agreements should be drafted by the Project Coordinator and entered into whenever deemed necessary before confidential information is exchanged. Regarding confidentiality measures for the different AEOLIX data categories:

1. Data retrieved from project participants, LL actors and other stakeholders by non-technical means, e.g. by participation in surveys, through interviews etc. Participants in interviews or other activities must be asked to sign a relevant form, in order for their consent to be demonstrable in writing. The form must be composed in accordance with legal requirements, i.e. among others it must describe how the information will be used and how the person concerned may review or amend it. Such data are responses to interviews, or views from Delphi studies. More details can be found in the data management aspects per WP (Section 5).

2. Data used in the technical implementation of the AEOLIX platform. These supply chain data coming from the Living Labs are often commercial data (shipper's data like invoices, inventory data, GPS position data, delivery information, data coming from services of the Living Lab). This information constitutes valuable commercial business information, and unauthorized access to it could cause serious damage to AEOLIX partners.
Therefore, as a general principle, this data will not be shared outside the consortium, and access within the consortium will be strictly limited to those parties agreed by the data owner. Other data in this category come from the evaluation and impact assessment results. Data in this category require the agreement of all related commercial parties in order to eliminate sensitive information. More details can be found in Section 5.

3. Project-internal administration data such as meeting agendas and minutes, cost statements and internal reports. These administrative data and project minutes are confidential, only for members of the Consortium (including the Commission services). More details can be found in Section 5.

## 5\. Detailed Data Management Aspects per WP

The data that is collected or generated in AEOLIX contributes to different purposes depending on the objective of the WP it relates to. The following subsections provide an overview of the deliverables per WP, also giving the origin of the data in tables. The tables deliver the following information:

* **Data set reference and name:** Pointer to the deliverable containing the data
* **Data set description:** Description of the data that will be generated or collected
* **Data collection procedures:** Description of how the data was collected or produced
* **Data privacy**: Description of how data is kept confidential, if applicable
* **Data sharing/confidentiality**: Description of how data will be shared, e.g. publicly available or consortium internal only

### 5.1 Data in WP1

Table 1 – Data in WP1

<table> <tr> <th> **WP1** </th> <th> **Project Management** </th> </tr> <tr> <td> **Data set reference and name** </td> <td> Quality and Risk Management Plan </td> </tr> <tr> <td> **Data set description** </td> <td> The AEOLIX Quality Plan will define all quality processes and procedures to be applied throughout the project, including document and deliverable management.
</td> </tr> <tr> <td> **Data collection procedures** </td> <td> Both the Quality and the Risk Management Plan will be developed under the PRINCE2 principles, which are: continued business justification, defined roles and responsibilities, learn from experience, manage by exception, manage by stages, tailor to suit the environment, and focus on products. </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> Confidential, only for members of the Consortium (including the Commission services) </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Coordination Plan for a harmonised coordination of the LLs </td> </tr> <tr> <td> **Data set description** </td> <td> Each Living Lab develops a plan, agreed with the EC, with the aim of demonstrating progress in the AEOLIX LLs </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Each LL will include a Gantt chart (what the various activities are, when each activity begins and ends, how long each activity is scheduled to last, where activities overlap with other activities and by how much, and the start and end date of the whole project). </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> Confidential, only for members of the Consortium (including the Commission services) </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Yearly Management and Progress reports </td> </tr> <tr> <td> **Data set description** </td> <td> The AEOLIX Yearly Management and Progress reports will inform the PO and reviewers of the progress made by the Consortium.
</td> </tr> <tr> <td> **Data collection procedures** </td> <td> Produced by the AEOLIX management team </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> Confidential, only for members of the Consortium (including the Commission services) </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Final report </td> </tr> <tr> <td> **Data set description** </td> <td> The AEOLIX Final Report will summarise the work performed by the Consortium members and the interaction with any external stakeholders </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Produced by the AEOLIX management team </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> Public </td> </tr> </table>

### 5.2 Data in WP2

Table 2 – Data in WP2

<table> <tr> <th> **WP2** </th> <th> **Lessons learned, strategic needs and requirements** </th> </tr> <tr> <td> **Data set reference and name** </td> <td> Lessons learned </td> </tr> <tr> <td> **Data set description** </td> <td> The result is a deliverable giving a clear vision of the technical and non-technical elements that led previous solutions in the logistics field either to success or to failure. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Analysis of the existing approaches; screening of projects, commercial solutions and roadmaps within the domain of AEOLIX. </td> </tr> <tr> <td> **Data collection method** </td> <td> Web-based survey among stakeholders (from Europe and beyond) to identify additional current and future needs and gaps in current provisions. </td> </tr> <tr> <td> **Data privacy** </td> <td> The individual survey answers will be treated confidentially.
</td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D2.1 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> New and revised general needs and requirements </td> </tr> <tr> <td> **Data set description** </td> <td> The requirements are divided into two groups: 1. Functional requirements that define specific behaviour or functions, with a focus on utility 2. Non-functional requirements that define the quality and general behaviour of the actors involved in the system operations </td> </tr> <tr> <td> **Data collection procedures** </td> <td> The data comes from the Living Labs in the project, and much of it will be collected via deliverables D5.2 (Connectivity Gap Analysis) and D5.3 (Logistics Business Needs and Data Identification Report). </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D2.2 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX global needs and requirements </td> </tr> <tr> <td> **Data set description** </td> <td> Information to be collected: 1. Trends in business needs as to demand and supply in logistics services 2. New types of processes envisaged and new kinds of ICT and information services needed 3. Trends in ICT technologies (including secure, resilient and trusted communications and information storage and processing) 4. Trends in ITS technologies, especially those focusing on the driver, the vehicles and the cargo 5. Current and future expected legislative and standards requirements </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Questionnaire as a basis for individual interviews, followed by a two-round Delphi analysis in physical meetings.
</td> </tr> <tr> <td> **Data privacy** </td> <td> The individual interview answers and the discussions during the Delphi analysis will not be disclosed. </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D2.3 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX reference book </td> </tr> <tr> <td> **Data set description** </td> <td> Continuously updated information on AEOLIX requirements </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Establishment of a task force consisting of representatives from logistics actors, software developers, authorities and academia from within the consortium, which performs continuous monitoring of the evolution of demand and supply in the logistics market, of the ICT domain, of information sharing tools, etc. </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> The AEOLIX Reference Book will be continually communicated via a dedicated space on the AEOLIX website and via direct meetings with representatives of other WPs. </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> KPIs and evaluation criteria </td> </tr> <tr> <td> **Data set description** </td> <td> Indicators at local and global levels, e.g. network efficiency, environmental impact, economic sustainability, traffic network management, driver-specific metrics, goods-specific metrics </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Evaluation criteria and performance targets will be set in collaboration with the LL leaders and will take into account use cases, LL setup, historical data, and projections and models of the expected impact of each of the applications at each site.
</td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D2.5 and will serve as input to WP6.1 (Evaluation Framework), WP6.2 (AEOLIX Living Labs Business Impacts), WP6.4 (Business Paradigm Shift Analysis) and WP7.4 (Business Model Scenarios). </td> </tr> </table>

### 5.3 Data in WP3

Table 3 – Data in WP3

<table> <tr> <th> **WP3** </th> <th> **AEOLIX IT Ecosystem** </th> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX Ecosystem technical architecture specification </td> </tr> <tr> <td> **Data set description** </td> <td> Definition of the AEOLIX IT architecture that enables the real-time data-sharing platform, supporting the connectivity network and interoperability of the different systems, applications and objects of the ecosystem through AEOLIX IT platform services </td> </tr> <tr> <td> **Data collection procedures** </td> <td> _Definition of the AEOLIX Reference Architecture_: Description of the different ecosystem services and elements that will comprise the AEOLIX Cloud Ecosystem, detailing the connectivity engine communication protocols that will be used to interact among them, the collaborative ecosystem builder module, and the services to support the functionalities. _Definition of Platform services_: Exchange of specific information among the systems of the ecosystem and the security of those processes. _Definition of additional AEOLIX Platform services_: Management of the ecosystem resources (reporting, statistical information, user administration, support tickets, ecosystem status, admin newsletter, etc.) </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> This data is publicly available as AEOLIX deliverable D3.1 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Connectivity engine reference implementation </td> </tr> <tr> <td> **Data set description** </td> <td> Specification and provision of the Connectivity Infrastructure Services of the AEOLIX Ecosystem, based on the development of data pipelines and cloud messaging technology </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Not applicable </td> </tr> <tr> <td> **Data privacy** </td> <td> The software source code will be non-public. IPR details are for further study. </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> The textual description is publicly available as AEOLIX deliverable D3.2 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX Intelligent Dashboard </td> </tr> <tr> <td> **Data set description** </td> <td> Definition and development of the AEOLIX dashboard as the component that enables logistics stakeholders to create and configure collaborative environments composed of the stakeholders that are part of the logistics network required for the transport of goods and that aim to share information between them. AEOLIX provides an open API giving access to the main AEOLIX IT Connectivity Services, and a Software Development Kit (SDK) that enables developers to join the ecosystem and creates the opportunity to develop new pan-EU logistics applications based on the platform and AEOLIX toolkit capabilities. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Not applicable </td> </tr> <tr> <td> **Data privacy** </td> <td> The software source code will be non-public. IPR details are for further study.
</td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> The textual description is publicly available as AEOLIX deliverable D3.3 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Services and interoperability adapters </td> </tr> <tr> <td> **Data set description** </td> <td> Provision of the technical specification of the AEOLIX data interoperability services and of security recommendations for a secure development of the AEOLIX ecosystem and services. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Proposal of control methods for the main security risks in the architecture. Identification of the most suitable security solutions. </td> </tr> <tr> <td> **Data privacy** </td> <td> Logistics stakeholders use different standards and formats: * EDIFACT * ODETTE * GS1 Standards * Common Framework </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> This data is publicly available as AEOLIX deliverable D3.4 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX API and SDK </td> </tr> <tr> <td> **Data set description** </td> <td> Provision of the procedure and necessary tools to allow developers and public and private services to publish services and apps in the central functionalities provided by AEOLIX for seamless integration into dashboards and intelligent applications. Development and implementation of standard interface mapping procedures. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Not applicable </td> </tr> <tr> <td> **Data privacy** </td> <td> The software source code will be non-public. IPR details are for further study.
</td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> The textual description is publicly available as AEOLIX deliverable D3.5 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX Toolkit </td> </tr> <tr> <td> **Data set description** </td> <td> The toolkit provides a set of functional tools and components that can manage the multitude of logistics information flows that will be accessible through AEOLIX. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Not applicable </td> </tr> <tr> <td> **Data privacy** </td> <td> The software source code will be non-public. IPR details are for further study. </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> The textual description is publicly available as AEOLIX deliverable D3.6 </td> </tr> </table>

### 5.4 Data in WP4

Table 4 – Data in WP4

<table> <tr> <th> **WP4** </th> <th> **Collaborative data exchange certification framework** </th> </tr> <tr> <td> **Data set reference and name** </td> <td> Best practices for AEOLIX collaborative data sharing </td> </tr> <tr> <td> **Data set description** </td> <td> AEOLIX Architecture for Collaborative Data Exchange; complete overview of the high-level governance architecture related to the integration, securing and management of data flows and organisation within AEOLIX. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Practices from different use cases, industry collaborations or PPPs are the basis for this task. Topics and approaches will be defined from the LLs and WP3.
</td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D4.1 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX high-level architecture guideline for collaborative data exchange </td> </tr> <tr> <td> **Data set description** </td> <td> Definition of a generic data exchange and sharing architecture for AEOLIX, describing the processes related to the integration, securing, management and measurement of data flows and organisation. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Based on the AEOLIX LL requirements, taking into account other research initiatives (e.g. NEXTRUST). </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D4.2 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Guidelines for an AEOLIX framework for legal, trusted, interoperable and high-quality data exchange </td> </tr> <tr> <td> **Data set description** </td> <td> Guidelines for the legal and organisational framework: 1. Information ownership, data access levels and scope, and data liability as treated within the EU member states 2. Data re-use and sharing 3. Data quality and accuracy 4. Data interoperability </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Extension of work from WP4.2 </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D4.3; a high-level description of the data management within AEOLIX will be publicly available in deliverable D4.6.
</td> </tr> <tr> <td> **Archiving and preservation** **(including storage and backup)** </td> <td> </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Specification for certification framework for the AEOLIX platform </td> </tr> <tr> <td> **Data set description** </td> <td> Framework for certifying companies that want to join the AEOLIX platform. Objectives: 1. Evaluation of compliance of devices and services with the data governance model 2. Specification of the AEOLIX certification framework to check compliance with the data governance model </td> </tr> <tr> <td> **Data collection procedures** </td> <td> The data from the guidelines in D4.2 and D4.3 will be used as input to develop the certification framework. Feedback will be included by performing a test certification at the TESTFESTs. </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D4.5; the results from the TESTFEST events will be public in deliverable D4.5 </td> </tr> </table>

### 5.5 Data in WP5

Table 5 – Data in WP5

<table> <tr> <th> **WP5** </th> <th> **Verification with Living Labs** </th> </tr> <tr> <td> **Data set reference and name** </td> <td> Connectivity gap analysis </td> </tr> <tr> <td> **Data set description** </td> <td> Definition of the data flows and gaps as experienced by the logistics partners and official agencies which collaborate to improve supply chain performance across specified corridors </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Data Issue Analysis for each LL, describing the specific data gaps that currently hamper effective operations.
</td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> This data is publicly available as AEOLIX deliverable D5.2 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Logistics Business Needs and Data Identification </td> </tr> <tr> <td> **Data set description** </td> <td> Analysis of the business needs, the data needed, the sources of such data, and the formats available and needed in each LL. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Identification of the specific sources of data needed and the format in which it exists and is provided. Definition of the procedures and mechanisms required for authorization of data access and import, the level of data analysis and manipulation needed by users within the Intelligent Dashboard, which data will be supplied by or sent to partners’ existing in-house systems, and what Toolkit solutions may be deployed. </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data sharing/confidentiality** </td> <td> This data is publicly available as AEOLIX deliverable D5.3 </td> </tr> </table>

### 5.6 Data in WP6

Table 6 – Data in WP6

<table> <tr> <th> **WP6** </th> <th> **Evaluation and impact assessment** </th> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX evaluation framework </td> </tr> <tr> <td> **Data set description** </td> <td> Framework for assessing the impacts of the AEOLIX project: 1. Identification of the operational benefits 2. Mechanism depicting the market penetration </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Cost-Benefit Analysis (CBA) evaluation tool, Multi-Criteria Analysis (MCA) and Analytic Hierarchy Process (AHP). KPIs defined on the basis of those tools, complemented by the KPIs identified in WP2.5.
</td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D6.1 </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> AEOLIX impacts </td> </tr> <tr> <td> **Data set description** </td> <td> The AEOLIX Living Labs impact assessment will present the results of the living lab evaluation, including socio-economic impacts of the AEOLIX project on society and the environment. Continuous presentation of quick-win impacts as identified from the initial implementation of the LLs. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> As described in the AEOLIX evaluation framework </td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverables D6.2, D6.3 and D6.4. D6.3 (AEOLIX quick-win impacts) will be updated quarterly as of project month 15. </td> </tr> <tr> <td> </td> <td> </td> </tr> <tr> <td> **Data set reference and name** </td> <td> Business Paradigm Shift analysis </td> </tr> <tr> <td> **Data set description** </td> <td> Business Paradigm Shift analysis describing the modification of the business models required by the implementation of the AEOLIX ecosystem and the new collaboration schemes among the stakeholders. </td> </tr> <tr> <td> **Data collection procedures** </td> <td> Based on the impacts of the Living Labs in comparison to a benchmark. Paradigm shift analysis through dynamic monitoring of the performance and impacts of the pilots.
</td> </tr> <tr> <td> **Data privacy** </td> <td> Not applicable </td> </tr> <tr> <td> **Data Sharing/confidentiality** </td> <td> This data will be publicly available as AEOLIX deliverable D6.5 </td> </tr> <tr> <td> **Archiving and preservation** **(including storage and backup)** </td> <td> </td> </tr> </table>

### 5.7 Data in WP7

WP7 (Dissemination, Communication and Market Outreach) aims to maximise market take-up of the AEOLIX project objectives, concepts and outputs, as realised in the AEOLIX Living Labs, by all relevant logistics stakeholders and public authorities, both during and after the life of the AEOLIX project, by raising awareness about the AEOLIX project, its activities and its achievements. This WP does not deal with data in itself. However, it may be necessary to define the status of certain documents, e.g. slide sets of speakers at a conference. The details of the related proceedings will be defined within WP7.

### 5.8 Data in WP8

WP8 (Collaborative business models and AEOLIX exploitation) addresses all deployment/business aspects that are essential for ensuring a sustainable after-project life of the AEOLIX platform, piloted services and established Living Labs. This WP does not deal with data in itself. However, certain data used in the development of, e.g., deliverable D8.6 (Cost-Benefit Analysis) may need consideration with regard to data privacy. These data are related to the costs and benefits of the different systems. They will need to be kept confidential within the consortium for analysis but can be disseminated externally as an overall evaluation of the project. The details of the related proceedings will be defined within WP8.

**6\. Ethics**

The AEOLIX partners are to comply with the ethical principles as set out in Article 34 of the Grant Agreement, which states that they must ensure that the activities under the action have an exclusive focus on civil applications and must be carried out in compliance with:

* ethical principles (including the highest standards of research integrity, as set out, for instance, in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and
* applicable international, EU and national law.

Partners will deploy all the efforts needed to ensure a high level of transparency in data treatment and compliance with national and European legislation. The ethics procedures in the involved countries will be analysed and, if needed, partners will collect the necessary permissions in their entity/country. The involved AEOLIX parties that will lead the design and implementation of the overall system architecture and data feeding components have notified the National Authority for Protection of Personal Data (Garante per la protezione dei dati personali, no. 2014-061000196946) in order to be qualified, as required by the national law (Legislative Decree no. 196 of 30th June 2003), to manage data about the position and tracking of persons and objects for research activities. Deliverable D9.1 PODP – Requirement 1 describes the AEOLIX procedures for research actions involving data collection. Each beneficiary must submit notifications for activities raising ethical issues (e.g. personal data collection) to the coordinator in good time, and before the beginning of an activity raising an ethical issue (e.g. prior to any personal data being collected), the coordinator must submit to the Agency a copy of:

* the Ethics Committee opinion required under national law and
* any notification or authorisation for activities raising ethical issues required under national law.
This act should have received favourable opinions from the relevant Ethics Committee and, if applicable, the regulatory approvals of the competent national or local authorities in the countries where the research is going to be carried out.

**7\. Confidentiality**

All AEOLIX partners must keep any data, documents or other material confidential during the implementation of the project and for four years after the project period set out in Article 3 (36 months), as per Article 36 of the Grant Agreement (GA). Further detail on confidentiality can be found in Article 36 of the GA. This confidential information will only be used to implement the Agreement; consequently, the partners may disclose confidential information to their personnel or third parties involved in the action only if they:

* need to know it to implement the Agreement, and
* are bound by an obligation of confidentiality.

Nevertheless, the confidentiality obligations no longer apply if:

* the disclosing party agrees to release the other party;
* the information was already known by the recipient or is given to the recipient without an obligation of confidentiality by a third party that was not bound by any obligation of confidentiality;
* the recipient proves that the information was developed without the use of confidential information;
* the information becomes generally and publicly available without breaching any confidentiality obligation, or
* the disclosure of the information is required by EU or national law.
**ABBREVIATIONS** <table> <tr> <th> DMP </th> <th> Data management plan </th> </tr> <tr> <td> DSM </td> <td> Demand side management </td> </tr> <tr> <td> DWH </td> <td> Data warehouse </td> </tr> <tr> <td> EIM </td> <td> Exploitation and Innovation Manager </td> </tr> <tr> <td> EMS </td> <td> Energy models system </td> </tr> <tr> <td> LCA </td> <td> Life Cycle Assessment </td> </tr> <tr> <td> RES </td> <td> Renewable Energy Sources </td> </tr> </table> **SCOPE OF THE DOCUMENT** This document provides the draft version of the Data Management Plan (DMP) for the REFLEX project according to the Open Research Data Pilot (ORD pilot) under Horizon 2020. The purpose of the DMP is to support the data management life cycle of all data that will be collected, processed or generated by the project. The document structure and contents are based on the Guidelines on FAIR Data Management in Horizon 2020 (Version 3.0, 26 July 2016) and on the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 (Version 2.1, 15 February 2016). The document was generated using the Digital Curation Centre's DMPonline tool. The following sections outline the types of collected and generated data, how these data will be exploited and made accessible for verification and re-use, and how data will be curated and preserved upon closure of the project. The DMP is a living document and will be continuously updated during the project. **ADMIN DETAILS** **Project Name:** REFLEX (Horizon 2020 DMP) **Grant Agreement No.:** 691685 **Principal Investigator / Researcher:** Prof. Dominik Möst, TU Dresden **Project Description:** The future energy system is challenged by the intermittent nature of renewables and therefore requires several flexibility options. Still, the interaction between different options, the optimal portfolio and the impact on environment and society are unknown.
It is thus the core objective of REFLEX to analyse and evaluate the development towards a low-carbon energy system with focus on flexibility options in the EU to support the implementation of the SET-Plan. The analysis is based on a modelling environment that considers the full extent to which current and future energy technologies and policies interfere and how they affect the environment and society, while considering technological learning of low-carbon and flexibility technologies. For this purpose, REFLEX brings together the comprehensive expertise and competences of renowned European experts from six different countries. Each partner focusses on one of the research fields techno-economic learning, fundamental energy system modelling or environmental and social life cycle assessment. To link and apply these three research fields in a compatible way, an innovative and comprehensive energy models system (EMS) is developed, which couples the models and tools from all REFLEX partners. It is based on a common database and scenario framework. The results from the EMS will help to understand the complex links, interactions and interdependencies between different actors, available technologies and the impact of the different interventions on all levels, from the individual to the whole energy system. In this way, the knowledge base for decision-making concerning feasibility, effectiveness, costs and impacts of different policy measures will be strengthened, which will assist policy makers and support the implementation of the SET-Plan. Stakeholders will be actively involved during the entire project, from the definition of scenarios to the dissemination and exploitation of results via workshops, publications and a project website. **Nature:** Research and innovation actions based on energy systems modelling **Research Questions:** 1. How do current and future energy technologies and policies interfere? 2.
What will be an optimal combination of flexibility options to cope with the future flexibility needs? 3. How do these technologies and policy measures affect the environment, economy and society? **Purpose:** Support for the implementation of the SET-Plan: * Analysing the impacts of technological development and innovation on the energy system and its dynamics * Comparative assessment of the impacts and the sustainability performance of all relevant energy technologies * Assessing the related impacts on the environment, society and economy * Analysing technology policy measures in the framework of the SET-Plan * Understanding the complex links/interactions/interdependencies between the different actors, the available technologies and the impact of the different interventions on all levels from the individual to the whole energy system * Providing model based decision support tools for the different actors in the energy system in order to facilitate handling the complex system **Funder:** European Commission (Horizon 2020) **1 DATA SUMMARY** The core objective of the REFLEX project is to analyse and evaluate the development towards a low-carbon energy system in the EU up to the year 2050. The focus lies on flexibility options that support a better system integration of Renewable Energy Sources (RES). The analysis and the assessment of REFLEX are based on a modelling environment that considers the full extent to which current and future energy technologies and policies interfere and how they affect the environment and society. ### 1.1 PURPOSE OF DATA COLLECTION AND GENERATION The purpose of data collection and preparation within REFLEX is to provide the input data needed for the applied mathematical energy system models. The model pool of the REFLEX partners contains bottom-up simulation tools and fundamental system optimisation models at the national and thus also the European level, as well as approaches for Life Cycle Assessment (LCA).
Typically, one model cannot cover all aspects of an energy system or the implications of specific policies. Each of these different models focuses on a specific sector or aspect (heat, electricity, mobility, environmental/social impacts etc.) of the energy system. For analysing and answering the given research questions (see Admin Details above), the different models and approaches will be coupled into a so-called integrated Energy Model System (EMS). Applying the EMS allows an in-depth and at the same time holistic assessment of the system transformation and shall contribute to the scientific underpinning of the SET-Plan. The final result data of the EMS help to understand and investigate the complex links, interactions and interdependencies between the different actors and technologies within the energy system as well as their impact on society and environment. The results of a model-based analysis depend not only on the chosen methodology, but also on the quality of the data used. For a consistent analysis within the EMS in REFLEX, a common database with harmonised datasets is needed. It will be implemented in the form of a Data Warehouse 1 (DWH). The DWH of the REFLEX project will contain four groups of data (see Table 1 for an overview), which will be explained in more detail in the following sections.
**Table 1: Groups of data within the DWH of the REFLEX project** <table> <tr> <th> **Data group** </th> <th> **Description/Contains** </th> </tr> <tr> <td> **Existing model input data** </td> <td> * Data collected, generated or purchased from commercial providers by a project partner before the start of the REFLEX project * Input data collected, generated or purchased from commercial providers by a project partner in the context of projects on behalf of other clients run in parallel to the REFLEX project </td> </tr> <tr> <td> **Collected and generated new model input data** </td> <td> * Data collected from publicly available sources or purchased from commercial providers by a project partner in the context of REFLEX * Data collected through surveys conducted by a project partner in the context of REFLEX * Data generated based on existing, newly collected or newly purchased data by a project partner in the context of REFLEX </td> </tr> <tr> <td> **Generated intermediate model output data** </td> <td> \- Intermediate results of the model applications for data exchange between the models or for further assessments within the project; for this reason the 2nd and 3rd data groups may overlap </td> </tr> <tr> <td> **Generated final result data of the EMS** </td> <td> \- Final results of REFLEX generated by model applications, e.g. CO 2 emissions, energy demand, technology impact evaluation etc. </td> </tr> </table> ### 1.2 EXISTING MODEL INPUT DATA Energy system models need many different input data for modelling the real world. Each of the applied models in REFLEX has already been used as a stand-alone application. Therefore each model has its own database with already existing data. As a result, the input data are rather model-specific and their unconditional application across several models is limited. They originate from previous work and own assumptions of the project partners as well as from the literature and have been developed over many years.
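The metadata columns used in the data overview tables of this section (dataset category, sub-categories, period, spatial reference, source, values per scenario) can be sketched as a simple record structure. This is a minimal, purely illustrative sketch; the class and field names are assumptions, not the actual DWH schema:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DwhDataset:
    """One harmonised dataset entry in the DWH (hypothetical sketch)."""
    category: str              # e.g. "Power plants efficiency"
    subcategories: int         # number of sub-categories (0 = none)
    period: Tuple[int, int]    # covered period, e.g. (2010, 2050)
    spatial_reference: str     # e.g. "EU28" or "NUTS 0 (EU28+NO+CH)"
    source: str                # literature reference or "own assumptions"
    values_per_scenario: int   # quantity of values stored per scenario

# Example entry, mirroring one row of the existing input data overview
# (values-per-scenario quantity not stated there, so 0 is used here):
efficiency = DwhDataset("Power plants efficiency", 11, (2010, 2050),
                        "EU28", "DIW 2013", 0)
```

Keeping this metadata alongside every dataset is what makes harmonisation across models checkable: two models claiming to use "the same" dataset can be compared on category, period and spatial reference.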
Some of this existing data will be re-used in the REFLEX project, if they are up to date or if no better data are available. Not all of the existing data are relevant for the DWH. Only data that * are needed for more than one model within the EMS and therefore have to be harmonized and/or * are needed to validate the results presented in scientific publications (so-called "underlying data") will be included in the DWH. The harmonization of input data is necessary to ensure a consistent analysis within the EMS. For the same information, the same dataset (values) has to be used in all models. The consortium decides which of the existing datasets will be used. They will be included in the DWH and will be provided to all models before initializing the EMS run. Table 2 gives an overview of the existing and re-used input datasets/parameters. # Table 2: Existing essential (re-used) datasets in REFLEX <table> <tr> <th> Dataset (category) </th> <th> Subcategories (Quantity) </th> <th> Period (from until) </th> <th> Spatial Reference </th> <th> Source </th> <th> Values per scenario (Quantity) </th> </tr> <tr> <td> Buildings average lifetime of heating systems 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Buildings compliance rate index 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Buildings energy efficiency increase due to mandatory commissioning 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Buildings energy efficiency investment cost (with national financial incentives) 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Buildings minimum energetic standards 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation,
other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Buildings thermal renovation 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Power plants availability </td> <td> 6 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013 </td> <td> </td> </tr> <tr> <td> Power plants efficiency </td> <td> 11 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013 </td> <td> </td> </tr> <tr> <td> Power plants emission factor </td> <td> 3 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> UBA 2014 </td> <td> </td> </tr> <tr> <td> Power plants interest rate </td> <td> 0 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> IEA et al. 2010 </td> <td> </td> </tr> <tr> <td> Power plants lifetime of investment </td> <td> 6 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> IEA et al. 2010 </td> <td> </td> </tr> <tr> <td> Power plants load change cost (depreciation) </td> <td> 0 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013, Traber & Kemfert 2011, own assumptions </td> <td> </td> </tr> <tr> <td> Power plants load change cost (fuel factor) </td> <td> 0 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013, Traber & Kemfert 2011 </td> <td> </td> </tr> <tr> <td> Power plants operations management cost fixed </td> <td> 18 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013, VGB PowerTech 2011a </td> <td> </td> </tr> <tr> <td> Power plants operations management cost variable </td> <td> 11 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013, Traber & Kemfert 2011 </td> <td> </td> </tr> <tr> <td> Power plants specific investment 2 </td> <td> 11 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> DIW 2013 </td> <td> </td> </tr> <tr> <td> Power plants start-up cost (depreciation) </td> <td> 18 </td> <td> 2010 2050 </td> <td> EU28 </td> <td> Traber & Kemfert 2011 </td> <td> </td> </tr> <tr> <td> Power plants start-up cost (fuel factor) </td> <td> 18 </td> <td> 2010 2050 </td> <td> EU28
</td> <td> Traber & Kemfert 2011 </td> <td> </td> </tr> <tr> <td> Tertiary & residential sector tax energy carrier 1 </td> <td> 3 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, other projects, own assumptions </td> <td> </td> </tr> <tr> <td> Vehicles CO 2 standard 1 </td> <td> 2 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> EU regulation, own assumptions </td> <td> </td> </tr> <tr> <td> Vehicles fuel consumption factors </td> <td> 4 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> GHG-TransPoRD project, ASSIST project </td> <td> </td> </tr> <tr> <td> 1 Scenario-dependent; 2 Only for technologies for which no experience curve data are available or will be generated </td> <td> </td> </tr> </table> ### 1.3 COLLECTED AND GENERATED NEW MODEL INPUT DATA Some of the needed input data for the models will be updated or newly defined according to the research questions and the focus of the analysis within the REFLEX project. For this purpose, publicly and commercially available data will be used. Unavailable data will be generated by empirical surveys and/or appropriate assumptions. These data will be included in the DWH and in that way provided as harmonized datasets for all models within the EMS. This group includes data for: * Scenario framework * Demand side management * Experience curves #### 1.3.1 DATA FOR SCENARIO FRAMEWORK These data describe the overall framework for the model-based analysis and include the main macro-economic and societal drivers as well as techno-economic parameters and regulations/conditions of the political environment. To this end, the defined scenario storylines for REFLEX will be translated into quantitative model input parameters until the year 2050, which is the defined horizon for the analysis. They may be defined on a European level, or be further distinguished on a national, sectoral or technological level.
The macro-economic trends and the societal drivers are likely to be based upon official projections provided by the European Commission (e.g. the upcoming Reference Scenario). All political assumptions will be elaborated considering current and past policy implementations and will be discussed with the European Commission and stakeholders. Table 3 gives an overview of the data for the scenario framework. # Table 3: Data for scenario framework <table> <tr> <th> Dataset (category) </th> <th> Subcategories (Quantity) </th> <th> Period (from until) </th> <th> Spatial Reference </th> <th> Source </th> <th> Values per scenario (Quantity) </th> </tr> <tr> <td> GDP-Gross domestic product </td> <td> 0 </td> <td> 2000 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Capros et al. 2016 (Assumptions for NO+CH based on other Horizon 2020 projects) </td> <td> 330 </td> </tr> <tr> <td> POP-population </td> <td> 0 </td> <td> 2000 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Capros et al. 2016 (Assumptions for NO+CH based on other Horizon 2020 projects) </td> <td> 330 </td> </tr> <tr> <td> Price electricity (initial average cost of gross electricity generation) </td> <td> 0 </td> <td> 2000 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> (Assumptions for NO+CH based on other Horizon 2020 projects) </td> <td> 330 </td> </tr> <tr> <td> Price fossil energy carrier </td> <td> 3 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> Capros et al. 2016 </td> <td> 24 </td> </tr> <tr> <td> ETS-CO 2 -Price 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> Capros et al. 2016, own assumptions </td> <td> 8 </td> </tr> <tr> <td> NON-ETS-CO 2 -Price 1 </td> <td> 0 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> own assumptions </td> <td> 8 </td> </tr> <tr> <td> Vehicles CO 2 standard 1 </td> <td> 2 </td> <td> 2015 2050 </td> <td> EU28 </td> <td> Capros et al.
2016 </td> <td> 16 </td> </tr> <tr> <td> 1 Scenario-dependent </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table> #### 1.3.2 DATA FOR DEMAND SIDE MANAGEMENT Relevant data for investigating system flexibility by Demand Side Management (DSM) are rarely available from public and commercial sources. In particular, the available database for the tertiary sector with regard to DSM is incomplete. Therefore, an empirical survey on DSM in the tertiary sector will be conducted with the aim of improving the model input data and filling data gaps. Based on the analysis of the collected specific empirical data, existing datasets will be extended and new datasets generated. The design of the survey will be established by the REFLEX partners, and the survey will be conducted in 10 European countries by an international market research institute. The DSM data for further countries will be deduced from the survey results. Relevant model input parameters for modelling DSM options which are to be deduced from the empirically ascertained data are given in Table 4.
# Table 4: Parameters for modelling DSM options <table> <tr> <th> Dataset (category) </th> <th> Subcategories (Quantity) </th> <th> Period (from until) </th> <th> Spatial Reference </th> <th> Source </th> <th> Values per scenario (Quantity) </th> </tr> <tr> <td> DSM potential (share of flexible load per energy usage process) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> empirical survey, public and commercial sources </td> <td> 28000 </td> </tr> <tr> <td> DSM cost (activation cost per energy usage process) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> empirical survey, public and commercial sources </td> <td> 28000 </td> </tr> <tr> <td> DSM time of interfere (maximum load reduction time) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> empirical survey, public and commercial sources </td> <td> 28000 </td> </tr> <tr> <td> DSM number of interventions (frequency of DSM measures) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> empirical survey, public and commercial sources </td> <td> 28000 </td> </tr> <tr> <td> DSM shifting time (allowed points of time or time periods/frames for DSM) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> empirical survey, public and commercial sources </td> <td> 28000 </td> </tr> </table> #### 1.3.3 DATA FOR EXPERIENCE CURVES To enable endogenous modelling of technological developments and the resulting production cost reductions, experience curves for the most relevant technologies in each sector will be developed and implemented in the sectoral models. Special attention will be given to the determination of uncertainty ranges of progress ratios (i.e. the slopes of the experience curves), as these can have a major impact on modelling results, especially for long-term modelling until 2050.
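The relationship between cumulative production and unit cost described above, and the impact of the progress-ratio uncertainty range, can be made concrete with a one-factor experience curve. This is a generic textbook sketch with illustrative numbers, not REFLEX data or results:

```python
import math

def unit_cost(c0, n0, n, progress_ratio):
    """Unit cost after cumulative production n, given cost c0 at cumulative
    production n0. The progress ratio PR is the cost multiplier per doubling
    of cumulative production (PR = 0.8 means 20 % cost reduction per doubling)."""
    b = -math.log2(progress_ratio)   # learning exponent
    return c0 * (n / n0) ** (-b)

# Two doublings of cumulative production (n = 4 * n0) at PR = 0.8:
# the unit cost falls to 0.8 ** 2 = 64 % of its initial value.
print(unit_cost(1000.0, 1.0, 4.0, 0.8))    # approx. 640.0

# The uncertainty range of the progress ratio matters: for the same two
# doublings, PR between 0.75 and 0.85 spans a wide range of resulting cost.
print(unit_cost(1000.0, 1.0, 4.0, 0.75))   # approx. 562.5
print(unit_cost(1000.0, 1.0, 4.0, 0.85))   # approx. 722.5
```

The spread between the last two values illustrates why uncertainty ranges of progress ratios receive special attention: over many doublings up to 2050, a 0.1 difference in PR compounds into large cost differences.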
In addition, especially for technologies that depend strongly on either the available geographical potential (e. g. wind onshore, offshore, biomass) or on raw material prices, decomposition of the experience curve using a multi-level experience curve will be performed. This allows determination of the most important factors behind cost development, such as variations in steel or oil prices, as well as scale effects. The empirical data needed for defining the experience curves will be collected by interviewing experts, applying specific survey methods and analysing detailed statistics (e. g. construction, production and consumer price indices as well as installed capacities and cost developments in the electricity, heat and mobility sector). Table 5 gives an overview of the technologies for which experience curves will be developed. **Table 5: Defined technologies per sector for which experience curves will be developed** <table> <tr> <th> Category </th> <th> Technology </th> </tr> <tr> <td> Electricity Generation </td> <td> CCS (Membrane, Oxyfuel, Pre, Post) </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> CCGT (Gas) </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> EFCC (Gas) </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Biomass: digestion </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Biomass: gasification </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Biomass: combustion </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Concentrated Solar Power (CSP) </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Geothermal: (dry, flash, binary) </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Photovoltaics: modules (mono/poly, CdTe) </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Photovoltaics: system level aspects </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Photovoltaics: CdTe </td> <td>
</tr> <tr> <td> Electricity Generation </td> <td> Wind: onshore </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> Wind: offshore </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> CHP </td> <td> </td> </tr> <tr> <td> Electricity Generation </td> <td> micro-CHP </td> <td> </td> </tr> </table> **Table 5 (continuation)** <table> <tr> <th> Category </th> <th> Technology </th> </tr> <tr> <td> Electricity Generation </td> <td> Pulverized Coal-fired </td> </tr> <tr> <td> Electricity Generation </td> <td> (P)FBC (Coal) </td> </tr> <tr> <td> Electricity Generation </td> <td> IGCC (Coal) </td> </tr> <tr> <td> Electricity Generation </td> <td> Gas Turbine (incl. Mini- and Micro-Gas-Turbine) </td> </tr> <tr> <td> Electricity Generation </td> <td> Nuclear (generation 2, 3 and 4) </td> </tr> <tr> <td> Electricity Generation </td> <td> Steam turbine (coal/gas) </td> </tr> <tr> <td> Electricity Generation </td> <td> Photovoltaics: CIGS </td> </tr> <tr> <td> Electricity Storage </td> <td> Battery: Lithium-(Ion, Polymer, Air) </td> </tr> <tr> <td> Electricity Storage </td> <td> Battery: Redox-Flow </td> </tr> <tr> <td> Electricity Storage </td> <td> CAES </td> </tr> <tr> <td> Electricity Storage </td> <td> Flywheel </td> </tr> <tr> <td> Electricity Storage </td> <td> Pumped Storage Plants </td> </tr> <tr> <td> Electricity Storage </td> <td> Battery: Molten Salt </td> </tr> <tr> <td> Electricity Storage </td> <td> Battery: NiMH </td> </tr> <tr> <td> Electricity Storage </td> <td> Battery: NiCd </td> </tr> <tr> <td> Heating/cooling </td> <td> Electric boiler (P2H) </td> </tr> <tr> <td> Heating/cooling </td> <td> Heat Pump (air/air) </td> </tr> <tr> <td> Heating/cooling </td> <td> Heat Pump (air/water/ground) </td> </tr> <tr> <td> Heating/cooling </td> <td> Solar Thermal (process heat or large scale) </td> </tr> <tr> <td> Heating/cooling </td> <td> Air conditioning </td> </tr> <tr> <td> Heating/cooling </td> <td> Thermal Energy Storage: (sensible,
underground, Phase Change, TCS, Ice) </td> </tr> <tr> <td> Heating/cooling </td> <td> Night-Storage Heaters </td> </tr> <tr> <td> Heating/cooling </td> <td> Electric boilers for district heating </td> </tr> <tr> <td> Heating/cooling </td> <td> Gas- or oil boiler </td> </tr> <tr> <td> Heating/cooling </td> <td> Fridges, freezers: cooling </td> </tr> <tr> <td> Industry </td> <td> Air separation: membrane </td> </tr> <tr> <td> Industry </td> <td> Air separation: conventional </td> </tr> <tr> <td> Industry </td> <td> Aluminium electrolysis </td> </tr> <tr> <td> Industry </td> <td> Chemical & mechanical pulp production </td> </tr> <tr> <td> Industry </td> <td> Electric arc and induction furnaces: copper, zinc, etc. </td> </tr> <tr> <td> Industry </td> <td> Electrolysis (wet chemical): copper, zinc, etc. </td> </tr> <tr> <td> Industry </td> <td> Electrolysis, chemical industry </td> </tr> <tr> <td> Industry </td> <td> ODC, chlorine electrolysis </td> </tr> <tr> <td> Industry </td> <td> Fischer-Tropsch-synthesis </td> </tr> <tr> <td> Industry </td> <td> Industrial CCS </td> </tr> <tr> <td> Industry </td> <td> Industrial heat </td> </tr> <tr> <td> Industry </td> <td> Cement and raw mill: cement production </td> </tr> <tr> <td> Industry </td> <td> Electric booster: container glass production </td> </tr> <tr> <td> Industry </td> <td> HIsarna process: steel production </td> </tr> <tr> <td> Industry </td> <td> ULCOWIN process: steel production </td> </tr> <tr> <td> Mobility </td> <td> Battery electric vehicles </td> </tr> <tr> <td> Mobility </td> <td> Fuel Cell vehicles </td> </tr> <tr> <td> Mobility </td> <td> Plug-in hybrid electric car </td> </tr> <tr> <td> Mobility </td> <td> Flexifuel car </td> </tr> <tr> <td> Mobility </td> <td> Overhead wiring trucks </td> </tr> <tr> <td> Mobility </td> <td> Biofuels (jet, vehicle, marine) </td> </tr> <tr> <td> Mobility </td> <td> CNG vehicles </td> </tr> <tr> <td> Mobility </td> <td> LPG car </td> </tr> <tr> <td> Mobility </td> <td>
Diesel vehicles </td> </tr> <tr> <td> Mobility </td> <td> Gasoline car </td> </tr> <tr> <td> Power to X </td> <td> Power to Hydrogen </td> </tr> <tr> <td> Power to X </td> <td> **Power to Methane** </td> </tr> <tr> <td> Power to X </td> <td> Power to Methanol </td> </tr> <tr> <td> Power to X </td> <td> **Hydrogen synthesis, ammonia** </td> </tr> <tr> <td> Electricity end-use </td> <td> Lighting: LED </td> </tr> <tr> <td> Electricity end-use </td> <td> **Elevators/escalators** </td> </tr> <tr> <td> Electricity end-use </td> <td> Data centers </td> </tr> <tr> <td> Electricity end-use </td> <td> **Building Energy Management/Automation systems** </td> </tr> <tr> <td> Electricity end-use </td> <td> Ventilation </td> </tr> <tr> <td> Electricity end-use </td> <td> **Dishwashers, dryers, washing machines** </td> </tr> <tr> <td> Other </td> <td> Power Grids (HVDC point-to-point, also meshed systems) </td> </tr> </table> In order to estimate the potential of alternative fuel technologies, mobility patterns and related market potentials will be derived from available mobility surveys, both for Europe (e. g. MID in Germany or UKTS in the UK) and for Asia and North America. The analysis will be focused on major common driving patterns. The main global passenger car markets are analysed in order to identify the global market penetration of electric vehicles, which will substantially influence the demand for Li-ion batteries, and thereby to assess the future prices of electric vehicle batteries and fuel cells based on learning curve theory. In this sense, the global automotive market (especially including North America and Asia) will be taken into account when investigating the uptake of alternative car technologies in Europe. Emission degression data will also be considered in REFLEX. However, the content, nature and scope of these data are still under discussion and will be determined during the project.
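The decomposition of an experience curve into a learning component and a raw-material component, as discussed for the multi-level experience curves above, can be sketched for a battery-like technology. The cost split, the price index and all numbers below are illustrative assumptions, not REFLEX data:

```python
import math

def decomposed_unit_cost(c0, n0, n, progress_ratio,
                         material_share, material_price_index):
    """Two-component experience curve sketch: the non-material cost share
    learns with cumulative production, while the raw-material share follows
    a material price index (1.0 = base-year price level)."""
    b = -math.log2(progress_ratio)
    learning_factor = (n / n0) ** (-b)
    return c0 * ((1.0 - material_share) * learning_factor
                 + material_share * material_price_index)

# Hypothetical battery pack: base cost 300 (cost units) at cumulative
# production n0, PR = 0.8, 30 % of cost driven by raw materials whose
# price index rises to 1.2; after two doublings of global production:
cost = decomposed_unit_cost(300.0, 1.0, 4.0, 0.8, 0.3, 1.2)
# approx. 300 * (0.7 * 0.64 + 0.3 * 1.2), i.e. roughly 242.4
```

Such a decomposition separates genuine learning effects from raw-material price swings, which is exactly what makes the identified cost drivers (e.g. steel or lithium prices versus scale effects) usable in long-term scenarios.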
### 1.4 GENERATED INTERMEDIATE MODEL OUTPUT DATA By coupling the different approaches of the REFLEX partners, the system boundaries of each stand-alone model will be partly dissolved and most exogenous parameters of each model will become endogenous variables of the EMS. This will be done by using the relevant output data of one model as input data of another model. To achieve a stable final state of the EMS within each REFLEX scenario storyline, several iterations with all models are performed. The results generated during these iterations are needed for the data exchange between the different models. Therefore, these data will be included in the DWH and will be provided to each model via a data interface. Table 6 shows the relevant datasets for the data exchange within the EMS. # Table 6: Datasets as intermediate results for data exchange between models within the EMS <table> <tr> <th> Dataset (category) </th> <th> Subcategories (Quantity) </th> <th> Period (from until) </th> <th> Spatial Reference </th> <th> Source </th> <th> Values per scenario/iteration (Quantity) </th> </tr> <tr> <td> Price electricity (hourly, wholesale and retail prices incl.
taxes) </td> <td> 0 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 12,264,000 </td> </tr> <tr> <td> Demand electricity (hourly) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 12,264,000 - 245,280,000 </td> </tr> <tr> <td> Demand electricity for mobility (yearly) </td> <td> 7 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 1,050 </td> </tr> <tr> <td> Demand district heating (yearly) </td> <td> 3 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 450 </td> </tr> <tr> <td> Power plants installed capacity and operating (yearly and hourly) </td> <td> 22 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 5,280 - 67,200 </td> </tr> <tr> <td> Power plants emissions (yearly) </td> <td> 132 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 15,840 </td> </tr> <tr> <td> Power plants demand energy (yearly) </td> <td> 22 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 2,640 </td> </tr> <tr> <td> Mobility demand energy (yearly) </td> <td> 56 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 6,720 </td> </tr> <tr> <td> Mobility emissions (yearly) </td> <td> 56 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 6,720 </td> </tr> </table> ### 1.5 GENERATED FINAL RESULT DATA OF THE EMS After achieving a stable state of the EMS, based on several iterations with all models within each REFLEX scenario storyline, the result data of the different models will be collected and combined within the DWH into the final result
data of the EMS. These data will be analysed to derive the key findings and are the basis for answering the research questions of the REFLEX project (see ADMIN DETAILS). Table 7 gives an overview of the major result data of the EMS. # Table 7: Major result data of the EMS <table> <tr> <th> Dataset (category) </th> <th> Subcategories (Quantity) </th> <th> Period (from until) </th> <th> Spatial Reference </th> <th> Source </th> <th> Values per scenario (Quantity) </th> </tr> <tr> <td> Price electricity average yearly (wholesale and retail prices incl. taxes) </td> <td> 0 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 120-240 </td> </tr> <tr> <td> Demand electricity (yearly) </td> <td> 50 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 6000 </td> </tr> <tr> <td> Demand district heating (yearly) </td> <td> 3 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 450 </td> </tr> <tr> <td> Power plants installed capacity (yearly) </td> <td> 22 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 2640 </td> </tr> <tr> <td> Power plants operation (yearly) </td> <td> 22 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 2640 </td> </tr> <tr> <td> Power plants emissions (yearly) </td> <td> 132 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 15,840 </td> </tr> <tr> <td> Net transfer capacities between countries installed (yearly) </td> <td> 143 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 572 </td> </tr> <tr> <td> Net transfer capacities between countries operation (yearly) </td> <td> 143 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 572 </td> </tr>
<tr> <td> Mobility demand energy (yearly) </td> <td> 56 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 6,720 </td> </tr> <tr> <td> Mobility emissions (yearly) </td> <td> 56 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 6,720 </td> </tr> <tr> <td> Life cycle environmental and resource impacts </td> <td> 19 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 2280 </td> </tr> <tr> <td> Life cycle human health (damage / toxicity) </td> <td> 2 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 240 </td> </tr> <tr> <td> Life cycle societal impacts (risk level) </td> <td> 5 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 600 </td> </tr> <tr> <td> Costs external </td> <td> 1 </td> <td> 2015 2050 </td> <td> NUTS 0 (EU28+NO+CH) </td> <td> Applied energy system models </td> <td> 120 </td> </tr> </table> ### 1.6 DATA UTILITY REFLEX collects and generates a certain amount of research data. On the one hand, these data are necessary to meet the objectives of the project. On the other hand, most of the collected and generated data will be useful for further research and even for the energy industry. #### 1.6.1 DATA FOR SCENARIO FRAMEWORK The collected and prepared data for the scenario framework are tailored to the aim and scope as well as the specific research questions of the REFLEX project and the applied models used to answer them. They are thus primarily useful as underlying data to ensure the transparency of the generated results and the comparability of the project outcomes to other studies with a similar scope of analysis.
However, the data will also be used for updating the existing databases of the different models and for further model-based research by the project partners outside of REFLEX. #### 1.6.2 DATA FOR DEMAND SIDE MANAGEMENT The empirical survey on DSM aims to provide the model input data needed for the REFLEX project. With the collected specific empirical data, the database for investigating system flexibility by DSM will be improved in general, because relevant data – especially for the tertiary sector – are rarely available from public and commercial sources. Thus, existing datasets will be extended as well as new datasets generated. Furthermore, the survey data will allow the identification of promising energy applications and DSM potentials in the selected sector in different European countries. These data have a high potential for re-use after the end of the REFLEX project within further research projects. They are useful for other researchers and for stakeholders from industry and policy making. #### 1.6.3 DATA FOR EXPERIENCE CURVES Endogenizing technological learning through experience curves allows for an enhanced assessment of the impacts of policy measures or alternative incentive schemes on realizable future cost reductions. In addition, in view of current rapid and necessary changes in energy systems (driven partially by policies and partially by markets) and the ensuing need for flexibility, the endogenous modelling of the cost development of existing and new energy-related technologies in bottom-up models will become even more important. However, the data and experience curves required to do so are not readily available. A comprehensive review of many energy supply (and some energy demand) technologies has been published by Junginger et al. (2010), but it requires updating. Since then, recent studies have been published for some individual technologies (e. g. Bolinger and Wiser 2012; Candelise et al. 2013 or Chen et al. 2012).
However, an up-to-date overview is not available. Especially with regard to technologies needed for increasing the flexibility in energy systems (such as storage technologies or DSM devices), few or no experience curves have been published. Thus, to advance the energy models included in REFLEX beyond the state of the art by implementing these experience curves, data collection will be required to devise or update experience curves for existing technologies and to estimate experience curves for new technologies. Furthermore, it will require smart and innovative incorporation and interlinkage of these experience curves in various sectoral energy models to comprehensively assess the effects of technological learning and the demand for increased flexibility in energy systems. The outcome – a state-of-the-art and up-to-date overview of experience curves and the underlying database – could benefit other energy models outside the project (i.e. models developed in the EU as well as worldwide) to meet the challenges of modelling our changing energy systems for the coming decades. #### 1.6.4 GENERATED INTERMEDIATE MODEL OUTPUT DATA These data are only intermediate results of the EMS and will be transferred between the models during the iteration process of the EMS. A relevant benefit of these data for further applications outside the framework of REFLEX is not expected. #### 1.6.5 GENERATED FINAL RESULT DATA OF THE EMS The overall objective of REFLEX is to support the SET-Plan by strengthening the knowledge base for transition paths towards a low-carbon energy system, based on a cross-sectoral analysis of the entire energy system of the European Union.
Due to the complexity of this system, it is obvious that the implementation of the SET-Plan requires in-depth knowledge not only of the interrelationships between the different energy sectors (electricity, heat and mobility) and energy technologies, but also of the interdependencies between energy and non-energy industries, environment (beyond greenhouse gas emissions) and society. The result data of the EMS within REFLEX help to understand and investigate the complex links, interactions and interdependencies between the different actors and technologies within the energy system as well as their impact on society and environment. Based on the EMS result data, recommendations for effective strategies for a transition of the European energy system to a low-carbon system will be derived. Policy makers at EU level as well as at regional level can use these findings when developing policy measures. Furthermore, the data of the REFLEX project can be used as a reference or starting point for further research work on the future design of the energy system of the European Union. ### 1.7 DATA PROTECTION AND EXPLOITATION STRATEGY In order to ensure efficient implementation of dissemination and exploitation activities amongst the participants, a Consortium Agreement (CA) was signed by all partners. The CA deals, among other things, with the exact details of the participants’ background data and with the rights to, the protection of and the exploitation of data/results generated solely and/or jointly during the project. Moreover, the CA sets up specific rules on how to deal with dissemination activities and to ensure open access to all peer-reviewed scientific publications. The model used is the DESCA template, version 1.0 (www.desca-2020.eu). The following basic rules apply: * All participants define their individual background data required for their successful participation in the project (own model input data and commercial model input data purchased before the start of the project).
The rights to this background remain with the respective owner, but royalty-free access is granted to other participants if not restricted by third parties and if it is required to enable other participants to carry out their research and development activities in the context of the project. * During the project: Background data that is acquired by individual partners during the project, e. g. in the context of projects on behalf of other clients run in parallel, will be treated as pre-existing background data. * The property rights to data collected and data/results generated during the project belong to those involved in its collection and generation. When more than one consortium member is involved in the creation of results, the results will be jointly owned by the respective consortium members. Dissemination and exploitation of data and results will be executed in accordance with EU laws and with respect to specific laws in the participating countries. Before any dissemination activity takes place, the respective legal aspects will be examined and clarified. This is particularly the case for data from the DSM survey and data purchased from commercial providers. The possibility of protecting results generated within the project (consortium) will also be examined before publication. All participants have departments specifically devoted to managing intellectual property. These departments will manage the relevant protection processes. Within REFLEX, the dissemination and exploitation of data will be coordinated by the Exploitation and Innovation Manager (EIM) regarding knowledge management and innovation activities.
The EIM is responsible for: * maintaining a registry of background data, * maintaining a registry of data gathered and generated in the work packages during the project, * assessing the opportunities for exploitation, for example by following political events in the energy sector or searching other scientific databases for similar developments, and * proposing specific exploitation measures, e. g. policy briefs and events. In REFLEX, transfer opportunities are analysed periodically in order to adjust the exploitation strategies. All consortium partners contribute to the exploitation plan of the project throughout its life span. The EIM is in close contact with the partners and regularly informed about their exploitation plans in order to use synergies and to ensure the best and most suitable use and exploitation of results. Furthermore, the EIM will regularly advise the consortium and individual partners about possible strategies. The exploitation strategy is outlined below: * First, it will be decided whether to disseminate a dataset and in what way. * Participants inform the EIM and other consortium members if they wish to publish or disseminate any datasets, whether directly or indirectly. * Before any dissemination activity may take place, the participants must examine the possibility of protecting generated results. * Upon an (affirmative) dissemination decision, the dataset will be made available (regarding the different dissemination types see section 2.2). **2 FAIR DATA** All collected and generated data will be implemented in the DWH in a standardized way. The DWH includes several databases, which depend on the structure and contents of the different datasets. The databases are managed with the database management tool “Mesap”, which also provides several data preparation and identifier mapping functions. Mesap is developed and commercially provided by the Seven2one GmbH.
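As a minimal illustration of such a standardized implementation, the sketch below stores harmonized, versioned input values in a relational database. SQLite is used here purely as a stand-in for the Mesap-managed DWH, and all table names, column names and values are illustrative assumptions, not the actual REFLEX schema.

```python
import sqlite3

# Stand-in for the DWH: SQLite instead of the Mesap-managed databases.
# Table name, column names and values are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE datasets (
        name    TEXT NOT NULL,
        version TEXT NOT NULL,
        country TEXT NOT NULL,
        year    INTEGER NOT NULL,
        value   REAL NOT NULL,
        unit    TEXT NOT NULL,
        PRIMARY KEY (name, version, country, year)
    )
""")

# Harmonized values (made-up numbers) stored once, used by all models.
rows = [
    ("demand_electricity", "v1", "DE", 2015, 514.0, "TWh"),
    ("demand_electricity", "v1", "FR", 2015, 475.0, "TWh"),
]
conn.executemany("INSERT INTO datasets VALUES (?, ?, ?, ?, ?, ?)", rows)
conn.commit()

# Every model run reads the same harmonized dataset from one place.
total = conn.execute(
    "SELECT SUM(value) FROM datasets WHERE name = ? AND year = ?",
    ("demand_electricity", 2015),
).fetchone()[0]
```

Keeping one versioned copy of each harmonized dataset is what allows all models to work with identical values, as required by the consortium decision on harmonization described in Table 11.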
A selection of existing data as well as data collected and generated during the project will be made available to interested research groups and interested parties from policy and industry. The following section outlines how the data will be exploited and made accessible for verification and re-use, and how data will be curated and preserved upon closure of the project. ### 2.1 MAKING DATA FINDABLE To make data findable, a data catalogue will be prepared and implemented in the REFLEX project website ( _www.reflex-project.eu_ ). The catalogue gives an overview of all provided datasets and provides their metadata. The scope and design of the metadata will be based on the metadata structure of the “Open Power System Data” platform ( _www.data.open-power-system-data.org/_ ), see Table 8. # Table 8: Scope and contents of the metadata for a provided dataset <table> <tr> <th> **Category** </th> <th> **Content** </th> </tr> <tr> <td> **Name** </td> <td> Name of dataset - a concise one (short but informative) </td> </tr> <tr> <td> **ID** </td> <td> Dataset identifier </td> </tr> <tr> <td> **Description/Notes** </td> <td> Short description of scope/contents of the dataset. Also specific remarks (e. g. restrictions, data gaps etc.) </td> </tr> <tr> <td> **Keywords** </td> <td> List of used keywords for the dataset </td> </tr> <tr> <td> **Version** </td> <td> Dataset version given as number and/or date. Also the information whether it is the latest available version </td> </tr> <tr> <td> **Last changes** </td> <td> Short description of changes to the previous dataset version </td> </tr> <tr> <td> **Timescale** </td> <td> If the dataset is a time series: values for which years, e. g. 2010-2050 (yearly), 2010-2030 (5-year steps), 2010, 2012, 2017, 2022; structure of values in the course of the year if applicable, e. g. seasonal, hourly, quarter-hourly; structure of type days if used </td> </tr> <tr> <td> **Spatial reference** </td> <td> Spatial reference of values with scope and level of differentiation/aggregation, e. g. EU 28 (NUTS 0), EU 28 + X (NUTS 3), Poland, Germany, … </td> </tr> <tr> <td> **Sectoral reference** </td> <td> Sectoral reference of values, if applicable, e. g. Households, Industry, Traffic…; Road Traffic, Rail Traffic, Aviation…; C24_C25, D, E (codes from Eurostat) </td> </tr> <tr> <td> **Sources** </td> <td> Used sources to prepare/provide the dataset, if possible with links to the primary data/original input data </td> </tr> <tr> <td> **Attribution** </td> <td> Recommended text for attribution </td> </tr> <tr> <td> **Contact** </td> <td> Contact information for questions/remarks </td> </tr> <tr> <td> **Access** </td> <td> Terms of data access/usage (licence, free of charge, XX € protective charge etc.) </td> </tr> <tr> <td> **Field documentation** </td> <td> List of used fields within the dataset with the following subcategories: field name (e. g. capacity_installed), type/format (e. g. number (float)), unit (e. g. MW), description (e. g. installed electrical capacity at the end of year) </td> </tr> </table> Each dataset can be unambiguously identified via the combination of the dataset name and the version label. Both will be included in the unique dataset ID. With regard to the publication and transparency of results of the project work (e. g. in journals), the preparation of tailored dataset packages for the publications is considered. A package contains, together with a short content description of the package, a compilation of: * the metadata of the published results * the metadata of the relevant datasets required to verify the results, as long as these can be made available.
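To make the scheme of Table 8 concrete, a catalogue entry could be held as a simple key-value record, from which the unique dataset ID is built by combining the dataset name and version label as described above. All dataset names and values in this sketch are hypothetical, and the underscore-joined ID format is an assumption:

```python
# Hypothetical example of a dataset metadata record following Table 8.
# All names and values are illustrative, not actual REFLEX data.
metadata = {
    "name": "power_plants_installed_capacity",
    "version": "2017-03-01",
    "description": "Installed electrical capacity per country and technology.",
    "keywords": ["power plants", "capacity", "EU28"],
    "timescale": "2015-2050 (yearly)",
    "spatial_reference": "NUTS 0 (EU28+NO+CH)",
    "sources": ["Applied energy system models"],
    "access": "open access, free of charge",
    "fields": [
        {"name": "capacity_installed", "type": "number (float)", "unit": "MW",
         "description": "installed electrical capacity at the end of year"},
    ],
}

def dataset_id(record):
    """Build the unique dataset ID from name and version label (Section 2.1);
    the underscore separator is an assumption for this sketch."""
    return f"{record['name']}_{record['version']}"

print(dataset_id(metadata))  # power_plants_installed_capacity_2017-03-01
```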
It will be discussed within the REFLEX consortium whether digital object identifiers (DOI) will be used for these dataset packages, which could easily be included in publications as a reference. ### 2.2 MAKING DATA OPENLY ACCESSIBLE Within REFLEX, three different possibilities for data dissemination will be considered, as described in Table 9. Both the collected input data and the generated data will be made available, mostly as open access according to the guidelines of the EU. The specification of the data which will be made openly available is still under discussion. It will be decided case by case within the project consortium and subsequently updated in this section of the DMP. # Table 9: Possibilities of data dissemination considered in REFLEX <table> <tr> <th> **Dissemination** </th> <th> **Description** </th> </tr> <tr> <td> **Open Access Publication** </td> <td> Owners will grant royalty-free access to a meaningful selection of generated results to other participants and to the public, possibly restricted by appropriate embargo periods and/or respecting restrictions from editors of scientific journals and organizers of conferences. </td> </tr> <tr> <td> **Commercial Exploitation** </td> <td> Data suitable for commercial exploitation (e. g. for commercial re-use by consulting companies) will be managed by the project partner ESA 2, which was founded after completion of the EU-funded innovation project ESA2 explicitly with the purpose of exploiting research results (including research data) related to (coupled) energy systems modelling. </td> </tr> <tr> <td> **Indirect** </td> <td> Parts of the generated data will be disseminated only indirectly as part of intermediate or final results of models and/or as qualitative outcomes based on post-analysis of results. </td> </tr> </table> The access to all provided data will be offered via the project website ( _www.reflexproject.eu/_ ).
The planned formats are: * csv * xlsx * sql In the case of an _Open Access Publication_ , the dataset can easily be downloaded. The download links for the different formats are given within the metadata in the category “Access”. In the case of _Commercial Exploitation_ of a dataset, a registration procedure for all those interested in such datasets will be implemented. This includes the possibility of differentiating the conditions for access depending on the type of inquirer or the planned re-use (e. g. a dataset is free of charge for public scientific institutions for scientific work, but subject to a charge in the case of commercial re-use by a company). After registration of the request for a dataset, a time-limited download link will be provided via e-mail to the registered contact, together with the terms of usage and, where applicable, the invoice. The request procedure will be described in the metadata, which are available free of charge in any case, in the category “Access”. ### 2.3 MAKING DATA INTEROPERABLE To increase the interoperability of the provided data, commonly used vocabularies will be applied for the metadata contents as well as for the identifiers and their contents within the datasets. These include standardized naming conventions and codes used in official statistics (e. g. for countries, regions etc.). Furthermore, naming conventions for specific energy system topics will follow the “Open Power System Data” platform ( _www.data.open-power-system-data.org/_ ). An additional mapping procedure or the provision of mapping tools for data users is not envisaged. ### 2.4 INCREASE DATA RE-USE The datasets will be made available to third parties as soon as they are generated, prepared and reviewed for publication/commercial exploitation, and when the conditions of dissemination are decided and possible protections of the datasets are clarified within the consortium.
However, additional restrictions by setting appropriate embargo periods and/or respecting restrictions from editors of scientific journals and organizers of conferences are also possible. A generally valid statement regarding the embargo periods is not possible at the moment; they can differ from case to case. The use of Creative Commons licences (CC) for provided data will be discussed within the consortium. Table 10 gives an overview of conceivable licences for the different types of dissemination. Which licence is used will be decided dataset by dataset and subsequently updated in this section of the DMP. # Table 10: Conceivable licences for the different types of dissemination <table> <tr> <th> **Dissemination** </th> <th> **Data group** </th> <th> **Licences** </th> </tr> <tr> <td rowspan="3"> **Open Access Publication** </td> <td> Existing model input data </td> <td> </td> </tr> <tr> <td> Collected and generated new model input data </td> <td> </td> </tr> <tr> <td> Generated final result data of the EMS </td> <td> </td> </tr> <tr> <td rowspan="3"> **Commercial Exploitation** </td> <td> Existing model input data </td> <td> </td> </tr> <tr> <td> Collected and generated new model input data </td> <td> </td> </tr> <tr> <td> Generated final result data of the EMS </td> <td> </td> </tr> <tr> <td> **Indirect** </td> <td> </td> <td> </td> </tr> </table> Regarding data quality assurance, the processes given in Table 11 are implemented in the project work. # Table 11: Processes of data quality assurance <table> <tr> <th> **Data group** </th> <th> **Processes** </th> </tr> <tr> <td> **Existing model input data** </td> <td> * Harmonization of model input data to ensure a consistent analysis within the EMS and regarding the defined scenario storylines; the same dataset (values) has to be used for the same information in all models.
(Consortium decision) * Harmonized data will be provided to all models before initializing the EMS run </td> </tr> <tr> <td> **Collected and generated new model input data** </td> <td> * Minimum of two internal reviews of the generated new model input data * Additional peer reviews in case of publication in journals * Harmonization of model input data to ensure a consistent analysis within the EMS and regarding the defined scenario storylines; the same dataset (values) has to be used for the same information in all models. (Consortium decision) * Harmonized data will be provided to all models before initializing the EMS run </td> </tr> <tr> <td> **Generated intermediate model output data** </td> <td> * Plausibility check of the intermediate output data of a model during EMS runs by the responsible modeller before implementation of the data in the DWH for data transfer to another model </td> </tr> <tr> <td> **Generated final result data of the EMS** </td> <td> * Minimum of two internal reviews of the generated final result data * Additional peer reviews in case of publication in journals </td> </tr> </table> The consortium will continue to provide the data via the REFLEX project website for a limited period of time after the end of the REFLEX project. The project website will be maintained during this period. After this period, an appropriate reference/link to the final data repository will be integrated in the REFLEX project website, which will no longer be maintained after that. The final repository for these data has not been chosen yet. The choice of repository will depend on: * location of the repository * research domain * costs * open access options * prospect of long-term preservation. The following approaches for long-term data provision (or a combination of them) are conceivable: 1. The data remain in the DWH of the project partner ESA². The data provision is transferred to the website of the ESA² Company ( _www.esa2.eu_ ). 2.
Another repository could be ZENODO ( _https://zenodo.org/_ ). This is an online, free-of-charge storage service created through the European Commission’s OpenAIREplus project and hosted at CERN, Switzerland. It encourages open access deposition of any data format, but also allows deposits of content under restricted or embargoed access. Contents deposited under restricted access are protected against unauthorized access at all levels. Access to metadata and data files is provided over standard protocols such as HTTP and OAI-PMH. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape every night. Data files are replicated in the online system of ZENODO. Data files have versions attached to them, whilst records are not versioned. Derivatives of data files are generated, but the original content is never modified. Records can be retracted from public view; however, the data files and records are preserved. The uploaded data is archived as a Submission Information Package in ZENODO. Files stored in ZENODO have an MD5 checksum of the file content, and files will be checked against their checksum to ensure that the file content remains correct. Items in ZENODO will be retained for the lifetime of the repository, which is also the lifetime of the host laboratory CERN, which currently has an experimental programme defined for the next 20 years. Each dataset can be referenced at least by a unique persistent identifier (DOI), in addition to other forms of identification provided by ZENODO. 3. A third option is provided by the Technische Universität Dresden, which is currently setting up an institutional, inter-disciplinary repository with a long-term archive in the project OpARA. It will provide open access long-term storage of data, including metadata, and will go into production in 2017. Other institutional and thematic repositories will be considered and evaluated in the next months.
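The fixity check described for ZENODO — verifying a file's content against its stored MD5 checksum — can be illustrated in a few lines of Python. This is a generic sketch of the idea, not ZENODO's actual implementation:

```python
import hashlib

def md5_checksum(path: str, chunk_size: int = 8192) -> str:
    """Compute the MD5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_file(path: str, stored_checksum: str) -> bool:
    """Return True if the file content still matches its recorded checksum."""
    return md5_checksum(path) == stored_checksum
```

A repository records the checksum at upload time and re-runs the comparison later; any silent corruption of the stored bytes makes the checksums diverge.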
The procedure will be discussed and decided at the end of the project lifespan. In any case, the data will be available to third parties after the end of the project. The length of time for which the data will remain re-usable is not restricted. 3. **ALLOCATION OF RESOURCES** The EIM is responsible for data management within the REFLEX project (see section 1.7). The estimated costs for making REFLEX data FAIR are 50,000 Euro. The costs include: * the clarification of data protection and licences, * the final preparation of data by each project partner for publishing (without effort/costs for data collection/purchasing/generation etc.), * the processes for assurance of data quality, * the development and implementation of the data catalogue in the project website, * the implementation of the registration procedure for access to commercially exploited datasets, * the data hosting and backup for security, * the updating and maintenance of the data and of the data provision during the project lifespan. These costs are covered by the project funds, mainly by the budgeted personnel costs. The costs for long-term preservation after the end of the project are difficult to estimate at the moment. They depend mainly on the level of convenience of the data provision (with/without comprehensive search routines and/or additional consulting) but also on the scope and size of the collected and generated datasets. Preserving datasets on the ZENODO repository will be free of charge as long as no single dataset exceeds the maximum of 2 GB. Preserving datasets on the OpARA repository is planned to be free of charge for TUD members, but the final decision on costs has not been taken yet. The costs for long-term preservation shall be covered by the charges collected from the commercial exploitation of datasets during the project lifespan and after. 4.
**DATA SECURITY** Most of the data handled in the REFLEX project are not sensitive with regard to the laws governing data protection and data security. An exception is the data from the DSM survey; these data can only be provided/published in anonymized form. The DWH as well as the data provision via websites will be implemented on servers with regular backup and data recovery procedures. 5. **ETHICAL ASPECTS** The data collection, data storage, data usage, data generation and data dissemination in this project do not raise any ethical issues. 6. **OTHER** No other national/funder/sectoral/departmental procedures for data management will be used.
# Executive Abstract The goal of the Data Management Plan (DMP) is to detail the data to be used and generated by the project. This is a precise requirement, since ESPRESSO is a pilot of the H2020 Open Research Data initiative. Therefore, the ESPRESSO project will make the generated data publicly available, including data regarding the community of stakeholders, market analysis, best practice assessment and other activities carried out during the CSA. In addition, corresponding metadata generated within the project will be made available through open data portals (including the European Open Data Portal). This will ensure the widest public accessibility as well as long-term preservation. Indeed, ESPRESSO will ensure that all the data generated is properly collected, accessed, curated, preserved, and eventually made public after any possible data-ownership issue has been cleared. For this reason, a specific Data Management Plan has been drawn up. The present document is the last of the reports, which are due in M06, M14, and M24. # Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 The ESPRESSO project has worked in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, whose main aims are to: * lay down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data, * protect fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data, * neither restrict nor prohibit the free movement of personal data within the Union, for the management of the data collected.
In particular, the project team has focused on the principles of: * “lawfulness, fairness and transparency”, thus processing data lawfully, fairly and in a transparent manner in relation to the data subject; * “purpose limitation”, thus ensuring that the data has been collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; * “data minimization”, thus ensuring that the collected data has been treated in an adequate and relevant manner and limited to what is necessary in relation to the purposes for which they are processed; * “storage limitation”, thus keeping them in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods only for statistical purposes, in accordance with Article 89(1), in order to safeguard the rights and freedoms of the data subject; * “integrity and confidentiality”, thus processing data in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures. # Data managed through the ESPRESSO website This section describes the data that are being managed directly by the ESPRESSO project. The section is subdivided into different groups to cover all the different types of information collected. ## Personal Data/registry of SmaCStak participants The ESPRESSO system collects and stores information about the personal data of the SmaCStak participants who give their consent to record their personal data in the ESPRESSO project database.
This is done through a form available from the website at the following address: _https://espressoprojekt.typeform.com/to/fo1DK0_ , which collects the following information:

* Name
* Surname
* Email address
* Company name
* Company website
* Country
* City
* Nature of the company (Public, Private, Industry, Research Institution, University, NGO, SDO, NSB, ESO, City/Municipality, Regional Planning Association, Other)

**Figure 1. SmaCStak registration page. Available _here_ .** Below, screenshots of the SmaCStak registration questions are presented. Those registering to the SmaCStak by filling in the form above will have their personal data stored at: _https://www.typeform.com_ . Only Mr. Mario Conci of Trentino Innovation and Mr. Jan-Philipp Exner of the University of Kaiserslautern have access to this database and can extract regular copies, which are stored locally for analysis. Names and email addresses are also stored on Mailchimp.org, which is used for sending the ESPRESSO project newsletter. ## Personal Data/registry of participants to the ESPRESSO Atlas of smart cities and standards The ESPRESSO system also collects and stores information about the personal data of the participants to the ESPRESSO Atlas of smart cities and standards who give their consent to record their personal data in the ESPRESSO project database. The Atlas is accessible through the website: _http://www.espresso-project.eu/_ . The platform has a multitier architecture comprising the presentation layer, the business logic layer, the service layer, and the data access layer. The **Presentation Layer** represents the web application through which the content is made available to the end users. It is developed in AngularJS, an open source framework with Model-View-ViewModel (MVVM) architecture.
The UI component is developed using Angular Material with responsive patterns, in order to extend compatibility and ensure the same user experience across different devices, including mobile devices. The GIS features and 3D visualization are built on the NASA Web World Wind framework, an open source virtual globe with advanced features for managing and viewing geometries and displaying map layers.

The **Business Logic Layer** is developed in NodeJS and handles the manipulation of the Smart Cities data and the related ancillary information (i.e. geographic information, documents, attributes, sections, etc.). Permissions and filters are managed at this level, ensuring data integrity and proper handling of sensitive and confidential data. Regarding data visibility, it handles different types of users with different associated permissions, in order to limit both the visibility of some information and the operations that can be executed. (Note that the data source of the users has not been defined yet.)

The **Service Layer** deals with data exchange with the other layers through a RESTful architecture. In addition, it provides public APIs for limited data exposure and for data insert/update, making integration with third-party software possible. This also makes it easier to import data and thus to enrich the platform with new content.

Finally, the **Data Access Layer** provides access to the database, for which PostgreSQL with PostGIS was used: a relational database with an extension providing support for geo-data. The Smart Cities data is organized over a dynamic template. Each Smart City contains a set of basic attributes (i.e. identifier, coordinates, name, website, etc.) and a set of categories. Each category can in turn contain a set of subcategories or a set of attributes.
The template can be represented as follows:

* Smart City
  * Basic attributes
  * Category A … Category Z
    * Sub-Category A … Sub-Category Z
      * Attribute A … Attribute Z

The architecture allows the smart city template to be customized by inserting and editing categories and attributes. This means that new categories and new attributes can be created and added to (as well as removed from) existing smart cities without changing the database structure.

**Figure 2. Platform diagram** (Service Layer, Business Logic Layer, Presentation Layer, Data Access Layer).

## Personal Data taken within the Conceptual Standards Framework survey

The ESPRESSO system also collected personal data within the survey launched to identify the open standards currently in use and most beneficial, and the gaps and weaknesses that could be addressed by the ESPRESSO project. The short survey was mainly addressed to the members of the SmaCStak community. The closing date for taking part was Tuesday 28th June 2016. The personal/demographic data collected included:

* name and surname
* type of institution/organisation
* geographical scope of the institution/organisation
* role/position
* department/unit
* location of the institution/organisation

As a disclaimer, the following privacy statement was included at the beginning of the survey:

_"All responses recorded, including any personal information you provide, will be kept strictly confidential. Your input will only be used in combination with the responses of others participating in the survey. Our research examines the opinions of groups of respondents. Your individual responses will not be shown to anyone outside the study team."_

As stated in the above-mentioned statement, no personal data have been included in any ESPRESSO project deliverables. Furthermore, all personal data collected within this survey will be deleted once the ESPRESSO project comes to an end.
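Returning to the Atlas platform described earlier, its dynamic smart-city template (basic attributes plus freely extensible categories) can be sketched as a nested structure. This is an illustrative sketch only, assuming a simple in-memory representation; the actual platform stores the template in PostgreSQL/PostGIS, and all names and values below are hypothetical:

```python
# Sketch of the Atlas' dynamic smart-city template: a city holds basic
# attributes plus categories, and each category holds sub-categories or
# attributes, so new categories can be added without a schema change.

def make_city(identifier, name, coordinates, website=None):
    return {
        "id": identifier,
        "name": name,
        "coordinates": coordinates,   # (lon, lat)
        "website": website,
        "categories": {},             # category name -> category dict
    }

def add_category(city, category, attributes=None):
    # Adding a category touches only this dict, not any fixed schema.
    city["categories"][category] = {
        "subcategories": {},
        "attributes": dict(attributes or {}),
    }

city = make_city("trento-01", "Trento", (11.12, 46.07),
                 website="https://www.comune.trento.it")
add_category(city, "Mobility", {"bike_sharing": True})
add_category(city, "Energy")
sorted(city["categories"])  # -> ['Energy', 'Mobility']
```

The same flexibility is what the text describes for the real platform: categories and attributes can be inserted or removed per city without altering the underlying database structure.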
## Personal Data taken within the Smart City Standardization survey

The ESPRESSO system also collected personal data within the survey launched to define the ESPRESSO project's CASSIOPEiA (Definition of a ConceptuAl StandardS InterOPErability frAmework) and to define testbeds. The survey can be accessed at _https://sabina24.typeform.com/to/JDjCyI_.

The ESPRESSO project is conducting a detailed requirements-engineering campaign in order to establish the baseline for interoperability between the various Smart City sectors. Indeed, a first step towards the definition of a Smart City interoperability scope is the identification and selection of relevant Use Cases, to be used in further real-life test-beds within ESPRESSO. Those filling in the survey have played, and will continue to play (since the survey is still open), a part in this effort, by sharing needs and requirements for standardization in Smart City solutions, projects and initiatives, or even by providing the Consortium with a "best practice" case. The survey collects the following personal information:

* Name
* Company name
* E-mail address (optional)

Below, screenshots of the questions are presented.

**9. Conclusion**

Understanding the needs of the stakeholders - as primary input into the various activities of the project - has been achieved using surveys, polls, events and webinars. ESPRESSO will ensure that all the data generated are properly collected, accessed, curated, preserved, and eventually made public after any possible data-ownership issues have been cleared.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1114_SEARMET_692299.md
10\. Embryo flushing

The data and resulting scientific findings would also be beneficial to animal breeders who are interested in achieving greater efficiency in the production of cattle of specific types or sexes. Researchers could also use the data generated from this research regarding bovines to link to other animal models in efforts to better understand reproduction, in particular reproductive problems.

# 2.1 Making data findable, including provisions for metadata [FAIR data]

**Outline the discoverability of data (metadata provision)**

The metadata is stored in Dublin Core format. All the data is captured through measurements and sampling by the researchers of EMU. Some of the data can be created automatically, but most of the data is produced by the researchers conducting experiments and collecting data from those experiments. For the metadata standards, we use an appropriate sample size and a correct sampling methodology.

**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**

EMU DSpace mints (produces for the first time) DOIs via DataCite for deposited research outputs.

**Outline naming conventions used**

All files will be named uniformly when storing them for public use, based upon the following criteria:

* No special characters such as / \ : * ? " < > [ ] & $ will be used in names.
* Underscores (_) will be used to separate terms, not spaces.
* Names will be 30 characters or less in length.
* Names can specify the year and month of creation (YYYY-MM) at the end of the name.
* Names will be descriptive of the information they contain, so that they are understandable to someone who is unfamiliar with the research.
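As an illustration, the naming rules above can be checked with a small helper. This is a sketch only, not part of the project's actual tooling; the function name `valid_name` and the example file names are hypothetical:

```python
# Characters the naming convention disallows in file names.
FORBIDDEN = set('/\\:*?"<>[]&$')

def valid_name(name: str) -> bool:
    """Check a file name against the stated rules (extension excluded)."""
    stem = name.rsplit('.', 1)[0]
    if len(stem) > 30:                      # 30 characters or less
        return False
    if ' ' in stem:                         # underscores, not spaces
        return False
    if any(c in FORBIDDEN for c in stem):   # no special characters
        return False
    return True

valid_name("embryo_flushing_2016-05.csv")   # True: underscores, YYYY-MM suffix
valid_name("embryo flushing?.csv")          # False: space and '?' violate the rules
```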
Some information could be described in metadata, including the following:

* Project or experiment name or acronym
* Researcher name/initials
* Date or date range of experiment
* Type of data
* Conditions
* Version number of file
* Three-letter file extension for application-specific files

**Outline the approach towards search keywords**

The library will provide basic discovery metadata online (title, author, subjects, keywords, department, etc.).

**Outline the approach for clear versioning**

* Versions of files will only be stored and made available when relevant.
* Whenever possible, obsolete versions will be discarded or deleted (whilst retaining the original 'raw' copy).
* Files with multiple versions will include the letter V and the number of the version before the date - for example V1, V2, V3, etc.
* If appropriate, version control software, e.g. Subversion or TortoiseSVN, will be used.

**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

Dublin Core

# 2.2 Making data openly accessible [FAIR data]

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**

All metadata will be made openly available via EMU DSpace. Any data that is kept closed will be closed either for IPR reasons or to protect confidentiality.

**Specify how the data will be made available**

All the data will be available via DSpace.

**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**

Metadata will be available in conventional formats such as PDF files; individual-level data will be accessible using R software.
**Specify where the data and associated metadata, documentation and code are deposited**

The data and associated metadata, documentation and code are deposited in the Estonian University of Life Sciences Library digital repository, EMU DSpace.

**Specify how access will be provided in case there are any restrictions**

Data is freely accessible. Restricted data is accessible via password protection.

# 2.3 Making data interoperable [FAIR data]

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**

The metadata will be created from experiments and different measurements. Some of the information about bovines can be created automatically. The metadata is stored at first in the University of Life Sciences personal server spaces and, after data publication, in DSpace. Whenever possible, interoperable file formats such as CSV will be used.

**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

Wherever possible, standard vocabulary will be used for data sets. No mapping to more commonly used ontologies will be offered.

# 2.4 Increase data re-use (through clarifying licenses) [FAIR data]

**Specify how the data will be licenced to permit the widest reuse possible**

Unless protected via IPR, no licenses will be needed to use the data.

**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**

The data will be made available after articles have been published in peer-reviewed journals. As the time to publish or seek patents may vary greatly, no precise time period can be stated.

**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project?
If the re-use of some data is restricted, explain why**

Restrictions to data sharing may be due to participant confidentiality, consent agreements or IPR. Restrictions are required until the data is published; after that, all the data can be used by third parties. Restrictions are needed in order to publish in peer-reviewed journals, so there is a limited embargo period until the data is published.

**Describe data quality assurance processes**

The consistency and quality of data collection will be controlled and documented through repeated samples or measurements, standardised data capture, data entry validation, peer review of data and representation with controlled vocabularies.

**Specify the length of time for which the data will remain re-usable**

In perpetuity.

# Allocation of resources

**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**

Free of charge.

**Clearly identify responsibilities for data management in your project**

Data management is handled by researchers during collection and by the staff of the EMU Library during curation and preservation.

**Describe costs and potential value of long term preservation**

This is part of the librarians' voluntary work.

# Data security

**Address data recovery as well as secure storage and transfer of sensitive data**

EMU DSpace is maintained and operated by the Estonian University of Life Sciences Library. All data files are stored in the Estonian University of Life Sciences storage service in two independent copies. Preservation and back-ups of the data are ensured by the EMU DSpace preservation policies: daily backups, security updates, and tightly controlled administrative access. Sensitive data is available only to SEARMET group members via login.

# Ethical aspects

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

No research involving human participants is done.
All the researchers have given consent for data preservation and sharing. For animal experiments, licences have been obtained from the local ethics committee in accordance with the EU guidelines. These licences are preserved in EMU DSpace. Sensitive data will be securely stored at EMU DSpace.

# Other

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

N/A

SEARMET Deliverable #2.3

<table> <tr> <th> </th> <th> **H2020-TWINN-2015** **Grant Agreement no** This project is funded under the **EUROPEAN COMMISSION** in the Framework Programme for Research and Innovation (2014-2020). </th> </tr> <tr> <td> Call: </td> <td> Work programme **H2020** under **"Spreading Excellence and Widening Participation", call: H2020-TWINN-2015: Twinning** (Coordination and Support Action). </td> </tr> <tr> <td> Project full title: </td> <td> Scientific Excellence in Animal Reproductive Medicine and Embryo Technology </td> </tr> <tr> <td> Project acronym: </td> <td> SEARMET </td> </tr> <tr> <td> Work Package (WP): </td> <td> 2 </td> </tr> <tr> <td> Deliverable (D): </td> <td> 2.3 </td> </tr> <tr> <td> Due date of deliverable: </td> <td> 20 </td> <td> </td> </tr> <tr> <td> Author(s): </td> <td> Ülle Jaakma </td> </tr> <tr> <td> Contributor(s): </td> <td> </td> </tr> <tr> <td> Start date of project: </td> <td> 1 January 2016 </td> <td> Duration: 36 Months </td> </tr> </table> <table> <tr> <th> **Dissemination Level** </th> <th> </th> </tr> <tr> <td> **PU** </td> <td> Public </td> <td> _PU_ </td> </tr> <tr> <td> **CO** </td> <td> Confidential, only for members of the consortium (including the Agency Services) </td> <td> </td> </tr> </table>
1118_SONNETS_692868.md
**Executive Summary**

The SONNETS project participates in the Pilot on Open Research Data launched by the European Commission along with the Horizon 2020 programme. The main purpose of a Data Management Plan (DMP) is to describe research data, with the metadata attached, to make them discoverable, accessible, assessable, usable beyond their original purpose and exchangeable between researchers. The SONNETS general strategy for data management follows the Guidelines on Data Management in Horizon 2020 and will specify the types of data to be generated and collected along with their respective identification and classification methods, the standards and metadata to be used, the corresponding data exploitation and availability perspectives, as well as the data sharing and preservation schemes to be used. The DMP will be implemented through Task 1.3, led by ATOS.

The SONNETS project aims at accelerating the transformation of the public sector through the identification, analysis and take-up of emerging technologies that hold the potential to transform and infuse real value into public services. In order to achieve this goal, the SONNETS consortium will carry out desk-based research, workshops and interviews, make use of data discovery tools, construct a wide network of collaborators, engage a group of renowned experts and develop a knowledge repository, all of which will produce a large amount of data. Hence the importance for the SONNETS project of defining a data management policy.

SONNETS involves primary data collection: (1) semi-structured interviews; (2) focus group discussions and stakeholder workshops; (3) communication via the SONNETS group. These sources will be the basis for the creation of seven SONNETS datasets:

* Dataset 1: Contact Users' Database
* Dataset 2: Stakeholders' Database
* Dataset 3: Stakeholders' Interviews
* Dataset 4: Stakeholders' Surveys
* Dataset 5: Societal and Public Sector Needs
* Dataset 6: Emerging Technologies
* Dataset 7: SONNETS Roadmap and Briefs
These datasets will be handled as editable files (.doc), and data will be shared in relation to publications (deliverables and papers). The SONNETS project intends to make its research data available for open access and use by other researchers and members of the public. In furtherance of this goal, SONNETS will deposit its research data into a digital repository and make the data available for access and reuse under a Creative Commons Attribution License. Finally, the SONNETS data will be backed up regularly and stored on the ATOS institutional server. Furthermore, given the potential sensitivities (e.g. personal data of interviewees) of the data being collected, the project has established a system for protecting data while it is being processed.

1. **Introduction**

**1.1 Purpose and scope**

This document describes the Data Management Plan (DMP) with respect to the Horizon 2020 – EURO-6-2015 CSA project SOcietal Needs aNalysis and Emerging Technologies in the public Sector (Grant Agreement No. 692868). The mission of the SONNETS project, as described in its proposal and subsequent grant agreement, calls for providing an ever-evolving methodological framework for public sector organizations and related stakeholders to accelerate the transformation of the public sector through the identification, analysis and take-up of emerging technologies that hold the potential to transform and infuse real value into public services.

In order to tackle the above-mentioned goal, the SONNETS consortium will carry out desk-based research, workshops and interviews, make use of data discovery tools, construct a wide network of collaborators, engage a group of renowned experts and develop a knowledge repository (the SONNETS LinkedIn group, https://www.linkedin.com/groups/8520112). These activities will produce a large amount of data, hence the importance for the SONNETS project of defining a data management policy.
The purpose of the DMP is to provide an analysis of the main elements of the data management policy that will be used by the consortium with regard to all the datasets that will be generated by the project.

**1.2 Structure**

This document provides a description of the Data Management Plan (DMP) within the SONNETS project through the following sections:

* _Section 2_ defines the data management strategy to be followed during the SONNETS project according to the Guidelines on Data Management in Horizon 2020 published by the EC services.
* _Section 3_ details the information to be included in the SONNETS data tables, listing the different datasets that will be produced by the project, the main exploitation perspectives for each of those datasets, and the major management principles the project will implement to handle those datasets.
* _Section 4_ presents the main conclusions, and finally the _References_ section provides information about the documentation used to produce this deliverable.

2. **Data Management strategy**

The general strategy for data management, according to the Guidelines on Data Management in Horizon 2020 1, will be based on the identification and classification of the data generated and collected, the standards and metadata to be used, the exploitation and availability of data, as well as how the data will be shared and how the information will be archived and preserved. The SONNETS DMP will cover the whole data life cycle. Hence, task _T1.3 Data Management_ in WP1 will be devoted to formulating and continuously evolving the SONNETS research data management plan in accordance with the H2020 guidelines regarding Open Research Data. In Task 1.3, the metadata, procedures and file formats for note-taking, recording, transcribing, storing visual data from participatory techniques, and anonymising semi-structured interview and focus group discussion data will be developed and agreed.
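As one illustration of what such anonymisation of interview data could look like, here is a minimal pseudonymisation sketch. It is purely illustrative: the actual procedures and formats are still to be agreed in Task 1.3, and the function name, patterns and pseudonym scheme below are assumptions, not the project's tooling. Real projects would typically use a vetted de-identification tool.

```python
import re

def pseudonymize(text: str, names: list[str]) -> str:
    """Replace e-mail addresses and known participant names with placeholders.

    `names` is the list of direct identifiers recorded for the interview.
    """
    # Mask e-mail addresses first (simple pattern, not RFC-complete).
    text = re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[EMAIL]", text)
    # Replace each known participant name with a stable pseudonym,
    # longest names first so partial names do not clobber full ones.
    for i, name in enumerate(sorted(names, key=len, reverse=True), start=1):
        text = re.sub(re.escape(name), f"[P{i:02d}]", text)
    return text

pseudonymize("Contact Jane Doe at jane.doe@example.org.", ["Jane Doe"])
# -> "Contact [P01] at [EMAIL]."
```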
ATOS will be the leader for this task, though all partners are involved in complying with the DMP.

1. **Types of data to be generated/collected**

Data to be generated/extracted/collected will be obtained with the collaboration of researchers, the external experts (both the Experts Group and the Experts Advisory Group) and other collaborators.

1. **Data sources**

SONNETS involves **primary data collection**: 1) semi-structured interviews; 2) focus group discussions and stakeholder workshops; 3) communication via the SONNETS group.

1. _Semi-structured interviews with individuals_: The team anticipates undertaking 20-25 semi-structured interviews in WP2 and WP3. Data will be collected and stored using digital audio recording (e.g. MP3) where interviewees permit. In case they do not, interviews will be undertaken in pairs to enable detailed note-taking. Interview notes will be typed up according to agreed formats and standards. During data analysis, the data will be accessible only by certified members of the project team. The research project will remove any direct identifiers in the data before sending it to a digital repository.

2. _Focus group discussions and stakeholder workshops_: Focus groups and workshops conducted within WP2 and WP3 will involve two researchers. Whether recorded or not, the event will be transcribed or documented using agreed formats and standards for handling the issue of multiple voices, interruptions, labelling of participatory and visual activities, and so on. All transcripts will be in Microsoft Word. All the researchers will be reasonably fluent in both English and the main language in which the interviews and focus groups will be conducted, so that transcriptions can be translated into English.

3. _Communication via SONNETS collaboration tool_: The consortium has created a dedicated Web site and a SONNETS LinkedIn Group.
On the one hand, the project site has been established using a content management system (Drupal 7), so that data users can participate in adding site content over time in the Blogs section ( _http://www.sonnets-project.eu/blog_ ), making the site self-sustaining. For preservation, we will supply periodic copies of the data to ATOS's own digital data repository. That repository will be the ultimate home for the data. On the other hand, the consortium has agreed to use LinkedIn to manage and distribute the communication data of the SONNETS network. In addition to the research community, we expect these data will be used by practitioners and policymakers.

2. **Type of data**

The data to be collected can be:

* **Non-personal data**: this information is not affected by data protection legislation.
* **Personal data**: data which relate to an individual who can be identified 2
  * (a) from those data, or
  * (b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller, and includes any expression of opinion about the individual and any indication of the intentions of the data controller or any other person in respect of the individual.

In that case, SONNETS follows the European directives:

* Data Protection Directive 3
* Directive on privacy and electronic communications. 4

The activities focused on the ethical and privacy issues and compliance with legislation in SONNETS are carried out in WP6 and are applied in accordance with what has been defined in D6.2 POPD - Requirement No. 2.

2. **Standards and metadata to be used**

Data will be shared in relation to publications (deliverables and papers). As such, the publication will serve as the main piece of metadata for the shared data. Hence, the formats to be used mainly include .doc, .pdf and .xls files, which substantially reduces the amount of metadata. Other standards do not apply to this project.

3.
**Exploitation, availability of data and re-use**

The SONNETS project intends to make its research data available for open access and use by other researchers and members of the public. In furtherance of this goal, SONNETS will deposit its research data into a digital repository (Zenodo 5 or similar) and make the data available for access and reuse under a Creative Commons Attribution License 5. For sensitive/restricted data, if any, access restrictions will be enforced (e.g. by requiring specific credentials) or the detail available will at least be limited, e.g. by granting open access exclusively through aggregation, while providing the specific data to authorized users only. Additionally, the Grant Agreement and the SONNETS Consortium Agreement are to be referred to for further details on ownership, management of intellectual property and access.

4. **Archiving and preservation**

Our data will need to be backed up regularly because of likely problems with viruses and hardware in developing countries, so that up-to-date versions are stored on the ATOS institutional server. Qualitative data will be backed up and secured by the coordinator on a regular basis, and metadata will include clear labelling of versions and dates. There are some potential sensitivities (e.g. personal data of interviewees) in the data being collected, so the project has established a system for protecting data while it is being processed (see deliverable D6.2 POPD - Requirement No. 2 for further details), including the use of passwords and safe back-up hardware.

3. **SONNETS Datasets**

As mentioned in the previous chapter, the DMP should address the points below on a dataset-by-dataset basis and should reflect the current status of reflection within the consortium about the data that will be produced.

* **Dataset reference and name:** Identifier for the data set to be produced.
* **Dataset description**: Description of the data that will be generated or collected, its origin (in case it is collected), its nature and scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
* **Standards and metadata**: Includes reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created.
* **Data sharing**: A detailed description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and a definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).
* **Archiving and preservation**: Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.

Within SONNETS, the consortium plans to create seven separate datasets. The datasets will have a structure in accordance with the Horizon 2020 guide for the Data Management Plan.

1.
**DATASET 1: Contact users' database**

<table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 1: Contact users' database </th> </tr> <tr> <td> **2** </td> <td> **Dataset description** </td> </tr> <tr> <td> The data collected are implemented in Excel and refer to the following sections and purposes: * Contact users' register for newsletter subscriptions: it contains name and e-mail (both mandatory). This dataset is automatically generated when visitors sign up via the newsletter form available on the project website. The register will be used to send issues of the project newsletters. * Contact users' personal details with regard to messages sent to the website through the Contact form: it includes name, e-mail, message (all mandatory) and (possibly) phone. The contact details will be used to address the enquiry/request and to send information in the scope of the SONNETS project. </td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> This dataset can be imported from, and exported to, a CSV, TXT or Excel file. </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> The mailing list will be used for disseminating the project newsletter to a targeted audience. An analysis of newsletter subscribers may be performed in order to assess and improve the overall visibility of the project. As it implies personal data, access to the dataset is restricted to the SONNETS consortium. </td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in Atos' servers.
</td> </tr> </table>

# Table 2: DATASET 1: Contact users' database

**3.2 DATASET 2: Stakeholders' database**

<table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 2: Stakeholders' database </th> </tr> <tr> <td> **2** </td> <td> **Dataset description** </td> </tr> <tr> <td> This dataset is related to the stakeholders that will build the SONNETS network and contains name, surname, job function, domain field/expertise, e-mail contact and location, as well as project interests or benefits, what they can contribute, and future actions and communication. It comprises: * Contact users' register for the Experts Group. * Contact users' register for the Expert Advisory Board. This dataset is used to extend and disseminate the information about the project as well as to get feedback from these collaborators. </td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> This dataset can be imported from, and exported to, a CSV, TXT or Excel file. </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> For these sensitive/restricted data, access restrictions will be enforced (e.g. by requiring specific credentials) or the detail available will at least be limited, e.g. by granting open access exclusively through aggregation, while providing the specific data to authorized users only. </td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in Atos' servers.
</td> </tr> </table>

# Table 3: DATASET 2: Stakeholders' database

**3.3 DATASET 3: Stakeholders' interviews**

<table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 3: Stakeholders' interviews </th> </tr> <tr> <td> **2** </td> <td> **Dataset description** </td> </tr> <tr> <td> This dataset will contain the answers of people who will participate in the SONNETS individual interviews, workshops and focus groups carried out within WP2 and WP3 activities. </td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> Data will be collected and stored using digital audio recording (e.g. MP3) where interviewees permit. In any case, the data from these sources will always be held in transcript form in an accessible .doc file format (Word). </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will be used to produce analytical reports on the most important societal and public sector needs as well as to identify the most appropriate technologies to tackle these tasks. Results will be mainly shared through the WP2, WP3 and WP4 deliverables. </td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in Atos' servers. </td> </tr> </table>

# Table 4: DATASET 3 Stakeholders' interviews

**3.4 DATASET 4: Stakeholders' questionnaires**

<table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 4: Stakeholders' questionnaires </th> </tr> <tr> <td> **2** </td> <td> **Data set description** </td> </tr> <tr> <td> This dataset will contain the answers of people who will participate in the SONNETS questionnaires carried out within WP2 and WP3 activities. The surveys will be built using LimeSurvey or similar and will be hosted on the project website ( _www.sonnets-project.eu_ ).
</td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> This dataset can be imported from, and exported to, a CSV, TXT or Excel file. </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will be used to produce analytical reports on the most important societal and public sector needs as well as to identify the most appropriate technologies to tackle these tasks. Results will be mainly shared through the WP2, WP3 and WP4 deliverables. </td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in the internal project repository (ownCloud). </td> </tr> </table> # Table 5: DATASET 4: Stakeholders’ questionnaires **3.5 DATASET 5: List of societal and public sector needs** <table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 5: List of societal and public sector needs </th> </tr> <tr> <td> **2** </td> <td> **Dataset description** </td> </tr> <tr> <td> The dataset for the analysis of societal and public sector needs is the basis of WP2. Data will be collected from stakeholders. The data collection techniques used will mainly be stakeholder interviews, focus groups and data collection sheets filled in by stakeholders. Literature data will also be used. The dataset will be useful for all users of the project outcomes. Metadata will comprise contextual information about the data in a text-based document. </td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> This dataset is a combination of Excel/Word/PDF documents. </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will be mainly shared through the WP2 and WP4 deliverables, which are public. 
</td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in the internal project repository (ownCloud), on the project website ( _http://www.sonnets-project.eu/downloads_ ) and in a digital repository (Zenodo or similar). </td> </tr> </table> # Table 6: DATASET 5: List of societal and public sector needs **3.6 DATASET 6: List of emerging technologies** <table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 6: List of emerging technologies </th> </tr> <tr> <td> **2** </td> <td> **Dataset description** </td> </tr> <tr> <td> The dataset for the analysis of emerging technologies is the basis of WP3. Data will be collected from stakeholders. The data collection techniques used will mainly be stakeholder interviews, focus groups and data collection sheets filled in by stakeholders. Literature data will also be used. The dataset will be useful for all users of the project outcomes. Metadata will comprise contextual information about the data in a text-based document. </td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> This dataset is a combination of Excel/Word/PDF documents. </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will be mainly shared through the WP3 and WP4 deliverables, which are public. </td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in the internal project repository (ownCloud), on the project website ( _http://www.sonnets-project.eu/downloads_ ) and in a digital repository (Zenodo or similar). 
</td> </tr> </table> # Table 7: DATASET 6: List of emerging technologies **3.7 DATASET 7: SONNETS roadmap and briefs** <table> <tr> <th> **1** </th> <th> **Dataset reference and name** </th> </tr> <tr> <th> DATASET 7: SONNETS roadmap and briefs </th> </tr> <tr> <td> **2** </td> <td> **Dataset description** </td> </tr> <tr> <td> This dataset will be developed from the analysis of the identified technologies regarding technology readiness level, relevant international research programmes and important actors, and from the identification of research needs. The dataset will provide a set of recommendations (namely, the SONNETS briefs) for policy makers, researchers and representatives of the public sector. Metadata will comprise contextual information about the data in a text-based document. </td> </tr> <tr> <td> **3** </td> <td> **Standards and metadata** </td> </tr> <tr> <td> This dataset is a combination of Word/PDF documents. </td> </tr> <tr> <td> **4** </td> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will be mainly shared through the WP3 and WP4 deliverables, which are public. </td> </tr> <tr> <td> **5** </td> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> The dataset will be preserved in the internal project repository (ownCloud), on the project website ( _http://www.sonnets-project.eu/downloads_ ) and in a digital repository (Zenodo or similar). </td> </tr> </table> # Table 8: DATASET 7: SONNETS roadmap and briefs **4 Conclusions** This Data Management Plan provides an overview of the data that the SONNETS project will produce, together with related challenges and constraints that need to be taken into consideration. The analysis contained in this report makes it possible to anticipate the procedures and infrastructures to be implemented by the SONNETS project to efficiently manage the data it will produce. Nearly all project partners will be owners and/or producers of data. 
The SONNETS Research Data Management Plan will put a strong emphasis on the appropriate collection – and publication, should the data be published – of metadata, storing all the information necessary for the optimal use and reuse of those datasets. Specific attention will be given to ensuring that the data made available do not breach either partner IPR rules or regulations and good practices related to personal data protection. For this latter point, systematic anonymization of personal data will be applied. **5 References** 1. Guidelines on Data Management in Horizon 2020, V2.0, 30 October 2015, _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_ . 2. Guidelines on Open Access to Scientific Publication and Research Data in Horizon 2020, Version 2.0, 30 October 2015, _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf_ . 3. SONNETS Grant Agreement No. 692868 (Grant Agreement-692868SONNETS.pdf). 4. SONNETS consortium agreement (SONNETS_CA_v2.0_signed.pdf). 5. Key definitions of the Data Protection Act, ICO (Information Commissioner’s Office), _https://ico.org.uk_ . 6. _http://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX:31995L0046_ 7. _http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2002:201:0037:0047:en:PDF_ . 8. _http://zenodo.org_ 9. _https://creativecommons.org/licenses/_
1120_ARCHES_693229.md
1 Introduction
2 Data summary
2.1 Purpose
2.2 Types and formats
2.3 Origin
2.4 Re‐use
2.5 Expected size
2.6 Utility
3 FAIR data
3.1 Making data findable, including provisions for metadata
3.2 Making data openly accessible
3.3 Making data interoperable
3.4 Increase data re‐use
4 Allocation of resources
5 Data security
6 Ethical aspects
6.1 Principles of participation
6.2 Principles of consent
6.3 Principles of security
6.4 Principles of privacy
6.5 Deliverables
7 Conclusions
References
Annex A Ethics requirements
A.1 Information sheets
A.1.1 Information sheet for people with learning disabilities
A.1.2 Information sheet for people with sensory disabilities
A.2 Informed consents
A.2.1 Informed consent for people with learning disabilities
A.2.2 Informed consent for people with sensory disabilities

Figure 1: EDM class hierarchy [18].
Figure 2: ARCHES Information Letter for people with learning disabilities – general description.
Figure 3: ARCHES Information Letter for people with learning disabilities – the research process (1/2).
Figure 4: ARCHES Information Letter for people with learning disabilities – the research process (2/2).
Figure 5: ARCHES Information Sheet for people with sensory disabilities (1/2).
Figure 6: ARCHES Information Sheet for people with sensory disabilities (2/2).
Figure 7: ARCHES Consent Form for people with learning disabilities (1/2).
Figure 8: ARCHES Consent Form for people with learning disabilities (2/2).
Figure 9: ARCHES Consent Form for people with sensory disabilities.

# Abbreviations

**API**: Application Programming Interface
**ARCHES**: Accessible Resources for Cultural Heritage EcoSystems
**BERA**: British Educational Research Association
**BSL**: British Sign Language
**CA**: Consortium Agreement
**CH**: Cultural Heritage
**CIDOC**: International Committee of Documentation
**CMS**: Central Management Server
**Coprix**: (partner short name) Coprix Media
**CRM**: Conceptual Reference Model
**DMP**: Data Management Plan
**DoA**: Description of Action
**EC**: European Commission
**EDM**: Europeana Data Model
**EEAB**: External Expert Advisory Board
**FAIR**: Findable, Accessible, Interoperable and Reusable
**FLG**: (partner short name) Fundación Lázaro Galdiano
**GA**: Grant Agreement
**H2020**: Horizon 2020
**ICOM**: International Council of Museums
**IPR**: Intellectual Property Rights
**KHM**: (partner short name) Kunsthistorisches Museum Wien
**LIDO**: Lightweight Information Describing Objects
**MBBAA**: (partner short name) Centro Regional de Bellas Artes de Oviedo
**MN**: (partner short name) Moritz Neumüller – ArteConTacto
**ORDP**: Open Research Data Pilot
**ORO**: Open Research Online
**OU**: (partner short name) The Open University
**SignTime**: (partner short name) Sign Time GmbH
**Tree**: (partner short name) Treelogic Telemática y Lógica Racional para la Empresa Europea S.L.
**Thyssen**: (partner short name) Fundación Colección Thyssen‐Bornemisza
**UBAH**: (partner short name) University of Bath
**VCS**: Version Control System
**VRVis**: (partner short name) VRVis Zentrum für Virtual Reality und Visualisierung Forschungs‐GmbH
**V&A**: (partner short name) Victoria & Albert Museum
**WC**: (partner short name) The Wallace Collection
**WP**: Work Package

# Introduction

ARCHES [2] (“Accessible Resources for Cultural Heritage Ecosystems”) is a Horizon 2020 (H2020) project which aims to generate more inclusive environments at museums and cultural heritage (CH) sites, so that people with differences and difficulties associated with perception, memory, cognition and communication can easily access and understand art. To this end, ARCHES brings together three key aspects: a participative research methodology with the active involvement of the previously cited target audiences, organised in the so-called exploration groups; the re‐use of available digital resources provided by the partners and external sources; and the development of innovative technologies that will be implemented and fine‐tuned by the tech partners with the feedback from the exploration groups. As a consequence, the generation, collection, re‐use and preservation of data are crucial for the smooth running of ARCHES and required the collaboration of all the partners within the consortium: University of Bath (UBAH), The Open University (OU), Sign Time GmbH (SIGN), Neumüller Moritz – ArteConTacto (MN), Centro Regional de Bellas Artes de Asturias – Fine Arts Museum of Asturias (MBBAA), Coprix Media (Coprix), VRVis Zentrum für Virtual Reality und Visualisierung Forschungs‐GmbH (VRVis), KHM-Museumsverband – Kunsthistorisches Museum Wien (KHM), The Wallace Collection (WC), Fundación Colección Thyssen Bornemisza – Thyssen Museum (Thyssen), Fundación Lázaro Galdiano – Lázaro Galdiano Museum (FLG) and Victoria & Albert Museum (V&A). 
This document, the revised version of the ARCHES Data Management Plan, describes the research data generated and collected during the project, as well as the data (i.e. the partner museums’ digital assets) that has been used throughout the project. The strategy to make data FAIR (findable, accessible, interoperable and re‐usable) is described in the following sections. # Data summary ## Purpose The purpose of the data collection/generation is to support the ARCHES technical developments and complementary activities, to validate the proposed solutions in real environments and to organise demonstration activities open to everyone, according to the objectives and guidelines defined in the Description of Action (DoA). In particular, data collection and generation are crucial to achieve the following specific objectives: * To develop and evaluate strategies which enable an exploration of the value, form and function of mainstream technologies by and for people with differences and difficulties associated with perception, memory, cognition and communication. * To develop and evaluate the use of mainstream technologies to enable the inclusion of people with such disabilities as museum visitors and consumers of art. * To identify sources – Internet, internal archives, libraries, etc. – that can provide digital cultural resources and take advantage of their possibilities to integrate this content into innovative tools, applications and functionalities. * To validate the technological outcomes in operational environments based on a participatory research methodology consisting of three pilot exercises in museums. * To promote the tools and applications developed in ARCHES by means of on‐site demonstration activities all around Europe. 
Likewise, data collection/generation is closely connected with the tasks scheduled in the work plan, especially in the context of the technical work packages – i.e., WP3 “Development of an accessible software platform”, WP4 “Development of applications for handheld devices” and WP5 “On‐site multisensory activities” – paving the way for the system validation and pilot exercises in WP6. Therefore, the DMP will be a living document where new datasets may be incorporated at any time during the lifespan of the project, following the principles of FAIR data. ## Types and formats ARCHES will handle different types and formats of data according to the multiple research areas and technical developments. In principle, these are grouped into four categories: * **Multimedia research data** : A combination of sound, images, video and text components to represent the works of art in different ways and support the integration of digital surrogates into the web and mobile applications as well as the generation of complementary resources and tools. Photographs, 3D models, sketches, interactive guides, avatars and serious games are also considered within this category. * **Metadata** : Data associated with the multimedia content, used to better describe and understand each of the works of art. This includes information about the author, style, period and other details regarding the history and characteristics of the paintings, sculptures and other CH assets. Subtitles are another form of metadata, generated together with the videos to make them accessible to people with hearing difficulties. * **User data** : Based on the findings in WP3 and WP4, no user data is collected in the platform or apps. This also better complies with GDPR and ensures the privacy of the users. The access preferences of the users are saved only on the user’s device for usability purposes and are not sent to any server. 
* **Exploration data** : Data generated for/by the exploration groups regarding their involvement in the project, from the recruitment phase in the UK, Spain and Austria to the interaction with the research teams to provide feedback on the methodology, activities and technologies. Therefore, information sheets, consent forms and multiple materials to assess the outcomes and allow expressing opinions and feelings – e.g. surveys, questionnaires, interviews, etc. – were generated, stored and preserved under this category. The above information is summarised in Table 1 with the most common formats employed by the partners when managing these datasets. We will decide the ones that fit best, depending on the input sources and the applications. This will also be aligned with the definition of system requirements and architecture in the deliverables D3.1 and D4.1. **Table 1: Types and formats of data generated/collected.** <table> <tr> <th> </th> <th> **Type of data** </th> <th> **Description** </th> <th> **Formats** </th> </tr> <tr> <td> **1** </td> <td> **Multimedia research data** </td> <td> Image </td> <td> JPG, PNG, TIF, GIF, BMP </td> </tr> <tr> <td> High‐resolution photograph </td> <td> TIF, PNG, RAW </td> </tr> <tr> <td> 3D model (where applicable) </td> <td> PLY, DXF, COLLADA, STL, OBJ, PDF, U3D, FBX </td> </tr> <tr> <td> Audio and video </td> <td> MP3, MP4, MOV, AVI, WMV, MPG, MPEG, 3GP </td> </tr> <tr> <td> **2** </td> <td> **Metadata** </td> <td> Subtitles, captions </td> <td> SRT, VTT, QT </td> </tr> <tr> <td> Metadata </td> <td> JSON, plain text </td> </tr> <tr> <td> **3** </td> <td> **User data** </td> <td> None </td> <td> </td> </tr> <tr> <td> **4** </td> <td> **Exploration data** </td> <td> Information sheet </td> <td> DOC, DOCX, PDF </td> </tr> <tr> <td> Informed consent </td> <td> DOC, DOCX, PDF </td> </tr> <tr> <td> Feedback </td> <td> DOC, DOCX, XLS, XLSX, PDF, JPG, PNG, MP4, 3GP </td> </tr> </table> ## Origin We identify four different origins for the 
data collected/generated in ARCHES: * **Consortium partners** : Information and digital CH assets mainly gathered by the six participating museums, i.e., MBBAA, KHM, WC, Thyssen, FLG and V&A, so as to present them to the exploration groups in Spain, Austria and the UK. This data will be obtained from their digital archives, guides, library, etc. and exploited by means of state of the art technologies – e.g. augmented reality, avatars, relief printers and models, context‐sensitive tactile audio guides and advanced image processing techniques. ARCHES acquired new image (photographs, models, drawings) and video datasets (e.g. when enough documentation was not available or the development of accessible resources required additional multimedia content) that could be used in the project. In these cases, image and video datasets were produced at the participating museums using the most common file formats. Text files relating to information about museum objects and images were generated in formats such as DOC, PDF or TXT, although it may have been processed and converted to other schemes. * **Exploration groups** : ARCHES created new data based on the regular activities of the exploration groups prepared for them at the six ARCHES museums. It informed how the platforms, apps and activities were designed and developed with the purpose of improving their functionalities and accessibility. Qualitative and quantitative data resulting from diverse methods – e.g. interviews and discussions with participants, along with direct observation – was collected in audio, video and written formats taking the differences and difficulties of the individuals into account as done with the consent forms. 
* **External sources** : Repositories of CH assets of several institutions – such as Europeana [3], DBpedia [4] and the Rijksmuseum [5] – that publicly and freely release digital content on the Internet, useful to test the functionalities of the new tools and complement the online experience. Images, videos, audio and metadata were collected through the corresponding Application Programming Interfaces (API). The list of CH institutions continuously grew in parallel with the organisation of the sessions where the exploration groups evaluate the types of contents to be included in the platform and applications according to their needs and expectations. In the final platform data collected and combined by the Museums themselves was used in order to ensure completeness and quality. ## Re‐use As stated in the DoA and described in the previous sections, a significant part of the research data consisted of existing data coming from museums, CH sites and other related sources to be used for the development of technology such as apps, websites and sensory activities (e.g. 3D reliefs). This is clearly aligned with the expected impacts listed in the work programme under the topic Reflective‐6‐2015 [6], where promoting “the use of digital cultural heritage allowing its reinterpretation towards the development of a new shared culture in Europe” and exploiting “the rich and diverse European digital cultural heritage in a sustainable way” are two of the cornerstones. On the one hand, ARCHES re‐used data already stored on the Central Management Server (CMS) of each of the partner museums. For example, in case of the WC, this is MuseumPlus and eMuseumPlus. Therefore, there was no problem in providing specific digital resources to the technological developers once these are selected by the exploration groups with the support of OU, UBAH as well as the educators and experts working in the museums. This enabled the generation of tactile images among others. 
In addition, some images and complementary information was directly retrieved from the public websites, in particular when referring to masterpieces: * MBBAA: _http://www.museobbaa.com/en/collection/permanent‐collection/_ * KHM: _https://www.google.com/culturalinstitute/beta/partner/kunsthistorisches‐museumvienna‐museum‐of‐fine‐arts?hl=en_ * WC: _http://wallacelive.wallacecollection.org/eMuseumPlus_ * Thyssen: _https://www.museothyssen.org/coleccion_ * FLG: _http://www.flg.es/museo/la‐coleccion/bases‐de‐datos_ * V&A: _https://www.vam.ac.uk/collections_ On the other hand, different options were considered to collect and re‐use already‐existing digital CH resources, making use of the APIs released by the multiple institutions as pointed out in the previous section. These APIs provide direct access to a great deal of artworks in several formats, from images to 3D models together with videos, text and audio tracks. Europeana [3] – MBBAA, KHM, Thyssen and FLG are Europeana contributors – the Rijksmuseum [5] and the Finnish National Gallery [7] were explored during the initial phase. Furthermore portals like Google Arts and Culture [8] – part of the KHM, Thyssen, FLG and V&A collections can be found here – WikiArt (The Visual Art Encyclopaedia) [9] or DBpedia [4] also make digitization of paintings, 3D objects and associated metadata available to the general public. The consortium paid special attention to Intellectual Property Rights (IPRs) regarding data re‐use from external and internal sources. We respected the terms and conditions described on the corresponding websites and/or the copyright licenses under Creative Commons [10]. Similarly, neither the WC nor the V&A will allow their digital assets to be re‐used for commercial purposes and, specifically, the WC will not allow any re‐use of the assets by third parties for any reason without permission. 
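As a sketch of the API-based collection mentioned above, the snippet below builds a request URL for the Europeana Search API. The endpoint and parameter names (`wskey`, `query`, `rows`, `qf`) follow Europeana's public API documentation, but the key and query values are placeholders, and this is not the project's actual harvesting code.

```python
from typing import Optional
from urllib.parse import urlencode

# Public Europeana Search API endpoint (per Europeana's API documentation).
EUROPEANA_SEARCH = "https://api.europeana.eu/record/v2/search.json"


def build_search_url(query: str, api_key: str, rows: int = 12,
                     media_type: Optional[str] = None) -> str:
    """Build a Europeana Search API request URL.

    media_type restricts results to one of Europeana's TYPE facets,
    e.g. "IMAGE", "VIDEO", "SOUND", "TEXT" or "3D".
    """
    params = {"wskey": api_key, "query": query, "rows": rows}
    if media_type is not None:
        params["qf"] = f"TYPE:{media_type}"
    return f"{EUROPEANA_SEARCH}?{urlencode(params)}"


# Example: image records related to one of the participating museums.
url = build_search_url("Kunsthistorisches Museum", api_key="YOUR_API_KEY",
                       media_type="IMAGE")
```

Fetching such a URL with any HTTP client returns a JSON response whose `items` list carries titles, media links and rights statements for each matched record.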
The other participating museums in Spain and Austria allowed the sharing of the artworks that were selected by the exploration groups. After the takeover of the app and platform development by SIGN from TREE, the data for the apps and platform was collected by use of a management backend, developed as part of WP3 and stored on the server of SIGN. ## Expected size Multiple factors were taken into account to estimate the size of the data. Based on the types identified in section 2.2, we expect multimedia research data to represent the highest percentage of storage needs. Format, resolution, size, length and other variables were analysed throughout the duration of ARCHES to ensure that the hardware resources are enough to store and preserve data, especially when the platform and apps become public and new institutions are sought to share their contents. Some examples of data provided by the participating museums can be found in Table 2. Although metadata did not require much space, images (especially those employed to design 3D reliefs) are in high resolution for accurate results, ranging between 5 and 100 MB each. Likewise, data related to the generation of tactile reliefs can vary from many MB to several GB. In addition, the consortium was analysing the reproduction of 3D models with different materials, colours and dimensions taking advantage of existing or new scans. Depending on the method, the size was several GB and up to TB. 
**Table 2: Examples of data provided by the museums.** <table> <tr> <th> </th> <th> **Painting** </th> <th> **3D object** </th> <th> **Fabric** </th> </tr> <tr> <td> **Title** </td> <td> Madame de Pompadour </td> <td> Celadon dish </td> <td> Strawberry thief </td> </tr> <tr> <td> **Photo** </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Place of origin** </td> <td> Paris (France) </td> <td> Zhejiang (China) </td> <td> London (UK) </td> </tr> <tr> <td> **Date** </td> <td> 1758 </td> <td> 14th century </td> <td> 1883 </td> </tr> <tr> <td> **Artist/maker** </td> <td> François Boucher </td> <td> Unknown </td> <td> William Morris / Morris & Co. </td> </tr> <tr> <td> **Materials/techniques** </td> <td> Oil on canvas </td> <td> Stoneware, glazed </td> <td> Indigo‐discharged and block‐printed cotton </td> </tr> <tr> <td> **Credit Line** </td> <td> Bequeathed by John Jones </td> <td> Bequeathed by Mr Arthur Hurst </td> <td> Given by Morris & Co. </td> </tr> <tr> <td> **Museum No.** </td> <td> 487‐1882 </td> <td> C.1‐1940 </td> <td> T.586‐1919 </td> </tr> <tr> <td> **Gallery location** </td> <td> Europe 1600‐1815 (Room 3) </td> <td> Ceramics galleries (Room 145) </td> <td> British galleries (Room 125) </td> </tr> <tr> <td> **Description** </td> <td> Madame de Pompadour was the official mistress of King Louis XV. She was also an influential patron of the arts and a leader of taste. A devoted supporter of the Sèvres porcelain factory and keen collector of Japanese lacquer, she furnished her residences with fine furniture and porcelain. […] </td> <td> Green‐glazed stonewares from Zhejiang were the most common type of Chinese ceramics exported to the Middle East before 1400. This dish was thrown and carved before being given a thick green 'celadon' glaze, which has pooled in the incised decoration and carved fluting. […] </td> <td> Morris was inspired to draw this design after finding thrushes stealing fruit in his garden. 
This complicated and colourful pattern is printed by the indigo discharge method and took a long time to produce. Consequently, it was expensive to buy. […] </td> </tr> <tr> <td> **Other** </td> <td> </td> <td> Audio description available: _http://bit.ly/2nRAfmM_ </td> <td> </td> </tr> </table> Because exploration data consisted mainly of written documents as well as photographs and videos usually taken with a smartphone, it only accounts for several GB. ## Utility Collected/generated data will be useful for different target audiences: * **People with differences and difficulties associated with perception, memory, cognition and communication** : Co‐creation is a crucial concept in ARCHES, where end users are continuously involved in the value chain from the very beginning. Multimedia research data and multisensory activities allowed them to explore new forms of interaction with CH resources through the Internet and/or at the museum. Metadata was a good support to multimedia content so as to make the available information accessible to all. Besides, in addressing the combination of state-of-the-art technologies and digital CH assets, the exploitation and re‐use of data built bridges not only between cultures but also between heritage communication and a historically silenced audience, traditionally sidelined by mainstream cultural and intellectual activity. Papers and handouts/prints were also used, in order to make the workshops tactile and interactive. * **Researchers** : Social researchers took advantage of the documents used to obtain feedback from the participants, as well as of the information sheets and consent forms for people with learning difficulties and sensory impairments. This facilitated communication and paved the way to replicate the approach and methodology in other contexts, especially in the field of education. 
From a technological perspective, high‐resolution photographs, 3D scans and other datasets were of interest for designing new functionalities and applications in the field of CH. * **Museums** : The multimedia research data, exploration data and multisensory activities allowed museums to define more inclusive strategies in the future, building on top of the ARCHES results. Moreover, the generated material may be employed for other purposes, such as marketing and preservation, attracting new audiences to the facilities. Likewise, the generated digital content may be integrated in their own websites and other networks to boost dissemination and the reuse of these surrogates. * **General public** : Although specifically designed for the aforementioned target groups, access to multimedia data through the online platform and smartphone applications is granted to all end users free of charge. In addition, the outputs will be available at the participating museums at least during the open days to be organised at the end of the project. # FAIR data This section deals with four key aspects within the DMP, i.e., findable, accessible, interoperable and reusable data. ## Making data findable, including provisions for metadata The collection and generation of consistent and accurate metadata significantly contribute to the improvement of data search and the identification of digital resources. In this particular context, where CH assets are the main concern, metadata facilitates the finding of similar content across different online collections and repositories, such as the external sources cited in the previous sections. In order to ensure interoperability among different approaches and structures, the adoption of a common standard was intended. Therefore, the consortium analysed the schemas used by the participating museums as well as by the most relevant actors in the CH field. 
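Unique, stable identifiers are another ingredient of findability. The sketch below shows one possible content-hash scheme (hypothetical, not the consortium's actual identifier format) that keeps several surrogates of the same artwork linked while making each file uniquely addressable:

```python
import hashlib


def resource_id(artwork_id: str, data: bytes) -> str:
    """Derive a stable identifier for one digital surrogate.

    The artwork part groups all surrogates of the same work (e.g. several
    photographs of one painting); the truncated SHA-256 content hash makes
    each file's identifier unique and verifiable after copies or migrations.
    """
    digest = hashlib.sha256(data).hexdigest()[:16]
    return f"{artwork_id}/{digest}"


# Two different photographs of the same (hypothetical) artwork share the
# artwork prefix but get distinct content-derived suffixes.
photo_a = resource_id("artwork-0001", b"raw bytes of photo A")
photo_b = resource_id("artwork-0001", b"raw bytes of photo B")
```

Because the suffix is derived from file content, a repository can re-hash a file at any time to detect silent corruption or broken links between metadata and media.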
The implementation of a metadata scheme was based on the requirements of the participating museums and the research groups in order to ensure high usability and full accessibility. Likewise, the consortium defined a unique identifier for each of the digital resources to facilitate the handling of videos, images, etc. among professionals. The methodology enabled the storage and preservation of, for example, multiple photographs of the same artwork, keeping their connection for further re‐use, studies and cross-search. For data that requires versioning, the consortium took advantage of Version Control Systems (VCS). For example, the source code was maintained in Git repositories. Furthermore, the documents were also versioned in repositories during design and authoring. ## Making data openly accessible The ARCHES project proposes a general strategy to make data openly accessible. Through the platform, apps and multisensory activities, the multimedia content is openly available. Regarding the exploration data, when it does not raise ethical and privacy issues, the consortium benefits from the available platform Open Research Online (ORO) [11], the OU’s repository of research publications and other research outputs. ORO is an open access resource that can be searched and browsed freely by the general public. Open access to the content of the repositories is subject to IPRs. As a consequence, each partner carefully assessed on a case‐by‐case basis which results – and to what extent – were made public. In particular, museum data was retained by the project partners in accordance with the Grant Agreement (GA) and Consortium Agreement (CA). For example, WC digital assets, such as high‐resolution images, cannot be made openly available to third parties without permission. 
On the contrary, other WC digital assets, such as low-resolution images and video content, are already publicly available online on different websites [12], [13] and can be used by third parties for non-commercial purposes. Similarly, V&A digital assets are openly available to third parties for non-commercial purposes. The other participating museums decided on their strategy – with respect to making data openly accessible – once the digital CH assets to be included in the solutions were selected in the exploration sessions. Technology developers evaluated which outputs and intermediate results could become public, since most developments contain traces of copyrighted material (e.g. use of high-resolution photographs) and, thus, developers do not hold the full copyright on the derivative outputs.

In the context of communication and dissemination actions, different activities were planned to achieve the maximum impact from the outset by taking advantage of the generated data. These are described in deliverable D7.2 "Communication plan, activities and publications – 1st version" and include a dedicated website [2] – where reports about research and technological results that are part of the public deliverables are available – social networks, mass media, networking in different events, etc. Moreover, the consortium encourages publication in open access journals and the adoption of green or gold models for open access. Again, ORO was used to this end, as well as the website.

## Making data interoperable

The use of the most common and standardised formats for each of the data types (see Table 1) is the first step to make data interoperable with other systems and for professionals in other disciplines. Free online converters can be found on the web to easily obtain a compatible file. Regarding the vocabularies and ontologies, ARCHES proceeded as outlined in section 3.1, i.e.
analysing the schemas and methodologies used by the participating museums and other partners with the purpose of developing a joint strategy to ensure interoperability among their systems. The following paragraphs describe some popular schemas and how they are connected to provide data openly.

The Conceptual Reference Model (CRM) [14], developed by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM), is a well-known and extensively used semantic model based on earlier standards. CIDOC-CRM establishes relationships among implicit and explicit concepts for CH documentation to transform isolated and inhomogeneous metadata into a valuable and coherent global resource. CRMdig [15] has extended CIDOC-CRM in the framework of the 3D-Coform project [16].

Lightweight Information Describing Objects (LIDO) is an XML harvesting schema intended to act as a gateway and provider of museum object metadata to online databases and repositories. Therefore, it does not replace CIDOC-CRM but builds on top of this and other data schemas. The strength of LIDO lies in "its ability to support the full range of descriptive information about museum objects" [17].

The Europeana Data Model (EDM), used by the Europeana network [3], aims to guarantee the preservation of the original data from the diverse metadata schemas, while accommodating the rich variety of community standards for museums, archives and digital libraries. In particular, LIDO is one of the standard intermediary schemas that can be mapped to EDM in a straightforward manner. The EDM class hierarchy is presented in Figure 1.

**Figure 1: EDM class hierarchy [18].**

## Increase data re-use

ARCHES is committed to the exploitation and sharing of the outcomes generated (knowledge, technologies, data, etc.) in the framework of the activities defined in the work plan.
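To make the metadata approach described above more concrete, the following is a minimal, illustrative sketch of a record that combines a unique identifier (as defined in section 3.1) with descriptive fields, serialised as JSON (one of the metadata formats listed in Table 1). The field names are simplified assumptions inspired by LIDO-style object description, not a validated LIDO or EDM record:

```python
import json
import uuid

def make_metadata_record(title, creator, date, museum, media_files):
    """Build a simplified metadata record for a digital CH asset.

    Field names are illustrative assumptions, not the official LIDO/EDM
    element set; a real record would be mapped to those schemas.
    """
    return {
        # Unique identifier keeps multiple surrogates of one artwork linked
        "recordID": str(uuid.uuid4()),
        "descriptive": {
            "title": title,
            "creator": creator,
            "date": date,
            "holdingInstitution": museum,
        },
        # Digital surrogates (photographs, 3D models, videos, ...)
        "resources": media_files,
    }

record = make_metadata_record(
    title="Madame de Pompadour",
    creator="François Boucher",
    date="1758",
    museum="V&A",
    media_files=["487-1882_hires.tif", "487-1882_web.jpg"],
)
print(json.dumps(record, ensure_ascii=False, indent=2))
```

Keeping the identifier separate from the file names allows, for instance, several photographs of the same artwork to stay connected for cross-searching and further re-use, as outlined in section 3.1.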
Free applications, public deliverables, open publications and reports, and access to research data are indicative of this commitment. However, all the aspects related to IPR were thoroughly assessed for each result and addressed according to the clauses of both the GA and the CA, considering the particular interests and policies of each of the partners. Data re-use is no exception.

The platform and smartphone applications can be used as they are after the end of the project. Updates and support will depend on the decision of the museums to invest in further content. The technical development, if viable, will follow the plan developed in WP7. Because multimedia research data and metadata are mostly related to the digital CH assets owned by the six museums, their re-use by third parties will be subject to the museums' particular business strategies. Data will be licensed following the same principles each museum applies to its digital collection available on the Internet through its website, Google Arts & Culture, Europeana and other means, both during the execution and after the completion of ARCHES. Outputs derived from the use of digital CH assets will also be released/protected accordingly, in agreement with the technical developers (if any) involved. To this end, access to datasets in our repositories may be conditioned on the registration of the researchers and entities willing to take advantage of this content. In order to obtain the necessary rights, they will be asked to get in contact. If approved by the consortium, access will be granted.

Some designs – such as the relief printer – and tools will not be released, since the owner will apply for a patent. Making the data public and allowing its re-use beforehand would compromise the novelty of the invention and invalidate the application. Therefore, an embargo period is foreseen in this particular situation.

The consortium jointly worked towards achieving the maximum quality of the results, which were tested by the exploration groups in the museums.
# Allocation of resources

The estimated costs for making data FAIR were already considered when drafting the budget for the project before it was submitted to the European Commission. The implementation of dedicated spaces in data repositories was deemed to be an important part of the approach, as was the access to the collected and generated data. Consequently, the consortium does not need to make any distinction or to consider additional resources to fulfil the expectations in this regard. Depending on the repository, different people are responsible for the storage, access and curation of data:

* The contributions of ARCHES to the ORO repository of the OU are managed by the Research Manager, Prof Jonathan Rix (OU).
* The multimedia/metadata repository for the interaction with the online platform and applications for handheld devices is managed by Christoph Aschberger (SIGN).
* Intermediate and final data to generate 3D reliefs, the 3D printer design and the tactile audio guide are managed by Mr Andreas Reichinger (VRVis).

# Data security

All data was kept in compliance with the Data Protection Act (1998) [19], the Freedom of Information Act (2000) [20] and the General Data Protection Regulation (GDPR, 2018) [24]. Research notes and visual records, along with interview material and transcripts, will be kept in secure conditions. ARCHES is registered with the Faculty Data Protection Officer at the OU. Therefore, any personal information will be kept on an OU secure server.

* The OU aims to keep collected datasets resulting from the participation of the exploration groups in the pilot exercises and validation separated from personal identity information. Any key linking codes to identity information, such as names, addresses and telephone numbers, was kept secure and separate from the dataset, accessible only to the investigators. This, however, was not always acceptable to the exploration groups and the methods they chose to develop.
Consequently, partners were flexible in balancing their needs for privacy with the needs for representation. This is a key aspect of the research approach being adopted. The methodology was submitted to the Open University Data Protection Officer and was recorded on 23rd December 2016.

* SIGN dealt with different types of data. Regarding the infrastructure for multimedia research data and metadata storage, backups were made on a daily basis following the company's policy. Access to these backups was restricted to the persons working on the project. The data was stored inside the EU, on servers in Frankfurt, Germany, with backups in Vienna, Austria.
* Although most data generated for the multisensory activities is intended to be used only by the developers, VRVis has a reserved space on a secure data server in its facility, which only members of VRVis can access. An easily browsable folder hierarchy is used: each museum and each object has its own main directory. There, VRVis collects all input data and data generated during the project. VRVis is not recording or storing any personalised data.

# Ethical aspects

The research carried out in ARCHES adheres to the "Ethical guidelines for educational research" [21] edited by the British Educational Research Association (BERA) and the GDPR (2018) [24]. It also followed the OU policy documents "Ethical principles for research involving human participants" and "The code of practice for research and those conducting research". The OU insisted that all exploration groups operate under these guidelines as well as any relevant local or national policies. In fact, the research protocol for the ARCHES project already received a favourable opinion from the Open University Human Research Ethics Committee.
Given the possibly vulnerable nature of the participants in the exploration groups, the consortium recognised the need to be sensitive to several key aspects of data management and of how participants get involved in the pilots and validation (e.g. information sheets and informed consent forms), collaborate (e.g. access to data and audio-visual content generation) and provide feedback (e.g. surveys and questionnaires):

* The need to be constantly alert to potential breaches of confidentiality between exploration group members.
* Consent as an ongoing, unfolding process, particularly in relation to people with learning difficulties. This is particularly relevant given that the project is endeavouring to give these participants a research voice.
* The need to balance our desire to gather data against individual well-being. Our presence as observers of practices rooted in everyday relationships elsewhere means we will engage in discussion with relevant management or services at the earliest opportunity if we have evidence of practices which cause us concern about individual well-being.
* Issues with privacy in relation to subsequent use of data beyond the initial confines of the exploration groups. We regard images and other audio-visual footage as the property of the individual. If we subsequently used any material, we needed to seek further specific permission.

WP8 "Ethics requirements" focused on the activities tackling ethics in relation to the exploration groups. The university research teams (OU and UBAH) have considerable expertise in the field of ethics, and this was reinforced by the presence of an independent, international ethics specialist on the External Expert Advisory Board (EEAB). The work package was executed throughout the duration of the project to ensure the informed, consensual and secure involvement of people with intellectual and sensory impairments.
In this context, the key objectives were:

* Develop a range of mechanisms to inform participants about the project and to ensure we are working with their consent throughout.
* Maintain ethical standards in a manner and form appropriate to that laid out by national and institutional documents and committees.
* Operate with key participatory principles, agreed with the exploration groups, that underpin research practices and relationships throughout the project.
* Ensure all participants understand the importance of rigorously applying these ethical principles.

In order to achieve these objectives, ARCHES was organised and managed in line with principles of participation, consent, security and privacy. The main aspects connected with the DMP are outlined in the following subsections (the full version can be found in the DoA).

## Principles of participation

* ARCHES is enabling the research voice of the members of the exploration groups.
* ARCHES must ensure the members of the exploration groups are active, recognised and willing participants.
* Individual members of the exploration groups will join the project on a voluntary basis and will be able to leave whenever they so wish.
* Data collection methods will also be developed in collaboration with the exploration groups so they can best identify, capture and record their experiences and views.
* Diverse forms of communication must be used to engage with authentic user perspectives, and the diverse forms of evidence that this produces must be valued and treated as significant markers of certainty.
* Reports will be provided twice a year to the EEAB on ethical issues, in relation to the principles of participation, consent, security and privacy.

## Principles of consent

* Consent and assent are an ongoing, unfolding process, to which the research teams need to be alert at all times. They will be demonstrated by engagement as well as through verbal or signed agreement.
* Consent will be given via the communication medium in which the person is most adept (verbal and/or augmented communication), and recorded with the person's initials (or an alternative if necessary), witnessed by an advocate.
* The prospective members of the exploration groups need to meet the researchers before the project begins. Informed consent will be sought following this meeting. Agreement to participate will be viewed as provisional consent.
* Consent is provisional upon the research being conducted within the outlined framework, continuing to develop within participant expectations, and there being no adverse change in the person's ability to give consent.
* Participants will be encouraged to share information with people they trust, who in turn will be encouraged to ask questions.
* Consent and assent materials must be accessible to people with the full range of sensory and intellectual impairments, to ensure all participants are consensually involved in the project.
* Information will be given verbally, supported by signs/symbols/illustrations, and repeated on more than one occasion.
* Supporters and other professionals involved must give their informed consent to participate using the agreement form, which will be completed prior to any data collection taking place.

## Principles of security

* There is a need to be constantly alert to the potential for breaches of confidentiality and trust between exploration group members, impacting upon personal and collective well-being.
* Written records or audio recordings of consent will be maintained and updated as appropriate when new members join or if additional consents are sought.
* The UK researchers will hold appropriate Disclosure and Barring Service (DBS) certificates.

## Principles of privacy

* Research notes and visual records, along with interview material and transcripts, will be kept in secure conditions.
* The project will be registered with the Open University Faculty Data Protection Officer.
* Images and other audio-visual footage are the property of the individual. Each individual will be informed in person of the possible use of photography and other data collection methods as part of ARCHES research sessions. If we subsequently wish to use any material, we will need to seek further specific permission.

The aspects described in section 5 will also be taken into account in this field.

## Deliverables

* Full ethical clearance for the collection of personal data from the appropriate committees at the participating universities was acquired prior to the end of the third month of the project, involving oversight of all consent materials. This was reflected in deliverable D8.2 "POPD – Requirement No. 3".
* A range of accessible consent materials in English (see Annex A) was developed by the end of the second month of the project in deliverable D8.3 "POPD – H – Requirement No. 4". Versions in Spanish and German will be available two months prior to the start of the research groups in Spain and Austria.
* All participants were provided with an accessible letter (see Annex A) and access to an online video – one version for British Sign Language (BSL) speakers [22] and another version for the general public – explaining the research process and offering an opt-out option [23].
* Detailed information on the informed consent procedures implemented was provided in deliverable D8.1 "H – Requirement No. 2".

# Conclusions

In this deliverable, the strategy to deal with data collection, generation, management and exploitation has been described. The section dealing with the data summary presented the purpose, types and formats, re-use, origin, expected size and utility of the data collected and generated in ARCHES. Different categories were defined based on the diverse characteristics and needs of the project, involving the consortium partners as well as external sources.
The approach to making data findable, accessible, interoperable and re-usable was described in the corresponding section, where the use of a common data scheme to facilitate searches among different collections and the implementation of a platform to favour interoperability among different systems were discussed. No extra resources were allocated for the actions described in the DMP, since they fall within the common activities of the partners in charge of data management. Similarly, the security measures adopted for the ARCHES project were aligned with the partners' own policies in this field – in particular, the OU has submitted its plan to The Open University Data Protection Officer. However, this does not prevent the consortium from considering complementary measures if sensitive data is handled.

Ethical aspects were monitored in the framework of WP8 "Ethics requirements". The information sheets and informed consent forms provided in Annex A were translated into Spanish and German before the exploration groups in these countries were formed, following the same principles detailed in section 6 and applied to the first exploration group in the UK. The proposed protocol has already received approval from The Open University Ethics Committee.
# Introduction

ARCHES [2] ("Accessible Resources for Cultural Heritage Ecosystems") is a Horizon 2020 (H2020) project which aims to generate more inclusive environments at museums and cultural heritage (CH) sites, so that people with differences and difficulties associated with perception, memory, cognition and communication can easily access and understand art. To this end, ARCHES brings together three key aspects: a participative research methodology with the active involvement of the previously cited target audiences, organised in the so-called exploration groups; the re-use of available digital resources provided by the partners and external sources; and the development of innovative technologies that will be implemented and fine-tuned by the technology partners with the feedback from the exploration groups.

As a consequence, the generation, collection, re-use and preservation of data is deemed to be crucial for the smooth running of ARCHES and will need the collaboration of all the partners within the consortium: Treelogic (Tree), University of Bath (UBAH), The Open University (OU), Sign Time GmbH (SignTime), Neumüller Moritz – ArteConTacto (MN), Centro Regional de Bellas Artes de Asturias – Fine Arts Museum of Asturias (MBBAA), Coprix Media (Coprix), VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH (VRVis), KHM-Museumsverband – Kunsthistorisches Museum Wien (KHM), The Wallace Collection (WC), Fundación Colección Thyssen-Bornemisza – Thyssen Museum (Thyssen), Fundación Lázaro Galdiano – Lázaro Galdiano Museum (FLG) and the Victoria & Albert Museum (V&A).

This document, the first version of the ARCHES Data Management Plan, describes the research data that will be generated and collected during the project, as well as the data (i.e. the partner museums' digital assets) that will be used throughout the project. The strategy to make data FAIR (findable, accessible, interoperable and re-usable) is discussed in the following sections.
Nevertheless, it is worth noting that updates and more details will be added as ARCHES progresses, so as to provide the guidelines and strategies for exploiting the data collected and generated within ARCHES beyond the completion of the project. A final version will be released in month 36 of the project.

# Data summary

## Purpose

The purpose of the data collection/generation is to support the ARCHES technical developments and complementary activities, to validate the proposed solutions in real environments and to organise demonstration activities open to everyone, according to the objectives and guidelines defined in the Description of Action (DoA). In particular, data collection and generation are crucial to achieve the following specific objectives:

* To develop and evaluate strategies which enable an exploration of the value, form and function of mainstream technologies by and for people with differences and difficulties associated with perception, memory, cognition and communication.
* To develop and evaluate the use of mainstream technologies to enable the inclusion of people with such disabilities as museum visitors and consumers of art.
* To identify sources – the Internet, internal archives, libraries, etc. – that can provide digital cultural resources and take advantage of their possibilities to integrate this content into innovative tools, applications and functionalities.
* To validate the technological outcomes in operational environments based on a participatory research methodology consisting of three pilot exercises in museums.
* To promote the tools and applications developed in ARCHES by means of on-site demonstration activities all around Europe.
Likewise, data collection/generation is in close connection with the tasks scheduled in the work plan, especially in the context of the technical work packages – i.e. WP3 "Development of an accessible software platform", WP4 "Development of applications for handheld devices" and WP5 "On-site multisensory activities" – paving the way for the system validation and pilot exercises in WP6. Therefore, the DMP will be a living document where new datasets may be incorporated at any time throughout the lifespan of the project, following the principles of FAIR data.

## Types and formats

ARCHES will handle different types and formats of data according to the multiple research areas and technical developments. In principle, these are grouped into four categories:

* **Multimedia research data**: A combination of sound, image, video and text components to represent the works of art in different ways and support the integration of digital surrogates into the web and mobile applications, as well as the generation of complementary resources and tools. Photographs, 3D models, sketches, interactive guides, avatars and serious games are also considered within this category.
* **Metadata**: Data associated with the multimedia content used to better describe and understand each of the works of art. This includes information about the author, style, period and other details regarding the history and characteristics of the paintings, sculptures and other CH assets. Subtitles are another form of metadata that will be generated together with the videos to make them accessible to people with hearing difficulties.
* **User data**: Credentials to access the platform and applications to interact with the community, as well as user preferences to personalise how the content is presented. These preferences will not be based on the identification of impairments but on the selection of the desired functionalities from a wide range of options, avoiding the management of potentially sensitive information.
* **Exploration data**: Data generated for/by the exploration groups regarding their involvement in the project, from the recruitment phase in the UK, Spain and Austria to the interaction with the research teams to provide feedback on the methodology, activities and technologies. Therefore, information sheets, consent forms and multiple materials to assess the outcomes and allow participants to express opinions and feelings – e.g. surveys, questionnaires, interviews, etc. – will be generated, stored and preserved under this category.

The above information is summarised in Table 1 with the most common formats employed by the partners when managing these datasets. We will decide the ones that best fit depending on the input sources and the applications. This will also be aligned with the definition of system requirements and architecture in deliverables D3.1 and D4.1.

**Table 1: Types and formats of data generated/collected.**

<table> <tr> <th> </th> <th> **Type of data** </th> <th> **Description** </th> <th> **Formats** </th> </tr> <tr> <td rowspan="4"> **1** </td> <td rowspan="4"> **Multimedia research data** </td> <td> Image </td> <td> JPG, PNG, TIF, GIF, BMP </td> </tr> <tr> <td> High-resolution photograph </td> <td> TIF, PNG, RAW </td> </tr> <tr> <td> 3D model </td> <td> PLY, DXF, COLLADA, STL, OBJ, PDF, U3D, FBX </td> </tr> <tr> <td> Audio and video </td> <td> MP3, MP4, MOV, AVI, WMV, MPG, MPEG, 3GP </td> </tr> <tr> <td rowspan="2"> **2** </td> <td rowspan="2"> **Metadata** </td> <td> Subtitles, captions </td> <td> SRT, VTT, QT </td> </tr> <tr> <td> Metadata </td> <td> JSON, plain text </td> </tr> <tr> <td rowspan="2"> **3** </td> <td rowspan="2"> **User data** </td> <td> Credentials </td> <td> TBD </td> </tr> <tr> <td> Preferences </td> <td> TBD </td> </tr> <tr> <td rowspan="3"> **4** </td> <td rowspan="3"> **Exploration data** </td> <td> Information sheet </td> <td> DOC, DOCX, PDF </td> </tr> <tr> <td> Informed consent </td> <td> DOC, DOCX, PDF </td> </tr> <tr> <td> Feedback </td> <td> DOC, DOCX, XLS, XLSX, PDF, JPG, PNG, MP4, 3GP </td> </tr> </table>

## Origin

We identify four different origins for the data collected/generated in ARCHES:

* **Consortium partners**: Information and digital CH assets mainly gathered by the six participating museums, i.e. MBBAA, KHM, WC, Thyssen, FLG and V&A, so as to present them to the exploration groups in Spain, Austria and the UK. This data will be obtained from their digital archives, guides, libraries, etc. and exploited by means of state-of-the-art technologies – e.g. augmented reality, avatars, relief printers and models, context-sensitive tactile audio guides and advanced image processing techniques. ARCHES may also acquire new image (photographs, models, drawings) and video datasets (e.g. when not enough documentation is available or the development of accessible resources requires additional multimedia content) that can be used in the project. In these cases, image and video datasets will be produced at the participating museums using the most common file formats. Text files relating to information about museum objects and images will be generated in formats such as DOC, PDF or TXT, although they may be processed and converted to other schemes.
* **Exploration groups**: ARCHES will create new data based on the weekly activity of the exploration groups involved in the activities prepared for them at the six ARCHES museums. This will inform how the platforms, apps and activities are designed and developed, with the purpose of improving their functionality and accessibility. Qualitative and quantitative data resulting from diverse methods – e.g. interviews and discussions with participants, along with direct observation – will be collected in audio, video and written formats, taking the differences and difficulties of the individuals into account, as done with the consent forms.
* **Users**: In order to register and log in to the platform, users will provide a name and valid email address.
The system will store the options they select for the customised presentation and visualisation to enable the corresponding functionalities.

* **External sources**: Repositories of CH assets of several institutions – such as Europeana [3], DBpedia [4] and the Rijksmuseum [5] – that publicly and freely release digital content on the Internet, useful to test the functionalities of the new tools and complement the online experience. Images, videos, audio and metadata will be collected through the corresponding Application Programming Interfaces (APIs). The list of CH institutions will continuously grow in parallel with the organisation of the sessions where the exploration groups evaluate the types of content to be included in the platform and applications according to their needs and expectations.

## Re-use

As stated in the DoA and described in the previous sections, a significant part of the research data will consist of existing data coming from museums, CH sites and other related sources, to be used for the development of technology such as apps, websites and sensory activities (e.g. 3D reliefs). This is clearly aligned with the expected impacts listed in the work programme under the topic Reflective-6-2015 [6], where promoting "the use of digital cultural heritage allowing its reinterpretation towards the development of a new shared culture in Europe" and exploiting "the rich and diverse European digital cultural heritage in a sustainable way" are two of the cornerstones.

On the one hand, ARCHES will re-use data already stored on the Central Management Server (CMS) of each of the partner museums; for example, in the case of the WC, this is MuseumPlus and eMuseumPlus. Therefore, there will be no problem in providing specific digital resources to the technological developers once these are selected by the exploration groups with the support of OU and UBAH as well as the educators and experts working in the museums.
This will enable the generation of tactile images, among others. In addition, some images and complementary information may be directly retrieved from the public websites, in particular when referring to masterpieces:

* MBBAA: _http://www.museobbaa.com/en/collection/permanent-collection/_
* KHM: _https://www.google.com/culturalinstitute/beta/partner/kunsthistorisches-museum-viennamuseum-of-fine-arts?hl=en_
* WC: _http://wallacelive.wallacecollection.org/eMuseumPlus_
* Thyssen: _https://www.google.com/culturalinstitute/beta/u/0/partner/museo-thyssen-bornemisza?hl=en_
* FLG: _http://www.flg.es/museo/la-coleccion/bases-de-datos_
* V&A: _https://www.vam.ac.uk/collections_

On the other hand, different options will be considered to collect and re-use already-existing digital CH resources, making use of the APIs released by the multiple institutions, as pointed out in the previous section. These APIs provide direct access to a great number of artworks in several formats, from images to 3D models, together with videos, text and audio tracks. Europeana [3] – MBBAA, KHM, Thyssen and FLG are Europeana contributors – the Rijksmuseum [5] and the Finnish National Gallery [7] will be explored during the initial phase. Furthermore, portals like Google Arts and Culture [8] – part of the KHM, Thyssen, FLG and V&A collections can be found here – WikiArt (The Visual Art Encyclopaedia) [9] and DBpedia [4] also make digitisations of paintings, 3D objects and associated metadata available to the general public.

The consortium will pay special attention to Intellectual Property Rights (IPRs) regarding data re-use from external and internal sources. We will respect the terms and conditions described on the corresponding websites and/or the copyright licences under Creative Commons [10].
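As an illustrative sketch of the API-based collection mentioned above: the snippet below builds a query against the Europeana Search API. The endpoint and parameter names (`wskey`, `query`, `rows`) should be checked against the current Europeana documentation, and `YOUR_API_KEY` is a placeholder, so treat this as an assumption-laden example rather than production code:

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint of the Europeana Search API (verify against current docs).
EUROPEANA_SEARCH = "https://api.europeana.eu/record/v2/search.json"

def build_search_url(query, api_key, rows=10):
    """Build a Europeana search URL for digital CH items matching `query`."""
    params = {"wskey": api_key, "query": query, "rows": rows}
    return EUROPEANA_SEARCH + "?" + urllib.parse.urlencode(params)

def fetch_items(query, api_key):
    """Fetch matching items; each result carries metadata such as title and rights."""
    with urllib.request.urlopen(build_search_url(query, api_key)) as resp:
        return json.load(resp).get("items", [])

# Example query (no network call is made until fetch_items is used):
print(build_search_url("William Morris", "YOUR_API_KEY"))
```

Each returned item includes a rights statement, which supports the case-by-case IPR checks described above before any collected content is re-used.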
Similarly, neither the WC nor the V&A will allow their digital assets to be re-used for commercial purposes and, specifically, the WC will not allow any re-use of the assets by third parties for any reason without permission. The other participating museums in Spain and Austria will take a decision on how the digital content is shared once the artworks are selected by the exploration groups.

## Expected size

Multiple factors should be taken into account to estimate the size of the data. Based on the types identified in section 2.2, we expect multimedia research data to represent the highest percentage of storage needs. Format, resolution, size, length and other variables will be analysed throughout the duration of ARCHES to ensure that the hardware resources are sufficient to store and preserve data, especially when the platform and apps become public and new institutions are sought to share their content. Some examples of data provided by the participating museums can be found in Table 2. Although metadata will not require much space, images (especially those employed to design 3D reliefs) have to be in high resolution for accurate results, ranging between 5 and 100 MB each. Likewise, data related to the generation of tactile reliefs can vary from many MB to several GB. In addition, the consortium is currently analysing the reproduction of 3D models with different materials, colours and dimensions, taking advantage of existing or new scans. Depending on the method, the size could be several GB and up to several TB.
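A back-of-the-envelope estimate of the multimedia storage needs can be sketched from the per-item ranges cited above. The asset counts below are purely hypothetical placeholders, not figures from the project:

```python
# Rough storage estimate for multimedia research data.
# Asset counts are hypothetical; per-item size ranges follow the text above.
ASSETS = {
    # category: (number of items, min MB each, max MB each)
    "high_res_images": (200, 5, 100),        # 5-100 MB each (see text)
    "tactile_relief_data": (30, 500, 5000),  # many MB to several GB
    "videos": (50, 100, 1000),               # assumed range
}

def estimate_storage_gb(assets):
    """Return (min, max) total storage in GB for the given asset inventory."""
    lo = sum(n * mn for n, mn, mx in assets.values()) / 1024
    hi = sum(n * mx for n, mn, mx in assets.values()) / 1024
    return lo, hi

lo, hi = estimate_storage_gb(ASSETS)
print(f"Estimated storage: {lo:.0f}-{hi:.0f} GB")
```

Re-running the estimate as real counts and formats are decided would help confirm that the hardware resources remain sufficient, as the section intends.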
**Table 2: Examples of data provided by the museums.**

<table>
<tr> <th> </th> <th> **Painting** </th> <th> **3D object** </th> <th> **Fabric** </th> </tr>
<tr> <td> **Title** </td> <td> Madame de Pompadour </td> <td> Celadon dish </td> <td> Strawberry thief </td> </tr>
<tr> <td> **Photo** </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> **Place of origin** </td> <td> Paris (France) </td> <td> Zhejiang (China) </td> <td> London (UK) </td> </tr>
<tr> <td> **Date** </td> <td> 1758 </td> <td> 14th century </td> <td> 1883 </td> </tr>
<tr> <td> **Artist/maker** </td> <td> François Boucher </td> <td> Unknown </td> <td> William Morris / Morris & Co. </td> </tr>
<tr> <td> **Materials/techniques** </td> <td> Oil on canvas </td> <td> Stoneware, glazed </td> <td> Indigo-discharged and block-printed cotton </td> </tr>
<tr> <td> **Credit Line** </td> <td> Bequeathed by John Jones </td> <td> Bequeathed by Mr Arthur Hurst </td> <td> Given by Morris & Co. </td> </tr>
<tr> <td> **Museum No.** </td> <td> 487-1882 </td> <td> C.1-1940 </td> <td> T.586-1919 </td> </tr>
<tr> <td> **Gallery location** </td> <td> Europe 1600-1815 (Room 3) </td> <td> Ceramics galleries (Room 145) </td> <td> British galleries (Room 125) </td> </tr>
<tr> <td> **Description** </td> <td> Madame de Pompadour was the official mistress of King Louis XV. She was also an influential patron of the arts and a leader of taste. A devoted supporter of the Sèvres porcelain factory and keen collector of Japanese lacquer, she furnished her residences with fine furniture and porcelain. […] </td> <td> Green-glazed stonewares from Zhejiang were the most common type of Chinese ceramics exported to the Middle East before 1400. This dish was thrown and carved before being given a thick green 'celadon' glaze, which has pooled in the incised decoration and carved fluting. 
[…] </td> <td> Morris was inspired to draw this design after finding thrushes stealing fruit in his garden. This complicated and colourful pattern is printed by the indigo discharge method and took a long time to produce. Consequently, it was expensive to buy. […] </td> </tr>
<tr> <td> **Other** </td> <td> </td> <td> Audio description available: _http://bit.ly/2nRAfmM_ </td> <td> </td> </tr>
</table>

User data should not be a problem in terms of size, since it will be restricted to contact details, username and password as well as the configuration options. Because exploration data will mainly consist of written documents as well as photographs and videos usually taken with a smartphone, they will amount to several GB in total.

## Utility

Collected/generated data will be useful for different target audiences:

* **People with differences and difficulties associated with perception, memory, cognition and communication**: Co-creation is a crucial concept in ARCHES, where end users are continuously involved in the value chain from the very beginning. Multimedia research data will allow them to explore new forms of interaction with CH resources through the Internet or at the museum. Metadata will be a good support to multimedia content so as to make the available information accessible to all. Moreover, by combining state-of-the-art technologies and digital CH assets, the exploitation and re-use of data will build bridges not only between cultures but also between heritage communication and a historically silenced audience, traditionally sidelined by mainstream cultural and intellectual activity.
* **Researchers**: Social researchers may take advantage of the documents used to obtain feedback from the participants as well as the information sheets and consent forms for people with learning difficulties and sensory impairments. 
This will facilitate communication and pave the way for replicating the approach and methodology in other contexts, especially in the field of education. From a technological perspective, high-resolution photographs, 3D scans and other datasets may be of interest for designing new functionalities and applications in the field of CH.
* **Museums**: Both multimedia research data and exploration data will allow museums to define more inclusive strategies in the future, building on top of the ARCHES results. Moreover, the generated material may be employed for other purposes, such as marketing and preservation, attracting new audiences to the facilities. Likewise, the generated digital content may be integrated in their own websites and other networks to boost dissemination and the re-use of these surrogates.
* **General public**: Although specifically designed for the aforementioned target groups, access to multimedia data through the online platform and smartphone applications will be granted to all free of charge. In addition, the outputs will be available at the participating museums at least during the open days to be organised at the end of the project.

# FAIR data

This section deals with four key aspects within the DMP, i.e., findable, accessible, interoperable and re-usable data.

## Making data findable, including provisions for metadata

The collection and generation of consistent and accurate metadata significantly contribute to the improvement of data search and the identification of digital resources. In this particular context, where CH assets are the main concern, metadata will also facilitate the finding of similar content across different online collections and repositories, such as the external sources cited in the previous sections. In order to ensure interoperability among different approaches and structures, the adoption of a common standard is recommended. 
Therefore, the consortium will analyse the schemas used by the participating museums as well as the most relevant actors in the CH field. The implementation of a metadata scheme will be exploited for the search functionality in the online platform and applications for handheld devices, allowing the user to find specific CH assets by date, author, title, origin, type of artwork, etc. Keyword-based search will also be developed to detect specific words and terms in free-text fields, such as description, history and others. Likewise, the consortium will define a unique identifier for each of the digital resources to facilitate the handling of videos, images, etc. among professionals. The methodology will enable the storage and preservation of, for example, multiple photographs of the same artwork, keeping their connection for further re-use, studies and cross-search. For data that requires versioning, the consortium will take advantage of Version Control Systems (VCS). For example, source code will be maintained in Git and SVN repositories; executable object code can be versioned on MyGet.org. Furthermore, documents can also be versioned in repositories during design and authoring.

## Making data openly accessible

The ARCHES project proposes a general strategy in which different alternatives will be developed to make data openly accessible. Aside from the envisioned platform, apps and multisensory activities where multimedia content will be available for specific actions, different repositories will contain the four different types of data generated/collected by the partners. When referring to the final version of multimedia research data and associated metadata, a dedicated repository managed by the coordinator (Tree) will be used to deposit this information. This has already been considered in the system architecture, as pointed out in deliverable D3.1 “Report on system architecture definition”. 
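The unique-identifier and keyword-search functionality described above could be sketched roughly as follows. This is a minimal illustration only: the field names, the record structure and the search logic are assumptions, not the project's actual metadata scheme (which is still to be agreed by the consortium).

```python
# Sketch of per-resource unique identifiers plus naive keyword search over
# free-text fields. Field names are illustrative assumptions.
from dataclasses import dataclass, field
import uuid

@dataclass
class DigitalResource:
    title: str
    artist: str
    date: str
    resource_type: str
    description: str
    # Each digital surrogate gets a unique, stable identifier so that, e.g.,
    # multiple photographs of the same artwork stay connected.
    resource_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def keyword_search(resources, term):
    """Naive keyword search over the free-text fields (title, description)."""
    term = term.lower()
    return [r for r in resources
            if term in r.title.lower() or term in r.description.lower()]

catalogue = [
    DigitalResource("Madame de Pompadour", "François Boucher", "1758",
                    "painting", "Oil on canvas, bequeathed by John Jones"),
    DigitalResource("Strawberry thief", "William Morris", "1883",
                    "fabric", "Indigo-discharged and block-printed cotton"),
]
hits = keyword_search(catalogue, "cotton")
print([r.title for r in hits])  # ['Strawberry thief']
```

A production system would of course index these fields properly (and search all of them), but the sketch shows the core idea: a stable identifier per surrogate plus search over structured and free-text metadata.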
A REST API will facilitate easy sharing of the stored digital resources via HTTP clients that call the API methods. User data will not be accessible to people outside the consortium for privacy and security reasons. Even within the consortium, only the person in charge of the technical development of the platform at Tree will be allowed to handle this data. Passwords will be hashed. Regarding the exploration data, when it does not raise ethical and privacy issues, the consortium will benefit from the available platform Open Research Online (ORO) [11], the OU’s repository of research publications and other research outputs. ORO is an open access resource that can be searched and browsed freely by the general public. Open access to the content of the repositories will be subject to IPRs. As a consequence, each partner will carefully assess on a case-by-case basis which results – and to what extent – can be made public. In particular, museum data may be retained by the project partners in accordance with the Grant Agreement (GA) and Consortium Agreement (CA). For example, WC digital assets, such as high-resolution images, cannot be made openly available to third parties without permission. By contrast, other WC digital assets, such as low-resolution images and video content, are already (and will remain) publicly available online on different websites [12], [13] and can be used by third parties for non-commercial purposes. Similarly, V&A digital assets are openly available to third parties for non-commercial purposes. The other participating museums will decide on the strategy to adopt with respect to making data openly accessible once the digital CH assets to be included in the solution are selected in the exploration sessions planned for the end of 2017 and 2018. Technology developers will also evaluate the outputs and intermediate results that can become public, since most developments may contain traces of copyrighted material (e.g. 
use of high-resolution photographs) and, thus, developers might not have the full copyright on the derivative outputs. In the context of communication and dissemination actions, different activities are planned to achieve the maximum impact from the outset by taking advantage of the generated data. These are described in deliverable D7.2 “Communication plan, activities and publications – 1st version” and include a dedicated website [2] – where reports about research and technological results that are part of the public deliverables will be available – social networks, mass media, networking in different events, etc. Moreover, the consortium will encourage publication in open access journals and the adoption of green or gold models for open access. Again, ORO may be used to this end, as well as the website.

## Making data interoperable

The use of the most common and standardised formats for each of the data types (see Table 1) is the first step to make data interoperable with other systems and for professionals in other disciplines. Free online converters can be used to easily obtain a compatible file. Regarding vocabularies and ontologies, ARCHES will proceed as outlined in section 3.1, i.e. analysing the schemas and methodologies used by the participating museums and other partners with the purpose of developing a joint strategy to ensure interoperability among their systems. The following paragraphs describe some popular schemas and how they are connected to provide data to the Europeana network [3] (the other way around is possible too). This is an initial suggestion that should be studied in depth by the consortium. The Conceptual Reference Model (CRM) [14], developed by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM), is a well-known and extensively used semantic model based on earlier standards. 
CIDOC-CRM establishes relationships among implicit and explicit concepts for CH documentation to transform isolated and inhomogeneous metadata into a valuable and coherent global resource. CRMdig [15] has extended CIDOC-CRM in the framework of the 3D-COFORM project [16]. The Lightweight Information Describing Objects (LIDO) schema is an XML harvesting scheme intended to act as a gateway and provider of museum object metadata to online databases and repositories. Therefore, it does not replace CIDOC-CRM but builds on top of this and other data schemas. The strength of LIDO lies in “its ability to support the full range of descriptive information about museum objects” [17]. The Europeana Data Model (EDM) used by the Europeana network [3] aims to guarantee the preservation of the original data from the diverse metadata schemas, while accommodating the rich variety of community standards for museums, archives and digital libraries. In particular, LIDO is one of the standard intermediary schemas that can be mapped to EDM in a straightforward manner. The EDM class hierarchy is presented in Figure 1.

**Figure 1: EDM class hierarchy [18].**

## Increase data re-use

ARCHES is committed to the exploitation and sharing of the outcomes generated (knowledge, technologies, data, etc.) in the framework of the activities defined in the work plan. Free applications, public deliverables, open publications and reports, and access to research data are indicative of this commitment. However, all aspects related to IPR should be thoroughly assessed for each result and addressed according to the clauses of both the GA and the CA, considering the particular interests and policies of each of the partners. Data re-use is not an exception. 
The platform and smartphone applications will be kept updated and running for a period of at least two years after the completion of the project, allowing the consortium to improve the functionalities, add new content and initiate the commercial exploitation in accordance with the business plan developed in WP7. During this period, access to the repositories will be supported as usual, so that third parties will be able to re-use data under the conditions defined and agreed by the partners. Because multimedia research data and metadata are mostly related to the digital CH assets owned by the six museums, re-use by third parties will be subject to their particular business strategies. Data will be licensed following the same principles each museum applies to its digital collection available on the Internet through its website, Google Arts and Culture, Europeana and other means, throughout the execution and after the completion of ARCHES. Outputs derived from the use of digital CH assets will also be released/protected accordingly, in agreement with the technical developers (if any) involved. To this end, access to datasets in our repositories may be conditional on the registration of the researchers and entities willing to take advantage of this content. In order to obtain the necessary credentials, they will be asked to provide a valid e-mail address and additional contact details as well as some relevant information concerning the exploitation actions to be carried out. If approved by the consortium, a username and password (or alternatively an API key) will be generated. Some designs – such as the relief printer – and tools will not be released, since the owner will apply for a patent; making the data public and allowing its re-use beforehand would destroy novelty and invalidate the application. Therefore, an embargo period is foreseen in this particular situation. 
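The credential-issuing step described above (an API key generated once the consortium approves a registration) could look like the minimal sketch below. The function name, the approval flow and the decision to store only a hash of the key are illustrative assumptions, not the project's actual implementation.

```python
# Hedged sketch: issue an API key for an approved registration request.
# Only the hash is stored server-side; the plain key is shown to the
# researcher once, so a database leak does not expose usable credentials.
import hashlib
import secrets

def issue_api_key(email: str, approved: bool):
    """Return (api_key, stored_hash) for an approved request, else None."""
    if not approved:
        return None
    api_key = secrets.token_urlsafe(32)          # 32 random bytes, URL-safe
    stored_hash = hashlib.sha256(api_key.encode()).hexdigest()
    return api_key, stored_hash

result = issue_api_key("researcher@example.org", approved=True)
key, digest = result
print(len(key), len(digest))  # prints: 43 64
```

Using `secrets` (rather than `random`) matters here because API keys are security credentials and must come from a cryptographically strong source.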
The consortium will jointly work towards achieving the maximum quality of the results, which will be tested by the exploration groups in the museums.

# Allocation of resources

The estimated costs for making data FAIR were already considered when drafting the budget for the project before it was submitted to the European Commission. The implementation of dedicated spaces in data repositories was deemed to be an important part of the approach, as well as the access to the collected and generated data. Consequently, the consortium does not need to make any distinction or consider additional resources to fulfil the expectations in this regard. Depending on the repository, different people will be responsible for the storage, access and curation of data:

* The contributions of ARCHES to the ORO repository of the OU will be managed by the Research Manager, Prof Jonathan Rix (OU).
* The multimedia/metadata repository for the interaction with the online platform and applications for handheld devices will be managed by the Technical Manager, Ms Ana Belén Rodríguez (Tree).
* Intermediate and final data to generate 3D reliefs, the 3D printer design and the tactile audio guide will be managed by Mr Andreas Reichinger (VRVis).

Costs associated with the long-term preservation of data generated in the project will be assumed by the aforementioned partners as part of their daily operating costs.

# Data security

All data will be kept in compliance with the “Data Protection Act” (1998) [19] and the “Freedom of Information Act” (2000) [20]. Research notes and visual records along with interview material and transcripts will be kept in secure conditions. ARCHES is registered with the Faculty Data Protection Officer at OU. Therefore, any personal information will be kept on an OU secure server.

* OU will aim to keep collected datasets resulting from the participation of the exploration groups in the pilot exercises and validation separated from personal identity information. 
Any key linking codes to identity information such as names, addresses and telephone numbers would then be kept secure and separate from the dataset, accessible only to the investigators. This, however, may not be acceptable to the exploration groups and the methods they choose to develop. Consequently, partners will be flexible in balancing their needs for privacy with the needs for representation. This is a key aspect of the research approach being adopted. The methodology was submitted to the Open University Data Protection Officer and was recorded on 23rd December 2016.
* Tree will deal with different types of data. In particular, passwords to log in to the user’s profile will be hashed so that nobody (except for the user) can see this information. Regarding the infrastructure for multimedia research data and metadata storage, backups will be made on a daily basis following the company’s policy. Access to these backups will be restricted to the persons working on the project.
* Although most data generated for the multisensory activities will be intended to be used only by the developers, VRVis has a reserved space on a secure data server in its facility, which only members of VRVis can access. An easily browsable folder hierarchy is used: each museum and each object has its own main directory. There, VRVis collects all input data and data generated during the project.

# Ethical aspects

The research carried out in ARCHES adheres to the “Ethical Guidelines for Educational Research” [21] edited by the British Educational Research Association (BERA) and the “Data Protection Act” (1998) [19]. It will also follow the OU policy documents “Ethical principles for research involving human participants” and “The code of practice for research and those conducting research”. The OU will insist that all exploration groups operate under these guidelines as well as any local or national policies which are relevant. 
In fact, the research protocol for the ARCHES project has already received a favourable opinion from the Open University Human Research Ethics Committee. Given the possibly vulnerable nature of the participants in the exploration groups, the consortium recognises the need to be sensitive to several key aspects when referring to data management and how participants get involved in the pilots and validation (e.g. information sheets and informed consent forms), collaborate (e.g. access to data and audio-visual content generation) and provide feedback (e.g. surveys and questionnaires):

* The need to be constantly alert to potential breaches of confidentiality between exploration group members.
* Consent as an ongoing, unfolding process, particularly in relation to people with learning difficulties. This is particularly relevant given that the project is endeavouring to give these participants a research voice.
* The need to balance our desire to gather data with concern for individual well-being. Our presence as observers of practices rooted in everyday relationships elsewhere means we will engage in discussion with relevant management or services at the earliest opportunity if we observe practices which cause us concern about individual well-being.
* Issues with privacy in relation to subsequent use of data beyond the initial confines of the exploration groups. We regard images and other audio-visual footage to be the property of the individual. If we subsequently wish to use any material, we will need to seek further specific permission.

WP8 “Ethics requirements” focuses on the activities tackling ethics in relation to the exploration groups. The university research teams (OU and UBAH) have considerable personal expertise within the field of ethics, but this will be reinforced by the presence of an independent, international ethical specialist on the External Expert Advisory Board (EEAB). 
The work package will be executed throughout the duration of the project to ensure the informed, consensual and secure involvement of people with intellectual and sensory impairments. In this context, the key objectives will be to:

* Develop a range of mechanisms to inform participants about the project and to ensure we are working with their consent throughout.
* Maintain ethical standards in a manner and form appropriate to that laid out by national and institutional documents and committees.
* Operate with key participatory principles, agreed with the exploration groups, that will underpin research practices and relationships throughout the project.
* Ensure all participants understand the importance of rigorously applying these ethical principles.

In order to achieve these objectives, ARCHES will be organised and managed in line with the principles of participation, consent, security and privacy. The main aspects connected with the DMP are outlined in the following subsections (the full version can be found in the DoA).

## Principles of participation

* ARCHES is enabling the research voice of the members of the exploration groups.
* ARCHES must ensure the members of the exploration groups are active, recognised and willing participants.
* Individual members of the exploration groups will join the project on a volunteer basis and will be able to leave whenever they so wish.
* Data collection methods will also be developed in collaboration with the exploration groups so they can best identify, capture and record their experiences and views.
* Diverse forms of communication must be used to engage with authentic user perspectives, and the diverse forms of evidence that this produces must be valued and treated as significant markers of certainty.
* Reports will be provided twice a year to the EEAB on ethical issues, in relation to the principles of participation, consent, security and privacy. 
## Principles of consent

* Consent and assent are an ongoing, unfolding process, to which the research teams need to be alert at all times. They will be demonstrated by engagement as well as through verbal or signed agreement.
* Consent will be made via the communication medium in which the person is most adept (verbal and/or augmented communication), and recorded with the person’s initials (or an alternative if necessary), witnessed by an advocate.
* The prospective members of the exploration groups need to meet the researchers before the project begins. Informed consent will be sought following this meeting. Agreement to participate will be viewed as provisional consent.
* Consent is provisional upon the research being conducted within the outlined framework, continuing to develop within participant expectations, and there being no adverse change in the person’s ability to give consent.
* Participants will be encouraged to share information with people they trust, who in turn will be encouraged to ask questions.
* Consent and assent materials must be accessible to people with the range of sensory and intellectual impairments, to ensure all participants are consensually involved in the project.
* Information will be given verbally, supported by signs/symbols/illustrations, and repeated on more than one occasion.
* Supporters and other professionals involved must give their informed consent to participate using the agreement form, which will be completed prior to any data collection taking place.

## Principles of security

* There is a need to be constantly alert to the potential for breaches of confidentiality and trust between exploration group members, impacting upon personal and collective well-being.
* Written records or audio recordings of consent will be maintained and updated as appropriate when new members join or if additional consents are sought.
* The UK researchers will hold appropriate Disclosure and Barring Service certificates. 
## Principles of privacy

* Research notes and visual records along with interview material and transcripts will be kept in secure conditions.
* The project will be registered with the Open University Faculty Data Protection Officer.
* Images and other audio-visual footage are the property of the individual. Each individual will be informed in person of the possible use of photography and other data collection methods as part of ARCHES research sessions. If we subsequently wish to use any material, we will need to seek further specific permission.

The aspects described in section 5 will also be taken into account in this field.

## Deliverables

* Full ethical clearance for the collection of personal data from the appropriate committee at the participating universities was acquired prior to the end of the third month of the project, involving oversight of all consent materials. This was reflected in deliverable D8.2 “POPD – Requirement No. 3”.
* A range of accessible consent material in English (see Annex A) was developed by the end of the second month of the project in deliverable D8.3 “POPD – H – Requirement No. 4”. Versions in Spanish and German will be available two months prior to the start of the research groups in Spain and Austria.
* All participants are being provided with an accessible letter (see Annex A) and access to an online video – one version for British Sign Language (BSL) speakers [22] and another version for the general public – explaining the research process and offering an opt-out option [23].
* Detailed information on the informed consent procedures implemented was provided in deliverable D8.1 “H – Requirement No. 2”.

# Conclusions

In this deliverable, the initial strategy to deal with data collection, generation, management and exploitation has been described. More information will continuously be included by the consortium partners until the final version is released in September 2019. 
This will allow us to improve and fine-tune the preliminary actions (including the identification of datasets) as well as implement new measures if necessary. The section dealing with the data summary presented the purpose, types and formats, re-use, origin, expected size and utility of the data collected and generated in ARCHES. Different categories were defined based on the diverse characteristics and needs of the project, involving the consortium partners as well as external sources. The approach to make data findable, accessible, interoperable and re-usable was drafted in the corresponding section, where the use of a common data scheme to facilitate searches among different collections and the implementation of APIs to favour interoperability among different systems were discussed. In addition, it was clearly stated that the re-use of existing digital CH assets fell within the objectives of the project. No extra resources will be allocated for the actions described in the DMP, since they fall within the common activities of the partners in charge of data management. Similarly, the security measures adopted for the ARCHES project will be aligned with their own policies in this field – in particular, OU has submitted its plan to The Open University Data Protection Officer. However, this does not prevent the consortium from considering complementary measures if sensitive data is handled. Ethical aspects will be monitored in the framework of WP8 “Ethics requirements”. The information sheets and informed consents provided in Annex A will be translated into Spanish and German before the exploration groups in these countries are formed, following the same principles detailed in section 6 and applied for the first exploration group in the UK. The proposed protocol has already received confirmation from The Open University Ethics Committee.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1123_WeGovNow_693514.md
1 Introduction
2 Data summary
2.1 Administrative information
2.2 Purpose of data collation
2.3 Type and format of data collated
2.4 Re-use of existing data
2.5 Expected size of data
2.6 Expected data utility
3 Preserving access to quantitative WeGovNow platform evaluation data
3.1 Making quantitative WeGovNow platform evaluation data findable
3.2 Making quantitative WeGovNow platform evaluation data openly accessible
3.3 Making quantitative WeGovNow platform data interoperable
3.4 Increase re-use of quantitative WeGovNow platform evaluation data
4 Allocation of resources
5 Data security
6 Ethical aspects
7 Preserving access to publications

# Introduction

According to available guidance, H2020 projects are to provide a first version of the Data Management Plan (DMP) within the first six months of the project 1 . The initial DMP should be updated during the project lifetime. The present document represents a final update of the initial version of the DMP. The DMP presented throughout this document starts with a summary of the data collated in the framework of the WeGovNow pilot evaluation (Chapter 2). This is followed by a description of how access to quantitative data collated for the purposes of formative pilot platform evaluation will be preserved (Chapter 3). Next, the resources allocated to this are assessed (Chapter 4) and measures for ensuring data security are described (Chapter 5). This is followed by a description of how ethical aspects and data privacy according to the GDPR have been addressed (Chapter 6). Finally, it is described how access to publications is preserved (Chapter 7).

# Data summary

## Administrative information

Grant Agreement No.: 693514
Acronym: WeGovNow
DMP version: 2.3
Planned update: n. a. 
DMP responsible: Lutz Kubitschke

## Purpose of data collation

The WeGovNow project has developed a new type of online platform for citizen engagement, thereby integrating existing software applications and newly developed ones in accordance with a multi-staged development approach (Figure 1).

_Figure 1 – Multi-staged development approach of the WeGovNow online platform (Phase I: basic platform architecture development; Phase II: iterative prototype component integration; Phase III: platform refinement & extension; Phase IV: platform piloting under day-to-day conditions)_

The WeGovNow platform has been piloted in three municipalities from project month 25 onwards. During the pilot evaluation phase, data have been gathered in relation to different evaluation objectives, namely whether

1. the WeGovNow pilot system works as anticipated (viability perspective)
2. the WeGovNow pilot system is worth being maintained after the ending of the pilot duration (sustainability perspective)

Outcomes have been fed into the development of guidance on the further mainstreaming of the WeGovNow pilot platform after the ending of the pilot project duration, including recommendations on further exploitation of the WeGovNow pilot platform as a major project output.

## Type and format of data collated

Different types of evaluation data have been gathered throughout the project’s pilot phase, including quantitative data:

* _Quantitative evaluation data generated by means of automated monitoring of platform usage:_ The WeGovNow online platform developed includes different functional software components which enable the pilot users to post and manipulate different types of content. This is reflected by the case-based data set including the following variables:
  * User account created
  * User account automatically validated
  * User account manual validation requested
  * User account manually validated
  * Date of birth of registered user account holder
  * Sex of registered user account holder
  * No. 
of original contributions posted through the WeGovNow First Life platform component
  * No. of comments posted through the WeGovNow First Life platform component
  * No. of original contributions posted through the WeGovNow Community Maps component
  * No. of original contributions updated through the WeGovNow Community Maps component
  * No. of comments posted through the WeGovNow Community Maps component
  * No. of objects created through the WeGovNow First Life component
  * No. of objects updated through the WeGovNow First Life component
  * No. of objects deleted through the WeGovNow First Life component
  * No. of votes cast through the WeGovNow LiquidFeedback component

  These data have been processed in a common spreadsheet format (Microsoft Excel).

* _Qualitative evaluation data gathered by means of different data gathering methods:_ Semi-structured interviews were conducted with a number of selected stakeholders at the pilot sites for the purposes of the viability assessment, focussing on identifying potential impacts of the pilot service on the various stakeholder groups involved. Outcomes were documented using a common reporting template. Further to this, qualitative evaluation data have been gathered by means of focus group sessions. In thematic regard, these events focused on gathering the stakeholders’ perceptions of the utility, usability and reliability of the pilot services, including the technical infrastructure through which the pilot services are delivered at each of the pilot sites. Perceived impacts have also been addressed. Outcomes were again documented using a common reporting template. All reporting templates were processed as text files with the help of common word processing software (MS Word).

## Re-use of existing data

The WeGovNow project has specifically developed and piloted a new type of civic engagement platform integrating various participation functions.
There was no existing data available that could be utilised for the purpose of pilot platform evaluation in the framework of WeGovNow.

## Expected size of data

The following data volume was generated:

* _Quantitative data gathered:_ A case-based variable set was generated for 11,833 instances overall.
* _Qualitative data gathered:_ Overall, 18 aggregated reporting templates have been processed.

## Expected data utility

Data gathered by means of the project’s multi-method evaluation approach specifically relate to the utilisation of the newly developed WeGovNow pilot platform in the three participating pilot municipalities. These data are of utility primarily for formative evaluation purposes to support the further mainstreaming of the pilot platform, the latter representing a key output of the project. The pilot platform may however undergo further optimisation prior to the envisaged mainstreaming after the ending of the pilot project. Due to the formative nature of the evaluation design adopted for the purpose of the WeGovNow project, the long-term value of the evaluation data generated can be assessed as very low. The type and nature of the evaluation data gathered do not offer opportunities for subsequent research, e.g. in terms of secondary analyses or replication of results.² However, the quantitative data set that has become available from automated usage monitoring during the overall project’s pilot phase may potentially be utilised by others for verifying any published results referring to these data, albeit with a rather low probability given the strong RTD-focus of the current project.

# Preserving access to quantitative WeGovNow platform evaluation data

## Making quantitative WeGovNow platform evaluation data findable

Beyond the project duration, it is planned to make an anonymous case-level data set from the automated pilot platform usage monitoring available through the GESIS Datorium repository.
The data set consists of 11,833 instances and 11 coded variables. The data set is intended to be made available after the formal closing of the project.

The user ID, which could in conjunction with registration data stored in the pilot platform at least theoretically enable the identification of an individual whose platform usage activities have been monitored, is not included in the quantitative data set that has been generated for formative evaluation purposes. Also, no person-related information stemming from the required registration of the pilot users to the pilot platform has been derived from the platform for evaluation purposes. Hence, no personal data or any other information that would theoretically allow a third party to infer a pilot user’s identity has been included in the data set generated for formative evaluation purposes. This ensures anonymity of the data set to be publicly preserved.

Findability will be supported by relying on the metadata framework applied by the data sharing repository to be utilized (GESIS Datorium).³ In particular, the following metadata will be provided:

² In contrast to “replicability” in this sense, we understand “reproducibility” as making available data sets to others for verifying published results. C.f. e.g. Peng, R. (2009): Reproducible research and Biostatistics. In: Biostatistics, 10 (3): 405-408.

³ This metadata framework is compatible with the codebook standard of the Data Documentation Initiative (DDI 2). C.f. Wolfgang Zenk-Möltgen and Monika Linne (2014): Metadatenschema zu datorium - Data Sharing Repositorium. GESIS-Technical Reports 2014/103.

1. Title
2. Principal investigator & Institution
3. Publisher
4. Publication Year
5. Availability
6. Subject Area
7. Topic Classification
8. Abstract
9. Geographical Area
10. Data collection Mode
11. Survey Period
12. Rights
13. Notes
14. File description
15. Research Data Type
16. Language
17. Number of Variables
18. Number of Units
19. Unit Type
20.
Software

These metadata will be fed by means of a persistent digital identifier (DOI) into the international community of DataCite and thus become findable at an international level.

## Making quantitative WeGovNow platform evaluation data openly accessible

As mentioned above, an anonymous case-level data set derived by means of automated pilot platform usage monitoring will be uploaded to the GESIS Datorium data sharing repository. An account has already been registered. The data set will be made available as an OpenDocument Spreadsheet (.ods). In conjunction with the metadata to be provided, the data will be immediately usable with commonly available spreadsheet software. No particular documentation of access software will thus be required. Access to the data is planned to be provided to everybody. Setting up a Data Access Committee, e.g. for concluding particular data access agreements, is not considered necessary.

## Making quantitative WeGovNow platform data interoperable

As discussed earlier, the type and nature of the data set generated for the purposes of formative evaluation of the WeGovNow pilot platform is not expected to be of high long-term value (c.f. 2.6). When it comes to potential re-combination with datasets from different origins in particular, no value is seen at all due to the formative evaluation approach pursued. As discussed earlier as well, the quantitative data set that has become available from automated usage monitoring might be utilised by others for verifying any published results referring to these data, albeit with a rather low probability given the strong RTD-focus of the current project. This will be supported by a documentation of all variables and numeric codes included in the data set that will be provided on the data sharing repository (GESIS Datorium).
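Such a variable-and-code documentation can be sketched as a simple mapping. The variable names and numeric codes below are illustrative assumptions only; they do not reproduce the project's actual coding scheme:

```python
# Illustrative codebook sketch: variable names and numeric codes are
# hypothetical and do NOT reproduce the project's actual coding scheme.
CODEBOOK = {
    "sex": {1: "female", 2: "male", 9: "not stated"},
    "account_validation": {0: "not validated", 1: "automatic", 2: "manual"},
}

def decode(variable, code):
    """Translate a numeric code into its documented label."""
    return CODEBOOK[variable].get(code, "undocumented code")

print(decode("sex", 1), decode("account_validation", 3))
# → female undocumented code
```

A documentation of this kind allows a third party to interpret each coded value in the published spreadsheet without access to the original platform.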
Where possible, available standard codes have been followed; for example, date of birth data were coded according to ISO 8601.

## Increase re-use of quantitative WeGovNow platform evaluation data

The data set includes automatically derived monitoring data on pilot platform utilization. Prior to further processing, the data derived from the pilot platform underwent a quality assurance procedure in relation to data integrity in the form of a plausibility assessment. The data set will be made publicly available on GESIS Datorium following the closing of the WeGovNow project. No embargo period is foreseen. No particular restrictions on the re-use of the data are planned to be imposed. The appropriateness of different licensing agreements under the Creative Commons licensing framework is currently being explored. The data set will remain re-usable until the repository withdraws it or goes out of business.

# Allocation of resources

Costs for data preparation and documentation are covered by the project budget. The costs of making the data FAIR cannot be exactly specified at the current stage. However, expenses for data set preparation, data management and additional documentation concerning those data to be made openly accessible are estimated to not exceed 0.5 person months. No additional expenses are expected to accrue for purchasing supportive tools (e.g. for working with DDI) or for repository charges for data submission. The lead partner (empirica) of the project’s evaluation work package (WP4) takes responsibility for data management. Lutz Kubitschke and Sonja Müller are responsible for data storage, archiving and publication.

# Data security

During the project, all evaluation data have been stored on the server of the lead organization (empirica) responsible for the evaluation work package (WP4), with daily backup to an institutional off-site server. The team member responsible for storage is supported by empirica’s IT team.
Backups are checked manually at two-week intervals. No additional costs are accruing for storage and back-up. When it comes to quantitative monitoring data, sensitive data (the user account ID recorded by the pilot platform) have been separated to create an anonymised data set. Beyond the user ID, no data item processed and stored is assessed as sensitive. When it comes to qualitative evaluation data generated for evaluation purposes, these data are planned to be stored locally on empirica’s servers for 10 years, whereby no costs will be associated with local storage. As described above, qualitative raw data will not be made available to external parties, preventing data privacy threats. Only project members will be granted access on request, subject to clearance of a non-disclosure agreement.

# Ethical aspects

Informed consent for the sharing of anonymised evaluation data and their long-term preservation was obtained during data collation. Sensitive data were separated and kept secure. When it comes to the processing of personal data in the framework of the local validation trials implemented in the three WeGovNow pilot municipalities, GDPR compliance was ensured. During the overall project’s pilot phase, the WeGovNow platform has been operated by three municipalities under day-to-day conditions, namely in Turin, in the London Borough of Southwark and in San Donà di Piave. With help of the platform, each pilot municipality offered a publicly available pilot service to its citizens until the end of the formal project duration. In technological terms, the individual platform components (software applications) were hosted by different consortium members according to a Software-as-a-Service (SaaS) deployment model. Pilot users entered data into the pilot platform through a common web interface. User data were shared across the platform components, either directly between individual components or by means of a common logger data base.
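The separation of the sensitive user account ID described under data security above can be sketched as follows. The column names are illustrative assumptions, not the platform's actual field names:

```python
# Per the data security section, the user account ID is the only item
# assessed as sensitive; "user_id" is a hypothetical column name.
SENSITIVE = {"user_id"}

def anonymise(cases):
    """Drop sensitive columns from each case before the data set is shared."""
    return [{k: v for k, v in case.items() if k not in SENSITIVE} for case in cases]

# A single hypothetical monitoring case:
raw = [{"user_id": "u0001", "sex": "f", "votes_cast": 3}]
shared = anonymise(raw)
print(sorted(shared[0]))  # → ['sex', 'votes_cast']
```

Removing the identifier column before publication, while the registration data stay on the platform, is what makes the published case-level data set anonymous.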
From a legal point of view, two different perspectives deserved attention in the framework of the WeGovNow pilots when it comes to GDPR requirements, namely an external one and an internal one:

* External: On the one hand, a legal relationship was established between the pilot service provider (pilot municipality) and the pilot service user (citizen).
* Internal: On the other hand, a legal relationship was established between the pilot service provider (pilot municipality) and those parties processing data on its behalf in accordance with a SaaS deployment model (technology partners hosting one or more platform components remotely).

As far as the processing of personal data was concerned, the GDPR creates obligations for the WeGovNow municipalities offering the pilot service (“data controllers”) to their citizens (“data subjects”) with the help of WeGovNow component providers (“data processors”). These obligations were met in various ways:

* A terms of use (ToU) statement complying with GDPR requirements was developed by each pilot site and made available through the local pilot platform instances to the pilot users. Consent was requested prior to user registration.
* A data privacy statement was developed by each pilot site and made available through the local platform instances to the pilot users. Consent was requested prior to user registration. In this context, users were informed about which types of personally identifiable information were collected about them across the WeGovNow platform components, how the data would be used and how users could control the information gathered. Current data protection legislation as well as the new GDPR put an obligation on data controllers to ensure data subjects can rectify, remove or block incorrect data about themselves. Users were also informed about their various rights in relation to data protection as stipulated by GDPR.
* To be able to respond to user requests in relation to these rights, it was identified in advance what personal data were held within the individual WeGovNow platform components, where they come from, who they are shared with, how their processing can be restricted and how they can be erased. Also, a process was defined in advance for how the consortium would react if a pilot user asked to have his/her personal data deleted, for example. In such a case, the pilot municipality receiving such a request from one of its citizens was able to rely upon a commonly agreed procedure for informing the partners concerned, monitoring how the user’s claim was met and providing informed feedback on this matter to the pilot user.
* The GDPR makes privacy by design an explicit legal requirement under the term ‘data protection by design and by default’. As a general rule, personal data were processed within and across the WeGovNow platform software components only for those purposes intended to be achieved by the component.

When it comes to the legal relationship between the pilot municipalities and the technical partners hosting one or more software components remotely, GDPR differentiates between the “data controller” (the pilot municipalities) and the “data processor” (the technical partners). Processing of personal data on behalf of a data controller requires an assignment in writing between both parties according to GDPR. Therefore, a data processing agreement was concluded bilaterally between each of the three pilot municipalities and each WeGovNow component provider prior to the starting of the local pilots. Overall, 12 data processing agreements were hence concluded.

# Preserving access to publications

Project partners have relied on research results from WeGovNow for authoring scientific papers for journals and book chapters, as well as presenting conference papers in relevant disciplines.
To preserve their accessibility, these have been made available on different repositories as follows:

* The Zenodo open access repository
* The open access repository of the University of Heidelberg
* The open access repository of the University of Turin
* The Liquid Democracy Journal (permanently archived at the German National Library)
* The ACM Digital Library
* IEEE Xplore Digital Library
* International Conference on Cartography and GIS (ICC&GIS)

<table>
<tr>
<th> **Title (Year)** </th>
<th> **Authors** </th>
<th> **Journal/book/conference** </th>
<th> **Type** </th>
<th> **Link to publication** </th>
</tr>
<tr>
<td> A Fair Distance Function (2017) </td>
<td> Behrens, J. and B. Swierczek </td>
<td> The Liquid Democracy Journal on electronic participation, collective moderation, and voting systems </td>
<td> Other </td>
<td> _http://www.liquid-democracy-journal.org/issue/5/_ </td>
</tr>
<tr>
<td> LiquidFeedback’s Issue Limiter (2017) </td>
<td> Behrens, J., Nitsche, A. and B. Swierczek </td>
<td> The Liquid Democracy Journal on electronic participation, collective moderation, and voting systems </td>
<td> Other </td>
<td> _http://www.liquid-democracy-journal.org/issue/5/_ </td>
</tr>
<tr>
<td> Unified User Management with LiquidFeedback (2018) </td>
<td> Behrens, J. and B. Swierczek </td>
<td> The Liquid Democracy Journal on electronic participation, collective moderation, and voting systems </td>
<td> Other </td>
<td> _http://www.liquid-democracy-journal.org/issue/6/_ </td>
</tr>
<tr>
<td> Data Quality Concept for eGovernment Web-Map Based Services (2018) </td>
<td> Noskov A., Zipf A. and A. Rousell </td>
<td> Proceedings 7th International Conference on Cartography and GIS </td>
<td> Conference proceedings </td>
<td> _https://iccgis2018.cartography-gis.com/7ICCGIS_Proceedings/7_ICCGIS_2018%20(34).pdf_ </td>
</tr>
<tr>
<td> Computer Vision Approaches for Big Geo-Spatial Data: Quality Assessment of Raster Tiled Web Maps for Smart City Solutions (2018) </td>
<td> Noskov A. </td>
<td> Proceedings 7th International Conference on Cartography and GIS </td>
<td> Conference proceedings </td>
<td> _https://www.geog.uni-heidelberg.de/md/chemgeo/geog/gis/noskov2018rastertilesqualityinitial.pdf_ </td>
</tr>
<tr>
<td> Open Source Tools for Coastal Dynamics Monitoring (2018) </td>
<td> Noskov A. </td>
<td> Proc. SPIE 10773, Sixth International Conference on Remote Sensing and Geoinformation of the Environment </td>
<td> Conference proceedings </td>
<td> _https://www.geog.uni-heidelberg.de/md/chemgeo/geog/gis/noskov2018osrccoastdyn.pdf_ </td>
</tr>
<tr>
<td> Backend and Frontend Strategies for Deployment of WebGIS Services (2018) </td>
<td> Noskov A. and A. Zipf </td>
<td> Proc. SPIE 10773, Sixth International Conference on Remote Sensing and Geoinformation of the Environment </td>
<td> Conference proceedings </td>
<td> _https://www.geog.uni-heidelberg.de/md/chemgeo/geog/gis/noskov2018fbswebgis.pdf_ </td>
</tr>
<tr>
<td> Smart City WebGIS Applications: Proof of Work Concept for High-level Quality-of-Service Assurance (2018) </td>
<td> Noskov, A. </td>
<td> ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-4/W7 </td>
<td> Conference proceedings </td>
<td> _https://www.geog.uni-heidelberg.de/md/chemgeo/geog/gis/noskovzipf2018pow.pdf_ </td>
</tr>
<tr>
<td> Definition of Contour Lines Interpolation Optimal Methods for E-Government Solutions (2018) </td>
<td> Noskov, A. and A. Zipf </td>
<td> ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-4/W8 </td>
<td> Conference proceedings </td>
<td> _https://www.geog.uni-heidelberg.de/md/chemgeo/geog/gis/noskovzipf2018interp.pdf_ </td>
</tr>
<tr>
<td> Modelling and Assessing Spatial Big Data: Use Cases of the OpenStreetMap Full-History Dump (2019) </td>
<td> Noskov, A. et al. </td>
<td> Spatial Planning in the Big Data Revolution </td>
<td> Book chapter </td>
<td> _http://ar.nkov.com/i/NoskovGrinbergerPapapesiosRousellTroiloZipf2019LowLevelFHD.pdf_ </td>
</tr>
<tr>
<td> Open-Data Driven Embeddable Quality Management Services for Map-Based Web Applications (2019) </td>
<td> Noskov, A. and A. Zipf </td>
<td> Big Earth Data </td>
<td> Journal Article </td>
<td> _https://www.geog.uni-heidelberg.de/md/chemgeo/geog/gis/noskovzipf2019embquality.pdf_ </td>
</tr>
<tr>
<td> From E-Government to We-Government: an analysis towards participatory public services in the context of the H2020 WeGovNow (2018) </td>
<td> Tsampoulatidis, I., Kompatsiaris, I. and N. Komninos </td>
<td> Information Society and Smart Cities Conference, University of Cambridge, United Kingdom </td>
<td> Conference proceedings </td>
<td> _https://zenodo.org/record/2578929_ </td>
</tr>
<tr>
<td> La Pubblica Amministrazione responsabile: un caso di digital welfare (2018) </td>
<td> Visentin, M. and G. Antonini </td>
<td> Rivista Italiana di Public Management </td>
<td> Journal Article </td>
<td> _https://zenodo.org/record/2579141_ </td>
</tr>
<tr>
<td> First Life, the Neighborhood Social Network: a Collaborative Environment for Citizens (2016) </td>
<td> Antonini, A. et al. </td>
<td> Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion </td>
<td> Conference publication </td>
<td> _https://iris.unito.it/handle/2318/1646139_ </td>
</tr>
<tr>
<td> WeGovNow: a map based platform to engage the local civic society (2018) </td>
<td> Boella, G. et al. </td>
<td> WWW '18 Companion Proceedings of The Web Conference 2018 </td>
<td> Conference publication </td>
<td> _https://dl.acm.org/citation.cfm?id=3191560_ </td>
</tr>
<tr>
<td> WeGovNow: an integrated platform for social engagement in shaping future cities (2018) </td>
<td> Boella, G. et al. </td>
<td> 4th Italian Conference on ICT for Smart Cities And Communities 2018 </td>
<td> Conference publication </td>
<td> _https://iris.unito.it/handle/2318/1693782_ </td>
</tr>
<tr>
<td> Back to public: Rethinking the public dimension of institutional and private initiatives on an urban data platform (2016) </td>
<td> Lupi, L. et al. </td>
<td> Proceedings of the 2016 IEEE International Smart Cities Conference (ISC2) </td>
<td> Conference publication </td>
<td> _https://iris.unito.it/handle/2318/1646137_ </td>
</tr>
<tr>
<td> MiraMap: A We-Government Tool for Smart Peripheries in Smart Cities (2016) </td>
<td> De Filippi, F. et al. </td>
<td> IEEE Access </td>
<td> Journal Article </td>
<td> _https://ieeexplore.ieee.org/document/7444140_ </td>
</tr>
</table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1125_I-Media-Cities_693559.md
# Executive Summary

This Data Management Plan (DMP) describes all elements of the data management life cycle of I-Media-Cities. It presents the types of data that will be collected, processed and generated by the project and how the I-Media-Cities Consortium plans to manage these datasets in accordance with the FAIR-principles of data management, as required for a European project participating in Horizon 2020. It provides information on the type of data the project will collect, process and generate, the partner responsible for the DMP during the course of the project, how this data will be handled during and after the project lifetime and what methods and standards will be applied to the data. The DMP also includes all rules and provisions on data preservation, security and ethics for I-Media-Cities. These rules and provisions were already listed in article 6 on IPR Management of D1.1 and in D9.1 on the ethics of the project. This deliverable will detail the specific data protection and ethical rules that apply to the different datasets collected, analyzed and generated within the project. Finally, a Data Register listing all details for all the open datasets planned for in I-Media-Cities is provided.

# 1. Introduction

This Data Management Plan (DMP) provides all the information needed to answer any question that might arise concerning the data collected, processed and generated within the project. There is a multitude of diverse research and user-generated data planned for, which makes such a concrete Data Management Plan and its updated versions (at M16, M24 and M36) an absolute necessity. This document lists all the datasets that are going to be produced within the project, and provides an analysis of all the main elements of the data management policy the project will implement to handle those datasets.
I-Media-Cities is part of the Open Research Data Pilot, a flexible pilot running under Horizon 2020, whose goal is to improve the visibility, reach and impact of research for science, society and the researchers themselves. Consequently, the DMP provides information on how the research data of the project follow and adhere to the FAIR-principles of data management. It is also important to note that a key component of the project is that all datasets generated within the project will be made open, as long as there are no limitations due to copyright provisions or privacy laws. The DMP also develops the rules and general principles designed in D1.1 (Quality Assurance, Risk Management and IPR) and D9.1 (POPD – Requirement N° 4).

This Data Management Plan provides information on the following elements of the data management lifecycle:

* Type and volume of data collected, processed and generated within the project
* The ways the project data complies with the FAIR-principles of data management
* Rules of preservation and archiving
* Ethical rules of data management
* Data Quality Assurance
* Resourcing of data management
* Data security provisions

# 2. Data of I-Media-Cities

When speaking of the data used, collected, processed and generated within I-Media-Cities, a distinction has to be made between the data and metadata existing before the project (_Background_) and the data and metadata created within the project (_Foreground_), as defined in D1.1. I-Media-Cities will use, collect, process and generate both types of data and metadata, but the IPR-related solutions for these two types differ greatly:

1.
_The Background of the project_

The Background is any information, in hard copy or in electronic form, including, without limitation, documents, drawings, models, designs, data, memoranda, tapes, records, and databases developed before or independently of performance under the project that is necessary for the performance of project work and the exploitation of its results. All rights and obligations connected to the Background of the project are listed in the Consortium Agreement and in D1.1, article 6.3.¹ The Coordinator, in collaboration with all partners, will ensure that the project complies with these rights and obligations. Concretely, the Consortium Agreement provides all the details on what constitutes Background for every partner and, to summarize, partners will retain ownership of the Background they contributed to the project.

2. _The Foreground of the project_

The Foreground means the results, including information, materials and knowledge, generated in a given project, whether or not they can be protected. It includes intellectual property rights, similar forms of protection and unprotected know-how. Thus, Foreground includes the tangible and intangible results of the project. Results generated outside the project scope do not constitute the Foreground. Ownership and IPR of metadata, data and technological solutions created within the project are all dealt with in D1.1² and the Consortium Agreement³, and in general the following rules apply:

* Metadata produced within the project: to be made freely available, but the IP remains with the institution(s) who produced it.
* User-generated content and metadata: IP will be regulated when users register to any I-Media-Cities service or platform.
* Technological solutions created within the project: IP and ownership remain with the partner(s) who developed it. If there are several partners who developed the technological solution, a shared ownership is granted.

# 3.
Data Register

Since I-Media-Cities generates a lot of datasets in order to achieve the project’s goals, they are classified and detailed in a Data Register to enable their proper and full management. The Data Register lists all datasets of the project and includes their volume (estimated in some cases), methodology, standards and the handling of all data during and after the end of the project. This Data Register has to be understood as a living document, which will be updated regularly during the project lifetime in combination with the updates of the Data Management Plan (see chapter 8). This chapter lists and details the possible values of the different datasets and provides some context to go along with the Data Register. Below is a preliminary list of all generated datasets of version 1 of the Data Register, complete with their names and the Work Package they are attached to.

_Data generated within all Work Packages_

* Deliverables that are reports and documents

Every Work Package has a number of deliverables attached to it, reports that are under the responsibility of specific partners. Some of these deliverables are reports or documents that need to be considered research data and are vital to understanding the research project as a whole. To this effect, the project will provide information on all documents that are relevant to the research project of I-Media-Cities in the Data Register. All these documents will be kept at the same place and the details of their data management can be found in the Data Register (Annex I).
## _Datasets generated within WP3_

* Performance data of website
* List of email subscribers for I-Media-Cities Newsletter

## _Datasets generated within WP5_

* Account details for end-users
* User-generated metadata
* User-generated data
* Software

## _Datasets generated within WP6_

* I-Media-Cities Data model
* Additional Metadata created for the I-Media-Cities Metadata Repository
* Software

## _Datasets generated within WP7_

* Metadata generated through the use of automatic analysis tools
* Software

## _Datasets generated within WP8_

* Software

_See Annex I for the I-Media-Cities Data Register, version 1._

## 3.1. Data Register Component 1: Data Summary

To keep a structured overview, the first component of every data set description in the Data Register is a Data Summary. It delivers the following information in accordance with the guidelines on FAIR data management in Horizon 2020 projects⁴:

* Data set reference and name: Identifier for the data set.
* Data set purpose: Description of the data that will be collected and generated and its purpose for the project and its objectives.
* Standards and metadata: Specify the types and formats of data generated/collected.
* Data Origin: Specify the origin of the data and whether or not existing data is being re-used.
* Data size: States the expected size of the data (when known). The expected total size of every dataset in the Data Register will be added as soon as the Coordinator and the WP leaders have a better understanding of the specifics of this dataset. As soon as the WP leaders can make an educated estimation of the expected dataset size, this information will be added as ‘expected total size’ to the dataset in the Data Register. This will be done during one of the planned updates of the Data Management Plan, following the estimation (M16, M24, M36).
Since the project applies an agile approach to platform design and development, storage is foreseen to be expandable, to be able to store any unforeseen quantity of data the project produces.

* Data utility: Specify to whom the data will be useful.

## 3.2. Data Register component 2: FAIR data-principles

In accordance with the requirements for European H2020 projects, I-Media-Cities will make sure the research data generated within the project adhere to the FAIR-principles, which means that there are clear rules on their management that will improve their findability, accessibility, interoperability and reusability. This article goes into more detail for each of the different components that help define the FAIR-principles of I-Media-Cities. It provides further information on all the principles’ components and deals with the questions that need answering in order to comply with the FAIR Data Guidelines.⁵

### 3.2.1. Making Data Findable

To make the collected and generated data of I-Media-Cities findable, every dataset is characterized by specifications on the ways they are made findable through metadata, versioning and other identification mechanisms. Next to these identification methods, every dataset also provides the information on the main digital location used for their storage. To help make the open data of I-Media-Cities discoverable, the project website includes a central catalogue of all open datasets, which includes rules and regulations for accessing these datasets. In general, the following rules and provisions apply for the different datasets:

#### a.
_Reports and other documents_ * In accordance with the mandatory properties of the DataCite metadata scheme 6 , which is a part of the DDI-standard for research metadata (version 3) 7 , all reports and documents collected or generated in I-Media-Cities has, at a minimum, the following metadata attached to it: Persistent and unique identifier (file name), Title, Keywords/tags, Author/Creator, Subject, Time of Publication, Publisher (Always I-MediaCities Consortium), Version, File size. * All metadata are added to the properties of the document itself. For deliverables, next to the metadata being added to the properties, some of the metadata will also be published on the document control and title pages. * The file name is the unique identifier I-Media-Cities uses for reports and other documents. It is constructed by the following standard identification mechanisms: _For deliverables_ : Start with IMC (Project code for I-Media-Cities), followed by the number of the deliverable (can be found in list of deliverables in the Grant Agreement), followed by the title of the deliverable or annex to the deliverable and the version of the document. All these elements are connected by dashes. Example: IMC-D7.1-Report on state of the art for moving image analysis-v. 1.3-final _For documents that are not deliverables_ : Start with IMC, followed by the type of the document (e.g. questionnaire, minutes …) and the title and the version of the document. All these elements are connected by dashes. Example: IMC-Minutes-Kick-off meeting Brussels May 23-24-version 1.2-final  Clear Versioning is provided for all documents. All deliverables will have a version number on the title page and all other documents will have a version number in the file name. The last version of a document will always carry the label ‘final’ at the end of the file name, after the version number of the document. 
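As an illustration, the deliverable naming rule above can be captured in a short helper; this is a hedged sketch for clarity, not part of the project tooling, and the function name and parameters are our own:

```python
def deliverable_filename(number: str, title: str, version: str, final: bool = False) -> str:
    """Build a deliverable file name per the I-Media-Cities convention:
    project code, deliverable number, title and version, joined by dashes,
    with the label 'final' appended to the last version of a document."""
    parts = ["IMC", number, title, f"v. {version}"]
    if final:
        parts.append("final")
    return "-".join(parts)

print(deliverable_filename("D7.1",
                           "Report on state of the art for moving image analysis",
                           "1.3", final=True))
# → IMC-D7.1-Report on state of the art for moving image analysis-v. 1.3-final
```

The same dash-joined pattern applies to non-deliverable documents, with the document type taking the place of the deliverable number.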
The final versions of public documents and deliverables will be added to the project website _www.imediacities.eu_ and to the CORDIS portal 8 . It is worth noting that information/documents/deliverables for internal use within the consortium and for the monitoring procedures of the EU will not be made publicly available, but will be available on the protected project Work Cloud, B2DROP 9 , where the project partners have unlimited access to all versions of all documents and reports. #### b. _Films, Photographs and text-files (contents) created within the project_ User-generated contents means any film, photograph or document created by partners, researchers and the general public using the I-Media-Cities services and then saved in the I-Media-Cities repository. The user-generated content does not belong to the Background of the project. All rights and obligations concerning user-generated contents are listed and detailed in the IPR-provisions connected to every single I-Media-Cities service and must be approved by users before they are able to save their data in the I-Media-Cities repository. Every film, photograph and text-file created within I-Media-Cities has a title and a source (metadata elements) attached to it when it is entered into the I-Media-Cities repository. Once entered into the repository, it receives a unique numerical identifier, dedicated to the I-Media-Cities repository. #### c. _Metadata collected, processed or created within the project_ * The following types of metadata are collected and in part generated within I-Media-Cities: * Technical Metadata: these contain information on the technical aspects of the films, photographs and documents, such as size, format, color … * Descriptive Metadata: these contain information on the subject of the films, photographs and documents, such as title, director/author, summary, keywords … * Administrative Metadata: these contain information on the rights status, creation, preservation and quality control of the films, photographs and documents, such as source/publisher, rights holder. * All metadata connected to the films, photographs and text-files (Background or user-generated) of I-Media-Cities, which are created within the project, are attached to a linked and open metadata model. Since a specific metadata model and repository are created for I-Media-Cities, the structure and standards used for this model are detailed and described in D6.1 _Content metadata subsets selection and analysis of metadata schemas and vocabularies_ (M 10). * All metadata connected to the films, photographs and documents (Background or user-generated) of I-Media-Cities can derive from four sources: * Previously existing metadata in the databases of the project partners of I-Media-Cities (these metadata are considered part of the Background, since they are collected, not created) * Automatic A/V analysis tools * Metadata generated by the software * Users (both project partners and external users) Rules of ownership and IPR management of all metadata collected, processed and generated within I-Media-Cities are subject to the regulations and provisions detailed in D1.1 Quality Control, Risk Management and IPR, already published on the project website 10 . 
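The three metadata categories above can be illustrated as a single record. This is a hypothetical example for clarity only: the field names and values are our own assumptions, not the project's actual schema, which is detailed in D6.1:

```python
# Hypothetical film record grouping the three I-Media-Cities metadata categories.
film_record = {
    "technical": {          # technical aspects of the item
        "format": "35mm",
        "duration_seconds": 312,
        "colour": "black and white",
    },
    "descriptive": {        # information on the subject of the item
        "title": "Market day in Brussels",
        "summary": "Street scenes filmed around the Grand Place.",
        "keywords": ["Brussels", "market", "city life"],
    },
    "administrative": {     # rights, provenance and preservation information
        "source": "contributing archive",
        "rights_holder": "unknown",
        "preservation_status": "digitized",
    },
}

# Every record should carry exactly these three metadata categories.
assert set(film_record) == {"technical", "descriptive", "administrative"}
```

Keeping the categories explicit in the record structure makes it straightforward to apply different ownership and IPR rules per category, as D1.1 requires.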
The approach I-Media-Cities will follow to maintain the distinction between metadata that is considered Background (already present in the partners' databases) and the metadata generated within and for the purpose of the project is based upon the definition of the generating source of the metadata. #### d. _Software and its intellectual property rights (Patents, copyrights and Trademarks)_ * All software developed in I-Media-Cities is governed by the rights and obligations attributed to the results and the software of the project, as listed and detailed in D1.1 11 and the Consortium Agreement 12 . * The rights owners of any software developed in I-Media-Cities are clearly indicated in the copyright information of the software. #### e. _Personal data_ * Personal data means any information relating to an identified or identifiable natural person. * An email address will function as the unique identifier for personal data. * Users can be asked to register an account on I-Media-Cities in order to be able to submit data into the project. When creating their account, users will also have to approve the Terms of Use, the Privacy Policy and the Disclaimer attached to this type of account. * All users can always access, edit or even cancel their own account and personal data by logging in via their email address and their personal password. * All rules and regulations connected to personal data, including informed consent for data sharing and long-term preservation, are listed and detailed in deliverable 9.1 13 on ethics and in the IPR section of deliverable 1.1 14 . ### 3.2.2. Making Data Openly Accessible Openness of data and free access is one of the key concepts of the project. As much as possible, respecting all intellectual property involved and taking into consideration the principles of the Public Sector Directive 2003/98/EC, all data will be made openly and freely accessible. 
All data concerned are: content, metadata (including user-generated) and research produced within the project. To that end, all publications, metadata and other content that are produced as a direct result of the project will be made openly available, which is known as a "gold" model. The project will also encourage researchers using its resources to make their works (or a significant section of them) available via and within the e-environment and eco-system of I-Media-Cities. In other words, research that is produced by using and accessing the content made available via I-Media-Cities will be made available via the project e-environments. For every dataset, the following specifications on its accessibility are detailed in the Data Register: a. _Is the dataset openly available or not?_ Open data is defined as follows: _**Open data is data that can be freely used, re-used and redistributed by anyone - subject only, at most, to the requirement to attribute and share-alike.**_ 15 As a general rule, the project will openly make available the contents and metadata produced within the project through a "gold" model, because the consortium strongly wants to encourage other stakeholders to use the material generated by the project, enhancing the replicability of the proposed approach and a wider fruition of the technology. Some datasets generated in I-Media-Cities will not be openly accessible, depending on the copyright protection rules and ethics rules attached to the generated datasets. For Background, the copyright status of the content and corresponding metadata is identified by and under the responsibility of the partners who contribute these datasets to the project, and listed by CRB, which is responsible for the Data Management Plan. 
For the foreground, the Management Committee will be responsible for planning and identifying the copyright rules pertaining to every dataset belonging to the foreground, and CRB is responsible for listing the copyright protection pertaining to every dataset in the Data Register. The accessibility status is listed in the Data Register and will always be made clear to users before they can start working on and with datasets. b. _How will the dataset be made available?_ An open data license will be applied to all datasets that are openly available. This license will list and determine all intellectual property rights that exist on the datasets, and is conformant with the principles set forth in the Open Definition above. If user-generated data or content needs to be anonymized in order to be able to make it openly available, I-Media-Cities will do so. Open data needs to be technically open as well as legally open. To this end, I-Media-Cities will make sure that all open data are available in bulk, in a machine-readable format or through an API, and for a reasonable price. The project will also verify which part of each dataset is available for download. The Data Register will list, for every dataset, which project partner is responsible for making it technically open. Data will be priced at no more than a reasonable cost for reproduction, or even offered as a free download from the Internet. The open data license will mention the cost of the dataset or will provide instructions on how and where information on the cost for downloading the dataset can be found. The project website will have clear instructions on how to access the different open datasets and will avoid a Data Request Methodology (DRM) as much as possible. If such a methodology is needed, the project website will also list the full, detailed instructions for this DRM. #### c. _Methods and software necessary to access the dataset_ The methods used to share data depend on a number of factors, such as the type, size, complexity and sensitivity of the dataset. For easy data availability, the project will use one of the following methods of online publication to share data: * Files provided for download via the project website * Via ftp servers * As an API * Linked metadata structure * Via 3rd-party websites If the project uses specific software to share the data, the partners will make sure to primarily use open source software. If dedicated software, which is not open source, is necessary to access the data, the software will have clear indicators on its copyright situation and rules of use, and will provide a manual for users. #### d. _The location of the deposited datasets_ All open datasets will be available through the project website. The user-generated metadata and content will be deposited in the dedicated I-Media-Cities repositories, which will be developed as part of the project. The project website will provide information on how to access these datasets and the methodology used. The repositories will be hosted at the CINECA partner location in Italy. CINECA will provide access, during the project lifetime, for users to storage resources in the amount needed by all I-Media-Cities services. After the project lifetime, this storage will be subject to contractual agreements between CINECA and the managing organization, in line with the developed Business Plan. Consequently, I-Media-Cities users will be allowed to access the iRODS data repository. Currently, two iRODS server instances are installed and running on the CINECA PICO HPC cluster 16 , intended to enable new "BigData" classes of applications related to the management and processing of large quantities of data, coming both from simulations and experiments. 
The storage area is composed of high-throughput disks for a total amount of about 4 PB, connected with a large-capacity tape library with a current total capacity of 12 PB (expandable to 16 PB). Virtual machines have also been deployed and equipped with an iRODS server development version in order to perform advanced tests on the infrastructure before going into production. #### e. _Are there any restrictions on the datasets?_ If any restrictions apply to a generated dataset, for whatever reason (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related), the Data Register indicates this restriction and provides information on the accessibility of the dataset and the request methodology users must follow to gain access. This information is also available through the project website. ### 3.2.3. Making Data Interoperable Metadata interoperability is fundamental for the project. Interoperability denotes the ability of diverse systems and organizations to work together (inter-operate). In this case, it is the ability to interoperate - or intermix - different datasets. 17 In order to make the data and metadata of I-Media-Cities interoperable with other datasets, the following specifications are added and detailed by the Coordinator for every dataset in the Data Register: * The open data license applied to the open datasets will allow for the intermixing of the dataset with other open datasets, even if these are external to I-Media-Cities. * A clear indication of which (meta)data standard was used for the dataset. The project will make ample use of open standards. In the case of A/V content, the main relevant metadata standard is the one developed following a standardization mandate from the European Commission to the European Committee for Standardization (CEN). 
CEN produced two European Standards (EN) to facilitate the interoperability of film databases: * EN 15744:2009 “Film Identification – Minimum metadata set for cinematographic works”; * EN 15907:2010 “Film Identification – Enhancing interoperability of metadata – Element sets and structures”. These standards constitute the base for the activities of the project. * In order to open up the structured information gathered in the project, Linked Open Data will be adopted so that it can be interlinked and become more useful through semantic queries. This type of data builds upon standard Web technologies, such as HTTP, RDF and URIs. * Specifically for the identification of A/V contents, the project will foster the adoption of Persistent Identifiers in the project and among the different cultural holding institutions. Among the main mature standards, the DOI 18 and Handles 19 are effective means of identifying an entity over the internet. * Specifically for the identification of names, the project will foster the adoption of the main mature standards, such as ORCID 20 , which provides a persistent digital identifier that distinguishes researchers, or ISNI 21 . * Since I-Media-Cities will create its own metadata ontologies and vocabularies, details on how they are linked to commonly used metadata standards will be provided on the project website. I-Media-Cities strives to use a linked open data model for its database. ### 3.2.4. Increase Data Reuse In order to increase the re-use of the open datasets of I-Media-Cities, the project partners will use the dissemination and communication channels to spread the word on the availability of the open datasets: * I-Media-Cities will promote its available open datasets through the project website and the other available dissemination tools, described in deliverable 3.3, the dissemination and communication plan of I-Media-Cities 22 . 
* I-Media-Cities will make every effort to communicate the availability of its open datasets to channels that directly provide information to one or more of the 6 target groups and related projects, as described in D3.3 23 . In order to encourage re-use, the special section on the project website that will deal with all the open datasets includes the following information: * A catalogue of all open datasets of the project. * All information on the IPR rules and the open data licenses used for the different datasets. As a general rule, the project will apply the licenses that permit the widest re-use possible. * All information on the methodology and location needed to access the open datasets. * A clear indication of how long the datasets will remain re-usable. * A clear contact point for any questions related to the access and re-use of the project datasets. * If there are any expected difficulties or restrictions in data sharing and re-use, the open data information page will provide this information, along with causes and possible measures to overcome them. Possible strategies to limit restrictions may include: anonymizing or aggregating data; gaining participant consent for data sharing; gaining copyright permissions; and agreeing on a limited embargo period. A key component of the project is the openness of the solution developed. Although some components might involve some IP and require protection, the project is conceived to provide a solution that is open and easy to adopt by as many institutions as possible, in as many fields as possible. Concretely, I-Media-Cities will actively seek to involve other research centers and collection-holding institutions in using the tools it will produce, both during the project lifetime and after. The system will be designed to enable more archives, cities and institutions to join the project. 
The partnership equally encourages others to employ the concept and the underlying technological solutions on completely different subject matters. In order to enhance the chance that the open datasets will be re-used, they will also be published on 3rd-party data-providing websites, such as GitHub 24 . Thanks to the open link to Europeana via the EFG content aggregator, content and data will also be made available via that channel, as much as possible and compatible with the Europeana metadata policies. This link will be drafted considering the evolution of the Europeana portal and its relation with aggregators such as EFG, which are now under reflection following Council conclusions 25 . Lastly, a key objective of the project is the ability to process queries across language barriers. This will be made possible by defining and implementing all necessary tools, which will be defined in D6.1, such as ontologies, thesauri and authority files. This part is considered strategic, since I-Media-Cities will function as a pilot and a blueprint to solve this standing roadblock to effective sharing of collections and metadata across European heritage and research institutions. Therefore, it is a strategic objective of the project that this part of the development and the activities is completely open in its concept, methods, technical solutions and results. ## 3.3. Data Register Component 3: Data Preservation and Archiving The long-term preservation plan for every I-Media-Cities dataset will also require the following information to be added to every dataset of the Data Register: * The retention period of every dataset in the dedicated (meta)data repositories. 
This period will depend on the type of data and will always be in compliance with the rules set out in D9.1 on ethics for I-Media-Cities and D1.1 on the IPR management plan for the project, but will also be influenced by the results reached in D3.1 Platform Sustainability report with business model scenarios. * The central preservation location of every dataset will be defined. This will almost always be one of the two repositories foreseen in the project system. * Clear indications about what will happen with the dataset after its planned retention period is finished. # 4\. Data Quality Assurance I-Media-Cities will apply a Data Quality Assurance methodology to avoid data contamination and incorrect, inaccurate or unrecorded data, and to generally prevent errors from entering a dataset. The Data Quality Plan will apply quality assurance rules to the data before, during and after collection. ## 4.1 Data Quality Assurance before Collection To avoid errors, I-Media-Cities will strictly define and enforce the format and metadata standards used for the different datasets. The technical solutions will be designed in a way that automatically enforces the use of the defined standards and formats. _See D6.1 for the I-Media-Cities metadata standards definition._ ## 4.2 Data Quality Assurance during Collection To ensure user-generated data quality, a specific methodology will be applied to eliminate mistakes during data entry and evaluation. This methodology was defined in D4.1 on Research Activities and Platform Content Enrichment. By designing the data and metadata repositories in a specific way before starting to use them, the project will also enhance the data quality during the collection process. 
In order to reach this goal, the following rules will be applied during the design phase: * The use of consistent terminology * Data is atomized * Data entry is guided through field value definition _See D6.1 on the Metadata Model and D6.2 on the Metadata repository_ ## 4.3 Data Quality Assurance after Collection Statistical data can generally be provided by CINECA using statistical tools such as Elasticsearch or Kibana: first producing logs containing information about database content, then parsing the log files in order to produce visual statistical representations of that information. Other specific quality assurance mechanisms, such as checksums, are not foreseen at this point in the project, as they would not only require implementations at technical partner CINECA, but would also demand a technological readiness level of the metadata-providing partners (FHI partners) which surpasses both the technological capabilities of several of these partners and the available resources of all partners for this section of the project. The Consortium is confident, however, that visualizing statistical analysis of logged information can provide a solution which is sufficient for quality assurance after storage. # 5\. Resourcing Since the design and use of the (meta)data repositories, the preparation of the project data and the application of the FAIR-principles to the datasets were already foreseen during the composition of the project proposal, no extra resources should be needed to fulfill these requirements. The relevant project partners will make sure that no extra resources are needed to ensure preparation and preservation of the datasets during the lifetime of the project. If there are any costs or charges attached to the preservation of the datasets after the lifetime of the project, these will be planned for and added to the business model scenarios that will be developed through the living lab methodology. 
The following project partners are responsible for data management during the lifetime of the project: * All FHI partners are responsible for managing the correctness and quality of the data before uploading it into the project. This includes the rights status and copyright restrictions that might apply to the content. It is the responsibility of the content provider to provide the correct information to the project database and the WP Leaders, through the Coordinator or one of the WP Leaders. Data provided by an FHI partner, which was already present in its local database and ingested into the project as Background, cannot be edited by another partner or user. Only the partner that has contributed the data can edit it. * WP Leader DIF is responsible for managing the standardization of all datasets coming from the content-providing partners to the project. This means managing the preparation before upload into the project repository in accordance with the chosen standards. Specific preparation guidelines and manuals have been created for any data-contributing party and have been added to D4.2. * WP Leader CINECA is responsible for managing the preservation and security of the data once it has been stored in the I-Media-Cities repositories. * WP Leader CRB is responsible for managing the Data Register and making the datasets compliant with its FAIR-principles. If any unforeseen or unplanned resources are needed to complete and deliver any part of the project plan, the problem resolution mechanism defined in the Risk Management Plan 26 will be initiated. This means that the relevant WP Leader will inform the Project Coordinator of the problem, and will analyze and try to manage the problem inside the related project team. In case the insufficiency of resources cannot be resolved or entails risks for the whole project, its solution is referred to the Project Coordinator and the Management Committee. 
The Project Coordinator will discuss the problem at the first Management Committee meeting following its discovery and will devise a solution proposal that must be approved by the Management Committee. # 6\. Data Security All data is stored on secured servers at CINECA. In order to ensure the continuity and uninterrupted provisioning of the IIA service (the research service at CINECA), CINECA exploits preventive, detective and corrective measures in order to reduce or eliminate the various threats/incidents that may occur. The control measures are applied at different levels of CINECA's infrastructure; the main measures adopted include: uninterruptible power supply (UPS), server room conditioning and cooling, and different types of connectivity, including firewall and VPN services. All the equipment is in a redundant configuration. HA clustering detects hardware/software faults and immediately restarts the application on another system node, making the fault transparent to system users. As a EUDAT 27 partner, CINECA will support access to the CINECA data repository, also providing the web portal with authentication mechanisms implemented by the EUDAT consortium. Technologies developed within EUDAT will be exploited and coupled with novel web tools in order to expose interfaces that allow users to browse and visualize files from the CINECA repository. Proper APIs will be included in the IMC web portal interface to activate data access using, for example, EUDAT federated identity management services (i.e. B2ACCESS) or by invoking iRODS procedures in order to connect directly to the CINECA iRODS server instance. Data access and authentication within the I-Media-Cities infrastructure will be implemented as a novel Account Service, managing system accounts through the I-Media-Cities web portal interface and mapping I-Media-Cities users to iRODS users so as to maintain the correspondence between accounts on the two systems when they pertain to the same physical person. 
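The account mapping idea behind such an Account Service can be sketched as follows; the class, its methods and the example names are illustrative assumptions for this document, not the actual implementation:

```python
# Illustrative mapping of I-Media-Cities portal accounts (identified by email,
# the unique identifier for personal data in this project) to iRODS user names,
# keeping a single correspondence per physical person.
class AccountService:
    def __init__(self):
        self._imc_to_irods = {}

    def register(self, email: str, irods_user: str) -> None:
        """Map a portal account to its iRODS user, refusing duplicates."""
        if email in self._imc_to_irods:
            raise ValueError(f"account already mapped: {email}")
        self._imc_to_irods[email] = irods_user

    def irods_user_for(self, email: str) -> str:
        """Resolve the iRODS user that corresponds to a portal account."""
        return self._imc_to_irods[email]

svc = AccountService()
svc.register("researcher@example.org", "imc_researcher01")
print(svc.irods_user_for("researcher@example.org"))  # → imc_researcher01
```

A production service would additionally handle authentication, account cancellation and the B2ACCESS federated identities mentioned above.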
In EUDAT, access policies for data deposited directly in the Common Data Infrastructure (CDI) through user-facing services (for instance B2SHARE) are determined by the depositor. While the EUDAT CDI encourages open access, depositors can currently choose to keep data private. Consequently, access policies for I-Media-Cities data stored in the CDI by a partner community will be determined by CINECA data managers as part of the community and subsequently respected across all copies maintained in the CDI. Moreover, the EUDAT2020 project will in the future develop finer-grained authorisation for data sharing, enabling "group" or "approved reader" authorisation modes. Within the CDI framework, authorisation decisions (the granting of access rights) will remain with the relevant data owners. The EUDAT approach in terms of access policies could obviously impact the I-Media-Cities Security Service architecture and development, even if no specific high-level requirements have yet been identified to determine how access should be granted to specific resources within the I-Media-Cities infrastructure. To make sure no data will be lost, a data recovery plan generally provides basic replication mechanisms conceived to preserve data from failures or in case of system takeovers. This kind of approach includes file system backup as well as database support for replication. On top of such basic and very general mechanisms, currently supported by most HPC centres, CINECA's offer includes more sophisticated, high-level mechanisms for data preservation and metadata management. The CINECA "Backup Servers" are an important control measure adopted in order to maintain a backup of the data contained in the application and database servers as well as the file system servers, for both the production and staging environments. The Advanced Data Backup mechanism enables recovery of the pertinent service from incidents/disasters that cause a complete loss of the data. 
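As a minimal illustration of the verify-after-copy idea behind such backup mechanisms (a hedged sketch, not CINECA's actual implementation), a replica can be checked against its source by comparing content digests:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's content, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, replica: Path) -> None:
    """Copy a file and verify the replica matches the source byte-for-byte."""
    shutil.copyfile(source, replica)
    if sha256_of(source) != sha256_of(replica):
        raise IOError(f"replica verification failed for {source}")
```

A real backup service adds scheduling, multiple replica sites and retention policies on top of this basic integrity check.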
CINECA also supports high-level data and metadata management tools relying upon services developed within the EUDAT consortium and offers high-level support for storage and data preservation. To this end, individual services in the EUDAT CDI provide shorter or longer periods of data preservation depending on Service Level Agreements (SLA). The B2DROP service (which could easily be integrated within the IMC infrastructure) is aimed at short-term sharing of active research data, B2SHARE is aimed at the medium-term storage and sharing of near-publication and published data, and B2SAFE is aimed at longer-term preservation and backup of community data repositories. Backup of data in the EUDAT CDI is performed at two levels: multiple replicas of data can be stored at different sites (also in different geographical locations) through the B2SAFE service, governed by user-defined policies; data may additionally be backed up at an individual site, depending on the nature of the storage systems in use at that site. I-Media-Cities will apply extra security measures when dealing with the transfer of sensitive data. If such data needs to be transferred, the project will use a file transfer protocol combined with an SSL certificate to add an extra layer of security to the transfer of files. Encrypting the data this way is one of the safest ways to make sure that sensitive data is protected during transfer. In fact, CINECA provides a file transfer mechanism based on GridFTP, which supports X.509 proxy certificates as well as SSL encryption. GridFTP is a very effective tool for data transfer: it enhances the standard ftp service, making it more reliable and faster. The transfer performance of GridFTP can easily be up to 45 times that of scp-like tools. GridFTP was developed by the Globus Alliance as part of an open-source toolkit for data management. It is a client-server application: a server is available on the CINECA HPC platforms. # 7\. Ethical Aspects of the Data Management Plan All ethics and legal issues connected to data sharing in I-Media-Cities are covered in D9.1 POD Requirement N° 4 28 and the IPR section of D1.1 29 . In this Article, we provide a summary of all rules, regulations and provisions planned to deal with any ethical challenge that might arise with data sharing in I-Media-Cities. * When dealing with long-term preservation or sharing of personal data of any kind, the project will ask the user for informed consent. This informed consent will be achieved by requesting the user to approve the Terms of Use, the Privacy Policy and the Disclaimer in order to be able to register an account and submit any kind of personal data into the platform or any of its web services. These documents will be added to every web service of I-Media-Cities and might differ from each other to fit the specific technical abilities of a given web service. The Consortium will make sure that all web services have a clearly marked and dedicated area where users can, at all times during their visit, find and read the terms and policies that apply. Templates for the Terms of Use, the Disclaimer and the Privacy Policy of I-Media-Cities have been added as annexes to this document. _See Annex II for the Terms of Use template_ _See Annex III for the Privacy Policy template_ _See Annex IV for the Disclaimer template_ * The project will make sure that all legislative standards and acts of ethics are followed when collecting, processing and sharing data in the project. * The Terms of Use and the Privacy Policy, attached to any part of the project where data is collected from users, will inform those users on how their data will be used, who it will be shared with and the details of its storage. * If data needs to be anonymized before it can be made open, the project will do so. # 8\. Updates of the Data Management Plan The Data Management Plan as well as the Data Register will be updated yearly to make sure that all information included in the plan and the Register is still valid and that specific information previously unavailable is added. This update will happen at least once a year, at M16 (after the first project review), M24 and M36, and an overview listing all updates to the Data Management Plan (including Data Register updates) will be added as an annex to the updated versions of this deliverable (with references to the chapters). The updated version of this document will be available via the results section of the project website, where it will replace the previous version of this deliverable. It is the responsibility of the Coordinator to manage the updates of the Data Management Plan and the Data Register. # 9\. Conclusion This Data Management Plan provides an overview of all aspects of collecting, processing and generating data within I-Media-Cities, as well as all the related challenges and constraints that need to be taken into consideration. I-Media-Cities has constructed a Data Register that lists all datasets collected, processed and/or generated within the project and details all relevant data management information for each of them. Every dataset has a summary that indicates, among other things, the unique identifiers of the dataset. Since I-Media-Cities is an H2020 project taking part in the Open Research Data Pilot, the Data Register includes information on how every dataset complies with the FAIR-principles of data management, as well as details on its storage and preservation. The final sections of this DMP provide information on the ways data quality is assured throughout and after the project lifetime, the security provisions that protect the datasets, the ethical rules that apply and the way resources are planned for all parts of data management throughout the project. 
A first version of the Data Register with all datasets foreseen in I-Media-Cities, containing the detailed data management information for each dataset, has been created and added as an annex to this document. Since the Data Register is a living document, it will be updated throughout the project lifetime to include previously unforeseen datasets. Templates for the Terms of Use, the Privacy Policy and the Disclaimer have been added as annexes in order to collect the informed consent of users of I-Media-Cities. An update planning for this deliverable is also provided, detailing when updates are scheduled, who is responsible for them, and where updated versions or update overviews can be found.
https://phaidra.univie.ac.at/o:1140797
1129_ENLIVEN_693989.md
# 3 Introduction

## 3.1 Project Summary

ENLIVEN (‘Encouraging Lifelong Learning for an Inclusive and Vibrant Europe’ – H2020 Young Society 2015) is a research project supported by the European Commission’s Horizon 2020 research framework (Project No. 693989). The project responds to Call YOUNG-3-2015 (‘Lifelong learning for young adults: better policies for growth and inclusion in Europe’). Its duration is 36 months and it commenced on 1st October 2016.

ENLIVEN’s overarching objective is to provide an innovative model and mechanism to support policy debate, policy formation and policy evaluation in lifelong learning, focussing on the needs of today’s young adults, and integrating theoretical and empirical perspectives from social and computer sciences. It will generate an evidence-based analysis of where, when and why policies have been effective, and develop a computer-based intelligent system to improve policy-making. The project draws on two research fields – social science and computer science – and combines expertise from both. Partner research institutions are located in nine countries (Australia, Austria, Belgium, Bulgaria, Estonia, Italy, Slovakia, Spain, and United Kingdom); some research tasks will also be undertaken under a specific arrangement with researchers at a university in Denmark.

**Objectives**

The specific objectives of ENLIVEN are to:

* Map and critically assess key elements of programmes implemented at EU, national and regional levels to support access to and participation in adult learning among excluded population groups and those at risk of social exclusion; assess how these have addressed disadvantage, inequality, and social exclusion, and helped overcome barriers to participation; and assess in what ways participation in education and training benefits the social and economic inclusion of population groups suffering from exclusion and cumulative disadvantage.
* Assess the impact of “system characteristics” (of initial and adult education, the labour market, the economy, and social protection) on aggregate participation rates (overall, and in various segments of adult education markets), and on the distribution of participation (with special reference to disadvantaged young adults and using gender-sensitive approaches).
* Assess the role of lifelong learning in developing a productive, efficient and competitive economy by investigating what learning potential and innovation ability exists within workplaces, what organisational models favour innovation and innovative training, and how effective learning actions are.
* Identify and map the nature and availability of data about adult and lifelong learning, integrate these with new research findings from across the ENLIVEN project and, using data mining, establish a knowledge base for the development of an Intelligent Decision Support System to support policy making.
* Design and implement an Intelligent Decision-making Support System (IDSS), and test how this could adapt to new knowledge and learn by restoring and updating users’ experience interactively.

## 3.2 Types of Data

The main types of data the ENLIVEN project will be handling are as follows:

* _Existing national and international datasets_ . These are both quantitative (e.g. LFS, AES, PIAAC, ESS) and qualitative (e.g. reports from Eurydice, Cedefop). All these data are either anonymised (in the case of the quantitative data) or in the public domain. 1 None contain private or personal details regarding identifiable individuals. We will ensure we utilise existing datasets wherever possible, to ensure best practice in the project.
* Existing collections of aggregated data (e.g. Eurostat’s dissemination database, UNESCO’s database on education).
* Existing accessible quantitative scientific-use microdata sets, e.g.
LFS, AES, CVTS, obtained from Eurostat
* Other quantitative scientific-use micro datasets not obtained from Eurostat (e.g. PIAAC data obtained from the OECD)
* Reports from (mainly) European agencies, containing aggregated statistical data and various forms of qualitative data, e.g. Eurydice, Cedefop
* _Fieldwork data generated by ENLIVEN researchers_ . This will come from interviews with policy actors, programme managers, and young adults in educational, community and workplace situations. Fieldwork research will be conducted in a sample of EU member states selected to represent a diversity of socio-economic characteristics and institutional environments (AT, BE, BG, DK, EE, ES, IT, SK, UK), and in Australia.
* Qualitative interviews with participants in educational programmes and with participants in organisational case studies (managers; workers in their first 10 years of employment), and their analysis
* Data collected within the participatory observation/action research phase of the case studies (in WP5), and its analysis
* Expert interviews: interviews with representatives of educational providers/policy makers in the field of adult learning (WPs 1, 2, and 3) and with business interest organisations and trade unions/social movement organisations (WP7), and their analysis

## 3.3 Organisation of the ENLIVEN project

The ENLIVEN project is organised in eleven work packages (WPs). Nine of these involve research of various kinds; one is concerned with dissemination of the research and engagement with policy and practitioner communities; one is concerned with project management and the integration of the various elements of the research. The research WPs can be considered in three clusters: each involves distinct types of empirical research. The broad relationship between these WPs is shown in Fig. 1.1.
**Figure 1.1 ENLIVEN: Organisational Structure.** (The figure shows the eleven work packages: WP1 Mapping EU & National Policies & Programmes; WP2 Constraints on & Facilitators of Participation; WP3 Role of EU Governance in Adult Learning Policy; WP4 Determinants of Participation in Adult Learning: A Macro-systemic Approach; WP5 Organisational Structure of Early Careers: HRM & Innovation; WP6 Quality of Work & Young Adults’ Motivation & Well-being; WP7 Understanding the Role of Young Workers’ Activism; WP8 Knowledge Discovery for Evidence-based Policy-making; WP9 Development of IDSS for Evidence-based Policy-making; WP10 Engagement, Dissemination & Impact; WP11 Project Management & Integration.)

# 4 Data Management

## 4.1 Principles

Data Management Plans (DMPs) are a key element of good data management. The DMP for ENLIVEN has been written with reference to the Guidelines on Data Management in Horizon 2020, which specifically ask for provisions to make data ‘FAIR’, that is, findable, accessible, interoperable and reusable. ENLIVEN will strictly adhere to the ‘Data Protection Directive’ (Directive 95/46/EC) 16 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, the Charter of Fundamental Rights of the European Union and the European Charter for Researchers, including the Code of Conduct for the Recruitment of Researchers. The European Commission has since issued a new Regulation on the protection of personal data (the General Data Protection Regulation). The Regulation entered into force on 24 May 2016 and will apply from 25 May 2018. The ENLIVEN researchers will comply with its requirements.
The DMP covers:

* What data the project will produce
* How the data will be used, managed and stored
* How the data will be made accessible for future research

Additional information on data preservation and data sharing protocols is covered in full in our _Deliverable 11.2 Ethical Procedures (Part A)_ . The Project Team will seek guidance from the University of Nottingham’s relevant Professional Services where any uncertainties occur, including Libraries, Research and Learning Resources (LRLR) and Information Services (IS).

## 4.2 Organisation

The Coordinator, Professor John Holford, and the ENLIVEN Management Board, supported by the Project Co-ordination Team at the University of Nottingham (a Senior Research Fellow, Sharon Clancy, and a Project Administrator, Ruth Elmer), are responsible for ensuring that data are handled by all Consortium Members according to the DMP. The DMP will be updated as required through the life of the project. Responsibility for this lies with the Project Co-ordinator, supported at Nottingham by the Project Administrator and the Senior Research Fellow. DMP updating will take place whenever significant changes arise, such as new data emerging, changes in consortium policies, changes in consortium composition, or external factors (e.g. new consortium members joining or old members leaving). In any case, the DMP will be reviewed on a six-monthly basis at every second Management Board meeting. It will be a standing agenda item for these meetings, to allow for discussion and updating on an ongoing basis.
## 4.3 Types of Data

Beyond research literature and reports published by various authorities and institutions, the types of data which ENLIVEN will manage include:

* Aggregated secondary statistical data (tables) obtained from national statistical providers, Eurostat or other European or transnational data providers (CEDEFOP; Eurofound; OECD; UNESCO)
* Secondary data in the form of anonymised scientific-use micro data sets, obtained mainly from Eurostat and partly from other European or transnational organisations (e.g. ESS data from the ESS ERIC; EJS data from CEDEFOP 2 ; PIAAC data from the OECD) 3
* Primary data (gathered by project researchers), including data based on qualitative non- or semi-standardised interviews, small-scale written surveys, and data gathered via participatory observation during the field work (action research phase) in WP5.

### 4.3.a Aggregated secondary statistical data

Aggregated statistical data will be obtained from the various data repositories (e.g. Eurostat’s dissemination database); data will be arranged, analysed, stored and reported in an appropriate form.

### 4.3.b Secondary data in the form of anonymised micro data sets

Micro data sets will be obtained according to the applicable procedures. Rules set by the data providers (in particular, by Eurostat) for data storage, use, dissemination and the publication of results based on the analysis will be strictly followed.

### 4.3.c Primary data collection

Within the ENLIVEN project, primary data collection will predominantly involve the collection of qualitative data.
Interviews will take place with the following categories of people (the categories are not mutually exclusive): policy makers; representatives of business interest organisations; representatives and members of trade unions and social movement organisations; participants in educational programmes; and participants in organisational case studies (managers; line managers; HRD/HRM specialists; legal employee representatives; employees in the first 10 years of their occupational careers). Particular emphasis will be given to interviews with employees in their early careers, as 3-5 young adults working for the organisations will be interviewed twice, with the goal of learning about their previous learning biographies and their current perception of learning opportunities available in the workplace. Records will take the form of interview recordings and transcripts.

Participatory observation and action research case studies will be undertaken as part of WPs 5-7. WP5 has an action research phase, in which small projects (learning projects, workshops, training) will be arranged with the members of the case study organisations. These will be tailored to the particular conditions of the organisation concerned, will run for up to 6 months, and will allow for on-site participatory observation for at least 4 working days. Records will be taken in the form of research notes made by the researchers involved in the field phase.

## 4.4 Data Use and Protection

In order to adhere to the principles of the Data Protection Directive and national legislation, and to the project’s own ethical principles, the ENLIVEN consortium will apply the standards and procedures set out below. These are designed to minimise the risk of misuse of data by third parties, and to safeguard the anonymity of research participants (in respect both of secondary use of existing datasets and of data collected by the ENLIVEN project).
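One common way to safeguard anonymity while keeping internal records linkable is keyed pseudonymisation: a secret key, held only by the partner responsible for the identity link, derives a stable code from each internal ID. The sketch below is purely illustrative and not part of the ENLIVEN procedures; the function name, the `ENL-` prefix and the key handling are all hypothetical.

```python
import hmac
import hashlib

def pseudonymise(participant_id: str, secret_key: bytes) -> str:
    """Derive a stable, non-reversible pseudonym from an internal ID.

    Without the secret key, the original ID can be neither recovered
    nor re-derived from the pseudonym.
    """
    digest = hmac.new(secret_key, participant_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "ENL-" + digest[:12]  # hypothetical prefix for readability

# The key would be held only by the partner managing the identity link.
key = b"kept-separately-by-the-responsible-partner"
p1 = pseudonymise("interviewee-042", key)
p2 = pseudonymise("interviewee-042", key)
assert p1 == p2          # deterministic: one ID always maps to one code
assert "042" not in p1   # the pseudonym reveals nothing about the source ID
```

Because the mapping is deterministic, the same participant receives the same code across interview waves, which preserves linkability without storing any identity in the shared data.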
### 4.4.a Secondary analysis of Eurostat and related scientific-use micro data files

The international quantitative micro datasets used (e.g. LFS, AES, PIAAC, ESS) are anonymised. None contain private or personal details regarding identifiable individuals. In the case of data made available by the European Statistical System, Eurostat imposes a strict framework to avoid misuse of the (already anonymised) datasets. The key emphasis is on avoiding leakage of data to third parties who might break the anonymisation by merging the datasets with other datasets (for example, register data from the Social Security system).

In WPs 2-4, mainly 4 scientific-use micro datasets provided by Eurostat will be used (LFS, AES, CVTS). When preparing the scientific-use files, Eurostat minimises the risk of disclosure of research participants’ identities by removing information and by merging information into broader categories. It also requires research organisations applying to use these micro data (in this instance, the University of Leuven) to comply with detailed standards. Data will only be shared with ENLIVEN partners who were listed in the University of Leuven’s data request. All partners will fully comply with the requirements 5 set for the use of Eurostat’s micro data files, in particular with regard to the safe storage of micro data and the safeguarding of respondents’ identities in outputs based on the micro data used. With regard to safe storage of micro data, ENLIVEN partners will:

* Ensure that they identify a senior researcher who will be responsible for storing the medium containing the confidential data in his/her office, and who will only allow access to the authorised researchers identified above.
* Ensure that research premises where the micro data are used and stored are secure.
* Ensure that only authorised researchers who have signed a declaration of confidentiality have access to the micro data, and that data are not copied in any way.
* Store Eurostat data CD-ROMs and any information provided by Eurostat for data decryption in separate locked cabinets, making it unlikely that any third party can obtain both sets of information required for using the data sets.
* Take precautions to prevent any copies of the micro data being made, and to prevent the data being transmitted via the internet or within an organisation’s network (e.g. by using a standalone computer connected neither to the internet nor to the organisation’s local network). No printer will be attached to computers used for micro-data analysis.
* Document all use of the microdata set (who has used the dataset, on which dates, and for what purposes).
* Share data only as required amongst the members of the national research team undertaking a specific piece of research; that team will be responsible for keeping the data safe and for removing identifying information. Any such data will be shared only on this anonymised basis with the wider (international) ENLIVEN project research team.
* Ensure that all data, and other confidential items, are kept securely (i.e. accessible only to relevant members of the team), and that when such data and items are shared between partners, this is done in a secure way.
* Retain all data, and all other confidential items, on a secure password-protected site at the University of Nottingham (the “R: Drive”). Only members of the team will have access to this file store; it is managed by the Project Administrator (Ruth Elmer).
* Destroy datasets (and any output based on the datasets) in accordance with contractual requirements (e.g. for safeguarding non-disclosure of individuals, households, enterprises).
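The usage-documentation requirement above (who used the dataset, on which dates, for what purposes) can be met with something as simple as an append-only log. A minimal sketch, in which the file name and field layout are hypothetical rather than any prescribed Eurostat format:

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical location of the usage log kept alongside the microdata.
LOG = Path("microdata_usage_log.csv")

def record_usage(user: str, dataset: str, purpose: str) -> None:
    """Append one usage entry: who used which dataset, when, and why."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "user", "dataset", "purpose"])
        writer.writerow([date.today().isoformat(), user, dataset, purpose])

record_usage("j.smith", "EU-LFS 2015", "WP4 pseudo-panel construction")
```

An append-only file of this kind makes the usage history auditable after the fact, which is what the documentation obligation asks for.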
### 4.4.b Interview Transcription and Data Protection

The procedures applied for the processing of qualitative data in line with the requirements of data protection and ethical principles have been developed in detail in Deliverable 11.2 and are summarised in Tables 4.1 and 4.2 in the annex. They include:

* Safeguarding the anonymity of research participants;
* Safeguarding confidential information;
* Restricting the recording of sensitive information to the minimum required for answering the research questions, and excluding from recordings any information which is either not relevant or potentially harmful;
* Ensuring fully informed consent from research participants.

Because it is difficult to guarantee that recorded interviews will not contain personally identifiable data, such recordings will be subject to the Data Protection Act (or equivalent legislation in other countries). To ensure the security of each recording and its content, interviews will be recorded on a research-institute-owned encrypted digital recorder, or on personally owned devices complying with the University of Nottingham’s policy and procedures relating to the use of mobile devices and remote working, as set out in the Information Security Policy 6 .

Transcription will be carried out where possible within the premises of the research partners. Where external specialised service providers are used, the following rules will apply. Recordings will be securely transferred to transcribers, for instance by secure upload to an approved transcriber’s website. Encryption keys will be sent to the recipient by some other means (e.g. telephone or email) and will not be sent with the encrypted recording. Similar care will be taken with the security of transfers of completed transcriptions from transcribers, and with the safe return and/or secure deletion of recordings.
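Alongside encryption, the integrity of a transferred recording can be verified by exchanging a checksum out of band, in the same way as the encryption key. A minimal sketch, with hypothetical file names and stand-in content rather than real audio:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 checksum of a file in fixed-size chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The sender records the checksum before upload; the transcriber
# recomputes it on receipt and compares the two values out of band.
recording = Path("interview_001.wav")   # hypothetical file name
recording.write_bytes(b"\x00" * 1024)   # stand-in for real audio data
checksum = sha256_of(recording)
assert len(checksum) == 64
```

A matching checksum shows the file arrived unaltered; it does not replace encryption, which remains the safeguard for confidentiality.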
Transcription agencies used will have a track record of academic transcription work and will be expected to sign undertakings confirming that the information is kept strictly confidential.

Informed consent forms will be distributed to all participants. They will be collected and stored in a locked facility, separate from the research material. Interviews will be analysed only in line with the predefined research questions, as stated in the proposal and agreed with the research participants in the informed consent process. The use of data for transcription purposes and access to anonymised data sets for future projects will also be outlined in the informed consent forms.

Confidential or sensitive data will be kept in our password-protected confidential ENLIVEN filestore, on the Research drive (“R: Drive”) of the University of Nottingham’s computer system. Only members of the team have access to this filestore. The University of Nottingham's Research Filestore is centrally managed and provided by University of Nottingham Information Services, and is subject to IS Business Continuity Planning, the IS Information Security Policy and IS Change and Release standards. The service includes failover support across two separate data centres. Full backups to tape are performed once a week, while incremental backups are performed nightly. Tapes are retained for 112 days, after which they are re-used or securely disposed of. Access to the service requires authorised University of Nottingham credentials, and is restricted by the institutional firewall.

Data storage will comply with the UK Data Protection Act, equivalent national legislation, and applicable University of Nottingham and partner institutional policies. The Nottingham policy requires that data be retained intact for a period of at least seven years from the date of any publication based upon them. Data will be stored in their original form – i.e. tapes/discs etc.
will not be deleted and reused, but kept securely.

## 4.5 Documentation

### 4.5.a Documentation Reports

Publications (e.g. books, articles in peer-reviewed and professional journals, and presentations at scientific conferences), along with engagement with policy-makers and other public communities, play a vital role in raising awareness of ENLIVEN results. To this end:

* All publications will be made available by open access.
* The project’s website, maintained by the University of Edinburgh, will have hierarchical levels for public engagement, interested stakeholders and detailed scientific information, to increase and further participation and collaboration.
* Unless required otherwise by confidentiality undertakings given to research participants, data sets (anonymised as indicated below) used for analysis will be deposited in institutional data repositories and/or through the UK Data Archive at the University of Essex and/or through national data archives. (The Co-ordinator will approach the UK Data Archive with a view to making all datasets generated by the project available through that source at the end of the project.)

### 4.5.b Project Management Documents

**All internal board, committee, work package and meeting/workshop reports will not be included in the ENLIVEN Dropbox folder, but will be stored on the University of Nottingham’s system.** Confidential items will be maintained in a password-protected confidential ENLIVEN filestore on the University of Nottingham’s computer system. This will be accessible to members of all project teams, but will otherwise be confidential until and unless it is agreed that the items should be made public.

**On the basis of this management documentation, annual reports on the project’s activities and progress will be prepared (using the quarterly Coordinator’s Reports used for Management Board Meetings). These reports will be approved by the Management Board. One report will be prepared for public use (e.g.
on the website and for the Advisory Board). However, if there is any sensitive or IPR-restricted information, a ‘private’ version will also be prepared in which this can be included.**

### 4.5.c Website

The website is hierarchical in design, with initial entry levels designed to promote the subject, the project in general, and H2020, in order to stimulate public engagement and to engage with interested stakeholders. In line with our data utility objectives, intermediate public levels will also be designed to support school children and their curricular activities. Higher open-access research levels will detail scientific and technical progress as well as comprehensive information on dissemination activities. In addition to providing links to each partner’s respective research web pages, the website will include various resources such as on-line publications, benchmark problems, discussion groups and links to other related sites.

## 4.6 Hardware and Software

### 4.6.a Intelligent Decision Support System

University of Nottingham resources and equipment (a desktop and a laptop) supported by funding from the ENLIVEN project will be utilised for the IDSS component of the project. IBM SPSS Modeler software will be used; the University of Nottingham possesses a group licence for this software. Additional software usage will be covered by open-source licences.

### 4.6.b Data Backup and Recovery

The University of Nottingham's Research Filestore (“R: Drive”) is centrally managed and provided by University of Nottingham Information Services (IS), and is subject to IS Business Continuity Planning, the IS Information Security Policy and IS Change and Release standards. The service includes failover support across two separate data centres. Full backups to tape are performed once a week, while incremental backups are performed nightly. Tapes are retained for 112 days, after which they are reused or securely disposed of.
Access to the service requires authorised University of Nottingham credentials, and is restricted by the institutional firewall.

## 4.7 Intellectual Property and Ownership

### 4.7.a Intellectual Property

The ENLIVEN partners will comply with intellectual property rights regulations and provisions in accordance with Horizon 2020 principles:

* All data generated will be centrally stored on the Co-ordinator’s central computer system (University of Nottingham), using the University’s standard archiving data control procedures, and routinely (daily) backed up onto secure areas on a central server, in line with the above statement on data backup and recovery.
* The Project Administrator (Ruth Elmer) will hold management responsibility for these curation activities.
* All Consortium members are responsible for complying with the intellectual property regulations and provisions.
* At the earliest opportunity and at appropriate times, data will be made publicly available by presentation at internal and external conferences and by publication in peer-reviewed journals. This is outlined in the ENLIVEN Dissemination Plan.
* All data presented or published will be anonymised. Immediately after publication, access to the original anonymised data sets will be freely available on appropriate request, adhering to relevant regulatory requirements and ethical approval for the use of data. A condition of such access will be subsequent acknowledgment by those applying.
### 4.7.b Joint ownership

Joint ownership is governed by Grant Agreement Article 26.2, with the following additions:

_Unless otherwise agreed: - Each of the joint owners shall be entitled to use their jointly owned Results for non-commercial research activities on a royalty-free basis, and without requiring the prior consent of the other joint owner(s), and each of the joint owners shall be entitled to otherwise Exploit the jointly owned Results and to grant non-exclusive licenses to third parties (without any right to sub-license), if the other joint owners are given: (a) At least 45 calendar days advance notice; and (b) Fair and Reasonable compensation_

## 4.8 Open Access

Data with an acknowledged long-term value will be preserved and will remain accessible and useable for future research. Data which support and validate published research will also be preserved and, as far as possible, made openly accessible to other interested researchers. All output, including academic publications, will be made available by ‘Open Access’, so that every paper will be available to readers without additional cost. Where Green-route publishing is not possible, authors will be provided with funding to publish through the Gold route.

All data presented or published will be in an anonymised form. Immediately post-publication, access to the original anonymised data sets will be made freely available on appropriate request, wherever possible through institutional data repositories. Release of such datasets will be in accordance with relevant regulatory requirements, especially on the ethical use of data. A condition of access to and reuse of the data will be subsequent acknowledgment of ENLIVEN as a source.

## 4.9 Quality Assurance

ENLIVEN has an annual Quality Assurance System reporting plan which details the project deliverables each partner is responsible for delivering, and the peer reviewers responsible for checking them.
The document also details the timescales for each deliverable, including the formal date for submission and internal timelines. The latter include dates for first drafts to be sent to the Co-ordinator by the Responsible Partner, the deadline for the Nottingham team to distribute drafts to all Partners, the deadline for feedback from all Partners, and the date for feedback from Peer Reviewers. Certain protocols and procedures are also outlined: e.g., when documents are sent to all Partners, this is for _comment only_ , mainly in respect of accuracy in relation to statements about partners’ countries or specific research input. Feedback must be given in the body of an email ( _not_ via Word ‘tracking’ on the document itself, as this creates a very high volume of work). Partners have three working days to respond with any comment; otherwise, correctness is assumed. Finally, the _peer reviewer offers detailed review/amendment_ regarding overall coherence and quality (with reference to the Quality Criteria document); these comments and changes can be made in Word tracking.

# 5 Work-Package-specific Data Issues

This section sets requirements for data management related to specific work-package tasks.

## 5.1 WP1: Mapping European and national policies and programmes, and their contribution to economic and social inclusion

This WP maps and investigates, at European, national, and subnational levels, **policies and funding schemes** to tackle disadvantage, inequality and social exclusion. It includes a **comparative study** of policies and programmes in selected European countries and Australia, and analyses the role system characteristics (policy regimes) play in participation and inequalities. The main method used is critical discourse analysis of **policy documents and funding schemes** .
Country-based analysis of selected education programmes involving **in-depth case studies** (based in part on interviews) will collect data about individuals’ employment (e.g., ex-ante and ex-post, working conditions), empowerment (e.g., self-perceptions of personal and collective agency) and active citizenship (e.g., political participation, participation in community life). Learners will be selected for interview according to programme, funding received, orientation, and provider (public/private).

## 5.2 WP2: Constraints and facilitators of access and participation

This WP **analyses** national institutional architecture and qualifications frameworks **using policy documents and key informant interviews.** Within each participating country, we will produce **a map of the institutional architecture** with a view to assessing its flexibility and inclusivity, based on policy documents and reviews, including Eurydice reports. All partners will analyse aggregated administrative and survey data, national and local policy documents, and qualifications frameworks, to illuminate the institutional architecture in their jurisdiction. Up to ten **key informant interviews** will be conducted in each jurisdiction **with policy-makers, service providers and service users** to provide **commentary** on the extent to which official accounts are consistent with the practical experience of service users and those involved in the delivery of services. **Interviews with young adults** (some of whom will be potentially vulnerable adults) will explore barriers to participation among various groups.

## 5.3 WP3: The role of European governance in adult education & learning policy

This WP **maps the main actors** contributing to adult education policy developments in Europe.
It identifies key policy actors at global, regional, national and sub-national levels via a thorough **web search.** It improves understanding of the coordinating function of European governance in adult education by identification and examination of governance mechanisms coordinated by or under the supervision of EU shared institutions, how they work, and how they have developed. It also looks at evidence of these governance mechanisms, and related EU policies, influencing public and regulatory agencies' intervention in adult education markets at national and sub-national levels. It also **analyses the development of taxonomies and indicators in European adult education** and their **use of cross-national survey data (e.g. PIAAC)** to inform policy. Finally, it examines how taxonomies and indicators have developed in European adult education, and interrogates how PIAAC connects with governance mechanisms under the supervision of EU shared institutions.

## 5.4 WP4: Improving our understanding of the effect of system characteristics by building stronger data and adding a longitudinal, regional & sectoral focus

This WP constructs a **pseudo-panel data set on lifelong learning participation**, starting from **the time series of the Labour Force Survey (LFS)**, in order to investigate how lifelong learning participation and system characteristics have developed over time. LFS also permits breaking up of the sample by NUTS region (Nomenclature of Territorial Units for Statistics) and by employment sector. In order to develop **richer contextual information**, the LFS-based pseudo-panel data set will be enriched with **more detailed information from other datasets** (e.g., AES and PIAAC). To facilitate longitudinal, regional and sectoral analysis, indicator time series will be constructed to be linked to the panel data set.
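The pseudo-panel construction described above, averaging repeated LFS cross-sections within fixed cohort-by-region cells and then linking indicator time series on the cell keys, can be sketched roughly as follows. This is an illustrative assumption of the workflow, with toy data and invented column names, not the project's actual code:

```python
import pandas as pd

# Toy stand-in for pooled LFS cross-sections; real variable names will differ.
lfs = pd.DataFrame({
    "year":         [2010, 2010, 2011, 2011, 2011, 2012],
    "birth_cohort": ["1980-84", "1985-89", "1980-84", "1985-89", "1980-84", "1985-89"],
    "nuts_region":  ["AT12", "AT12", "AT12", "AT12", "AT12", "AT12"],
    "participated": [1, 0, 0, 1, 1, 1],   # lifelong-learning participation (0/1)
})

# Collapse individuals into cohort-by-region-by-year cells: the pseudo-panel
# unit is the cell mean, so no individual micro record survives aggregation.
panel = (lfs.groupby(["birth_cohort", "nuts_region", "year"])["participated"]
            .agg(participation_rate="mean", n_obs="size")
            .reset_index())

# Contextual indicator time series (e.g. derived from AES/PIAAC) are linked
# on the cell keys rather than merged at the micro level.
indicators = pd.DataFrame({
    "nuts_region": ["AT12", "AT12", "AT12"],
    "year": [2010, 2011, 2012],
    "training_spend_index": [98.0, 101.5, 103.2],
})
panel = panel.merge(indicators, on=["nuts_region", "year"], how="left")
print(panel)
```

Linking indicators to aggregated cells, rather than merging micro data sets directly, keeps individual units unidentifiable in the resulting panel.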
New indicators will be developed for characteristics that have not yet been fully covered, such as indicators of the demand for general, sector-specific and job-specific skills. However, micro data sets will not be merged, so as to avoid any risk of compromising the anonymity of the individual units represented.

## 5.5 WPs 5–7: Studying the role of workplace learning and patterns of work organisations for early career structuration; qualitative interviews on learning biographies

WPs 5 and 6 utilise **in-depth organisational case studies**. Sixteen in-depth organisational case studies will be undertaken, drawing on 64 interviews with managers, line managers, HRD experts and legal representatives of employees; a further 64 in-depth interviews will be conducted with adults in the first 10 years of their occupational career. Each organisational case study will include two interviews with each of four managers, HRM/HRD professionals and line managers and at least one interview with an employee interest representative. During an **action research phase**, small 'learning projects' will be arranged with the members of the case study organisation, tailored to the particular organisation, and will run for up to 6 months. In WP6, some potentially **sensitive data** will be collected (for example, **personal reports on observed conflicts in the workplace**); the young adults (aged 18-35) will be invited to share important aspects of their learning biographies, focusing on job-related non-formal and informal learning in the workplace. In WP7, beyond a broad review of the relevant literature, **case studies** will be implemented in three countries (ES, AT, SK) on three initiatives advocating better employment conditions during early career stages, including young employees among their activists.
Case studies will be based on two expert interviews (one with an official speaker for the initiatives, one with a representative of a corresponding employer interest organisation) and one interview with a young activist. Where feasible and not running against the outlined ethical principles, the activist interviewed will be employed by the organisations studied in WP5 and 6. For the interviews with the young activists, particular safeguards will be taken to shield them from any potential harm resulting from their participation in the research. Further detail on data management in these work packages is provided in the Table at 4.3 below.

## 5.6 WP8: Knowledge discovery on evidence-based policy making in participating countries; & WP9: Establishment of Intelligent Decision Support System for evidence-based policy making

This work package involves knowledge discovery based on data available from international organisations (e.g., European Commission, OECD, UNESCO) and other ENLIVEN research findings to establish a **knowledge-based methodology** (Case Based Reasoning: CBR) designed to facilitate analytical insights, informed governing of 'policy problems', and modelling of policy making on ALE/LLL programmes in the EU. A knowledge-based system requires knowledge acquisition, which will begin with an in-depth analysis, working with other WPs, of **existing data** (e.g. ESS, LFS/SILC, Eurydice, Cedefop and Eurostat) and case studies. Based on the analysis, data mining techniques will identify case representation; investigations will then be conducted based on case representation and training in the case base to identify similarity measure models. These different stages are conducted iteratively, until the performance of the CBR system provides satisfactory evaluation metrics (accuracy and user feedback, etc.). Establishing an IDSS is an iterative process. Findings from WPs1-7 will contribute to **building common models** of policy making, reflected in the IDSS procedure.
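The CBR cycle described above rests on a case representation plus a similarity measure used to retrieve the most similar stored cases. A minimal retrieval sketch follows; the features, weights and outcome labels are hypothetical placeholders, not the project's actual case representation or similarity model:

```python
# Each case pairs a policy-programme description with an observed outcome.
# Feature names, values and weights are purely illustrative.
CASES = [
    ({"participation_rate": 0.45, "funding_per_capita": 120.0, "targeted": 1}, "high uptake"),
    ({"participation_rate": 0.20, "funding_per_capita": 40.0,  "targeted": 0}, "low uptake"),
    ({"participation_rate": 0.30, "funding_per_capita": 80.0,  "targeted": 1}, "medium uptake"),
]

WEIGHTS = {"participation_rate": 0.5, "funding_per_capita": 0.3, "targeted": 0.2}
RANGES  = {"participation_rate": 1.0, "funding_per_capita": 200.0, "targeted": 1.0}

def similarity(query, case):
    """Weighted global similarity: 1 minus range-normalised distance per feature."""
    return sum(
        w * (1.0 - abs(query[f] - case[f]) / RANGES[f])
        for f, w in WEIGHTS.items()
    )

def retrieve(query, cases=CASES):
    """Return the stored case most similar to the query (the 'retrieve' step of CBR)."""
    return max(cases, key=lambda c: similarity(query, c[0]))

best = retrieve({"participation_rate": 0.42, "funding_per_capita": 110.0, "targeted": 1})
print(best[1])  # outcome of the nearest stored case
```

In the iterative process the document describes, the weights and feature set would be revised until retrieval accuracy and user feedback are satisfactory.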
## 5.7 WP10-11: Dissemination and Project Management & Integration

A **data repository**, comprising anonymised data gathered during the project, and main findings, will be maintained for further use. This will include transcripts of interviews as well as data collected for case studies. On publication of a related output, or on conclusion of the project, these data will be transferred from the University of Nottingham's Research Drive to the University's Research Data Management Repository (_https://rdmc.nottingham.ac.uk/_) and/or to the UK Data Archive (_http://www.data-archive.ac.uk/home_). Where feasible, partners will also deposit their data in institutional or national data repositories. This will be done in strict accordance with European and individual country Data Protection legislation, and in the light of the standards and guidance developed by the Consortium of European Social Science Data Archives (CESSDA: _https://cessda.net/_). This will include ensuring that permission has been given by individuals and organisations that contribute data in any form not already public. All stored data will be thoroughly anonymised (except in the case of expert interviewees who explicitly decide not to be anonymous). The data will be citable and will be given a Digital Object Identifier code (DOI) within the data repository, which will be guaranteed for 7 years, in line with Table 2, below.
# 6 Appendices

## 6.1 Table 1: Key elements of the framework to ensure anonymization within the ENLIVEN research process (for storage/use within the project): (to be refined within the research project)

<table> <tr> <th> **Level** </th> <th> **Item** </th> <th> **Description** </th> <th> **When** </th> </tr> <tr> <td> 1 </td> <td> Removing names/using replacements </td> <td> Family name (removed); forename (replaced by a name typical for the gender, generation and the community and **not** the name of another person in the sample surveyed); replacing the name of the employing organisation; replacing the name of the geographic entity (town, etc.) </td> <td> Immediately, when transcriptions are made. </td> </tr> <tr> <td> 2 </td> <td> Removing indirect identifiers </td> <td> Transforming information on educational attainment, age, migrant background of a particular kind, household characteristics into categories (e.g. age 25 into a 25-29 age bracket). </td> <td> When including transcripts in the R: Drive. </td> </tr> <tr> <td> 3 </td> <td> Removing information allowing indirect identification </td> <td> Removing information which allows the identification of a person by her/his particularities, habits etc. for an observer familiar with the particular context. </td> <td> When choosing information for publication (biographical vignettes, quotations); before preparing interview transcripts for archiving (if any). </td> </tr> <tr> <td> 4 </td> <td> Alienation and strategic replacement of important information </td> <td> Replacing important ("telling") details which potentially include a risk of de-anonymization by related details capable of communicating comparable information. </td> <td> When presenting information in biographical vignettes.
</td> </tr> </table>

## 6.2 Table 2: Processing of data in the qualitative research implemented by the ENLIVEN project

The columns of Table 2 distinguish three strands of data collection: (1) expert interviews and interviews with representatives of business interest organisations and trade unions/social movement organisations; (2) qualitative interviews with participants in educational programmes and with participants in organisational case studies; (3) data collection within the participatory observation/action research phase of the case studies.

<table> <tr> <th> **Informed consent** </th> <th> Forms on informed consent will be collected and stored on the R: Drive separate from other research material. </th> <th> Forms on informed consent will be collected and stored on the R: Drive separate from other research material. </th> <th> Informed consent forms for participants in the action research phase will be collected and stored on the R: Drive separate from other research material. </th> </tr> <tr> <td> **Collection** </td> <td> Face-to-face interviews or phone interviews; on-site interviews in surveyed organisations </td> <td> Face-to-face interviews (all); on-site in case study enterprises (managers, line-managers); neutral environment at choice of the interviewee (all others) </td> <td> Participatory observation/participation in the action research module </td> </tr> <tr> <td> **Recording** </td> <td> Personal identifiers (name, age, function, related information): paper and pencil; experts are free to decide whether their names/the names of their organisations should be reported or should be anonymised; main interview: digital voice recording (a numeric code links the voice recording to the personal identifier) </td> <td> Personal identifiers (name, age, function, related information): paper and pencil; main interview: digital voice recording (a numeric code links the voice recording to the personal identifier) </td> <td> Research Diary for on-site participant observation – paper and pencil; all personal identifiers will immediately be
replaced by codes representing particular research participants (or omitted, where not relevant for the research activity) </td> </tr> <tr> <td> **Organisation and storage** </td> <td> When participants choose anonymity or are regarded as vulnerable: personal identifiers (single paper copy); information will be stored in a locked compartment. Information on personal identifiers will be collected and stored electronically only in an anonymised version, attributing pseudonyms, using broader categories (e.g. age); two digital copies of the main interview will be stored in the R: Drive separate from other research material; copies will be stored in a locked compartment separate from the papers with the personal identifiers; interviews will be transcribed (verbatim transcription); passages where personal identifiers of the interviewee or related persons (e.g. name of a co-worker) are revealed will not be transcribed, but will be immediately replaced in a way safeguarding anonymity (e.g. "the interviewee"; "an interviewee's co-worker"); individual transcripts will be further anonymised by the responsible researcher prior to compiling the transcripts within the software package used. Transcripts will be held only within the originating partner research organisation and the University of Nottingham R: Drive. </td> <td> When participants choose anonymity or are regarded as vulnerable: personal identifiers (single paper copy); information must be stored in a locked compartment. Information on personal identifiers will be collected and stored electronically only in an anonymised version, attributing pseudonyms and using broader categories (e.g. age); two digital copies of the main interview will be stored in the R: Drive separate from other research material; copies will be stored in a locked compartment, separate from the papers with personal identifiers; interviews will be transcribed (verbatim); passages where personal identifiers of the interviewee or related persons (e.g. name of a co-worker) are revealed will not be transcribed, but will be immediately replaced in a way safeguarding anonymity (e.g. "the interviewee"; "an interviewee's co-worker"); individual transcripts will be further anonymised by the responsible researcher prior to compiling the interview transcripts within the software package used; only anonymised interviews will be saved in the R: Drive. Transcripts will be held only within the originating partner research organisation and the University of Nottingham R: Drive. </td> <td> Notes must be stored in a locked room and a locked compartment. </td> </tr> <tr> <td> **Adaption and alteration** </td> <td> When participants choose anonymity or are regarded as vulnerable: personal identifiers will be anonymised; verbatim transcripts will be anonymised in a two-step procedure (immediate replacement of identifiers during transcription; further removal of any information helping to identify the interviewee prior to compilation of interviews); a summary of each interview will be provided [10,000 signs] and translated into English; for the summary, any further information which risks disclosure of respondent identity will be removed/replaced by more general information; at this stage, any sensitive information (e.g. on health condition, on sexual orientation etc.) not required for the research process will also either be removed or replaced by a more neutral statement (e.g. a description of a concrete case of misconduct will be replaced by a statement that a case of misconduct has been reported). </td> <td> Personal identifiers will be anonymised; verbatim transcripts will be anonymised in a two-step procedure (immediate replacement of identifiers during transcription; further removal of any information helping to identify the interviewee prior to the compilation of interviews); a summary of each interview will be provided [10,000 signs] and translated into English; for the summary, any further information which risks disclosure of the respondent's identity will be removed/replaced by more general information; at this stage, any sensitive information (e.g. on health condition, on sexual orientation etc.) not required for the research process will also either be removed or replaced by a more neutral statement (e.g. a description of a concrete case of misconduct will be replaced by a statement that a case of misconduct has been reported). </td> <td> Researchers will produce a detailed summary of their notes as an electronic document for each day of observation, removing all personal identifiers; potentially harmful or sensitive information not relevant for the project must not be included in the summaries. </td> </tr> <tr> <td> **Retrieval and consultation** </td> <td> Interview transcripts (with the alterations required for anonymisation) will be stored [in original language] within the software used for qualitative text analysis, separately in each partner's organisation (and saved on the University of Nottingham R: Drive); otherwise, within the partnership, only translated interview summaries will be exchanged and compiled in one file (this will be the base for cross-country comparative analysis). </td> <td> Interview transcripts (with the alterations required for anonymisation) will be stored [in original language] within the software used for qualitative text analysis, separately in each partner's organisation (and saved on the University of Nottingham R: Drive); otherwise, within the partnership, only translated interview summaries will be exchanged and compiled in one file (this will be the base for cross-country comparative analysis). </td> <td> (Anonymised) summaries of on-site visits/participatory observation will be shared within the local research team. </td> </tr> <tr> <td> **Use** </td> <td> Interviews will be analysed only in line with predefined research questions, as stated in the proposal and agreed with the research participants in the informed consent process. </td> <td> Interviews will be analysed only in line with predefined research questions, as stated in the proposal and agreed with the research participants in the informed consent process. </td> <td> Interviews will be analysed only in line with predefined research questions, as stated in the proposal and agreed with the research participants in the informed consent process. </td> </tr> <tr> <td> **Dissemination** </td> <td> Original language interview transcripts will not be disseminated but will remain with the team of the research organisation responsible for the research. Fully anonymised transcripts will be saved on the University of Nottingham R: Drive. They will also be made available, along with summaries (in English, and from which all information potentially disclosing the participants' identities is removed), at the end of the project on a suitable data archive. </td> <td> Original language interview transcripts will not be disseminated, but will remain with the team of the research organisation responsible for the research. Fully anonymised transcripts will be saved on the University of Nottingham R: Drive. They will also be made available, along with summaries (in English, and from which all information potentially disclosing the participants' identities is removed), at the end of the project on a suitable data archive. </td> <td> Research notes will not be disseminated, but will remain with the team of the research organisation responsible. </td> </tr> <tr> <td> **Alignment and combination** </td> <td> Expert interviews will be analysed in combination; however, no further risks to the confidentiality of information are expected. </td> <td> Interview transcripts of respondents from each organisation will be analysed in combination; special attention will be paid to the risk of disclosure of respondents' identities when bringing the information from the various interviews and information from the participatory observations together; information which carries the risk of disclosure will be omitted. </td> <td> Summaries of research notes will be combined for each organisation studied and merged with the transcripts of the interviews; special attention will be paid to the risk of disclosing respondents' identities when material is brought together; information which carries the risk of disclosure will be omitted. </td> </tr> <tr> <td> **Deletion/destruction** </td> <td> Data will be retained intact for a period of at least seven years from the date of any publication which is based upon them. Data will be stored in their original form – i.e. tapes/discs etc. will not be deleted and reused, but kept securely. Except where individually agreed by the research participant, all records which could lead to identification of individual persons will be destroyed at the end of the project (on acceptance of the final report by the Commission). </td> <td> Data will be retained intact for a period of at least seven years from the date of any publication which is based upon them. Data will be stored in their original form – i.e. tapes/discs etc. will not be deleted and reused, but kept securely. Except where individually agreed by the research participant, all records which could lead to identification of individual persons will be destroyed at the end of the project (on acceptance of the final report by the Commission). </td> <td> Personal field notes (paper and pencil) will be destroyed at the end of the project (on acceptance of the final report). </td> </tr> </table>
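The first two anonymisation levels of Table 1, name replacement and the bracketing of indirect identifiers such as age, are mechanical enough to sketch in code. The names, replacement values and bracket width below are illustrative assumptions; in practice, replacement names are chosen by the researcher to fit gender, generation and community:

```python
import re

# Illustrative mapping from real names to typical-but-unrelated replacements (Level 1).
PSEUDONYMS = {"Anna Maier": "Eva", "Novatech GmbH": "a medium-sized engineering firm"}

def replace_names(text, pseudonyms=PSEUDONYMS):
    """Level 1: remove/replace direct identifiers at transcription time."""
    for real, fake in pseudonyms.items():
        text = text.replace(real, fake)
    return text

def bracket_age(age, width=5, lower=15):
    """Level 2: turn an exact age into a band, e.g. 25 -> '25-29'."""
    start = lower + ((age - lower) // width) * width
    return f"{start}-{start + width - 1}"

transcript = "Anna Maier, 25, has worked at Novatech GmbH for three years."
anonymised = replace_names(transcript)
anonymised = re.sub(r"\b25\b", bracket_age(25), anonymised)
print(anonymised)
# -> Eva, 25-29, has worked at a medium-sized engineering firm for three years.
```

Levels 3 and 4 (contextual de-identification and strategic replacement of "telling" details) require human judgment and cannot be automated in this way.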
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1130_E2District_696009.md
# EXECUTIVE SUMMARY

The European Commission (EC) is enabling access to, and reuse of, research data generated by Horizon 2020 projects through the Open Research Data Pilot (ORD Pilot) as part of its ambition to make research data and publications openly available with a view to accelerating scientific progress, aiding the validation of project outcomes, and making scientific research more transparent in general. As the E2District project is participating in the ORD Pilot, project partners are required to deliver a Data Management Plan (DMP). The DMP is not a fixed document and will, therefore, evolve throughout the E2District project according to the progress of project activities. Subsequently, a mid-term DMP will be required and a final DMP when the project reaches completion. Therefore, this deliverable provides the first version of the E2District Data Management Plan for the datasets captured or processed inside the project, according to the guidelines published by the EC. The purpose of the DMP is to identify how data collected or generated by the E2District project will be organised, stored and shared and to specify what type of data will be made publicly available by the project (open access) and how. Suitable public repositories to store the data will also be identified and, where necessary, tools that help others to use the data will be provided. This report has been prepared by taking into consideration the template of the "Guidelines on Data Management in Horizon 2020", 1 guidance via DMP Online 2 and OpenAIRE/EUDAT webinar and presentations 2 .

# 1 Data Management Plan (DMP)

## 1.1 AIMS AND OBJECTIVES OF THE DATA MANAGEMENT PLAN

The purpose of this deliverable is to describe the data management life cycle for all datasets that are being collected, processed and generated by the E2District research project and the specific conditions that are attached to them.
The report outlines, as far as it is possible to do so at this stage, how research data will be handled during and after the project, what precise data will be collected, processed or generated, what methodology and standards will be applied, whether data will be shared/made open access and how it will be shared, and how the data will be curated and preserved.

## 1.2 INTENDED AUDIENCE

The E2District consortium partners are the primary audience for the DMP. The report aims to establish clear practices in relation to data management between the consortium's five partner organisations. The second audience for this report comprises the E2District project dissemination target groups as identified by the E2District Dissemination and Communication Plan (D6.1). This group includes DHC managers, DHC operators, subscribers, end-users, DHC owners, local authorities, designers, technical providers, investors and the community of researchers involved in related projects/initiatives. As a participant of the Horizon 2020 Open Data pilot, the project is committed to Open Access Publishing and is prioritising publication venues and promoting Open Access to its publications where possible. Where feasible, the project will make openly available, through open access repositories, baseline data from the demo sites, statistics and measurements from experiments, business models and key stakeholder surveys and questionnaires. In conjunction with its espousal of FAIR data practice, the DMP's establishment of consistent data practices will increase the efficiency of data handling throughout the lifespan of the project. Thus, the data will reach more people, have a greater impact, avoid duplication of efforts and be preserved for future researchers.
## 1.3 UPDATING THE DATA MANAGEMENT PLAN

This is the initial DMP, which will be updated throughout the project cycle whenever significant changes arise in the project, such as (i) new datasets, (ii) changes in consortium policies and/or (iii) external factors. A mid-term DMP will be released in M18 which will address a number of questions suggested in the Horizon 2020 guidelines 3 (EC DG R&I, 2015):

1. Discoverable: Are the data and associated software produced and/or used in the project discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier)?
2. Accessible: Are the data and associated software produced and/or used in the project accessible and in what modalities, scope, licenses (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.)?
3. Assessable and intelligible: Are the data and associated software produced and/or used in the project assessable for and intelligible to third parties in contexts such as scientific scrutiny and peer review (e.g. are the minimal datasets handled together with scientific papers for the purpose of peer review; are data provided in a way that judgments can be made about their reliability and the competence of those who created them)?
4. Usable beyond the original purpose for which they were collected: Are the data and associated software produced and/or used in the project useable by third parties even a long time after the collection of the data (e.g. is the data safely stored in certified repositories for long-term preservation and curation; is it stored together with the minimum software, metadata and documentation to make it useful; is the data useful for the wider public needs and usable for the likely purposes of non-specialists)?
5.
Interoperable to specific quality standards: Are the data and associated software produced and/or used in the project interoperable, allowing data exchange between researchers, institutions, organisations, countries, etc. (e.g. adhering to standards for data annotation, data exchange, compliant with available software applications, and allowing recombinations with different datasets from different origins)?

The mid-term DMP will be followed by a final report at the end of the project.

## 1.4 PURPOSE OF DATA COLLECTION/GENERATION & RELATION TO PROJECT OBJECTIVES

The main objective of the E2District project is to develop, deploy, validate, and demonstrate a novel cloud enabled District Management and Decision Support framework for DHC systems, which will deliver compound energy cost savings of 30%. A diverse range of data will be collected and generated by the E2District project for the purpose of achieving all of the project's objectives. All of E2District's work packages except WP1 and WP7 are dependent, to varying degrees, on data collection and generation:

* WP1 will specify, gather and document requirements and use-cases for the E2District framework that capture realistic expectations and required features from the stakeholder point of view. These requirements will be used in WP2, WP3, WP4 and WP6 to develop the individual E2District technologies, platform architecture and business models. There is no dataset currently defined for WP1 as any related data will be defined and collated in WP2, WP3, WP4 and WP6.
* WP2 will address SMART Objective 1 4 by developing and validating a District Simulation Platform that consists of physical and numerical simulation models of production and demand assets, which will be used as an Asset Portfolio Decision Support tool.
WP2 will require the collection and generation of a variety of data including semantic information files that will be used to ensure interoperability between simulation platforms (District Simulation Platform, Supervisory Controllers and Production Scheduling Optimiser), District Simulation Platform parametric study results and District Information Files for demonstration sites comprising data relating to building geometry, thermal properties, HVAC systems, energy sources, heating/cooling schedules, lighting, district heating production, storage, occupancy profiles and weather data. Data relating to the Veolia demonstration sites, including any DHN historical monitoring data, any data concerning the DHN design and equipment, any DHN operation rules and practices and any data generated through the tools developed in the project (e.g. optimisation tools), will not be available for use in publications.

* WP3 will focus on the development and validation of all the key control, optimisation, diagnostics and prosumer engagement algorithms and modules for reducing the energy consumption of a DHC system based on the historical data and real measurements, real prices and flexible assets, and for influencing the demand to be more efficient at the user level. Hence, it directly addresses SMART Objectives 2 & 3. As such, WP3 requires the collection and generation of a variety of data including DSP data, BMS data, weather data, Matlab data and behavioural model calibration survey data. As with data collected and generated in relation to Veolia demonstration sites for WP2, all data collected and generated for WP3 relating to Veolia sites will not be used in publications.
* WP4 will address SMART Objective 4 5 by developing and validating a scalable District Operation System that will integrate all key control, optimisation, diagnostics and prosumer engagement modules into a cloud-enabled DHC management platform.
A variety of sensor data relating to the CIT testbed site will be collected and generated during the execution of this work package.

* WP5 will focus on the integration and deployment of the developed district simulation platform and operation system (and respective modules) to the E2District demonstration site. It validates and analyses the energy savings achieved from the demonstration, and feeds the experience and lessons learned to WP6 to develop business models and replication studies. Thus, it addresses the deployment requirements of SMART Objectives 2 6 , 3 7 and 4. Fulfilment of WP5 requires the collection and generation of a variety of data including baseline data, KPIs, electricity consumption data, gas consumption data, heat data, building/areas set point data, and monitoring data.
* WP6 will, first, address SMART Objective 5 8 by developing new business models and services for the operators, designers and integrators of DHC systems and will provide studies and guidelines for the replication of the E2District technology. Secondly, it will develop dissemination, exploitation and awareness-raising, based on a global dissemination approach (dissemination targets, channels and instruments), to openly discuss, validate, and disseminate the results to the wider stakeholder and scientific community. WP6 requires the collection and generation of a variety of data including DHN historical monitoring data, data concerning the DHN design and equipment and data generated through the tools developed in the project (e.g. economic evaluation tools). Data collected and generated in WP6 relating to Veolia sites will not be used in publications.

## 1.5 DATASET DESCRIPTION

The following datasets have been identified by the E2District project partners. This list may be adapted in future versions of the DMP as the project develops.
<table> <tr> <th> # </th> <th> Dataset (DS) name </th> <th> Responsible Partner </th> <th> Related WP </th> </tr> <tr> <td> 1 </td> <td> VeoliaSites_Data </td> <td> VERI </td> <td> WP2-T2.3/WP3-T3.2/WP3-T3.5 </td> </tr> <tr> <td> 2 </td> <td> VeoliaSites-BusinessModels_Data </td> <td> VERI </td> <td> WP6-T6.3 </td> </tr> <tr> <td> 3 </td> <td> Data Templates for Simulation Interoperability </td> <td> CSTB </td> <td> WP2-T2.1 </td> </tr> <tr> <td> 4 </td> <td> District Simulation Platform Parametric Study Results </td> <td> CSTB </td> <td> WP2-T2.2 & WP3 </td> </tr> <tr> <td> 5 </td> <td> District Information Files for Demonstration Sites </td> <td> CSTB & ACC </td> <td> WP2-T2.3 </td> </tr> <tr> <td> 6 </td> <td> Behavioural Model Calibration Survey Data </td> <td> CIT </td> <td> WP3-T3.4 </td> </tr> <tr> <td> 7 </td> <td> Supervisory Control and Production Scheduling Optimisation Simulation-based Evaluation Data </td> <td> UTRC </td> <td> WP3-T3.5 </td> </tr> <tr> <td> 8 </td> <td> Simulation Data and Use-cases of the Existing District System (baseline simulation) </td> <td> UTRC </td> <td> WP3-T3.5 </td> </tr> <tr> <td> 9 </td> <td> CIT Sensor Data </td> <td> CIT </td> <td> WP4 </td> </tr> <tr> <td> 10 </td> <td> Acciona Baseline and Performance Evaluation </td> <td> ACC </td> <td> WP5 </td> </tr> </table>

## 1.6 ETHICS

The E2District partners will comply with the ethical principles as set out in Article 34 of the Grant Agreement, which asserts that all project activities must be carried out in compliance with:

1. Ethical principles (including the highest standards of research integrity – as set out, for instance, in the European Code of Conduct for Research Integrity 9 – and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct)
2. Applicable international, EU and national law.
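The dataset inventory in Section 1.5 can also be kept as a simple machine-readable register, which later versions of the DMP could extend with access and sharing attributes. A minimal sketch (entries transcribed from the table above; the `Dataset` structure itself is an assumption, and only three of the ten entries are shown):

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    ref: int
    name: str
    responsible_partner: str
    work_package: str

# Entries transcribed from the Section 1.5 table.
REGISTER = [
    Dataset(1, "VeoliaSites_Data", "VERI", "WP2-T2.3/WP3-T3.2/WP3-T3.5"),
    Dataset(3, "Data Templates for Simulation Interoperability", "CSTB", "WP2-T2.1"),
    Dataset(9, "CIT Sensor Data", "CIT", "WP4"),
    # ... remaining entries as in the table
]

def by_partner(partner, register=REGISTER):
    """Look up all datasets a given partner is responsible for."""
    return [d.name for d in register if d.responsible_partner == partner]

print(by_partner("CSTB"))
```

A register of this form makes it straightforward to audit, per partner, which datasets still need sharing decisions in the mid-term DMP.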
# 2 DATA SHARING

## 2.1 BACKGROUND DATA

With regard to Background data, the E2District Consortium Agreement states as follows: ‘According to the Grant Agreement (Article 24) Background is defined as “data, know-how or information (…) that is needed to implement the action or exploit the results”.’ Because of this need, Access Rights have to be granted in principle, but the parties must identify and agree amongst themselves on the Background for the project.

#### 2.1.1 PARTY 1: (CSTB)

As to CSTB, it is agreed between the parties that, to the best of their knowledge, the following Background is hereby identified and agreed upon for the Project. Specific limitations and/or conditions shall be as mentioned hereunder:

<table>
<tr> <th> Describe Background </th> <th> Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement) </th> <th> Specific limitations and/or conditions for exploitation (Article 25.3 Grant Agreement) </th> </tr>
<tr> <td> District Simulation Platform (DIMOSIM, v2.03). This platform includes several modules: a graphical district editor (to create or edit district configuration files); a citygml import function for importing existing data on districts; a simulation kernel; import and export functions from and to Excel (load profiles, results, …); a global performance analysis (energy, environment and costs); a classification module for comparing and classifying different energy concepts and parameters on an existing or new district; and a data exchange module for coupling any district controller or optimiser. Functions of DIMOSIM, v2.03: import of all building parameters and generation of building models (building parameters: floor, window and wall area, orientation, etc.; building system parameters: consumer, prosumer, local storage or production, etc.); import of all district parameters and generation of the district model (hydronic network parameters: connections and distances; electric network parameters: connections and distances); import of Energy HUB parameters and generation of the HUB model (central thermal and electrical production and storage: configuration and sizing); generation of building electrical load profiles using a tool available at CSTB, based on statistical data (each building is divided into sublevels, e.g. apartments, for which a load profile is generated; a general load profile for each building is generated from the sum of all sublevels, and these load profiles are then connected to the electrical grid model); sizing of the thermal system and network (based on the nominal heat demand of each building, the tool sizes all network connections automatically, based on expert rules, using a database of district heating pipes and insulation; the Energy HUB is also sized automatically based on the heat load of the district heating network); and sizing of the electric grid (all grid connections are sized automatically, based on expert rules). </td> <td> CSTB grants a free license of use to all E2DISTRICT partners for the use of a partial or full compiled version (executable) of the DIMOSIM simulation platform, as well as an access right to the user guide. </td> <td> </td> </tr>
</table>

#### 2.1.2 PARTY 2: (ACCIONA)

As to **ACCIONA**, it is agreed between the parties that, to the best of their knowledge, the following Background is hereby identified and agreed upon for the Project.
Specific limitations and/or conditions shall be as mentioned hereunder:

<table>
<tr> <th> Describe Background </th> <th> Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement) </th> <th> Specific limitations and/or conditions for exploitation (Article 25.3 Grant Agreement) </th> </tr>
<tr> <td> Background that is covered under specific research agreements and confidentiality agreements and therefore subject to third party rights. </td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> All knowledge, technical information and experience related to other construction and research projects in which ACCIONA is or has been involved </td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> Know-how and experience on energy-efficient building design and passive strategies integration.
</td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> Knowledge about the implementation of renewable energy systems in buildings, the integration of systems to generate electricity, and new energy distribution systems by electrical and thermal micro-grids </td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> Background in patents and current applications </td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> Know-how relating to production (renewable), demand, control and storage systems at building level </td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> Background of software tools developed for the energy management of buildings </td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
<tr> <td> Know-how on façade structure design, backed up by previous real projects and projects co-financed by the EC in which Acciona takes an active part or coordinates, on composite and aluminium façade multifunctional systems, also including knowledge of the connection between different parts.
</td> <td> Right for using the Background within the Project </td> <td> This Background shall not be used until an exploitation agreement is signed, which will reflect the conditions on which royalties are provided </td> </tr>
</table>

#### 2.1.3 PARTY 3 (VEOLIA)

As to VEOLIA, it is agreed between the parties that, to the best of their knowledge, the following Background is hereby identified and agreed upon for the Project. Specific limitations and/or conditions shall be as mentioned hereunder:

<table>
<tr> <th> Describe Background </th> <th> Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement) </th> <th> Specific limitations and/or conditions for exploitation (Article 25.3 Grant Agreement) </th> </tr>
<tr> <td> Technical characteristics of VEOLIA sites and their components/assets </td> <td> Restricted to project implementation needs and to its duration </td> <td> Not usable for exploitation </td> </tr>
<tr> <td> VEOLIA heating and cooling sites working/historical data </td> <td> Restricted to project implementation needs and to its duration </td> <td> Not usable for exploitation </td> </tr>
<tr> <td> VEOLIA heating and cooling sites operational KPIs and their calculation methods </td> <td> Restricted to project implementation needs and duration </td> <td> Not usable for exploitation </td> </tr>
<tr> <td> VEOLIA models corresponding to components of heating and cooling networks </td> <td> Restricted to project implementation needs and duration </td> <td> Not usable for exploitation </td> </tr>
<tr> <td> General characteristics and principles of Veolia's current and classical business models </td> <td> Restricted to project implementation needs and duration </td> <td> Not usable for exploitation </td> </tr>
</table>

#### 2.1.4 PARTY 4 (CIT)

As to CIT, it is agreed between the parties that, to the best of their knowledge, the following Background is hereby identified and agreed upon for the Project.
Specific limitations and/or conditions shall be as mentioned hereunder:

<table>
<tr> <th> Describe Background </th> <th> Specific limitations and/or conditions for implementation (Article 25.2 Grant Agreement) </th> <th> Specific limitations and/or conditions for exploitation (Article 25.3 Grant Agreement) </th> </tr>
<tr> <td> NICORE: Application Enablement Platform based on Generic Event Driven System Architecture combined with dynamic service composition and invocation principles. </td> <td> Restricted to project implementation needs and to its duration. Excluded: NICORE: all elements of source code. </td> <td> Not usable for exploitation. </td> </tr>
</table>

## 2.2 OPEN ACCESS

According to article 29.2 of the Grant Agreement, E2District, as a Horizon 2020 beneficiary, must ensure open access (free-of-charge online access for any user) to all peer-reviewed scientific publications relating to its results. In particular, the E2District project must:

* As soon as possible, and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the research data needed to validate the results presented in the deposited scientific publications must be deposited at the same time.
* Ensure open access to the deposited publication - via the repository - at the latest: on publication, if an electronic version is available for free via the publisher, or within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
* Ensure open access - via the repository - to the bibliographic metadata that identify the deposited publication.
As outlined in article 29.2 of the Grant Agreement, all E2District bibliographic metadata will be in a standard format and will include all of the following:

* The terms “European Union (EU)” and “Horizon 2020”;
* The name of the action, acronym and grant number;
* The publication date, and length of embargo period if applicable; and
* A persistent identifier.

In accordance with the above guidelines, E2District, as a participant in the Horizon 2020 Open Data pilot, is committed to Open Access publishing and is prioritising publication venues and promoting Open Access to its publications where possible. Where feasible, the project will make openly available, through open access repositories, baseline data from the demo sites, statistics and measurements from experiments, business models, and key stakeholder surveys and questionnaires.

## 2.3 OPEN DATA

The Data Management Plan establishes the approach of the project in relation to open research data as far as it can currently be defined; further detail will be provided in the mid-term and final plans. As stated in article 2.2.2 of the Grant Agreement, E2District is voluntarily participating in the Horizon 2020 Open Data pilot. Therefore, the project will make openly available baseline data from demo sites, statistics and measurements from experiments, business models, and key stakeholder surveys and questionnaires, except when the release of datasets collected from the project is considered to:

* Impact results that are expected to be commercially or industrially exploited;
* Be incompatible with the need for confidentiality in connection with security issues;
* Be incompatible with existing rules on the protection of personal data;
* Jeopardise the achievement of the main aim of the action; or
* Constitute another legitimate reason not to take part in the Pilot.
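The required bibliographic-metadata fields listed above lend themselves to a simple completeness check before a publication is deposited. A minimal sketch (the field names, record layout and example values are illustrative assumptions, not a prescribed schema):

```python
# Check that a deposited publication record carries the metadata fields
# required by Article 29.2, as listed above. Field names are illustrative.
REQUIRED = ["funding_terms", "action_name", "acronym", "grant_number",
            "publication_date", "persistent_identifier"]

def validate_record(record):
    """Return the required metadata fields missing or empty in a record."""
    return [field for field in REQUIRED if not record.get(field)]

record = {
    "funding_terms": "European Union (EU); Horizon 2020",
    "action_name": "Energy Efficiency Optimised District Heating and Cooling",
    "acronym": "E2District",
    "grant_number": "696009",
    "publication_date": "2017-05-01",
    "embargo_months": 0,          # optional: only if an embargo applies
    "persistent_identifier": "",  # e.g. a DOI, still to be assigned
}

print(validate_record(record))
```

Running such a check at deposit time would flag the empty persistent identifier before the record reaches the repository.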
In conjunction with an espousal of FAIR data practice, this Data Management Plan’s establishment of consistent data practices will increase the efficiency of data handling throughout the lifespan of the project and ensure that the data will reach more people, have a greater impact, avoid duplication of effort and be preserved for future researchers.

## 2.4 ACCESSIBILITY

The data generated and collected by E2District will be accessible to the consortium through the CIT NICORE platform, using the available APIs and web services. In addition, the data will be available through the consortium’s SVN repository. The consortium will also follow the dissemination validation process (see D6.1 Dissemination and Communication Plan, Section 6.3) to validate and approve the public dissemination of the data. The data will then be shared publicly on the E2District website.

# 3 DATA MANAGEMENT PLAN

## 3.1 DATASET 1: (VEOLIA) VEOLIASITES_DATA

<table>
<tr> <th> </th> <th> **VEOLIA** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP2-T2.3 (validation of the model) / WP3-T3.2 / WP3-T3.5 VERI </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> VeoliaSites_Data </td> </tr>
<tr> <td> **Data set description** </td> <td> Any DHN historical monitoring data, any data concerning the DHN design and equipment, any DHN operation rules and practices, and any data generated through the tools developed in the project (e.g. optimisation tools) relative to the VEOLIA sites. This data will not be used in publications.
</td> </tr>
<tr> <td> **Availability** </td> <td> Project partners needing access to these data during the project </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> Historical DHN sites and DHN sites design data located on the VEOLIA IT platform; generated data located on the VEOLIA IT platform </td> </tr>
<tr> <td> **Standards** </td> <td> Not defined </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Data will be accessible to project partners needing access to these data during the project through the VERI IT platform </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> Not communicated </td> </tr>
</table>

## 3.2 DATASET 2: (VEOLIA) VEOLIASITES-BUSINESSMODELS_DATA

<table>
<tr> <th> **Description** </th> </tr>
<tr> <td> </td> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP6-T6.3 VERI </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> VeoliaSites-BusinessModels_Data </td> </tr>
<tr> <td> **Data set description** </td> <td> Any DHN historical monitoring data, any data concerning the DHN design and equipment, and any data generated through the tools developed in the project (e.g. economic evaluation tools) relative to the VEOLIA sites. This data will not be used in publications.
</td> </tr>
<tr> <td> **Availability** </td> <td> Project partners needing access to these data during the project </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> DHN sites design data located on the VEOLIA IT platform; generated data located on the VEOLIA IT platform </td> </tr>
<tr> <td> **Standards** </td> <td> Not defined </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Data will be accessible to project partners needing access to these data during the project through the VERI IT platform </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> Not communicated </td> </tr>
</table>

## 3.3 DATASET 3: (CSTB) DATA TEMPLATES FOR SIMULATION INTEROPERABILITY

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP2 - Task 2.1 Data Manager: Vincent Partenay (CSTB) </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> Data Templates for simulation interoperability </td> </tr>
<tr> <td> **Data set description** </td> <td> Templates of semantic information files that will be used to ensure interoperability between simulation platforms (District Simulation Platform, Supervisory Controllers and Production Scheduling Optimizer). Two types: 1. District Information File, which is based on a semantic tree structure gathering only the district topology, building and system properties needed for energy simulation; 2. Co-simulation Data File, which is also based on a semantic tree structure but is used only to exchange data dynamically between coupled platforms of the project.
</td> </tr>
<tr> <td> **Availability** </td> <td> Consortium </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> Data template / interoperability / co-simulation </td> </tr>
<tr> <td> **Standards** </td> <td> For the District Information File: based on the citygml international standard (http://www.citygml.org/). For the Co-simulation Data File: XML schema </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Through Nicore. Even if potentially accessible to all the partners, these data mainly concern the groups involved in running the simulation platforms </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> According to the Nicore architecture. Preservation period: inherently subject to the time constraints of the simulation platform operation </td> </tr>
</table>

## 3.4 DATASET 4: (CSTB) DISTRICT SIMULATION PLATFORM PARAMETRIC STUDY RESULTS

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP2 - Task 2.2 (also relevant to WP3) Data Manager: Vincent Partenay (CSTB) </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> District Simulation Platform Parametric Study Results </td> </tr>
<tr> <td> **Data set description** </td> <td> In WP2, Task 2.2, several elementary physical models are being developed and integrated into the District Simulation Platform. Once this modelling task is done, a global parametric study for different system configurations and various climate zones shall be carried out.
The dataset in this case comprises the results of this parametric study, as load time series (hourly, on an annual basis) and final integrated values relative to the KPI calculations defined in WP1.1 </td> </tr>
<tr> <td> **Availability** </td> <td> Consortium and Open Access </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> The district information file (using the template elaborated in WP2.1) for each simulated configuration </td> </tr>
<tr> <td> **Standards** </td> <td> XML or JSON </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Through Nicore </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> To be stored at least for the project lifetime </td> </tr>
</table>

## 3.5 DATASET 5: (CSTB) DISTRICT INFORMATION FILES FOR DEMONSTRATION SITES

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP2 - Task 2.3 Data Manager: ACCIONA + CSTB </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> District Information Files for Demonstration Sites </td> </tr>
<tr> <td> **Data set description** </td> <td> For each demonstration site, a District Information File will be filled in according to its specific properties for simulation purposes.
These data will be inherited from intrinsic data that will be at the partners' disposal, but also from monitoring systems (for calibration) </td> </tr>
<tr> <td> **Availability** </td> <td> Consortium; Open Access only for CIT test site data </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> Building geometry, thermal properties, HVAC systems, energy sources, heating/cooling schedules, lighting, district heating production, storage, occupancy profiles, weather data </td> </tr>
<tr> <td> **Standards** </td> <td> Based on the citygml international standard (http://www.citygml.org/) and the Energy ADE </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Through Nicore </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> To be stored as long as the controlling systems (deployed on demonstration sites and requiring these calibrated district information models) are operating </td> </tr>
</table>

## 3.6 DATASET 6: (CIT) BEHAVIOURAL MODEL CALIBRATION SURVEY DATA

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP3/T3.4 Julia Blanke </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> Behavioural Model Calibration Survey Data </td> </tr>
<tr> <td> **Data set description** </td> <td> This dataset contains the survey data collected on the CIT campus for the purpose of calibrating the behavioural model.
</td> </tr>
<tr> <td> **Availability** </td> <td> Private </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> Behavioural survey data </td> </tr>
<tr> <td> **Standards** </td> <td> </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Access to this data will be restricted - to be used for model calibration only - in order to protect the personal data of the survey participants. </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> The data will be stored in Excel files on CIT servers. </td> </tr>
</table>

## 3.7 DATASET 7: (CIT) CIT SENSOR DATA

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP4 Christian Beder </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> CIT Sensor Data </td> </tr>
<tr> <td> **Data set description** </td> <td> This dataset contains all the sensor data collected on the CIT campus during the E2D project. Data is collected from the main campus BMS, the Nimbus BMS, as well as through LoRa-based sensors distributed across the campus. Each data point comprises a timestamp, the sub-system the data was collected from, the name of the particular data point, and its value. The semantics of the data stream is such that each data point remains valid until a new data point indicates a change in value.
</td> </tr>
<tr> <td> **Availability** </td> <td> Consortium; Open Access for subsets </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> BMS data, sensor data </td> </tr>
<tr> <td> **Standards** </td> <td> MongoDB database, key-value pairs </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Data will be accessible to the E2D consortium through the NiCore platform. This platform provides access through a variety of services, including SOAP web services, subscription to live data via AMQP, or direct access to the MongoDB database through its API. </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> The data is stored in a MongoDB database on a server at CIT </td> </tr>
</table>

## 3.8 DATASET 8: (UTRC) SIMULATION DATA AND USE-CASES OF THE EXISTING CIT DISTRICT SYSTEM (BASELINE SIMULATION)

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP3/T3.5 Kostas Kouramas </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> Simulation data and use-cases of the existing CIT district system (baseline simulation) </td> </tr>
<tr> <td> **Data set description** </td> <td> A record of heating use-cases for the CIT demo-site and data generated from running simulations on the DSP platform for these use-cases. </td> </tr>
<tr> <td> **Availability** </td> <td> Consortium and Open Access </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> DSP data; BMS data; Weather data </td> </tr>
<tr> <td> **Standards** </td> <td> Matlab mat files; CSV files; Excel sheets; NICORE database (?)
</td> </tr>
<tr> <td> **Data Sharing** </td> <td> The data will be accessible to the consortium through the CIT NICORE platform, using the available APIs and web services. In addition, the data will be available through the consortium SVN repository. The consortium will also follow the dissemination validation process (see D6.1 Dissemination and Communication Plan, Section 6.3) to validate and approve the public dissemination of the data. The data will then be shared publicly on the E2District website. </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> The data will be stored in the NICORE platform database, the E2District SVN repository and the project website for the duration of the project. </td> </tr>
</table>

## 3.9 DATASET 9: (UTRC) SUPERVISORY CONTROL AND PRODUCTION SCHEDULING OPTIMISATION SIMULATION-BASED EVALUATION DATA

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP3/T3.5 Kostas Kouramas </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> Supervisory control and Production Scheduling Optimisation simulation-based evaluation data </td> </tr>
<tr> <td> **Data set description** </td> <td> Comparison data on energy consumption cost, heating generation and demand (kW), and comfort (deg. C) from the simulation-based analysis and comparison with the baseline of the control and optimisation algorithms, for a number of use-case scenarios that will be determined as part of the work in T3.5.
</td> </tr>
<tr> <td> **Availability** </td> <td> Consortium </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> DSP data; Matlab data; BMS data; Weather data </td> </tr>
<tr> <td> **Standards** </td> <td> Matlab mat files; CSV files; Excel sheets; NICORE database </td> </tr>
<tr> <td> **Data Sharing** </td> <td> The data will be accessible to the consortium through the CIT NICORE platform, using the available APIs and web services. In addition, the data will be available through the consortium SVN repository. </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> The data will be available for the duration of the project through the above data-sharing means. </td> </tr>
</table>

## 3.10 DATASET 10: (ACC) ACCIONA BASELINE AND PERFORMANCE EVALUATION

<table>
<tr> <th> </th> <th> **Description** </th> </tr>
<tr> <td> **Work Package/Task Nos. re: dataset** **Data Manager** </td> <td> WP5 Jose C. Esteban </td> </tr>
<tr> <td> **Dataset reference and name** </td> <td> Acciona Baseline and Performance Evaluation. </td> </tr>
<tr> <td> **Data set description** </td> <td> The dataset is the collection of calculated items describing the behaviour in the baseline situation and after the implementation of the E2District measures in the Cork demonstration; they are based on cross-compared data extracted from the BMS and sensor dataset of CIT (WP4 collection). The standards of the IPMVP (International Performance Measurement and Verification Protocol) of EVO (Efficiency Valuation Organization) are used to track the improvements.
</td> </tr>
<tr> <td> **Availability** </td> <td> Consortium </td> </tr>
<tr> <td> **E2District Project Metadata** </td> <td> European Union; H2020; Energy Efficiency Optimised District Heating and Cooling; E2District; GA696009 </td> </tr>
<tr> <td> **Metadata specific to dataset** </td> <td> Baseline; KPI; Electricity consumption; Gas consumption; Heat; Building/Areas Set Point Data; Monitoring data. </td> </tr>
<tr> <td> **Standards** </td> <td> IPMVP. BACnet for BMS integration; Modbus in some meter readings. Analysis of metering data to be stored in .csv, .xls or .xlsx format </td> </tr>
<tr> <td> **Data Sharing** </td> <td> Data are available to the E2District consortium through the data sharing platform, the Tortoise SVN platform. </td> </tr>
<tr> <td> **Archiving and Preservation (including storage and back-up)** </td> <td> Daily export of measurement, metering and set point data to build a monthly summary file. Update and storage of the baseline and performance verification at least every three months, with upload to the SVN platform. </td> </tr>
</table>

# 4 CONCLUSION

This document has established the E2District project’s approach to data management for the datasets captured or processed within the project, according to the guidelines published by the EC. The plan has identified how data collected or generated by the E2District project will be organised, stored and shared, and has specified what type of data will be made publicly available by the project (open access) in so far as it is possible to do so at this stage of the project. Suitable public repositories to store the data have also been identified. This Data Management Plan is not a fixed document and will, therefore, evolve throughout the E2District project according to the progress of project activities.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1131_DOMINO_696074.md
# 1 Introduction

DOMINO is an EU-funded research project that intends to prompt energy reductions in households. It uses smart plug technology to raise awareness of energy consumption and to change the participants’ behaviour towards energy saving. The project has chosen a playful approach in the form of an energy saving game called the “DOMINO Challenge”, which is played in teams of five households each. The challenge takes place in Brussels, Berlin and Naples and will involve around 4000 households. Including preparation and evaluation, the project extends over a period of almost three years (March 2016 to February 2019 approximately). The project results will allow making assumptions on the overall potential of smart plugs for reducing energy consumption in households and will enable related initiatives to benefit from the DOMINO experience.

This Data Management Plan describes the fundamentals of the data management concerning the DOMINO Challenge as executed by Plugwise and its DOMINO Challenge partners. The plan consists of various generally applicable Plugwise privacy and security policies and also refers to the specific DOMINO Challenge policies. Within the DOMINO project, Plugwise has been assigned to develop and deliver:

* Project database
* DOMINO website
* DOMINO App
* Smart plugs
* Testing of the equipment and software
* Logistic activities

For a better understanding of the DOMINO Challenge, the course of the action can be found below:

* Potential participants from Brussels, Berlin and Naples learn about the challenge and become interested in participating.
* They sign up via the DOMINO website individually. They receive six smart plugs and a “gateway” by postal mail from Plugwise. (The gateway is the Plugwise router called “Stretch”, which allows for communication with the plugs (ZigBee protocol) and the app via the existing home router over WiFi.)
* They download the DOMINO app.
* They install the plugs in their home by plugging them into a power outlet and then plugging in an electric appliance, e.g. a refrigerator or a washing machine.
* They monitor their appliances’ electricity consumption for one month via the app.
* In months two and three, they receive energy saving recommendations via the app for their choice of appliances. They can compare their consumption data to anonymised data of other users via the website. They can comment on how useful the recommendations are.
* If they have not assembled a complete team (of 5 players) at the beginning, they can find another player during their 3-month cycle and hand over the plugs to this person.
* The process starts over until a team of 5 people has played.
* Once all team members have played (anticipated date September 2018), the Usage Data will be evaluated and the winners will be determined.
* The last person of a team sends the plugs back to the consortium partners with the help of postage labels which will be available on the DOMINO website.

In this data management plan, a distinction is made between Personal Data and Usage Data. Usage Data is collected through the Stretch and stored in one cloud (called “Plugwise cloud”). Personal Data is collected through the website and through the app and is stored in a different cloud, called “DOMINO cloud”.

Personal Data collected through the website includes:

* First name, last name
* Physical address incl. street number, street name, city, zip code and country (used for shipment of the smart plug equipment)
* Email address

Address data is collected to ship the hardware to the players of the game. The email address is used to send them information on their energy savings and on important procedural steps in the DOMINO Challenge.
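The separation between Personal Data and Usage Data described above can be sketched as two unrelated record types, one per cloud. This is an illustrative model only; the class and field names below are assumptions, not Plugwise's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record types illustrating the two-cloud separation.
# Names and fields are illustrative, not the actual Plugwise schema.

@dataclass
class PersonalRecord:          # stored only in the "DOMINO cloud"
    first_name: str
    last_name: str
    address: str
    email: str

@dataclass
class UsageRecord:             # stored only in the "Plugwise cloud"
    gateway_id: str            # unique Stretch serial number
    timestamp: str             # ISO 8601 hour of the reading
    consumption_wh: float      # consumption in watt-hours

# Crucially, UsageRecord carries no reference to PersonalRecord:
# the gateway ID is the only key, so Usage Data alone cannot be
# traced back to a natural person.
alice = PersonalRecord("Alice", "Example", "Examplestr. 1, Berlin", "[email protected]")
reading = UsageRecord("STRETCH-000042", "2018-03-01T13:00:00", 125.0)
assert not any(f in ("email", "address") for f in UsageRecord.__dataclass_fields__)
```

The point of the sketch is the absence of any shared key: linking the two record types is only possible via the consent-gated pairing step described later for winners.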
Additional Personal Data collected through the app includes:

* Age (recommended but optional)
* Gender (recommended but optional)
* Password
* Choice of language
* Name and type of device connected, for all the plugs except the joker plug (recommended but optional)
* Number of people in the household of the player (recommended but optional)
* Electricity price (recommended but optional)
* Electricity consumption from the past year (recommended but optional)
* Information on appliances, i.e. energy efficiency class, year of construction and consumption as indicated by the manufacturer (recommended but optional)
* Absence periods during the DOMINO Challenge, indicated retrospectively (recommended but optional)

This type of data is collected for non-commercial research purposes and serves the goal of increasing the general knowledge on electricity consumption in European households, which can ultimately inform activities on reducing electricity consumption.

Usage Data includes:

* current and historical consumption on a
  * Monthly basis
  * Weekly basis
  * Daily basis
  * Hourly basis
* settings and scheduling data

This type of data is collected both for non-commercial research purposes and for the participants themselves, so that they get well-structured insights into their own consumption patterns.

# 2 Plugwise general data and privacy policies

## 2.1 Plugwise gateway and applications

The DOMINO Challenge is based on two different cloud environments in which data is collected and stored. For the purpose of the Challenge, a specific and separate DOMINO cloud environment has been developed which allows the participants to register (via the specially developed DOMINO registration website). Before the registration can be completed, the participant will have to fill in the Consent Form (see Chapter 6), after which certain Personal Data will be stored in the DOMINO cloud for further use during the Challenge, e.g. for determining the winners of the challenge.
The Personal Data will not be available outside this specific DOMINO cloud. Plugwise will not have general access to any Personal Data from users of Plugwise gateway- and application-based systems.

* The application is available via the relevant app stores and is downloadable without registration.
* During the registration process via the website, the user has the possibility to make his Usage Data available to Plugwise by explicitly declaring his permission by ticking a dedicated box.
* Each Plugwise gateway has a unique serial number (ID) which, upon activation through the application, will be connected to the Plugwise cloud-based database.
* The Plugwise gateway, called the Stretch, will make a secure internet connection (HTTPS) to the Plugwise database(s) by means of a unique password each time the participant/user uses the gateway and application.

## 2.2 Usage Data

At installation, the user will make a secure connection between the application and the Plugwise gateway, and only when physically within his existing WLAN network at home. The Plugwise gateway then automatically provides the application with a secure ID and Access Key which allow communication between the application, the Plugwise gateway and the Plugwise plugs. After installation of the Plugwise plugs and initiation of the system, the user can access his Plugwise system from outside the house/WLAN network.

Plugwise will, through the Plugwise Cloud, receive and have access to the Usage Data from the devices/appliances that are connected to the Plugwise system through the plugs. This concerns data regarding the connected devices only, i.e.

* the current and historical consumption
* and settings and scheduling data. (This refers to the moments in time at which plugs are switched off/on and the schedule, as set in the app, is activated.)

The historical consumption is available to the corresponding player during the 3 months of his cycle.
It includes consumption data in watt-hours (Wh) on a

* Monthly basis
* Weekly basis
* Daily basis
* Hourly basis

This Usage Data is stored in the Plugwise database in the Plugwise Cloud, keyed on the Plugwise gateway ID only, and will always be anonymous; it therefore cannot be traced back to natural persons and/or address details. Through signing the consent form of the DOMINO Challenge, the DOMINO player explicitly gives consent to link his Personal Data to the collected Usage Data in specific situations, i.e. if he/his team wins the challenge or if the smart plug equipment has not been passed on to the next player.

Plugwise guarantees that any and all Usage Data that is stored in its database(s) (third-party databases included) will not in any way and/or for whatever reason be made available to third parties, for example for commercial reasons.

After handing over the Plugwise plugs to the next team member, the Plugwise system will register the new participant (via the log-in on the app), and the new team member will only be able to see the historical Usage Data from the date of his first log-in. In other words, Usage Data collected earlier (from earlier participants in the same team) will not be available to the next participants.

## 2.3 Purpose of collecting Data

### 2.3.1 System control

Plugwise collects Usage Data to allow players to gain insights into their appliances’ electricity consumption and to control it, if they want to reduce it, by switching the connected appliances and/or lighting on or off. Insights into consumption and control of devices are also possible from outside the WLAN network. In this case, data will be transferred via the internet to the Plugwise application on a smartphone or tablet. Access to the plugs via the DOMINO cloud is only possible once the participant has made an initial connection to the Plugwise gateway (Stretch) and has received the secure ID and Access Key on his phone during that process.
It is not possible to use the cloud-based controls (for example switching appliances on or off) without this initial local communication.

### 2.3.2 Analysis of Data

The DOMINO consortium expects to generate data on the energy consumption of participants related to different appliances (Usage Data) and other optional anonymised data on:

* household size,
* region,
* age,
* gender,
* last year’s electricity consumption,
* electricity price,
* connected appliances: energy consumption as indicated by the manufacturer, energy efficiency class, year of construction,

and on how far recommendations on saving energy have an effect on behaviour. For this reason, feedback clicks for energy saving recommendations will be collected. In the app, the participants are asked to indicate long absence periods (i.e. holidays) retrospectively so that they can be taken into account during data analysis as well.

The data mentioned above is collected for non-commercial research purposes and serves the goal of increasing the knowledge on electricity consumption in different types of households, which can ultimately form the basis for activities on reducing electricity consumption. Anonymised data will be analysed by the responsible consortium partners (adelphi, IBGE and ANEA). Plugwise will make requested anonymised data available to the aforementioned partners for analysis and assure that it will be accessible in a readable format (Excel, Open Office), within a maximum of 2 weeks after the request and without violation of the anonymity of the data.

Metadata (e.g. IP address, location of the app user etc.) is not collected with the app. The DOMINO website collects data on the number of users that visited the site from the various target countries. Those are stored for 7 days and then removed (standard procedure).
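The anonymised export described above, hourly Usage Data keyed by gateway ID only, aggregated into a format readable in Excel or Open Office, could look like the following sketch. The record layout and column names are assumptions for illustration, not Plugwise's actual schema.

```python
import csv
import io
from collections import defaultdict

# Illustrative sketch: hourly Usage Data, keyed by gateway ID only,
# is aggregated to monthly totals and written to CSV for the analysis
# partners. Field names and values are hypothetical.
hourly_usage = [
    ("STRETCH-000042", "2018-03-01T13:00", 125.0),   # (gateway, hour, Wh)
    ("STRETCH-000042", "2018-03-01T14:00", 140.0),
    ("STRETCH-000077", "2018-04-02T09:00", 80.0),
]

# Sum watt-hours per (gateway, month); no personal fields ever enter.
monthly = defaultdict(float)
for gateway_id, hour, wh in hourly_usage:
    month = hour[:7]                      # "YYYY-MM"
    monthly[(gateway_id, month)] += wh

# Write a CSV that opens directly in Excel / Open Office.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["gateway_id", "month", "consumption_wh"])
for (gateway_id, month), wh in sorted(monthly.items()):
    writer.writerow([gateway_id, month, wh])

print(buf.getvalue())
```

Because the only identifier in the export is the gateway ID, the anonymity guarantee of section 2.2 is preserved end to end.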
### 2.3.3 Sharing of Personal Data within the DOMINO Challenge teams

The creation of teams that compete for a prize (for example, the highest total electricity savings within a geographical area) is the key element of the DOMINO Challenge. It will be supported by the regional DOMINO partners (ANEA, adelphi, and IBGE) in case a complete team cannot be formed by the participants themselves. All team member households/participants have been informed that, in this case, their names will be made known to other team members. This is also covered in the consent form.

### 2.3.4 Public availability of Usage Data

The DOMINO Challenge administrator Kees Schouten will keep the Usage Data collected throughout the challenge. Anonymised data resulting from the subsequent analysis will be made publicly available through an open access area on the DOMINO website.

### 2.3.5 DOMINO Administrator

The DOMINO Challenge administrator will set up an administrator dashboard and as such will be able to create an account for the regional DOMINO partners. Regional partners cannot get access to the database of Plugwise, so they cannot see actual Usage Data. The regional partners can only see whether a specific participant/user does indeed use the Plugwise plugs. At all times it is guaranteed, through the measures and procedures described earlier (see sections 2.1 and 2.2), that the Usage Data cannot in any way be connected to Personal Data.

## 2.4 Plugwise System and Usage Data Security and Retention period

The Plugwise ZigBee network, gateway and plugs are highly reliable and secure, due to the use of 128-bit AES encryption. The Usage Data will be stored for an indefinite period in the Plugwise cloud-based database, provided by the hosting company Amazon Web Services (Amazon.com), and will be hosted in Ireland.
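The 128-bit AES protection mentioned above is implemented in the ZigBee firmware of the gateway and plugs. Purely as an illustration of what 128-bit AES authenticated encryption looks like in software, here is a sketch using the third-party Python `cryptography` package; this is an assumed stand-in, not Plugwise's actual implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration only: 128-bit AES in GCM mode via the third-party
# "cryptography" package. The real Plugwise/ZigBee encryption runs
# in the gateway and plug firmware, not in Python.
key = AESGCM.generate_key(bit_length=128)   # 16-byte (128-bit) key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

# A hypothetical usage reading as it might travel to the cloud.
reading = b"STRETCH-000042,2018-03-01T13:00,125.0"
ciphertext = aesgcm.encrypt(nonce, reading, None)

# Only a holder of the 128-bit key can recover the reading.
assert aesgcm.decrypt(nonce, ciphertext, None) == reading
```

GCM mode also authenticates the message, so a tampered ciphertext is rejected rather than silently decrypted to garbage.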
Plugwise guarantees that the data storage (also through their contracted third parties) will be carried out according to the latest industry-accepted security standards, in accordance with national legislation and in full harmony with the new General Data Protection Regulation (which succeeds the EU Data Protection Directive (95/46/EC)).

## 2.5 Deletion of Usage Data

Because the Usage Data is fully anonymous, Plugwise has no practical way to delete specific participant/user-related Usage Data from its databases, since the database only registers gateway-ID-related data and does not know which Usage Data relates to which participant/user.

Within the Plugwise organisation, data management is allocated to the data protection employee and falls under the responsibility of the Plugwise Chief Information Officer (CIO), Theo Vroege. Reference is made to the attachment on AWS, outlining the security measures of Amazon.com, the main hosting services partner of Plugwise (cf. Chapter 5).

## 2.6 Logistics

The DOMINO website will generate a list of participants for the first cycle of DOMINO, including their Personal Data (name, address, postal code and country), which will be transferred by Plugwise to an Excel list for logistic/shipping purposes. Plugwise will hand over this list to its (out-sourced) logistics centre. The logistics centre will ensure that a DOMINO package is sent to the listed addresses.

Plugwise will separately (before packaging and shipping) scan all unique Stretch IDs allocated to the DOMINO Challenge, to ensure that, upon first installation, the Plugwise back-end will recognize the Stretch ID as part of the DOMINO Challenge. The individual shipping address will in no way be connected to the unique Stretch ID. The ID list will not be known by third parties nor communicated outside of Plugwise. Before the actual start of the Challenge, the first participants will receive the DOMINO set per FedEx postal services.
The administration and handover of the sets to FedEx will be handled by the aforementioned out-sourced logistics company. They will use the formats as described in Chapter 8.

# 3 DOMINO Registration and Consent procedures

Potential participants of the DOMINO Challenge will be approached via the DOMINO website, social media and offline channels, such as newspaper articles, project presentations during relevant events, or flyers. The target group consists of men and women who are approximately 28-49 years old, tech-savvy and own a smartphone. Via the consent form they confirm that they have informed the other members of their household that they take part in the DOMINO project and assure that they have obtained consent from the other members as well. The idea is to target people who are living in the areas of Berlin (Germany), Brussels (Belgium) and Naples (Italy). Communication channels will be chosen to particularly fit this group.

The DOMINO web portal is the unique entry point into the challenge. All relevant information for potential participation in the DOMINO Challenge, including this data management plan, will be available on the website. Potential participants who want to sign up through the website are asked to read the digital informed consent form (cf. Chapter 6 of this report) and confirm that they agree with the information provided. If they have questions on the consent form or on the data policy, they can submit them through an online form available on the website before participating in the DOMINO Challenge. A response will be given to them via email from the data protection officer of the project team within 24 hours on weekdays.

Possible risks and discomforts of participants during the research project (e.g. the time and effort needed to set up the plugs and pass them on to the next person, or participants not saving any energy at all if they do not follow the energy saving tips) are laid out in the consent form, and generally they are assumed to remain at a tolerable level. The form will be available in German, French, Dutch and Italian so that participants in the three target regions will be able to fully understand it.

All participants in the DOMINO Challenge are specifically and explicitly informed about the way their Personal and Usage Data will be collected and administered. On the DOMINO website, potential participants will be able to formally agree to the registration and administration of personal information including:

* First name, last name
* Physical address incl. city, zip code and country
* Email address
* Choice of language

The data above needs to be collected for sending the plugs to participants and for contacting them in case they receive a prize or have not passed the plugs on.

_Figure 1_: Required information for registration

The agreement will be gathered via a so-called opt-in option (an “I agree to the terms and conditions” button). Via the URL www.dominoenergy.eu, potential participants will get to the registration page. The terms and conditions are available for reading on a separate page of the registration process. In case a potential participant does not tick the acceptance button, he/she cannot be registered and cannot participate in the challenge.

After successful registration, the first-cycle participants will receive a challenge package and will be able to download the DOMINO application from the relevant stores (Android and iOS). To activate the app, the participant will have to log in by using the correct user name, i.e. the email address (as defined by themselves on the DOMINO website).
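The opt-in rule described above, no registration without ticking the acceptance button and providing the required fields, can be sketched as a small validation step. The function and field names below are hypothetical, not the actual website code.

```python
# Hypothetical sketch of the opt-in registration rule: a sign-up is
# only accepted when the terms-and-conditions box has been ticked
# and the required Personal Data fields are present.
REQUIRED_FIELDS = ("first_name", "last_name", "address", "email")

def register(form: dict) -> dict:
    """Return the Personal Data record to store, or raise if invalid."""
    if not form.get("accepted_terms"):
        raise ValueError("Cannot register: terms and conditions not accepted")
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        raise ValueError(f"Missing required fields: {missing}")
    # Only now may the Personal Data be stored in the DOMINO cloud.
    return {f: form[f] for f in REQUIRED_FIELDS}

form = {"first_name": "Alice", "last_name": "Example",
        "address": "Examplestr. 1, 10115 Berlin", "email": "[email protected]",
        "accepted_terms": True}
record = register(form)
```

Keeping the consent check as the first condition mirrors the policy: without the ticked box, no Personal Data is stored at all.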
The DOMINO Challenge will also use the Personal Data to communicate with the participants in the following manner:

* Send them a prize if they win the challenge.
* Send them a reminder if they have not passed on the plugs to the next person in their team.
* Send them automated information on their savings after months two and three of their cycle.
* Send the invitation to a voluntary project evaluation sheet to a random selection of participants from each country.
* Send them an encouraging email to stick to the energy saving habits they have learned during the project.

The notification of the winners by email (9 teams comprising 45 households overall) will be done by the project partners. For this, they will evaluate the anonymised user data of the participants of the three regions and indicate the team IDs with the best performances to the data manager at Plugwise. For this purpose only, he will pair the user data with the personal data, at the end of the challenge or, in the case of Italy, at the end of each cycle, and communicate the email addresses to the project team, who will then inform the winners.

## 3.1 Deletion of Personal Data

All personal data of all the participants will be deleted within 90 days following the nomination of the winners. Postal addresses will be deleted 90 days after the equipment has been sent out.

# 4 Ethics Issues

## 4.1 Recruitment

Recruitment of participants for the DOMINO Challenge will be done via websites, newsletters and social media channels, as well as by presenting the challenge in personal interactions at dedicated workshops, meetings or events. No specific part of the population in the DOMINO-assigned geographical target areas is excluded from the challenge, taking into consideration the specifically defined DOMINO target group characteristics. Participants are accepted on a first-come, first-served basis if they fulfil the minimum requirements for participating in the challenge as described above.
Children under the age of 18 years are not allowed to sign up for the DOMINO Challenge. If they are part of a household that participates in the challenge, the person/parent in charge of the child (legal guardian) has to declare his or her consent regarding the participation of the child to the person who signed up for the challenge. Children over 14 additionally have to be informed about the character of the DOMINO Challenge and declare their consent to the person who signed up for the challenge. Participation in the DOMINO Challenge is voluntary and can be ended by the participants at any time without consequences.

## 4.2 Sensitive Personal Data

At no time during the DOMINO Challenge will data be requested or processed in relation to:

* Health
* Sexual lifestyle
* Ethnicity
* Political opinion
* Religious or philosophical conviction

## 4.3 Incidental findings policy

The consortium expects to generate data on the consumption of participants by household size, region, and gender, and on how far recommendations on saving energy have an effect on behaviour. Other anticipated findings are data on the electricity consumption of diverse household appliances.

A researcher’s duty of care towards their research group suggests that the researcher should disclose unexpected findings that are beneficial to the participants or that could prevent damage to them. This guiding principle is mostly known and applied in the context of medical or psychological studies. Since the DOMINO Challenge is not a medical study, we do not expect any incidental findings that could harm the health or well-being of participants.

### 4.3.1 General policy

Therefore, if the project team comes across general and “benign” incidental findings in the course of this research project, the assumed policy is not to disclose them to the participants nor to other designated parties.
Possible risks and discomforts of participants during the research project are laid out in the consent form, and generally they are assumed to remain at a tolerable level.

### 4.3.2 Exception to the general policy

However, in order to assume a reasonable standard of care that should be present in all research (also non-medical), there is one exception to the general policy: in the unlikely event that findings are made which indicate _fundamentally harmful/negative economic or social consequences_ for the participant, the consortium will inform the participant of these consequences. This approach is intended to protect the participant as well as the researcher. The procedure to deal with fundamentally harmful findings is laid out below.

1. The consortium leader adelphi will set up a phone call with all project partners within three weeks to discuss the recommendations to be made to this particular participant to avert future damaging consequences.
2. The consortium will get a second expert opinion from the external advisor on ethics issues, Axel Carlberg (http://www.upspring.se/contact.html), within four weeks. Axel Carlberg has extensive experience in ethics analysis in diverse EU projects and has declared his availability for the DOMINO project, if necessary.
3. The consortium will inform the EC about the findings and get feedback on the suggested recommendation after steps one and two have been executed.
4. If the strategy is agreed on, the regional partners will inform the respective participant about the findings and recommendation in their national languages. The first contact on that issue will be made by email, to set up a date for a phone call. If the participant so wishes, one or more follow-up calls can be arranged after a few weeks. The goal of the calls with the participants is to enable them to deal with the consequences in a way that is appropriate and satisfactory for them.
5. The budget for the extra work will be shared equally by the partners.
In case a high number of participants need extra care over a longer period of time, some budget from the dissemination activities might be reallocated to this task (in agreement with the EC).

The general outcomes of the project will, as outlined in the project proposal, be published on the project website and will thus be accessible to all participants. This is to say that they are transparently informed of the research results that the consortium set out to acquire. (Participants might not always keep all the details on the purposes of the DOMINO Challenge readily available in their head.)

# 5 Attachment: Security Benefits of Amazon Web Services (AWS)

Cloud security at AWS is the highest priority. As an AWS customer, Plugwise benefits from a data centre and network architecture built to meet the requirements of the most security-sensitive organisations. We refer to: https://aws.amazon.com/privacy/

An advantage of the AWS cloud is that it allows its customers to scale and innovate while maintaining a secure environment.

## 5.1 Designed for Security

The AWS Cloud infrastructure is operated in AWS data centres and is designed to satisfy the requirements of AWS’ most security-sensitive customers. The AWS infrastructure has been designed to provide high availability while putting strong safeguards in place for customer privacy. All data is stored in highly secure AWS data centres. Network firewalls built into Amazon VPC, and web application firewall capabilities in AWS WAF, allow the creation of private networks and control of access to instances and applications.

_Figure 3_: Global Infrastructure of AWS

## 5.2 Highly automated

AWS builds security tools tailored to its unique environment, size, and global requirements. Building security tools from the ground up allows AWS to automate many of the routine tasks security experts normally spend time on.
This means AWS security experts can spend more time focusing on measures to increase the security of the AWS Cloud environment. AWS customers also automate security engineering and operations functions using a comprehensive set of APIs and tools. Identity management, network security and data protection, and monitoring capabilities can be fully automated and delivered using popular software development methods. Customers can take an automated approach to responding to security issues: when automating and using the AWS services, rather than having people monitor the security position and react to an event, the system can monitor, review, and initiate a response.

## 5.3 Highly available

AWS builds its data centres in multiple geographic Regions. Within the Regions, multiple Availability Zones exist to provide resiliency. AWS designs data centres with excess bandwidth, so that if a major disruption occurs there is sufficient capacity to load-balance traffic and route it to the remaining sites, minimizing the impact on customers. Customers can also leverage this Multi-Region, Multi-AZ strategy to build highly resilient applications at low cost, to easily replicate and back up data, and to deploy global security controls consistently across their business.

## 5.4 Highly accredited

AWS environments are continuously audited, with certifications from accreditation bodies across the globe. This means that segments of compliance work have already been completed. To help Plugwise meet specific government, industry, and company security standards and regulations, AWS provides certification reports that describe how the AWS Cloud infrastructure meets the requirements of an extensive list of global security standards. Customers of AWS inherit many controls operated by AWS into their own compliance and certification programs, and run security assurance efforts in addition to maintaining the controls themselves.
# 6 Attachment: Consent Form

## Brief introduction of the project

DOMINO is an EU-funded research project that intends to prompt energy reductions in households. It uses smart plug technology, where plugs measure the electricity consumption of different appliances, to raise awareness of energy consumption and to change the participants’ behaviour towards energy saving. The project has chosen a playful approach in the form of an energy saving game called “DOMINO Challenge”, which is played in teams of five households each. The challenge takes place in Brussels, Berlin, and Naples and will involve around 4000 households. The project results will allow making assumptions on the overall potential of smart plugs for reducing energy consumption in households and will enable related initiatives to benefit from the DOMINO experience. Including preparation and evaluation, the project extends over a period of almost three years (March 2016 till end of February 2019).

## Incidental findings

If, during the course of the project, the project team comes across incidental findings that would have potential or real fundamentally harmful and negative economic or social consequences for a participant, the project team will disclose these findings to the participant. Benign incidental findings will not be disclosed to participants or third parties.

## Point of contact, if questions on this form arise

Different organisations are responsible for the DOMINO project in the three regions. For any further questions please contact adelphi in Berlin (Lena Domröse; [email protected]), Agenzia Napoletana Energia e Ambiente in Naples (Michele Macaluso; [email protected]), or Brussels Environment in Brussels (Xavier Van Roy; [email protected]). For data protection queries, please contact Plugwise in the Netherlands (Theo Vroege; [email protected]).

_Terms and conditions to which I agree by ticking the box:_

* My participation in the DOMINO Challenge requires the provision of Personal Data.
* I agree to provide personal information, such as:
  * First name, last name
  * Physical address, i.e. street name, street number, city, zip code and country (used for shipment of the smart plug equipment)
  * Email address
* If I am the first player of my team, I agree that my name and address data will be transferred by the smart plug producer Plugwise to its (out-sourced) logistics centre and used by FedEx for shipment.
* I understand that my postal address will be deleted 90 days after the equipment has been sent out.
* The DOMINO app will request and collect personal data on my:
  * Age
  * Gender
  * Password
  * Choice of language
  * Number of people in the household
  * Electricity price
  * Electricity consumption from the past year
  * Type/name of appliances connected to the plugs, except for the joker plug
  * Appliances, i.e. information on energy efficiency class, year of construction and consumption as indicated by the manufacturer
  * Absence periods during the DOMINO Challenge (retrospectively)
* I understand that my personal data will not be given or sold to any third party for commercial or other reasons.
* I understand that the information requested by the app will help the research team in their analysis, and that providing it is not mandatory but recommended.
* I understand that the processing of my personal data relies on my consent, which is necessary to participate in the contest and is required till the end of the contest to have the chance to win a prize.
* I understand that my name can be seen by other members of my team.
* I understand that participation in the DOMINO Challenge is voluntary and can be ended at any time without affecting the lawfulness of processing based on my consent before its withdrawal.
* I agree that my personal data (except for the postal address) will be safely stored until the end of the DOMINO project, i.e. until the end of February 2019. Then it will be deleted.
* I agree that my personal data will be processed by Plugwise and stored in a cloud from Amazon.com, in Ireland.
* I understand that I have the right to request access to my _personal_ data, and the right to rectification, erasure, or restriction of processing, or to object to the processing of personal data.
* I agree that the research team collects and analyses the following _usage_ data:
  * the current and historical consumption of my appliances on a
    * Monthly basis
    * Weekly basis
    * Daily basis
    * Hourly basis
  * scheduling data, meaning data on the schedule chosen by myself to switch appliances on and off
* The collected _usage_ data will be stored anonymously and separately from the personal data in a secure database of Plugwise, and I understand that this data cannot be deleted from the database after the project is over, because it has been anonymised. It will therefore be stored there for an indefinite period.
* If I click on feedback buttons in the tips and alerts section of the app, I agree that the number of my clicks will be collected. I understand that the collection is necessary so that it can be evaluated which team has the most clicks, because this is one way to win the DOMINO Challenge.
* I agree that my anonymised consumption data will be compared to anonymised data of other users and that general outcomes of the project will be published on the project website and will thus be accessible to all participants and the public.
* I understand that my Usage Data will be available outside my local WLAN network when I use the app outside my house and could be accessed by third parties via computer espionage.
* During the DOMINO Challenge, I agree to receive certain messages and notifications via the DOMINO app, for example energy saving tips, reminders to set goals and information on the end of my cycle.
* During my 3-month cycle of DOMINO, I agree to receive monthly emails with information on my monetary and energy savings.
* After my cycle is over, I agree to receive an email asking me to evaluate the challenge and encouraging me to stick to the energy saving habits I may have adopted.
* When I have allowed push notifications from the app, I understand that the app will be running permanently on my phone.
* I have the right to lodge a complaint with my national Data Protection Authority if I have doubts concerning data protection measures.
* I have informed members of my household that my household is part of a research project of the European Commission that deals with energy efficiency of household appliances and collects usage data on appliances that are connected to the smart plugs.
* I assure that other household members have expressed their consent to being part of the DOMINO challenge. Violation of this clause can lead to my exclusion from the DOMINO challenge.
* I understand that the DOMINO challenge only works if the plugs are passed on from one household to the next, and I agree to pass the plugs on to another participating household in due time after my usage cycle has ended. If I am the last player, I will send the plugs back to the responsible project partner in my region within 14 days, using the postage label that will be provided on the website: www.dominoenergy.eu.
* If there is reason to believe that I did not pass on the plugs to another household, I agree to receive a notification email from the project team to remind me of that duty.
* I understand that when the smart plugs are used by the next player, statistical consumption data of my appliances, or data on the type of appliances connected, will no longer be available.
* In case my team belongs to the winners, I agree that usage data will be connected to my personal data (email address) in order to inform me about the award and to send me the prize.
* I understand that there are possible risks and discomforts, as mentioned above (or: in the data management plan), that might occur during the research project.
The risks are: I might not save any energy at all (for example, because I do not (want to) follow the recommendations that are sent to me via the app).

* I understand that I will have to invest some time and effort for setting up the plugs and passing them on to the next person. (It is assumed that setting the system up will take between 10 minutes and 1 hour, depending on the technical expertise of people.)
* I refrain from connecting medical equipment or erotic toys to any of the plugs and from naming them correspondingly, because obtaining data on this sort of appliance is not desired during the project.
* In the unlikely event that incidental findings are made which indicate real or potential fundamentally harmful/negative economic or social consequences for me, I allow the project team to contact me.
* I understand that the project team and their companies are not liable for any damage or harm that might occur, unless it is a consequence of intentional or grossly negligent behaviour.

# 7 Attachment Logistics

### _Fedex Excel file definition_

<table>
<tr> <th> Excel column </th> <th> Description </th> <th> Trans Field # </th> </tr>
<tr> <td> A (*) </td> <td> Unique Identification Number of the line </td> <td> 1 </td> </tr>
<tr> <td> B </td> <td> Recipient Company name </td> <td> 11 </td> </tr>
<tr> <td> C </td> <td> Recipient Contact Name </td> <td> 12 </td> </tr>
<tr> <td> D </td> <td> Address line 1 </td> <td> 13 </td> </tr>
<tr> <td> E </td> <td> Address line 2 </td> <td> 14 </td> </tr>
<tr> <td> F </td> <td> Recipient City </td> <td> 15 </td> </tr>
<tr> <td> G (*) </td> <td> Recipient State code </td> <td> 16 </td> </tr>
<tr> <td> H (*) </td> <td> Recipient Country code or Country name </td> <td> 50 </td> </tr>
<tr> <td> I (*) </td> <td> Recipient Postal code </td> <td> 17 </td> </tr>
<tr> <td> J (*) </td> <td> Recipient Phone number </td> <td> 18 </td> </tr>
<tr> <td> K (*) </td> <td> Recipient E-mail </td> <td> 1202 </td> </tr>
<tr> <td> L </td> <td>
Recipient VAT or Tax number </td> <td> 118 </td> </tr>
<tr> <td> M (*) </td> <td> Shipment Reference </td> <td> 25 </td> </tr>
<tr> <td> N </td> <td> Line numbering (drag down till last line) </td> <td> 38 </td> </tr>
</table>

#### Fedex Excel Template

<table>
<tr> <th> 1 </th> <th> 11 </th> <th> 12 </th> <th> 13 </th> <th> 14 </th> <th> 15 </th> <th> 16 </th> <th> 50 </th> <th> 17 </th> <th> 18 </th> <th> 1202 </th> <th> 118 </th> <th> 25 </th> </tr>
<tr> <td> Bus. Code </td> <td> Company Name </td> <td> Contact Name </td> <td> Address Line 1 </td> <td> Address Line 2 </td> <td> City </td> <td> State Code </td> <td> Country Code </td> <td> Zip </td> <td> Phone </td> <td> E-mail </td> <td> VAT_Tax Nr. </td> <td> Reference </td> </tr>
<tr> <td> Mandatory Unique ID </td> <td> Mandatory if no Contact </td> <td> Mandatory if no Company </td> <td> Mandatory </td> <td> </td> <td> Mandatory </td> <td> Mandatory (e.g. US-CA) </td> <td> Mandatory </td> <td> Mandatory </td> <td> Mandatory </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> Max. length 20 </td> <td> Max. length 35 </td> <td> Max. length 35 </td> <td> Max. length 35 </td> <td> Max. length 35 </td> <td> Max. length 35 </td> <td> 2-letter ID </td> <td> 2-letter ID </td> <td> Max. length 10 </td> <td> Numeric field, max. 15 </td> <td> Max. length 60 </td> <td> Max. length 18 </td> <td> Max. length 35 </td> </tr>
</table>
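To illustrate the column layout above, the following sketch builds and validates recipient rows before they are pasted into the Excel template. This is our own illustrative helper, not part of the Fedex specification: the function names are hypothetical, the CSV intermediate is an assumption, and the length limits are read off the template table (with the two 2-letter code columns taken as length 2).

```python
import csv
import io

# Column order from the Fedex Excel file definition above (Trans Field # in comments).
FEDEX_COLUMNS = [
    "Bus. Code",       # (1)    unique ID of the line
    "Company Name",    # (11)   mandatory if no contact
    "Contact Name",    # (12)   mandatory if no company
    "Address Line 1",  # (13)
    "Address Line 2",  # (14)   optional
    "City",            # (15)
    "State Code",      # (16)   2-letter ID, e.g. "CA" in US-CA
    "Country Code",    # (50)   2-letter ID
    "Zip",             # (17)
    "Phone",           # (18)   numeric
    "E-mail",          # (1202)
    "VAT_Tax Nr.",     # (118)
    "Reference",       # (25)   shipment reference
]
# Max lengths as read off the template row (assumption: 2 for the 2-letter codes).
MAX_LENGTHS = [20, 35, 35, 35, 35, 35, 2, 2, 10, 15, 60, 18, 35]


def validate_row(row):
    """Check a recipient row against the column count and length limits."""
    if len(row) != len(FEDEX_COLUMNS):
        raise ValueError(f"expected {len(FEDEX_COLUMNS)} columns, got {len(row)}")
    for value, limit, name in zip(row, MAX_LENGTHS, FEDEX_COLUMNS):
        if len(value) > limit:
            raise ValueError(f"{name!r} exceeds max length {limit}: {value!r}")
    return row


def write_fedex_csv(rows):
    """Serialise validated rows to CSV text, one line per recipient."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(FEDEX_COLUMNS)
    for row in rows:
        writer.writerow(validate_row(row))
    return buf.getvalue()
```

A row that violates a limit (or has the wrong number of columns) is rejected before it can reach the shipping template.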
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1134_RESCCUE_700174.md
# Introduction

This document is developed as part of the RESCCUE (RESilience to cope with Climate Change in Urban arEas - a multisectorial approach focusing on water) project, which has received funding from the European Union’s Horizon 2020 Research and Innovation program under Grant Agreement number 700174. This document is an updated version of the Data Management Plan (DMP) presented in M6 (D8.2) and corresponds to Deliverable 8.5 of Work Package 8 (WP8) – Project Management. WP8 ensures an optimal coordination and management of RESCCUE, guaranteeing the effective implementation of the project activities. This document is to be used by all partners to handle data efficiently and to make sure that the obligations that RESCCUE has in terms of data are properly fulfilled by all partners at all times. The Data Management Plan (DMP) also ensures an effective implementation of the _Open Research Data Pilot_ initiative of the European Commission and addresses the points set in the DMP template of the EC Guidelines on FAIR Data Management in Horizon 2020 (EC, 2016b). Nevertheless, this has to be balanced with the protection of scientific information, commercialisation and Intellectual Property Rights (IPR). The DMP establishes the data management life cycle for the data to be collected, processed and/or generated, which is to be followed by all partners in the consortium. Moreover, it details what kind of data will be reused and produced, which of the data generated by the RESCCUE project will be open, how these data will be exploited and made accessible (for verification and/or reuse) and, on the contrary, which of the data will be preserved, considering also that this project deals with some cases of Critical Infrastructures (CI) and that Directive 2008/114/EC on CI must be respected (EC, 2008).
# Objectives and methodology

The goal of a DMP is to consider the many aspects of data management, metadata generation, data preservation and analysis, which ensures that data are well-managed in the present and prepared for preservation in the future. On the other hand, it is of key importance to make sure that the research data are findable, accessible, interoperable and re-usable (FAIR). As stated in the Guidelines on FAIR Data Management in Horizon 2020 (EC, 2016b), a DMP must include information on:

* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project)

Consequently, these items are included in this Deliverable 8.5, describing the data management life cycle for the data to be collected, processed and generated by the RESCCUE project. The methodology to produce this DMP is based on the Guidelines on FAIR Data Management in Horizon 2020 (EC, 2016b), the Digital Curation Centre (DCC) online tool DMP Online ( _https://dmponline.dcc.ac.uk_ ) and the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 (EC, 2016a). The document has been divided into the following sections:

3. Research data management and sharing
4. Metadata and FAIR data
5. Protection of critical infrastructures and sensitive information
6. Ethics and Legal Compliance
7. Responsibilities and Resources
8. References

# Research data management and sharing

## Data classification and management

The RESCCUE project deals with the resilience of cities, specifically in the case studies of Barcelona, Bristol and Lisbon, in terms of urban services response in critical situations derived from climate change.
The assessment of the response and interdependencies of the urban services (water services, transport, telecommunication, energy supply or solid waste collection), both for current and future scenarios, is therefore the basis of the research. The expected results involve, among others, the hazard, vulnerability and risk assessment of the urban services operation, including the identification of critical infrastructures. In order to achieve all the project objectives, data is being compiled, methodologies are developed, models are built and, finally, datasets are generated.

### 3.1.1 Output data

Table 1 presents the datasets that will be generated by RESCCUE. For each dataset, the following characteristics are described: brief description of the result; WP where these data are generated; producer/owner of the results; date (project’s month) in which these data are expected; type and data format; estimated size of data; naming conventions to be used; and expected end users.

**Table 1 – Summary of the data generated in RESCCUE**

<table>
<tr> <th> **Description of result** </th> <th> **Associated WP** </th> <th> **Result owner(s)** </th> <th> **Delivery date** </th> <th> **Type and data format** </th> <th> **Estimated data size** </th> <th> **Naming conventions** </th> <th> **End-users** </th> </tr>
<tr> <td> Climate downscaled projections, decadal and seasonal simulations </td> <td> 1 </td> <td> FIC </td> <td> M18 </td> <td> Ascii files with climate data per variable, model, scenario, etc.
</td> <td> -Climate timescale: 500,000 files (30 Gb) -Decadal timescale: 42,000 files (16 Gb) -Seasonal timescale: 500 files (80 Mb) </td> <td> -For zip file: Variable_Model_Scenario_City.zip -For plain text file: Variable_Model_Scenario_StationId.txt </td> <td> Model owners or other climate researchers that will use this information as inputs for their research </td> </tr>
<tr> <td> Extreme climate scenarios </td> <td> 1 </td> <td> FIC, Aquatec </td> <td> M24 </td> <td> KML polygons with climate data per variable, horizon and scenario. </td> <td> 300 files (4 Mb) </td> <td> -For climate/decadal simulations: CITY_variable_threshold_ReturnPeriod_TimePeriod_quantile.kml -For seasonal forecast: CITY_variable_threshold_scenario_criterion_seasonal.kml </td> <td> Model owners or other climate researchers that will use this information as inputs for their research </td> </tr>
<tr> <td> Drought and water quality analysis </td> <td> 2 </td> <td> Cetaqua </td> <td> M36 </td> <td> Excel sheets presenting the water contributions at each reservoir for the future scenarios; Excel sheets presenting the evolution of quality problems in the Llobregat river at the DWTP of Sant Joan Despí for the future scenarios </td> <td> 2 Excel files, 20 Mb (x2) </td> <td> Contributons_Ter_Llobregat_reservoirs.xlsx Water_Quality_Llobregat_SJD.xlsx </td> <td> Water companies and the Catalan Water Agency </td> </tr>
<tr> <td> Urban drainage simulations in Barcelona </td> <td> 2 </td> <td> Aquatec, BCASA </td> <td> M36 </td> <td> Shape files presenting the hydraulic behaviour of the sewer system for the several scenarios (5 different return periods for the current (1) and future (4) scenarios 🡪 TOTAL: 25 simulations); Shape files presenting the pedestrian hazard maps for the several scenarios (5 different return periods for the current (1) and future (4) scenarios 🡪 TOTAL: 25 simulations); Shape files presenting the vehicular hazard maps for the several scenarios (5 different return periods for the current (1) and future (4) scenarios 🡪 TOTAL: 25 simulations) </td> <td> 75 shape files of data (of 250 Mb each) TOTAL: 18.75 Gb </td> <td> Name_of_city_Hazard_target_Return_period_Scenario_Time_period </td> <td> Other researchers, all the stakeholders that might have flooded assets and the general population of the city </td> </tr>
<tr> <td> Assessment of marine model impacts </td> <td> 2 </td> <td> Aquatec </td> <td> M36 </td> <td> MOHID files and Ascii data (time series simulations of E. coli concentration in the Barcelona bathing water) for current (1) and future scenarios (2) </td> <td> Ascii files of concentration distribution every hour for 10 years of continuous simulations (7 GB). A total of 30 years of simulations between baseline and future scenarios (TOTAL = 21 GB).
</td> <td> Name_of_city_Scenario_Time_period </td> <td> Waste water operators, public administrations and the general population </td> </tr>
<tr> <td> Assessment of bursting pipes impacts in Barcelona </td> <td> 2 </td> <td> Aquatec, AB </td> <td> M36 </td> <td> Shape files presenting the hazard maps for several scenarios </td> <td> 2 shape files of data (of 250 Mb each) 🡪 TOTAL: 500 Mb </td> <td> Name_of_city_Scenario_Time_period </td> <td> Water companies and other stakeholders that might have flooded assets </td> </tr>
<tr> <td> Simulations of the electric model in Barcelona </td> <td> 2 </td> <td> IREC, Endesa </td> <td> M36 </td> <td> Map images and data in table format presenting the impacts </td> <td> 4 cases with 3 PR for 3 scenarios (36 figures), ~150 Mb </td> <td> Simulation_City_Sector_Scenario_PR_case </td> <td> All the stakeholders that have critical infrastructures depending on the electric network </td> </tr>
<tr> <td> Simulation of hazards on the traffic model </td> <td> 2 </td> <td> Barcelona CC </td> <td> M36 </td> <td> Maps and ascii information presenting the impacts </td> <td> 25 shape files of data (of 250 Mb each) 🡪 TOTAL: 6.25 Gb </td> <td> Name_of_city_Hazard_target_Return_period_Scenario_Time_period </td> <td> Local police and other public administrations </td> </tr>
<tr> <td> Urban drainage simulations in Lisbon </td> <td> 2 </td> <td> Hidra and CML </td> <td> M36 </td> <td> 1. Lisbon citywide drainage system (1D GIS based model): image files presenting sewer capacity for 4 return periods of the current situation (results taken from the Lisbon Drainage Master Plan 2016-2030) 2. Alcântara drainage system (1D SWMM model): image files presenting main hydraulic variables (flow capacity, flow rate and velocity) for 5 return periods of the current situation and 3 return periods for 2 future scenarios (most severe and most probable) 3. Lisbon downtown catchments J and L (1D/2D combined model SWMM+BASEMENT): raster files presenting water depths for 5 return periods of the current situation and 3 return periods for 2 future scenarios (most severe and most probable) </td> <td> 1. 4 png files (1 Mb each) → TOTAL: 4 Mb 2. 3x11 png files (2 Mb each) → TOTAL: 66 Mb 3. 11 raster files (1.5 Mb each) → TOTAL: 16.5 Mb </td> <td> NameOfTheCity_Model_Scenario_ReturnPeriod </td> <td> Other researchers, all the stakeholders that might have flooded assets and the general population of the city </td> </tr>
<tr> <td> Simulations of the energy distribution model in Lisbon </td> <td> 2 </td> <td> EDP </td> <td> M36 </td> <td> DXF files presenting the simulation of the impact on electrical infrastructure for Lisbon using the information of the urban drainage models. 6 scenarios: i) Lisbon low/medium/high voltage grid at normal configuration (2 simulations); ii) Lisbon municipality citywide drainage (2 simulations); iii) Lisbon municipality estuary water (1 simulation); iv) Lisbon downtown catchments J and L (7 simulations); iv-1) 1 primary substation out of service (4 simulations); iv-2) 9 secondary substations out of service (14 simulations) </td> <td> 30 files of data. TOTAL: 6 Mb </td> <td> City, low/medium/high voltage grid, urban drainage model approaches and electrical infrastructure results; contingency plan for the different scenarios of simulations iv), iv-1) and iv-2) </td> <td> All the stakeholders that have critical infrastructures depending on the electric network </td> </tr>
<tr> <td> Urban drainage simulations in Bristol </td> <td> 2 </td> <td> BCC </td> <td> M36 </td> <td> Shape files presenting the depth, extent, hazard and velocity maps for the several scenarios (5 different return periods (RP) per catchment (7) – TOTAL: 35 simulations). </td> <td> 8 Mb per catchment per RP. 7 catchments = 56 Mb per RP. TOTAL: 5 RP x 56 x 3 = 0.85 Gb.
</td> <td> Catchment name_Mreturn period </td> <td> Other researchers, all the stakeholders that might have flooded assets and the general population of the city </td> </tr>
<tr> <td> Tidal and Fluvial Flooding simulations in Bristol - Central Area Flood Risk Assessment (CAFRA) </td> <td> 2 </td> <td> BCC </td> <td> M36 </td> <td> Shape files presenting the depth, extent and hazard maps for the several scenarios (32 RP (mixed combination events), 4 epochs, 3 emissions scenarios, with existing defences. TOTAL – 24 simulations (however equivalency runs exist, for instance current Flood Zone 2 is equivalent to future Flood Zone 3, so some represent 2 scenarios)) </td> <td> Varying sizes of ASCII grid and DAT file format. 400 Mb per run x 24 = 9.6 Gb </td> <td> CAFRA_Versions_RP_Fluvialelement_Tidalelement_EpochEmmissions_DepthMaximumoutputs_Gridsize </td> <td> Other researchers, all the stakeholders that might have flooded assets and the general population of the city </td> </tr>
<tr> <td> Tidal and Fluvial Flooding simulations in Bristol - Avonmouth Strategic Flood Risk Assessment </td> <td> 2 </td> <td> BCC </td> <td> M36 </td> <td> Shape files presenting the hazard maps for the several scenarios (5 RP per epoch, 3 epochs, plus various wave and surge components and different failure scenarios. Current defended/undefended/blockage. 2073 defended/undefended/breach. 2110 defended/undefended/breach. TOTAL – 118 simulations) </td> <td> Varying sizes of ASCII and dat files. TOTAL = 60 Gb </td> <td> AVM_gridsize_RPtidal_RPfluvial_epoch_withdefences_blockage_breach_modeldesignruns </td> <td> Other researchers, all the stakeholders that might have flooded assets and the general population of the city </td> </tr>
<tr> <td> Integrated flooding – traffic simulations in Bristol </td> <td> 2 </td> <td> Uni Exeter </td> <td> M36 </td> <td> Shape files presenting the impacts for all the simulated events. XML files outputted via micro-simulation traffic. CSVs for graphical analysis </td> <td> Approximately 2-5 GB per flood event simulation </td> <td> NameOfCity_EventType_ReturnPeriod_AdditionalInfo.FileType </td> <td> Local police and other public administrations </td> </tr>
<tr> <td> Integrated flood and waste sectorial model </td> <td> 2 </td> <td> Cetaqua </td> <td> M26 </td> <td> Shapefiles with the location of potentially unstable containers in Barcelona for the different scenarios </td> <td> 1.5 GB </td> <td> For the actual event (to validate the model): Containers_0_31_07_2011; Containers_50_31_07_2011; Containers_100_31_07_2011. For the design storms: Containers_0_T2; Containers_0_T5; Containers_0_T10; Containers_50_T2; Containers_50_T5; Containers_50_T10; Containers_100_T2; Containers_100_T5; Containers_100_T10 </td> <td> Model owners or other researchers that will use this information as inputs for their research. Also the Barcelona City Council will use this model to prevent containers’ instabilities </td> </tr>
<tr> <td> Flood impact assessment in the energy sector </td> <td> 3 </td> <td> IREC </td> <td> M36 </td> <td> Map images and data in table format presenting the impacts </td> <td> 5 PR for 3 scenarios (15 figures + optimization table), ~200 Mb </td> <td> Hazards_City_Sector_Scenario_PR_case </td> <td> All the stakeholders that have critical infrastructures depending on the electric network </td> </tr>
<tr> <td> Flood direct damage assessments </td> <td> 3 </td> <td> Uni Exeter, Cetaqua, Aquatec </td> <td> M36 </td> <td> 1. Depth damage curves and shape files with the impacts for all the scenarios. KML/KMZ files for use in Google Earth 2. CSVs for graphical analysis </td> <td> 1. Approximately 2-5 GB per flood event simulation 2. Excel file (xls) 5 Mb (curves); shape files 100 Mb (damage maps) </td> <td> NameOfCity_EventType_ReturnPeriod_AdditionalInfo.FileType T1_damages.shp T5_damages.shp T10_damages.shp T100_damages.shp T500_damage.shp </td> <td> Other researchers, public administrations and insurance companies </td> </tr>
<tr> <td> Flood indirect damage assessments </td> <td> 3 </td> <td> Cetaqua </td> <td> M36 </td> <td> Ascii files presenting the impacts for all the simulated events </td> <td> Ascii files 10 Mb (not definitive) </td> <td> Flood_indirect_damage (not definitive) </td> <td> Other researchers, public administrations and insurance companies </td> </tr>
<tr> <td> Assessment of transport indirect damages </td> <td> 3 </td> <td> Cetaqua, Uni Exeter </td> <td> M36 </td> <td> 1. Shape files presenting the impacts for all the simulated events. XML files outputted via micro-simulation traffic 2. CSVs for graphical analysis </td> <td> 1. Approximately 2-5 GB per flood event simulation 2. Ascii files 10 Mb (not definitive) </td> <td> NameOfCity_SubArea_EventType_ReturnPeriod_AdditionalInfo.fileType Transport_indirect_damage (not definitive) </td> <td> Other researchers, public administrations and insurance companies </td> </tr>
<tr> <td> Adaptation measures and strategies database </td> <td> 5 </td> <td> Cetaqua </td> <td> M18 </td> <td> Database containing all the strategies compiled </td> <td> Not required (in cloud) </td> <td> https://resccue.herokuapp.com </td> <td> Other researchers, service operators and public administrations </td> </tr>
</table>

As explained later in section 4, the data generated in the RESCCUE project (i.e. the datasets summarized in Table 1) will be made publicly available and discoverable by publishing the metadata in the INSPIRE portal and uploading the datasets to Zenodo. More details can be found in that section.
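Naming conventions like Name_of_city_Hazard_target_Return_period_Scenario_Time_period in Table 1 can be enforced programmatically. A minimal sketch (the function names are ours, not part of the DMP; it assumes that individual field values use no underscore, since underscore is the field separator):

```python
def build_name(city, hazard, return_period, scenario, period, ext="shp"):
    """Join the fields with '_' as in the Table 1 convention."""
    fields = [city, hazard, return_period, scenario, period]
    for f in fields:
        if "_" in f:
            # Underscore is reserved as the separator between fields.
            raise ValueError(f"field may not contain '_': {f!r}")
    return "_".join(fields) + "." + ext


def parse_name(name):
    """Split a convention-following file name back into its fields."""
    stem, _, ext = name.rpartition(".")
    city, hazard, return_period, scenario, period = stem.split("_")
    return {"city": city, "hazard": hazard, "return_period": return_period,
            "scenario": scenario, "period": period, "ext": ext}
```

For example, `build_name("Barcelona", "Pedestrian", "T10", "RCP8.5", "2100")` yields `Barcelona_Pedestrian_T10_RCP8.5_2100.shp`, and `parse_name` recovers the fields, which makes automated checks of dataset names straightforward.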
### 3.1.2 Input data

In addition to the information presented in Table 1, it is also important to show the information that the RESCCUE project reuses to generate its outputs. Table 2 gives a brief overview of the data needed (reused) in each WP and the source(s) used to obtain them. The data necessary to develop the RESCCUE project is being collected by the responsible partner and the contributors involved in each task. In general, though, primary data is being collected at the case studies (Barcelona, Lisbon and Bristol) by the partners responsible for each case study (Aquatec, LNEC and University of Exeter, respectively). Regarding the data used or reused from other sources, some of this information is private (it belongs to some of the project stakeholders) or has been purchased for use in the project (as some of the climate information). Therefore, the RESCCUE partners are not allowed to share it, but only to use it to generate the outputs.

**Table 2 – Summary of input data used in the RESCCUE project classified per Work Package**

<table>
<tr> <th> **WORK PACKAGE** </th> <th> **INPUTS** </th> </tr>
<tr> <th> **Type of data** </th> <th> **Source(s)** </th> </tr>
<tr> <td> **WP1-Climate Change Scenarios** </td> <td> Climatic data/models </td> <td> -Public data from: PCMDI, GHCN-daily, ISH/ISD -AEMet -IPMA -Met Office </td> </tr>
<tr> <td> Future climate scenarios </td> <td> Public data from IPCC </td> </tr>
<tr> <td> **WP2-Hazard Assessment for Urban Services Operation** </td> <td> Field data, sensor data and physical data from the several sectorial models and studies implemented in the 3 research sites </td> <td> -Public data -Know-how and networks data from Barcelona CC (BCASA), CML, Bristol CC, Endesa, EDP and Wessex Water </td> </tr>
<tr> <td> **WP3-Vulnerability & Risk ** **Assessment for Urban Services Operation** </td> <td> Data quantifying the impacts of identified hazards in urban areas </td> <td> -Public data on impacts for the three research sites -Know-how (such as damage curves or
other methodologies) from UNEXE, Aquatec, CETaqua and LNEC </td> </tr>
<tr> <td> **WP4-Integration in a software tool** </td> <td> Information on the location of infrastructures and services from the three research sites, as well as their interdependencies, redundancies and other key parameters </td> <td> -City councils -Urban services providers such as water utilities, electricity providers, waste management services, etc. </td> </tr>
<tr> <td> **WP5-Resilience and adaptation strategies ready for market uptake** </td> <td> Resilience strategies </td> <td> -Public information coming from previous research projects (such as CORFU, PREPARED, RESIN, BRIGAID – see Annex 2 of D5.1) -Expert knowledge from RESCCUE partners to complete the methodology proposed </td> </tr>
<tr> <td> **WP6-Validation Platform & First Applications ** </td> <td> Resilience studies undertaken by third parties </td> <td> -Reports from C40, 100RC, UN-Habitat and others </td> </tr>
<tr> <td> **WP7-Dissemination & Exploitation ** </td> <td> Information on key stakeholders and personal data </td> <td> -Information on the key stakeholders that have been identified for the project -Personal data of the attendees of RESCCUE workshops and other public events, complying with the GDPR regulations </td> </tr>
<tr> <td> **WP8-Project Management** </td> <td> Personal data </td> <td> -Personal data of the attendees of RESCCUE PCM meetings, complying with the GDPR regulations </td> </tr>
</table>

### 3.1.3 Data quality

All data generated and collected in RESCCUE undergo a quality check in order to analyse their individual plausibility and consistency. Collection and generation of climate and service related data follow established standards such as the INSPIRE implementing rules, metadata and data specifications; OpenMI; WaterML; GML; and OGC geospatial data services. When needed, additional measures are taken in order to ensure the quality of the data.
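A plausibility and consistency screen of the kind described above can be sketched as follows. This is an illustrative sketch only: the thresholds (a physical validity range and a z-score cut-off) are our assumptions, not the project's actual quality criteria.

```python
from statistics import mean, stdev


def quality_check(series, valid_range=(-60.0, 60.0), z_thresh=4.0):
    """Flag values outside a physical range, or far from the sample mean.

    Returns a list of (index, value, reason) tuples; an empty list means
    the series passed this simple screen.
    """
    mu, sigma = mean(series), stdev(series)
    flags = []
    for i, v in enumerate(series):
        if not valid_range[0] <= v <= valid_range[1]:
            flags.append((i, v, "out of physical range"))
        elif sigma > 0 and abs(v - mu) / sigma > z_thresh:
            flags.append((i, v, "statistical outlier"))
    return flags
```

For example, a daily temperature series containing a spurious value of 999.0 would be flagged as out of physical range, while plausible values pass untouched; flagged points can then be inspected or removed before homogenisation.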
As an example, in “D1.1 - Data collection and quality control report. Summary of studies on climate variables at the research cities”, a thorough analysis of all the data compiled was undertaken, ensuring consistency, removing outliers and homogenising the information. Open, standardised and interchangeable formats are used whenever possible and adequate to ensure the long-term usability of data. Proprietary software-specific data formats are avoided, except for those that are widely spread and openly documented, or that relate to software platforms used by the project teams.

## Data sharing

### 3.2.1 Open access to peer-reviewed scientific publications

RESCCUE research partners publish scientific publications including project results in Open Access. Open access can be defined as the practice of providing online access to scientific information that is free of charge to the end-user. To meet this requirement, beneficiaries ensure that these publications can be read online, downloaded and printed (free of charge, with online access for any user). The links to abstracts of research articles published in scientific journals are also available on the project website (www.resccue.eu). The open access to publications procedure comprises 3 steps:

1. Selecting the open access route (green or gold open access)
2. Providing open access to publications
3. Depositing the data in repositories (online archives) in order to allow for replicability of the results

Thus, all scientific publications generated by the RESCCUE project will be made available both online through open access in peer-reviewed scientific journals and at the RESCCUE web page.

### 3.2.2 Open access to research data

RESCCUE is part of the EC’s Open Research Data Pilot initiative. Within the framework of Horizon 2020, the Open Research Data Pilot aims to improve and maximise access to, and re-use of, research data generated by projects. The Open Research Data Pilot applies to two types of data: 1.
Data, including metadata, needed to validate the results presented in scientific publications (published in scholarly journals); 2. Other data (e.g. curated data not directly attributable to a publication, or raw data), including associated metadata.

The research data that will be produced, as can be seen in Table 1, will be of interest to other researchers, public administrations, service operators and other stakeholders, as well as the general population. In order to allow for replicability of research results, the information generated (the final results as specified in Table 1) will be made available so that others can reproduce the methodologies used in RESCCUE. Accordingly, the data generated in the project will be available in a research data repository, so that it will be possible to access, mine, exploit, reproduce and disseminate it, free of charge, for any user. Possible repositories for these data are the Registry of Research Data Repositories (www.re3data.org) or Zenodo (zenodo.org). After an analysis of both, **Zenodo** has been selected as the data repository to be used in RESCCUE. According to the Exploitation Plan (Deliverable 7.4), the exploitation of the results generated by the RESCCUE project must be ensured up to four years after the project end. Therefore, both the Zenodo repository and the RESCCUE website (including the deliverables and scientific publications) will be completely operative at least until May 2024. Additionally, all the Gold Open Access scientific publications will be available for an unlimited period after the end of the project.

# Metadata and FAIR data

The Guidelines on FAIR Data Management in Horizon 2020 (EC, 2016b) clearly state that making research data findable, accessible, interoperable and re-usable (FAIR) is one of the main roles of the DMP.
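The Zenodo deposit workflow described in section 3.2.2 can be sketched as follows. The metadata field names follow Zenodo's deposition API; the HTTP steps are shown as comments and would require an access token. Treat the whole flow as an illustrative assumption, not as the project's actual tooling.

```python
def zenodo_metadata(title, description, creators, keywords):
    """Build the JSON metadata body for a Zenodo dataset deposition.

    Always prepends the "RESCCUE" keyword so project datasets stay trackable.
    """
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": [{"name": n} for n in creators],
            "keywords": ["RESCCUE"] + [k for k in keywords if k != "RESCCUE"],
            "access_right": "open",
        }
    }


# Typical flow against the Zenodo REST API (needs `requests` and a token):
#   r = requests.post("https://zenodo.org/api/deposit/depositions",
#                     params={"access_token": TOKEN}, json={})
#   bucket = r.json()["links"]["bucket"]
#   requests.put(f"{bucket}/dataset.zip", data=open("dataset.zip", "rb"),
#                params={"access_token": TOKEN})
#   requests.put(r.json()["links"]["self"], params={"access_token": TOKEN},
#                json=zenodo_metadata(...))
#   requests.post(r.json()["links"]["publish"], params={"access_token": TOKEN})
```

Keeping the metadata builder as a pure function makes it easy to review record contents (keywords, creators, access rights) before anything is actually published.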
## Making data findable, including provisions for metadata

In order to make data findable, the main tool is to ensure that the data used and produced in the project are discoverable through **metadata**. Since there are several ISO metadata standards produced by ISO committees, including ISO 19115 (Geographic information — Metadata) and ISO 19119 (Geographic information — Services), the RESCCUE consortium will take advantage of the schemas already defined to define its metadata. Therefore, common criteria will be followed for all the RESCCUE-generated data while following the requirements of one of these standards. In addition, the European INSPIRE Directive (2007/2/EC) aims to create a European Union (EU) spatial data infrastructure. This Directive requests that Member States shall ensure that metadata are created for spatial data sets, and that those metadata are kept up to date. In order to do so, INSPIRE created an online portal called the “INSPIRE GeoPortal” ( _http://inspiregeoportal.ec.europa.eu/_ ) that can be used to store and search for metadata. This portal will be used in RESCCUE in order to ensure that the project data will be findable. Given that data will be linked to each of the research sites (Barcelona, Bristol and Lisbon), to a certain service and to a timeframe, naming conventions are established in order to clearly identify each dataset by its name. All these can be seen in Table 1, together with information corresponding to the several datasets generated in the project. Taking advantage of those naming conventions, a set of keywords will be defined for each dataset, in order to ease the search of the metadata. These keywords will be defined in accordance with the terminology defined in the glossary of the RESCCUE project. All these metadata will be stored in the “INSPIRE GeoPortal”, making sure that all the RESCCUE partners follow the same criteria.
In addition, the keyword “RESCCUE” will always be included in order to easily track the project results. An internal guide on how to generate the metadata of the RESCCUE results will be prepared and circulated to all the RESCCUE partners.

## Making data openly accessible

The A in FAIR stands for accessible, which is precisely the main goal of the Open Research Data pilot that was presented in section 3.2.2. As stated there, the research data generated by the RESCCUE project will be shared in the Zenodo repository. This is of special relevance for the data used to publish results, in order to ensure replicability of research results. Only in the cases in which the key stakeholders do not give permission to disclose the results (when the vulnerabilities of the networks that they manage are being presented) will the data not be uploaded to Zenodo. In the remaining cases, the RESCCUE results will be found there.

## Making data interoperable

Making data interoperable means that data exchange and re-use between researchers, institutions, organisations, countries, etc. should be possible. The main goal of all this is to facilitate the re-combination of the data produced with different datasets from other origins. In order to do this, the use of standard formats and of available (open) software applications is promoted. In RESCCUE, the main pathway to make data interoperable is to include the metadata in the INSPIRE GeoPortal as presented before, as well as to upload the datasets to the Zenodo repository, so that other researchers are able to use this information with different software applications (whether open or not). Finally, a terminology glossary has been prepared, using the most common ontologies available from each of the fields that RESCCUE is dealing with. It can be found in D5.1, including all the definitions that are of interest for the project; thus, the results obtained will be easily understood and therefore re-used by others.
## Increase data re-use

As explained before, data will be licensed to permit the widest re-use possible, when no limitations are identified by the key stakeholders. All data generated and collected in RESCCUE (see Table 1) undergoes a quality check in order to analyse its individual plausibility and consistency, making sure that others can directly use it for their assessments and validate the research done by the RESCCUE team. As in some cases similar results will be generated for different case studies, data harmonisation will also be of critical importance, both for increasing data re-use in general and for easing the comparison of RESCCUE results across the three research sites. In this sense, an Exploitation Plan (Deliverable 7.4) has been developed to ensure the use (and re-use) of project results (and data). In this deliverable, it is ensured that the results of the RESCCUE project will be exploited for at least four years after the end of the project. More details about some of the RESCCUE detailed business plans can also be seen in Deliverable 7.3.

# Data security

## Data confidentiality

Due to the topics addressed in the RESCCUE project, some of the results and datasets generated could potentially be used for malicious purposes, that is, to cause a complete collapse of a city through the failure of its main urban services. Accordingly, apart from detailing what key data the project will reuse and generate, and how it will be exploited or made accessible (as explained in “D7.7 Dissemination and Exploitation Plan”), in the context of the RESCCUE project it is very important to determine which of these data need to be protected and how this will be done. As stated in the guidance document “Guidelines for the classification of research results” (European Commission, 2015), the results of a project must be classified if their unauthorized disclosure could adversely impact the interests of the EU or of one (or more) of its Member States. For example,
some of the information produced by a project could potentially be used to plan terrorist attacks or to avoid detection of criminal activities. However, although RESCCUE deals with critical infrastructures and utilities research (e.g. buildings and urban areas; energy, water, transport and communications networks; supply chains; financial infrastructures, etc.), the results on criticalities or vulnerabilities will not reach a level of detail that would imply a risk. Therefore, there is no need to protect the project results with the category of EU CONFIDENTIAL. Nevertheless, in order to protect the results concerning critical infrastructures at the case study areas, two measures have been taken. The first measure is already covered in the Consortium Agreement, where all the project partners have agreed not to use confidential information otherwise than for the purpose for which it was disclosed, that is, the development of the tasks of the RESCCUE project. The second measure is to set the dissemination level of several deliverables as **confidential**. The confidential deliverables are listed below:

* D2.2: Multi-hazards assessment related to water cycle extreme events for current scenario
* D2.3: Multi-hazards assessment related to water cycle extreme events for future scenarios
* D3.4: Impact assessments of multiple hazards in case study areas
* D4.1: Report from HAZUR® implementation in each city
* D4.2: City RESILIENCE Assessment software (HAZUR® Assessment)
* D4.3: City RESILIENCE Management software (HAZUR® Manager)
* D5.3: Functional design of a resilience assessment operational module

Nevertheless, in order not to limit the exploitation of the project results, a public version of deliverables 2.2, 2.3, 3.4 and 4.1 will be developed and thus made available on the RESCCUE web page ( _www.resccue.eu_ ).
These public versions will be, respectively, 2.4, 2.5, 3.5 and 4.4, each being a condensed version of the original deliverable that omits the information or results affecting critical infrastructures.

## Protection of sensitive information

As presented earlier, some of the assessments being done in the RESCCUE project deal with Critical Infrastructures (CI). Directive 2008/114/EC (EC, 2008) defines “critical infrastructure” as “an asset, system or part thereof located in Member States which is essential for the maintenance of vital societal functions, health, safety, security, economic or social well-being of people, and the disruption or destruction of which would have a significant impact in a Member State as a result of the failure to maintain those functions”. It should be noted, though, that the RESCCUE project works at a city-level scope and therefore does not go into much detail about the specific critical infrastructures and other sensitive information used. Consequently, although information about CI is used, no sensitive data regarding these infrastructures is disclosed and therefore no security measures need to be taken in this sense (unless, as mentioned earlier, the key stakeholders specifically request them). In Spain, the CNPIC (Centro Nacional para la Protección de Infraestructuras Críticas) is the centre that manages CI; it reports to the Ministry of the Interior. Although, as explained before, they have not yet identified all the CI from all the fields, they are using this principle in order to identify them: “the infrastructures that have been considered as critical for now are the ones that might have impacts on the whole city (e.g. Barcelona). Therefore, localized impacts, even severe ones, are not considered critical for now”.
Finally, since the Hazur tool will be compiling information from several networks, containing several CI, general recommendations are proposed for the use of the Hazur tool and the presentation of final results:

* Represent each CI by its "zone of influence" rather than a single point, so as not to reveal its location
* Completely anonymize the data of CI so that no spatial attributes are stored within HAZUR
* In order to establish interdependencies, use generalized nomenclature, e.g. "Exchange box #12", so that the infrastructures affected by this CI know that it is an exchange box but do not know anything else.

## Systems security

Sensitive or confidential information that will not be made publicly available will be stored in at least two systems: Hazur and Basecamp. Consequently, the security characteristics of both systems were assessed to make sure that the minimum standards were reached. Hazur is hosted on classical servers in an OVH data center with maximum physical security (servers can only be physically accessed by authorized employees, access restricted by a security badge control system, video surveillance and security personnel on-site 24/7, rooms fitted with smoke detection systems, and technicians on site 24/7) and a high-availability infrastructure (systematic double power supply and generators with an initial autonomy of 48 hours). The servers have the following characteristics:

<table> <tr> <th> **Virtualization** </th> <th> 64-bit OpenVZ </th> </tr> <tr> <td> **SLA** </td> <td> 99.98%, reboot in 10 mins in the event of hardware failure </td> </tr> <tr> <td> **Scalability** </td> <td> Upgrade whenever you want from our control panel. No need to transfer our data nor to reinstall our VPS. </td> </tr> <tr> <td> **Anti-DDoS** </td> <td> Included </td> </tr> <tr> <td> **IP** </td> <td> 1 IPv4 and 1 IPv6 included (all ports open) </td> </tr> <tr> <td> **Management** </td> <td> Web Control Panel, RESTful API, KVM, root access </td> </tr> <tr> <td> **Reboot and reinstallation** </td> <td> Unlimited, at any time via the Control Panel </td> </tr> <tr> <td> **Monitoring** </td> <td> Detailed monitoring and key performance indicators </td> </tr> <tr> <td> **Backup** </td> <td> Once a week </td> </tr> </table>

OptiCits is considering different options to upgrade the software hosting in order to increase both the software performance and the data access security. Regarding Basecamp, they guarantee the security and confidentiality of the information stored there by using encrypted protocols via HTTPS. Whenever data is in transit, everything is encrypted and sent using HTTPS. Uploaded files are encrypted at rest, and backups of data are encrypted using GPG. Additionally, all data is written to multiple disks instantly, backed up daily, and stored in multiple locations. Uploaded files are stored on servers that use modern techniques to remove bottlenecks and points of failure. The servers operate at full redundancy, and the systems are engineered to stay up even if multiple servers fail. Their state-of-the-art servers are protected by biometric locks and round-the-clock interior and exterior surveillance monitoring. Only authorized personnel have access to the data center; 24/7/365 onsite staff provide additional protection against unauthorized entry and security breaches. Their software infrastructure is updated regularly with the latest security patches, and their products run on a dedicated network which is locked down with firewalls and carefully monitored. While perfect security is a moving target, they work with security researchers to keep up with the state of the art in web security.
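The CI-anonymisation recommendations listed earlier (a "zone of influence" instead of an exact point, plus generalized labels) can be sketched as follows; the grid-cell size and label format are illustrative assumptions, not the actual HAZUR implementation:

```python
# Illustrative sketch: anonymise a critical-infrastructure record by
# replacing its exact coordinates with a coarse grid cell ("zone of
# influence") and a generalized label. Cell size is an assumption.

def zone_of_influence(lat: float, lon: float, cell_deg: float = 0.05) -> tuple:
    """Return the index of the grid cell containing the point, not the point."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def anonymise_ci(kind: str, index: int, lat: float, lon: float) -> dict:
    """Keep only the asset type ("Exchange box #12" style) and its zone."""
    return {"label": f"{kind} #{index}", "zone": zone_of_influence(lat, lon)}

asset = anonymise_ci("Exchange box", 12, 41.3874, 2.1686)
# The stored record no longer contains the exact location of the asset.
```

Interdependencies can then be established between such records without disclosing where any individual infrastructure is located.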
# Ethics and Legal Compliance

Ethics is taken into account in the way data is stored, regarding who can see/use it and how long it is kept. The consent for data preservation and sharing obtained from data producers, or data owners, is strictly honoured according to the applicable license rules. The identity of external participants is protected through anonymization, when and if applicable. The terms of use, curation and sharing of all datasets made available in the scope of RESCCUE, by data producers and data owners, are established in formal consent agreements. The formal consent agreements state who will own the copyright on the data to be collected or created, along with the license(s) for its use and reuse. When applicable, permissions to reuse third-party data and any restrictions needed on data sharing will also be referred to in the associated metadata. However, ethics should not only be taken into account in the way data is stored and shared, but also regarding many other issues, as all consortium members are subject to the EU Directive on Data Protection and its national transpositions. Therefore, there are several aspects to be considered, such as details on what type of personal data is being collected; details on data transfers to non-EU countries; or examples of the Information Sheets and Consent Forms to be used. According to the recent General Data Protection Regulation (GDPR) (EU) 2016/679, entering into application on the 25th of May 2018, there is one set of data protection rules for all companies operating in the EU, wherever they are based. Specifically, the regulation contains provisions and requirements pertaining to the processing of personally identifiable information of individuals (formally called data subjects in the GDPR) inside the European Union, and applies to all enterprises, regardless of location, that are doing business with the European Economic Area.
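One common way to satisfy the GDPR's pseudonymisation-by-default requirement is a keyed hash over the identifying fields, with the key stored separately from the data. The sketch below is purely illustrative and not RESCCUE's actual implementation; the key handling and field names are assumptions:

```python
# Illustrative pseudonymisation sketch (not the project's actual system):
# identifying fields are replaced by a keyed hash; without the separately
# stored key, the record alone cannot identify the data subject.
import hashlib
import hmac

SECRET_KEY = b"held-separately-from-the-data"  # assumption: kept apart from records

def pseudonymise(record: dict, id_fields=("name", "email")) -> dict:
    """Replace identifying fields with truncated HMAC-SHA256 digests."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, out[field].encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out

row = pseudonymise({"name": "A. Partner", "email": "a@example.org", "role": "WP leader"})
```

Non-identifying attributes (here, the role) stay usable for analysis while the stored record no longer reveals who the subject is.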
Business processes that handle personal data must be built with data protection by design and by default, meaning that personal data must be stored using pseudonymisation or full anonymisation, and must use the highest-possible privacy settings by default, so that the data is not available publicly without explicit consent and cannot be used to identify a subject without additional information stored separately. No personal data may be processed unless this is done under a lawful basis specified by the regulation, or unless the data controller or processor has received explicit, opt-in consent from the data subject. Taking into consideration all these requirements, several measures have been adopted in the framework of the RESCCUE project regarding personal data protection. The project manager (Aquatec) is the only party authorized to collect and process personal data. Due to this role, Aquatec must clearly disclose any data collection, declare the lawful basis and purpose for data processing, state how long data is retained, and state whether it is shared with any third parties or outside of the EU. To meet these requirements, Aquatec has prepared a consent form to be disclosed each time personal data is collected, for example, at the registration form of periodic project meetings or other project events involving people from the RESCCUE consortium and external attendees. This consent form informs that:

* Aquatec is responsible for collecting and using personal data of project partners and other attendees at RESCCUE events
* Personal data include: name, ID number, contact numbers, mailing and email addresses, photographic or video images
* Personal data will be disclosed only to the European Commission and only when necessary (i.e. justification of partners’ contributions, travel expenditures, etc.)
* Aquatec will keep data securely according to the GDPR
* Aquatec will not disclose “sensitive personal data” as defined in the GDPR ( _i.e._ data concerning health, such as alimentary intolerances) or address, telephone or email details without your explicit consent, unless the disclosure is strictly necessary to protect your vital interests
* Aquatec will keep personal data for a period of 4 years after the project end; after that period, personal data will be destroyed.

A model of the consent form to be used before collecting this data is included in the Annex of this deliverable and will be applicable from the 25th of May 2018 at each event or procedure collecting such personal data.

# Responsibilities and Resources

Aquatec (as Project Coordinator), as well as the leaders of WP1 to WP6 (where most of the project data is produced), are responsible for ensuring that the DMP is duly reviewed and revised in a timely manner. However, it is important to highlight that all beneficiaries must implement the DMP. The data management activities within each WP should be assured by the person responsible for data management at WP level. RESCCUE has three research sites that will be studied across several project activities and WPs. The coordination of data management is to be done by the WP leader and the people responsible for each research site: Cetaqua, FIC, Opticits, UNEXE, LNEC and Aquatec. However, as presented in Table 1, the owners of each dataset are clearly identified and they are precisely the ones responsible for the data management activities. So, after generating each dataset, the partner that owns it should be in charge of uploading the information to a repository and making the dataset findable by generating the corresponding metadata.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1135_I-REACT_700256.md
1 INTRODUCTION

# 1.1 PURPOSE OF THE DOCUMENT

This document describes how we defined the Data Management Plan (DMP) of I-REACT and contains its second version. The first version of the DMP was delivered at the end of the first six months of the project, and this new version is a refinement of the former one. The DMP covers the data management life cycle of the data collected, processed and generated during the project. Specifically, the DMP is used to define the guidelines for data management in the project in order to ensure a high level of data quality, security, and accessibility. According to the "Guidelines on FAIR Data Management in Horizon 2020" [RD01], "the DMP is intended to be a living document in which information can be made available on a finer level of granularity through updates as the implementation of the project progresses and when significant changes occur". The DMP will be updated, providing finer details, during the periodic evaluation/assessment of the project (i.e., at the midterm review and at the final review). We prepared the Data Management Plan by following the template provided in the "Guidelines on FAIR Data Management in Horizon 2020" [RD01] document. Specifically, we generated the DMP of I-REACT by using the DMP online [RD02] tool, which is compatible and compliant with the requirements set out in Annex 1 of the "Guidelines on FAIR Data Management in Horizon 2020" [RD01]. DMP online supports the generation of the DMP and allows exporting it in different electronic formats (e.g., pdf file, doc file). The DMP of I-REACT follows the FAIR principles, i.e., public research data are made Findable, Accessible, Interoperable and Re-usable.
# 1.2 STRUCTURE OF THE DOCUMENT

The document is organized as follows:

* Chapter 1 is this introduction and description of the document itself;
* Chapter 2 contains the current version of the Data Management Plan of I-REACT. Specifically, the content of Chapter 2 corresponds to the export of the Data Management Plan (DMP) that we created by means of the DMP online [RD02] tool;
* Chapter 3 draws conclusions.

### 1\. DATA SUMMARY

The I-REACT project aims at supporting emergency activities related to hazards (e.g., floods and fires) by integrating data from several external data sources. Specifically, the I-REACT project aims at (i) supporting emergency activities related to hazards and (ii) generating predictive and forecast models. To achieve the main goals of the project, several heterogeneous data must be properly collected, transformed and combined in order to tackle the problem from different perspectives. Specifically, publicly available open data and external data sources will be integrated with the content generated by the users of the I-REACT system (e.g., the user-generated reports). The integration of several data sources will allow generating models based on complementary information that can be used to support the different activities related to hazards and emergency events in the I-REACT project. Both first responders and citizens will use and exploit, at different granularities, the data collected, transformed, and generated by I-REACT. The collected data are mainly georeferenced data related to hazards and emergency events (floods and fires).
Based on the analysis of the available data sources performed so far, the currently imported data, and the use cases of the project, the following types of external data have been identified as potentially useful for the I-REACT project:

* Weather forecast data
* Flood maps
* Copernicus land monitoring service data
* European flood awareness system data
* European forest fire information system data
* Disaster event historical data
* Global administrative area data
* Statistical data
* Social media data (e.g., tweets)

As reported in the above list, several external data sources will be re-used by I-REACT, each one providing a different facet of the hazard/emergency event we are facing. Both open and freely available data and non-public data will be used. Open and freely available data will be preferred. In the I-REACT project, standard data formats (e.g., GeoJSON and Shapefiles) and metadata (e.g., INSPIRE-compliant metadata) will be used in order to improve findability, accessibility, interoperability and re-usability of the data. The I-REACT project will also generate data, based on the data (e.g., reports) generated by the users of I-REACT, for instance by means of mobile applications, and on the transformation and analysis of the collected data. The main types of data generated by I-REACT are the following:

* User-generated reports
* Risk maps and weather forecast maps
* Flood and fire nowcasts and forecasts
* Climate change maps
* UAV imagery

Also, the data generated internally by I-REACT will be represented by means of standard data formats (e.g., GeoJSON) and will be enriched with standard metadata. Table 1 reports the data imported so far in the backend of I-REACT, with the expected size (per file) and the partner that is responsible for importing the data in the backend.
<table> <tr> <th> RESPONSIBLE </th> <th> DESCRIPTION </th> <th> FORMAT </th> <th> SIZE </th> </tr> <tr> <td> GeoVille </td> <td> Copernicus EMS Delineation Maps (crisis information, administrative boundaries, area of interest, general information, hydrography, land cover, physiography, points of interest, populated places, settlements, transportation, utilities) </td> <td> Shapefile </td> <td> Depending on zoom level </td> </tr> <tr> <td> EOXPLORE </td> <td> Flood Forecast Period 1 — 5-year return period and Flood Forecast Period 2 — 20-year return period </td> <td> GeoJSON </td> <td> MB </td> </tr> <tr> <td> EOXPLORE </td> <td> Fire Hotspots Comune and Points </td> <td> GeoJSON </td> <td> < 5 MB </td> </tr> <tr> <td> EOXPLORE </td> <td> Fire Weather Index </td> <td> GeoJSON </td> <td> < 5 MB </td> </tr> <tr> <td> CSI </td> <td> ARPA Hydrogeological Alert Bulletin </td> <td> GeoJSON </td> <td> < 1 MB </td> </tr> <tr> <td> CSI </td> <td> FMI Weather alert (Wind, Snow-ice, Thunderstorms, Low temperature, Forest-fire, Rain) </td> <td> GeoJSON </td> <td> ~3.5 MB </td> </tr> <tr> <td> TU Vienna </td> <td> Sentinel-1 Flood Delineation Mapping </td> <td> GeoTIFF </td> <td> Sentinel-1: ~50 MB; Envisat ASAR: ~10 MB </td> </tr> <tr> <td> FMI </td> <td> Probability of gust limit 1, 2 </td> <td> GeoJSON </td> <td> ~2.5 MB (per lead time) </td> </tr> <tr> <td> FMI </td> <td> Probability of wind limit 1, 2 </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Probability of temperature limit 1, 2 </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Probability of precipitation limit 1, 2, 3 </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Temperature </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Wind </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Precipitation 3h and 24h </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Pressure </td> <td> GeoJSON </td> <td> ~2.5 MB </td> </tr> <tr> <td> FMI </td> <td> Ensemble forecasts </td> <td> NetCDF </td> <td> ~500 MB </td> </tr> <tr> <td> FMI </td> <td> 25th, 50th, and 75th percentile of daily max. temperature forecast distribution </td> <td> NetCDF </td> <td> ~13 MB </td> </tr> </table>

Table 1 — Imported data

The size of the data collected and generated depends on the considered data sources. The size of the datasets and files associated with the several data sources exploited by the project varies from a few MBs to tens of GBs. The collected and generated data will be useful for several end users and stakeholders. Specifically, the users of the I-REACT system (both citizens and first responders) will benefit from the data generated by the system. Moreover, third parties could also be interested in the data collected and generated by I-REACT, also for supporting decisions not directly related to the management of emergency events.

### 2\. FAIR DATA

#### 2.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA [FAIR DATA]

We will share the public data related to the I-REACT publications in the Zenodo repository ( _https://www.zenodo.org/_ ). Zenodo provides a set of basic functionalities that allows publishing data and searching them by means of keywords. Moreover, Zenodo automatically assigns a DOI to each newly uploaded dataset and allows specifying metadata, which can be profitably exploited to find the shared datasets. The uploaded files will be identifiable and versioned by using a naming convention consisting of project name, dataset name, version, and date. For the data for which it is appropriate, standard INSPIRE (http://inspiregeoportal.ec.europa.eu/) metadata will be used to enrich the shared data.

#### 2.2 MAKING DATA INTEROPERABLE [FAIR DATA]

The sharing of the data will follow the ORD pilot principle "as open as possible, as closed as necessary".
Specifically, when it is possible (i.e., when the data can be released as open without violating any copyright), the research data needed to validate the results presented in the published scientific papers will be made available. To support the sustainability of the project, according to the defined business plan, the data that are fundamental for the sustainability of the I-REACT project and are not used in the published scientific papers will not be disclosed. As already introduced before, the public datasets, and the related metadata, will be made available through the Zenodo repository. Standard data formats and metadata will be used to improve accessibility to the data by means of standard, freely available tools. We will use standard data formats (e.g., GeoJSON and Shapefiles) for representing the data used by I-REACT and standard metadata (INSPIRE-compliant metadata) to improve interoperability and re-usability of the data. This solution improves the interoperability of the modules of I-REACT and the interoperability with respect to external users interested in using the data of I-REACT.

#### 2.3 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES) [FAIR DATA]

The data collected from external sources, or generated by transforming external data, will use the same license as the original data sources. For the subset of internally generated data that will be open, and that is not based on external data sources, an open license will be used. The public research data associated with the published research papers will be made available as soon as the accepted papers are published. An embargo period will be applied only if the policy of the related scientific publication enforces it.
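As a sketch of the conventions described in this section — standard GeoJSON for the data and a `<project>_<dataset>_<version>_<date>` name for the shared files — the following is illustrative only; the field values and the exact filename pattern are assumptions, not the project's actual schema:

```python
# Illustrative sketch of the I-REACT data-sharing conventions: a minimal
# GeoJSON FeatureCollection and a versioned file name. All concrete
# values below are assumptions for illustration.
import json

def release_filename(project: str, dataset: str, version: str, date: str) -> str:
    """Compose the versioned name used when uploading a dataset."""
    return f"{project}_{dataset}_{version}_{date}.geojson"

feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [24.94, 60.17]},
    "properties": {"hazard": "flood", "return_period_years": 20},
}
collection = {"type": "FeatureCollection", "features": [feature]}

name = release_filename("I-REACT", "FloodForecast", "v1.0", "2017-11-30")
payload = json.dumps(collection)
```

Because both the format and the filename follow public conventions, a third party can identify the dataset, its version and its release date without consulting the project, and can parse the payload with any GeoJSON-aware tool.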
The public research data published on Zenodo will be available to third parties also after the end of the project, and will remain freely available as long as the Zenodo service is available.

### 3\. ALLOCATION OF RESOURCES

The public data will be published in the free Zenodo repository. Hence, there are no specific costs related to the storage of the public data. Regarding the subset of non-publicly disclosed data exploited by the I-REACT project, the costs of the management of the data, and of the related privacy issues, are already part of the costs related to the cloud architecture that will be used to implement the I-REACT system. Regarding the long-term preservation of the data, we plan to cover the costs of the management of the non-public data with the revenue associated with the exploitation of the data itself and the related services.

### 4\. DATA SECURITY

Security and privacy are two important issues managed by the project. Specifically, Task "T2.4: Security & Privacy by design" addresses the security and privacy issues by adopting a security- and privacy-by-design methodology. A detailed description of the selected methodology is reported in the deliverable "D2.4: Report on Privacy and Security". The solution described in D2.4 will be further specialized during the project based on the new privacy and security issues that will emerge. Any further changes to the privacy and security solution that arise during the project will be included in the following versions of the DMP. Regarding the secure storage of the data, the project will use an architecture based on cloud services to store the data. The services used provide the functionalities needed to address secure storage and data security. Regarding sensitive data, a security- and privacy-by-design methodology will be used to avoid the disclosure of sensitive and personal data.

### 5\.
ETHICAL ASPECTS

According to the content of the ethics review and the ethics section of the DoA, the designed solution will imply the collection and the processing of personal data once operational. The management of such data will be regulated by clear terms of service, which must be read and accepted by users, and which will follow the EU data protection regulation. Note that data regarding financial details, sexual lifestyles, ethnicity, political opinion, religious or philosophical conviction will not be included in the Information Architecture. Different features of the system will rely on location-based technologies to determine the position of people and "objects" (such as infrastructures, resources, vehicles, etc.), which is crucial for the implementation of the I-REACT products. Regarding the localization of people, two different categories will be distinguished:

First responders — the emergency operators and/or volunteers. Since they are professional users, they must accept the geolocation of their devices during working time through a formal agreement with their employer. Their positioning is needed to send in-field reports and is also aimed at monitoring their safety during the emergency response phase.

Citizens — who must read and accept specific terms of service in order to submit geolocalized reports through the I-REACT mobile application. Optionally, they can allow the I-REACT system to locate their position, which will be shared only with subscribed authorities in case of emergency in order to effectively perform search and rescue operations.

With respect to the data retrieved from social networks, the regulations included in their terms of service will be applied. All personal data will be used solely for the project purpose, and will be accessible only by the data owner. The geolocated reports will include only the user category and an ID that is not linkable with the user's personal data.
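A minimal sketch of such a report record — carrying only the user category, a random ID that cannot be traced back to the user, and the position — is shown below; the field names are assumptions, not the actual I-REACT data model:

```python
# Illustrative sketch: a geolocated report that carries only the user
# category and a random identifier, never personal details. Field names
# are assumptions, not the actual I-REACT schema.
import uuid

def make_report(category: str, lat: float, lon: float) -> dict:
    """Build a report with an identifier that is independent of user identity."""
    return {
        "report_id": uuid.uuid4().hex,  # random, not derived from identity
        "category": category,           # e.g. "citizen" or "first_responder"
        "location": [lat, lon],
    }

report = make_report("citizen", 45.07, 7.69)
```

Because the identifier is generated randomly rather than derived from any personal attribute, publishing the report discloses nothing about who submitted it.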
Personal data will not be subject to any exploitation and will not be distributed to any third party. Such data will be collected, stored, and processed following privacy-by-design approaches in order to guarantee confidentiality and anonymity. Thus, the data that will be made publicly available will be anonymized. I-REACT does not foresee the management of health data.

### 6\. OTHER

Regarding the privacy issues, the data will be managed in compliance with the European regulation, as already described in Sections 4 and 5 of this Data Management Plan.

# 3 CONCLUSIONS

This deliverable describes how the data management plan of the I-REACT project has been generated and contains its second version. A first version was released at the end of the first six months of the project, and this updated version was released in November 2017. The DMP of I-REACT will be periodically updated, providing finer details, in line with the subsequent periodic evaluation/assessment of the project (i.e., at the final review).

END OF THE DOCUMENT
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1139_COMPASS_710543.md
**1\. Introduction**

This Data Management Plan (DMP) describes the data management life cycle for the data to be collected, processed and/or generated by COMPASS (710543). As part of making research data findable, accessible, interoperable and re-usable (FAIR), this DMP includes information on a) the handling of research data during and after the end of the project, b) what data will be collected, processed and/or generated, c) which methodology and standards will be applied, d) whether data will be shared/made open access and e) how data will be curated and preserved (including after the end of the project). This DMP is prepared according to the specifications for participating in the EU Pilot on Open Research Data (ORD).

### 1.1. Background Information on the COMPASS (710543) Project

The concept of “Responsible Research and Innovation” has been around for almost a decade. So far, it has been the subject of discussion among policy-makers and researchers, without including the key actors of Responsible Research and Innovation: businesses. The aim of the COMPASS (710543) project is to make Responsible Research and Innovation accessible, comprehensible and feasible for Small and Medium-Sized Enterprises. The COMPASS (710543) project will set the stage on which SMEs will be able to co-create their own visions of Responsible Research and Innovation – for their particular sector and for their particular business context. Physical and virtual interaction will take place in “Responsible Innovation Labs” for SMEs in the areas of nanotechnology, healthcare and ICT. The tools and services designed in these labs will be tailored to the needs of innovative SMEs. SMEs will get the chance to test the tools in their daily operations. They will get access to all services on the “Responsible Innovation COMPASS”, a custom-made web platform.

### 1.2. Information on Data

The project objectives require collection and processing of information which is not available from other sources.
As the concept of “Responsible Research and Innovation” is still fairly novel and not much data is available on it, the project will need to generate its own data sets. It will rely on industry experts, businesses and civil society actors to provide first-hand information about their experiences, views and opinions. The project will build up the following data sets: 1) Stakeholder & multiplier database; 2) Website; 3) Interviews; 4) Responsible Innovation Labs; and 5) Pilot Testing. The data sets will be appropriately handled by the consortium in accordance with the confidentiality obligations (article 36) and principles of processing data (article 39.2) outlined in the Grant Agreement. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 710543.

### 1.3. Timetable for Updates

**2\. Stakeholder & Multiplier Database**

### 2.1. Data Set Description

The COMPASS (710543) project will set up a stakeholder & multiplier database to compile contact information of actors that might be interested in taking an active part in the project (e.g. as participants in the Responsible Innovation Labs) or in receiving and disseminating project output (e.g. innovation networks). The database is being compiled and administered by consortium partner FBLC and owned by all COMPASS (710543) partners. The COMPASS (710543) stakeholder database will be the central point of reference for all stakeholder interaction during the running time of the project. The data stored in the COMPASS (710543) stakeholder database will be used for the following tasks:

* Persona analysis (i.e. analysis of influence / position in an organisation, analysis of relevance / type of organisation) in order to determine an efficient communication strategy with the platform’s stakeholders; stakeholder contacts will be stored in a WP leader (FBLC) access database.
* Monitoring participation of the stakeholders in the online and on-site interactions in order to provide analyses of stakeholder participation according to (a) type of organisation, (b) country, (c) gender balance and (d) areas of expertise. The results of these analyses will be used for guiding further stakeholder recruitment efforts in order to achieve a balanced representation of different stakeholder groups and to ensure gender balance.
* Collecting input from the stakeholders (feedback on the Responsible Innovation COMPASS, roadmaps and the online knowledge repository prepared by the consortium, and new ideas through “idea sourcing”) as an important source of the Responsible Innovation COMPASS content and information provision functionality.

### 2.2. Standards and Metadata

Data is documented in the form of entries (rows) with different fields, including type of entity, country, sector, basic information and contact details, in an MS Office Excel file. Each entry includes metadata in the form of a) information about who (i.e. which of the consortium partners or advisory board members) has provided the contact and will need to establish first contact between the project and the respective stakeholder/multiplier, and b) which of the project tasks/activities might be of interest for the respective stakeholder/multiplier. No particular metadata standards are being applied.

### 2.3. Ethical and Legal Issues

Data contained in the COMPASS (710543) stakeholder & multiplier database will not be publicly available, but will solely serve the purpose of involving potentially interested stakeholders and multipliers in the different project activities. The stakeholder database does not include any ethically questionable material. Personal data (i.e. name, organisation, position, email, telephone number) which are not publicly accessible, but provided by individuals during the course of the project, will be treated confidentially by FBLC.
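The participation analyses described in Sections 2.1 and 2.2 (tallies by type of organisation, country and gender) amount to simple counts over the database rows. A hedged sketch, with illustrative field names and values that are not the project's actual schema:

```python
# Sketch of the stakeholder-balance analyses: count entries per country and
# per gender to guide further recruitment. Field names are illustrative only.
from collections import Counter

stakeholders = [
    {"org_type": "SME", "country": "AT", "gender": "f"},
    {"org_type": "SME", "country": "DE", "gender": "m"},
    {"org_type": "CSO", "country": "AT", "gender": "f"},
]

by_country = Counter(s["country"] for s in stakeholders)
by_gender = Counter(s["gender"] for s in stakeholders)

assert by_country["AT"] == 2          # two Austrian entries
assert by_gender == {"f": 2, "m": 1}  # gender balance check
```

In practice the same tallies could be computed directly in the Excel file; the point is only that the monitoring step reduces to counting rows per attribute.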
Any data contained in the COMPASS (710543) stakeholder & multiplier database will be anonymised for further use (i.e. academic or other publications). COMPASS (710543) parties will make sure that any data that is used outside of the scope of the COMPASS (710543) project will be unlinked (anonymised) from any personal data.

### 2.4. Access and Sharing

Data storage in the scope of the COMPASS (710543) project will be secured so that the data does not become accessible to unwanted third parties and is protected against disaster and risk. Data contained in the COMPASS (710543) stakeholder & multiplier database will not be publicly available; i.e. these personal data will neither be shared with third parties nor put to commercial use. Sharing of personal data among COMPASS (710543) consortium partners will take place only if consented to by the stakeholder/multiplier in question, and shall be organised through the server-based and password-protected “WU Owncloud system” (https://owncloud.wu.ac.at/).

### 2.5. Re-Use and Distribution

Information contained in the database that has been obtained from publicly available sources on the Internet (such as name of the entity, country, or URL) might be re-used for education or further research purposes in the future. Contact information is not expected to be re-used for education, research or non-profit purposes. Data contained in this dataset will not be distributed to third parties (see section 2.4).

### 2.6. Archiving and Preservation (including Storage and Backup)

**2.6.1. Spaces for Data Storage and Respective Data Security Measures**

The stakeholder & multiplier database will be stored on state-of-the-art secured laptops, protected by the most recent and regularly updated anti-virus software. The FBLC team with access to the stakeholder database will adhere to a level of confidentiality (as outlined in the COMPASS (710543) Grant Agreement article 39.2).
Furthermore, the consortium partners will comply with the WU institutional data protection policy (“WU Information Security Policy”; “The Organization of Information Security at WU”; “WU Directive on Confidentiality Classification”; “WU Directive on Data Erasure and Disposal”), which outlines measures guaranteeing “protection from loss and damage to information” (availability), “protection from unauthorised access and disclosure of information (confidentiality)”, “protection from unintended and manipulative modification of information (integrity)”, and “protection from loss of non-repudiation or comprehensibility of information flows”.

#### Duration of data storage and access

Data stored in the COMPASS (710543) stakeholder database will be securely stored on FBLC secure servers for the total duration of the COMPASS (710543) project and for 6 months after the completion of the project. Security measures as outlined above will be kept active as long as the respective data is in use or until terminated.

#### Procedures for data destruction/deletion

Data will be destroyed according to the WU Directive on Data Erasure and Disposal in order to guarantee proper off- and online data protection. COMPASS (710543) consortium parties will apply appropriate tools and procedures for data deletion in order to guarantee irreversibility.

**3\. Website Subscriptions Data Set**

### 3.1. Data Set Description

The COMPASS (710543) consortium maintains a website for the purpose of making RRI more accessible to SMEs. To complement the stakeholder & multiplier database, the website offers a sign-up function: website users are able to sign up to the website to receive news about the project. To subscribe, it is only necessary to enter a name and email address and to agree to the website Terms and Conditions (see section 3.3). The website subscriptions data set thus compiles contact information of additional actors that might be interested in taking an active part in the project (e.g.
as participants in the Responsible Innovation Labs) or in receiving and disseminating project output (e.g. innovation networks). The website subscriptions data set will be administered by WU. The website is created using WordPress software. COMPASS (710543) uses three WordPress plugins to collect website users' email subscription data:

1. Email Subscribers – a plugin for WordPress sites that provides forms where users can enter their name and email address to subscribe to receive news from the respective website. COMPASS (710543) uses a double opt-in system, where the user receives an email to confirm his/her subscription to the COMPASS (710543) website news.
2. Maintenance Mode – a plugin for WordPress sites that displays a “Coming soon” webpage and allows interested users to leave their email address to be informed of when the site is launched. COMPASS (710543) uses a single opt-in system, where the user enters their email address only and this email address is entered into the database of interested users without them having to confirm their subscription to the COMPASS (710543) website news.
3. Newsletter – a newsletter system plugin for WordPress sites, used for list building and for creating, sending and tracking emails. The plugin also collects data on whether the users have opened the emails sent. It provides forms where users can enter their name and email address to subscribe to receive news from the respective website. COMPASS (710543) uses a single opt-in system, where the user enters their email address only and this email address is entered into the database of interested users without them having to confirm their subscription to the COMPASS (710543) website news.

The email addresses and names collected through these plugins are stored on the COMPASS (710543) website server and are visible online to COMPASS (710543) consortium partners.
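The double opt-in behaviour described for the Email Subscribers plugin can be illustrated by a generic two-step flow, in which a subscription only becomes active after the user confirms a token sent by email. This is a hedged sketch of the general technique, not the plugin's actual code:

```python
# Generic double opt-in sketch: step 1 records the address as pending and
# issues a confirmation token; step 2 activates the subscription only for a
# known token. Names and storage are illustrative.
import secrets

pending: dict[str, str] = {}   # token -> email address awaiting confirmation
confirmed: set[str] = set()    # activated subscriptions

def request_subscription(email: str) -> str:
    """Step 1: store the address as pending and return the emailed token."""
    token = secrets.token_urlsafe(16)
    pending[token] = email
    return token

def confirm_subscription(token: str) -> bool:
    """Step 2: activate the subscription only if the token is known."""
    email = pending.pop(token, None)
    if email is None:
        return False
    confirmed.add(email)
    return True

t = request_subscription("user@example.org")
assert "user@example.org" not in confirmed   # not active until confirmed
assert confirm_subscription(t)
assert "user@example.org" in confirmed
```

The single opt-in plugins skip step 2 entirely: the address is stored as confirmed as soon as it is entered.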
The data can only be exported from the site by COMPASS (710543) consortium partners when they log in to the COMPASS website, confirming their identity.

**3.2. Standards and Metadata**

No particular metadata standards are being applied.

### 3.3. Ethical and Legal Issues

It is clearly stated on the website sign-up page ( _http://innovation-COMPASS.eu/sign-up/_ ) that by signing up users agree to accept the COMPASS (710543) Terms and Conditions ( _http://innovation-COMPASS.eu/terms-and-conditions/_ ). By subscribing to the COMPASS (710543) website, users authorise COMPASS (710543) to process personal data for, and only for, the following purposes:

* To send users information on COMPASS (710543) activities and project updates, both as general newsletters and specific communications from the COMPASS (710543) project
* To send users information about the opportunity to participate in COMPASS (710543) activities, either in face-to-face format (workshops, seminars, etc.) or online, via the internet (contributions to the contents of the COMPASS (710543) website, participation in surveys, etc.).

Users are informed that they may at any time exercise their right to unsubscribe. Each communication they receive will detail free means whereby they can request not to receive further communications from COMPASS (710543). When users are asked to supply personal details to complete a form, they will be informed of the recipient of the data and the purpose for which it is being collected, as well as the identity and address of the body responsible for the file (file manager). Users may at all times exercise their rights of access, rectification, cancellation and opposition to the treatment of their data. The personal data collected will only be used and/or transferred for the purpose specified, and always with the consent of the user.

### 3.4. Access and Sharing

Data storage in the scope of the COMPASS (710543) project will be secured so that the data does not become accessible to unwanted third parties and is protected against disaster and risk. Data contained in the website subscriptions data set will not be publicly available; i.e. these personal data will neither be shared with third parties nor put to commercial use. Sharing of personal data among COMPASS (710543) consortium partners will take place only if consented to by the stakeholder/multiplier in question, and shall be organised through the server-based and password-protected “WU Owncloud system” (https://owncloud.wu.ac.at/).

### 3.5. Re-Use and Distribution

As this dataset contains contact information only, we do not expect any re-use for education, research or non-profit purposes. Data contained in this dataset will not be distributed to third parties (see section 3.4).

### 3.6. Archiving and Preservation (including Storage and Backup)

#### Procedures for the storage of personal data

Data storage in the scope of the COMPASS (710543) project will be secured so that the data does not become accessible to unwanted third parties and is protected against disaster and risk.

#### Duration of data storage and access

Data stored on the COMPASS (710543) project website will be stored for the total duration of the COMPASS (710543) project and for 6 months after the completion of the project. Safe procedures outlined during the project’s running time will be kept active as long as the respective data is in use or until terminated.

#### Procedures for data destruction/deletion

Data will be destroyed according to the WU Directive on Data Erasure and Disposal in order to guarantee proper off- and online data protection. COMPASS (710543) consortium parties will apply appropriate tools and procedures for data deletion in order to guarantee irreversibility.
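The retention rule stated above (storage for the project duration plus 6 months, followed by irreversible deletion) reduces to a date check. A small sketch; the project end date below is a placeholder for illustration only:

```python
# Sketch of the stated retention rule: keep data for the project duration
# plus a 6-month grace period, then delete. Dates are placeholders.
from datetime import date, timedelta

def must_delete(today: date, project_end: date, grace_months: int = 6) -> bool:
    """True once the retention window (project end + grace period) has passed."""
    cutoff = project_end + timedelta(days=30 * grace_months)  # approx. months
    return today > cutoff

end = date(2019, 10, 31)                         # placeholder project end
assert not must_delete(date(2020, 1, 15), end)   # within the 6-month window
assert must_delete(date(2020, 6, 1), end)        # window elapsed: delete
```

A production process would pair such a check with the irreversible-deletion procedure named in the WU Directive, rather than a simple file removal.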
#### Technical safety measures for personal data

As regards any personal data compiled on the COMPASS (710543) project website, all data will be subject to strict safety procedures as well as technical measures. At WU, data will be stored on the WU institutionalised SAN system. Safety measures include physical protection of server data centres; SAN system protection includes Data Protection Manager and TSM as well as physical protection.

**4\. Interviews Data Set**

### 4.1. Data Set description

In order to collect first-hand insights on the potential of RRI in two of the three key innovation fields of the project, three consortium partners (WU, DMU and UCLan CY) will undertake 30 fact-finding interviews with key industry representatives across Europe. Interviews will cover critical responsibility issues as well as success factors and barriers for the adoption of RRI in industry. These interviews will include, besides expert knowledge on responsible innovation policy issues, the collection of personal data (i.e. name, organisation, position, type of organisation, country, email). Interviews will be conducted based on a semi-structured interview guideline and, subject to interviewees’ permission, recorded using digital audio recording (e.g. MP3). Interviews will be transcribed verbatim. Consortium partner UCLan CY will analyse the collected information and compile it in a synthesis report (D1.2 Synthesis report “Success factors and barriers for mainstreaming RRI in SMEs”).

### 4.2. Standards and Metadata

Metadata will include:

* About the interviewee:
  * Affiliation to key innovation field (i.e. healthcare or nanotechnology)
* About the interviewer:
  * Name
  * Affiliation (i.e. consortium partner institution)
* About the interview:
  * Date
  * Length
  * Form of conduct (i.e. face-to-face, telephone, skype or other)

No particular metadata standards are being applied.

### 4.3. Ethical and Legal Issues

Potential participants will be identified by all consortium partners.
Interviews will be conducted face-to-face, via skype or telephone by experienced interviewers from UCLan CY, WU and DMU. All potential interview partners will receive detailed information about their involvement in the project. Participants will be asked to demonstrate their understanding of their involvement by consenting explicitly to their participation in the data collection activity. Each interview partner will receive an information sheet form clearly stating the following: * Details of who will be conducting the study; * Details about who is sponsoring the study and what the terms of the sponsorship are (i.e. who will 'own' the data and how the data will be used): In this case, reference will be made to the EU Horizon 2020 funding; * Details about the nature, purpose and duration of the study; * What kinds of procedures will be used and what the participant will be asked to do; * Details about any hazards, inconveniences and risks associated with the study; * What benefits are attached to the study; * What procedures will be employed to maintain confidentiality and anonymity (e.g. removing personal details from data/reports, keeping data in locked files); * What will happen to the data (how it will be used, how it will be stored, in what form it will be disseminated and if it is likely to be used for further analysis); * How to withdraw from the study; * Details about who to contact if questions or problems arise. Participants will also be provided with clear opportunities to provide feedback regarding their participation. The procedure regarding informed consent will be carried out prior to each interview. 
Participants will confirm having received: * Information on the purpose of the study; * The opportunity to ask any questions related to this study, and received satisfactory answers to questions and additional details; * Information that participation is voluntary; * Information, that words and phrases from the interview may be included in the final report and publications to come from this research but any quotations will be kept anonymous; * Information on withdrawal; * Information, that the interview will be recorded using audio recording equipment. The interviews do not include any highly sensitive personal data. Any data related to personal data collected by the COMPASS (710543) project will be anonymised for further use. COMPASS (710543) parties will make sure that any data that is used outside of the scope of the COMPASS (710543) project will be unlinked (anonymised) from any personal data information. ### 4.4. Access and Sharing Analysis of the collected information will be compiled in a synthesis report (D1.2 Synthesis report “Success factors and barriers for mainstreaming RRI in SMEs”). The report will be openly accessible and available as free download on the project website. #### Procedures for the circulation of personal data Sharing of personal data (i.e. name, organisation, position, type of organisation, country, email address) among COMPASS (710543) consortium partners will take place only if necessary to achieve project output and if consented by the stakeholder/multiplier in question. #### Confidentiality and safety procedures of personal data handling All consortium partners will store collected data in state-of-the-art secured computers, protected by most recent and regularly updated anti-virus software. Data will only be accessible to the COMPASS (710543) consortium who will adhere to a level of confidentiality (as outlined in the COMPASS (710543) grant agreement article 39.2). 
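The commitment above that quotations will be kept anonymous can be illustrated by a pseudonym mapping that stays internal to the consortium: published quotes carry only a neutral label, while the link to the real name is never released. The names and the "Interviewee N" scheme below are assumptions for illustration only:

```python
# Hedged sketch of the unlinking step: replace the interviewee's identity
# with a stable pseudonym before a quotation leaves the project. The mapping
# itself would remain with the consortium and never be published.
import itertools

_counter = itertools.count(1)
_pseudonyms: dict[str, str] = {}   # real name -> pseudonym (kept internal)

def anonymise(name: str) -> str:
    """Return a stable pseudonym such as 'Interviewee 1' for a real name."""
    if name not in _pseudonyms:
        _pseudonyms[name] = f"Interviewee {next(_counter)}"
    return _pseudonyms[name]

quote = {"speaker": anonymise("Jane Doe"), "text": "RRI must pay off for SMEs."}
assert quote["speaker"] == "Interviewee 1"
assert anonymise("Jane Doe") == "Interviewee 1"   # mapping is stable
```

A stable mapping lets the report attribute several quotes to the same (anonymous) speaker without ever exposing who that speaker is.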
Furthermore, the consortium partners will comply with the WU institutional data protection policy (“WU Information Security Policy”; “The Organization of Information Security at WU”; “WU Directive on Confidentiality Classification”; “WU Directive on Data Erasure and Disposal”), which outlines measures guaranteeing “protection from loss and damage to information” (availability), “protection from unauthorised access and disclosure of information (confidentiality)”, “protection from unintended and manipulative modification of information (integrity)”, and “protection from loss of non-repudiation or comprehensibility of information flows”.

### 4.5. Re-Use and Distribution

The synthesis report (D1.2 Synthesis report “Success factors and barriers for mainstreaming RRI in SMEs”) due in M14 of the project will be openly accessible and available as a free download on the project website at least until the end of the project duration. Insight from the task will feed directly into the preparation of the Responsible Innovation Labs (WP2) and will thus become immediately available to up to 75 SMEs and SME support organisations.

## 4.6. Archiving and preservation (including storage and backup)

#### Spaces for data storage and respective data security measures

Consortium partners WU, DMU and UCLan CY will store interview data on state-of-the-art secured computers on secured server-based storage drives, protected by the most recent and regularly updated anti-virus software. Sharing of personal data among COMPASS (710543) consortium partners will be organised through the server-based and protected WU Owncloud system (i.e. an IT cloud system). Sharing of (anonymised) data with third parties for transcription purposes will also be organised through the WU Owncloud system. Third parties will store data on state-of-the-art secured computers on secured server-based storage drives and agree to adhere to a level of confidentiality (as outlined in the COMPASS (710543) grant agreement article 39.2).
#### Duration of data storage and access Data will be stored for the total duration of the COMPASS (710543) project and for 6 months after the completion of the project unless explicit consent is sought from participants for portions of it to be kept longer or included in dissemination/deliverable activities. Safe procedures outlined during the project’s running time will be kept active as long as the respective data is in use or until terminated. A data subject will have the right to have his or her personal data erased and no longer processed where the personal data are no longer necessary in relation to the purposes for which they are collected or otherwise processed (“Right to be Forgotten”). #### Procedures for data destruction/deletion Data will be destroyed according to the WU Directive on Data Erasure and Disposal in order to guarantee proper off- and online data protection. COMPASS (710543) consortium parties will apply appropriate tools and procedures for data deletion in order to guarantee irreversibility. #### Technical safety measures for personal data As regards any personal data compiled in personal interviews all data will be subject to strict safety procedures as well as technical measures. All collected data will be stored on secure servers or, for physical data such as signed informed consent sheets, in a physically secure location such as a locked cupboard/drawer within one of the partner institutions until destruction. At WU data will be stored on WU institutionalised SAN system. Safety measures include physical protection of server data centres, SAN system protection includes Data Protection Manager and TSM as well as physical protection. The primary university server location and storage system of UCLan Cyprus is in the UCLan UK campus and supported by a dedicated WAN link to the UCLan Cyprus campus, for which there exists a backup line supported by CYNET (Cyprus Research and Academic Network). 
The storage system is supported by a SAN (Storage Area Network) for which all necessary physical security and cybersecurity measures are taken to ensure data protection. Thus, users are encouraged to use the network drives, which are provided by this SAN system.

# 5\. Responsible Innovation Labs Data Set

## 5.1. Data Set description

The Responsible Innovation Labs will combine virtual and in situ interaction between stakeholders from industry, policy-making and civil society organisations. Any personal data collected via the labs will not be publicly available (i.e. these personal data will not be shared with third parties and are not attributed to any commercial use). The operation of stakeholder interaction in the form of Responsible Innovation Labs will include, besides expert knowledge on responsible innovation policy issues, the collection of personal data (i.e. name, organisation, position, type of organisation, country, email). Prior to the Responsible Innovation Labs, participants will receive an email including information (e.g. that the labs will be recorded) and a link to attend the virtual meeting. The virtual meetings will be held via GoToMeeting. By clicking on the link, participants agree to the recording of the meeting.

## 5.2. Standards and Metadata

Metadata will include affiliation to key innovation field (i.e. healthcare or nanotechnology) and region, company size, type of company, participants’ position in the company and contact between the company and the project consortium. Metadata will also include the date, length and form of conduct (i.e. face-to-face, webinar or other) of the Responsible Innovation Lab. Data will be documented in the form of entries (rows) in an MS Office Excel or Word file and in the form of audio files. No particular metadata standards will be applied.

## 5.3. Ethical and Legal Issues

The Responsible Innovation Labs do not contain any ethically questionable material.
Any data related to personal data collected by the COMPASS (710543) project will be anonymised for further use. COMPASS (710543) consortium partners will make sure that any data that is used outside of the scope of the COMPASS (710543) project will be unlinked (anonymised) from any personal data information. Potential participants will be selected by the institutions conducting the labs, namely DMU, FBLC and SDS. All participants will receive detailed information about their involvement in the project. Participants will be asked to demonstrate their understanding of their involvement by consenting explicitly to their participation in any data collection activity. ## 5.4. Access and Sharing Analysis of the collected information will be compiled in a methods report (D2.1 Responsible Innovation Lab Methods Report) and 3 roadmaps (D2.2 Responsible Innovation Lab Report & Roadmap 1, D2.3 Responsible Innovation Lab Report & Roadmap 2, D2.4 Responsible Innovation Lab Report & Roadmap 3). Insights will further feed into the Co- Creation Method Kit (D2.5 Responsible Innovation Co- Creation Method Kit) and a comparative assessment report (D2.6 Comparative assessment report). These reports will be openly accessible and available as free downloads on the project website. #### Procedures for the circulation of personal data Sharing of personal data (i.e. name, organisation, position, type of organisation, country, email address) among COMPASS (710543) consortium partners will take place only if necessary to achieve project output and if consented by the stakeholder in question. #### Confidentiality and safety procedures of personal data handling As regards the Responsible Innovation Labs all consortium partners will store collected data in state-of-the-art secured computers, protected by most recent and regularly updated antivirus software. 
Data will only be accessible to the COMPASS (710543) consortium partners, who will adhere to a level of confidentiality (as outlined in the COMPASS (710543) grant agreement article 39.2). Furthermore, the consortium partners will comply with the WU institutional data protection policy (“WU Information Security Policy”; “The Organization of Information Security at WU”; “WU Directive on Confidentiality Classification”; “WU Directive on Data Erasure and Disposal”), which outlines measures guaranteeing “protection from loss and damage to information” (availability), “protection from unauthorised access and disclosure of information (confidentiality)”, “protection from unintended and manipulative modification of information (integrity)”, and “protection from loss of non-repudiation or comprehensibility of information flows”.

## 5.5. Re-Use and Distribution

The reports (D2.1 Responsible Innovation Lab Methods Report, D2.2 Responsible Innovation Lab Report & Roadmap 1, D2.3 Responsible Innovation Lab Report & Roadmap 2, D2.4 Responsible Innovation Lab Report & Roadmap 3, D2.5 Responsible Innovation Co-Creation Method Kit, D2.6 Comparative assessment report) will be openly accessible and available as free downloads on the project website at least until the end of the project duration. Insight from the task will feed directly into the preparation of the Responsible Innovation Compass (WP3) and will thus become publicly available.

## 5.6. Archiving and preservation (including storage and backup)

#### Spaces for data storage and respective data security measures

The DMU, FBLC and SDS teams will store data on state-of-the-art secured laptops on secured server-based storage drives, protected by the most recent and regularly updated anti-virus software. Sharing of personal data among COMPASS (710543) consortium partners will be organised through the server-based and protected WU Owncloud system (i.e. an IT cloud system).
#### Duration of data storage and access

Data will be stored securely for the total duration of the COMPASS (710543) project and for 6 months after the completion of the project, unless explicit consent is sought from participants for portions of it to be kept longer or included in dissemination/deliverable activities.

#### Procedures for data destruction/deletion

Data will be destroyed according to the WU Directive on Data Erasure and Disposal in order to guarantee proper off- and online data protection. COMPASS (710543) consortium parties will apply appropriate tools and procedures for data deletion in order to guarantee irreversibility.

#### Technical safety measures for personal data

As regards any personal data compiled in the Responsible Innovation Labs, all data will be subject to strict safety procedures as well as technical measures. All collected data will be stored on a secure DMU, FBLC or SDS server or, for physical data, in a physically secure location such as a locked cupboard/drawer within one of the partner institutions until destruction. Data will be made available to the COMPASS consortium within the secure WU OwnCloud storage system.

# 6\. Pilot-Testing Data Set

## 6.1. Data Set description

Pilot testing & demonstration is dedicated to testing the Responsible Innovation Compass prototype modules developed in WP2 & WP3 in cooperation with potential users to ensure their usability and market readiness. The pilot testing phase includes demonstration activities, showcasing and gathering feedback on three particular features of the Responsible Innovation Compass: the co-creation method kit, the Responsible Innovation Self-Check tool and the Responsible Innovation Roadmaps. The “test groups” will be selected from (a) participants of the Responsible Innovation Labs (WP2); (b) networks of organisations in the Multiplier Programme; and (c) the networks of the consortium partners. Any personal data collected will not be publicly available (i.e.
these personal data will not be shared with third parties and will not be put to any commercial use). The operation of stakeholder interaction will include the collection of personal data (i.e. name, organisation, position, type of organisation, country, email).

## 6.2. Standards and Metadata

Metadata will include affiliation to a key innovation field (i.e. healthcare or nanotechnology) and region, company size, type of company, participants’ position in the company and contact between the company and the project consortium. Data will be documented in the form of entries (rows) in an MS Office Excel file or Word file and in the form of audio files. No particular metadata standards will be applied.

## 6.3. Ethical and Legal Issues

The pilot testing & demonstration work package will not contain any ethically questionable material. Any personal data collected will not be publicly available (i.e. these personal data will not be shared with third parties and will not be put to any commercial use).

## 6.4. Access and Sharing

Output of the pilot testing phase will continuously feed back into WP3 for modifications and fine-tuning in order to optimise the finalised Responsible Innovation Compass. The work package will feed recommendations for revision into WP2 (method kit) and WP3 (self-check tool). Analysis of the collected information will be compiled in a strategy (D4.1 Piloting and demonstration strategy) and reviews and recommendations (D4.2 Review and recommendations for revision of the Responsible Innovation Co-Creation Method Kit, D4.3 Review and recommendations for revision of the Responsible Innovation Self-Check, D4.4 User feedback and implementation report on Responsible Innovation Roadmaps). These reports will be openly accessible and available as free downloads on the project website.

#### Procedures for the circulation of personal data

Sharing of personal data (i.e.
name, organisation, position, type of organisation, country, email address) among COMPASS (710543) consortium partners will take place only if necessary to achieve project output and if consented to by the stakeholder in question.

#### Confidentiality and safety procedures of personal data handling

As regards the pilot testing phase, all consortium partners will store collected data on state-of-the-art secured computers, protected by the most recent and regularly updated anti-virus software. Data will only be accessible to the COMPASS (710543) consortium partners, who will adhere to a level of confidentiality (as outlined in the COMPASS (710543) grant agreement, Article 39.2). Furthermore, the consortium partners will comply with the WU institutional data protection policy (“WU Information Security Policy”; “The Organization of Information Security at WU”; “WU Directive on Confidentiality Classification”; “WU Directive on Data Erasure and Disposal”), which outlines measures guaranteeing “protection from loss and damage to information” (availability), “protection from unauthorised access and disclosure of information” (confidentiality), “protection from unintended and manipulative modification of information” (integrity), and “protection from loss of non-repudiation or comprehensibility of information flows”.

## 6.5. Re-Use and Distribution

The reviews and reports (D4.1 Piloting and demonstration strategy, D4.2 Review and recommendations for revision of the Responsible Innovation Co-Creation Method Kit, D4.3 Review and recommendations for revision of the Responsible Innovation Self-Check, D4.4 User feedback and implementation report on Responsible Innovation Roadmaps) will be openly accessible and available as free downloads on the project website at least until the end of the project duration. Insight from the task will feed directly into the revision of the Responsible Innovation Compass (WP3) and the Method Kit (WP2), and will thus become publicly available.
Potential use of the Responsible Innovation Compass for profit purposes after the COMPASS (710543) project has ended will be detailed in D5.8 Business and exploitation plan.

## 6.6. Archiving and preservation (including storage and backup)

#### Spaces for data storage and respective data security measures

The COMPASS (710543) consortium partners will store data on state-of-the-art secured laptops and on secured server-based storage drives, protected by the most recent and regularly updated anti-virus software. Sharing of personal data among COMPASS (710543) consortium partners will be organised through the server-based and protected WU OwnCloud system (i.e., an IT cloud system).

#### Duration of data storage and access

Data will be stored securely for the total duration of the COMPASS (710543) project and for 6 months after the completion of the project, unless explicit consent is sought from participants for portions of it to be kept longer or included in dissemination/deliverable activities.

#### Procedures for data destruction/deletion

Data will be destroyed according to the WU Directive on Data Erasure and Disposal in order to guarantee proper off- and online data protection. COMPASS (710543) consortium parties will apply appropriate tools and procedures for data deletion in order to guarantee irreversibility.

#### Technical safety measures for personal data

As regards any personal data compiled in the pilot testing phase, all data will be subject to strict safety procedures as well as technical measures. All collected data will be stored in the EBN cloud storage system, which is available to EBN staff only (the EBN cloud is secure and password protected), or on a secure DMU, FBLC or SDS server or, for physical data such as signed informed consent sheets, in a physically secure location such as a locked cupboard/drawer within one of the partner institutions until destruction. Data will be made available to the COMPASS consortium within the secure WU OwnCloud storage system.
introduction and actual use of state-of-the-art technologies (Ubiquitous Learning, Mobile Learning, Internet of Things). European countries need, as potential employees, youngsters who are able to think creatively, apply new knowledge in an effective way, remain continuously competitive in a highly demanding working environment through constant self-monitoring, and thus be able to meet all the challenges that work-based learning brings. The ability to switch efficiently between disciplines such as the STEM disciplines depends on processing the educational material effectively on the basis of clearly defined outcomes, building a broad repertoire of ICT communication, problem-solving and decision-making skills, and using the collective knowledge represented in networks based on working environments.

_The orientation of UMI-Sci-Ed is entrepreneurial and multidisciplinary in an effort to raise young boys’ and girls’ motivation in science education and to increase their prospects in choosing a career in pervasive, mobile computing and IoT (UMI)._

Under this scope, the project should deliver meta-level solutions to link school, community and third-level initiatives together and should foster a model that looks to strongly broaden impacts from the current cohort (e.g. students competing in STEM/UMI competitions) to the entire ‘education’ population. The most effective way to do this will be to build communities of practice (CoPs) within active clusters where a body of knowledge already exists. The project should also provide tools, particularly technology, to support this; however, the creation of this toolset should empower the stakeholder group, and this should be considered carefully. New technology is often rejected, so approaches where participants can help co-create the toolkit should be prioritised.
In this context, adopting a ‘making’ culture may prove highly beneficial and a productive way of building a tangible bridge between theory and action; it could foster pilots that enable exploration and validation of technical, pedagogical and community-based innovation approaches. In order to achieve the above, this project investigates important parameters of the introduction of UMI technologies into science education, supported by the implementation of a CoPs format. By carefully exploiting state-of-the-art technologies in order to design educational tools and activities, the project aims to offer novel educational services by implementing innovative pedagogies so as to enhance students’ and teachers’ creativity, socialisation and scientific citizenship. We aim to develop an open yet fully integrated training environment for 14–16-year-old students based on a selection of methodological processes and UMI applications. The training environment will consist of an open repository of educational material, educational means, training activities, and a platform to support CoPs through socialisation, delivery of specific educational material, entrepreneurship training, showcases, self-evaluation, mentoring, and conceptualisation of content and information management.

This project’s core objectives are the following:

**O1 Novel educational services:** To develop a training mechanism, as a methodology, for young students, containing guidelines for UMI learning under the CoPs format, roles and structures.

**O2 Career consultancy services:** To foster innovation and support promising scientific careers. The career consultancy will be carried out by conducting a series of piloting UMI-Sci-Ed activities and scenarios using CoPs and UMI.
It includes linking the market needs to the project stakeholders through the platform, the formation and management of CoPs using social computing tools, and the adaptation of specific, specially selected technological tools used for establishing CoPs.

**O3 Supporting software tools:** To design and implement an integrated learning environment which shall actually support all stakeholders to form CoPs as a facilitating mechanism for UMI learning in science education.

**O4 Supporting hardware tools:** To integrate, package and deliver the necessary hardware kit to support training, and to develop the accompanying programming environment allowing interaction with the hardware kit and supporting young students in realising their ideas from the beginning of their training.

**O5 Dissemination of the project ideas and results:** To disseminate the use of UMI technologies in real educational settings and promote their added pedagogical value to youngsters (male and female), mostly in science education but alternatively also in the emerging disciplines of STEM (Technology, Engineering, Mathematics), and to convey project scientific achievements and RTD results both internally among project partners and externally to European and international research communities, potential users and industrial/commercial organisations.

**Rationale and changes introduced in DMP revision**

The first objective of the initial DMP was to define and classify the most important datasets that will be produced and exchanged throughout the lifetime of the project among project partners and participants. As such, four datasets have been defined, i.e., _Educational Scenarios Derivatives_, _Research data_, _Educational material_ and _Platform & market analysis data_. A description of these datasets has been given to the best possible extent.
However, as noted in section 1.3 of DMP v1.0, with only a preliminary design and development of the UMI-Sci-Ed project platform (which constitutes the major means of project data collection, generation and exchange) in hand, several issues regarding the collection, organisation and presentation of data could not be accurately predicted, and major decisions had to be made for several aspects of data management. On the other hand, while the datasets were correctly defined, the definition and development of the educational scenarios and the evaluation framework, which will drive the educational activities and the main educational research directions, respectively, were not yet mature enough to fully specify the data types and the metadata that are important for the project and the research activities involved. Last, but not least, the initial DMP development followed the Horizon 2020 directions available at the beginning of the project and was based on the DMPonline platform 2 provided by the Digital Curation Centre (DCC). In the meantime, Horizon 2020 released a new template 3 as well as new guidelines on Open Access 4 . Taking into account the abovementioned issues, together with several decisions that were made by the consortium in the meantime, a major revision of the initial DMP was necessary. Overall, the DMP has been revised in the following major aspects:

* The initial datasets have been enriched with two more datasets. Specifically: _DS5 - Project working data_ is introduced in order to keep track of internal working documents, which it was decided to circulate through the _Filedepot_ feature of the project platform; _DS6 - Other data_ is introduced in order to include all other data that will be exchanged through the project platform that cannot be classified in any of the other datasets.
* Each of the datasets, either new or existent in the DMP v1.0, has been described in accordance with the project platform content types.
* Data curation and preservation methods have been fully clarified, and Zenodo has been chosen as a repository of research data.
* A set of metadata has been defined for each of the most important platform content types based on the LRMI/DCMI specification, following the Schema.org metadata structure.
* The DMP has been rewritten following the new template provided by Horizon 2020.

**DMP as a living document: Next steps**

The revised DMP described in this deliverable can be considered a consistent and mature plan for data management. However, as clearly described above, the vast majority of data is going to be collected, generated and exchanged via the project platform. The project platform, on the other hand, is going to be enriched, changed and expanded according to the needs and results of the research activities that will lead to the accomplishment of the project objectives. Therefore, the DMP is going to evolve in conjunction with the evolution of the project platform and the project as a whole. In this context, while the major decisions regarding the DMP have already been made, several issues will need reconsideration, revision and improvement. The next important step is the conclusion and evaluation of the local pilots phase (T6.2). This will provide a compact base of data as well as significant experience of the detailed utilisation of the platform, which will drive the next steps of the platform implementation and the research methods to be subsequently followed. As such, it is expected that several decisions and specifications included in this document regarding data and metadata management will be revised accordingly.

**1. Data Summary**

**1.1 Introduction**

UMI-Sci-Ed is going to generate, use, circulate and disseminate a large amount of diverse data.
These include data that will support the educational and training activities, the artefacts and derivatives of the piloting and implementation phases, data that will drive the research process that will take place in the context of the project, etc. These data will be both qualitative and quantitative and may be either automatically or manually generated. Their origin will differ substantially; e.g., data will be generated by the participants in the educational activities (e.g., students, tutors, researchers, project members or professionals), by researchers who will process and analyse the activities and data created in the context of the educational scenarios implementation, by sensors or other artefacts producing data in the context of the pilot and implementation phases, as well as by the UMI-Sci-Ed platform itself. Furthermore, it is clear that the generated data will follow different formats and standards due to their diverse nature and post-processing requirements. Almost all types of data will be managed through the UMI-Sci-Ed platform 5 that has been created in order to support the activities that will be realised in the context of the project. Aiming to provide easy access and processing capabilities, the consortium revised the original dataset categories with respect to the data that will be generated throughout the project from four (4) to six (6). In addition to the datasets described in DMP v1.0, two more datasets have been introduced, namely _Project working data_ and _Other data_. The main idea behind this categorisation is the provision of a simple structure while keeping the major relevant data collections compact in terms of the origin of creation and the post-processing that will be applied to the data collections.
Specifically, the six datasets are:

* **_DS1: Educational scenarios derivatives._** This is a family of datasets that will contain all the raw or processed data that will be generated during the execution of the pilots as learning artefacts, mainly by the students and the tutors. Each educational scenario will eventually have its own dataset.
* **_DS2: Research data._** This dataset will include all the quantitative and qualitative data (pre- and post-processed questionnaires, reflections, evaluations, etc.) that will support educational research.
* **_DS3: Educational material._** This dataset will include all the educational material that will be developed by the partners, tutors, professionals, etc. that will participate in the preparation of the educational scenarios and the pilots.
* **_DS4: Platform and market analysis data._** This dataset will include information about the usage of the platform that will be developed in the context of the project, in the form of log files and evaluation forms, as well as overall evaluation data that will be used for market analysis and exploitation.
* **_DS5: Project working data._** This dataset includes working documents in the context of the project, such as presentations, draft deliverable documents, deliverable review documents, etc.
* **_DS6: Other data._** This dataset includes other types of information that will be gathered in the UMI-Sci-Ed platform throughout the project that cannot be classified in one of the above datasets.

The UMI-Sci-Ed platform that hosts the above-mentioned data has been organised in specific content types to support the activities of the project, as described in D4.2. These include _UMI Scenario_, _UMI Project_, _Group Article_, _Repository Entry_ 6 , _Blog_, _Survey_, _Wiki_, _Forum Topic_, _Poll_, and _Event_. In addition, a _Filedepot_ is included in the platform that supports information exchange among project partners.
In the following section, each of the six datasets is described separately, in relation to the above-mentioned content types, following the guidelines of the H2020 template for DMPs [2].

**1.2 Datasets description**

**1.2.1 DS1: Educational scenarios derivatives**

This is a family of datasets containing the raw, or assembled and processed, intermediate or final data that will be created by the participants as learning artefacts during the pilots that will take place within WP6 (Tasks 6.2 and 6.3). This kind of data will mainly be generated and collected in the context of UMI projects (_UMI Project_ content type) that will implement the predefined educational scenarios (UMI scenarios), which form the basis of the above-mentioned activities. However, relevant data can also be uploaded in the _Blog_, _Forum Topic_, _Survey_, etc. content types that are related to UMI projects. The data of this dataset will be generated manually by students and tutors, or automatically by the UDOO devices themselves running pieces of code created by the participants. UMI-Sci-Ed partners and researchers may also create data, e.g., during demonstration or dissemination activities. The dataset may comprise software code (e.g. programs for managing experiments, programs for building hardware), data produced by running experiments or building solutions, student reports, etc., following several diverse formats, e.g., text, images, audio, video, scripts, numerical (spreadsheet) and binary sensor data, etc. Open and/or well-established format standards will be considered and used as first options to guarantee easy exchange and processing using non-proprietary software. Customised format standards might also be used if necessary for the execution of the UMI projects; in such a case, documentation and/or open-source software tools to access the data will be released too.
Available standards describing raw data from sensors, such as the Sensor Model Language (SML) and the Sensor Observation Service (SOS), will be examined for adoption in the project. The data size cannot be predicted yet, since it depends on the level of participation of students and their tutors. However, there is no restriction posed by technical or administrative issues. Data of this dataset may be categorised as raw data and processed data. Raw data represent all the artefacts of the students’ activities that take place with the use of UDOO devices. Processed data represent all material (documents, statistics, reports, etc.) that will be generated by the students after processing the raw data, in cases where such processing is required in the UMI scenarios and UMI projects. The main purpose of these datasets is to support the activities of the students and tutors during the realisation of UMI projects and to allow group and CoP interaction when working on the same UMI project. In addition, specific samples of these data may be used to support research reasoning and outcomes, or proof of concept and applicability of the UMI-Sci-Ed platform and hardware devices. Therefore, they are closely related to objectives O1, O3 and O4, and they will be used as input to the assessment of the training methodology, the UMI-Sci-Ed platform, and the UDOO hardware kit, respectively, which are under development in the context of the project.

**1.2.2 DS2: Research data**

This is a dataset containing the research data that will be collected during the realisation of the UMI projects (implementation of the UMI scenarios) in the classroom (T6.2 and T6.3), the CoPs study and evaluation phases (T2.3 and T2.4), as well as the overall evaluation phase (T7.2 and T7.3). The data will mainly be collected through the _Survey_ or _Poll_ content types of the platform and may be exchanged (in aggregated and anonymised format) through the _Group Article_ and _Repository Entry_ content types and the _Filedepot_.
_UMI Scenario_ contents can be used in research activities as well. Each pilot study will produce a number of different files including both qualitative and quantitative data. Indicative information to be collected is as follows:

* Pilot/workshop information (goals, hardware kits, programming language, group formation, success criteria, Gantt charts, etc.)
* Project partners responsible for the pilot/workshop, demographic information of participants (age, gender, etc.), tutors/mentors information.
* Video/audio recordings, interaction logs and observations.
* Student assessments.
* Tutors’ reflections.
* Corporate sector feedback.
* Interviews with the participants (students, tutors, parents, etc.).
* Different questionnaires addressed to participants (students, tutors, researchers) for collecting qualitative and quantitative data before, during and after the project activities performed in the context of the pilots/workshops.

The above-mentioned data will mainly be used in the context of the research activities of UMI-Sci-Ed, including assessment of novel educational services, evaluation of software and hardware tools, as well as dissemination of the project ideas and results, as described in objectives O1, O3, O4, and O5 of the project, respectively. Therefore, they will support and accompany publications (which by default are included in this dataset as well). In this context, these data will be useful for researchers within the framework of UMI-Sci-Ed, as well as for external researchers active in similar or complementary research areas. Existing data from previous research efforts (outside the context of the project) may be used and analysed in order to provide comparative results. Several data formats will be necessary to support the research and evaluation process including, but not restricted to, text, spreadsheets and audio/video.
Open and/or well-established format standards will be considered and used as first options to guarantee easy exchange of the information. An estimation of the generated data volume cannot be made, since it depends on the activities of the project partners. However, there is no restriction posed by technical or administrative issues.

**1.2.3 DS3: Educational material**

This dataset contains the educational material and resources that will be developed in the context of tasks T2.2, T5.2 and T6.1 in order to support the pilots. The dataset will include two general types of educational material, i.e., material to be used by the tutors and material to be used by the students. The material will either be developed by the consortium partners participating in educational activities, external stakeholders and professionals, as well as the tutors themselves, or it is already available (from organisations outside the framework of UMI-Sci-Ed) and will be redistributed, mainly in the form of URLs. The educational material will mainly be available through the _UMI Scenario_, _UMI Project_, _Repository Entry_, _Forum Topic_, and _Wiki_ content types of the UMI-Sci-Ed platform. While the educational material is mainly intended to be used by the target groups of the project, i.e., tutors and students in the context of UMI-Sci-Ed, as well as the general public, some material will support research efforts and publications as well. Indicative information to be generated and distributed includes:

* Educational (UMI) scenarios’ detailed descriptions, including activity plans, session/lesson plans, hardware/software to be used, etc.
* Supporting documentation and presentations for UDOO devices.
* Guidelines, templates, handouts, instructive videos, presentations and documentation for the scientific topics under consideration in the UMI Scenarios and UMI Projects.
From the above description, it is evident that this dataset is mainly related to the O1 and O2 UMI-Sci-Ed objectives, which deal with novel educational services development and career consultancy services support. The data generated and distributed can be of audio, video, document, presentation and other types. Common and well-established formats will be used to guarantee easy exchange and processing. The expected size of the data cannot be estimated, since it depends on the productivity of the project participants and partners. However, there is no restriction posed by technical or administrative issues.

**1.2.4 DS4: Platform and market analysis data**

The purpose of this dataset is to provide information about the usage of the UMI-Sci-Ed platform as well as a general evaluation of the project and the offered services. Data to be collected can be categorised in two major parts, that is, log files and evaluation data. Both will include information regarding:

* user info (e.g., ID or name for registered users, submitted info by unregistered users)
* user category (according to the actors that will be defined in the specifications of the platform)
* date/time
* visiting duration
* visited pages/tools
* platform and overall services evaluation data (through evaluation forms that will be defined in the specifications of the platform) as well as questionnaires and interviews for the services offered in general by the project.

Platform usage log data will be collected automatically by the platform via proper software modules as text files (.csv or .xml), while evaluation data will be provided by the interaction of the users with the platform through proper evaluation forms and questionnaires that will be processed to provide spreadsheets or numerical tables. Furthermore, interviews (as audio or video files) may be required to collect qualitative data.
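As an illustration of how such usage log files could later be processed, the sketch below aggregates per-user-category visit statistics from a CSV log. The column names (`user_id`, `user_category`, `date_time`, `duration_sec`, `page`) are assumptions for illustration only; the actual log schema will be fixed in the platform specifications.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample of a platform usage log; field names are
# illustrative, not the final UMI-Sci-Ed log schema.
SAMPLE_LOG = """user_id,user_category,date_time,duration_sec,page
u01,student,2017-03-01T10:00:00,120,/umi-scenario/5
u02,tutor,2017-03-01T10:05:00,300,/umi-project/2
u03,student,2017-03-01T11:00:00,60,/forum/12
"""

def visits_per_category(csv_text):
    """Aggregate visit counts and total visiting duration per user category."""
    stats = defaultdict(lambda: {"visits": 0, "total_sec": 0})
    for row in csv.DictReader(io.StringIO(csv_text)):
        cat = stats[row["user_category"]]
        cat["visits"] += 1
        cat["total_sec"] += int(row["duration_sec"])
    return dict(stats)

print(visits_per_category(SAMPLE_LOG))
```

Aggregates of this kind (visits and durations per actor category) are exactly the inputs needed for the platform-visibility and market-analysis reporting described above.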
These data will be properly aggregated in order (a) to get feedback on the quality of services that the platform and the project (as a whole) offer to the users, (b) to measure the visibility of the project, and (c) to be combined with other data in order to draw conclusions in educational research and to drive market analysis and subsequent exploitation plans in the context of Task 8.3. Therefore, this dataset will be useful for the consortium in the context of the O3 and O5 UMI-Sci-Ed objectives, which are related to the supporting software tools assessment and the dissemination of the project ideas and results, respectively. The data size cannot be predicted, since it depends on the visibility and utilisation of the UMI-Sci-Ed platform throughout and after the project. However, there is no restriction posed by technical or administrative issues.

**1.2.5 DS5: Project working data**

This dataset includes working documents in the context of the project, such as draft deliverable documents, deliverable evaluation forms, deliverable revisions, progress reports, presentations, meeting minutes, etc., as well as other auxiliary material, that are created and exchanged through the UMI-Sci-Ed platform _Filedepot_ by the project partners in order to fulfil the typical project requirements. Thus, this dataset is related to all the objectives of the project and is used for monitoring the activities and progress of the project as a whole. It mainly consists of text files or presentations; however, other data types may occasionally be used, in well-established formats that have been agreed among the partners.

**1.2.6 DS6: Other data**

This dataset includes other types of information that will be gathered in the UMI-Sci-Ed platform by any registered user throughout the project and cannot be categorised in one of the _DS1_–_DS5_ datasets, while still being relevant and useful for the project, e.g., forum and blog discussions, announcements, etc.
This dataset includes, in general, auxiliary data that are exchanged among the platform users in the context of educational scenarios/projects design and implementation, and can be of any type. Their size depends on the utilisation of the platform throughout the project. They are not directly connected with the objectives of the project; however, post-processing of them may be useful in qualitative analyses that could be used in the research efforts of the project partners.

**1.3 Datasets and UMI-Sci-Ed platform relations**

The plurality and diversity of the data that will be generated, gathered and exchanged through the UMI-Sci-Ed platform, as described in the previous section, make their classification and organisation a rather involved task. Furthermore, the platform, tools and methods are expected to be updated, expanded or reduced in several aspects in order to support effectively the CoPs’ and educational activities throughout the duration of the project. Therefore, while the described datasets seem to cover the data that will be gathered in the framework of the project, the relations between the platform content types and the datasets described in sections 1.1 and 1.2, i.e., in which content types the _DS1_–_DS6_ data are expected to be uploaded, may change. In order to ease the presentation and keep track of the changes that may be necessary in the future, Table 1 provides the platform content types that will be used for collecting or exchanging data of the defined datasets.
<table>
<tr> <th> **Content Type** </th> <th> **DS1: Educational scenarios derivatives** </th> <th> **DS2: Research data** </th> <th> **DS3: Educational material** </th> <th> **DS4: Platform and market analysis data** </th> <th> **DS5: Project working data** </th> <th> **DS6: Other Data** </th> </tr>
<tr> <td> **UMI Scenario** </td> <td> </td> <td> × </td> <td> × </td> <td> × </td> <td> </td> <td> </td> </tr>
<tr> <td> **UMI Project** </td> <td> × </td> <td> </td> <td> × </td> <td> × </td> <td> </td> <td> </td> </tr>
<tr> <td> **Group Article** </td> <td> </td> <td> × </td> <td> </td> <td> × </td> <td> </td> <td> × </td> </tr>
<tr> <td> **Repository Entry** </td> <td> </td> <td> × </td> <td> × </td> <td> × </td> <td> </td> <td> </td> </tr>
<tr> <td> **Forum Topic** </td> <td> × </td> <td> </td> <td> × </td> <td> × </td> <td> </td> <td> × </td> </tr>
<tr> <td> **Blog** </td> <td> </td> <td> </td> <td> </td> <td> × </td> <td> </td> <td> × </td> </tr>
<tr> <td> **Survey** </td> <td> × </td> <td> × </td> <td> </td> <td> × </td> <td> </td> <td> </td> </tr>
<tr> <td> **Poll** </td> <td> × </td> <td> × </td> <td> </td> <td> × </td> <td> </td> <td> </td> </tr>
<tr> <td> **Wiki** </td> <td> </td> <td> </td> <td> × </td> <td> × </td> <td> </td> <td> </td> </tr>
<tr> <td> **Filedepot** </td> <td> </td> <td> × </td> <td> </td> <td> × </td> <td> × </td> <td> </td> </tr>
</table>

**Table 1 Datasets and UMI-Sci-Ed platform content types relation**

**2. FAIR Data**

**2.1 Making data findable, including provisions for metadata**

The UMI-Sci-Ed platform is the main place where both the raw and the processed data of the project are collected and presented. Additionally, publications and related research data will be uploaded to Zenodo ( _https://zenodo.org/_ ) in order to expand visibility.
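Zenodo deposits can be created either manually or through its REST deposit API. As a minimal sketch (not project code), the following builds the metadata payload that a deposit request, e.g. a POST to Zenodo's `/api/deposit/depositions` endpoint, would carry; the title, creator and keywords shown are placeholders, not actual UMI-Sci-Ed records.

```python
import json

def build_deposit(title, description, creators, keywords):
    """Assemble a Zenodo deposit metadata payload for an open dataset.

    Only standard Zenodo metadata fields are used (title, upload_type,
    description, creators, keywords, access_right, license); the values
    passed in are placeholders for illustration.
    """
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": [{"name": c} for c in creators],
            "keywords": keywords,
            "access_right": "open",
            "license": "cc-by",
        }
    }

payload = build_deposit(
    "UMI-Sci-Ed pilot survey data (anonymised)",   # placeholder title
    "Aggregated pre-/post-questionnaire responses from a local pilot.",
    ["Doe, Jane"],                                 # placeholder creator
    ["UMI", "IoT", "science education"],
)
print(json.dumps(payload, indent=2))
```

Submitting such a payload (with an access token) creates a deposition to which data files can be attached; publishing it mints the DOI mentioned in the next paragraph.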
All data of the content types described in Chapter 1 that are uploaded to the project platform, including attachments, have a unique and persistent identifier of URI type that is automatically generated by the platform. Research data uploaded to Zenodo will also have a unique DOI provided by the Zenodo platform. A first set of metadata, focusing on educational research, has been declared for each of the content types of the platform, while general metadata will also be generated. The specific set for each content type has been decided as a compromise between a fully descriptive set of metadata for educational research and low requirements for user input, so as to avoid end-user dissatisfaction that would discourage use of the platform. Therefore, the _UMI Scenario_ and _UMI Project_ content types, which are primarily developed by project partners, require a significant set of metadata (provided by user input) during the setup of a scenario or a project. In all other cases, user input related to metadata has been minimised. Following the same principle, attachments in the platform are classified into two major categories: attachments containing educational material that supports educational scenarios and activities under the _UMI Scenario_ content type, and attachments for all other content types. In the former case, educational material is classified into five categories, namely Source Code, URL, Digital Document, Media Object and Rich Text, each accompanied by a detailed set of metadata generated either by user input or automatically by the platform. In the latter case, attachments are categorised as file attachments, images/photos and YouTube videos, and are accompanied by a minimal set of metadata generated by user input in order to ease file and media exchange. 
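The two-tier attachment classification described above can be illustrated with a small routing sketch. This is a hypothetical illustration, not the platform's actual logic: the five category names follow the educational-material types named in the text, while the MIME-type mapping is an assumption.

```python
# Hypothetical sketch: route an attachment into one of the five
# educational-material categories used under the UMI Scenario content
# type (Source Code, URL, Digital Document, Media Object, Rich Text).
# The MIME-type sets below are illustrative assumptions.

def classify_attachment(name: str, mime: str = "") -> str:
    if name.startswith(("http://", "https://")):
        return "URL"
    source_code = {"text/x-python", "text/x-csrc", "application/javascript"}
    documents = {"application/pdf", "application/msword",
                 "application/vnd.ms-excel"}
    media = {"image/png", "image/jpeg", "video/mp4", "audio/mpeg"}
    if mime in source_code:
        return "Source Code"
    if mime in documents:
        return "Digital Document"
    if mime in media:
        return "Media Object"
    return "Rich Text"  # fallback: inline rich-text parts

print(classify_attachment("https://example.org/guide"))      # URL
print(classify_attachment("report.pdf", "application/pdf"))  # Digital Document
```

Each category would then carry its own metadata set, as described in the text.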
The metadata schema for educational research in the framework of the project will follow the Learning Resource Metadata Initiative (LRMI) version 1.1 of the Dublin Core Metadata Initiative (DCMI) 7 , properly adapted and expanded to fulfil the needs of the project. Table 2 provides the properties of _Schema.org/CreativeWork_ adopted from the LRMI specification, the new properties introduced, the metadata collection method in the platform and some clarifying comments. Table 3 summarises the use of the _AlignmentObject_ type and its properties, providing specific examples and value sets that will be used as a basis for detailed metadata in the context of educational research. A new set of values has been defined, in addition to the values recommended by LRMI 1.1, for the _alignmentType_ property. These will be used primarily with the _UMI Scenario_ and secondarily with the _UMI Project_ content types and are expected to enhance the discoverability and reusability of UMI-Sci-Ed data. The preliminary set of metadata specified for the most important content types of the project platform is presented in detail in Appendix A. All metadata are saved in the platform following the Schema.org 8 structure. Metadata and their value sets will be finalised after the local pilots’ period (T6.2), based on the evaluation of the data gathered. <table> <tr> <th> **Property** </th> <th> **LRMI** </th> <th> **New** </th> <th> **Generation method** </th> <th> **Comments** </th> </tr> <tr> <td> **educationalAlignment** </td> <td> × </td> <td> </td> <td> User input </td> <td> This property is of _schema.org/AlignmentObject_ type which is further described in Table 3. </td> </tr> <tr> <td> **educationalUse** </td> <td> × </td> <td> </td> <td> Automatically by the platform </td> <td> “Educational scenario plan” and “Educational scenario implementation” are the values used for _UMI Scenario_ and _UMI Project_ content types in the context of UMI-Sci-Ed. 
The value set may be revised according to future UMI-Sci-Ed needs. </td> </tr> <tr> <td> **timeRequired** </td> <td> × </td> <td> </td> <td> User input </td> <td> </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> × </td> <td> </td> <td> Automatically by the platform </td> <td> Value depends on the _educationalRole_ value and may be “14-16” for “learner” or “22-” for other values. </td> </tr> <tr> <td> **interactivityType** </td> <td> × </td> <td> </td> <td> User input or automatically by the platform </td> <td> Generation method depends on the platform content type. </td> </tr> <tr> <td> **learningResourceType** </td> <td> × </td> <td> </td> <td> User input or automatically by the platform </td> <td> Specific content types have predefined values. </td> </tr> <tr> <td> **artefactType** </td> <td> </td> <td> × </td> <td> User input </td> <td> A new property that provides means for characterisation of learning artefacts. </td> </tr> <tr> <td> **licence** </td> <td> × </td> <td> </td> <td> User input </td> <td> This corresponds to the LRMI property useRightsUrl as defined in Schema.org </td> </tr> <tr> <td> **educationalRole** </td> <td> × </td> <td> </td> <td> User input </td> <td> This is a property of _schema.org/EducationalAudience_ type and includes the values “learner”, “teacher”, “author”, “manager”, as specified in IEEE LOM. It is mainly used for _UMI Scenario_ and _UMI Project_ content types. </td> </tr> </table> # Table 2 LRMI properties adopted in UMI-Sci-Ed The _UMI Scenario_ , _UMI Project_ , _Group Article_ , _Repository Entry_ , _Blog_ , _Wiki_ and _Forum Topic_ content types include a mandatory field for Key Terms or Tags. All of these will be mapped to the _schema.org/CreativeWork:keywords_ property for metadata generation. Therefore, all datasets (see Table 1) will be accompanied by keywords. Keywords are available as a list (autocomplete feature) while typing new key terms or tags in the aforementioned content types, in order to enhance reusability. 
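As an illustration of the mapping in Table 2, a minimal _UMI Scenario_ metadata record in Schema.org/LRMI style could be serialised as follows. This is a sketch only: all values are hypothetical, and the authoritative field set is the one given in Table 2 and Appendix A.

```python
import json

# Hypothetical UMI Scenario record following the Schema.org/CreativeWork
# + LRMI mapping of Table 2. Values are illustrative, not a platform export.
record = {
    "@context": "http://schema.org",
    "@type": "CreativeWork",
    "name": "My Social Panel",
    "inLanguage": "en",
    "keywords": ["IoT", "sensors"],
    "educationalUse": "Educational scenario plan",  # fixed platform value
    "typicalAgeRange": "14-16",                     # fixed for "learner"
    "timeRequired": "PT2H",                         # ISO 8601 duration
    "audience": {
        "@type": "EducationalAudience",
        "educationalRole": "learner",
    },
    "educationalAlignment": [{
        "@type": "AlignmentObject",
        "alignmentType": "umiDomain",
        "educationalFramework": "Horizon 2020 UMI-Sci-Ed",
        "targetName": "IoT",
    }],
}

print(json.dumps(record, indent=2))
```

Serialising the record as JSON keeps it directly indexable by the platform search and exportable alongside Zenodo uploads.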
Keywords are searchable through the main search function of the platform. Versioning of the _UMI Scenario_ , _UMI Project_ and _Wiki_ content types is preserved automatically by the platform; in addition, users can record a version explicitly for these content types. For file attachments, users are advised not to delete (replace) older versions of the same file, in order to keep track of file changes. Decimal numbering with two numbers, preceded by “ _v_ ” and separated by “ _._ ”, will be followed, indicating major and minor changes. For file attachments, the version may be placed at the end of the file name, optionally followed by a language indication, e.g., “ _UMI-SCI-Ed CTI Scenario My Social Panel v3.1 en.docx_ ”. <table> <tr> <th> **alignmentType value** </th> <th> **LRMI** </th> <th> **New** </th> <th> **educationalFramework value or example** </th> <th> **targetName value set or example** </th> </tr> <tr> <td> **“learningOutcomes”** </td> <td> </td> <td> × </td> <td> Ex: “Revised Bloom Taxonomy” </td> <td> Ex: “Describe a simple local area computer network” </td> </tr> <tr> <td> **“educationalLevel”** </td> <td> × </td> <td> </td> <td> Ex: “Greek upper secondary school” </td> <td> Ex: “year 10 (on K-12 scale)” </td> </tr> <tr> <td> **“educationalSubject”** </td> <td> × </td> <td> </td> <td> Ex: “Greek upper secondary school” </td> <td> Ex: “Informatics’ Applications (Curriculum 932/14-4-2014/τΒ)” </td> </tr> <tr> <td> **“umiDomain”** </td> <td> </td> <td> × </td> <td> “Horizon 2020 UMI-Sci-Ed” </td> <td> “Ubiquitous”, “Mobile”, “IoT” </td> </tr> <tr> <td> **“educationalScenarioOrientation”** </td> <td> </td> <td> × </td> <td> “Horizon 2020 UMI-Sci-Ed” </td> <td> “Acquire new Knowledge”, “Develop new Skills”, “Attain Attitudes” </td> </tr> <tr> <td> **“pedagogicalTheory”** </td> <td> </td> <td> × </td> <td> “Horizon 2020 UMI-Sci-Ed” </td> <td> Ex: “Active Learning” </td> </tr> <tr> <td> **“requires”** </td> <td> × </td> <td> </td> 
<td> “hardware resources”, “software resources”, “other resources” </td> <td> Ex: “Arduino IDE” </td> </tr> <tr> <td> **“activityType”** </td> <td> </td> <td> × </td> <td> “Horizon 2020 UMI-Sci-Ed” </td> <td> “Introductory”, “Core”, “Concluding” or “Auxiliary” </td> </tr> <tr> <td> **“learningObjectives”** </td> <td> </td> <td> × </td> <td> Ex: “Revised Bloom Taxonomy” </td> <td> Ex: “List UMI applications” </td> </tr> <tr> <td> **“difficulty”** </td> <td> </td> <td> × </td> <td> “IEEE LOM 5.8” </td> <td> “Easy”, “Intermediate” or “Advanced” </td> </tr> </table> # Table 3 Properties of Schema.org/Intangible/AlignmentObject used in UMI-Sci-Ed **2.2 Making data openly accessible** All data will be generated or collected via the UMI-Sci-Ed platform and will be made openly available to registered users, with the following exceptions for legal and contractual reasons: 1. Raw educational research data (part of the _DS1_ dataset), i.e., survey and poll answers, interviews, observation notes, etc., that include or may collectively reveal the identity of student participants. This kind of data will be properly anonymised before use, retaining only information on demographic characteristics (e.g., age, gender) of students (see Chapter 5). 2. Raw platform use log data ( _DS4_ dataset) or platform survey and evaluation data that include sensitive personal information of users. These data will be accessible only to the platform administrator and available only to officially authorised persons. They may be released to the consortium partners for processing and evaluation after anonymisation, or with the exclusive permission of the steering committee upon a fully justified request. These data are not going to be shared outside the project consortium in any case (see also Chapter 5). 3. Files uploaded to _Filedepot_ , which are by default created and exchanged between the project partners for internal use only. 4. 
Any data that are related to business activities and goals of corporate consortium partners. Furthermore, registered users who participate in a Group (supporting CoPs’ activities) have the capability to restrict specific types of platform content to Group visibility only, as described in deliverable “D4.2 – Intermediate version of the system”. All data will be available through the UMI-Sci-Ed platform after registration. In addition, research or platform data ( _DS2_ and _DS4_ datasets) supporting publications (as well as the publications themselves) will be openly available in Zenodo, after proper anonymisation, as described in exceptions 1 and 2 above. No special methods or tools are necessary to access the data, except for the UDOO sensor data that will be gathered when implementing educational scenarios; the necessary software will be available as open-source code and will be accessible via a dedicated link on the UMI-Sci-Ed platform. All data will be deposited on CTI servers dedicated to the project and accessed through the UMI-Sci-Ed platform. In addition, publications and accompanying data will be deposited to Zenodo and other institutional/national databases. Access to the UMI-Sci-Ed platform is provided according to deliverable “D4.1 – System Requirements and Architectural Design”. The platform administrator can additionally provide bulk access to data (either via the platform or directly on CTI servers) to consortium members, the project officer and the reviewers, after registration to the platform and with the consent of the project coordinator. All registered users are uniquely identifiable by the platform. **2.3 Making data interoperable** The data produced during the project and made publicly available will follow well-established and widely used format standards, open wherever possible, with preference for the .csv, .pdf, .xml, .png and .jpeg formats. 
This will allow reusability by researchers, institutions and organisations not participating in the project consortium. The data and metadata vocabularies described in section 2.1, especially those for educational research purposes, together with the commonly used metadata for platform content as described in Schema.org, will offer enhanced interoperability and inter-disciplinary use. A preliminary set of the planned metadata and vocabularies for the content types of the platform is provided in detail in Appendix A. **2.4 Increase data re-use (through clarifying licences)** Data produced by the project will be available through the platform or published on the Zenodo platform under an appropriate Creative Commons (CC) 9 license. In this context, the _UMI Scenario_ , _UMI Project_ and _Repository Entry_ content types include a license field, filled by the user from a list of available CC licensing types, that applies to all the content included in them as well. Research data and publications will follow the same procedure in Zenodo. All data that are viewable by registered users of the platform, uploaded to Zenodo or accessible via the project web site 10 are considered available for re-use by third parties from the moment they are made public. No embargo period will be applied. Re-use of data is restricted only when they are related to business activities and goals of corporate consortium partners. Data will remain re-usable for a minimum of two (2) years after the end of the project. By the end of the project, the project partners will decide which data will be preserved for a longer period. Raw data processing, aggregation and presentation for research activities will take place following the standardised methodologies and tools described in detail in deliverable “D7.1 – Evaluation concept-framework and piloting”, in order to ensure the quality of processed data. 
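The kind of anonymising aggregation applied to raw data before sharing (dropping identifying fields and keeping only demographic counts) can be sketched as follows. The column names are hypothetical; the actual methodology is the one defined in deliverable D7.1.

```python
from collections import Counter

# Hypothetical raw survey rows: identifying fields (here "student_name")
# are dropped and only demographic aggregates are kept before any sharing.
raw_rows = [
    {"student_name": "A.", "gender": "F", "age": 15, "answer": "yes"},
    {"student_name": "B.", "gender": "M", "age": 14, "answer": "no"},
    {"student_name": "C.", "gender": "F", "age": 15, "answer": "yes"},
]

def aggregate(rows):
    # Count responses per (gender, answer); no identifying field survives.
    return Counter((r["gender"], r["answer"]) for r in rows)

print(aggregate(raw_rows))  # counts: ('F', 'yes') -> 2, ('M', 'no') -> 1
```

Only such aggregated counts, never the raw rows, would leave the platform.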
Data related to deliverables and project reporting follow the project’s predefined document-control procedure, which includes: (a) use of predefined and agreed templates, (b) a review and revision process and (c) approval by the steering committee. Raw and processed data will be kept in the platform and archived on CTI servers following standardised procedures that guarantee data integrity. A short description of the infrastructure, software and procedures used can be found in Chapter 4. 3. **Allocation of resources** The costs for making the data FAIR have been pre-allocated as part of the Horizon 2020 grant in the context of Work Package 1 (Project management). No extra costs will be necessary for platform data curation and preservation during the project and for a period of two (2) years after the end of the project. Research data uploaded to Zenodo will not require extra costs. The project partners (through the Steering Committee) will decide which platform data will require longer-term preservation. These data may be uploaded to Zenodo as well (depending on data volume) at no cost; otherwise, the extra costs will be borne by CTI, which is responsible for the maintenance of the platform and the project servers. CTI is responsible for data management. The platform administrator and the project coordinator (both at CTI) are the key persons for data management during the project. Furthermore, the coordinator of each partner is responsible for any data (intended to be minimal or none) that are kept locally by the project partners outside the project platform. 4. **Data security** All data will be stored and preserved on CTI project servers. The UMI-Sci-Ed platform runs on a server with the Ubuntu 16.04 LTS operating system installed. LTS stands for “Long Term Support”, meaning that Canonical (the company behind Ubuntu) provides support (security updates, etc.) for at least five years. 
As a web server, we have installed the latest version of the Apache web server with PHP 7.0, and as the database server the UMI-Sci-Ed platform stores its data in the latest version of MySQL Server Community Edition. For better resource management and allocation, we use virtualization technologies on our servers; for this project, the UMI-Sci-Ed server is a virtual machine that runs on an ESXi infrastructure. The bare-metal ESXi server runs on a Dell R430 machine with redundant power supplies and redundant hard disk drives (SAS HDD – RAID1). The physical location of the server is CTI’s computer room. Our computer room is equipped with UPS systems, systems that maintain the temperature and humidity levels inside the room, and special fire suppression systems (waterless fire suppressant). Finally, only authorized personnel can access the computer room, using personal ID cards; every entrance to the computer room is logged and video recorded 24/7. For the backup of the UMI-Sci-Ed server, we use one extra, independent server (the backup server). Our backup plan is executed daily at 2:00 and we keep 60 restore points, which means that we can restore any instance of the UMI-Sci-Ed server from the past 60 days. All backups are created and transferred to the backup server via a secure connection. The UMI-Sci-Ed development team checks the servers’ system logs on a regular basis and ensures that all necessary system updates are installed on our servers. Furthermore, CTI’s cyber-security department monitors the internet traffic from/to CTI’s network daily to prevent malicious actions, attempts at unauthorized access, etc. Finally, as noted in Chapter 3, for preservation longer than two years after the end of the project, properly selected sets of data (including research data) will be deposited to Zenodo. 5. **Ethical aspects** The project meets the national legal and ethical requirements of the partners’ countries. The project will not study vulnerable populations. 
No sensitive personal data will be recorded. As we are interested in UMI/STEM use and the career potential of both genders, the gender of participating students and tutors will probably be recorded, but only in anonymous questionnaires, together with general descriptors such as country, school name and geographic region, as well as opinions about project actions, etc. Even though not sensitive, all collected data will be treated with care and respect, and only the results of their processing will be published. However, whenever personal data collection is required, it will be carried out with the explicit written consent of the individuals involved. Especially for children (who are unable to give informed consent), the consortium has already ensured compliance with the ethical standards and guidelines of Horizon 2020 (Annex 1, DoA, part B, section 5.1, p. 76). The description and justification of the methodology adopted and the composition of the sample are provided in Chapter 2 (sections 2.1 and 2.2) of deliverable “D9.2 – H - Requirement No. 1”. Furthermore, detailed information on the informed consent procedures that will be implemented regarding the participation of humans, as well as clarifications on how assent will be ensured for children, is provided in section 2.3 of D9.2. More specifically, among the datasets under consideration in Chapter 1, _DS3_ and _DS5_ , educational material and project working data, do not, by definition, include personal student data. Other auxiliary data falling under _DS6_ are likewise unlikely to carry personal student data. Of major importance for ethical aspects are the data collected as part of the _DS2_ and _DS4_ datasets, i.e., research and platform data, respectively. For _DS2_ , all data will be stored without any reference to the student interacting with the platform and will be password protected. 
In the case of tutors collecting or generating information (interviews, assessments, reflections, etc.), the information will be stored directly in the platform via appropriate forms, with no reference to personal student data. In all the abovementioned cases, only demographic student data (e.g., gender, age) will be stored and will be password protected. Following the same strategy, _DS4_ data, including platform log files, will be encrypted, statistically processed and aggregated to ensure anonymity. Aggregated and anonymised data may be shared within the consortium and may be made publicly available, when they support research publications, via the CTI platform and Zenodo. Finally, for _DS1_ data, which include educational derivatives, the original intention of the project was to anonymise all learning artefacts created by students, with reference only to groups and not to individuals. In this context, artefacts that inevitably contain personal student data (i.e., are not fully anonymised) will be marked as sensitive and will be password protected and/or encrypted. However, this last case will be further examined by the consortium, in cooperation with local tutors, in the context of providing recognition (visible to the members of the CoPs) to active students. Further information on ethical aspects and the related procedures can be found in deliverables “D9.1 – POPD - Requirement No. 2” and “D7.1 – Evaluation concept-framework and piloting”. Finally, the educational material will include a section about plagiarism and codes of ethics, with plain advice on how students can properly avoid plagiarism by referencing their sources. 6. **Other** No other national/funder/sectorial/departmental procedure for data management will be used. 
**Appendix A: Metadata tables for UMI-Sci-Ed platform content types** This section provides the preliminary set of metadata that has been specified for the _UMI Scenario_ , _UMI Project_ , _Group Article_ , _Wiki_ , _Blog_ , _Forum Topic_ and _Repository Entry_ content types of the platform. Furthermore, the set of metadata kept for _Files and Media_ attachments to the above content types (excluding _UMI Scenario_ , which includes a special set of metadata for attachments) is also given. It has to be noted that the tables given below may undergo changes after the local small-scale pilots, in order to better support the objectives of the project and the usability of the project platform. Table 4 shows the metadata specified for the main parameters of the _UMI Scenario_ content type, together with the mandatory attached file. The table includes the metadata property and the type of metadata according to Schema.org, the value space, the platform field that the user utilises for input and some comments (where necessary). An empty platform field indicates that the metadata are created automatically, without any user input. Each educational scenario (corresponding to the _UMI Scenario_ content type) includes a set of activities that are supported by their own set of metadata. The set of metadata defined for an activity is presented in Table 5. Each activity can further include several types of educational material, i.e., pieces of source code, URLs, digital documents (text, presentations or spreadsheets), media objects, or rich text parts. The corresponding set of metadata is not included here for brevity. Tables 6, 7, 8, 9, 10 and 11 present the metadata specified for the _UMI Project_ , _Group Article_ , _Wiki_ , _Blog_ (including both the blog and a blog comment), _Forum Topic_ (including both the forum topic and a forum post) and _Repository Entry_ content types. 
_Files and Media_ attachments may be files, YouTube videos (via URL) and images or photos. Table 12 provides the metadata that have been specified for file attachments. <table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **alternateName** </td> <td> CreativeWork </td> <td> Text </td> <td> Abbreviation </td> <td> </td> </tr> <tr> <td> **version** </td> <td> CreativeWork </td> <td> Text </td> <td> Version </td> <td> </td> </tr> <tr> <td> **inLanguage** </td> <td> CreativeWork </td> <td> Text, as defined in IETF BCP47 </td> <td> Language </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Synopsis </td> <td> </td> </tr> <tr> <td> **keywords** </td> <td> CreativeWork </td> <td> Text </td> <td> Key Terms </td> <td> </td> </tr> <tr> <td> **educationalRole** </td> <td> Intangible > Audience > EducationalAudience </td> <td> "teacher", "learner", "author", "manager" </td> <td> Audience </td> <td> </td> </tr> <tr> <td> **audienceType** </td> <td> Intangible > Audience </td> <td> "public" or UMI Group </td> <td> Group content visibility, Groups audience </td> <td> </td> </tr> <tr> <td> **targetName, alignmentType = "learningOutcomes"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Expected Learning Outcomes </td> <td> alignmentFramework = "Revised Bloom Taxonomy" </td> </tr> <tr> <td> **timeRequired** </td> <td> CreativeWork </td> <td> ISO 8601 </td> <td> Expected duration </td> <td> Months, days, hours, minutes </td> </tr> <tr> <td> **alignmentFramework** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Framework </td> <td> </td> </tr> <tr> <td> **targetName, alignmentType = "educationalLevel"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Level </td> <td> 
alignmentFramework is defined by the field "Framework" </td> </tr> <tr> <td> **targetName, alignmentType = "educationalSubject"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Subject </td> <td> alignmentFramework is defined by the field "Framework" </td> </tr> <tr> <td> **targetName, alignmentType = "umiDomain"** </td> <td> Intangible > AlignmentObject </td> <td> "Ubiquitous", "Mobile", "IoT" </td> <td> UMI Domain </td> <td> alignmentFramework = "Horizon2020 UMI-Sci-Ed" </td> </tr> <tr> <td> **targetName, alignmentType =** **"educationalScenarioOrientation"** </td> <td> Intangible > AlignmentObject </td> <td> "Acquire new Knowledge", "Develop new Skills", "Attain Attitudes" </td> <td> UMI-Sci-Ed Orientation/Focus </td> <td> alignmentFramework = "Horizon2020 UMI-Sci-Ed" </td> </tr> <tr> <td> **targetName, alignmentType = "pedagogicalTheory"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Pedagogical Theory </td> <td> alignmentFramework = "Horizon2020 UMI-Sci-Ed" </td> </tr> <tr> <td> **interactivityType = "mixed"** </td> <td> CreativeWork </td> <td> "active", "expositive" or "mixed" </td> <td> Interactivity Type </td> <td> Only one out of the three values </td> </tr> <tr> <td> **targetName,** **alignmentType = "requires"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Resources - Hardware </td> <td> alignmentFramework = "UMI-Sci-Ed hardware resources" </td> </tr> <tr> <td> **targetName,** **alignmentType = "requires"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Resources - Software </td> <td> alignmentFramework = "UMI-Sci-Ed software resources" </td> </tr> <tr> <td> **targetName,** **alignmentType = "requires"** </td> <td> Intangible > AlignmentObject </td> <td> Text </td> <td> Resources - Software </td> <td> alignmentFramework = "UMI-Sci-Ed other resources" </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> 
**author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI Scenario" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> </td> <td> "UMI-Sci-Ed Educational Scenario" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **educationalUse** </td> <td> CreativeWork </td> <td> "Educational Scenario plan" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> CreativeWork </td> <td> "14-16" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text (as defined in https://www.w3.org/wiki/W ebSchemas/Accessibility) </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _UMIScenarioFile_ </td> <td> </td> <td> For the UMI Scenario File. </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _EducationalActivity_ </td> <td> </td> <td> For the UMI Activities, there can be more than one Activities. 
</td> </tr> <tr> <td> **UMI Scenario File** </td> </tr> <tr> <td> **name** </td> <td> CreativeWork > DigitalDocument </td> <td> Text </td> <td> </td> <td> The name of the file </td> </tr> <tr> <td> **Metadata property** </td> <td> **Metadata typeof** </td> <td> **Value space** </td> <td> **Platform field** </td> <td> **Comments** </td> </tr> <tr> <td> **fileformat** </td> <td> CreativeWork > DigitalDocument </td> <td> Text (MIME format - see IANA site) </td> <td> </td> <td> The type of the file </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork > DigitalDocument </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> Inherited from UMIScenario </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> Inherited from UMIScenario </td> </tr> <tr> <td> **datePublished** </td> <td> CreativeWork > DigitalDocument </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **isPartOf** </td> <td> CreativeWork > DigitalDocument </td> <td> UMIScenario </td> <td> </td> <td> </td> </tr> </table> # Table 4 Metadata defined for the _UMI Scenario_ content type. 
<table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Description and Steps </td> <td> </td> </tr> <tr> <td> **interactivityType** </td> <td> CreativeWork </td> <td> "active", "expositive" or "mixed" </td> <td> Interactivity Type </td> <td> </td> </tr> <tr> <td> **targetName, alignmentType = "activityType"** </td> <td> Ιntangible > AlignmentObject </td> <td> "Introductory", "Core" "Concluding" or "Auxiliary" </td> <td> Type of Activity </td> <td> alignmentFramework = "Horizon2020 UMI-Sci-Ed" </td> </tr> <tr> <td> **targetName, alignmentType = "learningObjectives"** </td> <td> AlignmentObject </td> <td> Text </td> <td> Learning Objectives </td> <td> alignmentFramework = "Revised Bloom Taxonomy" </td> </tr> <tr> <td> **timeRequired** </td> <td> CreativeWork </td> <td> ISO 8601 </td> <td> Expected duration </td> <td> </td> </tr> <tr> <td> **educationalRole** </td> <td> Ιntangible > Audience > EducationalAudience </td> <td> "teacher", "learner", "author", "manager" </td> <td> Audience </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **version** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **inLanguage** </td> <td> CreativeWork </td> <td> Text, as defined in IETF BCP47 </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 
UMI-Sci-Ed" </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text (as defined in https://www.w3.org/wiki/W ebSchemas/Accessibility) </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "Educational Activity" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Educational Activity" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **educationalUse** </td> <td> CreativeWork </td> <td> "Educational Scenario implementation" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> CreativeWork </td> <td> "14-16" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **isPartOf** </td> <td> CreativeWork </td> <td> UMIScenario </td> <td> </td> <td> </td> </tr> <tr> <td> **learningResourceType** </td> <td> CreativeWork </td> <td> "learning activity" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _UMIActivityFile_ </td> <td> </td> <td> For the UMI Activity File </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _UMIEducationalMaterial_ </td> <td> </td> <td> For the Educational Material </td> </tr> <tr> <td> **UMI Activity File** </td> </tr> <tr> <td> **dateCreated** 
</td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **name** </td> <td> CreativeWork > DigitalDocument </td> <td> Text </td> <td> </td> <td> Filename </td> </tr> <tr> <td> **fileformat** </td> <td> CreativeWork > DigitalDocument </td> <td> Text (MIME format - see IANA site) </td> <td> </td> <td> </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork > DigitalDocument </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> Inherited from _UMIScenario_ </td> </tr> <tr> <td> **isPartOf** </td> <td> CreativeWork > DigitalDocument </td> <td> UMIScenario </td> <td> </td> <td> </td> </tr> </table> # Table 5 Metadata defined for an activity that is part of a _UMI Scenario_ content type. <table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **version** </td> <td> CreativeWork </td> <td> Text </td> <td> Version </td> <td> </td> </tr> <tr> <td> **isBasedOn** </td> <td> CreativeWork </td> <td> _UMIScenario_ </td> <td> UMI Scenario </td> <td> </td> </tr> <tr> <td> **inLanguage** </td> <td> CreativeWork </td> <td> Text, as defined in IETF BCP47 </td> <td> Language </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Description </td> <td> </td> </tr> <tr> <td> **keywords** </td> <td> CreativeWork </td> <td> Text </td> <td> Key Terms </td> <td> </td> </tr> <tr> <td> **timeRequired** </td> <td> CreativeWork </td> <td> ISO 8601 </td> <td> Duration </td> <td> Months, days, hours, minutes </td> </tr> <tr> <td> **interactivityType = "mixed"** </td> <td> CreativeWork </td> <td> "active", "expositive" or "mixed" 
</td> <td> Interactivity Type </td> <td> </td> </tr> <tr> <td> **targetName,** **alignmentType =** **"difficulty"** </td> <td> AlignmentObject </td> <td> "Easy", "Intermediate" or "Advanced" </td> <td> Difficulty </td> <td> alignmentFramework = "IEEE LOM 5.8" </td> </tr> <tr> <td> **targetName,** **alignmentType = "requires"** </td> <td> Ιntangible > AlignmentObject </td> <td> Text </td> <td> Resources - Hardware </td> <td> alignmentFramework = "UMI-Sci-Ed hardware resources" </td> </tr> <tr> <td> **targetName,** **alignmentType = "requires"** </td> <td> Ιntangible > AlignmentObject </td> <td> Text </td> <td> Resources - Software </td> <td> alignmentFramework = "UMI-Sci-Ed software resources" </td> </tr> <tr> <td> **targetName,** **alignmentType = "requires"** </td> <td> Ιntangible > AlignmentObject </td> <td> Text </td> <td> Resources - Software </td> <td> alignmentFramework = "UMI-Sci-Ed other resources" </td> </tr> <tr> <td> **targetName, alignmentType = "learningOutcomes"** </td> <td> Ιntangible > AlignmentObject </td> <td> Text </td> <td> Expected Learning Outcomes </td> <td> alignmentFramework = "Revised Bloom Taxonomy" </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> License </td> <td> Creative Commons URL </td> </tr> <tr> <td> **educationalRole** </td> <td> Ιntangible > Audience > EducationalAudience </td> <td> "learner" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI Project" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> 
CreativeWork </td> <td> "UMI-Sci-Ed Project" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **educationalUse** </td> <td> CreativeWork </td> <td> "Educational Scenario implementation" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> CreativeWork </td> <td> "14-16" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text (as defined in https://www.w3.org/wiki/W ebSchemas/Accessibility) </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _UMIProjectFile_ </td> <td> </td> <td> For the UMI Project File. 
</td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> Files and Media </td> </tr> <tr> <td> **UMI Project file** </td> </tr> <tr> <td> **name** </td> <td> CreativeWork > DigitalDocument </td> <td> Text </td> <td> </td> <td> The name of the file </td> </tr> <tr> <td> **fileformat** </td> <td> CreativeWork > DigitalDocument </td> <td> Text (MIME format - see IANA site) </td> <td> </td> <td> The type of the file </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork > DigitalDocument </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> Inherited from _UMIProject_ </td> </tr> <tr> <td> **datePublished** </td> <td> CreativeWork > DigitalDocument </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **isPartOf** </td> <td> CreativeWork > DigitalDocument </td> <td> _UMIProject_ </td> <td> </td> <td> </td> </tr> </table> # Table 6 Metadata defined for the _UMI Project_ content type. 
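As an illustration, a record following Table 6 can be expressed as schema.org JSON-LD. The Python sketch below is our own and all field values are invented; note that the "alignmentFramework" column in these tables corresponds to schema.org's `educationalFramework` property on `AlignmentObject`:

```python
import json

def umi_project_jsonld(name, version, language, duration, difficulty):
    """Build a schema.org JSON-LD record for a UMI Project (cf. Table 6).

    The caller supplies the variable fields; the fixed values (genre,
    typicalAgeRange, provider) follow the table.
    """
    return {
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "name": name,
        "version": version,
        "inLanguage": language,    # IETF BCP 47 language tag, e.g. "en"
        "timeRequired": duration,  # ISO 8601 duration, e.g. "PT2H"
        "genre": "UMI Project",            # fixed value
        "typicalAgeRange": "14-16",        # fixed value
        "provider": {"@type": "Organization",
                     "name": "Horizon2020 UMI-Sci-Ed"},
        "educationalAlignment": {
            "@type": "AlignmentObject",
            "alignmentType": "difficulty",
            # "Easy", "Intermediate" or "Advanced" per IEEE LOM 5.8
            "targetName": difficulty,
            "educationalFramework": "IEEE LOM 5.8",
        },
    }

# Invented example values, for illustration only.
record = umi_project_jsonld("Smart Greenhouse", "1.0", "en",
                            "PT2H", "Intermediate")
print(json.dumps(record, indent=2))
```

Serialising the record with `json.dumps` yields a document that generic schema.org consumers can parse; the platform-specific fields (e.g. the Drupal field names in the "Platform field" column) stay outside this representation.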
<table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **keywords** </td> <td> CreativeWork </td> <td> Text </td> <td> Tags </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Summary </td> <td> </td> </tr> <tr> <td> **articleBody** </td> <td> CreativeWork > Article > </td> <td> Text </td> <td> Body </td> <td> </td> </tr> <tr> <td> **audienceType** </td> <td> Intangible > Audience </td> <td> "public" or _UMIGroup_ </td> <td> Group content visibility, Groups audience </td> <td> </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI Group Article" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Group Article" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **interactivityType** </td> <td> CreativeWork </td> <td> "expositive" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **ResourceType** </td> <td> </td> <td> "Article" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> 
</tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> Files and Media </td> </tr> </table> # Table 7 Metadata defined for the _Group Article_ content type. <table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Subject </td> <td> </td> </tr> <tr> <td> **articleBody** </td> <td> CreativeWork > Article </td> <td> Text </td> <td> Body </td> <td> </td> </tr> <tr> <td> **keywords** </td> <td> CreativeWork </td> <td> Text </td> <td> Tags of Wiki </td> <td> </td> </tr> <tr> <td> **version** </td> <td> CreativeWork </td> <td> Text </td> <td> Version </td> <td> </td> </tr> <tr> <td> **text** </td> <td> CreativeWork > Comment </td> <td> </td> <td> Revision log message </td> <td> </td> </tr> <tr> <td> **audienceType** </td> <td> Intangible > Audience </td> <td> "public" or _UMIGroup_ </td> <td> Group content visibility, Groups audience </td> <td> </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> https://creativecommons.or g/licenses/by/4.0/ </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **identifier** </td> <td> </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **contributor** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> 
</tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI Wiki" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Wiki" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **interactivityType** </td> <td> CreativeWork </td> <td> "expositive" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **learningResourceType** </td> <td> CreativeWork </td> <td> "wiki page" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> CreativeWork </td> <td> "14-" </td> <td> </td> <td> Fixed value. </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> </table> # Table 8 Metadata defined for the _Wiki_ content type. 
<table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **keywords** </td> <td> CreativeWork </td> <td> Text </td> <td> Tags </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Summary </td> <td> </td> </tr> <tr> <td> **sharedContent** </td> <td> CreativeWork > Article > SocialMediaPosting > BlogPosting - typeOf CreativeWork > Article: articleBody </td> <td> Text </td> <td> Body </td> <td> </td> </tr> <tr> <td> **audienceType** </td> <td> Intangible > Audience </td> <td> "public" or _UMIGroup_ </td> <td> Group content visibility, Groups audience </td> <td> </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **sharedContent** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> Files and Media </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork 
</td> <td> "UMI Blog" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Blog" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **Blog post (Comment)** </td> </tr> <tr> <td> **about** </td> <td> CreativeWork </td> <td> Text </td> <td> Subject </td> <td> </td> </tr> <tr> <td> **text** </td> <td> CreativeWork > Comment </td> <td> Text </td> <td> Comment </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> any date </td> <td> </td> <td> </td> </tr> <tr> <td> **parentItem** </td> <td> CreativeWork > Comment </td> <td> _UMIBlog_ </td> <td> </td> <td> </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **sharedContent** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person (RegisteredUser) </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text 
</td> <td> </td> <td> TBD </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI Blog Comment" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Blog Comment" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> </table> # Table 9 Metadata defined for the _Blog_ content type. <table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Subject </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI General Discussion Forum" or "UMI Platform Support Forum" </td> <td> Forums </td> <td> Depending on user selection </td> </tr> <tr> <td> **sharedContent** </td> <td> CreativeWork > Article > SocialMediaPosting > DiscussionForumPosti ng - typeof CreativeWork > Article: articleBody </td> <td> Text </td> <td> Body </td> <td> </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **sharedContent** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> Files and Media </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork 
</td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Forum" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **Forum Post (Comment)** </td> </tr> <tr> <td> **about** </td> <td> CreativeWork </td> <td> Text </td> <td> Subject </td> <td> </td> </tr> <tr> <td> **text** </td> <td> CreativeWork > Comment </td> <td> Text </td> <td> Comment </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> any date </td> <td> </td> <td> To be shown in view mode </td> </tr> <tr> <td> **parentItem** </td> <td> CreativeWork > Comment </td> <td> _UMIForumTopic_ </td> <td> </td> <td> </td> </tr> <tr> <td> **sharedContent** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> Files and Media </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> To be shown in view mode </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> 
<td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "UMI Forum Comment" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed Forum Comment" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> </table> # Table 10 Metadata defined for the _Forum Topic_ content type. <table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **name** </td> <td> CreativeWork </td> <td> Text </td> <td> Title </td> <td> </td> </tr> <tr> <td> **keywords** </td> <td> CreativeWork </td> <td> Text </td> <td> Key terms </td> <td> </td> </tr> <tr> <td> **author** </td> <td> Person </td> <td> Registered User </td> <td> Authors </td> <td> Multiple inputs are allowed </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "Educational Material", “Learning Artefact”, "Research Material", "Other" </td> <td> Type </td> <td> Fixed value </td> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Subject </td> <td> </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> License </td> <td> Creative Commons URL </td> </tr> <tr> <td> **identifier** </td> <td> </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **disambiguatingDescription** </td> <td> CreativeWork </td> <td> "UMI-Sci-Ed 
Repository Entry" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **interactivityType** </td> <td> CreativeWork </td> <td> "expositive" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **learningResourceType** </td> <td> CreativeWork </td> <td> "wiki page" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> CreativeWork </td> <td> "14-" </td> <td> </td> <td> Fixed value. </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> TBD </td> <td> </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> TBD </td> <td> </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> TBD </td> <td> </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> TBD </td> <td> </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _FileAttachment_ , _YouTubeVideo_ , or _Image/Photo_ </td> <td> </td> <td> Files and Media </td> </tr> <tr> <td> **learningResourceType** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> Inherited from Files and Media </td> </tr> <tr> <td> **artefactType** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> Inherited from Files and Media </td> </tr> <tr> <td> **resourceType** </td> <td> </td> <td> Text </td> <td> </td> <td> Inherited from Files and Media </td> </tr> </table> # Table 11 Metadata defined for the _Repository Entry_ content type. 
<table> <tr> <th> **Metadata property** </th> <th> **Metadata typeof** </th> <th> **Value space** </th> <th> **Platform field** </th> <th> **Comments** </th> </tr> <tr> <td> **description** </td> <td> CreativeWork </td> <td> Text </td> <td> Description </td> <td> </td> </tr> <tr> <td> **genre** </td> <td> CreativeWork </td> <td> "Educational Material", "Artefact", "Research Material", "Other" </td> <td> Type of Attachment </td> <td> </td> </tr> <tr> <td> **learningResourceType** </td> <td> CreativeWork </td> <td> Text </td> <td> File content characterization </td> <td> Takes value if genre = "Educational Material" </td> </tr> <tr> <td> **artefactType** </td> <td> CreativeWork </td> <td> Text </td> <td> Takes value if genre = "Artefact" </td> </tr> <tr> <td> **resourceType** </td> <td> </td> <td> Text </td> <td> Takes value if genre = "Research Material" or "Other" </td> </tr> <tr> <td> **dateCreated** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **dateModified** </td> <td> CreativeWork </td> <td> Date </td> <td> </td> <td> </td> </tr> <tr> <td> **provider:name** </td> <td> Organization:Text </td> <td> "Horizon2020 UMI-Sci-Ed" </td> <td> </td> <td> Fixed Value </td> </tr> <tr> <td> **license** </td> <td> CreativeWork </td> <td> URL </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityAPI** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityControl** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityFeature** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **accessibilityHazard** </td> <td> CreativeWork </td> <td> Text </td> <td> </td> <td> TBD </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **typicalAgeRange** </td> <td> CreativeWork </td> <td> "14-" </td> <td> </td> <td> Fixed value </td> </tr> <tr> <td> 
**isPartOf** </td> <td> CreativeWork </td> <td> _UMIProject_ , _UMIBlog_ , _UMIBlogPost_ , _UMIForumTopic_ , _UMIGroupArticle_ , _UMIForumPost_ , _UMIWiki_ , _UMIRepositoryEntry_ </td> <td> </td> <td> </td> </tr> <tr> <td> **hasPart** </td> <td> CreativeWork </td> <td> _AttachmentFile_ </td> <td> </td> <td> For the Attached File </td> </tr> <tr> <td> **Attachment File** </td> </tr> <tr> <td> **name** </td> <td> CreativeWork > DigitalDocument </td> <td> Text </td> <td> </td> <td> </td> </tr> <tr> <td> **fileformat** </td> <td> CreativeWork </td> <td> Text (MIME format - see IANA site) </td> <td> </td> <td> </td> </tr> <tr> <td> **identifier** </td> <td> CreativeWork </td> <td> URI </td> <td> </td> <td> </td> </tr> <tr> <td> **isPartOf** </td> <td> CreativeWork > DigitalDocument </td> <td> _UMIProject_ , _UMIBlog_ , _UMIBlogPost_ , _UMIForumTopic_ , _UMIGroupArticle_ , _UMIForumPost_ , _UMIWiki_ , _UMIRepositoryEntry_ </td> <td> </td> <td> </td> </tr> </table> **Table 12 Metadata defined for _Files and Media_ supporting several content types. NOTE: These metadata are for a file attachment. YouTube videos do not have an Attachment file and include a “contentUrl” property instead. Image or photo attachments on the other hand include a “caption” property in place of the “description” property. **
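The genre-dependent fields of Table 12 amount to a small piece of selection logic. The following Python sketch (the helper name `attachment_metadata` is ours, not part of the platform) populates exactly one characterization property based on the attachment's genre, and derives the MIME `fileformat` from the filename with the standard library's `mimetypes` module:

```python
import mimetypes

def attachment_metadata(filename, genre, characterization):
    """Metadata for a file attachment, following Table 12.

    Exactly one of learningResourceType / artefactType / resourceType
    is populated, depending on the attachment's genre.
    """
    meta = {
        "name": filename,
        "genre": genre,
        # MIME type guessed from the file extension (IANA media type)
        "fileformat": mimetypes.guess_type(filename)[0],
        "provider:name": "Horizon2020 UMI-Sci-Ed",  # fixed value
        "typicalAgeRange": "14-",                   # fixed value
    }
    if genre == "Educational Material":
        meta["learningResourceType"] = characterization
    elif genre == "Artefact":
        meta["artefactType"] = characterization
    else:  # "Research Material" or "Other"
        meta["resourceType"] = characterization
    return meta

m = attachment_metadata("worksheet.pdf", "Educational Material", "exercise")
# m["fileformat"] == "application/pdf", and only learningResourceType is set
```

The same branching applies to the Repository Entry fields of Table 11, which inherit these properties from Files and Media.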
# Summary

This document describes what data the OpenUP project will generate, how the data will be stored and managed, and how they will be preserved after the end of the project. PPMI prepared this document as the main partner leading task 1.4, with input from the other consortium partners and a review by UoA. This Data Management Plan complies with H2020 requirements 1 and is based on the DMP template 2 provided by the e-Infrastructures Austria project. The plan will be updated whenever the project changes, for example through the inclusion of new data sets, changes in consortium policies, or other external factors.

# Explanatory Remarks

To establish a common ground of understanding, in the current deliverable we use the following definition of a dataset: a dataset is any set of data (no matter how many files it comprises) that is worth considering as a unit for data management activities 3. Any of the following can be considered a dataset in the context of a project:

* Any dataset produced by aggregating data from data providers in order to analyse it;
* Any dataset produced by aggregating data from data providers in order to build an integrated dataset out of the aggregated data (e.g. a knowledge base);
* The material of a training course;
* A dataset documenting and providing evidence for a report or a publication produced in the context of project activities.

In the case of OpenUP we have identified three main categories of datasets:

Core Datasets, i.e. datasets related to the main project activities (review-assess-disseminate) and used by the project. These datasets pre-exist OpenUP and are publicly available.

Produced Datasets, i.e. datasets resulting from the operation and evaluation of OpenUP’s use cases and pilot applications. These may include, but are not limited to, data collected through questionnaires, workshops, interviews, and desktop research.
Project Related Data, i.e. datasets resulting from the operation of the OpenUP project and produced by the OpenUP consortium. These datasets are collections of the standard material produced by a research project, e.g. deliverables, dissemination material, training material, and scientific publications.

# Data Collection

<table> <tr> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> Data collection </td> <td> • </td> <td> Data collection activities will be carried out in WP3, WP4, WP5, WP6 and WP7. The main data collection methods include surveys, interviews, workshops, focus groups, crowdsourcing, and web mining. The main data that the consortium will generate include opinions, attitudes and practices of researchers regarding open peer review, alternative dissemination and altmetrics. </td> </tr> <tr> <td> </td> <td> • </td> <td> The raw data collected from surveys will be stored in a suitable format (e.g. Excel, ODS or CSV files). </td> </tr> <tr> <td> </td> <td> • </td> <td> The recorded data from focus groups, workshops and interviews will be stored as recordings in a suitable format (MP3, OGG or WMA) on the internal servers of the task-leading organisations. If focus groups, workshops and interviews are not recorded, the written summaries will be saved in a suitable format (Word documents, text files). The data from crowdsourcing will be stored in Google Spreadsheets that anyone with the link can access. </td> </tr> <tr> <td> </td> <td> • </td> <td> Files will be converted to open file formats where possible for long-term storage, and in these cases the researchers will anonymise the data. </td> </tr> </table>

3 Renear, A.H., Sacchi, S. and Wickett, K.M. (2010) Definitions of dataset in the scientific and technical literature. Proceedings of the American Society for Information Science and Technology, 47(1): 1–4. 
DOI: _10.1002/meet.14504701240_

# Data storage, selection and preservation and data sharing

**Data storage**

* The raw data collected from the survey will be stored in a suitable format (e.g. Excel, ODS or CSV files). The documents will be saved on the internal institutional server of the task leader (PPMI). The server is protected by passwords known only to the researchers working on the OpenUP project, so unauthorised users will not be able to access the data. The documents available on the server are backed up regularly. Once processed and anonymised, the data will be published in an online repository (e.g. Zenodo) and will be available for use by third parties.
* The recorded data from focus groups, workshops and interviews will be stored as recordings in a suitable format (MP3, OGG or WMA) on the internal servers of the task-leading organisations. If focus groups, workshops and interviews are not recorded, the written summaries will be saved in a suitable format (Word documents, text files) and will also be stored on the internal servers of the responsible institution. The data will be backed up regularly. The servers are password protected, and only researchers working on the project will be able to access the files. The written summaries will be anonymised and will also be made available in the online repository for third parties' use.
* The data from crowdsourcing will be stored in Google Spreadsheets that anyone with the link can access. Participants can enter data in the spreadsheet either anonymously or by identifying themselves.
* All the collected data, apart from the crowd-sourced data, will be anonymised. The information provided will be analysed and presented in project reports in aggregated form and will not be used individually.
* The raw data will be stored on the institutional servers of the organisations that have generated them. The data will be stored for two years after the end of the project; after that, the data will be destroyed.
* The anonymised data will be stored in an online repository (e.g. Zenodo).

**Data sharing**

* The raw data will only be accessed by the researchers that collected the data. The anonymised summaries of interviews and workshops and the survey results will be made accessible to other partners as well as to third parties.

# Documentation and Metadata

The following table outlines an example structure of the metadata that project partners will include to describe their data.

<table> <tr> <th> Metadata field </th> <th> Example / description </th> </tr> <tr> <td> Project and GA number </td> <td> OpenUP 710722 </td> </tr> <tr> <td> Data type </td> <td> Text document / numerical data / survey data </td> </tr> <tr> <td> Description </td> <td> Description of the variables and of the data included </td> </tr> <tr> <td> Data state </td> <td> Processed </td> </tr> <tr> <td> Data source </td> <td> Interview / workshop / focus group / data mining / survey / crowdsourced data </td> </tr> <tr> <td> Media type </td> <td> The format the data is stored in </td> </tr> <tr> <td> Licence or use constraints </td> <td> Licence (if any) </td> </tr> <tr> <td> Size </td> <td> Size of the file </td> </tr> </table>

# Ethics and Legal Compliance

The OpenUP project will mostly collect data on opinions, attitudes and practices related to open peer review, innovative dissemination and novel impact assessment methods used by researchers. The consortium will also collect some personal data (such as country, gender or career stage of a researcher). Contact emails and names of researchers will be used when inviting them to participate in the survey, interviews or workshops.
Once the data is collected, the researchers will anonymise it and present it in an aggregated manner. The data handling procedures of OpenUP are described in the D8.1 data handling procedures document, available _here_ . Also, participants in interviews, workshops and the survey will be informed about the OpenUP project and the research activities through the informed consent procedures described in _D8.2_ .

# Responsibilities and Resources

PPMI leads the development of the DMP and implements it with the participation of UoA. PPMI is also responsible for ensuring that the plan is reviewed and revised during the project. The DMP will be updated whenever important changes to the project occur due to the inclusion of new datasets, changes in consortium policies or external factors. All involved partners are responsible for compliance with the DMP and the procedures for data collection, handling and preservation.

The contact person for communication is Viltė Banelytė ( [email protected] , +37062336168).

No resources other than those already outlined in the budget of OpenUP will be needed for data management and archiving.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1142_LiRichFCC_711792.md
simulation parameters, synthesis protocols etc.), and it will have a moderate size, typically on the order of a few tens of GB. Depending on the Work Package involved in data generation, the data may be useful not only for members of the consortium but also for other academic institutions or for industry wanting to benchmark new models, protocols or materials against existing battery technology. **3.2 How will data be managed internally?** All LiRichFCC partners provide appropriate storage facilities for research data and provide controlled access as well as appropriate infrastructure. They also support free access to research data, considering ethical, legal, economic, and contractual framework conditions. **3.3 What data can be made public?** For those data that can be made public, it needs to be ensured that they are findable, accessible, interoperable, and reusable (FAIR). To this end, proprietary formats (see 3.1) will be converted into international standard formats such as ASCII and stored as text files. That way, scientists and development engineers from all over the world who are researching Li-ion batteries or the synthesis and electrochemistry of new Li-rich cathode materials for Li-ion batteries will benefit from the LiRichFCC program.
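The conversion to plain ASCII text files described in 3.3 can be sketched as follows; a minimal Python example where the column names and measurement values are hypothetical, showing numeric data written as whitespace-delimited text with a self-describing comment header instead of a proprietary instrument format:

```python
def to_ascii_table(columns, rows, comment="#"):
    """Render column names and numeric rows as tab-delimited ASCII text.

    The header line is prefixed with a comment character so that plotting
    tools reading plain text files can skip it.
    """
    lines = [f"{comment} " + "\t".join(columns)]
    for row in rows:
        lines.append("\t".join(f"{value:.6g}" for value in row))
    return "\n".join(lines) + "\n"

# Hypothetical example: specific capacity (mAh/g) vs. cell voltage (V)
text = to_ascii_table(
    ["capacity_mAh_per_g", "voltage_V"],
    [(0.0, 4.1), (50.0, 3.6), (100.0, 3.1)],
)
print(text)
```

A file produced this way is readable by any text editor or analysis tool, which is the point of the FAIR conversion step.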
The consortium is currently preparing the following data sets to be made publicly available on the project website as well as in selected repositories: <table> <tr> <th> **Partner** </th> <th> **Record Name** </th> <th> **ID** </th> <th> **Data Type** </th> <th> **Short description** </th> <th> **Type of data within record** </th> </tr> <tr> <td> **CEA** </td> <td> AlF3 coatings </td> <td> CEA01 </td> <td> exp </td> <td> XRD of AlF3 coated materials and XRD acquisition protocol </td> <td> XRD, protocols </td> </tr> <tr> <td> **CEA** </td> <td> AlF3-coated Li2VO2F </td> <td> CEA02 </td> <td> exp </td> <td> Electrochemical results of galvanostatic cycling of 1wt%, 2wt%, 3wt% AlF3 coated Li2VO2F </td> <td> charge/discharge, protocols </td> </tr> <tr> <td> **CEA** </td> <td> Separator study </td> <td> CEA03 </td> <td> exp </td> <td> Electrochemical results of the study of the effect of the separator nature (Celgard alone, Whatman+Celgard) </td> <td> charge/discharge, protocols </td> </tr> <tr> <td> **CEA** </td> <td> Electrolyte additives </td> <td> CEA04 </td> <td> exp </td> <td> Electrochemical results of galvanostatic cycling with the different additives (LiBoB, LiODFB, FEC and glycolide) </td> <td> charge/discharge, protocols </td> </tr> <tr> <td> **CEA** </td> <td> Voltage window </td> <td> CEA05 </td> <td> exp </td> <td> Electrochemical results of the study of the cut-off effect (1V-4.1V, 1.3V-4.1V, 1.5V-4.1V, 1.5V-4.3V) </td> <td> charge/discharge, protocols </td> </tr> <tr> <td> **CEA** </td> <td> Protocols - CEA </td> <td> CEA06 </td> <td> exp </td> <td> Corresponding protocols for electrochemical testing as well as electrode making and coin cell mounting </td> <td> supplement to CEA01 - CEA05 </td> </tr> <tr> <td> **DTU** </td> <td> XRD characterization </td> <td> DTU01 </td> <td> exp </td> <td> Collection of XRD data from LiRichFCC materials </td> <td> XRD, operando XRD, protocols
</td> </tr> <tr> <td> **DTU** </td> <td> Atomic Simulation Environment </td> <td> DTU02 </td> <td> code </td> <td> Tools and Python modules for setting up, manipulating, running, visualizing and analyzing atomistic simulations </td> <td> code </td> </tr> <tr> <td> **DTU** </td> <td> Simulated structural data </td> <td> DTU03 </td> <td> theory </td> <td> Structural data of LiRichFCC materials obtained with DFT and cluster expansion methods </td> <td> structural data </td> </tr> <tr> <td> **KI** </td> <td> Ceramic synthesis </td> <td> KI01 </td> <td> exp </td> <td> Ceramic synthesis, characterization and electrochemical performance </td> <td> XRD, EIS, protocols </td> </tr> <tr> <td> **KI** </td> <td> Li2MnO2F </td> <td> KI02 </td> <td> exp </td> <td> Li2MnO2F synthesis by ball milling. XRD, ICP-MS and TEM characterization. Galvanostatic charge/discharge cycles measured at C/10 (22.4 mA g-1) </td> <td> XRD, ICP-MS, TEM, charge/discharge, protocols </td> </tr> <tr> <td> **KIT** </td> <td> Na-rich FCC </td> <td> KIT01 </td> <td> exp </td> <td> Attempt to synthesize a sodium-rich disordered rock-salt oxyfluoride material, to be used as a cathode in Na-ion batteries </td> <td> XRD, SEM, charge/discharge, protocols </td> </tr> <tr> <td> **UU** </td> <td> Interface data </td> <td> UU01 </td> <td> exp </td> <td> Interfacial characterization data </td> <td> XPS </td> </tr> </table> The “Atomic Simulation Environment” developed by DTU is already available as open-source freeware at _https://wiki.fysik.dtu.dk/ase/_ . All structural data obtained via simulation have been uploaded to NOMAD. In addition, the consortium has already published, and plans to publish, key raw data together with scientific publications. **3.4 What processes have been implemented?** The partners of the LiRichFCC consortium combine over a century of experience in research data handling, and have developed efficient ways to archive and share data.
Nonetheless, research has become increasingly interdisciplinary, and the amount of data generated is on the rise. Therefore, especially for collaborative work within individual work packages, the partners follow internal codes and standards for making data findable. **Parameter sets, methods and protocols** will be stored in text documents that follow standardized naming conventions jointly defined by the LiRichFCC partners to ensure maximum findability, accessibility and interoperability. **Aggregated data** in the form of presentations, reports (deliverables), publications, or patents follow standardized naming conventions. For example, presentations and reports include the name of the project, the corresponding work package, and the date. Deliverables can be identified by their deliverable number, publications have unique DOIs, and patents are numbered per international standards. Public aggregated data will by default be made available on the project webpage ( _www.lirichfcc.eu_ ) as well as in a yet-to-be-determined professional repository. Aggregated data to be shared will always be in a format that can be used and understood by a computer. They will typically be stored in PDF formats that are either standardised or otherwise publicly known so that anyone can develop new tools for working with the documents. **Raw experimental or theoretical data** that have been identified as non-restricted will be converted into a standard, non-proprietary format (ASCII text file) and combined with the necessary metadata in the form of a text document and PDF file. Such data will be available on the project website as well as from a professional repository.
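A naming convention of the kind described above (project name, work package, date) can be sketched as follows; the exact pattern shown here is an illustrative assumption, not the convention actually agreed by the LiRichFCC partners:

```python
from datetime import date

def standard_name(project, work_package, description, day, extension="txt"):
    """Build a file name following a project/work-package/date convention.

    The pattern used here is an illustrative assumption, not the one
    jointly defined by the LiRichFCC partners.
    """
    slug = description.lower().replace(" ", "-")
    return f"{project}_WP{work_package}_{day.isoformat()}_{slug}.{extension}"

# Hypothetical record name based on one of the public data sets listed in 3.3
name = standard_name("LiRichFCC", 3, "AlF3 coatings XRD", date(2019, 9, 30))
print(name)  # LiRichFCC_WP3_2019-09-30_alf3-coatings-xrd.txt
```

Encoding project, work package and ISO date directly in the file name is what makes such files findable by simple sorting and searching, without opening them.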
General considerations regarding the publication, depositing or patenting of research data are summarized in the figure below, which has been reproduced from the H2020 Programme Guidelines to the Rules on Open Access to Scientific Publications and Open Access to Research Data in Horizon 2020:

# OUTLOOK

The project ended on Sept. 30, 2019. The consortium plans a series of scientific publications based on project results and is preparing selected data sets for publication (see 3.3).
1144_EVERLASTING_713771.md
**PROJECT OBJECTIVES** </th> </tr> </table>

EVERLASTING is focussing on model-based battery management systems (BMS) for Li-ion batteries. New or improved BMS features will be developed by performing intensive research activities in the fields of physical testing, simulation, modelling and validation at battery cell and pack level, to improve the reliability, lifetime, performance and safety of Li-ion batteries used in electric vehicles. Today, batteries are not yet the ideal energy container they were promised to be. They are expensive, fragile and potentially dangerous. Moreover, the current electric vehicle cannot yet compete with traditional vehicles when it comes to driving range and flexibility. EVERLASTING intends to bring Li-ion batteries closer to this ideal by focusing on the following technology areas:

* Predicting the behaviour of battery systems in all circumstances and over their full lifetime. This enables accurate dimensioning and choice of the correct battery type, leading to lower cost. It also facilitates the development of a powerful battery management system during all stages of its evolution from idea to fully tested product.
* Sensing signals beyond the standard parameters of current, voltage and temperature. This multi-sensing approach, over early, mid and late stage time domains, provides more varied and in-depth data on the status of the battery, facilitating proactive and effective management of the batteries, preventing issues rather than mitigating them.
* Monitoring the status of the battery by interpreting the rich sensor data. By intelligently combining this information with road, vehicle and driver data, we intend to offer accurate higher-level driver feedback. This builds greater trust and hence lowers range anxiety.
* Managing the battery in a proactive way, based on a correct assessment of its status.
Efficient thermal management and load management result in increased reliability and safety and lead to lower overall cost through an increased lifetime.
* Defining a standard BMS architecture and interfaces and gathering the necessary support in the market. This allows an industry of standard BMS components to flourish, which will result in lower cost.

<table> <tr> <th> **2 DATA MANAGEMENT PLAN** </th> </tr> <tr> <td> **2.1 INTRODUCTION** </td> </tr> </table>

A new element in Horizon 2020 is the use of Data Management Plans. A **Data Management Plan** (DMP) describes the data management life cycle for all datasets to be collected, processed or generated by a research project. The purpose of the Data Management Plan (DMP) is to provide an analysis of the main elements of the data management policy that will be used by the applicants with regard to all the datasets that will be generated by the project. Special attention will be given to “ **Open Access** ” to scientific information and “ **Open Research Data** ”. Open access (OA) refers to the practice of providing online access to scientific information that is free of charge to the end-user and reusable without restrictions. In the context of research and innovation, 'scientific information' can mean: peer-reviewed scientific research articles (published in scholarly journals) or research data (underlying data from publications, curated data and/or raw data). Why give open access to publications and data in Horizon 2020? Firstly, for the benefit of society in general. Modern research builds on extensive scientific dialogue and advances by improving earlier work. The Europe 2020 strategy for a smart, sustainable and inclusive economy underlines the central role of knowledge and innovation in generating growth.
Broader access to scientific publications and data therefore helps to:

* build on previous research results (improved quality of results)
* encourage collaboration and avoid duplication of effort (greater efficiency)
* speed up innovation (faster progress to market means faster growth)
* involve citizens and society (improved transparency of the scientific process).

Secondly, the EVERLASTING partners themselves benefit from giving open access to publications and research data: it increases the visibility of the project results, leading to more citations for the research partners and increased collaboration potential for setting up new projects, and to quicker adoption and valorisation of the built-up knowledge. **Deliverable D8.2 “Data Management Plan”** describes the handling of research data that will be collected, processed or generated by the EVERLASTING project according to the guidelines for data management in the H2020 Online Manual. This deliverable will evolve during the lifetime of the project in order to describe the status of the project's reflections on data management. EVERLASTING Deliverable D8.2 “Data Management Plan” must be considered a “ **living document** ” and will be updated when important updates are available: new datasets, updates on existing datasets, changes in consortium policies (e.g. on exploitation of results and patenting) or other external reasons (e.g. changes in consortium members and suggestions from the advisory board). The DMP will be updated as a minimum in time with the periodic evaluation/assessment of the project. This update will be done in parallel with the update of **D8.1 “Dissemination and exploitation plan”** . Obviously, there is a close link to D8.1 “Dissemination and exploitation plan”, since the data generated within the EVERLASTING project is input for future exploitation and dissemination activities.
Some of the generated datasets will be used as “underlying data” for the dissemination of EVERLASTING project results, and the Data Management Plan will describe how these datasets will be shared as “Open Data”. By sharing data, we further improve the quality and impact of the dissemination activities within EVERLASTING.

## 2.2 SCOPE OF DATA MANAGEMENT PLAN VERSION 2.0

The DMP is not a fixed document, but evolves during the lifespan of the project. The scope of this **second version of the EVERLASTING DMP** contains the updated status of reflection within the consortium about the overall types of data that have been or will be collected or produced during the project, and about how and what data will be made openly available to external stakeholders such as research institutes and companies. In this version of the EVERLASTING DMP, more detailed datasets and their specific handling will be described. The first open datasets have been uploaded to the **4TU.Centre for Research Data repository** (see 2.5.3 Data Inventory). Existing guidance material will be used to structure this work, such as **DMPOnline** ( _https://dmponline.dcc.ac.uk_ ), which has been developed by the Digital Curation Centre to help with writing data management plans. **OpenAIRE** also provides a range of resources, FAQs, webinars and support pages. OpenAIRE can also be contacted via its local representatives in all EU countries: the **National Open Access Desks** or NOADs. More information can be found on _www.openaire.eu_ . The **Horizon 2020 FAIR DMP template,** which is available on DMPOnline, has been designed to be applicable to any Horizon 2020 project that produces, collects or processes research data. A single DMP should be developed to cover the project's overall approach. However, where there are specific issues for individual datasets (e.g. regarding openness), these should be described for the specific dataset. In general terms, research data should be 'FAIR', i.e.
findable, accessible, interoperable and re-usable.

## 2.3 OPEN ACCESS SCIENTIFIC PUBLICATIONS

In this section, the intended scientific dissemination is described briefly, since the **“underlying data”** of scientific publications form the core part of the “open data research pilot”. The scientific dissemination will be implemented through:

* The publishing of the generated results in open access peer-reviewed scientific journals
* The presentation of project results at scientific conferences and events

Partners will publish scientific articles with Green or Gold-standard open access to share results generated from the project. TU/e will ensure that the rules in the Consortium Agreement concerning scientific publications are respected before their submission to journals. Prior to any disclosure (conferences, publications, defence of PhD or Master's theses), the protection of the project progress must be secured. The project will generate research data at a wide range of levels of detail, from simulation and lab results to demonstrator validation. Most data will be associated with results that may have a potential for commercial or industrial protection and therefore cannot be made accessible for verification and reuse in general, due to intellectual property protection measures. However, relevant data necessary for the verification of results published in scientific journals can be made accessible on a case-by-case basis. The decision concerning the publication of data will be made by the Management Board, as the decision-making body of the consortium. Research data of public interest, such as those underlying scientific publications, will be made available via open access data repositories, while hyperlinks to these datasets will be placed on the EVERLASTING website. Below you can find the list of planned scientific publications, as mentioned in the project proposal, which will generate “underlying data”.
Besides these publications some deliverables will also generate “open data” like D2.3 “Report containing aging test profiles and test results” (Type: ORDP & Due Date: M42). For the most recent details on our dissemination activities, we refer to the **M18 update of D8.1 “Dissemination and exploitation plan”** . <table> <tr> <th> Topic </th> <th> Dissemination method </th> <th> Partners </th> </tr> <tr> <td> WP1: Improved simulation and modelling tools </td> <td> </td> </tr> <tr> <td> Dissemination of research results with respect to tools and methods achieving a relevant prediction of pack behaviour, with coupled controls, including validation (Siemens) </td> <td> Conference presentation </td> <td> Siemens PLM </td> </tr> <tr> <td> Presentation of (pre-)industrialized new models, tools and features allowing the modelling and prediction of battery pack behaviour and BMS coupling </td> <td> Siemens user conference (including major carmakers, suppliers, OEMs, battery suppliers…) </td> <td> Siemens PLM </td> </tr> <tr> <td> Demonstrating multi-level battery modelling, simulation, reduction and control coupling applied to automotive transportation </td> <td> Simulation@Siemens (Siemens divisions) </td> <td> Siemens PLM </td> </tr> <tr> <td> Modelling order reduction techniques </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> TUM, TU/e </td> </tr> <tr> <td> WP2: Increased reliability </td> <td> </td> </tr> <tr> <td> The use of self-learning algorithms for estimation of SoH </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> VITO, ALGOLiON </td> </tr> <tr> <td> Test procedures for BMS testing </td> <td> Conference presentation, Standard proposal </td> <td> TÜV SÜD, VITO </td> </tr> <tr> <td> WP3: Extended vehicle range </td> <td> </td> </tr> <tr> <td> Challenges of parametrization of physico-chemical battery model and comparison to standard electro chemical battery models </td> <td> Conference presentation </td> <td> RWTH 
</td> </tr> <tr> <td> Deduction of minimum parameter set for physico-chemical model and adaption of model for application on embedded hardware </td> <td> Journal paper (Open Access) </td> <td> RWTH </td> </tr> <tr> <td> Maximizing operation range of Li-Ion batteries to maximize energy output without losing safety </td> <td> Conference paper, Journal Paper (Open Access) </td> <td> RWTH </td> </tr> <tr> <td> Embedded BMS hardware requirements towards execution of extended battery models </td> <td> Conference paper </td> <td> RWTH </td> </tr> <tr> <td> Costs for more complex BMS hardware, enabling the extraction of more energy vs. higher capacity of battery cells </td> <td> Conference paper </td> <td> RWTH </td> </tr> <tr> <td> Development of common BMS software towards integration of more complex battery models </td> <td> Conference paper </td> <td> RWTH </td> </tr> <tr> <td> Advanced BMS Models and Model Adaptation </td> <td> University courses and seminars </td> <td> RWTH </td> </tr> <tr> <td> Full Physico-Chemical Parameterization of the Battery Cell </td> <td> Journal paper (Open Access) </td> <td> RWTH </td> </tr> <tr> <td> Aging behaviour of the Battery Cell </td> <td> Journal paper (Open Access) </td> <td> RWTH </td> </tr> <tr> <td> Methodology for onBoard estimation of lithium loss and anode and cathode degradation </td> <td> Journal paper (Open Access) </td> <td> RWTH, ALGOLiON </td> </tr> <tr> <td> Drive power prediction of electric busses </td> <td> Conference presentation and journal paper (Open Access) </td> <td> TU/e </td> </tr> <tr> <td> Online real-time vehicle parameter estimation </td> <td> Conference presentation and journal paper (Open Access) </td> <td> TU/e </td> </tr> <tr> <td> Energy Management of Electrified Auxiliaries of electric busses </td> <td> Conference presentation and journal paper (Open Access) </td> <td> TU/e </td> </tr> <tr> <td> Range extension and optimization: tradeoff between travel-time and energy consumption </td> <td> 
Conference presentation and Journal paper (Open Access) </td> <td> TU/e </td> </tr> </table> <table> <tr> <th> Topic </th> <th> Dissemination method </th> <th> Partners </th> </tr> <tr> <td> WP4: Safer batteries </td> <td> </td> <td> </td> </tr> <tr> <td> Battery and cell safety and failure modes, early degradation mechanisms of materials leading to cell thermal runaway </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> CEA </td> </tr> <tr> <td> Safety monitoring of Li-ion batteries: signal processing for battery safety applications, multi-sensing safety algorithms </td> <td> Conference presentation, journal paper (Open Access), demonstration at trade exhibitions, trade journal article </td> <td> CEA, ALGOLiON </td> </tr> <tr> <td> Multi-scale, multi-physics modelling of safety hazards in Li-ion batteries </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> CEA </td> </tr> <tr> <td> WP5: Longer battery life </td> <td> </td> <td> </td> </tr> <tr> <td> Active and passive thermal management of Li-ion batteries </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> CEA, VITO </td> </tr> <tr> <td> Battery reconfiguration </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> VITO </td> </tr> <tr> <td> Influence of cell-to-cell parameter variations on cell balancing </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> TUM </td> </tr> <tr> <td> Optimal utilization of dissipative cell balancing </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> TUM </td> </tr> <tr> <td> Optimal utilization of non-dissipative cell balancing </td> <td> Conference presentation, Journal paper (Open Access) </td> <td> TUM </td> </tr> <tr> <td> WP6: Standardized architecture </td> <td> </td> <td> </td> </tr> <tr> <td> BMS architecture </td> <td> Standard proposal </td> <td> LION Smart </td> </tr> <tr> <td> Requirements and architecture concept of a highly modular prototyping hardware platform </td> <td> Journal paper (Open Access) </td> <td> TUM </td> </tr> <tr> <td> Implementation of a selection of the developed technologies in WP2-WP5 according to their readiness levels on standardized BMS </td> <td> Journal paper (Open Access) </td> <td> LION Smart </td> </tr> <tr> <td> WP7: Demonstrator </td> <td> </td> <td> </td> </tr> <tr> <td> Battery pack demonstrator, integrated in electric vehicle </td> <td> Presentation to scientific community and general public </td> <td> VOLTIA </td> </tr> </table> ## 2.4 DATA REPOSITORIES The EVERLASTING project will collect or generate a lot of different types of datasets.
Some of these datasets will be only for internal use between the EVERLASTING consortium partners and will be stored in EVERLASTING repositories, while other datasets will be made publicly accessible in open access repositories. **2.4.1 DATA REPOSITORIES - DATA OWNER** The main data type collected in the EVERLASTING project is **BMS-related measurement/testing/simulation data**, of which one of the partners is the owner. If the data is considered to be privacy related, it is the responsibility of the data owner to inform the affected person(s) and provide the necessary terms and agreements. The data owner will store its own data on **data owner servers** following internal data management procedures during the research project. Most partners have a dedicated Data Protection Officer (DPO) and some partners have a Research Data Management (RDM) team. ### 2.4.2 DATA REPOSITORIES - EVERLASTING To facilitate the exchange of information and data between the EVERLASTING partners, the project coordinator has set up two dedicated sites: the **EVERLASTING SharePoint site** to exchange all general information and documents such as MB and GA meeting minutes and WP-related documents, including the Grant Agreement and the Consortium Agreement, deliverables and publications (for more information see D9.1 “Project collaborative platform”), and the **EVERLASTING Secure FTP site** to facilitate the exchange of research data between the EVERLASTING partners. **2.4.3 DATA REPOSITORIES - OPEN ACCESS** A data repository is a digital archive collecting and displaying datasets and their metadata. Many data repositories also accept publications, and allow linking between publications and their underlying data. During the first 6 months, the EVERLASTING partners have been studying the different options for where to store the data that can be made openly accessible. An overview of repositories can be found at **Re3data** ( _www.re3data.org_ ).
The EVERLASTING partners prefer to use an open access data repository that is co-developed by one of the project's own partners, and the choice was made to use the “ **4TU.Centre for Research Data** ” **repository** as the **main data repository for EVERLASTING “Open Data”** . The 4 Technical Universities in the Netherlands have created this research data repository ( _http://data.4tu.nl_ ) and are developing it further. TU/e has its own RDM group and can help the EVERLASTING partners in using this repository. It is possible to deposit EVERLASTING data as open data or to deposit it under embargo when it contains confidential data. The **TU/e RDM team** can help by offering guidelines for sustainable data formats and metadata standards, as well as support for dealing with sensitive data and licensing. As a complementary solution for storing open access publications and open research data, the EVERLASTING partners can use the “Zenodo” repository ( _www.zenodo.org_ ). The Zenodo repository is provided by OpenAIRE and hosted by CERN. Zenodo is a catch-all repository that enables researchers, scientists, EU projects and institutions to:

* Share research results in a wide variety of formats including text, spreadsheets, audio, video, and images across all fields of science.
* Display their research results and get credited by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission.
* Easily access and reuse shared research results.
* Integrate their research outputs with the OpenAIRE portal.

## 2.5 DATA SUMMARY

**2.5.1 INTRODUCTION** EVERLASTING is focussing on model-based battery management systems (BMS) for Li-ion batteries.
New or improved BMS features will be developed by performing intensive research activities in the fields of physical testing, simulation, modelling and validation at battery cell and pack level, to improve the reliability, lifetime, performance and safety of Li-ion batteries used in electric vehicles. During the first three years of the EVERLASTING project, the research activities will mainly be performed in the different labs of the research partners. In a later phase, real-life validation activities will also be performed on the electric van from VOLTIA and the electric bus from VDL ETS. The main data type collected in the EVERLASTING project is **BMS-related measurement/testing/simulation data** . Some of the research data (such as lifetime and safety test data) obtained in the project will be publicly shared via the Open Research Data Pilot. In this chapter, the reader can find the first available open datasets and can also get an idea of the different datasets to be expected from the EVERLASTING project in the remaining period. Before elaborating on the datasets in the data inventory, we first describe the FAIR data principles. ### 2.5.2 FAIR DATA The datasets will be made available following the **FAIR Data principles**, which means findable, accessible, interoperable and re-usable. #### Making data findable, including provisions for metadata [fair data] _Outline the discoverability of data (metadata provision)_ All data underlying journal articles will be archived and made available via the data repository of **4TU.Centre for Research Data** ( _http://data.4tu.nl_ ). It is a data repository for technical and scientific research data, mainly from the Netherlands. It also offers data archiving services to researchers at non-Dutch institutions and to H2020 projects such as the EVERLASTING project. 4TU.Centre for Research Data makes data openly available with a user license comparable to a CC BY-NC license.
**Discoverability** of the data in 4TU.Centre for Research Data is achieved by: * Adding bibliographic (descriptive) metadata to each dataset (according to the Datacite metadata standard); * Allowing the metadata to be harvested by Google and portals such as Narcis ( _http://www.narcis.nl_ ) ; * Presenting metadata in the open linked data RDF format; * Providing a DOI to each dataset; * Linking the data set to the accompanying publication. Recognisability or visibility of the data as originating from the EVERLASTING project will be enhanced by creating a special collection of these data in 4TU.Centre for Research Data. _Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?_ Each data set deposited in 4TU.Centre for Research Data is assigned a **DOI** so that it is uniquely identifiable and can be cited in scholarly articles. #### Making data openly accessible [fair data] _Specify which data will be made openly available? If some data is kept closed provide rationale for doing so._ Partners will publish scientific articles with Green or Gold-standard open access to share results generated from the project. TU/e will ensure that rules in the Consortium Agreement are respected concerning scientific publications before their submission to journals. Prior to any disclosure (conference, publications, defence of PhD theses or Masters) the protection of the project progress must be secured. The project will generate research data in a wide range of levels of detail from simulation and lab results to demonstrator validation. Most data will be associated with results that may have a potential for commercial or industrial protection and therefore cannot be made accessible for verification and reuse in general due to intellectual property protection measures. 
However, relevant data necessary for the verification of results published in scientific journals can be made accessible on a case-by-case basis. The decision concerning the publication of data will be made by the Management Board, as the decision-making body of the consortium.

_Specify how the data will be made available_

The data will be made openly available via the data archive of 4TU.Centre for Research Data ( _http://data.4tu.nl_ ).

_Specify where the data and associated metadata, documentation and code are deposited_

As part of the dataset, a data guide will be deposited with data-specific information (parameters and/or variables used, column headings, codes/symbols used, etc.) and with information on the provenance of the data. When software is needed to reuse the data, the code will also be deposited together with the data itself.

#### Making data interoperable [fair data]

_Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability._

_Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?_

These questions do not yet apply to the project. To make the data interchangeable between the project partners and future users of the data, we are developing a metadata scheme. Related to standard vocabulary for data types, we can refer to our white papers (see D8.1 for more details), in which we will explain the definitions of some data types such as SOC and SOH.

#### Increase data re-use (through clarifying licenses) [fair data]

_Specify how the data will be licenced to permit the widest reuse possible_

Data sets deposited in 4TU.Centre for Research Data will have a user license comparable to a CC BY-NC license.
4TU.Centre for Research Data is in the process of offering data depositors a choice of different Creative Commons user licenses. When this is realized, a CC BY license will be selected.

_Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed_

Data will be made available for re-use immediately upon publication of the accompanying article or after a certain embargo period. This is decided on a case-by-case basis.

_Specify the length of time for which the data will remain re-usable_

4TU.Centre for Research Data archives data for a minimum of 15 years. Data from the EVERLASTING project are provided in a format that is recommended by 4TU.Centre for Research Data for long-term preservation and re-use. 4TU.Centre for Research Data has received a Data Seal of Approval, which guarantees that the research data deposited here will continue to be findable and shareable in the future.

_Describe costs and potential value of long term preservation_

The costs of archiving data in 4TU.Centre for Research Data depend on the size of the data. If the data is deposited on behalf of one of the partners of 4TU.Centre for Research Data (the 4 Dutch universities of technology, including Eindhoven University of Technology) and doesn't exceed 100 GB per year, then 4TU.Centre for Research Data can absorb these costs. Above 100 GB per year, the costs will be € 3.60 per GB per 15 years. Archiving data from non-partners is free up to 10 GB per year. Above 10 GB, the costs are € 4.50 per GB per 15 years.

**2.5.3 DATA INVENTORY**

The data inventory is structured alphabetically per partner. This version of the DMP elaborates on all expected datasets in more detail, covering topics such as description, formats, metadata and whether the data will be shared/made open access or not. In this section, based on the current insights, the reader can get a view of the datasets to be expected from the EVERLASTING project.
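The archiving fee rules quoted above can be sketched as a small calculator. One assumption, not stated explicitly in the text, is that the per-GB fee applies only to the volume exceeding the yearly free allowance; the fee itself covers 15 years of preservation.

```python
# Fee table as quoted in this DMP: (free allowance in GB/year,
# fee in EUR per GB per 15 years). "partner" means one of the 4TU
# partner universities; everyone else is "non_partner".
FEES = {
    "partner": (100, 3.60),
    "non_partner": (10, 4.50),
}

def archiving_cost(gb_per_year, depositor="partner"):
    """EUR owed for one year's deposit, assuming only the excess
    above the free allowance is billed (our interpretation)."""
    allowance, fee = FEES[depositor]
    billable = max(0.0, gb_per_year - allowance)
    return round(billable * fee, 2)

print(archiving_cost(80))                 # partner, within allowance: 0.0
print(archiving_cost(150))                # partner, 50 GB billable: 180.0
print(archiving_cost(25, "non_partner"))  # 15 GB billable: 67.5
```

If the fee instead applies to the full deposited volume once the allowance is exceeded, the `billable` line would simply become `gb_per_year`.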
The first open datasets, from TU/e and TUM, are available in the 4TU.Centre for Research Data repository.

#### ALGOLiON

Raw Data: cell and small pack measurements for current, voltage, temperature and single-frequency impedance data. Raw Data Format: CSV time series. Derived Parameters: electrochemistry-based parameters to diagnose the state of safety and predict the development of safety hazards. Derived Parameters Format: time domain normalized values. Analysis of Derived Parameters: statistical analysis of hazard parameters. The goal is to obtain data that will be helpful in identifying how the signature of a healthy battery differs from the signature of a damaged battery. The work will collect sufficient data to ascertain that the signatures are consistent across cells, various conditions of cells and charge-discharge cycles. A large statistical base will be used to quantify the extent to which the signatures are repeatable across these platforms. Modelling: trace the development of internal short circuits to determine an empirical time constant for changes in cell resistance and state of safety; projected time to threshold for thermal runaway.

#### CEA

<table> <tr> <th> </th> <th> **Data Description** </th> <th> Access </th> </tr> <tr> <td> WP4-T4.2: Multi-sensing strategy </td> </tr> <tr> <td> Meta Data </td> <td> * Type of sensors and specifications (temperature, strain gauge, IR, voltage, current).
* Sensor implementation (Number of sensors, location…) * Cell ID * Further experiment information if needed </td> <td> EVERLASTING CONSORTIUM </td> </tr> <tr> <td> Data </td> <td> • Raw data coming from the multi-sensing experiments Format : CSV </td> <td> EVERLASTING CONSORTIUM </td> </tr> <tr> <td> Data </td> <td> • Post treated data coming from the multi-sensing experiments Format : CSV </td> <td> CEA </td> </tr> <tr> <td> WP4-T4.3: Safety Testing Campaign </td> </tr> <tr> <td> Meta Data </td> <td> • Conditions of tests, Cell ID… </td> <td> OPEN ACCESS </td> </tr> <tr> <td> Data </td> <td> • Raw data coming from the Safety Testing Campaign Format : CSV </td> <td> OPEN ACCESS </td> </tr> <tr> <td> Data </td> <td> • Post treated data coming from the Safety Testing Campaign Format : CSV </td> <td> EVERLASTING CONSORTIUM </td> </tr> <tr> <td> WP4-T4.4: Post Mortem Analysis </td> </tr> <tr> <td> Meta Data </td> <td> * Protocols * Pictures * Cell Size, weight… </td> <td> OPEN ACCESS </td> </tr> <tr> <td> Data </td> <td> • Characterization Raw data Format : tbd </td> <td> OPEN ACCESS </td> </tr> <tr> <td> Data </td> <td> • Post treated data Format : tbd </td> <td> EVERLASTING CONSORTIUM </td> </tr> <tr> <td> WP4-T4.5: Thermal and electrical modelling </td> </tr> <tr> <td> Meta Data </td> <td> * Physical models (set of equations chosen to describe the model) * Software version </td> <td> OPEN ACCESS </td> </tr> <tr> <td> Data </td> <td> • Model implementation (full access software) Format : COMSOL software </td> <td> CEA </td> </tr> <tr> <td> Data DEMONSTRATOR SIMPLIFIED CASE </td> <td> * Data Input: * Simple Geometry * Mesh (Finite element) * State of the art material physical specifications and models Format : COMSOL Application * Data Output: * Simulation Results Format: CSV </td> <td> OPEN ACCESS </td> </tr> <tr> <td> Data & Meta Data DEMONSTRATOR EVERLASTING </td> <td> • Data Output : \- Simulation results Format : CSV </td> <td> OPEN ACCESS </td> </tr> <tr> <td>
CASE </td> <td> * Data and Meta Data input generated by EVERLASTING partners: * Material specifications and models deduced from experimental characterization * Experimental safety test measurements (voltage and temperature time records) Format: CSV * COMSOL Application </td> <td> EVERLASTING CONSORTIUM </td> </tr> <tr> <td> WP4-T4.6: Safety warning and prediction algorithms </td> <td> </td> </tr> <tr> <td> Input Data </td> <td> • Database from multi-sensing measurements: exploitation Format: CSV </td> <td> EVERLASTING CONSORTIUM (Common data with ALGOLiON) </td> </tr> <tr> <td> Output Meta Data </td> <td> • Detection & Learning algorithms Format: Matlab Code </td> <td> EVERLASTING CONSORTIUM (Share knowledge with ALGOLiON) </td> </tr> <tr> <td> Output Data </td> <td> • Results: statistics, figures, precision and robustness of the algorithms Format : tbd </td> <td> OPEN ACCESS </td> </tr> <tr> <td> WP5: Longer battery life </td> <td> </td> </tr> <tr> <td> Data </td> <td> * Thermal and hydraulic simulation results.
Detailed comparison with experimental performances Format : tbd * Experimental tests: Detailed test results in terms of thermal performance Format : tbd </td> <td> EVERLASTING CONSORTIUM </td> </tr> <tr> <td> Data </td> <td> • Experimental tests: General evaluation of 2 cooling solutions </td> <td> OPEN ACCESS </td> </tr> </table>

#### LION Smart

<table> <tr> <th> **WP** </th> <th> **Data** </th> <th> **Format** </th> <th> **Metadata** </th> </tr> <tr> <td> WP6 </td> <td> Reduced order electrochemical cell simulation models </td> <td> Matlab/Simulink </td> <td> Documentation of the simulation models </td> </tr> <tr> <td> WP6 </td> <td> Cell internal states and parameter estimation procedures and models </td> <td> Matlab/Simulink </td> <td> Documentation of the simulation models </td> </tr> <tr> <td> WP6 </td> <td> BMS simulation models </td> <td> Matlab/AmeSIM </td> <td> Documentation of the simulation models </td> </tr> </table>

#### RWTH Aachen

<table> <tr> <th> </th> <th> **Electrical cell measurements** </th> <th> **Simulations with physicochemical model** </th> <th> **Post mortem laboratory data** </th> </tr> <tr> <td> Data </td> <td> * Voltage * Current * Temperature * (Complex impedance data) </td> <td> * Internal Cell Potentials * Currents * Temperatures </td> <td> * Sizes * Weights * Microscopic Pictures * ICP Data </td> </tr> <tr> <td> Format </td> <td> CSV time series </td> <td> Binary file with Matlab parsing script </td> <td> Text Files, jpeg, Excel </td> </tr> <tr> <td> Metadata </td> <td> * Measurement Program * Cell ID * Time * Further measurement conditions (e.g.
ageing test the cell is part of) </td> <td> * Simulation revision * Parameter set (Stripped XML File) </td> <td> * Protocol of cell opening * Cell ID </td> </tr> </table>

#### Siemens

Some of the simulation results in WP1 (data produced by the simulators, not the models and the simulators themselves) may be considered open: if the data used for parameterizing the models is public, or if the partners give their approval for making simulation results public.

#### TU/e

#### _Current data set_

<table> <tr> <th> **Journal Publications** </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> Lead partner </td> <td> WP </td> <td> Status </td> <td> Open access </td> <td> RDM </td> <td> Year </td> <td> Authors </td> <td> Title </td> <td> Journal </td> </tr> <tr> <td> TUE </td> <td> 1 </td> <td> Accepted </td> <td> Green </td> <td> Data DOI: _https://doi.org/10.4121/uuid:0d19258e-8fe2-44b7-b390-cfc205669528_ </td> <td> 2018 </td> <td> N. Jin, D. Danilov, P.M.J. Van den Hof, M.C.F. Donkers </td> <td> Parameter Estimation of an Electrochemistry-based Lithium-ion battery model using a Two-Step Procedure and Sensitivity Analysis </td> <td> Int J Energy Research </td> </tr> </table>

The data set contains the Matlab simulation files needed to reproduce the results of the paper. The data set is openly accessible in the 4TU.Centre for Research Data repository under the following link: _https://data.4tu.nl/repository/uuid:0d19258e-8fe2-44b7-b390-cfc205669528_

#### _Expected data sets_

The following data sets are expected in the next period. All data sets will be open, unless confidential information was used to generate them.
<table> <tr> <th> **WP** </th> <th> **Data** </th> <th> **Format** </th> <th> **Metadata** </th> </tr> <tr> <td> WP1 </td> <td> Implemented battery simulation models; implementation of parameter estimation procedures </td> <td> Matlab </td> <td> Documentation of the simulation models </td> </tr> <tr> <td> WP3 </td> <td> Simulation models implementing range estimation techniques </td> <td> Matlab and AmeSIM </td> <td> Documentation of the simulation models </td> </tr> </table>

##### TUM

<table> <tr> <th> **Journal Publications** </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> Lead partner </td> <td> WP </td> <td> Status </td> <td> Open access </td> <td> RDM </td> <td> Year </td> <td> Authors </td> <td> Title </td> <td> Journal </td> </tr> <tr> <td> TUM </td> <td> 1 </td> <td> Published </td> <td> Gold </td> <td> Data DOI: _https://doi.org/10.4121/uuid:c10a6b3f-efe9-41ce-99f6-4093df68c653_ </td> <td> 2017 </td> <td> J. Sturm, F.B. Spingler, B. Rieger, A. Rheinfeld, A.
Jossen </td> <td> Non-Destructive Detection of Local Aging in Lithium-Ion Pouch Cells by Multi-Directional Laser Scanning </td> <td> Journal of the Electrochemical Society </td> </tr> </table> <table> <tr> <th> **Journal Publications** </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> Lead partner </td> <td> WP </td> <td> Status </td> <td> Open access </td> <td> RDM </td> <td> Year </td> <td> Authors </td> <td> Title </td> <td> Journal </td> </tr> <tr> <td> TUM </td> <td> 5 </td> <td> In review </td> <td> Yes </td> <td> Reserved DOI: _https://doi.org/10.4121/uuid:65ed2b75-9cb7-469d-b55a-f4599e5b2126_ </td> <td> </td> <td> Ilya Zilberman, Alexander Rheinfeld, Andreas Jossen </td> <td> Uncertainties in Entropy due to Temperature Path Dependent Voltage Hysteresis in Li-Ion Cells </td> <td> Journal of Power Sources </td> </tr> </table> <table> <tr> <th> **Task** </th> <th> **Author** </th> <th> **Publication** </th> <th> **Existing dataset used** </th> <th> **Dataset generated** </th> </tr> <tr> <td> T5.3 </td> <td> Ilya Zilberman </td> <td> Statistical analysis of self-discharge in lithium-ion cells </td> <td> </td> <td> • 1.1 Check Up and EIS Measurement of 48 LG MJ1 cells: Distribution of the capacity and of the impedance </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 1.2 OCV measurement of 48 LG MJ1 cells </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 1.3 Self-discharge measurement of 24 LG MJ1 cells (SOC, T) </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 1.4 Correlation analysis between single cell parameters </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 1.5 Simulation of the influence of different self-discharge rates on the cell voltage within the battery pack </td> </tr> <tr> <td> T5.3 </td> <td> Ilya Zilberman </td> <td> Influence of cell-to-cell variations, thermal conditions and balancing on the scalability of large lithium-ion battery packs </td>
<td> 1.1, 1.2, 1.3, 1.5 </td> <td> Thermal characterization * 2.1 Heat capacity measurement of the LG MJ1 cell * 2.2 Heat generation measurement over the whole SOC range for 0.2C, 0.5C and 1C charge and discharge Electrical characterization </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 2.3 EIS and pulse measurements (SOC, T) </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 2.4 Simulation of electrical long-term and short-term behaviour of a battery pack with different balancing circuits </td> </tr> <tr> <td> T5.3 </td> <td> Ilya Zilberman </td> <td> Analysis of balancing effort during the ageing of the battery module </td> <td> 1.5, 2.4 </td> <td> • 3.1 Module aging data with controlled temperature gradient • 3.2 Balancing effort during the aging of the module </td> </tr> <tr> <td> T5.4 </td> <td> Sebastian Ludwig </td> <td> Multi-objective non-dissipative balancing </td> <td> 2.4 </td> <td> • 4.1 Temperature distribution of a battery module during a driving profile with applied multi-objective non-dissipative balancing </td> </tr> <tr> <td> T1.1 </td> <td> Johannes Sturm </td> <td> Multi-dimensional simulation of internal short circuits </td> <td> ECM parameters (RWTH) </td> <td> • 4.2 Progression of the temperature within the cell during an internal short circuit • 4.3 Progression of the voltage during an internal short circuit </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> • 4.4 Influence of different cooling conditions for a state of the battery for an internal short circuit </td> </tr> <tr> <td> T1.2 </td> <td> Johannes Sturm </td> <td> Internal cell state estimation via electrochemical based EKF </td> <td> ECM parameters (RWTH) </td> <td> • 5.1 Different approximation methods for solid state diffusion • 5.2 Estimation of the anode potential with an Extended Kalman Filter (EKF) during CCCV charge </td> </tr> </table>

##### TUV SUD

TUV SUD performs electrical cell measurements and cycling tests (WP2) and
abuse tests (WP4). These tests are recorded in great detail (15 GB from 5 cell tests) because the data might also be useful to other partners at a later stage in the project.

<table> <tr> <th> </th> <th> **Electrical cell measurements and cycling (WP2)** </th> <th> **Abuse Testing on Cells and Packs (WP4)** </th> </tr> <tr> <td> Data </td> <td> * Voltage * Current * Temperature * Impedance data </td> <td> * Voltage * Current * Temperature * Videos * Photos </td> </tr> <tr> <td> Format </td> <td> * CSV * Open source video and picture codecs </td> <td> * CSV * Open source video and picture codecs </td> </tr> <tr> <td> Metadata </td> <td> Test report containing: * Measurement Program * Cell ID * Time * Further experiment information if needed </td> <td> Test report containing: * Measurement Program * Test Set-up * Cell/Pack ID * Time * Further experiment information if needed </td> </tr> </table>

##### VITO

<table> <tr> <th> 1\. Data Summary </th> </tr> <tr> <td> What is the purpose of the data collection/generation and its relation to the objectives of the project? </td> <td> The tests consist of ageing commercial Li-ion cells under several stress conditions (temperature, current, …). The results will allow a better understanding of how these cells age when exposed to these conditions, which can be similar to the conditions that the cells undergo in an electric vehicle. Additionally, these tests will be used to build an ageing model that can estimate the state of health of a battery and also predict its lifetime. </td> </tr> <tr> <td> What types and formats of data will the project generate/collect? </td> <td> The ageing tests will generate measurements collected at cell level. The data will contain time, current, voltage, impedance and temperature values. </td> </tr> <tr> <td> Will you re-use any existing data and how? </td> <td> We might use some data collected from previous EU projects. </td> </tr> <tr> <td> What is the origin of the data?
</td> <td> The data will be generated from experimental measurements. </td> </tr> <tr> <td> What is the expected size of the data? </td> <td> A few gigabytes </td> </tr> <tr> <td> To whom might it be useful ('data utility')? </td> <td> Researchers working on Li-ion battery ageing; Li-ion battery users in automotive applications </td> </tr> <tr> <td> 2\. FAIR data </td> </tr> <tr> <td> Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)? </td> <td> We can use metadata according to standards. </td> </tr> <tr> <td> What naming conventions do you follow? </td> <td> A naming convention agreed within the consortium. </td> </tr> <tr> <td> Will search keywords be provided that optimize possibilities for reuse? </td> <td> yes </td> </tr> <tr> <td> Do you provide clear version numbers? </td> <td> yes </td> </tr> <tr> <td> What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how. </td> <td> An Excel file with an overview of all the tests is usually created, which makes it easier to follow up on the tests and to identify the test conditions and results. </td> </tr> </table>

##### VOLTIA & VDL ETS

As demonstrator partners, VOLTIA and VDL ETS can support the other participants by supplying real vehicle data from their existing e-fleet to help verify and optimise algorithms and models. In the final year, VOLTIA and VDL ETS also play a role in the real-life validation of the EVERLASTING BMS features in their respective electric vehicles.

<table> <tr> <th> **Task** </th> <th> **Author** </th> <th> **Existing dataset used** </th> <th> **Dataset generated** </th> </tr> <tr> <td> T3.1 T3.3 </td> <td> VOLTIA </td> <td> Driving data from operation of electric utility vans.
The data consists of: GPS coordinates, driving range, battery SOC, maximal, current and average speed, voltage, current, energy flow, temperature on cell and pack level. Monitored vehicles have been driven more than 1 million km cumulatively. </td> <td> Driving data from operation of electric utility vans. Measurement of data during specially designed driving profiles: voltage, current and SOC will be measured with a high data sampling rate (5 Hz). </td> </tr> <tr> <td> T5.1 </td> <td> VOLTIA </td> <td> \- </td> <td> Experimental validation of a prototype passive cooling system for the battery pack </td> </tr> <tr> <td> T7.5 </td> <td> VOLTIA, TÜV SÜD </td> <td> \- </td> <td> Measurement of electrical, thermal and functional parameters of the demonstrator battery pack. </td> </tr> </table>

# 3 CONCLUSIONS

EVERLASTING Deliverable D8.2 "Data Management Plan" must be considered a **living document** and will be updated when important updates are available: new datasets, updates on existing datasets, changes in consortium policies (e.g. on exploitation of results, patenting, …) or other external reasons (e.g. changes in consortium members, suggestions from the advisory board, …). The Data Management Plan will be updated, as a minimum, in line with the periodic evaluation/assessment of the project and in parallel with the update of D8.1 **"Dissemination and exploitation plan"**.

The scope of this **second version of the EVERLASTING DMP** reflects the consortium's updated view of the overall types of data that have been or will be collected or produced during the project, and of how and which data will be made openly available to external stakeholders such as research institutes and companies. The first open datasets have been uploaded in the **4TU.Centre for Research Data repository** (see 2.5.3 Data Inventory).
1154_Rafts4Biotech_720776.md
# 1 INTRODUCTION

## 1.1 PURPOSE OF THE DOCUMENT

The goal of this document is to provide a technical description and the guidelines of the _Rafts4Biotech_ data management plan, version 1. The deliverable outlines how the research data collected or generated will be handled during and after the _Rafts4Biotech_ action, describes which standards and methodologies for data collection and generation will be followed, and whether and how data will be shared. This document follows the template provided by the European Commission in the Participant Portal 1 . Although a DMP is not required for the call Biotec-03-2016, because this particular call does not adhere to the Open Research Data Pilot, we nevertheless consider data sharing an important component of our dissemination strategy and of our Responsible Research and Innovation (RRI) actions. We therefore endeavour to delineate the main actions needed to generate a DMP, to identify the specific datasets that will be disseminated under free access/use, and to select the channels for their dissemination. The _Rafts4Biotech_ project DMP is written with reference to Art 29.3 of the Annotated Model Grant Agreement 2 , called "Open Access to Research Data" (research data management). Open access refers to the practice of giving online access to scholarly information in all disciplines, free of charge to the end-user. In this way data becomes re-usable, and the benefit of public investment in the research is increased. Project participants must deposit their data in a research data repository and take measures to make the data available to third parties. A third party should be able to access, mine, exploit, reproduce and disseminate the data. This should also help to validate the results presented in scientific publications.
In addition, Article 29.3 suggests that participants will have to provide information, via the repository, about tools and instruments needed for the validation of project outcomes. The Data Management Plan is a document that is submitted to the EU as project **Deliverable 8.1** in June 2017, at an early stage of the project. It is therefore important to note that the document will evolve and further develop during the project's life cycle.

# 2 RAFTS4BIOTECH DATA MANAGEMENT PLAN

## 2.1 OBJECTIVE OF THE RAFTS4BIOTECH PROJECT

The objective of the _Rafts4Biotech_ project is to develop a new and versatile generation of microbial chassis to perform industrially relevant reactions confined in subcellular membrane compartments, allowing their fine regulation to achieve optimal performance and isolation from cellular metabolism to prevent toxic cross-reactivity. The _Rafts4Biotech_ project will produce a new generation of reliable and robust synthetic microbial chassis platforms (SMCPs) in which industrial production processes are confined in bacterial lipid rafts; SMCPs will thus be released from their classical off-genome limitations and optimized for the industrial production of numerous biochemical processes. The _Rafts4Biotech_ project will use two of the most biotechnologically relevant microbial systems, the gram-positive bacterium _Bacillus subtilis_ and the gram-negative bacterium _Escherichia coli_ , to engineer synthetic bacterial lipid rafts to optimize the performance of three challenging biochemical processes of growing biotechnological interest, including 1) the production of a new class of lipophilic antibiotics to fight resistant infections. The industrial production of these antimicrobials cannot be achieved in conventional cell factories because of their inherent toxicity to bacterial cells and the natural capacity of these lipophilic molecules to inhibit essential activity in bacterial membranes.
2) the implementation of complex biochemical pathways of commercial value spatially confined in synthetic bacterial lipid rafts. Subcellular confinement of these processes in specific highly lipophilic areas of the bacterial membrane will increase the efficiency of the targeted enzymatic processes that can be applied to industrial manufacturing. The general aim of the _Rafts4Biotech_ project is therefore to develop an innovative technology to optimize multistep enzymatic processes using a new generation of SMCPs. As far as dissemination is concerned, different dissemination channels will be considered depending on whether a dataset comes in the form of a scientific publication or, in contrast, is a large dataset, such as those produced by the various -omics research disciplines of the _Rafts4Biotech_ project. In the flow-chart below, we delineate the main actions to identify the specific datasets that will be disseminated.

**Figure 1 Flowchart of the Data Management Plan designed for _Rafts4Biotech_ .**

# 3 DATA SET DESCRIPTION

The data for the _Rafts4Biotech_ project will be produced by various researchers from the participating laboratories using general guidelines and procedures to standardize the handling of data and to help decide which data are dedicated to dissemination and which data require IPR protection.

## 3.1 TRANSCRIPTOMIC DATA

Transcriptomic data will be generated via RNA-Seq using high-throughput sequencing platforms. The resulting raw data (fastq files) and the alignment information (BAM files) will be stored in the European Nucleotide Archive 4 (ENA), which accepts sequence reads and associated analyses. Data submitted to ENA is automatically exchanged between the International Nucleotide Sequence Database Collaboration 5 (INSDC) partners: NCBI and DDBJ. These databases make data publicly available to the community, allowing for new discoveries by comparing these raw data sets with others.
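Sequence archives such as ENA commonly require an MD5 checksum per submitted data file, so raw reads are typically accompanied by a per-file checksum manifest. The snippet below is a generic pre-submission sketch, not an ENA client; the `manifest` helper is our own illustration.

```python
import hashlib
import os

def md5sum(path, chunk_size=1 << 20):
    """MD5 of one file, read in chunks so large fastq/BAM files fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def manifest(paths):
    """Map each file name to its MD5 checksum, as a pre-submission record."""
    return {os.path.basename(p): md5sum(p) for p in paths}
```

Running `manifest(["run1_R1.fastq.gz", "run1_R2.fastq.gz"])` (hypothetical file names) yields the name-to-checksum table that accompanies the deposit.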
Secondly, analyzed data files will be deposited in the EBI ArrayExpress Archive of Functional Genomics Data 6 . This ensures that transcriptome experiments are available for reuse and combined analyses. Both the EBI and the NCBI curate the data deposited to their archives, which ensures uniformity of the transcriptome data and its cognate experiments.

## 3.2 PROTEOMIC DATA

Proteomic data will be generated by LC-MS/MS and SRM label-free approaches. Upon publication, data will be made freely available through public databases and supplementary material. Processed and raw data will be stored either in the ProteomeXchange international public database 7 or the PRIDE European repository 7 .

## 3.3 METABOLOMIC DATA

The metabolomic data will be made freely available in the standard formats defined by the Metabolomics Standards Initiative 8 to enable free and open sharing of metabolomics data. The specific format will depend on whether targeted or untargeted methods are being used. Besides the specific data format, the reported metadata are possibly even more crucial and are routinely reported in our internal storage system. Within the project we will use the OpenBis platform to store, annotate, and exchange metabolomics data 9 . Upon publication, data will be made freely available through public databases, supplementary material and publicly accessible ETH storage for larger sets, thereby ensuring preservation and re-use.

## 3.4 GENETIC TOOLS AND CONSTRUCTS

Regarding genetic tools and constructs that become available during the completion of _Rafts4Biotech_ , plasmids and useful genetic constructs that are released from IPR protection will be deposited at the Addgene non-profit public repository 10 . Genetic sequences will be available to any requesting laboratory. Addgene maintains a strong interaction with participating laboratories to implement data curation and standardization of the samples stored in its DNA bank.
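The deposition routes described in sections 3.1 to 3.4 amount to a simple mapping from data type to target repository. The sketch below merely restates that plan in code; the dictionary keys are our own illustrative labels, not project identifiers.

```python
# Data type -> public repository, as described in sections 3.1-3.4.
REPOSITORIES = {
    "transcriptomic_raw": "European Nucleotide Archive (ENA)",
    "transcriptomic_analyzed": "EBI ArrayExpress",
    "proteomic": "ProteomeXchange / PRIDE",
    "metabolomic": "MSI-format files via OpenBis",
    "genetic_construct": "Addgene",
}

def repository_for(data_type):
    """Return the deposition target for a data type, or fail loudly."""
    try:
        return REPOSITORIES[data_type]
    except KeyError:
        raise ValueError(f"no deposition route defined for {data_type!r}")

print(repository_for("proteomic"))  # ProteomeXchange / PRIDE
```

Failing loudly on unknown types is deliberate: any new data type produced in the project should trigger an explicit decision about its deposition route.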
# 4 DATA SET STORAGE

During the life cycle of the project, data will be stored and systematically organised in a repository called 'Data4Raft'. A website will be generated not only to showcase all online communication material and all dissemination material (papers, conference abstracts, conference talk videos, posters), but also (via an internal access-restricted intranet to the Data4Raft repository) to share, preserve, cite, explore and analyse research data. This will make it easier to make data available to others at the end of the project and to replicate others' work.

# 5 DATA DISSEMINATION

Following the recommendation of the Exploitation Committee of _Rafts4Biotech_ to the Steering Committee, datasets will be disseminated through appropriate channels, mostly depending on whether a dataset comes in the form of a scientific publication or, in contrast, is a large omics dataset.

## 5.1 PUBLICATIONS

Open access will be mandatory for research publications. Open access publishing (gold open access) will be the preferred option when it is available. However, high-impact journals such as _Science_ and _Nature_ do not provide this option. Even in the case of high-impact journals that do offer the gold open-access option, the publication fees are often so high that they preclude many laboratories from publishing in gold open access. As we have a clear intention to achieve the highest possible impact, publication in high-impact journals will be pursued during the course of this project. If the journal does not have a gold open access option, or the publication fees for open access cannot be assumed by the reporting laboratory, we will choose self-archiving (green open access) as the option to publish in open access. This option is free and suitable for accepted manuscripts that become public after a defined embargo period of 2-6 months.
As a reference, we will use the Europe PubMed Central (Europe PMC) repository 11 . This is the repository recommended for green open-access publication in other European funding actions such as ERC grants (European Research Council). Europe PMC provides access to life sciences articles, books, patents and clinical studies, as well as links to online databases such as UniProt, the European Nucleotide Archive (ENA), Protein Data Bank Europe (PDBe) and BioStudies. Europe PMC meets the agreed definition of a repository as determined by Jisc and endorsed by RLUK, SCONUL, ARMA and UKCoRR. ## 5.2 IPR AND PUBLIC AVAILABILITY OF DATA While the data produced and collected in the framework of the _Rafts4Biotech_ project will be publicly available, precautions will be taken regarding intellectual property rights (IPR). To protect IPR, the ECR will be free to choose any available form of protection (patent, trademark, industrial design, copyright, trade secret, confidentiality). The choice of the most suitable form should be made on the basis of the specificities of the action and the type of result. The ECR will develop the IPR section that has been previously defined in the Consortium Agreement (CA). Rules of confidentiality established in the CA will define access rights, license agreements and the timing for public availability of the data sets. ## 5.3 PRESERVATION OF THE FINAL DATA SET Although the web-based ‘Data4Raft’ repository is a useful tool for storing data during the life cycle of the project, it is not geared towards long-term preservation of the data. Therefore, at the completion of the project, the final dataset will be transferred to a sustainable repository, which ensures sustainable archiving of the final research data. Several such repository services promoting sustained access to digital research data are currently offered, and the ECR will attempt to identify a suitable one. 
**END OF THE DOCUMENT**

Project: Rafts4Biotech, D8.1 Presentation of the Data Management Plan (DMP)
Deliverable ID: D.8.1
Grant Agreement: 720776
Call ID: H2020-NMBP-BIO-2016
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1155_ZAero_721362.md
# 3. Data Formats

Initially, data formats used for data representation within the ZAero project (or at individual project partners) may be proprietary and/or of complex structure. However, when preparing data for open access, the goal is to use standard data formats that are well known and easy to handle. Any data stored in a highly proprietary or complex format must be converted to an easily readable format before publication. For example, HDF5, XML, or standard image formats (.png, .jpg) are favoured over complex proprietary binary formats. Every published data set will be accompanied by a simple “readme.txt” text file that provides general information about the data as well as a detailed description of the data format.

# 4. Software and Interoperability

In some cases it makes sense to provide data together with software that can load and/or process the data, in order to make it easier to work with. At the current stage of the project it is not yet completely clear whether and how such software can be provided with the data. For each data set there will be a decision on whether and how dedicated software can be provided together with it. Preferably, we promote Python as the programming language for working with scientific open-access data from the ZAero project. Since Python has become very popular as a language of choice in research, this makes it easy for a large number of researchers to access and use the data.

# 5. Licences for Data

For open access publication of pure data, the Creative Commons Attribution 4.0 International Public License (IPL) [5] will be considered in the first place. This license essentially provides the right to freely copy and redistribute the material in any medium or format. 
Re-mixing, transforming, and building upon the material are explicitly permitted, for any purpose. The licensor cannot revoke these freedoms as long as the license terms are followed. The IPL is the preferred choice for data publication in ZAero; in specific cases a different license might be considered.

# 6. ZAero Data Sets

The following list includes the data sets which are expected to be acquired and/or used within the ZAero project. Each data set is characterized by:

* Code: a code consisting of “ZAero-“ and a 3-letter code (e.g. “ZAero-DTD”).
* Data set full name: a full name for the data set.
* Responsible beneficiary: the main responsible project partner.
* Description: a general description of the data set.
* Data format: the expected format of the data.
* Data set size: the expected size of a single data set instance.
* Data sharing: information about whether the data set is confidential (not published) or published under open access criteria.

The subsequent list of data sets mainly defines the type of data. During the run of the project it can be expected that multiple instances of each data set will be acquired. For example, dry material triangulation data might be generated in a first trial and later repeated with modified parameters. In this case, the specific instances of data sets should additionally be tagged by

* the date when the data set was acquired, and
* a textual description of the conditions/environment under which the data was captured or initially generated.

Multiple instances of a single type of data set will be tagged with a running number. For example, data sets for code “ZAero-FCD” (see below) will be referred to as ZAero-FCD-1, ZAero-FCD-2, etc. Additionally, data sets that are uploaded to the Zenodo portal will receive a unique DOI. For the data sets listed below the entry “Data sharing” describes whether data sharing is considered. 
“Publication” means that the plan is to publish the data set (or at least specific instances). “Confidential” means that the respective data set is not considered for publication. <table> <tr> <th> **Code** </th> <th> **ZAero-FCD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Fibre orientation sensor calibration data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> Ideko </td> </tr> <tr> <td> **Description** </td> <td> Data used to evaluate calibration accuracy of Profactor’s fibre orientation sensor: calibration images, position information etc. </td> </tr> <tr> <td> **Data format** </td> <td> HDF5 file </td> </tr> <tr> <td> **Data set size** </td> <td> ~40 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Code** </th> <th> **ZAero-LCD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Laser triangulation sensor calibration data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> Ideko </td> </tr> <tr> <td> **Description** </td> <td> Data used to evaluate calibration accuracy of Profactor’s laser triangulation sensor: calibration images, position information, etc. </td> </tr> <tr> <td> **Data format** </td> <td> HDF5 file </td> </tr> <tr> <td> **Data set size** </td> <td> ~40 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Code** </th> <th> **ZAero-DTD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Automated dry material placement triangulation data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> PROFACTOR </td> </tr> <tr> <td> **Description** </td> <td> Profile depth data that is generated by the in-line depth sensor from Profactor in the dry material placement shop floor at Danobat. </td> </tr> <tr> <td> **Data format** </td> <td> 16-bit PNG uncompressed image files in combination with simple text-file providing meta-information about the raw depth data (e.g. 
calibration information, position information, etc.) </td> </tr> <tr> <td> **Data set size** </td> <td> ~2 GB </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Code** </th> <th> **ZAero-FTD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Automated fibre placement triangulation data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> PROFACTOR </td> </tr> <tr> <td> **Description** </td> <td> Profile depth data that is generated by the in-line depth sensor from Profactor in the AFP test environment at MTorres and FIDAMC </td> </tr> <tr> <td> **Data format** </td> <td> 16-bit PNG uncompressed image files in combination with simple text-file providing meta-information about the raw depth data (e.g. calibration information, position information, etc.) </td> </tr> <tr> <td> **Data set size** </td> <td> ~1 GB </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-FOS** </th> </tr> <tr> <td> **Data set full name** </td> <td> Dry material fibre orientation sensor data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> PROFACTOR </td> </tr> <tr> <td> **Description** </td> <td> Dense fibre orientation data. For a dense grid of points over the surface of a scanned ADMP part fibre orientations are determined. Each measurement position is stored as a tuple of 6 floating point values: x, y, z (position as 3D point), vx, vy, vz (fibre orientation as 3D-vector). </td> </tr> <tr> <td> **Data format** </td> <td> HDF5 file containing a single Nx6 matrix of floating point values. 
</td> </tr> <tr> <td> **Data set size** </td> <td> ~10 GB (Probably, data sets will be down-sampled or only parts of it will be made publicly available to make data easily usable) </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-CTD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Curing monitoring: temperature data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> AGI </td> </tr> <tr> <td> **Description** </td> <td> Temperature profile (temperature over time at different locations) during CFRP curing </td> </tr> <tr> <td> **Data format** </td> <td> Xyz coordinate for each location, temperature value in °C, time stamp </td> </tr> <tr> <td> **Data set size** </td> <td> < 1 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Confidential </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-CFD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Curing monitoring: flow front data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> AGI </td> </tr> <tr> <td> **Description** </td> <td> Flow front propagation at different locations during CFRP resin injection </td> </tr> <tr> <td> **Data format** </td> <td> Xyz coordinate for data points, yes/no value on resin arrival, time stamp </td> </tr> <tr> <td> **Data set size** </td> <td> < 1 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Confidential </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-CDD** </th> </tr> <tr> <td> **Data set full name** </td> <td> Curing monitoring: degree of curing data </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> AGI </td> </tr> <tr> <td> **Description** </td> <td> Information on the degree of curing during resin curing in a CFRP fabrication process </td> </tr> <tr> <td> **Data format** </td> <td> Xyz coordinate for each location, value for degree of curing, time stamp </td> </tr> <tr> 
<td> **Data set size** </td> <td> < 1 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Confidential </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-SPE** </th> </tr> <tr> <td> **Data set full name** </td> <td> Schema Specification </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> DS </td> </tr> <tr> <td> **Description** </td> <td> Schema used in defining the content of the manufacturing database. (Also used for Effects of Defects Database) </td> </tr> <tr> <td> **Data format** </td> <td> MSWord document </td> </tr> <tr> <td> **Data set size** </td> <td> ~70 KB </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-MDB** </th> </tr> <tr> <td> **Data set full name** </td> <td> Manufacturing Data Base </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> DAS </td> </tr> <tr> <td> **Description** </td> <td> Manufacturing database for test model; Will contain simulated and/or measured data. Whole model data to show the full sweep of zero defect simulation and decision methods. One version of this data set is expected for each demonstration within the project. </td> </tr> <tr> <td> **Data format** </td> <td> HDF5 file </td> </tr> <tr> <td> **Data set size** </td> <td> ~5 GB </td> </tr> <tr> <td> **Data sharing** </td> <td> A few representative versions of data sets are planned to be published. Probably, these data sets will be reduced before publication to meet confidentiality concerns. 
</td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-EDB** </th> </tr> <tr> <td> **Data set full name** </td> <td> Effects of Defects Database (EDB) </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> DS </td> </tr> <tr> <td> **Description** </td> <td> Database which tabulates the results of multiple FE analysis for use as a lookup during manufacture to determine the knock-down factors to apply for measured defects. </td> </tr> <tr> <td> **Data format** </td> <td> HDF5 file </td> </tr> <tr> <td> **Data set size** </td> <td> ~20 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Publication </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-FEA** </th> </tr> <tr> <td> **Data set full name** </td> <td> Effects of Defects FE analysis results </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> DS </td> </tr> <tr> <td> **Description** </td> <td> Datasets for multiple FE analysis runs used to determine the effect of different defects and defect locations on the properties of laminates. Used to create the lookup EDB for use during manufacture to determine the knock-down factors to apply for measured defects. Many (100 - 1000s) instances are expected. </td> </tr> <tr> <td> **Data format** </td> <td> Proprietary file format </td> </tr> <tr> <td> **Data set size** </td> <td> ~10 GB </td> </tr> <tr> <td> **Data sharing** </td> <td> Confidential </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-ALU** </th> </tr> <tr> <td> **Data set full name** </td> <td> Automated fiber placement lay-up definition </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> MTorres </td> </tr> <tr> <td> **Description** </td> <td> Definition of paths for lay-up on AFP machines. 
</td> </tr> <tr> <td> **Data format** </td> <td> CATIA file </td> </tr> <tr> <td> **Data set size** </td> <td> ~5 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Confidential </td> </tr> </table> <table> <tr> <th> **Data set short name** </th> <th> **ZAero-DLU** </th> </tr> <tr> <td> **Data set full name** </td> <td> Automated dry material placement lay-up definition </td> </tr> <tr> <td> **Responsible beneficiary** </td> <td> Danobat </td> </tr> <tr> <td> **Description** </td> <td> Definition of paths for lay-up on ADMP machines. </td> </tr> <tr> <td> **Data format** </td> <td> JT format (could be provided in any other CAD data format) </td> </tr> <tr> <td> **Data set size** </td> <td> ~5 MB </td> </tr> <tr> <td> **Data sharing** </td> <td> Confidential </td> </tr> </table>

# 7. Allocation of Resources

Costs for making data FAIR in the ZAero project will not be extensive. Due to the use of the Zenodo repository, no additional costs for infrastructure concerning storage and online availability of the data will be incurred. Concerning efforts related to preparation of the data for easy re-use, all partners need to work together to define clear interfaces within the project anyway. If the definition of internal interfaces is done well, this will ease preparation of data for publication. We therefore expect that all efforts will be covered by the activities within the individual technical work packages and/or the dissemination work package. As the coordinator, Profactor will be responsible for keeping an overview and guiding data management across the whole project. However, individual partners are associated with data sets and declared “responsible beneficiary” as specified in the tables above. As such, partners are expected to supervise data acquisition, storage, and documentation for “their” data sets. 
In case specific versions of data sets that allow interesting (scientific) insights are identified, these will be presented and discussed in the regular telephone conferences or at general meetings. Activities for publication of data sets (preparation of scientific publications, upload to online repository, etc.) will be driven by the respective responsible beneficiaries.
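The instance-tagging scheme described in Section 6 (a running number appended to the data set code, e.g. ZAero-FCD-1, ZAero-FCD-2) could be automated with a small helper. This is an illustrative sketch only; the function name and behaviour are assumptions, not part of the DMP:

```python
import re

def next_instance_id(code, existing):
    """Return the next running-number instance identifier for a data set
    code such as "ZAero-FCD", given the identifiers already assigned.
    Helper name and behaviour are illustrative, not project-mandated."""
    pattern = re.compile(re.escape(code) + r"-(\d+)$")
    # Collect running numbers of existing instances of this data set code.
    numbers = [int(m.group(1)) for name in existing
               if (m := pattern.match(name))]
    return f"{code}-{max(numbers, default=0) + 1}"
```

For example, `next_instance_id("ZAero-FCD", ["ZAero-FCD-1", "ZAero-FCD-2"])` yields `"ZAero-FCD-3"`; instances of other data set codes are ignored.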
1157_DYNACOMP_722096.md
# DATA SUMMARY The purpose of the data will be to create a database of material properties regarding the dynamic behaviour of the following **three composite materials:** * **HexPly® M91** (UD prepreg system, area weight of 268 gsm, cpt of 0.25 mm, with PES and Polymide particles), which shows evidence of displaying a good dynamic behaviour. * **RTM250** (RTM resin with core-shell particles), using a **Hexcel® 46290 WB1010 (5-harness)** fabric (area weight of 280 gsm, cpt of 0.3 mm), which shows evidence of displaying a good dynamic behaviour. * **HexPly® 8552** (UD prepreg system with PES), which is not suitable for high-rate impact behaviour. The following properties will be collected as a function of loading rate: * Longitudinal Tension (LT) * Transversal Tension (TT) * Longitudinal Compression (LC) * Transversal Compression (TC) * In-Plane Shear (IPS) * Interlaminar Toughness Mode I (GIC) and Mode II (GIIC) * Intralaminar toughness (G1+) * Impact (IMP) The following micromechanical properties will be collected as a function of loading rate: * Hardness * Indentation elastic modulus The following micromechanical properties will be collected as a function of impact energy and indentation strain rate: * Dynamic hardness * Coefficient of restitution The data generated will be numerical, including the load-displacement curves as well as the mechanical properties derived from such curves. The origin of the data will be mechanical tests carried out at the participating institutions. The data will be used internally and will serve as a database against which to compare the results of the simulations that will be produced in the framework of the project. _Project nº 722096. H2020-MSCA-ITN 2016 European Industrial Doctorate_ # FAIR DATA ## Making data findable, including provisions for metadata A large number of tests will be produced. 
Each test will generally produce a CSV file with the data acquired from the mechanical testing machine, typically load, displacement and time. The Core Scientific Metadata Model (CSMD) will be considered as a reference. The names of the files must identify the test unambiguously. **The following naming scheme is proposed:**

XXX_YY_ZZ_NN, where:

* XXX: material type: M91, RTM250 or 8552.
* YY: type of test: LT, TT, LC, TC, IPS, GIC, GIIC, G1+, IMP
* ZZ: cross-head speed
* NN: sample number

A header will be added with information about the material, the stacking of the composite laminate, the type of test and the dimensions of the specimen.

**For micromechanical test results:**

Impact indentation: AAAA_BBB_CC_D, where:

* AAAA: material type: M91, RTM250 or 8552.
* BBB: impulse load
* CC: impulse distance
* D: test number: 1, 2, ... of each type.

Strain-rate-constant quasistatic indentation: AAAA_BBB_CC_D, where:

* AAAA: material type: M91, RTM250 or 8552.
* BBB: maximum load
* CC: strain rate
* D: test number: 1, 2, ... of each type.

## Making data openly accessible

Initially, and due to the commercial value of the data, the **data generated will be confidential and opened only to the members of the consortium** (see ownership of results in the Consortium Agreement, CA). **When results of the research are made available** , for instance through a research publication, **and clearance is received** from the industrial partners (HEXCEL and MM), **efforts will be made to make the accompanying data publicly available through an open repository** . In such cases, due to the simplicity of the data (CSV files), no special software is needed to access it. The _Zenodo_ repository will be considered in those cases. 
## Making data interoperable The data is easily interoperable, as the mechanical tests provide standard signals such as load, displacement and time. ## Increase data re-use In those cases where the data is publicly available (for instance linked to a publication, see section 6.2), it is expected that the data will be licensed under the Public Domain. # ALLOCATION OF RESOURCES The costs are minimal, as open and free tools are considered for making the data publicly available. The project management office (PMO) at IMDEA is responsible for data management, under the supervision of the project coordinator (D. Jon Molina), and is covered by funding received from the REA. # DATA SECURITY The Zenodo open repository is considered safe for long-term preservation. # ETHICAL ASPECTS Whenever data is to be made publicly available, written consent will be sought from the participating partners.
1158_enCOMPASS_723059.md
Executive Summary
1 DATA SUMMARY
2 FAIR DATA
2.1 Making data findable, including provisions for metadata
2.2 Making data openly accessible
2.3 Making data interoperable
2.4 Increase data re-use (through clarifying licenses)
3 ALLOCATION OF RESOURCES
4 DATA SECURITY
5 ETHICAL ASPECTS
6 OTHER

# **EXECUTIVE SUMMARY** The Commission is running a flexible pilot under Horizon 2020 called the Open Research Data Pilot (ORD pilot). Projects participating in the pilot must submit a first version of the DMP (as a deliverable) within the first 6 months of the project. The DMP needs to be updated over the course of the project whenever significant changes arise. The enCOMPASS project is taking part in this pilot, and this is the initial version of the Data Management Plan, set up according to the ORD pilot specifications. This document therefore describes the data, how it will be created, how it will be stored and backed up, who owns it and who is responsible for the different data. # DATA SUMMARY The enCOMPASS project will collect data related to energy consumption in households, schools and public buildings in pilots located in Germany (the city of Haßfurt), in Greece (Athens and Thessaloniki), and in Switzerland (the municipality of Gambarogno). _Purpose of data collection_ The purpose of the data collection is to implement and validate an integrated socio-technical approach to behavioural change for energy saving, by developing innovative user-friendly digital tools for making energy consumption data available and understandable for the different users and stakeholders (residents, employees, pupils, building managers, utilities, ICT providers), empowering them to collaborate to achieve energy savings and manage their energy needs in energy-efficient, cost-effective and comfort-preserving ways. 
_Relation to the objectives of the project_ In order to provide information to energy users about their consumption levels and their potential for improvement, it is necessary to collect, process, and analyse a variety of data sources, from smart meter readings providing detailed energy consumption, to building characteristics, to user psychographic data. _Types and format of data_ There are 3 major types of data: * **Feature data** : these data describe the structural properties of a building (e.g. the surface area, the type of insulation, the lighting installation, etc.) or of an individual (age, education, environmental attitude, comfort preferences, etc.). These data are both quantitative and qualitative. * **Sensor data** : these data are data streams of observed measurements including energy consumption, temperature, humidity, luminance, and presence. * **Interaction data** : these data represent the interaction of the users with the enCOMPASS platform, such as logs of accesses, actions performed and so on. _Re-use of existing data_ The data used in the enCOMPASS project is being collected and generated by the project itself. _Origin of data_ * Psychographic data of residential users will be provided by the users themselves through voluntarily answering data collection questionnaires in an anonymous way. * Electrical and thermal energy consumption will be provided by smart meter readings, where available. Power consumption of individual appliances will be provided by smart plugs, where available. * Local comfort measurements in households and buildings, such as temperature, humidity and luminance, will be collected by in-situ sensors installed by the project. * Presence of humans in monitored buildings and rooms will also be monitored by in-situ sensors installed by the project. * Interaction data is generated by the use of the enCOMPASS platform by the users. The data can refer to interaction with web and mobile apps. 
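For illustration, the three data categories described above (feature, sensor, interaction) could be modelled as simple typed records. All class and field names below are assumptions for the sketch, not the project's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record types for the three enCOMPASS data categories;
# every name here is an assumption, not the project's schema.

@dataclass
class FeatureData:
    """Structural property of a building or attribute of an individual."""
    subject_id: str   # building or (anonymised) person identifier
    attribute: str    # e.g. "surface_area", "insulation_type", "age"
    value: str        # quantitative or qualitative value

@dataclass
class SensorData:
    """One reading from an observed measurement stream."""
    sensor_id: str
    timestamp: datetime
    quantity: str     # e.g. "energy_kWh", "temperature_C", "humidity"
    value: float

@dataclass
class InteractionData:
    """One logged interaction of a user with the platform."""
    user_id: str
    timestamp: datetime
    action: str       # e.g. "login", "view_consumption"
```

Keeping the three categories as distinct record types mirrors the fact that they are collected through different channels (questionnaires, in-situ sensors, platform logs) and are subject to different anonymisation requirements.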
_Users of the data_ The collected data will mainly be used to build profiles of users in order to provide meaningful and effective suggestions for energy efficiency. The data is also collected to experimentally verify the impact of the energy saving suggestions. The data will therefore be used by the researchers of the enCOMPASS project. Other participants of the enCOMPASS project will also have access to the data (e.g. building managers, home owners, utility operators, etc.), but specific interfaces will be provided for them to access the data, and therefore they are not expected to directly access the datasets. # FAIR DATA ## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA The data generated by the enCOMPASS project will be annotated using public metadata standards where appropriate. In particular, the DDI (Data Documentation Initiative _http://www.ddialliance.org/_ ) will be used for socio-economic data where possible, and OGC's Observation and Measurements ( _http://www.opengeospatial.org/standards/om_ ) will be used for sensor-generated data. Provided the data is not covered by non-disclosure agreements and does not violate ethics, it will be published using the Zenodo platform ( _https://zenodo.org_ ), which enables the association of a DOI to the generated data sets. Data sets will be catalogued through a structured naming convention defined as follows: **enCOMPASS_TYP_AAA_CC_PP_BTY_BLDID_vXX_(short_title)** * TYP: type of data, one of FET (feature), SEN (sensor), INT (interaction) * AAA: author organisation (use the acronyms of the contract) * CC_PP: country and place of origin, one of: DE_HS for Haßfurt, GR_AT for Athens, GR_TH for Thessaloniki, CH_GM for Gambarogno. 
* BTY: building type, one of: RES (residential), SCH (school), PUB (public) * BLDID: building id, a unique id associated to each building in the project * vXX: progressive version number This convention might be revised according to the specific needs arising during the project. ## MAKING DATA OPENLY ACCESSIBLE All data will be made available after anonymisation, where required and appropriate. Before publishing data sets, even after anonymisation, clearance must be issued by the data owners (typically the utilities and the municipalities). Data will be published using an open platform such as Zenodo, aiming for compliance with the OpenAIRE initiative. Restricted data will not be published online and will not be made available to the public. ## MAKING DATA INTEROPERABLE The interoperability of the collected data will be ensured by compliance with standard ontologies as much as possible; in particular, as stated in Section 2.1, with the DDI (Data Documentation Initiative _http://www.ddialliance.org/_ ) for socio-economic data where possible, and OGC's Observation and Measurements ( _http://www.opengeospatial.org/standards/om_ ) for sensor-generated data. ## INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES) Access to data will be Open Access, as it is published on Zenodo. Data will be available to the public as long as the Zenodo repository is available. Copies of the data will be retained by the partners for a maximum of three years after the project’s end. # ALLOCATION OF RESOURCES The cost for making enCOMPASS data FAIR can be ascribed to Task 8.4 "Open data and standards", to which a total of 7 months has been allocated. Part of these resources will thus be devoted to the preparation and organisation of data to make it FAIR compliant. The enCOMPASS project has designated Stelios Krinidis of CERTH as Data Manager, and each site has an individual data manager who reports to the project Data Manager. 
These persons are: * Greek pilot: Konstantinos Arvanitis (WVT) * German pilot: Felix Zösch (SHF) * Swiss pilot: Marco Bertocchi (SES) The costs for long-term preservation are dramatically reduced thanks to the availability of the Zenodo platform. # DATA SECURITY Sensitive data is transmitted over the Internet only through secure channels such as _ssh_ and _sftp_ . In general, the enCOMPASS project does not deal with data that can be directly traced to a single individual (anonymization). Certain types of data which the enCOMPASS users will generate in conjunction with the use of the platform (login ids, emails, passwords) will be transmitted only over https-secured channels. The enCOMPASS servers, where data are temporarily stored for processing and elaboration, are protected by state-of-the-art techniques; they are constantly monitored and regularly updated to limit the risk of intrusion. # ETHICAL ASPECTS Although enCOMPASS does not introduce any critical ethical issues or problems, several considerations typical of ICT and IoT applications and on-site trials shall be taken into account. The consortium is fully aware of these and has the necessary experience to address them seamlessly. The solutions proposed by enCOMPASS do not expose, use or analyse personal sensitive data for any purpose. In this respect, no ethical issues related to personal sensitive data are raised by the technologies to be employed in the pilot sites foreseen in Germany, Greece and Switzerland. However, the enCOMPASS consortium is fully aware of the privacy-related implications of the proposed solutions and respects the ethical rules and standards of H2020, including those reflected in the Charter of Fundamental Rights of the European Union. Generally speaking, ethical, social and data protection considerations are crucial and will be given all due attention. 
enCOMPASS will address any ethical and other privacy issues in **WP3** , and has allocated the specific **Task 3.2** to review the deployed solutions for privacy and security. Thus enCOMPASS will ensure the investigation, management and monitoring of ethical and privacy issues that could be relevant to its envisaged technological solution, and will establish close cooperation with the Ethics Helpdesk of the European Commission. The consortium is aware that a number of privacy and data protection issues could be raised by the activities (in WP3-WP5) to be performed within the scope of the project. The project involves carrying out data collection in all pilots in order to assess the effectiveness of the proposed solution. For this reason, human participants will be involved in certain aspects of the project and data will be collected. This will be done in full compliance with any European and national legislation and directives relevant to the country where the data collection takes place, in particular (international/European): * The Universal Declaration of Human Rights and Convention 108 for the Protection of Individuals with Regard to Automatic Processing of Personal Data. * Directive 95/46/EC and Directive 2002/58/EC of the European Parliament regarding privacy, the protection of personal data and the free movement of such data. * Core ethical principles reflected in the European Charter of Fundamental Human Rights, as well as any relevant EU standard in the fields of privacy and data protection. In addition, to further ensure that the fundamental human rights and privacy needs of participants are met, the privacy-preserving data collection activities within the enCOMPASS project will further comprise the writing of detailed ethical guidelines for the project (deliverable D10.1). In order to protect the privacy rights of participants (e.g. 
public building employees, students, visitors, etc.), a number of best practice principles will be followed. These include:

* No data will be collected without the explicit informed consent of the individuals under observation. This involves being open with participants about what they are involving themselves in and ensuring that they have fully agreed to the procedures/research being undertaken by giving their explicit consent.
* No data collected will be sold or used for any purposes other than the current project.
* A data minimisation policy will be adopted at all levels of the project and will be supervised by each Industrial Pilot Demonstration responsible. This will ensure that no data that is not strictly necessary for the completion of the current study will be collected.
* Any shadow (ancillary) personal data obtained during the course of the research will be immediately deleted. However, the plan is to minimize this kind of ancillary data as much as possible. Special attention will also be paid to complying with the Council of Europe’s Recommendation R(87)15 on the processing of personal data for police purposes, Art. 2: “The collection of data on individuals solely on the basis that they have a particular racial origin, particular religious convictions, sexual behaviour or political opinions or belong to particular movements or organisations which are not proscribed by law should be prohibited. The collection of data concerning these factors may only be carried out if absolutely necessary for the purposes of a particular inquiry.”
* Compensation, if and when provided, will correspond to a simple reimbursement for hours lost as a result of participating in the study; special attention will be paid to avoid any form of unfair inducement.
* If employees of partner organizations are to be recruited, specific measures will be in place to protect them from a breach of privacy/confidentiality and any potential discrimination; in particular, their names will not be made public and their participation will not be communicated to their managers.

More information on the ethical aspects of the data collected and utilized within the enCOMPASS project will be documented in deliverable D10.1.

# OTHER

The implementation and deployment of core components will be performed in Germany, Greece and Switzerland, under the leadership of each Pilot Site responsible (SHF, WVT and SES respectively). In the following, the consortium outlines the legislation for the countries involved in the pilots:

1. German Pilot Trials in schools, residential and public buildings in Haßfurt, Germany have to comply with the German "Federal Data Protection Act (BDSG)" and the responsible state data protection acts relating to the collection, processing and use of personal data: _http://www.gesetze-im-internet.de/bdsg_1990/index.html_
   * Federal regulatory authorities and ethical committees: German Federal Data Protection Authority (BfDI) _http://www.bfdi.bund.de/_
   * State regulatory authorities: Bavarian Data Protection Authority _https://www.datenschutz-bayern.de/_
2. The Greek Pilot in the National Documentation Centre in Athens and in residential and office buildings of the WATT+VOLT customer portfolio in Greece has to comply with Greek legislation "Law 2472/1997 (and its amendment by Law 3471/2006) of the Hellenic Parliament".
   * Regulatory authorities and ethical committees: Hellenic Data Protection Authority _http://www.dpa.gr/_
3.
Swiss Pilot Trials in Locarno have to comply with Swiss federal legislation “235.1 Legge federale sulla protezione dei dati (LPD) / Federal Act on Data Protection (FADP)” relating to the processing of personal data: _https://www.admin.ch/opc/en/classified-compilation/19920153/index.html_ and _https://www.admin.ch/opc/it/classified-compilation/19920153/201401010000/235.1.pdf_ * Regulatory authorities and ethical committees: Federal Council _https://www.admin.ch/gov/en/start.html_
# Introduction

As described in the H2020 guidelines, research funding organisations, as well as organisations undertaking publicly funded research, have an obligation to optimize the use of the funds they have been granted. Part of this optimization is that data sets resulting from publicly funded research must be made available to other researchers, either to verify the original results (an integral part of the proper scientific approach) or to build upon them. In order to achieve this high-level objective, a data management policy has to be implemented and thoroughly followed by the CPaaS.io consortium as a whole, even if, per se, not all CPaaS.io partners will be involved in all aspects of those policies/principles. The Data Management Plan (DMP) is a living document (with two formal versions of the same deliverable released in M6 and M30 respectively) that describes the data management policy (i.e. the management principles) and the collected and generated data sets. It covers all aspects introduced in the “Guidelines on Data Management in Horizon 2020”, which are:

1. Precise description of the collected and generated data (nature of data, related domain ontologies, standards and data formats used, ...)
2. Detail about various aspects of the data management (how it is stored, by whom, under which responsibility, how it is secured, how it is sustained and backed up)
3. Sharing principles (licensing, access methods, ...)
4. Detail about how privacy is maintained

In this second iteration of the Data Management Plan we provide the current and final status on data generation and management and some final statements.

## Delta with D7.2

This section highlights the updates brought to version 2 of the Data Management Plan:

* Enhanced User Experience: Section 2.1 has been extended to include the Sapporo Snow Festival event;
* 3 applications from Japan in Section 2;
* Final statements in Section 3 for EU partners and Data Management Plans for Japanese partners.
# CPaaS.io Research Data

This section introduces the different EU-side use cases as described in the CPaaS.io Description of Work document and the applications built upon them. It also describes the collected data (meaning the semantically annotated raw data with no extra added value) and the generated data (meaning the semantic value-added information built from the annotated raw data using various techniques such as analytics or reasoning). Part of the information described in this section can be found in a more complete form in CPaaS.io deliverable D2.1 [1]. The 3 use cases considered in CPaaS.io are:

* Managing Fun and Sport events, which derives into 3 different applications at three different locations (and is therefore based on 3 distinct data sets, in Amsterdam, Sapporo and Tokyo)
* Waterproof Amsterdam
* Yokosuka Emergency Medical

The 5 derived applications are:

* Enhanced User Experience
* Sapporo Visitor Experience
* Tokyo Public Transportation (case 1 and case 2)
* Waterproof Amsterdam
* Yokosuka Emergency Medical Care

## Data from Enhanced User Experience application

### Short description

The core idea of this application is to use IoT sensors and analytics to enhance people’s experience while visiting or participating in a fun or sports event. Wearables and mobile phones are used as sensors in order to learn about the activities of event participants. Event participants may include members of the audience, but also performing artists or athletes. For instance, AGT has previously equipped referees and cheerleaders in basketball matches with wearable sensors and created content based on the analysed data for consumption on site and for distribution via TV broadcasting, social media and other digital distribution channels 1 . Furthermore, the application uses sensors deployed at the venue to measure and analyse fan behaviour and engagement.
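To give a feel for the scale of such wearable sensor streams, a single reading can be sketched as a small JSON message. All field names below are illustrative assumptions; the DMP only specifies that biometric updates arrive as JSON, at up to one update every 200 ms and roughly 1 KB per update.

```python
import json

# Hypothetical wearable payload (field names are illustrative only).
reading = {
    "sensor_id": "wristband-042",
    "timestamp": "2017-07-09T14:03:22.200Z",
    "heart_rate_bpm": 128,
    "breathing_rate_bpm": 24,
    "skin_temperature_c": 33.1,
    "galvanic_skin_response_us": 4.7,
}
payload = json.dumps(reading).encode("utf-8")

# At one update every 200 ms (5 per second), the per-person bandwidth
# for this stream stays well below 5 KB/s.
updates_per_second = 5
bandwidth_bytes_per_s = updates_per_second * len(payload)
```

With roughly six sensors per person (as listed for biometric data below), total throughput per participant remains modest, which is why the platform can ingest it continuously.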
**Data collected for the Enhanced User Experience application (Color Run and Sapporo Snow Festival)**

Table 1 summarizes the data from the Enhanced User Experience application, covering both events: the Color Run and the Sapporo Snow Festival.

**Table 1: Data collected for the Managing Fun and Sport events scenario**

<table> <tr> <th> **Biometric data** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> This data set includes a range of measurements from wearables such as wristbands, chest straps and smart sportswear that provide biometric measurements including heart rate, breathing rate and galvanic skin response, burned calories measurements and skin temperature. </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes </td> </tr> <tr> <td> **Hosting** </td> <td> External, CPaaS.io </td> </tr> <tr> <td> **Data Provider** </td> <td> AGT, YRP </td> </tr> <tr> <td> **Format** </td> <td> JSON, RDF </td> </tr> <tr> <td> **Update Frequency** </td> <td> up to every 200ms </td> </tr> <tr> <td> **Update Size** </td> <td> ~1 KB </td> </tr> <tr> <td> **Data Source** </td> <td> Sensor </td> </tr> <tr> <td> **Sensor** </td> <td> Wristband, chest strap, smart shirts </td> </tr> <tr> <td> **Number of Sensors per person** </td> <td> ~6 </td> </tr> </table>

<table> <tr> <th> **GPS Traces** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> GPS traces include positional data including altitude information as delivered by GPS devices.
</td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes </td> </tr> <tr> <td> **Hosting** </td> <td> External, CPaaS.io </td> </tr> <tr> <td> **Data Provider** </td> <td> AGT, YRP </td> </tr> <tr> <td> **Format** </td> <td> common GPS formats (GPX, KML, CSV, NMEA) </td> </tr> <tr> <td> **Update Frequency** </td> <td> Up to 1s </td> </tr> <tr> <td> **Update Size** </td> <td> < 1KB </td> </tr> <tr> <td> **Data Source** </td> <td> Sensor </td> </tr> <tr> <td> **Sensor** </td> <td> GPS sensor in wristbands and mobile phones </td> </tr> <tr> <td> **Number of Sensors per person** </td> <td> 1-2 </td> </tr> </table> <table> <tr> <th> **Motion Data** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Motion data that measures hand and body movements based on accelerometer and gyroscope sensors </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes </td> </tr> <tr> <td> **Hosting** </td> <td> External, CPaaS.io </td> </tr> <tr> <td> **Data Provider** </td> <td> AGT, YRP </td> </tr> <tr> <td> **Format** </td> <td> JSON </td> </tr> <tr> <td> **Update Frequency** </td> <td> Up to every 16 ms </td> </tr> <tr> <td> **Update Size** </td> <td> ~ 200 byte per sensor reading </td> </tr> <tr> <td> **Data Source** </td> <td> Sensors </td> </tr> <tr> <td> **Sensor** </td> <td> Accelerometer and gyroscope sensors of mobile phones, wristband and other wearables </td> </tr> <tr> <td> **Number of Sensors per person** </td> <td> 2-3 </td> </tr> </table> <table> <tr> <th> **Step Counts** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> This data set contains step counts. 
</td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes </td> </tr> <tr> <td> **Hosting** </td> <td> External, CPaaS.io </td> </tr> <tr> <td> **Data Provider** </td> <td> AGT, YRP </td> </tr> <tr> <td> **Format** </td> <td> JSON </td> </tr> <tr> <td> **Update Frequency** </td> <td> Up to 1Hz </td> </tr> <tr> <td> **Update Size** </td> <td> ~ 200 byte per sensor reading </td> </tr> <tr> <td> **Data Source** </td> <td> Sensors </td> </tr> <tr> <td> **Sensor** </td> <td> Step count measurement of wristband </td> </tr> <tr> <td> **Number of Sensors per person** </td> <td> 1-2 </td> </tr> </table>

<table> <tr> <th> **Environmental Data** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> This data set contains environmental data such as light intensity and barometric pressure. The data is primarily collected from wearable sensors. </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes </td> </tr> <tr> <td> **Hosting** </td> <td> External, CPaaS.io </td> </tr> <tr> <td> **Data Provider** </td> <td> AGT, YRP </td> </tr> <tr> <td> **Format** </td> <td> JSON </td> </tr> <tr> <td> **Update Frequency** </td> <td> Up to 1Hz </td> </tr> <tr> <td> **Update Size** </td> <td> ~ 200 byte per sensor reading </td> </tr> <tr> <td> **Data Source** </td> <td> Sensors </td> </tr> <tr> <td> **Sensor** </td> <td> Sensors in wristband </td> </tr> <tr> <td> **Number of Sensors per person** </td> <td> 1-2 </td> </tr> </table>

<table> <tr> <th> **Mobile Camera videos** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> This data set contains videos recorded by mobile cameras worn by Color Run participants.
</td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes </td> </tr> <tr> <td> **Hosting** </td> <td> External, CPaaS.io </td> </tr> <tr> <td> **Data Provider** </td> <td> AGT, YRP </td> </tr> <tr> <td> **Format** </td> <td> MP4 </td> </tr> <tr> <td> **Update Frequency** </td> <td> 30fps </td> </tr> <tr> <td> **Update Size** </td> <td> (~45kbps) </td> </tr> <tr> <td> **Data Source** </td> <td> Mobile Camera </td> </tr> <tr> <td> **Sensor** </td> <td> GoPro Hero4 Camera </td> </tr> <tr> <td> **Number of Sensors per person** </td> <td> 1 </td> </tr> </table>

### Data generated by the Enhanced User Experience application (Color Run and Sapporo Snow Festival)

The Enhanced User Experience application generates or uses the following types of data:

1. **User Activity.** Based mainly on motion data and therefore private information. A user activity is always linked to a user and is therefore personal information. Re-use of the data is possible within the boundaries defined in the consent forms used to collect the data.
2. **Dominant Colors.** Provides information about the prevailing colour in a video feed and is used for detecting colour stations in the Color Run. The output is a colour value, duration and location. The generated data can be provided in anonymised form, but further examination is required to determine to what degree it can be opened.
3. **Clothing Analysis.** Clothing Analysis uses deep learning techniques to determine metrics based on clothing styles derived from images. By nature these metrics are linked to a user and therefore constitute private data that can only be reused within the boundaries of the consent forms used to collect the data.
4. **Running Type.** This classifies whether the participant ran more like a fun or an ambitious runner during an event. This is personal data.
5. **Dance Energy.** Provides a measure of the accumulated energy over time while dancing, per user. This is personal data.
6.
**Emotions.** Refers to people’s emotions during the event. This is personal data, but aggregated data is also used.
7. **Tube Rider Classification.** Classifies the tube ride based on how intensely the tube rotated. This is personal data.
8. **Event Sentiment.** This is an aggregated measure of sentiments per event derived from Tweets.
9. **Ride Bumpiness.** Provides a metric for the bumpiness of a ride. This is personal data.
10. **Throw Intensity.** This provides a measure for the intensity of throwing a ball, such as a snow ball. This is personal data.

**Table 2: Data generated for the Enhanced User Experience application**

<table> <tr> <th> **Types of generated data** </th> <th> **Based on…** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> User Activity </td> <td> Motion Data </td> <td> N </td> <td> reusable, but not open </td> </tr> <tr> <td> Dominant Colour </td> <td> Mobile Camera Videos </td> <td> Y </td> <td> Reusable, but not fully open </td> </tr> <tr> <td> Clothing Analysis </td> <td> Mobile Camera Videos, Public Images </td> <td> N </td> <td> Reusable, but not open </td> </tr> <tr> <td> Running Type </td> <td> Motion Data </td> <td> N </td> <td> N </td> </tr> <tr> <td> Dance Energy </td> <td> Motion Data </td> <td> N </td> <td> N </td> </tr> <tr> <td> Emotions </td> <td> Video Data </td> <td> N </td> <td> N </td> </tr> <tr> <td> Tube Rider Classification </td> <td> Motion Data </td> <td> N </td> <td> N </td> </tr> <tr> <td> Event Sentiment </td> <td> Tweets </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Ride Bumpiness </td> <td> Motion Data </td> <td> N </td> <td> N </td> </tr> <tr> <td> Throw Intensity </td> <td> Motion Data </td> <td> N </td> <td> N </td> </tr> </table>

## Data from Waterproof Amsterdam

### Short description

Extreme rainfall and periods of continued drought are occurring more and more often in urban areas.
Because of the rainfall, peak pressure on a municipality’s sewerage infrastructure needs to be load-balanced to prevent flooding of streets and basements. With drought, smart water management is required to allow for optimal availability of water, both underground and above ground. The Things Network develops the Amsterdam Waterproof application, a software tool creating a network of smart, connected rain buffers (rain barrels, retention rooftops or other buffers) that can be both monitored and controlled centrally by the water management authority. Third-party hardware providers will connect their buffers to this tool for uplink and downlink data transmission. External data such as weather data and sewerage capacity are added in order to calculate the optimal filling degree of each buffer and so operate a pump or valve in the device. Waternet is the local water management company that will be the main user of the application.

### Data collected for the Waterproof Amsterdam application

The section below lists the data sets used for the Waternet application. They consist of device data (rain buffer information), public weather data and government data about physical infrastructure. Device data will be stored in the application and could be stored in CPaaS.io, especially as it contains private data such as the name and address of the device owner. At this stage, however, we cannot determine whether this private data will be shared by the vendors of the devices, who are also the ones maintaining them. They are the only actor who has direct contact with the end user and/or owner of the device. (Historical) weather data is publicly available on the web, so there is no need to store this data. It will be provided by a subscription data feed from the web. The third data set is already owned and stored by Waternet, so there is also no need for storage capabilities.
**Table 3: Data collected for the Waterproof Amsterdam scenario**

<table> <tr> <th> **Weather data** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Upcoming weather displaying periods of heavy rain or drought </td> </tr> <tr> <td> **OGD or private data** </td> <td> OGD </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Platform </td> </tr> <tr> <td> **Data Provider** </td> <td> KNMI – Dutch weather forecast agency </td> </tr> <tr> <td> **Format** </td> <td> HDF5/JSON </td> </tr> <tr> <td> **Update Frequency** </td> <td> Hourly </td> </tr> <tr> <td> **Update Size** </td> <td> 20kb </td> </tr> <tr> <td> **Data Source** </td> <td> Sensors </td> </tr> <tr> <td> **Sensor** </td> <td> Water sensor </td> </tr> <tr> <td> **Number of Sensors** </td> <td> unknown </td> </tr> </table>

<table> <tr> <th> **Rain buffer information** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Specific information about each rain buffer (rooftop, barrel, underground storage) * Buffer size and type * Filling degree * Temperature * Location * Battery status * Pump/valve capacity * Active pump/valve hours * Owner name, address, contact information </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> Yes – anonymised and not open </td> </tr> <tr> <td> **Hosting** </td> <td> Platform </td> </tr> <tr> <td> **Data Provider** </td> <td> Rain buffer hardware provider </td> </tr> <tr> <td> **Format** </td> <td> JSON </td> </tr> <tr> <td> **Update Frequency** </td> <td> Hourly </td> </tr> <tr> <td> **Update Size** </td> <td> 10b </td> </tr> <tr> <td> **Data Source** </td> <td> Sensors </td> </tr> <tr> <td> **Sensor** </td> <td> Water sensor or infrared sensor </td> </tr> <tr> <td> **Number of Sensors** </td> <td> 1 per buffer </td> </tr> </table>

<table> <tr> <th> **Sewerage processing capacity** </th> </tr> <tr> <td> **Detailed Description** </td> <td>
Geographical data on water infrastructure depicting the remaining capacity of the sewerage </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> External </td> </tr> <tr> <td> **Data Provider** </td> <td> Waternet </td> </tr> <tr> <td> **Format** </td> <td> XML </td> </tr> <tr> <td> **Update Frequency** </td> <td> Hourly </td> </tr> <tr> <td> **Update Size** </td> <td> 1kb </td> </tr> <tr> <td> **Data Source** </td> <td> Sensors, maps </td> </tr> <tr> <td> **Sensor** </td> <td> Water sensor </td> </tr> <tr> <td> **Number of Sensors** </td> <td> unknown </td> </tr> </table>

### Data generated by the Waterproof Amsterdam application

The Waterproof Amsterdam application generates different types of data:

1. Open/close command per buffer. This is the most important data generated, as it determines when an actuator inside a buffer should be operated (valve open or pump on). Based on all available data sources, an algorithm will determine which conditions are required to perform a certain command. The command can be open or close, or a value in between, as different water discharge mechanisms have different capacities (i.e. a percentage of full capacity).
2. Aggregated remaining buffer capacity per area. Waternet, as the primary user of the application, needs to monitor the total remaining capacity to buffer rain water, to understand whether there will be enough capacity to capture rain water in moments of heavy rainfall.
3. Aggregated litres of rain water processed per area. This metric is used to show the impact the micro-buffer network has generated over time. These insights may be used for PR and marketing purposes to stimulate individuals and companies to also buy and install such rain buffers.
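The decision logic of item 1 and the aggregation of item 2 can be sketched as follows. The thresholds, field names and scaling rule are invented for illustration only; the actual algorithm is developed within the project and combines weather forecasts, buffer state and sewerage capacity as described above.

```python
# Hypothetical sketch of the open/close decision for a single rain buffer.
def discharge_command(filling_degree: float, forecast_rain_mm: float,
                      sewerage_capacity: float) -> int:
    """Return a discharge level between 0 (closed) and 100 (fully open)."""
    if forecast_rain_mm > 10 and sewerage_capacity > 0.5:
        # Heavy rain expected and the sewer can absorb water now:
        # empty the buffer pre-emptively to create storage headroom.
        return 100
    if filling_degree > 0.9:
        # Nearly full: discharge partially, scaled by remaining sewer capacity.
        return int(50 * sewerage_capacity)
    return 0  # otherwise keep the water for dry periods

# Aggregated remaining capacity per area (cf. the open data set in Table 4).
def remaining_capacity_litres(buffers) -> float:
    return sum(b["size_l"] * (1 - b["filling_degree"]) for b in buffers)
```

Expressing the command as a percentage rather than a boolean matches the observation above that different discharge mechanisms have different capacities.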
The open data in the table below can be reused to perform analytics on historical data, and could be opened through a public (graphical or application) interface for third parties to interact with.

**Table 4: Data generated for the Waterproof Amsterdam application**

<table> <tr> <th> **Types of generated data** </th> <th> **Based on…** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Open/close command per buffer </td> <td> All data sets </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Aggregated remaining buffer capacity (street, area, city level) </td> <td> Individual rain buffers filling degree and location, map </td> <td> Y </td> <td> Y </td> </tr> <tr> <td> Aggregated litres processed by the buffers </td> <td> Individual rain buffer pump hours run and pump capacity, map </td> <td> Y </td> <td> Y </td> </tr> </table>

## Data from Sapporo Visitor Experience application

### Short description

The Sapporo Visitor Experience application is part of a set of applications that will use the CPaaS.io platform to manage fun and sports events. This application focuses on tourist information services, including event information and public transportation information.

### Data collected for the Sapporo Visitor Experience application

We established the Sapporo Open Data Association, whose members are data providers such as Sapporo City and companies and organizations in tourism, sports and transportation. This association collects data from those members, transforms it to Open Data and publishes it on the Sapporo Open Data Portal.
<table> <tr> <th> **Sightseeing Spot** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Comments, pictures and locations of popular sightseeing spots </td> </tr> <tr> <td> **OGD or private data** </td> <td> OGD </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Sapporo City </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Yearly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> Web </td> </tr> <tr> <td> **Sensor** </td> <td> Ucode marker </td> </tr> <tr> <td> **Number of Sensors** </td> <td> 11 </td> </tr> </table> <table> <tr> <th> **Meal** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Restaurant name, location, opening hours, menu, service </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Hotel, Sapporo Station Shopping Mall </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Monthly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> Web </td> </tr> <tr> <td> **Sensor** </td> <td> Ucode marker </td> </tr> <tr> <td> **Number of Sensors** </td> <td> 11 </td> </tr> </table> <table> <tr> <th> **Hotel** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Hotel name, location, number of rooms, access </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Hotels </td> 
</tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Monthly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> Paper </td> </tr> <tr> <td> **Sensor** </td> <td> N/A </td> </tr> <tr> <td> **Number of Sensors** </td> <td> N/A </td> </tr> </table> <table> <tr> <th> **Other Facilities** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Location of ATM, Toilet, Rental Locker, etc. </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Sapporo Station Shopping Mall </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Monthly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> Web </td> </tr> <tr> <td> **Sensor** </td> <td> Ucode marker </td> </tr> <tr> <td> **Number of Sensors** </td> <td> 11 </td> </tr> </table> <table> <tr> <th> **Snow Festival** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Event information in Sapporo Snow Festival </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Operating Committee of Sapporo Snow Festival </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Yearly </td> </tr> <tr> <td> **Update Size** </td> <td> </td> </tr> <tr> <td> **Data Source** </td> <td> Web </td> </tr> <tr> <td> **Sensor** </td> <td> Ucode marker </td> </tr> <tr> <td> **Number of Sensors** </td> <td> 11 </td> </tr> </table> <table> <tr> <th> **Stadium Data** </th> 
<th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Location, facilities and drawing of stadiums </td> </tr> <tr> <td> **OGD or private data** </td> <td> OGD </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Sapporo City </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Yearly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> Web, Paper </td> </tr> <tr> <td> **Sensor** </td> <td> N/A </td> </tr> <tr> <td> **Number of Sensors** </td> <td> N/A </td> </tr> </table> <table> <tr> <th> **Train & Bus Data ** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Time table, Operating status, Line, Real time location, bus stop, etc. </td> </tr> <tr> <td> **OGD or private data** </td> <td> OGD & private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Sapporo City, Hokkaido Chuo Bus </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Monthly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> File </td> </tr> <tr> <td> **Sensor** </td> <td> N/A </td> </tr> <tr> <td> **Number of Sensors** </td> <td> N/A </td> </tr> </table> <table> <tr> <th> **Express Way Data** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Information of service area </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> NEXCO East Japan </td> </tr> <tr> <td> **Format** </td> <td> 
PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Yearly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> Web </td> </tr> <tr> <td> **Sensor** </td> <td> N/A </td> </tr> <tr> <td> **Number of Sensors** </td> <td> N/A </td> </tr> </table>

<table> <tr> <th> **Underground Map Data** </th> <th> </th> </tr> <tr> <td> **Detailed Description** </td> <td> Drawing of underground map </td> </tr> <tr> <td> **OGD or private data** </td> <td> Private </td> </tr> <tr> <td> **Personal Data** </td> <td> No </td> </tr> <tr> <td> **Hosting** </td> <td> Sapporo Open Data Association </td> </tr> <tr> <td> **Data Provider** </td> <td> Sapporo Station Shopping Mall </td> </tr> <tr> <td> **Format** </td> <td> PDF, HTML, CSV, JPEG </td> </tr> <tr> <td> **Update Frequency** </td> <td> Yearly </td> </tr> <tr> <td> **Update Size** </td> <td> N/A </td> </tr> <tr> <td> **Data Source** </td> <td> File </td> </tr> <tr> <td> **Sensor** </td> <td> Ucode marker </td> </tr> <tr> <td> **Number of Sensors** </td> <td> 11 </td> </tr> </table>

**Note:** Japanese applications, including Sapporo Visitor Experience, are based on the u2 architecture (ucode architecture Version 2). The data collected from a hardware device is only a ucode, a non-semantic 128-bit id number. Only after a process called ucode resolution is performed by an application is meaningful data provided to the application in a context-sensitive manner. Ucode itself has no notion of open vs proprietary / anonymous vs public.

**Table 5: Data collected for the Sapporo Visitor Experience application**

<table> <tr> <th> **Devices** </th> <th> **Types of data** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> ucode marker </td> <td> 128 bit unique identifier, ucode.
</td> <td> N </td> <td> n/a </td> </tr> </table> ### Data generated by the Sapporo Visitor Experience application **Note:** The generated data, such as guidance, builds on the open data map. However, guidance based on the user's current location is private data and should be anonymised. **Table 6: Data generated for the Sapporo Visitor Experience application** <table> <tr> <th> **Types of generated data** </th> <th> **Based on…** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Sightseeing spot guidance </td> <td> Location information (from ucode marker and GPS, ditto in rows below) and Sightseeing Spot data. </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Meal Guidance </td> <td> Location information and Meal (restaurant, etc.) data. </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Hotel Guidance </td> <td> Location information and hotel data. </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Facility Guidance (ATM, toilet, etc.) </td> <td> Location information and facility data </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Snow Festival event guidance </td> <td> Location information and Snow Festival data </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Stadium Guidance </td> <td> Location information and Stadium data </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Train and Bus Guidance </td> <td> Location information and Train and Bus data. </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Expressway Guidance </td> <td> Location information and Expressway data </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Underground Route Guidance </td> <td> Location information from ucode marker (GPS is not available underground) and Underground Map data. 
</td> <td> Y </td> <td> N </td> </tr> </table> ## Data from Tokyo Public Transportation: (a) Open Data Challenge contest ### Short description One prominent use case of Tokyo Public Transportation open data is the Open Data Challenge for Public Transportation in Tokyo, a contest that uses data from public transportation operators in the Tokyo area. One contest has already been completed and the second one is under way, running from July 17, 2018 to March 31, 2019. URL: _https://tokyochallenge.odpt.org/en/index.html_ A third one has already been announced to start on January 16, 2019. The contest is held by the Association for Open Data of Public Transportation (ODPT). See URL: _http://www.odpt.org/_ It is chaired by Dr. Ken Sakamura, the director of YRP UNL, the Japanese coordinator of CPaaS.io. YRP and MSJ, both CPaaS.io partners, are members of ODPT. The open data provision framework of the Open Data Challenge for Public Transportation in Tokyo works as follows. The contest makes available many types of data from many raw data sources of public transportation operators in the Tokyo area. However, these sources are all in proprietary formats covered by NDAs, so CPaaS.io cannot describe them in a public document. From these diverse data sources, the contest developer site uses program modules that understand the individual formats and produce a unified data format (JSON-LD) for all types of data, so that all the information is provided in this single format. The participants in the contest therefore only need to know the API for accessing this JSON-LD data and the semantics of the JSON-LD data. They do not have to deal with the idiosyncrasies of the various data formats and network protocols used by the public transportation operators in the greater Tokyo metropolitan area. Many types of data are available in the contest, including dynamic data, but the details change from time to time and are hard to keep track of. 
The reader is therefore referred to the contest developer site. URL: https://ckan-tokyochallenge.odpt.org/dataset As of December 28, 2018, 152 data sets are made available from the contest site (https://ckan-tokyochallenge.odpt.org/dataset). The semantics of the JSON-LD data sets is explained in a document available at the developer site of the contest. (It is readable only after one registers at the website; see the contest site for details.) The following URL points to the generic API and semantics initially used for the contest framework (in Japanese). URL: https://docs.odpt.org/ However, the developer site has an English translation of the more focused and updated document, so the interested reader is encouraged to register for the contest (there is no obligation to submit a final contest work). ### Data collected for the Open Data Challenge Contest For the reasons mentioned above, we cannot state how and by means of which devices the data is collected. Instead, a generic description of the data made available is listed below (quoted from the contest site). **Table 7: Data Collected for the two Tokyo Public Transportation applications** <table> <tr> <th> **Devices** </th> <th> **Types of data** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> n/a </td> <td> n/a </td> <td> n/a </td> <td> Not in the original format. </td> </tr> </table> * List: Published data types. 
* Railway
  * Tokyo Metro: static data, including train/station timetable, and dynamic data, such as train location information and status information
  * Bureau of Transportation, Tokyo Metropolitan Government: static data, including train/station timetable of Toei Transportation, Tokyo Sakura Tram (Toden Arakawa Line), and Nippori-Toneri Liner, and dynamic data, such as status information and train location information of Tokyo Sakura Tram (Toden Arakawa Line)
  * JR East: static data, including train/station timetable, and dynamic data, such as train location information and status information, of multiple railway lines in Greater Tokyo
  * Odakyu: static data and dynamic data, such as status information
  * Keio: static data, including train/station timetable, and dynamic data, such as status information
  * Keisei: static data, including station timetable, and dynamic data, such as status information
  * Keikyu: static data, including station timetable, and dynamic data, such as status information
  * Seibu: static data, including station timetable, and dynamic data, such as status information
  * Tokyu: static data and dynamic data, such as status information
  * TWR Rinkai Line: static data, including train/station timetable, and dynamic data, such as status information
  * Tobu: static data, including station timetable
  * Yurikamome: static data, including station timetable
* Bus
  * Bureau of Transportation, Tokyo Metropolitan Government: static data, including timetable of Toei Bus, and bus location information as dynamic data
  * Odakyu Bus: data, including timetable
  * Kanto Bus: data, including timetable
  * Keio Bus: static data, including timetable
  * Kokusai Kogyo: static data, including timetable
  * JR Bus Kanto: static data, including timetable
  * Seibu Bus: static data, including timetable
  * Tokyu Bus: static data, including timetable
  * Tobu Bus: static data, including timetable
  * Nishi Tokyo Bus: static data, including timetable
* Airline
  * All Nippon Airways: static data, including flight timetable, and dynamic data, such as real-time arrival/departure information
  * Tokyo International Air Terminal: static data, including flight timetable, and dynamic data, such as real-time arrival/departure information
  * Narita International Airport: data, such as real-time arrival/departure information
  * Japan Airport Terminal: flight timetable, and dynamic data, such as real-time arrival/departure information
  * Japan Airlines: static data, including flight timetable, and dynamic data, such as real-time arrival/departure information
* Intra-station map and facility information of train stations
  * The intra-station map and the facility information of the train stations around Shinjuku station and Tokyo station are being prepared by the Ministry of Land, Infrastructure, Transport and Tourism in anticipation of the Tokyo Olympic and Paralympic Games in 2020. These data are made available in this challenge.

### Data generated by Open Data Challenge Contest As mentioned above, from the diverse data sources of the many transportation operators, the contest developer site uses program modules that understand the individual formats and produce a unified data format (JSON-LD) for all types of data, so that all the information is provided in this single format. The generated data is stored in a CKAN website. From the data in the CKAN website, many apps create their own data (guidance, visualization, etc.), but it is beyond the scope of this document to characterise the existing contest apps and their generated data. **Table 8: Data generated for the Open Data Challenge Contest** <table> <tr> <th> **Types of generated data** </th> <th> **Based on…** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Many in JSON-LD format </td> <td> Many </td> <td> Varies </td> <td> Yes, in JSON-LD format to contestants. 
</td> </tr> </table> ## Data from Tokyo Public Transportation (b): Tokyo Management of Service Vehicles ### Short description Tokyo Management of Service Vehicles provides a mobility tracking platform infrastructure that enables the management of service vehicles through monitoring, planning, coordination, and analysis of things with mobility, such as service vehicles. ### Data collected for the Tokyo Management of Service Vehicles application 1. Timetables and real-time traffic information of public transportation such as railways, buses, and airplanes. The data are owned by transportation operators. For public transportation they are public; for public service-related transportation it depends on the service. 2. Geographical data for routes and the locations of railway stations, bus stops, airports, etc. The data are owned by transportation operators. 3. Origination and destination data, such as maps and facility lists. The data are owned by transportation operators. Some are public, some (such as origination places) are company-confidential. 4. Driver information, owned by the transportation companies. It needs privacy protection. **Table 9: Data Collected for the Tokyo Management of Service Vehicles (* not yet defined)** <table> <tr> <th> **Devices** </th> <th> **Types of data** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Smartphone </td> <td> Location, time </td> <td> N </td> <td> Y </td> </tr> <tr> <td> Smartphone </td> <td> Device-id </td> <td> Y </td> <td> N </td> </tr> <tr> <td> Smartphone </td> <td> Image </td> <td> N </td> <td> N </td> </tr> <tr> <td> * </td> <td> Route </td> <td> N </td> <td> N </td> </tr> <tr> <td> * </td> <td> Weather </td> <td> N </td> <td> Y </td> </tr> <tr> <td> * </td> <td> Road information </td> <td> N </td> <td> Y </td> </tr> </table> ### Data generated by the Tokyo Management of Service Vehicles application Route logs are generated from mobility tracking data. The data is owned by transportation providers. 
It is stored inside the transportation providers' services on top of CPaaS.io. **Table 10: Data generated for the Tokyo Management of Service Vehicles application (* Not yet defined)** <table> <tr> <th> **Types of generated data** </th> <th> **Based on…** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Route logs </td> <td> Time-series-mobility data </td> <td> N </td> <td> * </td> </tr> </table> ## Data from Yokosuka Emergency Medical Care application ### Short description The Yokosuka Emergency Medical Care application provides a platform to improve the quality and efficiency of sharing medical information about the condition of a sick person and to reduce the time to start initial treatment after ambulance arrival. ### Data collected for the Yokosuka Emergency Medical Care application The Yokosuka Emergency Medical Care application collects IDs and location data from a smartphone in the ambulance. This application also collects the video image from camera(s) in the ambulances, but this is not stored. This is summarised in the table below: **Table 11: Data collected for the Yokosuka Emergency Medical Care application** <table> <tr> <th> **Devices** </th> <th> **Types of data** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Smartphone </td> <td> Location, time </td> <td> N </td> <td> Y </td> </tr> <tr> <td> Smartphone </td> <td> Device-id </td> <td> N </td> <td> N </td> </tr> <tr> <td> camera </td> <td> video image in ambulance (not stored) </td> <td> N </td> <td> N </td> </tr> </table> ### Data generated by the Yokosuka Emergency Medical Care application The Yokosuka Emergency Medical Care application generates a location map of ambulances. 
This is summarised in the table below: **Table 12: Data generated for the Yokosuka Emergency Medical Care application** <table> <tr> <th> **Types of generated data** </th> <th> **Based on…** </th> <th> **Anonymised** **Y/N** </th> <th> **Open** **Y/N** </th> </tr> <tr> <td> Location map of ambulances </td> <td> location and time from smartphone in ambulances </td> <td> N </td> <td> N </td> </tr> </table> # CPaaS.io Research Data management plan The CPaaS.io project follows the principle that research data will be handled and managed by the organisations/institutions that collect or generate the research data. The CPaaS.io project comprises a number of partners that are directly involved in either: * Producing the actual data during the trials, or * Developing tools and enablers (e.g. analytics, reasoners, etc.) that are needed as core elements in the CPaaS.io system architecture, or * Elaborating upon the produced data (using the aforementioned enablers) in order to produce new value-added knowledge. The individual roles and duties of such partners and the research data management plans that are in place in the organisations taking part in CPaaS.io are described in the following sub-sections. ## AGT International (AGT) ### Data collection (from sensors) The data collected by AGT has been described in Section 2.1 and is used for generating the data described in Table 2 and for developing the Enhanced User Experience application. As described in D2.2, the collected data is enriched with additional metadata. ### Data generation The data generated by AGT has been described in Table 2 and is used in the Enhanced User Experience application. ### Data Management We have implemented appropriate technical and organizational measures to ensure generated data is protected from unauthorized or unlawful processing, accidental loss, destruction or damage. 
We review our information collection, storage and processing practices regularly, including physical security measures, to guard against unauthorized access to our systems. We restrict access to generated data to only those employees, contractors and agents who strictly need access to this information, and who are subject to strict contractual confidentiality obligations. _**Update at the end of the Project:** _ In cases where consent forms and legislation allow, the data will be maintained beyond the project, mainly for presenting the results of the project. ## University of Surrey (UoS) ICS at the University of Surrey is involved neither in the production of raw data nor in the exploitation or generation of higher-level information from it. However, UoS is focussing on architecture work, where particular attention is paid to ensuring that (1) all privacy-related requirements are thoroughly taken into account and (2) an important part of the data is publicly available following the project's Open Data policy. In this respect, UoS aims at providing a bridge between CPaaS.io and another FIRE project called FIESTA-IoT, two projects in which UoS is actively involved. UoS will in particular aim at involving CPaaS.io either in the 2nd Open Call of FIESTA-IoT or as a fellow contributor to that project via a cooperation agreement, to be discussed between the two projects after both POs have been consulted on the matter. In both cases, CPaaS.io could play two non-exclusive, distinct roles: * Data provider: in this role, the CPaaS.io project would inject its data or part of its data (either raw data or inferred data) into FIESTA-IoT so that so-called experimenters can make use of it using the FIESTA-IoT enablers; or * Experimenter: in this role, CPaaS.io could reuse additional data sets produced by the FIESTA-IoT collaborators for testing its own new algorithms (e.g. analytics) and techniques. 
**Data collection (from sensors)** UoS does not participate in any data collection. **Data generation** UoS does not generate any new data from the project data sets. ### Data Management UoS does not manage any gathered or generated data. _**Update at the end of the Project:** _ The foreseen collaboration between CPaaS.io and FIESTA-IoT could not take place due to technical issues. It will be discussed in early 2019 whether the data collected internally as part of our smartBuilding test-bed can be ingested into the CPaaS.io platform. That data would then be open to students and used for machine learning experiments. ## Bern University of Applied Sciences (BFH) The BFH is not directly involved in the implementation of the envisaged use cases. Its main research focus is on data management concepts, in particular the usage of Linked Data and Open Government Data, data quality annotations, the application of MyData approaches, and the validation of the use cases. Hence it is not collecting, generating or storing any data. However, as part of its exploitation, validation and knowledge transfer activities, BFH is planning to connect some sensors via the LoRa testbed network that another institute (Institute for Energy and Mobility Research in Biel) is currently setting up. What data will be collected and for what purposes exactly will be defined at a later stage; a related data management plan will be drawn up before any data collection starts. ### Data collection BFH is not collecting any data for the main use cases of CPaaS.io. It may collect and make available some sensor data through the LoRa network at BFH for testing and validation purposes; details will be determined at a later stage. ### Data generation BFH is not generating any data for the main use cases of CPaaS.io. 
It may link public data sources (e.g., from the Swiss Open Government Data portal at _www.opendata.swiss_ ) with the sensor data collected through the LoRa network at BFH for testing and validation purposes; potential use cases will be determined at a later stage. ### Data Management BFH is not managing any data for the main use cases of CPaaS.io. Data collected and generated for testing and validation purposes through the LoRa network at BFH will likely be made available publicly, in the spirit of open data research, unless the data could allow inferring any information about individuals. Details are to be determined at a later stage. _**Update at the end of the Project:** _ BFH used a single TTN sensor to generate some test data. This data, however, is not being persisted beyond the end of the project. ## OdinS OdinS, as the partner involved in the security and privacy aspects, will check and support that data access and sharing activities are implemented in compliance with the privacy and data collection rules and regulations as applied nationally and in the EU, as well as with the H2020 rules. Concerning the results of the project, these will become publicly available based on the IPRs as described in the Consortium Agreement. Due to the nature of the data involved, some of the results that will be generated by each project phase will be restricted to authorized users, while other results will be publicly available. ### Data collection (from sensors) OdinS will not be involved in the collection of data from sensors, working exclusively on the architectural aspects of data collection and its consequences for the security and privacy components. 
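As an illustration of the kind of policy-based access control OdinS supports, the following is a minimal sketch. All names and fields below are hypothetical, not taken from the actual OdinS or CPaaS.io components: a policy is associated with a data item, and a request is granted only if the requester's role and purpose match that policy.

```python
# Hypothetical sketch of policy-based access control; the policy fields and
# role names are illustrative only, not the actual CPaaS.io component API.

def is_access_allowed(policy, request):
    """Grant access only if the requester's role and purpose both
    match the policy associated with the data item."""
    return (request["role"] in policy["allowed_roles"]
            and request["purpose"] in policy["allowed_purposes"])

# Policy attached to a sensitive raw sensor data set
policy = {
    "allowed_roles": {"data-analyst", "platform-operator"},
    "allowed_purposes": {"analytics"},
}

print(is_access_allowed(policy, {"role": "data-analyst", "purpose": "analytics"}))   # True
print(is_access_allowed(policy, {"role": "external-user", "purpose": "analytics"}))  # False
```

In a real deployment the policy would also encode the data subject's consent, which is the subject of the user consent solution mentioned below.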
### Data generation OdinS is not involved in the production of raw data, but as part of Task 4.1 User Empowerment Component Definition and the definition of the access control policies and user consent solution, OdinS will generate information associated with the data for controlling access and sharing between the entities and components that use the platform. ### Data Management As the raw data in the data sources will be gathered from sensor nodes and information management systems, it can be regarded as highly sensitive. Therefore, access to raw data can only take place between the specific end users, based on the associated policies, and the partners involved in the analysis of the data. For the models to function correctly, the data will have to be included in the CPaaS.io repository. The results of the data analytics are set to be anonymised and made available to the subsequent layers of the framework, which will then allow external industry stakeholders to use the results of the project for their own purposes. ## NEC NEC is not directly involved in the production of raw data. NEC’s focus is on the architecture (system integration, including transferability and semantic interoperability) and on cloud-edge processing of the data. FIWARE resources such as the Generic Enablers and NEC’s IoT Platform can support the storage and exploitation of data from the use cases for generating higher-level analytical results. NEC pays particular attention to privacy-related requirements as well as to the Open Data policy of CPaaS.io. **Data collection** NEC is not planning to collect any raw data for the use cases of CPaaS.io. ### Data generation NEC is not generating data for the main use cases; NEC may exploit shared data from use cases and generate higher-level data as a result. Potential use cases will be determined at a later stage. 
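To illustrate how use-case data could be represented in a FIWARE-based platform such as the one NEC provides, the following is a minimal sketch of an NGSI-v2-style context entity. The entity id, type, and attribute names are invented for illustration; the FIWARE NGSI-v2 specification is the authoritative reference for the format.

```python
import json

# Minimal sketch of an NGSI-v2-style context entity as handled by FIWARE
# context brokers; the id, type, and attribute below are invented examples.
entity = {
    "id": "urn:ngsi-ld:RainBuffer:001",   # hypothetical identifier
    "type": "RainBuffer",                 # hypothetical entity type
    "fillingLevel": {
        "value": 0.65,
        "type": "Number",
        "metadata": {
            "timestamp": {"value": "2018-12-01T12:00:00Z", "type": "DateTime"}
        },
    },
}

# An application would POST a JSON body like this to a context broker's
# /v2/entities endpoint; here we only serialise it for inspection.
print(json.dumps(entity, indent=2))
```

The point of the sketch is the uniform attribute structure (value, type, metadata), which is what makes data from heterogeneous use cases exploitable by generic enablers.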
### Data management While NEC is not directly involved with the use cases, it will take part in data transferability and management via the provided IoT Platform. NEC has implemented the necessary organizational and technical measures for the usage of the data and its protection from unauthorized persons. ## The Things Network #### Data collection (from sensors) The data collected by The Things Network has been described in Section 2.2 and is used for generating the data described in Table 2 and for developing the Waterproof Amsterdam application. As described in D2.4, the collected data is enriched with additional metadata. #### Data generation The data generated by The Things Network has been described above and is used in the Waterproof Amsterdam application. Private data from owners of a rain buffer is anonymised. Based on an algorithm, data from various sources is processed by the application to determine the optimal filling degree for each individual rain buffer. The results may be used for automated control of buffers, or push notifications to trigger manual control. **Data Management** Open data such as weather data will be streamed into the application and not stored locally. Private data from external sources such as device location will be stored in the application and only released in an anonymised and aggregated manner. Personal details about a device, such as name, address and contact details, will also be stored in the application in a secure account server. These data may be transferred to CPaaS.io at some time, easing security and privacy demands on the application end and transferring those to CPaaS.io. Parts of the personal data, such as buffer location, size and processed litres, will be released in an aggregated, anonymised manner (e.g. on a heat map) per area of a city or the city as a whole. Readily available data from Waternet about sewerage capacity will abide by the policies of Waternet. These policies are not yet clear. 
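The aggregated, anonymised release described above can be sketched as follows. The record fields and area names are hypothetical, not the application's actual schema; the point is that identifying fields (owner, exact location) are dropped and only per-area totals are published.

```python
from collections import defaultdict

# Hypothetical per-buffer records; field and area names are illustrative only.
buffers = [
    {"owner": "alice", "area": "Centrum", "size_l": 500, "processed_l": 120},
    {"owner": "bob",   "area": "Centrum", "size_l": 300, "processed_l": 80},
    {"owner": "carol", "area": "Noord",   "size_l": 800, "processed_l": 200},
]

def aggregate_per_area(records):
    """Release only per-area totals; identifying fields such as the owner
    and exact buffer location are dropped, as in the heat-map release."""
    totals = defaultdict(lambda: {"buffers": 0, "size_l": 0, "processed_l": 0})
    for r in records:
        t = totals[r["area"]]
        t["buffers"] += 1
        t["size_l"] += r["size_l"]
        t["processed_l"] += r["processed_l"]
    return dict(totals)

print(aggregate_per_area(buffers))
# {'Centrum': {'buffers': 2, 'size_l': 800, 'processed_l': 200},
#  'Noord': {'buffers': 1, 'size_l': 800, 'processed_l': 200}}
```

Such per-area totals are what a heat map per city district, as mentioned above, would be rendered from.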
We restrict access to generated data to only those employees, contractors and agents who strictly need access to this information, and who are subject to strict contractual confidentiality obligations. ## YRP ### Data collection (from sensors) The data collected by YRP has been described in Section 2.4 and is used for generating the data described in Table 6 and for developing the Tokyo Public Transportation application. It has also been described in Section 2.6 and is used for generating the data described in Table 12 for developing the Yokosuka Emergency Medical Care application. As described in D2.4, the collected data is enriched with additional metadata. ### Data generation The data generated by YRP has been described in Section 2.4 and is used in the Tokyo Public Transportation application. It has also been described in Section 2.6 and is used in the Yokosuka Emergency Medical Care application. Neither the Tokyo Public Transportation application nor the Yokosuka Emergency Medical Care application generates private data. ### Data Management The Tokyo Public Transportation application converts data such as timetables and the location of trains, and provides the converted data. This application does not manage private data. The Yokosuka Emergency Medical Care application manages the location of ambulances and provides a map of the locations. This application streams private data, such as the video image in the ambulance, and does not store it locally. Explicitly private data gathered during the Sapporo Snow Festival in February 2018 was collected with explicit user agreement. (The English translation of the agreement is part of D2.12 Final Ethics Report.) That data was exchanged with AGT under a mutually signed data exchange agreement. ## Microsoft Japan **Data collection (from sensors)** MSJ is not directly involved in the production of raw data. **Data generation** MSJ is not generating data for the main use cases. 
### Data Management While MSJ is not directly involved with the data management of each CPaaS.io use case, MSJ has implemented the necessary organizational and technical measures in its cloud service for the usage of private data and its protection from unauthorized persons. In the case of collecting data from visitors to the Sapporo Snow Festival in February 2018, MSJ personnel acted as an agent of YRP, which was responsible for the gathering and storage of personal data in Japan. ## ACCESS ### Data collection (from sensors) ACC collects data from smartphones. The collected data are location, time, the device ID of the smartphone and the images taken by the camera of the smartphone. ### Data generation ACC generates route logs from mobility tracking data in the Tokyo Management of Service Vehicles application. The data is owned by transportation providers. This data is stored inside the transportation providers’ services on top of CPaaS.io. ### Data Management ACC manages the data with the usual access control system so that it is not leaked to non-owners. Also, the identity of the smartphones is crucial for the application insofar as the route log is generated, and it is not anonymised within the application domain. ## Ubiquitous Computing Technology ### Data collection (from sensors) Ubiquitous Computing Technology (UCT) is not planning to collect any raw data for the use cases of CPaaS.io. **Data generation** UCT does not generate any new data from the project data sets. ### Data Management UCT does not manage any gathered or generated data. ## University of Tokyo ### Data collection (from sensors) The University of Tokyo (UoT) was not planning to collect any raw data for the use cases of CPaaS.io in 2016. However, as of 2018 it has facilities for monitoring indoor temperature, lighting, human movement, humidity and other data from environmental sensors. **Data generation** UoT does not generate any new data from the project data sets. 
### Data Management UoT does not manage any gathered or generated data other than the pure sensor data (non-private data). # Conclusions In this second version of the Data Management Plan deliverable we presented the approach towards data management taken by the EU CPaaS.io consortium, together with the final statements on data management made at the end of the project.
1. Introduction
2. Data Types
  2.1. Data Types of the Project
  2.2. Levels of Confidentiality and Flow of Data
  2.3. Data Formats
  2.4. Expected Data of the Project
3. Collection, Storage and Use of Research Data and Analysed Research Data
4. Storage and Use of Project Data
  4.1. Storage of Data
  4.2. Documenting of Data
5. Data Sharing
  5.1. Depositing Research Data
  5.2. Open Access Publishing
  5.3. Other Dissemination
6. Opting Out Research Data
7. Summary and Perspectives

# Introduction This document outlines the principles and processes for data collection, annotation, analysis and distribution, as well as storage and security of data, within the EU/JP 5G!Pagoda project. These procedures will be adopted by all project partners throughout the project in order to ensure that all project-related data are well managed according to contractual obligations as well as applicable legislation, both during and after the project. 5G!Pagoda is committed to openness with respect to research data and results, as manifested by its participation in the Pilot on Open Research Data in Horizon 2020. Therefore, as a further dissemination channel, 5G!Pagoda will share the datasets on which its results are derived, as explained in this Open Data Management Plan. Public research repositories are used to maximize the visibility and impact of the research carried out by the partners by making such data available for reuse by third parties and for reproducibility of the project’s results. The Grant Agreement of the 5G!Pagoda project (as an Open Data Pilot participant) obligates the project to deposit [digital research data generated in the project] in a research data repository and to take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following: 1. the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible; 2. 
other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan', i.e. this document. The Grant Agreement contains an option to waive the obligation to deposit a part of the research data in cases where the achievement of the action's main objective, described in Annex 1 of the Grant Agreement, would be jeopardised. In such a case, the Open Data Management Plan must state the reasons for not giving access. As the obligation to deposit research data in a databank does not change the obligation to protect results, the confidentiality and security obligations, or the obligations to protect personal data, the Open Data Management Plan addresses these topics. This document details how these seemingly contradictory commitments to share and to protect are implemented within the project. The Open Data Management Plan has, on the other hand, also served as a tool to agree on the data processing of the 5G!Pagoda project consortium. Producing the Open Data Management Plan has helped the consortium to identify situations where practices were thought to be agreed upon and a common understanding was thought to have been achieved, but where in fact it did not exist. Documents related to the Open Data Management Plan are the 5G!Pagoda project Grant Agreement, the Consortium Agreement, the Coordination Agreement, and the Project Management, Quality and IPR Guide. Some of the deliverables also contain information linked to the Open Data Management Plan. The relationships are described below in Table 1. 
**Table 1 Documents related to the Open Data Management Plan.** <table> <tr> <th> Related document </th> <th> Relationship to the Open Data Management Plan </th> </tr> <tr> <td> The Grant Agreement </td> <td> * Article 27 details the obligation to protect results * Article 29 details dissemination of results, including open access to publications and research data * Article 36 details confidentiality obligations * Article 39 details obligations to protect personal data, if applicable * Annex 1, Part B, Chapter 2.4 and Chapter 2.5 detail open access publication and research data management principles. </td> </tr> <tr> <td> Consortium Agreement </td> <td> Chapter 4.1 on the General principles: “ _Each Party undertakes to notify promptly, in accordance with the governance structure agreed in the Coordination Agreement and directly to the Coordinator, any significant information, fact, problem or delay likely to affect the Project, submission of the deliverables or reports in accordance with the Grant Agreement and_ _shall promptly provide all information reasonably required by the Coordinator to carry out its tasks._ _Each Party shall take reasonable measures to ensure the accuracy of any information or materials it supplies to the Coordinator or the Parties.”_ This is a general declaration of the partners to abide by the rights and obligations set out in the Grant Agreement.
</td> </tr> <tr> <td> Coordination Agreement </td> <td> * Chapter 4.1 on the General principles: _“Each Party undertakes to take part in the efficient implementation of the Coordinated Project, and to cooperate, perform and fulfil, promptly and on time, all of its obligations in the said project and under this Coordination Agreement as may be reasonably required from it and in a manner of good faith._ _Each Party undertakes to notify promptly, in accordance with the governance structure of the Coordinated Project, and as required for the coordination under this Coordination Agreement, any significant information, fact, problem or delay likely to affect the project(s) or the coordination.”_ This is a general declaration of the partners to abide by the rights and obligations set out in the Coordinated Project, Project Plan, part of the Grant Agreement. * Chapter 8.4 on Publication defines publication principles. </td> </tr> <tr> <td> Project Management, Quality and IPR Guide </td> <td> The Project Management, Quality and IPR Guide defines the quality criteria for all work conducted in the 5G!Pagoda project. </td> </tr> <tr> <td> Yearly report on standardization, dissemination and exploitation achievements </td> <td> Contributions to standardization will enable 5G!Pagoda to achieve broader recognition of its results by a wide industry community, stimulate higher levels of interoperability and contribute to establishing economies of scale for 5G!Pagoda applications. Moreover, close coordination between research projects and standardization organizations, particularly through running and validated testbeds, is an important mechanism for exploitation of results and for inspiring and initiating innovation. </td> </tr> <tr> <td> Report on innovation achieved and forthcoming industrial exploitation </td> <td> This public report summarises the innovations achieved in the project and the activities and plans to exploit these innovations and other project results. 
</td> </tr> </table> 5G!Pagoda does not intend to collect any data that may expose private information (i.e., user context-related data). If, during the course of the project, this becomes a necessity, the 5G!Pagoda consortium is committed to respecting privacy and complying with the respective EU directives and national law: any data deemed private will not be made publicly available, and if it has to be made public, it will be appropriately anonymized. The project may generate quantitative models of various aspects of the 5G!Pagoda system (e.g., performance of a specific network function, performance of a service over a specific network slice, network- or application-level traces, dependence of end-user experience on specific system and environment parameters, etc.), which will be made available via open-access publications, technical reports, and the project’s public deliverables. The partners will evaluate the applicability of standards with respect to various aspects of data publication, indexing, storage, preservation, privacy, etc., and will report on them in this Open Data Management Plan (i.e. deliverable D1.2). Where applicable, each contributor to a publication produced as an outcome of 5G!Pagoda will acquire a persistent identifier from the ORCID registry, and each published data set or other research data object will be assigned a persistent identifier from DataCite, which builds on DOIs. The consortium will strive to publish data on repositories indexed by the Registry of Research Data Repositories, which provides an XML schema and an API for querying research data repositories. The selected repositories should support the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) standard for repository interoperability and cross-archive search.
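To illustrate what OAI-PMH interoperability means in practice, the sketch below parses a minimal, hand-written `Identify` response of the kind an OAI-PMH-compliant repository returns over plain HTTP; the endpoint URL shown in the comment is a hypothetical placeholder, not a repository chosen by the project.

```python
import xml.etree.ElementTree as ET

# An OAI-PMH "Identify" request is a plain HTTP GET, e.g. (hypothetical endpoint):
#   https://repository.example.org/oai?verb=Identify
# Below we parse a minimal, hand-written response of the kind such a request returns.
SAMPLE_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <responseDate>2017-01-01T00:00:00Z</responseDate>
  <request verb="Identify">https://repository.example.org/oai</request>
  <Identify>
    <repositoryName>Example Research Data Repository</repositoryName>
    <baseURL>https://repository.example.org/oai</baseURL>
    <protocolVersion>2.0</protocolVersion>
  </Identify>
</OAI-PMH>"""

# All OAI-PMH elements live in this namespace.
NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def repository_name(xml_text: str) -> str:
    """Extract the repository name from an OAI-PMH Identify response."""
    root = ET.fromstring(xml_text)
    return root.find("oai:Identify/oai:repositoryName", NS).text

print(repository_name(SAMPLE_RESPONSE))  # Example Research Data Repository
```

Because every compliant repository answers the same small set of verbs (`Identify`, `ListRecords`, etc.) with XML in this namespace, a single harvester written this way can search across all of the selected repositories.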
The consortium will ensure that appropriate metadata are provided for the published data, including the acknowledgement of the funding by the EU (H2020) and the Japanese Ministry of Internal Affairs and Communications, the name of the project, the grant number and the name of the action. The Open Data Management Plan (ODMP) may be re-evaluated during the project. Each updated version will include the necessary corrective actions to ensure that research data remain discoverable, accessible, assessable and intelligible, interoperable and (re)usable beyond their original purpose. The European Commission as well as the Japanese Ministry of Internal Affairs and Communications will be notified accordingly. # Data Types ## Data Types of the Project In the 5G!Pagoda project there are four basic types of data (Figure 1): research data, analysed research data, project data, and reports and communication data. **Research data** covers the data collected on the project subject matter: information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form. Users can normally access, mine, exploit, reproduce and disseminate openly accessible research data free of charge (H2020 Online Manual). Discussions with the consortium members have made it apparent that it is difficult at this stage to identify the kind of data to be collected in the project. However, once such data are identified, this Open Data Management Plan will be updated accordingly. **Analysed research data** means the reports composed from the research data. Analysed data also refers to qualitative and quantitative data analyses conducted on the data.
Reviews of earlier published data and records will be utilised to some degree. This data will be considered analysed research data for the purposes of this document. Project-related workshops and stakeholder engagement events are public events, and the workshop notes of project partners will be treated in the same way as analysed research data (i.e. the notes will be shared within the consortium). **Figure 1 Data types.** **Project data** includes administrative and financial project data, including contracts, partner information and periodic reports, as well as accumulated data on project meetings, teleconferences and other internal materials. This data is confidential to the project consortium and to the European Commission. Project data consists mainly of MS Office documents, in English, which ensures ease of access and efficiency for project management and reporting. Most of the project data is stored in the password-protected OwnCloud repository, administered by Aalto University. **Reports and other communication data** includes deliverables, presentations and articles, for example. This data type also refers to the contents of the 5G!Pagoda project website. Each data type is treated differently with regard to the level of confidentiality (see Chapter 2.2). Some of the data falls under EU, Japanese, and national laws on data protection, and for this reason the project is obliged to seek the necessary authorisations and to fulfil notification requirements. The project will adopt the principle of using commonly used data formats for the sake of compatibility, efficiency and access. The preferred formats are MS Office-compatible formats, where applicable. ## Levels of Confidentiality and Flow of Data Overall, there are three basic levels of confidentiality, namely Public, Confidential to consortium (including Commission Services), and Confidential to the Partner.
**Figure 2 Data types displayed in three levels of confidentiality.** Figure 2 displays how the previously mentioned data types are positioned in the level of confidentiality context. Figure 3 displays the data in more granularity. **Figure 3 Data distributed into the three levels of confidentiality in more detail.** ## Data Formats Recommended file formats are presented in the following table. See best practices for file formats from _Stanford University Library_ . **Table 2 Recommended file formats.** <table> <tr> <th> **Text format** </th> <th> **File extension** </th> </tr> <tr> <td> Acrobat PDF/A </td> <td> .pdf </td> </tr> <tr> <td> Comma-Separated Values </td> <td> .csv </td> </tr> <tr> <td> Open Office Formats </td> <td> .odt, .ods, .odp </td> </tr> <tr> <td> Plain Text (US-ASCII, UTF-8) </td> <td> .txt </td> </tr> <tr> <td> XML </td> <td> .xml </td> </tr> <tr> <td> **Image / Graphic formats** </td> <td> **File extension** </td> </tr> <tr> <td> JPEG </td> <td> .jpg </td> </tr> <tr> <td> JPEG2000 </td> <td> .jp2 </td> </tr> <tr> <td> PNG </td> <td> .png </td> </tr> <tr> <td> SVG 1.1 (no java binding) </td> <td> .svg </td> </tr> <tr> <td> TIFF </td> <td> .tif, .tiff </td> </tr> <tr> <td> **Audio formats** </td> <td> **File extension** </td> </tr> <tr> <td> AIFF </td> <td> .aif, .aiff </td> </tr> <tr> <td> WAVE </td> <td> .wav </td> </tr> <tr> <td> **Motion formats** </td> <td> **File extension** </td> </tr> <tr> <td> AVI (uncompressed) </td> <td> .avi </td> </tr> <tr> <td> Motion JPEG2000 </td> <td> .mj2, .mjp2 </td> </tr> </table> ## Expected Data of the Project Table 3 presents the data expected to be created during the project. 
D1.2 – Open Data Management Plan (5G!Pagoda, Version 1.0) **Table 3 Expected data of the project.** <table> <tr> <th> </th> <th> **Publications** </th> <th> **Reports & Deliverables** </th> <th> **Software** </th> <th> **Open Research Data** </th> <th> **Opted Out Research Data** </th> <th> **Contribution to Open Sources** </th> <th> **Contribution to Standards** </th> </tr> <tr> <td> **Partner** </td> <td> Gold </td> <td> Green </td> <td> Consortium </td> <td> Public </td> <td> Consortium </td> <td> Public </td> <td> Yes 1) </td> <td> No </td> <td> May be </td> <td> Yes 2) </td> <td> No </td> <td> May be </td> <td> Yes 3) </td> <td> No </td> <td> May be </td> <td> Yes 4) </td> <td> No </td> <td> May be </td> </tr> <tr> <td> AALTO </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> </tr> <tr> <td> UT </td> <td> Δ </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> <td> FG IMT-2020, 5GMF </td> <td> </td> <td> </td> </tr> <tr> <td> Ericsson </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> </tr> <tr> <td> Orange </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> X </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> <td> 3GPP, ETSI NFV, ITU-T </td> <td> </td> <td> </td> </tr> <tr> <td> FOKUS </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 
</td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> EI </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> X </td> </tr> <tr> <td> MI </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> ITU-T </td> <td> </td> <td> </td> </tr> <tr> <td> DG </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> KDDI </td> <td> </td> <td> </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> 3GPP, 5GMF </td> <td> </td> <td> </td> </tr> <tr> <td> HITACHI </td> <td> </td> <td> </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> </tr> <tr> <td> NESIC </td> <td> </td> <td> </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> </tr> <tr> <td> WU </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> X </td> <td> </td> <td> </td> <td> </td> <td> X </td> <td> ITU-TFG IMT2020, SG13 </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> 
<td> </td> <td> </td> </tr> <tr> <td> If you answered yes to the following items, please give detailed comments/explanations below! </td> <td> </td> <td> </td> </tr> <tr> <td> Yes 1) </td> <td> Describe the data, give also format and expected amount (MB, GB) </td> <td> </td> <td> </td> </tr> <tr> <td> Yes 2) </td> <td> If you are opting out data (keep it confidential), describe the data and give reasons for opting out </td> <td> </td> <td> </td> </tr> <tr> <td> Yes 3) </td> <td> If you are contributing to open sources, describe the contributing data and the open source instance </td> <td> </td> <td> </td> </tr> <tr> <td> Yes 4) </td> <td> If you are contributing to standards preparation, describe the instance to contribute to </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Detailed comments** </td> <td> </td> <td> </td> </tr> <tr> <td> **Partner** </td> <td> **Comment** </td> <td> </td> <td> </td> </tr> <tr> <td> AALTO </td> <td> 4) Aalto may contribute to ITU-T IMT 2020 Network Softwarization Working Group along with The University of Tokyo </td> <td> </td> <td> </td> </tr> <tr> <td> UT </td> <td> 4) UT will contribute to ITU-T IMT-2020 and 5GMF Network Architecture WG </td> <td> </td> <td> </td> </tr> <tr> <td> Ericsson </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Orange </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> FOKUS </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> EI </td> <td> 3) EURECOM is interested to contribute the code around Open Air Interface (OAI) to the OAI Software Alliance </td> <td> </td> <td> </td> </tr> <tr> <td> MI </td> <td> 4) MI will contribute to ITU-T SG20 </td> <td> </td> <td> </td> </tr> <tr> <td> DG </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> KDDI </td> <td> 4) KDDI will contribute to 3GPP SA2 WG and 5GMF Technology Promotion Group </td> <td> </td> <td> </td> </tr> <tr> <td> 
HITACHI </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> NESIC </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> WU </td> <td> 4) Waseda will contribute to ITU-T FG IMT-2020 and maybe to SG13 </td> <td> </td> <td> </td> </tr> </table>

# Collection, Storage and Use of Research Data and Analysed Research Data

The research data will be collected following jointly agreed guidelines and principles, in order to guarantee sound research data and to make its reuse possible. Possible data flows are presented in Figure 4.

**Figure 4 Open access to scientific publication and research data in the wider context of dissemination and exploitation (Source: H2020 Programme, Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020, Version 3.1, 25 August 2016).**

# Storage and Use of Project Data

## Storage of Data

The data accumulated in the 5G!Pagoda project will be analysed and stored according to the principles detailed in this plan. Overall, the detailed research data will be stored by the organisations which collected it, and the project data will be stored by Aalto University. Both the central repository and the databases of the individual organisations will be secured using the latest security protocols, and access to data will be granted only to persons nominated by the project partners. All project administrative data will be stored in a dedicated database for the 5G!Pagoda project. The project uses the OwnCloud repository ( _https://5g-pagoda.aalto.fi/owncloud/_ ), which is a secure, password-protected document repository and archive system. Access to the database is managed by the coordinator and provided to the project consortium and other parties as deemed necessary by the project team. The project data is stored on Aalto University servers, not in the cloud, for added security. The project data management structure and categories are as follows.
* Workpackages: WP1, WP2, WP3, WP4, WP5, WP6
* Templates
* Standards-Activities
* Project Meetings: Physical Meetings, ConfCalls
* Market Analysis
* Deliverables: WP1, WP2, WP3, WP4, WP5, WP6
* Admin: Relevant Original DoA, Grant Agreement, Consortium Agreement, Coordination Agreement

The 5G!Pagoda Project Management, Quality and IPR Guide details the project-internal management structure and processes, as well as quality and reporting practices. For project data management, best practices for data generation and sharing have been applied. This includes set rules for version control, whereby the partners are encouraged to use a unified method for naming documents by Task or Deliverable name, with a corresponding version number. The documents are stored in the database and preferably shared within the consortium via a link to the database rather than as e-mail attachments. All deliverables have a unified look and feel based on a unified template, which helps the reviewers in their project evaluation. The project coordinator assumes responsibility for the timely documentation and sharing of project management related documents and materials. Each Work Package (WP) leader monitors the timely documentation of WP-related requirements within the consortium. Each task leader ensures the timely production of the deliverable for which he/she is responsible. As the Tasks in the different Work Packages are strongly inter-related and intertwined, the same previously described principles regarding e.g. confidentiality levels and data types will be applied. ## Documenting of Data The stored data needs to be provided with metadata that complies with an international metadata standard.
To facilitate this, the following questions should be answered already during the research work:

* Who are the creators and what are their affiliations?
* Where is the data located and is there a persistent identifier?
* What license has been chosen to allow reuse?
* How, when and by whom has the data been collected/created?
* How has the data been prepared for analysis?
* What kind of data manipulations have taken place?
* Which methods have been used to analyse the data, and how?
* What instruments and devices have been used?
* Which scientific publications are based on this data?
* What software has been used to process and analyse the data?

Available metadata standards are listed e.g. on these sites:

* _General Metadata standards_ listed by the Digital Curation Center (DCC) UK
* _Discipline-specific metadata standards_ listed by the Digital Curation Center (DCC) UK
* _Metadata standards by topic_ listed by the Research Data Alliance (RDA)

It is imperative to get a persistent identifier for data so that it is findable and citable. The most common identifier is the Digital Object Identifier (DOI). Appropriate data repositories provide persistent identifiers for data sets. The 5G!Pagoda project intends to use _DataCite_ digital identifiers for its research data. For more details about the DOI, the interested reader may refer to the _International DOI Foundation (IDF)_ . Licensing a dataset requires either 1) that all creators agree to release the data they have created under the same license, or 2) that the ownership of the dataset is transferred to one legal entity. One straightforward and effective way of doing this is to attach _Creative Commons_ _Licences_ ( _CC BY_ or _CC0_ ) to the deposited data. The _EUDAT B2SHARE_ tool includes a built-in license wizard that facilitates the selection of an adequate license for research data. Table 4 presents the planned Open Access Data of the project. For repositories see Chapter 5.
**Table 4 Curation of stored open data.** <table> <tr> <th> **Partner** </th> <th> **Data type** </th> <th> **Repository** </th> <th> **Metadata standard** </th> <th> **License standard** </th> </tr> <tr> <td> All partners </td> <td> PU deliverables </td> <td> Participant Portal </td> <td> </td> <td> </td> </tr> <tr> <td> AALTO </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> UT </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> Ericsson </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> Orange </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> Orange </td> <td> Software </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> EI </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> EI </td> <td> Software </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> MI </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> DG </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> <tr> <td> WU </td> <td> Publications </td> <td> TBD </td> <td> TBD </td> <td> TBD </td> </tr> </table> # Data Sharing All parties have signed the Coordination Agreement, and the European partners have signed/acceded to the project Grant Agreement and Consortium Agreement, which together detail the parties’ rights and obligations, including – but not limited to – obligations regarding data security and the protection of privacy. These obligations and the underlying legislation will guide all of the data sharing actions of the project consortium. The 5G!Pagoda project has committed to participate in the Pilot on Open Research Data in Horizon 2020, which is an expression of the larger Open Access initiative of the European Commission.
Participation in the pilot is manifested on two levels: a) depositing research data in an open access research database or repository, and b) providing open access to scientific publications derived from the project research. At the same time, the consortium is dedicated to protecting the privacy of the informants and companies.

## Depositing Research Data

Following the principles of the European Commission Open Data pilot, the applicable research data gathered in the project will be made available to other researchers through open access databases or repositories, in order to increase the potential exploitation of the project work. To make research data openly accessible, the following matters shall be considered:

* Specify which data will be made openly available; if some data is kept closed, provide the rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data, whether documentation about that software is included, and whether the relevant software itself can be included (e.g. as open source code)
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
* Assess the interoperability of your data: specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.

There are online research data archives, which may be subject-based/thematic, institutional or centralised. Useful listings of repositories include the _Registry of Research Data Repositories_ and _Databib_ . The Open Access Infrastructure for Research in Europe (OpenAIRE) provides additional information and support on linking publications to underlying research data. Some repositories, like _Zenodo_ (an OpenAIRE and CERN collaboration), allow researchers to deposit both publications and data, while providing tools to link them.
Zenodo and some other repositories, as well as many academic publishers, also facilitate linking publications and underlying data through persistent identifiers and data citations.

## Open Access Publishing

All peer-reviewed scientific publications relating to results are published so that open access (free of charge, online access for any user) is ensured. Publications will either be made immediately accessible online by the publisher (Gold Open Access), or be made available through an open access repository after an embargo period, which according to the Grant Agreement can be six months at maximum (Green Open Access). Possible Gold Open Access journals include IEEE Access. For all other articles, the researchers aim to publish them via a Green Open Access repository. The coordinator, Aalto University, has a Green Open Access repository the 5G!Pagoda consortium can use, at _https://aaltodoc.aalto.fi/?locale-attribute=en_ . A repository for scientific publications is an online archive. Institutional, subject-based and centralised repositories are all acceptable choices. The _Open Access Infrastructure for Research in Europe (OpenAIRE)_ is the recommended entry point for researchers to determine what repository to choose. It also offers support services for researchers, such as the National Open Access Desks. Other useful listings of repositories are:

* _Registry_ _of Open Access Repositories (ROAR)_
* _Directory_ _of Open Access Repositories_ _(OpenDOAR)_

A machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication will be made available in a repository for scientific publications. Electronic copies of publications will have bibliographic metadata in a standard format, including "European Union (EU)" and "Horizon 2020"; the name of the action, acronym and grant number; the publication date and, if applicable, the length of the embargo period; and a persistent identifier.
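The bibliographic metadata fields listed above can be sketched as a simple record with a completeness check; this is an illustration only, not a mandated schema, and every concrete value below (grant number, date, DOI) is a placeholder rather than a real project identifier.

```python
import json

# A sketch (not a mandated schema) of the bibliographic metadata described above.
# All concrete values below are placeholders, not real project identifiers.
publication_metadata = {
    "funders": ["European Union (EU)", "Horizon 2020"],
    "action_acronym": "5G!Pagoda",
    "grant_number": "000000",               # placeholder -- use the real grant number
    "publication_date": "2017-11-01",       # placeholder date
    "embargo_months": 6,                    # maximum Green OA embargo per the Grant Agreement
    "identifier": "10.5281/zenodo.000000",  # placeholder DOI
}

REQUIRED_FIELDS = {"funders", "action_acronym", "grant_number",
                   "publication_date", "identifier"}

def is_complete(record: dict) -> bool:
    """Check that every required bibliographic field is present and non-empty."""
    return REQUIRED_FIELDS.issubset(record) and all(record[k] for k in REQUIRED_FIELDS)

print(is_complete(publication_metadata))  # True
print(json.dumps(publication_metadata, indent=2))
```

Keeping the metadata in a standard machine-readable format like this is what allows repositories and harvesters to index the deposited copies and link them back to the project and its funding.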
The _ORCID_ identifier is used to identify the authors of publications in the 5G!Pagoda project.

## Other Dissemination

In addition to the above, the project will release a yearly public report on standardization, dissemination and exploitation achievements (deliverables D6.2, D6.3, and D6.4). At the end of the project, a report on innovation achieved and forthcoming industrial exploitation will also be released (D6.5).

# Opting Out Research Data

The 5G!Pagoda project can opt out at any stage, and so free itself retroactively from the associated obligations, if:

* participation is incompatible with the obligation to protect results that can reasonably be expected to be commercially or industrially exploited
* participation is incompatible with the need for confidentiality in connection with security issues
* participation is incompatible with rules on protecting personal data
* participation would mean that the project's main aim might not be achieved
* the project will not generate/collect any research data, or
* there are other legitimate reasons.

Alternatively, the 5G!Pagoda project can also choose to keep selected datasets, or even all data, closed for any of the reasons above, via this Data Management Plan. At this phase of the project no research data has been identified, and thus opting out is not relevant.

# Summary and Perspectives

The Open Data Management Plan will be updated during the project lifetime if new practices for data management are introduced.
# SAFE STRIP Data Management Plan (Grant Agreement No. 723211)
**Executive Summary**

In Horizon 2020 a limited pilot action on open access to research data has been implemented. Participating projects have been required to develop a Data Management Plan (DMP). This deliverable provides the first version of the DMP elaborated by the SAFE STRIP project. The purpose of this document is to provide an overview of the main elements of the data management policy. It outlines how research data will be handled during the project and describes what data will be collected, processed or generated, following what methodology and standards; whether and how this data will be shared and/or made open; and how it will be curated and preserved. In addition, an initial list of data types, metadata and global data collection processes is provided in this document. The SAFE STRIP Data Management Plan follows the latest EC DMP guidelines. This version has explicit recommendations for full life-cycle management through the implementation of the FAIR principles, which state that the data produced shall be Findable, Accessible, Interoperable and Reusable (FAIR). Since the data management plan is expected to evolve during the project, taking into account the progress of the work, especially regarding sensor specifications, use cases, system architecture, pilot preparation and data modelling and fusion activities, an updated version will be produced as foreseen (deliverable D2.3) in M29.

Deliverable D2.2 SAFE STRIP Initial Data Management Plan, Version 1.0, Date 08/11/2017

1. **Introduction**

**1.1 SAFE STRIP main concept and objectives**

**SAFE STRIP** aims to introduce a **disruptive technology** that will embed **C-ITS applications in existing road infrastructure** , including novel I2V and V2I as well as VMS/VSL functions, **into low-cost, integrated strip markers on the road** .
The SAFE STRIP vision is to make roads self-explanatory (with personalised in-vehicle messages) and forgiving (due to advanced cooperative functions) for all road users (trucks, cars and vulnerable road users, such as PTW riders) and all vehicle generations (non-equipped, C-ITS equipped, autonomous), with reduced maintenance cost, full recyclability and added-value services, as well as supporting real-time predictive road maintenance functions. A brief overview of the specific project objectives is given below:

1. To develop a novel micro/nano sensorial system integrated in road pavement tapes/markers that will provide advanced safety functions to all road users at a fraction of the cost of current I2V/V2I nodes and roadside equipment.
2. To support predictive infrastructure maintenance, through input from dynamic road-embedded sensors.
3. To make road infrastructure (mainly highways and interurban roads, but also city rings and selected rural roads) self-explanatory (through personalised info in their own language and preferred format, provided by the system to each driver/rider) and forgiving (through key I2V/V2I info provided to the cooperative system of the vehicle, such as dynamic speed limit and friction coefficient factor), for all vehicle types.
4. To extend this notion to parking depots, key intermodal nodes such as railway crossings, harbour loading/unloading areas and logistic depots, and work zone areas.
5. To reduce the infrastructure operational (including VMS/VSL info and toll collection functions), installation and maintenance costs.
6. To provide key info to C-ITS equipped and autonomous vehicles about road, weather and traffic conditions ahead, to support dynamic trajectory estimation and optimisation.
7. To support a wide range of added-value services.
8. To evaluate the system in a controlled environment.
In order to realise its vision, SAFE STRIP will implement two complementary as well as alternative solutions: one to address equipped vehicles (namely, intelligent vehicles with on-board sensors and C-ITS or automation applications) and one to address non-equipped vehicles (the great majority of current vehicle fleets, including vehicles that are very difficult to equip with rich on-board sensorial platforms, like PTWs).

**1.2 Purpose of the Document**

The purpose of this deliverable (_D2.2-Data Management Plan_) is to provide an analysis of the main elements of the data management policy that will be used by the consortium with regard to all the datasets that will be generated and/or collected by the project consortium. As SAFE STRIP is an H2020-funded project, the Data Management Plan (DMP) must at least cover the specific aspects of the project's datasets. In particular, in accordance with the EC DMP guidelines and template1, it describes the data management life cycle for all datasets to be collected, processed or generated by a research project. It must cover:

* the handling of research data during & after the project
* what data will be collected, processed or generated
* what methodology & standards will be applied
* whether data will be shared/made open access & how
* how data will be curated & preserved.

The DMP plays a crucial role in the project's success for two main reasons. On the one hand, it ensures the availability and the quality of the datasets which will be used/produced in the technical Work Packages of the project (WP2-WP6), serving the achievement of the project's objectives. On the other hand, it provides a thorough elaboration of the state of all datasets and corresponding software, proving that they are accessible and usable by third parties.

**1.3 Intended audience**

The SAFE STRIP project addresses highly innovative concepts.
As such, the foreseen intended audience of the project is the scientific community in the areas of intelligent transport systems, with emphasis on wireless communications, road safety, and automotive engineering. In addition, due to the strong expected impact of the project on their respective domains, the other expected audience consists of automotive industrial communities, telecom operators and standardisation organisations.

**1.4 Interrelations**

This deliverable is directly related to project activity _A2.4-Data Modelling and Fusion_, which defines the data models and protocols for all data to be exchanged among system modules during the project lifecycle. Furthermore, project activities _A1.3-Use cases and application scenarios_ and _A2.2-Sensors specifications_ provide the necessary input for the specification of the various data types and models that are concerned in the project. In particular, from the use cases and applications defined in A1.3 we derive the types of data associated to each application, whereas A2.2 provides insight about the data to be collected and processed by sensors which are installed in various locations of the system architecture.

2. **Data Summary**

**2.1 Data Collection and Sharing Process**

In line with SAFE STRIP objectives, data collection in SAFE STRIP concerns the two key following data clusters:

1. Data that will be generated and exchanged at different communication nodes of the overall system architecture, according to the foreseen use cases.
2. Additional data that need to be collected, **on top of the above**, during **technical validation activities and Pilots with users in a real-life operational environment** during the project, that will allow the meaningful evaluation of the system performance and acceptance as well as the impact assessment to be held (in the context of WP6).
Those data can be raw data (the ones of cluster 1 above, plus some additional data that need to be collected for evaluation purposes regarding the driving performance upon the warnings/notifications provided by the system) and, in addition, subjective/qualitative data that will be collected in the context of interviews and focus groups. Both clusters of data will be exhaustively specified in the following version of the DMP (for M29) upon the outcomes of _D1.2: "SAFE STRIP Use Cases and application scenarios"_ and _D2.1: "System architecture, sensor specifications, fabrication, maintenance, installation and security requirements and Risk Assessment"_, and will be pragmatically verified – before being reported – after the first three evaluation iterations that will have been held until M29. Apart from the above, a set of data has been collected already in the context of the A1.1 surveys and the A1.3 workshop – anonymised – that has been used for recognising stakeholders' needs and views on the system use cases and implementation approach.

In the remaining sections we present the various data categories of the aforementioned data clusters. Where applicable, we provide the full details that describe the data model for each category, i.e. the data structure and data type of each element. It is expected that the full details for all SAFE STRIP data categories will be defined in the context of the activities _A2.2-Sensors specifications_ and _A2.4-Data Modelling and Fusion_ and will be included in the next updated version of this deliverable (_D2.3 SAFE STRIP Final Data Management Plan_) on M29.
**2.2 V2I & V2V Exchanged Messages**

The types of the exchanged I2V and V2V messages can be classified into:

* **Decentralized Environmental Notification Messages** 2 (DENMs – ETSI EN 302 637-3)
* **Cooperative Awareness Messages** 3 (CAMs – ETSI EN 302 637-2)

DENMs contain information related to an event that has a potential impact on road safety and are triggered by an event occurrence (event-triggered messages). On the other hand, CAMs inform road users and roadside infrastructure about each other's position, dynamics and attributes. This is achieved by periodic exchange of information among vehicles (V2V) and between vehicles and roadside infrastructure (V2I). The data associated with the aforementioned types is presented in the remaining sections.

**2.2.1 Decentralized Environmental Notification Messages**

In our application the DENMs, which are exchanged between the Road-Side Unit (RSU) and vehicles, provide information related to road wear and road conditions based on a number of sensor data. DENMs, which are event-based messages, give specific information about the event such as **Event Type**, **Event Location**, **Transmission Area** and **Event Duration**, and are constructed from the Data Elements DE (containing one single data element) & Data Frames DF (containing more than one data element in a predefined order) described in the clause below. The general structure of a DENM is depicted in the following figure. Each container is composed of a sequence of data elements (DE) and/or data frames (DF). Each DE and DF is either optional or mandatory.
* ITS PDU Header
* Management Container
  * _actionID_
  * _detectionTime_
  * _referenceTime_
  * _termination_
  * _eventPosition_
  * _relevanceDistance_
  * _relevanceTrafficDirection_
  * _transmissionInterval_
  * _stationType_
* Situation Container
  * _informationQuality_
  * _eventType_
    * _causeCode_
    * _subCauseCode_
  * _linkedCause_
  * _eventHistory_
* Location Container
  * _EventSpeed_
  * _EventPositionHeading_
  * _Traces_
  * _roadType_
* A la carte Container
  * _LanePosition_
  * _ImpactReduction_
  * _ExternalTemperature_
  * _RoadWorks_
  * _PositioningSolution_
  * _stationaryVehicle_

Each of these DE & DF belongs to one of the following categories4:

* **Vehicle information:** the DE or DF describes one or a set of in-vehicle data.
* **GeoReference information:** the DE or DF provides a geographical description of the data.
* **Road topology information:** the DE or DF describes one or a set of road topology information.
* **Traffic information:** the DE or DF describes one or a set of road traffic information.
* **Infrastructure information:** the DE or DF describes one or a set of ITS infrastructure information.
* **Personal information:** the DE or DF describes one or a set of ITS personal information.
* **Communication information:** the DE or DF describes one or a set of data that are relevant to the ITS application layer or ITS facilities layer communication protocol.
* **Other information:** the DE or DF does not belong to any of the above categories.

The Data Elements DE & Data Frames DF described below give information related to the **ITS PDU Header**4 (Figure 1).

* **DF_ItsPduHeader (DataType_114) – Communication Information Category**
  * Common message header for application and facilities layer messages. It is included at the beginning of an ITS message as the message header.
The DF shall include the following information:

* **protocolVersion:** version of the ITS message and/or communication protocol
* **messageID:** type of the ITS message:
  * denm(1): Decentralized Environmental Notification Message (DENM) as specified in ETSI EN 302 637-3 [i.3]
  * cam(2): Cooperative Awareness Message (CAM) as specified in ETSI EN 302 637-2 [i.2]
  * poi(3): Point of Interest message as specified in ETSI TS 101 556-1 [i.11]
  * spat(4): Signal Phase And Timing (SPAT) message as specified in SAE J2735 [i.12]
  * map(5): MAP message as specified in SAE J2735 [i.12]
  * ivi(6): In Vehicle Information (IVI) message as defined in ISO TS 19321 [i.13]
  * ev-rsr(7): Electric vehicle recharging spot reservation message, as defined in ETSI TS 101 556-3 [i.14]
* **stationID:** the identifier of the ITS-S that generates the ITS message in question. It shall be represented as specified hereafter (DataType 77).

* **DE_StationID (DataType_77) - Communication Information Category**
  * Identifier for an ITS-S (Intelligent Transport Systems Station). The ITS-S ID may be a pseudonym. It may change over space and/or over time.
  * Integer value which ranges from 0 to 4294967295

The Data Elements DE & Data Frames DF described below give information related to the **Management Container** (Figure 1).

* **DF_ActionID (DataType_102) - Communication Information Category**
  * Identifier used to describe a protocol action taken by an ITS-S. For example, it describes an action taken by an ITS-S to trigger a new DENM as defined in ETSI EN 302 637-3 [i.3] after detecting an event. The DF shall include a sequence of the following data:
    * **originatingStationID:** ID of the ITS-S that takes the action. It shall be presented as defined in _**DE_StationID**_
    * **sequenceNumber:** a sequence number. It shall be presented as defined in _**DE_SequenceNumber**_.
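The header layout above can be illustrated with a short sketch. This is a hypothetical helper (the function name and the plain-dict layout are our own, not project code); the messageID values and the stationID range are exactly those listed above:

```python
# Hypothetical helper: mirrors the DF_ItsPduHeader field list above as a dict.
MESSAGE_ID = {"denm": 1, "cam": 2, "poi": 3, "spat": 4,
              "map": 5, "ivi": 6, "ev-rsr": 7}

def make_its_pdu_header(message: str, station_id: int,
                        protocol_version: int = 1) -> dict:
    """Assemble a DF_ItsPduHeader-like structure as a plain dict."""
    if not 0 <= station_id <= 4294967295:   # DE_StationID value range
        raise ValueError("stationID out of range")
    return {"protocolVersion": protocol_version,
            "messageID": MESSAGE_ID[message],
            "stationID": station_id}

# A DENM header with stationID 2, as in the Table 1 example further below.
header = make_its_pdu_header("denm", 2)
```

The dict keys deliberately follow the DE/DF names so the example can be compared field by field with the listing above.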
* **DE_SequenceNumber (DataType_68) - Other Information Category**
  * Sequence number
  * Integer value which ranges from 0 to 65535.

* **DE_TimestampIts (DataType_82) - Other Information Category**
  * Number of milliseconds since 2004-01-01T00:00:00.000Z, as specified in ISO 8601 [i.10]. EXAMPLE: The value of TimestampIts for 2007-01-01T00:00:00.000Z is 94 694 401 000 milliseconds, which includes one leap second insertion since 2004-01-01T00:00:00.000Z. It ranges from 0 to 4398046511103.
    * (0) utcStartOf2004
    * (1) oneMillisecAfterUTCStartOf2004

* **DF_ReferencePosition (DataType_124) – GeoReference Information Category**
  * The geographical position of a position or of an ITS-S. It represents a geographical point position. The DF shall include a sequence of the following information:
    * **latitude**: latitude of the geographical point
    * **longitude**: longitude of the geographical point
    * **positionConfidenceEllipse**: accuracy of the geographical position
    * **altitude**: altitude and altitude accuracy of the geographical point

* **DE_Latitude (DataType_41) – GeoReference Information Category**
  * Absolute geographical latitude in a WGS84 coordinate system, providing a range of 90 degrees in the north or south hemisphere. Positive values are used for latitudes north of the Equator, negative values for latitudes south of the Equator. When the information is unavailable, the value shall be set to 900 000 001.
  * Unit: 0.01 microdegree

* **DE_Longitude (DataType_44) - GeoReference Information Category**
  * Absolute geographical longitude in a WGS84 co-ordinate system, providing a range of 180 degrees to the east or to the west of the prime meridian. Negative values are used for longitudes to the west; positive values are used for longitudes to the east. When the information is unavailable, the value shall be set to 1 800 000 001.
o Unit: 0.01 microdegree * **DF_PosConfidenceEllipse (DataType_ 119) - GeoReference Information** **Category** o DF that provides the horizontal position accuracy in a shape of ellipse with a predefined confidence level (e.g. 95 %). The centre of the ellipse shape corresponds to the reference position point for which the position accuracy is evaluated. The DF shall include a sequence of the following information: * **semiMajorConfidence** : half of length of the major axis, i.e. distance between the centre point and major axis point of the position accuracy ellipse ( _**DE_SemiAxisLength** _ ). * **semiMinorConfidence** : half of length of the minor axis, i.e. distance between the centre point and minor axis point of the position accuracy ellipse ( _**DE_SemiAxisLength** _ ). * **semiMajorOrientation** : orientation direction of the ellipse major axis of the position accuracy ellipse with regards to the WGS84 north ( _**DE_HeadingValue** _ ). * **DE_SemiAxisLength (DataType_67) - GeoReference Information** **Category** o Absolute position accuracy in one of the axis direction as defined in a shape of ellipse with a predefined confidence level (e.g. 95 %). The required confidence level is defined by the corresponding standards applying the DE. The value shall be set to: * (1) if the accuracy is equal to or less than 1 cm * (n > 1 and n < 4 093) if the accuracy is equal to or less than n cm * (4 093) if the accuracy is equal to or less than 4 093 cm * (4 094) if the accuracy is out of range, i.e. greater than 4 093 cm * (4 095) if the accuracy information is unavailable o Unit: 1 cm. **NOTE** : The fact that a position coordinate value is received with confidence set to 'unavailable(4095)' can be caused by several reasons, such as: * the sensor cannot deliver the accuracy at the defined confidence level because it is a low-end sensor, * the sensor cannot calculate the accuracy due to lack of variables, or o there has been a vehicle bus (e.g. CAN bus) error. 
In all 3 cases above, the reported position coordinate value may be valid and used by the application. If a position coordinate value is received and its confidence is set to 'outOfRange(4094)', it means that the reported position coordinate value is not valid and therefore cannot be trusted. Such a value is not useful for the application.

* **DE_HeadingValue (DataType_35) – GeoReference, Vehicle & Road Topology Information Category**
  * Orientation of a heading with regards to the WGS84 north.
    * wgs84North(0)
    * wgs84East(900)
    * wgs84South(1800)
    * wgs84West(2700)
    * unavailable(3601)
  * Unit: 0.1 degree.

* **DF_Altitude (DataType_103) - GeoReference Information Category**
  * Altitude and accuracy of an altitude in a WGS84 co-ordinate system. The DF shall include a sequence of the following information:
    * **altitudeValue**: altitude of a geographical point.
    * **altitudeConfidence**: accuracy of the reported altitudeValue within a specific confidence level.

* **DE_AltitudeValue (DataType_9) - GeoReference Information Category**
  * Altitude in a WGS84 co-ordinate system. When the information is not available, the DE shall be set to 800 001. For altitude equal to or greater than 8 000 m, the DE shall be set to 800 000. For altitude equal to or less than -1 000 m, the DE shall be set to -100 000.
  * Unit: 0.01 metre

* **DE_AltitudeConfidence (DataType_8) - GeoReference Information Category**
  * Absolute accuracy of a reported altitude value of a geographical point for a predefined confidence level (e.g. 95 %). The required confidence level is defined by the corresponding standards applying the usage of this DE.
The value shall be set to: * 0 if the altitude accuracy =< 0.01 metre * 1 if the altitude accuracy =< 0.02 metre * 2 if the altitude accuracy =< 0.05 metre * 3 if the altitude accuracy =< 0.1 metre * 4 if the altitude accuracy =< 0.2 metre * 5 if the altitude accuracy =< 0.5 metre * 6 if the altitude accuracy =< 1 metre * 7 if the altitude accuracy =< 2 metres * 8 if the altitude accuracy =< 5 metres * 9 if the altitude accuracy =< 10 metres * 10 if the altitude accuracy =< 20 metres * 11 if the altitude accuracy =< 50 metres * 12 if the altitude accuracy =< 100 metres * 13 if the altitude accuracy =< 200 metres * 14 if the altitude accuracy is out of range, i.e. greater than 200 metres * 15 if the altitude accuracy information is unavailable **NOTE** : The fact that an altitude value is received with confidence set to 'unavailable(15)' can be caused by several reasons, such as: * the sensor cannot deliver the accuracy at the defined confidence level because it is a low-end sensor, * the sensor cannot calculate the accuracy due to lack of variables, or o there has been a vehicle bus (e.g. CAN bus) error. In all 3 cases above, the reported altitude value may be valid and used by the application. If an altitude value is received and its confidence is set to 'outOfRange(14)', it means that the reported altitude value is not valid and therefore cannot be trusted. Such value is not useful for the application. * **DE_RelevanceDistance (DataType_ 61) - GeoReference Information** **Category** o DE describing a distance of relevance for information indicated in a message, for example, it may be used to describe the distance of relevance of an event indicated in a DENM as defined in ETSI EN 302 637-3 [i.3]. 
The value shall be set to:

* 0 if relevance distance < 50 m
* 1 if relevance distance < 100 m
* 2 if relevance distance < 200 m
* 3 if relevance distance < 500 m
* 4 if relevance distance < 1000 m
* 5 if relevance distance < 5 km
* 6 if relevance distance < 10 km
* 7 if relevance distance > 10 km

* **DE_RelevanceTrafficDirection (DataType_62) - GeoReference Information Category**
  * DE describing a traffic direction that is relevant to information indicated in a message. For example, it may be used to describe the traffic direction which is relevant to an event indicated by a DENM as defined in ETSI EN 302 637-3 [i.3]. The value shall be set to:
    * 0 if allTrafficDirections
    * 1 if upstreamTraffic
    * 2 if downstreamTraffic
    * 3 if oppositeTraffic
  * The terms "upstream", "downstream" and "oppositeTraffic" are relative to the event position.

* **DE_ValidityDuration (DataType_88) - Traffic Information Category**
  * At the end of this validity duration, the event is regarded as terminated and all information related to it may be deleted.
  * Unit: second (range 0 to 86400)

* **DE_TransmissionInterval (DataType_86) - Communication Information Category**
  * Time interval between two consecutive message transmissions.
  * Unit: millisecond

* **DE_StationType (DataType_78) - Other Information Category**
  * The type of an ITS-S. The station type depends on the integration environment of the ITS-S into a vehicle, a mobile device or infrastructure. (0 … 255)
    * unknown(0)
    * pedestrian(1)
    * cyclist(2)
    * moped(3)
    * motorcycle(4)
    * passengerCar(5)
    * bus(6)
    * lightTruck(7)
    * heavyTruck(8)
    * trailer(9)
    * specialVehicles(10)
    * tram(11)
    * roadSideUnit(15)

The Data Elements DE & Data Frames DF described below give information related to the **Situation Container** (Figure 1).
* **DE_InformationQuality (DataType_39) - Other Information Category**
  * Quality level of the provided information (0 … 7)
    * unavailable (0)
    * lowest (1)
    * highest (7)

* **DF_CauseCode (DataType_104) - Traffic Information Category**
  * Encoded value of a traffic event type. The DF shall include the following information:
    * causeCode: the type of a direct cause of a detected event. It shall be presented as defined in _**CauseCodeType**_
    * subCauseCode: sub-type of the direct cause. It shall be presented as defined in _**SubCauseCodeType**_.
  * The values of causeCode and subCauseCode are defined in clause 7.1.4 of ETSI EN 302 637-3 [i.3].

* **DE_CauseCodeType (DataType_10) - Traffic Information Category**
  * Value of the direct cause code of a detected event. The cause codes are described as follows:
    * adverseWeatherCondition-Adhesion (**Direct Cause Code** 🡪 6): the type of event is low adhesion
    * adverseWeatherCondition-ExtremeWeatherCondition (**Direct Cause Code** 🡪 17): the type of event is extreme weather condition
    * adverseWeatherCondition-Precipitation (**Direct Cause Code** 🡪 19): the type of event is precipitation
    * adverseWeatherCondition-Visibility (**Direct Cause Code** 🡪 18): the type of event is low visibility
    * hazardousLocation-SurfaceCondition (**Direct Cause Code** 🡪 9): the type of event is abnormal road surface condition
    * roadworks (**Direct Cause Code** 🡪 3): the type of event is roadworks

* **DE_SubCauseCodeType (DataType_81) - Traffic Information Category**
  * Type of **Sub Cause** of a detected event as defined in ETSI EN 302 637-3 [i.3].

* **DE_AdverseWeatherCondition-AdhesionSubCauseCode (DataType_4) - Traffic Information Category**
  * Low road adhesion is due to:
    * unavailable (Sub Cause Code 🡪 0)
    * heavy frost on the road (Sub Cause Code 🡪 1)
    * fuel on the road (Sub Cause Code 🡪 2)
    * mud on the road (Sub Cause Code 🡪 3)
    * snow on the road (Sub Cause Code 🡪 4)
    * ice on the road (Sub Cause Code 🡪 5)
    * black ice on the road (Sub Cause Code 🡪 6)
    * oil on the road (Sub Cause Code 🡪 7)
    * loose gravel or stone fragments detached from a road surface or from a hazard (Sub Cause Code 🡪 8)
    * instant black ice on the road surface (Sub Cause Code 🡪 9)
    * salted road (Sub Cause Code 🡪 10)

* **DE_AdverseWeatherCondition-ExtremeWeatherConditionSubCauseCode (DataType_5) - Traffic Information Category**
  * Extreme weather condition is:
    * unavailable (Sub Cause Code 🡪 0)
    * strong wind (Sub Cause Code 🡪 1)
    * damaging hail (Sub Cause Code 🡪 2)
    * hurricane (Sub Cause Code 🡪 3)
    * thunderstorm (Sub Cause Code 🡪 4)
    * tornado (Sub Cause Code 🡪 5)
    * blizzard (Sub Cause Code 🡪 6)

* **DE_AdverseWeatherCondition-PrecipitationSubCauseCode (DataType_6) - Traffic Information Category**
  * Precipitation is:
    * unavailable (Sub Cause Code 🡪 0)
    * heavy rain (Sub Cause Code 🡪 1)
    * heavy snow fall (Sub Cause Code 🡪 2)
    * soft hail (Sub Cause Code 🡪 3)

* **DE_AdverseWeatherCondition-VisibilitySubCauseCode (DataType_7) - Traffic Information Category**
  * Low visibility due to:
    * unavailable (Sub Cause Code 🡪 0)
    * fog (Sub Cause Code 🡪 1)
    * smoke (Sub Cause Code 🡪 2)
    * heavy snow fall (Sub Cause Code 🡪 3)
    * heavy rain (Sub Cause Code 🡪 4)
    * heavy hail (Sub Cause Code 🡪 5)
    * sun glare (Sub Cause Code 🡪 6)
    * sand storm (Sub Cause Code 🡪 7)

* **DE_HazardousLocation-SurfaceConditionSubCauseCode (DataType_33) - Traffic Information Category**
  * Hazardous location in case:
    * no further detailed information on the road surface condition is available (Sub Cause Code 🡪 0)
    * the road surface is damaged by earthquake (Sub Cause Code 🡪 2)
    * the road surface is damaged by subsidence (Sub Cause Code 🡪 4)
    * the road surface is damaged due to snow drift (Sub Cause Code 🡪 5)
    * the road surface is damaged by strong storm (Sub Cause Code 🡪 6)
    * the road surface is damaged due to pipe burst (Sub Cause Code 🡪 7)
    * the road surface is damaged due to falling ice (Sub Cause Code 🡪 9)

The sensors can be connected to an On-Road Unit (ORU) and/or a Road-Side Unit (RSU) and can provide information about temperature, humidity, barometric air pressure, wind direction & speed, precipitation quantity, solar radiation, chemicals detection (e.g. oil) on the road and road wear (strain gauges)5, 6.

The Data Elements DE & Data Frames DF described below give information related to the **A la Carte Container** (Figure 1).

* **DE_Temperature (DataType_83) - Other Information Category**
  * Temperature. For temperature equal to or less than -60 °C, the value shall be set to -60. For temperature equal to or greater than 67 °C, the value shall be set to 67.
    * equalOrSmallerThanMinus60Deg (-60)
    * oneDegreeCelsius (1)
    * equalOrGreaterThan67Deg (67)
  * Unit: degree Celsius (range -60 to 67)

* **DE_RoadworksSubCauseCode (DataType_66) - Traffic Information Category**
  * Roadwork type:
    * unavailable further detailed information on roadworks (Sub Cause Code 🡪 0)
    * a major roadworks is ongoing (Sub Cause Code 🡪 1)
    * a road marking work is ongoing (Sub Cause Code 🡪 2)
    * slow moving road maintenance work is ongoing (Sub Cause Code 🡪 3)
    * a short-term stationary roadwork is ongoing (Sub Cause Code 🡪 4)
    * a vehicle street cleaning work is ongoing (Sub Cause Code 🡪 5)
    * winter service work is ongoing (Sub Cause Code 🡪 6)

Table 1 shows an example of a DENM message with the RoadWorks cause code selected.

Table 1: Example of DENM generated by a roadside unit.
<table> <tr> <th> ETSI ITS (DENM) DENM header protocolVersion: currentVersion (1) messageID: denm (1) stationID: 2 denm management actionID originatingStationID: 148 sequenceNumber: 33 detectionTime: 2158628750 (2158628.750 s) referenceTime: 2158628750 (2158628.750 s) eventPosition latitude: (45.8510750 deg) longitude: (11.0006270 deg) positionConfidenceEllipse semiMajorConfidence: unavailable (4095) semiMinorConfidence: unavailable (4095) semiMajorOrientation: unavailable (3601) altitude altitudeValue: (800.00 m) altitudeConfidence: unavailable (15) relevanceDistance: lessThan500m (3) relevanceTrafficDirection: allTrafficDirections (0) validityDuration: Unknown (60 sec) stationType: roadSideUnit (15) situation informationQuality: lowest (1) eventType causeCode: roadworks (3) subCauseCode: 0 </th> </tr> </table>

**2.2.2 Cooperative Awareness Messages**

Cooperative awareness among road users means that vehicles and infrastructure become aware of each other's presence thanks to periodic messages containing each other's position, dynamics and attributes. The information to be exchanged is packed into the periodically transmitted Cooperative Awareness Message (CAM), whose construction, management and processing is part of the ITS communication architecture ETSI EN 302 665. The requirements on the performance of the CAM and the quality of its data elements are derived from the Basic Set of Applications (BSA) defined in ETSI TR 102 638. Figure 2 shows the general structure of a CAM message. Each container consists of a sequence of DE and DF, similarly to DENM.
Here is a description of the typical containers:

* ITS PDU Header
* Basic Container
  * stationType
  * referencePosition
    * latitude
    * longitude
    * positionConfidenceEllipse
    * altitude
* High Frequency Container
  * Basic Vehicle Container HF
    * heading
    * speed
    * driveDirection
    * vehicleLength
    * vehicleWidth
    * longitudinalAcceleration
    * curvature
    * yawRate
  * OR
  * Basic RSU Container HF
    * Protected Communication Zone
      * protectedZoneType
      * protectedZoneLatitude
      * protectedZoneLongitude
      * protectedZoneRadius
* Low Frequency Container
  * vehicleRole
  * exteriorLights
  * pathHistory
    * ItemX: deltaLatitude
    * ItemX: deltaLongitude
    * ItemX: deltaAltitude

The content of a CAM varies with the type of ITS-S station. In SAFE STRIP two types of stations will be used to identify the presence of equipped vehicles. The CAM messages will be used to notify the vehicle that it is approaching a location with equipped infrastructure, and vice versa the vehicles will send a notification to the infrastructure about their presence. Table 2 shows the implementation of two CAM messages, from a vehicle and from a roadside unit.

Table 2: Example of CAM generated by: (a) a vehicle, (b) a road-side unit.
<table> <tr> <th> ETSI ITS (CAM) CAM header protocolVersion: currentVersion (1) messageID: cam (2) stationID: 200 cam generationDeltaTime: 30204 camParameters basicContainer stationType: passengerCar (5) referencePosition latitude: (45.8501000 deg) longitude: Unknown (11.0012658 deg) positionConfidenceEllipse semiMajorConfidence: (12.00 m) semiMinorConfidence: (11.50 m) semiMajorOrientation: (36.01 deg) altitude altitudeValue: (173.60 m) altitudeConfidence: unavailable (15) highFrequencyContainer: basicVehicleContainerHighFrequency (0) basicVehicleContainerHighFrequency heading headingValue: (129.7 deg) headingConfidence: (1) speed speedValue: (20.50 m/s, 73.80 km/h) </th> </tr> </table> <table> <tr> <th> speedConfidence: (0.16 m/s) driveDirection: unavailable (2) vehicleLength vehicleLengthValue: (3.8 m) vehicleLengthConfidenceIndication: noTrailerPresent (0) vehicleWidth: (2.0 m) longitudinalAcceleration longitudinalAccelerationValue: (1.61) longitudinalAccelerationConfidence: (1.02) curvature curvatureValue: straight (0) curvatureConfidence: unavailable (7) curvatureCalculationMode: yawRateUsed (0) yawRate yawRateValue: straight (0.00 deg/sec) yawRateConfidence: unavailable (8) lowFrequencyContainer: basicVehicleContainerLowFrequency vehicleRole: specialTransport (2) exteriorLights: 12 [0001 0010] pathHistory: 1 item Item 0 PathPoint pathPosition deltaLatitude: (0.0000003 deg) deltaLongitude: (0.0000002 deg) deltaAltitude: (0.01 m) </th> </tr> <tr> <td> (a) </td> </tr> <tr> <td> ETSI ITS (CAM) CAM header protocolVersion: currentVersion (1) messageID: cam (2) stationID: 2 cam generationDeltaTime: (35748) camParameters basicContainer stationType: roadSideUnit (15) referencePosition latitude: (45.8510750 deg) longitude: (11.0006270 deg) positionConfidenceEllipse </td> </tr> <tr> <td> semiMajorConfidence: unavailable (4095) semiMinorConfidence: unavailable (4095) semiMajorOrientation: unavailable (3601) altitude altitudeValue: unavailable (800001) 
altitudeConfidence: unavailable (15) highFrequencyContainer: rsuContainerHighFrequency (1) rsuContainerHighFrequency protectedCommunicationZonesRSU: 1 item Item 0 ProtectedCommunicationZone protectedZoneType: cenDsrcTolling (0) protectedZoneLatitude: (45.8510750 deg) protectedZoneLongitude: (11.0006270 deg) protectedZoneRadius: (300) </td> </tr> <tr> <td> (b) </td> </tr> </table>

The data content transmitted by the two stations is different: the vehicles are moving objects, so it is necessary to spread information about vehicle dynamics, whereas RSUs are characterised by static coordinates and protected areas. For the same reason vehicles send CAMs and DENMs at higher rates than roadside units: typically vehicles transmit 10 messages per second, while roadside units send out CAM messages at 1 Hz. ETSI EN 302 637-2 defines the mandatory fields, such as the input source of the message, required for the proper operation of the Cooperative Awareness service. Here the most relevant containers from the SAFE STRIP perspective are reported.

The Data Elements DE & Data Frames DF described below provide information related to the **ITS PDU Header**:

* **DF_ItsPduHeader (DataType_114) – Communication Information Category**
  * ITS PDU header of the CAM; it is composed of three fields:
    * **protocolVersion**: version of the ITS message and/or communication protocol;
    * **messageID**: type of the ITS message. The following message type values are assigned in the present document: denm(1): Decentralized Environmental Notification Message, cam(2): Cooperative Awareness Message, poi(3): Point of Interest message, spat(4): Signal Phase And Timing (SPAT) message, map(5): MAP message, ivi(6): In Vehicle Information (IVI) message and ev-rsr(7): Electric vehicle recharging spot reservation message;
    * **stationID**: the identifier of the ITS-S that generates the ITS message in question. It shall be represented as specified hereafter (DataType 77).
Generation Delta Time is a container described in ETSI EN 302 637-2; it is not included in the ITS PDU Header nor in the High Frequency Container, but it is mandatory for evaluating the time validity.

* **generationDeltaTime** the reference time of the CAM, considered as the time of CAM generation. It shall be used as the time basis for applications with a predefined time validity.

The following DF and DE comprise information of the Basic container, which provides basic information about the originating ITS station.

* **DE_StationType (DataType_78) - Other Information Category** o The type of an ITS-S: unknown, pedestrian, cyclist, moped, motorcycle, passengerCar, bus, lightTruck, heavyTruck, trailer, specialVehicles, tram, roadSideUnit.
* **DF_referencePosition (DataType_124) - GeoReference Information Category** o Position at the reference point of the originating ITS-S. The measurement time shall correspond to generationDeltaTime. This parameter shall be used to locate vehicles and roadside units in an absolute coordinate system. The DF shall include a sequence of the following information:
  * latitude: latitude of the geographical point,
  * longitude: longitude of the geographical point,
  * positionConfidenceEllipse: accuracy of the geographical position,
  * altitude: altitude and altitude accuracy of the geographical point.
* **DF_PosConfidenceEllipse (DataType_119) - GeoReference Information Category** o positionConfidenceEllipse provides the accuracy of the measured position with a 95 % confidence level. The DF shall include a sequence of the following information:
  * semiMajorConfidence: half of the length of the major axis, i.e. the distance between the centre point and the major axis point of the position accuracy ellipse,
  * semiMinorConfidence: half of the length of the minor axis, i.e.
the distance between the centre point and the minor axis point of the position accuracy ellipse,
  * semiMajorOrientation: orientation direction of the ellipse major axis of the position accuracy ellipse with regard to WGS84 north.

The following DF and DE comprise information of the High Frequency Container, which contains the high frequency signals of the vehicle or roadside unit. Examples of such DF and DE include vehicle acceleration, speed and yaw rate. Thanks to this information the following scenario can be realised: the infrastructure and a vehicle can determine whether a user is properly approaching an intersection, or any other obstacle. Roadside units, on the other hand, can disseminate in their high frequency container the position of a CEN DSRC tolling station.

* **DF_Heading (DataType_112) – GeoReference, Vehicle & Road Topology Information Category** o Heading in a WGS84 co-ordinate system. The DF shall include a sequence of the following information:
  * headingValue: a heading value,
  * headingConfidence: the accuracy of the reported heading value with a predefined confidence level.
* **DF_Speed (DataType_126) - Vehicle Information Category** o It describes the speed and the corresponding accuracy of the speed information for a moving object (e.g. vehicle). The DF shall include a sequence of the following information:
  * speedValue,
  * speedConfidence: accuracy of the reported speed value.
* **DE_DriveDirection (DataType_22) - Vehicle Information Category** o It denotes whether a vehicle is driving forward or backward.
* **DF_vehicleLength (DataType_131) - Vehicle Information Category** o Length of vehicle and accuracy indication information. The DF shall include a sequence of the following information:
  * vehicleLengthValue: length of vehicle,
  * vehicleLengthConfidenceIndication: indication of reported length value confidence.
* **DE_VehicleWidth (DataType_95) - Vehicle Information Category** o Width of a vehicle, including side mirrors.
* **DF_LongitudinalAcceleration (DataType_129) - GeoReference Information Category** o It indicates the vehicle acceleration in the longitudinal direction and the accuracy of the longitudinal acceleration. The DF shall include a sequence of the following information:
  * longitudinalAccelerationValue: longitudinal acceleration value at a point in time,
  * longitudinalAccelerationConfidence: accuracy of the reported longitudinal acceleration value with a predefined confidence level.
* **DF_Curvature (DataType_107) - Vehicle Information Category** o It describes the curvature of the vehicle trajectory and its accuracy. The curvature detected by a vehicle represents the curvature of the actual vehicle trajectory. The DF shall include a sequence of the following information:
  * curvatureValue: detected curvature of the vehicle trajectory,
  * curvatureConfidence: accuracy of the reported curvature value with a predefined confidence level.
* **DE_CurvatureCalculationMode (DataType_13) - Vehicle Information Category** o It describes whether the yaw rate is used to calculate the curvature for a reported curvature value.
* **DF_YawRate (DataType_132) - Vehicle Information Category** o Yaw rate of vehicle at a point in time. The DF shall include a sequence of the following information:
  * yawRateValue: yaw rate value at a point in time,
  * yawRateConfidence: accuracy of the reported yaw rate value.

The following DF and DE comprise information of the Low Frequency Container, which contains information changing with more relaxed dynamics, such as the exterior lights or the vehicle role. This information allows the user to react to different types of operations: public transport, roadwork vehicles, etc.

* **DE_VehicleRole (DataType_94) - Vehicle Information Category** o Role played by a vehicle at a point in time: default, publicTransport, specialTransport, dangerousGoods, roadWork, rescue, emergency, safetyCar, agriculture, commercial, military, roadOperator, taxi.
* **DE_ExteriorLights (DataType_28) - Vehicle Information Category** o This DE describes the status of the exterior light switches of a vehicle.
* **DF_ProtectedCommunicationZonesRSU (DataType_122) – Infrastructure & Communication Information Category** o DF that describes a list of communication zones protected by a roadside ITS-S (Road Side Unit, RSU). It may provide information on up to 16 protected communication zones. Each protected communication zone shall be presented as DF_ProtectedCommunicationZone.
* **DF_ProtectedCommunicationZone (DataType_121) – Infrastructure & Communication Information Category** o DF that describes a zone of protection inside which ITS-G5 communication should be restricted. It shall include a sequence of the following information:
  * protectedZoneType: type of the protected zone,
  * expiryTime: (optional) time at which the validity of the protected communication zone will expire,
  * protectedZoneLatitude: latitude of the centre point of the protected communication zone,
  * protectedZoneLongitude: longitude of the centre point of the protected communication zone,
  * protectedZoneRadius: (optional) radius of the protected communication zone in metres,
  * protectedZoneID: (optional) the ID of the protected communication zone.
  o A protected communication zone may be defined around a CEN DSRC road side equipment.

**2.3 TMC Messages**

Standard RDS-TMC user messages provide the following five basic items of explicit, broadcast information9:

* **Event description**, giving details of: o road event situation o general traffic problems o weather situations. In some cases more information about the severity of the event is given.
* **Location** (road segment or point location of event occurrence)
* **Direction and Extent**, identifying the adjacent segments or specific point locations also affected by the incident, and where appropriate the direction of traffic affected.
* **Duration** (indication of expected problem duration)
* **Diversion advice** (showing whether or not end-users are recommended to find and follow an alternative route)

Optional information can be added to any message using one or more additional RDS data groups. This optional addition can give greater detail or can deal with unusual situations. Any number of additional fields can in principle be added to each basic message, subject only to a maximum message length of five RDS data groups4.

4. **User model**

In the context of SAFE STRIP, any user is an entity that has its own data model. The elements of the User model are shown below.

Table 3: User model in SAFE STRIP

<table> <tr> <th> **Field Name** </th> <th> **Type** </th> <th> **Required** </th> <th> **Details** </th> </tr> <tr> <td> **_id** </td> <td> objectid </td> <td> True </td> <td> Unique id </td> </tr> <tr> <td> **email** </td> <td> string </td> <td> True </td> <td> Email </td> </tr> <tr> <td> **password** </td> <td> string </td> <td> True </td> <td> Password </td> </tr> <tr> <td> **name** </td> <td> string </td> <td> True </td> <td> First/ last name </td> </tr> <tr> <td> **phone** </td> <td> string </td> <td> True </td> <td> Phone number </td> </tr> <tr> <td> **fcm_token** </td> <td> string </td> <td> True </td> <td> Firebase Cloud Messaging token </td> </tr> </table>

No user personal data will be made open by any means; it will be stored according to the SAFE STRIP data privacy policy (Section 6.3).

5. **Data Collection and Management in Pilots**

Data management in Pilots will follow, as a minimum, the principles below at each pilot site level. This list will be updated in the next DMP version (deliverable D2.3):

* Each pilot site must ensure that data are collected according to the evaluation requirements and with the required accuracy.
* Each pilot site must provide, in a timely fashion, the requested data to perform test analysis.
* Data must be provided with its metadata or a complete description. To enable accurate usage, data must be generated in compliance with the evaluation requirements (formats, measurements, etc.) that will be defined in the technical validation and evaluation plans of A5.6 and A6.1 respectively.
* Each pilot site must provide easy access to collected data and a secure environment for data storage.
* Each pilot site needs to have all the necessary tools to check data quality and to read raw data. Automated scripts will be provided to process large datasets in order to ease and enable post-processing of aggregated data, whereas the logging of the raw data will be pre-defined in a homogeneous way for all data types.

The data that will be collected during technical validation and user trial activities are as follows:

* Subjective anonymised data from users (drivers) and other stakeholders (e.g. TMC operators) – collected at **pilot site level**, related to system evaluation and acceptance. An indicative list of evaluation indicators to be collected is as follows (the list is not exhaustive and will be finalized in the technical and pilot evaluation plans): usage, user satisfaction, user compliance (following the system's advice), user acceptance (perceived/rated by users), usability ratings (perceived by users), system efficiency rates, willingness to have and use the system.
* Driving performance data from drivers/riders that will be logged during trials by the logging mechanisms installed in the vehicles and the mobile devices **– collected at pilot site level, but in a homogeneous way and upon common protocols,** related to speed, lateral deviation, safety headway, acceleration/deceleration, lane merging and overtaking, etc.
* **Data from sensors**. This includes the data that will be collected by the various sensors, e.g.
road sensors for _humidity_, _temperature_, _oil detection_ and _road wear_, as well as tyre sensors for _friction_, RFID sensors for _parking slots_, _car position_, _speed limit_, etc. This data will be collected by the hardware sensors and will be stored in the cloud via the SAFE STRIP communication modules (i.e., through LTE), according to the data model defined in Section 2.2. In particular, for parking slots and relevant information, the SAFE STRIP data model will be compatible with the ETSI TR 102 638 standard8. For any data element for which no reference to existing standards exists (e.g. _vehicle lane position_ for autonomous vehicles), the project will propose a new data model to be potentially included in the existing standards.

* **TMC messages**, such as the ones shown in Section 2.3. Storage of those messages will be done centrally, by means of the OMNIA cloud repository (provided by SWARCO). Hence, the corresponding data model will adhere to the one supported by the OMNIA Web API. Only data that are not subject to any IPR-related issues will be available as open data. A final decision about this will be made in the context of activity A2.4 and be included in the next version of the DMP. Access to all data of the OMNIA repository will be done in an authenticated manner.
* **Vehicle position** received from the GPS device. This can be stored in the cloud in an anonymised format, according to the data model defined in Section 2.2 ("event position").

The final definition of the data and of the way and place they will be logged will be given in _D5.4 - Test sites set-up and experimental technical validation plan_, _D5.5 - Updated experimental technical validation plan and results – final report_, _D6.1 - Initial report on Pilot framework and plans_ and _D6.2 - Final report on Pilot framework and plans_, and will also be presented in the next version of the DMP (deliverable D2.3).
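As an illustration of the sensor data above, a single reading pushed to the cloud might be sketched as follows. The field names are hypothetical, chosen only for this sketch; the normative schema is the data model of Section 2.2:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class RoadSensorReading:
    """Hypothetical road-sensor record; field names are illustrative only,
    not the normative SAFE STRIP data model of Section 2.2."""
    sensor_id: str    # pseudonymous identifier of the road strip sensor
    kind: str         # e.g. "humidity", "temperature", "oil_detection"
    value: float      # measured quantity
    unit: str         # unit of measurement
    timestamp: float  # UNIX epoch seconds at sampling time

reading = RoadSensorReading(sensor_id="strip-042", kind="temperature",
                            value=3.5, unit="degC", timestamp=time.time())

# Serialised payload as it might be pushed to the cloud over LTE
payload = json.dumps(asdict(reading))
```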
All test sites participating in the technical and/or user-based evaluation of the system are obliged to follow the common plans, in which the experiments will be defined in detail. Specifically, the validation plans, starting from the Key Performance Indicators that will be set, will define the number of scenarios and test cases, the duration of tests and test runs, the number of situations per specific event, the number of test vehicles, the variation in users and the variation in situations (weather, traffic, traffic context, etc.), together with the validation approaches and data collection tools (questionnaires, user survey protocols, logging mechanisms on different ends – e.g., the vehicle, the mobile phone, the cloud, or other) that will be used to store all types of data needed in an appropriate and consistent manner and format (in order to allow aggregated and cross-site post-processing). Additional requirements related to ORDP are defined in Section 4 to guarantee that the collected data will be provided in compliance with the European Commission Guidelines1 on Data Management in Horizon 2020.

**2.6 Data Flow in SAFE STRIP Test Environment**

The test data management process carried out by activity A3.4 will define the architecture that enables the implementation of the data collection processes in all SAFE STRIP modules. Data generated during SAFE STRIP system operation will be stored in the long term in a cloud repository, as data storage at the local infrastructure will be volatile due to the limited capabilities of those devices and their low energy requirements. However, interconnection is foreseen with the OMNIA platform provided by SWARCO, which is also built upon a cloud architecture. OMNIA can be seen as a gateway to TMC data; hence access to this data will be performed in an authenticated manner via the OMNIA REST web API.
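Authenticated access to TMC data might be sketched as follows. Both the base URL and the bearer-token scheme are assumptions for illustration; the actual OMNIA Web API endpoints and authentication mechanism are not specified in this document, and the request here is only prepared, never sent:

```python
import urllib.request

# Hypothetical base URL and token (placeholders, not the real OMNIA API).
OMNIA_BASE = "https://omnia.example.org/api"
ACCESS_TOKEN = "<access-token>"

def tmc_request(path: str) -> urllib.request.Request:
    """Prepare (but do not send) an authenticated GET request for TMC data."""
    req = urllib.request.Request(f"{OMNIA_BASE}/{path}")
    req.add_header("Authorization", f"Bearer {ACCESS_TOKEN}")
    return req

# Example: request object for retrieving TMC messages
req = tmc_request("tmc/messages")
```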
It is not yet clear whether the OMNIA platform will provide all required resources for hosting the SAFE STRIP cloud or only part of them, as at the time of this writing the system architecture (activity A2.1) is not yet finished. The test data management process will focus on services that will be deployed on the Pilot Site Test Servers and that will comprise the central SAFE STRIP cloud. Special care will be taken so that for each specific pilot site test a dedicated VM will collect and store all the data created by the pilot site. Figure 3 shows the data flow in the SAFE STRIP test environment. Finally, the aggregated and validated data will be fed into the Decision Support System for posting the warning messages to the drivers.

(Figure 3 components: sensor data (road, parking slots, etc.); pilot site specific data sources (mobile phone, static data, user questionnaires); pilot site VM; OMNIA cloud (TMC data); SAFE STRIP cloud with data aggregation, data import/export, data enrichment and data fusion; Decision Support System; user feedback.)

3. **FAIR Data Principles**

**3.1 Making data findable, including provisions for metadata**

According to the principle of making data discoverable, all data to be made openly available in SAFE STRIP will have rich metadata. The foreseen data quality assurance process of the project will take special care to prevent storage of data for open access unless adequate metadata is provided. All data in SAFE STRIP will be encoded in adherence to a set of standards (e.g. ETSI TS 102 894-2, etc.) that are mentioned in Section 3.3. Adherence to those standards will ensure that the appropriate metadata will be provided, making data visible in a searchable context based on the semantics of data.
The use of persistent and unique identifiers in the form of Digital Object Identifiers (DOIs) will also be supported in order to facilitate data search and discovery. The naming conventions used for SAFE STRIP data will follow the DE introduced in the previous section and the connections among them (see e.g. Figures 1-4), where the **Cause Code** (TPEG2-TEC (CEN ISO/TS 21219-15)) corresponds to the **Direct Cause Code** of **DE_CauseCodeType (DataType_10)** and the **Sub Cause Code** (TPEG2-TEC (CEN ISO/TS 21219-15)) corresponds to the **Sub Cause Code** of the following DE:

* **DE_AdverseWeatherCondition – AdhesionSubCauseCode (DataType_4)**
* **DE_AdverseWeatherCondition – ExtremeWeatherConditionSubCauseCode (DataType_5)**
* **DE_AdverseWeatherCondition – PrecipitationSubCauseCode (DataType_6)**
* **DE_AdverseWeatherCondition – VisibilitySubCauseCode (DataType_7)**
* **DE_HazardousLocation – SurfaceConditionSubCauseCode (DataType_33)**
* **DE_RoadworksSubCauseCode (DataType_66)**

The aforementioned standards (DATEX II, TPEG) and the metadata they provide will be applied in order to make data searchable. For enabling keyword-search-based data discovery, off-the-shelf solutions will be deployed to allow easy and fast search (see also Section 3.2), e.g. for keywords such as "slippery road", "weather conditions", "visibility" and "works".

**3.1.1 Data documentation**

In addition to the metadata inherited from the aforementioned standards, further documentation and metadata will be provided where applicable. The initial documentation details that will be included for each service are shown in the template below.
Table 4: Data documentation template

<table> <tr> <th> **Basic** </th> <th> **Service 1** </th> <th> **Service 2** </th> <th> **Service n** </th> <th> **Advanced** </th> </tr> <tr> <td> Data name </td> <td> </td> <td> </td> <td> </td> <td> Definition of variables </td> </tr> <tr> <td> Who created/contributed to data </td> <td> </td> <td> </td> <td> </td> <td> Vocabularies </td> </tr> <tr> <td> Date created </td> <td> </td> <td> </td> <td> </td> <td> Units of measurement </td> </tr> <tr> <td> Date modified </td> <td> </td> <td> </td> <td> </td> <td> File format and type </td> </tr> <tr> <td> Conditions </td> <td> </td> <td> </td> <td> </td> <td> Methods used/assumptions made/analytical procedural info; description </td> </tr> </table>

The role of this template is to accompany any data to be accessed and used by others than the individuals/organizations who collected the data. The details that define and create the specific dataset profile can be:

_Basic:_ details defining the origin, the type, the creation dates and the way in which the data was created.

_Advanced:_ definitions of variables and the methods used and applied in order to gather and capture data based on common standards (incl. any procedural steps taken). A detailed technical specification of the data might be included, such as the variables that define the specific dataset, the units of measurement and the assumptions applied for the data collection (e.g. zero might mean no activity recorded, therefore the patient is not active). The format and file types used for collecting and storing the data (and its size) can be of help if external users show interest in re-using a dataset (i.e.
checking compatibility with the s/w they use in order to aggregate or analyse data, or to utilise import/export functionalities).

**3.2 Making data openly accessible**

All data described in Section 1 will be made available with open access, with the exception of those datasets that will be assessed by the project consortium as being subject to IPR restrictions, e.g. proprietary data owned by consortium beneficiaries or third legal parties (e.g. suppliers), in accordance with the policies outlined in DoA Section 5, the Ethics requirement deliverables of the project (D9.1, D9.2), as well as the project CA, especially concerning data that will be generated or made available as background assets by one or more beneficiaries. In case restrictions such as the aforementioned ones occur, access will be granted only to authorised users.

All datasets will be available for sharing and re-use via the management/exploitation portals of the project. These portals will be the common means for exchanging data either among the partners of the project or between the consortium and third parties. The specifications of how and which of the portals will be used in the context of data sharing and re-use will be defined in the next months (up to M9) by the consortium and will be included in the final version of the DMP. These specifications will also cover any new features that need to be added to the current portal infrastructure to deal with the different data manipulation needs.

All datasets will be stored in a cloud-based repository, e.g. ZENODO ( _https://zenodo.org/_ ), a free service developed by CERN under the EU FP7 project OpenAIREplus (grant agreement no. 283595). The repository shall also include information regarding the software, tools and instruments that were used by the dataset creator(s) so that secondary data users can access and then validate the results.
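For illustration, creating a deposition programmatically through Zenodo's public REST API might look like the sketch below. The access token is a placeholder, the metadata values are examples, and the request is only prepared here, not sent:

```python
import json
import urllib.request

# Placeholder token: a real Zenodo personal access token is required in practice.
ACCESS_TOKEN = "<zenodo-token>"

# Minimal deposition metadata using Zenodo's documented metadata fields.
metadata = {
    "metadata": {
        "title": "SAFE STRIP pilot dataset (example)",
        "upload_type": "dataset",
        "description": "Illustrative deposition metadata only.",
        "creators": [{"name": "SAFE STRIP Consortium"}],
    }
}

# POST request against Zenodo's deposition endpoint; prepared, not sent.
req = urllib.request.Request(
    url=f"https://zenodo.org/api/deposit/depositions?access_token={ACCESS_TOKEN}",
    data=json.dumps(metadata).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```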
The SAFE STRIP data collection can be accessed in the ZENODO repository at an address similar to the following link: _https://zenodo.org/collection/ <<safestrip _ > >. The datasets in the cloud repository will also be linked to the management/exploitation portals of the project, and they will be assigned DOIs in order for 3rd parties to have access to them. Through the use of the above repository (or a similar one, e.g. ownCloud), we will ensure that the most up-to-date security features are applied out of the box, e.g. firewall, password protection, encryption, etc. If any IPR issues exist on sharing, they will be handled accordingly. Moreover, all necessary material that supplements each dataset (software for parsing the datasets, standards documents, etc.) will be provided by the consortium via the management/exploitation portals.

**3.3 Making data interoperable**

Data interoperability is foreseen in the project through conformance to standards. Specifically, SAFE STRIP data will conform to the following standards:

* ETSI TS 102 894-2
* ETSI EN 302 637-2
* ETSI EN 302 637-3
* DATEX II, TPEG2-TEC and TMC Events for safety messages

**3.4 Increase data re-use (through clarifying licenses)**

SAFE STRIP participates in the Pilot on Open Research Data launched by the European Commission along with the Horizon 2020 Programme. As such, all data produced by the project are to be published with open access. In particular, access will be given through the Creative Commons CC0 license for all datasets at this project stage, unless defined otherwise. All datasets will be maintained for the entire duration of the project as well as for 2 additional years after its conclusion. After the project ends, all datasets will be stored in a centralized facility in order to minimize maintenance costs. Datasets with acknowledged long-term value may be kept for a longer period of time.
The long-term value of a dataset will be decided according to the exploitation plan as well as its relation to a scientific publication. For the time period in which the data will be available for open access, no restrictions will be imposed on their access. In case IPR-protected or proprietary data will be generated or made available as background assets in the context of the project by one or more beneficiaries, access to this data will be treated in accordance with the rules and regulations foreseen in the project Consortium Agreement. For all such data, access will be granted only to authorised users.

All data to be made available as open data will undergo a quality assurance process. At a first level, each partner in charge follows specific procedures to assure the quality of the data and conformance to standards, as referred to in each dataset's description in Section 2. Those procedures regard calibration of the measurements as well as internal post-measurement review. At a higher level, quality will also be monitored periodically on the different versions of each dataset, as conservation experts of the consortium will review each update. Finally, additional peer reviews will take place in case of publication in journals.

**4 SAFE STRIP Open Strategy & Participation in the Open Research Data Pilot**

The SAFE STRIP project has agreed to participate in the Pilot on Open Research Data in Horizon 2020 and uses the specific Horizon 2020 guidelines associated with 'open' access to ensure that the project results provide the greatest impact possible. SAFE STRIP will ensure open access14 to all peer-reviewed scientific publications relating to its results and will provide access to the research data needed to validate the results presented in deposited scientific publications.
The following lists the minimum fields of metadata that should come with a SAFE STRIP project-generated scientific publication in the Library section of the project web site ( _http://safestrip.eu/publications-index-p/_ ), where a Publication Index has been anticipated:

* The terms: "European Union (EU)", "Horizon 2020"
* Name of the action (Research and Innovation Action)
* Acronym and grant number (SAFE STRIP, 723211)
* Publication date
* Publication type
* Publication place
* Publication name
* Publication authors
* Length of embargo period, if applicable
* Persistent identifier

Apart from the scientific publications index, SAFE STRIP will publish all its public deliverables and dissemination material on its web site (under "Library"). When referencing open access data, SAFE STRIP will include at a minimum the following statement demonstrating EU support (with the relevant information included in the repository metadata):

\- "This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 723211".

Finally, SAFE STRIP will target **Open Access journals**, when applicable, whereas it will also target Gold OA publications and Green OA wherever Gold OA is not possible. The target is to maximize the impact on scientific excellence through result publication in open access yet highly appreciated journals, without releasing any confidential information that could potentially violate the security nature of the project. The SAFE STRIP consortium will strive to make many of the collected datasets open access. When this is not the case, the data sharing section for that particular dataset will describe why access has been restricted (see also Section 3.2).
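The minimum metadata fields listed above could be captured in a simple record such as the sketch below. All values are placeholders except the project facts stated in this document (action type, acronym and grant number); the key names are our own, not a prescribed schema:

```python
# Illustrative publication-metadata record covering the minimum fields above.
publication_metadata = {
    "funding_terms": ["European Union (EU)", "Horizon 2020"],
    "action": "Research and Innovation Action",
    "acronym": "SAFE STRIP",
    "grant_number": 723211,
    "publication_date": "YYYY-MM-DD",  # placeholder
    "publication_type": "",            # e.g. journal article
    "publication_place": "",
    "publication_name": "",
    "authors": [],
    "embargo_period": None,            # length of embargo period, if applicable
    "persistent_identifier": "",       # e.g. a DOI
}
```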
As regards the specific repositories available to the SAFE STRIP consortium, numerous project partners maintain institutional repositories, which will be listed in the next DMP version (deliverable D2.3), where project scientific publications and, in some instances, research data will be deposited. The use of a specific repository will depend primarily on the creator of the publication and on the data in question. Some other project partners do not operate publicly accessible institutional repositories. When depositing scientific publications they shall use either a domain-specific repository or the EU-recommended service OpenAIRE (http://www.openaire.eu) as an initial step in finding resources to determine relevant repositories. Project research data shall be deposited in an online data repository (see also Section 3.2) such as ZENODO. In summary, as a baseline, SAFE STRIP partners shall deposit:

* _Scientific publications_ – on their respective institute repositories, in addition (when relevant) to the SAFE STRIP online data repository
* _Research data_ – to the SAFE STRIP online data collection (when possible)
* _Other project output files_ – to the SAFE STRIP online data collection (when relevant)

This version of the DMP does not include the actual metadata about the research data being produced in the SAFE STRIP project. Details about technical means and services for building repositories and accessing this metadata will be provided in the next version of the DMP (deliverable D3.2). A template document is defined in Section 3.1.1 and will be used by project partners to provide all requested information.

5. **Allocation of Resources**

In order to face the data management challenges efficiently, all SAFE STRIP partners have to respect the policies set out in this DMP, and datasets have to be created, managed and stored appropriately.
The Data Controller role within SAFE STRIP will be undertaken by Dionysis Kehagias (CERTH/ITI), who will directly report to the SAFE STRIP Ethics Board and the Project Management Team (PMT). The Data Controller acts as the point of contact for data protection issues and will coordinate the actions required to liaise between the different beneficiaries and their affiliates, as well as their respective Data Protection agencies, in order to ensure that data collection and processing within the scope of SAFE STRIP will be carried out according to EU and national legislation. Regarding the ORDP, the Data Controller must ensure that data are shared and easily available. Each data producer and WP leader is responsible for the integrity and compatibility of its data during the project lifetime. The data producer is responsible for sharing its datasets through open access repositories and is in charge of providing the latest version. In summary, the Data Manager will coordinate the actions related to data management and will be responsible for the actual implementation of the successive DMP versions and for compliance with the Open Research Data Pilot guidelines. As the SAFE STRIP open data will be hosted by an open, free-of-charge platform (e.g. Zenodo), no additional costs will be required for hosting the data.

6. **Ethical and Security Aspects**

1. **Ethical issues**

Regarding ethical issues, the deliverables D9.1 and D9.2 detail all the principles, processes and measures that SAFE STRIP will use to comply with the H2020 Ethics requirements. In addition, security and privacy requirements will be addressed in _D2.1: "System architecture, sensor specifications, fabrication, maintenance, installation and security requirements and Risk Assessment"_, due in M18.

2. **Data Protection and Security**

To avoid any data losses and to ensure fast and reliable access to the common data storage facility to be deployed in SAFE STRIP, appropriate redundancy mechanisms will be deployed (e.g.
RAID-1, RAID-5, etc.) for the online data repository. In addition to online data, an appropriate number of off-line copies will be maintained in hard disks or network-attached storage (NAS) owned by every partner in charge of their data. At the time of writing, the total volume of the generated data has not yet been estimated accurately. A separate checklist has been prepared and should be used by all data providers, not only during evaluations but also during technical testing and validation. It also applies to any data stored as a result of the regular checks and tests performed by the service administrators:

<table> <tr> <th> **Checklist**: * How will the raw data be stored and backed up during the research? * How will the processed data be stored and backed up during the research? * Which storage medium will you use for your storage and backup strategy? Network storage; personal storage media (e.g. external hard drives); cloud storage * Are backups made with sufficient frequency so that you can restore in the event of data loss? </th> </tr> </table>

Each site and service responsible should ensure that the research data is regularly backed up and stored in a secure and safe location. A common "rule-of-thumb" is to keep the data that you actually need in three different copies. It is advised that copies be stored in both local and remote storage units/locations.

**6.3 SAFE STRIP data privacy policy**

The SAFE STRIP system aims to be a cost-efficient safety and comfort provision cooperative system. It does so with due respect for privacy, as all required participants' physical, vehicle, driving performance and subjective data (e.g. driving background data) will be stored separately from identification information and be securely protected by relevant mechanisms.
Personal data on subjects will be used in strictly confidential terms and will be published only as integrated statistics (anonymously) and under no circumstances as individual data sets. All test data will be anonymized. Only one person per site (the relevant Pilot responsible) will have access to the relation between test participants’ code and identity, in order to administer the tests. People who will analyse data will receive anonymised and coded information. One month after the tests end, this reference will be deleted, thus safeguarding full anonymisation. The stored data will only refer to users’ age, gender, nationality and driving profile. No other identifier will be kept. Moreover, **stored data will by no means relate to a person’s health, religious, political and sexual beliefs and preferences.** In addition, SAFE STRIP pilot participants **will not receive any medication related to project research.** As such, anonymised data will be publicly available for other researchers, with no possibility of identifying participants. In addition, participants will be recorded only if they provide consent. To avoid risks related to the processing of personal data such as identity theft, discriminatory profiling or continuous surveillance, the principle of proportionality has to be respected. Data can be used only for the initial purpose for which they were collected. Anonymisation or pseudonymisation is a way to prevent violations of privacy and data protection rules. Processing has to be limited to what is truly necessary, and less intrusive means for realising the same end have to be considered.
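The coding scheme described above — a single pilot responsible holding the code-to-identity mapping, analysts seeing only coded records, and the mapping being deleted after the trial — could be sketched as follows. The class, method and field names are purely illustrative assumptions:

```python
import secrets

class PseudonymRegistry:
    """Minimal sketch of the coding scheme: the pilot responsible keeps the
    code-to-identity mapping, analysts see codes only, and the mapping is
    destroyed after the trial ends."""

    def __init__(self):
        # Held only by the pilot responsible, never shared with analysts.
        self._identity_by_code = {}

    def register(self, name: str) -> str:
        """Issue a random participant code and record the mapping."""
        code = "P-" + secrets.token_hex(4)
        self._identity_by_code[code] = name
        return code

    def delete_mapping(self):
        """Destroy the key table, turning coded data into anonymised data."""
        self._identity_by_code.clear()

registry = PseudonymRegistry()
code = registry.register("Jane Doe")
# What analysts receive: coded record, no direct identifiers.
record = {"participant": code, "age": 41, "gender": "F"}
registry.delete_mapping()  # one month after the tests end
assert code not in registry._identity_by_code
```

Once `delete_mapping` has run, the coded records can no longer be traced back to a person, which is the point at which pseudonymised data becomes effectively anonymised.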
All this information falls under the European legislation for the lawful processing of personal data: * Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). * Directive 95/46/EC of the European Parliament and of the Council (1995) on the protection of individuals with regard to the processing of personal data and on the free movement of such data. * Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC. * Directive 2002/58/EC of the European Parliament and of the Council, concerning the processing of personal data and the protection of privacy in the electronic communications sector. * Reform of the legislative framework for personal data protection (In January 2012, the European Commission proposed a reform of Directive 95/46/EC, which constituted until then the basic instrument for personal data protection, in the form of a global Regulation on data protection 2012/001 (COD), supplemented by Directive 2012/0010 (COD) concerning the processing of personal data in the area of police and judicial cooperation in criminal matters). * Art.29 Data Protection Working party: Working Document on Privacy on the Internet. Still, apart from the above, attention will be paid to national legislation, which has to be respected even if the users have given their consent for the processing of their personal data. As such, during the SAFE STRIP Pilot tests: 1.
Drivers/riders (but also other stakeholders) participating in the trials will give their names, address and contact phone, together with age, gender, nationality and, if any, functional problem type (not the medical term of the impairment), to a single person in each pilot site, to be stored in a protected local database (to contact them and arrange for the tests). The contact person will issue a single Participant ID for each of them. This person will not participate in the evaluation and will not know how each user behaved. 2. The names, address and contact phone will be kept in the database only for the duration of each trial (short-term trials: up to 1 week; long-term trials: up to 1 month). Such data will not be communicated to any other partner or even person in each pilot site. Once the test ends, they will be deleted. 3. Each month the anonymised data will be re-sorted randomly, to mix the participants’ order. 4. Since personal data will be deleted, no follow-up studies with the same people will be feasible. 5. Personal data will be used in strictly confidential terms and will be published as statistics (anonymously). 6. In pilots, all necessary safety precautions will be taken (i.e. use of professional drivers or drivers with a valid driving license under normal traffic conditions, and, in case of need, using a test car with double pedals and a driving instructor as co-driver). **7 Conclusions – Next Steps** The first data management analysis contained in this report allows us to anticipate the procedures and infrastructures to be implemented in SAFE STRIP to efficiently manage the generated and/or collected data.
This document aims to present the general principles of the DMP and to summarize the current state of every identified dataset, containing information regarding brief descriptions, standards and metadata specifications, activities and responsibilities of each partner, access, data sharing and reuse policies and archiving/preservation strategies over the respective datasets from each related partner. Finally, open issues have been recognized, i.e., the role and the functionality of the management/exploitation portals as a common means for data sharing have to be specified, as well as the metadata specifications which cannot be explicitly defined yet. These open issues are taken into serious consideration and will be addressed in the next period. The next version of the data management plan (D2.3), which is scheduled for M29, will include more detailed dataset descriptions according to the process defined in this version. By M29, WP1 will have been completed, and WP2 and WP3 will be approaching their end (both end at M33); therefore, the main input regarding the data types from sensors and the end-user applications and services will have been comprehensively defined. Hence, an enhanced and possibly revised version of the SAFE STRIP data models that have been introduced in this initial version of the DMP will be provided.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1162_LessThanWagonLoad_723274.md
# Executive summary The aim of this document is to provide the procedure to be adopted by the project partners to collect and utilize the data from the customers involved in the implementation of the business models under WP08 and research packages WP1-7 and WP9-11. The adopted procedure will follow the guidelines developed by the European Commission in the document “Guidelines on Data Management in Horizon 2020” 1 . This Data Management Plan (DMP) is also in line with the methodology of the DMP online tool developed by the Digital Curation Centre 2 . It has to be highlighted that a DMP is mandatory for the projects participating in the Open Research Data Pilot and for the “research and innovation actions” or “innovation actions”. The project is a coordination and support action participating in the Open Research Data Pilot. Besides this, it has to be noted that this DMP is intended to be a “living document in which information will be made available on a finer level of granularity through updates as the implementation of the project progresses” 3 . This Draft of the Data Management Plan, submitted at M6 (31st October 2016), describes a plan for data collection and utilization; the Data Management Plan will be updated once the data is collected and will then state how it should be utilized. The adopted procedure includes the following steps: 1) Data set description 2) Standard and metadata 3) Data sharing 4) Archiving and preservation (including storage and backup) and 5) Ethical aspects. The project partners agree on the procedure described in this document and on providing further information on the data to be collected and utilized in the deliverable “Data Management Plan”. # Introduction The main objective of the project is to develop a smart specialized logistics cluster for the chemical industry in the Port of Antwerp in order to shift transport volumes from road to rail freight.
The developed solutions and concepts will be reviewed by the Advisory Board as well as the General Assembly and potential customers. This work will be performed within WP10 and WP12, specifically within “Task 10.2”, “Task 10.3” and “Task 12.1”. This document first describes the data set that will be collected, then describes any applicable standards and metadata, and finally deals with data sharing, archiving and preservation and ethical aspects. <table> <tr> <th> **1.** </th> <th> **Data Set Description** </th> </tr> </table> Definition of Data Set Description (according to the H2020 guidelines): _“Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse”._ Data will be generated or collected in the frame of practically every WP, from market information to product specifications. The first three WPs are pure R&D packages. Within these packages the consortium wants to develop a working LessThanWagonLoad concept. First, a viable LWL concept is created, preferably together with a potential customer. Simultaneously, the consortium is going to develop an Automatic Wagon Loading System (AWLS), which has to enable the developed LWL concept and make it profitable. The data generated during these packages are commercially sensitive and will therefore be handled as confidential during the 3 years of the project and 4 years after the end of the project. Within WP9 a validation of this system will be researched at the Italian terminal operator, ISC. The data will be handled with the same care. For WP4, 5 and 6 some market research is necessary to really pinpoint the issues in the chemical industry and translate these problems or necessities into working solutions.
The market research for these WPs will be executed together and coordinated by Essenscia, the Belgian federation of chemical companies. Essenscia acts as a neutral partner and will make the data anonymous if the responses are commercially sensitive. The market research will be handled as confidential. Nevertheless, the preferences and requirements which will be discovered during the research will be used to create public reports, but with anonymous and aggregated data. The dissemination of the market research and the different solutions will be the topics of a concluding conference organized for interested parties within the sector. Within WP7 the main goal is to find possible destinations for viable mixed train products. The process to select these destinations is twofold. First of all, a top-down approach is used to map the potential. In order to construct this analysis, Eurostat data is used and drilled down to NUTS3 level. The outcome of this exercise will be reviewed by Lineas’ market experts. The top-down approach will be used as a long/short list from which the high-potential destinations will be selected. Together with the Business Development department, the outcome will be translated into concrete business cases with potential roll-out. Due to the commercial sensitivity of the data, these regions will be handled as confidential. Work package 8 summarizes everything researched in the preceding packages, with the main goal of creating a coherent business case for every element. These business cases will consist of costs and potential benefits which are only for internal use. WP11 calculates the environmental benefit of all preceding WPs and solutions based on the potential modal shift due to the enhanced product offering. These reports will be made public to promote rail freight transport in Europe.
**At this stage of the project it is still not totally clear if there could be any additional type of data involved or collected.** This will become clear only once the project partners start working on the specific WP. This will mainly be qualitative data, but in WP07, for example, it will also include some quantitative data. Besides this, here are some additional considerations that will be taken into account: * The data will originate from the customers and will be collected by partners of the consortium * This data will be used directly for analysis * The data will underpin scientific publications published by the partners. However this will be done only on the overall aggregated level and not the individual customer level. The data, and thereby the outcome of these work packages, can be useful for every intermodal operator, terminal operator or any other player within the supply chain. Those companies are competitors of Lineas and Lineas Intermodal. For this reason the deliverables as well as the data will be confidential during the project and four years after the end date of the project. To the best of our knowledge, similar data has not been generated before for research purposes in this specific setting. Data will be collected from specific research participants. The participants will be recruited by the consortium, using approaches that fit best with their customers and in ways that maximize engagement. One of the methods that could be used is described below: The Consortium will send letters and survey questionnaires to target customers to generate and gauge interest in their participation in market research or the newly developed business model. Based on the survey feedback, the Consortium will identify the customers for interaction. The Consortium will invite the identified customers to a short presentation and workshop to explain the project background, the underlying economic assumptions and the potential benefits.
So the questionnaire is used as a starting point for further interaction. Those who wish to participate in the project will give permission for their data to be used, specifying what data they are happy to share and how it can be used. Finally, it is important to clarify that all data will be anonymised or aggregated with other data in order to make it impossible to trace it back to the client. <table> <tr> <th> **2.** </th> <th> **Standards and metadata** </th> </tr> </table> Definition of Standards and metadata (according to the H2020 guidelines): _“Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created”._ The project focuses on the development of a smart specialized logistics cluster for the chemical industry in the Port of Antwerp in order to shift transport volumes from road to rail freight. For this purpose, market information is needed from relevant players inside the current chemical cluster. Accordingly, the data will be described and categorized with appropriate metadata. Metadata is _“structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource. Metadata is often called data about data or information about information”_ 4 _._ This means that during the project, the collected data will also be assigned appropriate metadata, which will make it easier to manage the data itself. However, at this stage of the project we cannot define what metadata will be collected, because, for example concerning WP2 and WP3, the Automated Wagon Loading System does not yet exist, to the best of our knowledge. Also, the metadata might differ across the countries due to different data collection regulations. Nevertheless, the data created or translated by the consortium will be FAIR (Findable, Accessible, Interoperable and Re-usable) at all times.
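As an illustration of “data about data”, a metadata record for one of the survey datasets might look like the sketch below. The field names follow common Dublin Core practice and the values are invented examples, not a project-mandated schema:

```python
# Hypothetical Dublin Core-style metadata record for a survey dataset.
survey_metadata = {
    "title": "Chemical cluster modal-shift survey responses",
    "creator": "Essenscia (aggregating, anonymising partner)",
    "date": "2016-10-31",
    "format": "text/csv",
    "rights": "Confidential during the project and 4 years after its end",
    "description": "Aggregated, anonymised answers on rail freight preferences",
    "identifier": "wp4-survey-v1",
}

def is_findable(record: dict) -> bool:
    """A minimal FAIR check: a record counts as findable here only if it
    carries both an identifier and a title."""
    return bool(record.get("identifier")) and bool(record.get("title"))

assert is_findable(survey_metadata)
```

Attaching such a record to each dataset is what makes it retrievable and manageable later, even when the metadata fields themselves are only fixed once each WP starts.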
It should be mentioned, though, that the project data and deliverables should be considered as ‘photos’: the photo of today could be fairly different from the photo taken within a year or so. <table> <tr> <th> **3.** </th> <th> **Data sharing** </th> </tr> </table> Definition of Data sharing (according to the H2020 guidelines): _“Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. Ethical, rules of personal data, intellectual property, commercial, privacy- related, security- related)”._ The collected data will be a mix of confidential and public information. Its usage will be subject to the customers’ and consortium partners’ consent. This will be asked with a survey questionnaire that will be sent out to customers, i.e. potential research participants. The survey questionnaires will include an Informed Consent Form (ICF) which needs to be signed by the research participants potentially interested in using the data. The signature of the ICF will be a pre-requisite for the development of the customized implementation plan. The ICF will include a short description of the project mentioning that data collected from the customers will be used in the framework of the project. It will be discussed with the client what data, and in what form, can be shared with the project consortium.
The consortium consists of partners that operate in countries which have all adopted the European legislation on data and privacy protection: _“Client data is always treated confidentially and all data management will in particular be in line with privacy regulation of the European Union. It will nevertheless be possible to use the data within the project.”_ The format of the agreements between the customers and project partners will be defined during the activities, when they occur. However, the data that will be shared for the purpose of implementing the work packages will be subject to individual and commercial negotiations with the customers. In general, data privacy has not been identified as a major barrier within the consortium. Specifically, the Consortium partners were asked to assess how data and privacy protection are defined in their countries. They were also asked if the approach to data management is clearly regulated and if this presents a barrier for the project’s activities. <table> <tr> <th> **4.** </th> <th> **Archiving and preservation** </th> </tr> </table> Definition of archiving and preservation (according to the H2020 guidelines): _“Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered”._ The customer data is deleted after the duration of the project. As all customers are directly associated with one consortium partner with which a bilateral contract for data handling is signed in addition to the Informed Consent Form (ICF), this bilateral agreement might allow data storage and use beyond the project duration. The data created by the consortium partners (not customer-bound) in order to develop the AWLS, analyze the environmental impact, etc. will be saved for at least four years after the end date of the project.
Afterwards, an evaluation takes place to see if further archiving is necessary. If needed, Non-Disclosure Agreements (NDAs) will be signed with the parties that use the data for further work in the project (for instance by the research partners). To safely store the information in a manner that keeps it accessible to the consortium partners, a dedicated portal is created via the Targetprocess tool. Every partner has been given access to this portal and is responsible for his or her own password. The table below provides a summary of the measures for data storing. _Deliverable: D12.5 – Data management plan_ Table 3: Measures for data storing <table> <tr> <th> </th> <th> **Storage of data – description of method** </th> <th> **Duration of storage** </th> <th> **Comments** </th> </tr> <tr> <td> **Customer data** </td> <td> All clients’ data should be kept confidential and anonymized **-** Paper records such as questionnaires are stored in a locked filing cabinet **-** Digital data is stored in a specific folder with restricted access. **-** This set of data can only be consulted by members who need this information. </td> <td> Project duration </td> <td> The data can be shared within the consortium, however it must be kept confidential by the partners. The whole data set will not be publicly available and any publication or dissemination will obscure the client’s individual data. </td> </tr> <tr> <td> **AWLS – WP 2,3, 11** </td> <td> The data related to the project is going to be stored on our servers and Targetprocess for a potential future patent. </td> <td> Project duration and longer if necessary </td> <td> Maybe needed for a potential patent </td> </tr> <tr> <td> **Market data – WP1, WP4-7, 9- 10** </td> <td> The plan is to store data in xls or csv formats or in a database.
</td> <td> Project duration and longer if necessary </td> <td> No further comment </td> </tr> <tr> <td> **Business cases – WP8** </td> <td> The data related to the project is going to be stored on our servers and Targetprocess. </td> <td> Project duration + 3Y </td> <td> No further comment </td> </tr> </table> <table> <tr> <th> **5.** </th> <th> **Ethical aspects** </th> </tr> </table> Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and the ethics chapter in the Description of the Action (DoA). Is informed consent for data sharing and long-term preservation included in questionnaires dealing with personal data? The consortium shall carry out the action in compliance with: (a) Ethical principles (including the highest standards of research integrity) and (b) Applicable international, EU and national law. The consortium shall respect the highest standards of research integrity — as set out, for instance, in the European Code of Conduct for Research Integrity. This implies notably compliance with the following essential principles: * honesty; * reliability; * objectivity; * impartiality; * open communication; * duty of care; * fairness and * responsibility for future science generations This means that the consortium shall ensure that persons carrying out research tasks: * present their research goals and intentions in an honest and transparent manner; * design their research carefully and conduct it in a reliable fashion, taking its impact on society into account; * use techniques and methodologies (including for data collection and management) that are appropriate for the field(s) concerned; * exercise due care for the subjects of research — be they human beings, animals, the environment or cultural objects; * ensure objectivity, accuracy and impartiality when disseminating the results; * allow —
in addition to the open access obligations under Article 29.3 of the Grant Agreement as much as possible and taking into account the legitimate interest of the beneficiaries — access to research data, in order to enable research to be reproduced; * make the necessary references to their work and that of other researchers; * refrain from practicing any form of plagiarism, data falsification or fabrication; * avoid double funding, conflicts of interest and misrepresentation of credentials or other research misconduct. Activities raising ethical issues must comply with the ‘ethics requirements’ set out as deliverables in Annex 1 of the Grant Agreement. The documents must be kept on file and be submitted upon request by the coordinator to the Agency (see Article 52 of the Grant Agreement). If they are not in English, they must be submitted together with an English summary, which shows that the action tasks in question are covered and includes the conclusions of the committee or authority concerned (if available)
1163_MUV_723521.md
Executive Summary 5 Introduction 6 Legal framework 7 Grant agreement and Consortium agreement 7 Personal data protection in MUV: EU regulations 7 Open Access in H2020 8 MUV’s Data management plan (DMP) 8 Data summary 8 Data localization and metadata 12 Making data openly accessible 14 Making data interoperable 17 Responsibilities for data management 17 Data re-use 19 Data security 20 Conclusions 20 Annex I data sources 21 ### Abbreviations <table> <tr> <th> CA </th> <th> Consortium Agreement </th> <th> EURAXES </th> <th> European Charter for Researchers and Code of Conduct </th> </tr> <tr> <td> CSV </td> <td> Comma-separated Values </td> <td> ECGA </td> <td> European Commission Grant Agreement </td> </tr> <tr> <td> DFS </td> <td> Distributed File System </td> <td> GDPR </td> <td> European General Data Protection Regulation </td> </tr> <tr> <td> DMP </td> <td> Data Management Plan </td> <td> MUV </td> <td> Mobility Urban Values </td> </tr> <tr> <td> IPR </td> <td> Intellectual Property Rights </td> <td> SQL </td> <td> Structured Query Language </td> </tr> <tr> <td> EB </td> <td> Ethics Board </td> <td> NO-SQL </td> <td> Not Only Structured Query Language </td> </tr> <tr> <td> EC </td> <td> European Commission </td> <td> WP </td> <td> Work Package </td> </tr> </table> # Executive Summary This deliverable contains the first version of the Data Management Plan (M6). It describes the nature of the data that will be collected and used during the MUV project. The data has been divided into four overarching data categories: * **Participants’ data** . MUV is a research project that will involve real users during some of its research activities. Our research involves the collection of personal data, first, for requirement elicitation, second, for valuation purposes and, third, to improve the quality of experience within a real-time feedback loop. * **Data for expert systems** .
As stated in D6.1, MUV envisions using several distinct data sources to feed the recommendation engine, one of the central components of the MUV platform. This data will be used by the application to take decisions in order to offer and promote more sustainable mobility decisions to the users. * **Data for evaluation** . Also identified in D6.1, different data sources have been selected to measure the impact of the MUV project. These datasets are either from the data generated by the application, from the data collected for the expert systems or from external data sources (like national statistics institutes). These datasets will be the base for the publication of research papers. * **Code** . Files containing the code of the platform are also considered output data of the project. These datasets are designed for compilation/interpretation by a compiler/interpreter of the specific language. This category also comprises the configuration files and scripts needed to run any needed tool. It also describes the governance model of this data, which is based on several aspects: i) the current legislation, ii) the ethics related to working with data, iii) the technical aspects regarding data storage and processing and iv) the responsible parties for collecting and accessing the data. # Introduction This deliverable reports the management procedures that will be followed when dealing with the data of MUV, including generated datasets, gathered datasets, research data and code. It has been designed to be compliant with the Guidelines on FAIR Data Management in Horizon 2020 1 and reviewed with the Checklist for a Data Management Plan v4.0 2 . Due to the evolving nature of MUV and its methodology, based on co-creation design and AGILE development, the datasets of MUV will be collected (or even created) during and after the project lifetime. Moreover, MUV is a project with many different datasets.
Hence, MUV has envisioned several deliverables to report different aspects of data. Specifically: **D6.1 Data Preparation (M6):** This deliverable has been already submitted and presents in detail the different datasets that will be processed and valorised in MUV and the techniques that will be used to prepare them for further analysis. **D1.4 Data Management Plan (M7):** This deliverable deals with the management procedures for all the data generated or used in MUV, including the code and the research data (publications). **D6.2 Detailed data management and privacy plan (M10):** This deliverable will specify the procedures for the privacy and security of the data collected and generated in MUV (including anonymization). **D1.4 Data Management Plan - Update (M19, M31):** Updates of the DMP to adapt to the evolving nature of MUV. **D10.1 H - Requirement No. 1 (M10):** Details on the procedures and criteria that will be used to identify / recruit research participants, including how to ethically manage this data. **D10.2 POPD - Requirement No. 2 (M10):** Confirmation by all the competent Data Protection Officers from the National Data Protection Offices, including the files declared with personal data and how they will be managed according to the legislation and with ethical considerations. **D10.3 GEN - Requirements No. 3 (M4):** The Ethics report including how to ethically treat the data generated in MUV. In order to avoid information repetition, but at the same time produce a self-contained deliverable, this D1.4 will summarize the identified categories of datasets envisioned in MUV and will include all the tables describing these datasets in the Annex. More information regarding specific procedures on how to prepare the data for data quality and for data re-use will be found in D6.1. Information about the procedures to secure the data and assure its privacy will be found in D6.2.
Any ethical considerations will be found in the deliverables of WP10, specifically D10.1, D10.2 and D10.3. # Legal framework The objective of this section is to describe the legal framework of the project linked with the data policy. In this sense, this section reviews the IPR framework established by the ECGA, the Consortium Agreement (CA) and the overarching European legislation regarding data protection. ## Grant agreement and Consortium agreement MUV has not participated in the Research Data Pilot (article 29.3 of the ECGA); hence the legal framework on data management derives from the articles of the ECGA and the CA. Of these, the main points to take into consideration are the following: · The ECGA defines in article 26 the ownership of the project results. These results are any tangible or intangible output of the actions, including data and information, and are owned by the institution that generates them. In case the results are generated by two or more institutions, the rules defined in article 26.2 of the ECGA and article 8.2 of MUV’s CA must be applied. · Unless it goes against the legitimate interest of the beneficiaries, the results must be disseminated by disclosing them. This means that the beneficiaries have the right to protect the results in case the institution plans to protect or exploit them. · As defined in article 8.4 of the CA, notice must be given to the other parties 30 calendar days prior to any publication. Objections must be raised in writing within 15 calendar days after the receipt of the notice. The publication will be permitted if no objection is made within this time limit. ## Personal data protection in MUV: EU regulations The project is committed to observing the EU Data Protection Directive 95/46/EC when dealing with data and the forthcoming European General Data Protection Regulation. MUV has a whole Work Package dedicated to Ethical Issues (WP10).
This WP will deal with and manage all ethical issues that arise during the project lifetime, including those related to data gathering, publications and datasets. MUV also leverages an Ethics Board (EB), an external group of experts with special competences in ethics, protection of personal data and security. Upon request of the EC, the consortium established the EB, as it can provide valuable knowledge, advice and guidance on the work planned in the project and the data related to it. IPR and exploitation rights (including data) are extensively covered in D1.3 “Plan for managing Knowledge and Intellectual Property” (M7). # Open Access in H2020 The workflow related to publications is clearly stated in the CA (article 8), and further developed in D1.3. The same instructions are provided in MUV’s Project Management Handbook. MUV will attempt to open access to datasets with multiple purposes. Most of these datasets can be reused, but the terms of use will be properly specified. It is expected that during the project certain contents will be generated that can be of interest to researchers. With this purpose, the project will identify which datasets can be made public, and which can only be re-used by project members. This will be treated on a case-by-case basis. Research data will be published following the same approach as for publications. This means that data will be accessible within 6 months after publication. # MUV’s Data management plan (DMP) ## Data summary **Purpose of the data collection** The MUV project envisions four main categories of data to be collected during the project lifetime. * **Participants’ data**. MUV is a research project that will involve real users during some of its research activities. Our research involves the collection of personal data: first, for requirement elicitation; second, for valuation purposes; and third, to improve the quality of experience within a real-time feedback loop. * **Data for expert systems**. 
As stated in D6.1, MUV envisions using several distinct data sources to feed the recommendation engine, one of the central components of the MUV platform. This data will be used by the application to take decisions in order to offer and promote more sustainable mobility decisions to the users. * **Data for evaluation**. Also identified in D6.1, different data sources have been selected to measure the impact of the MUV project. These datasets come either from the data generated by the application, from the data collected for the expert systems or from external data sources (like national statistics institutes). These datasets will be the base for the publication of research papers. * **Code**. Files containing the code of the platform are also considered output data of the project. These files are designed for compilation/interpretation by a compiler/interpreter of the specific language. This category also comprises the configuration files and scripts needed to run any required tool. **Relation to the objectives of the project** MUV’s general objective is to “ _lever behavior change in local communities in an entirely novel approach to reducing urban traffic. Rather than focus on infrastructure, it raises citizen awareness on the quality of the urban environment to promote a shift towards more sustainable and healthy mobility choices. MUV solutions will be open, co-created with a strong learning community of users and stakeholders, and piloted in a set of diverse urban neighborhoods spread across Europe: Amsterdam (NL), Barcelona (ES), Fundao (PT), Ghent (BE), Helsinki (FI), Palermo (IT). 
In order to ensure the effectiveness of the mobility solutions and really match the communities and stakeholders’ needs, all the project’s main activities (co-creation sessions, software development and impacts’ analysis) will iterate three times during the piloting phase._ _Real impact is measured with an evidence-based approach to maximize economic viability and Social Return on Investment (SROI) and drive replicability and the scaling up and out of MUV solutions._ ” From the objective definition, the need for the three envisioned categories of data arises naturally. Firstly, **participants’ data** to be able to create the communities that will allow MUV solutions to be “ _open, co-created with a strong learning community of users and stakeholders, and piloted in a set of diverse urban neighborhoods spread across Europe”_ and to elicit the requirements of MUV “ _to ensure the effectiveness of the mobility solutions and really match the communities and stakeholders’ needs”_ . Secondly, **data for expert systems** to “ _raise citizen awareness on the quality of the urban environment to promote a shift towards more sustainable and healthy mobility choices”_ . Finally, **data for evaluation** to measure real impact “ _with an evidence-based approach to maximize economic viability and Social Return on Investment (SROI) and drive replicability and the scaling up and out of MUV solutions._ ” The relation of the data to the specific objectives is shown in Table 1. <table> <tr> <th> understanding the neighborhoods’ peculiarities and emerging values to **define an effective behavior change strategy** </th> <th> A recommendation engine is one of the central components of the MUV platform. 
It will be fed with several data sources regarding the user profile, environmental context and public transport system and will produce and present specific smart and sustainable choices to the user </th> </tr> <tr> <td> **co-designing site-specific solutions** for better and more liveable urban environments </td> <td> Co-creation and co-design constitute the methodology of the project. Hence, data will be collected to be able to run these activities (participants’ personal data). The results of these activities will also be collected (requirements, validation). </td> </tr> <tr> <td> **developing scalable digital solutions and technologies** to improve globally the experience of urban mobility </td> <td> Heterogeneous datasets will be collected from all the different pilot areas. These datasets will be obtained through cities’ open data portals, through the MUV application and through the monitoring stations of MUV. Consequently, the MUV data model will be defined so that it can scale to any new region or area. </td> </tr> <tr> <td> **raising awareness among citizens** **on the importance of sustainable and healthy mobility choices** , reducing private vehicular traffic and its negative externalities (pollution, noise, deterioration of urban infrastructures, time wasted in traffic jams, cost …) and encouraging local consumption </td> <td> Several datasets regarding environmental measurements will be collected, either from the cities’ open portals or from the monitoring stations installed in MUV. A visualization of these datasets will be offered to the user to raise awareness on the topic of sustainable mobility. </td> </tr> <tr> <td> **analyzing, visualizing and sharing mobility and environmental data** to build an effective decision support system for multiple stakeholders </td> <td> As mentioned in the same objective, these datasets will be collected to feed the recommendation engine and build a complete decision system. 
</td> </tr> <tr> <td> **integrating new co-created mobility solutions into urban policy-making** and planning processes at neighborhood level </td> <td> Outputs of the co-creation activities will be collected, with special focus on requirement elicitation and validation of the solution. </td> </tr> <tr> <td> **bringing the whole experiment to the market through an innovative business model** in order to improve urban transportation in crowded neighborhoods and cities all over the world </td> <td> The MUV data model will be open and scalable to allow the experiment to be moved to new regions and cities. </td> </tr> </table> _Table 1: Relation of gathered data with the objectives of MUV_ **Type and format of the data** MUV will collect several datasets, and the formats of these datasets vary depending on their sources. Specifically, we can identify the following types of datasets inside the categories described. * Participants’ data. ○ Personal details of the co-creation participants, like name, age, contact details, etc. The format of this dataset will be a file declared in each national data protection office. ○ Outputs of the activities, including requirement elicitation and validation feedback. The format will be unstructured text in tables. * Data for expert systems. ○ Data from the MUV application, like users’ profile data or users’ behaviour. The format of these datasets will be SQL tables. ○ Social media data, extracted from public social network interaction. The format of these datasets will be non-structured text stored in No-SQL databases. ○ Data from open data portals, including environmental and public transportation data. The format of this data may vary depending on the specific portal, but it is usually CSV, XML or JSON. ○ Monitoring stations data, from the monitoring stations created in MUV. The format of these datasets will be JSON stored in No-SQL databases. * Data for evaluation. ○ User engagement evaluation data, composed of the interactions of the users with the platform. 
These will be stored in structured files. ○ Research publications produced during the life of MUV. The format of these publications will be either LATEX 3 or MS-WORD / Open Office. * Code. ○ Code, stored in language-specific files. **Origin of the data** There are six possible origins of the data: * Open data portals from the municipalities, connected through an API. * The MUV application, including the interaction of the user with the application. * Social media data from social media networks, especially Twitter. * MUV monitoring stations data, from the stations created and deployed within the project. * Data from the participants in the co-creation activities. * Research publications and code created by the MUV researchers / developers. **Size of the data** We do not envision very large datasets in MUV, but the exact size is difficult to predict. Datasets from open data portals are on the scale of some megabytes, and data collected by the MUV application can be stored in traditional SQL tables with one entry per user. In addition, data from the participants is expected to amount to a few kilobytes. Social media data and MUV monitoring stations data may represent a greater challenge in terms of size. However, we cannot predict their size at this moment, since we need to define the frequency and the scope of this collection through the co-creation activities. Finally, research papers and code files usually do not exceed a few megabytes. ## Data localization and metadata. Data will be stored in a cloud defined in WP4. The cloud infrastructure will contain a specific module for data storage that will be designed and partitioned according to the needs of the different datasets regarding security, format and processing capabilities. Details of this architecture will be presented in D4.1. Storing formats for each of the datasets have been designed and defined in D6.1 and depend on the needs of each dataset. 
We will use five types of data storage technologies: * Plain files are the oldest and simplest way of storing data. Data is stored in a file with a specific format. A well-known example of this type of storage is the Comma Separated Value (CSV) file, where commas separate different values and ends of lines indicate new rows. Files are used because they are easy to transfer and because they can be ingested easily into other storage or processing technologies. * Relational databases are the best-known and most widely used storage technology. Relational databases are based on formal tables which contain strongly-typed variables (types of variables like integer, float, string, etc. are predefined). For example, a client of a company has a name (string), an age (int), etc. Each new client is a new instance of the table with the same fields but different values. To uniquely identify each instance of the table, a value that never repeats must be selected; this value is called the primary key of the table. Moreover, different tables need a connection: for example, each client may have placed several different orders, so there should be a new table which represents each order with its own primary key. Again, each order can be composed of several products, which should be represented in a new table. The link between different tables is made using a foreign key (e.g. the id of the client is stored in the orders table). The main advantages of relational databases are their strong schema, which assures consistency of the data, and the specific language designed to operate over them (called SQL), which facilitates operations and the possibility of applying any preparation technique directly. The drawbacks are the lack of scalability (the relations between tables introduce checks and delays) and their poor flexibility (predefined types and mandatory fields), which do not deal well with low-quality and high-volume data. 
* Non-relational databases are a newer type of database which appeared a few years ago to deal with the flexibility problems of relational databases. In non-relational databases, usually known as Not Only SQL (NO-SQL), information is stored in documents which contain few mandatory fields and are not strongly typed. It is in the designer’s hands to define indices that permit the location of specific documents in a collection of documents. NO-SQL databases are fast, can deal with high volumes of data and can store uncleaned data, while at the same time locating datasets easily (if the indices are well designed). However, NO-SQL databases do not natively support data processing, so connectors to specific external technologies (like Elasticsearch, R, etc.) are needed. * Distributed file systems (DFS) are designed for data processing. In a DFS, data is partitioned into blocks and distributed among different nodes and partitions. Using this approach, data processing techniques can run in parallel, especially with the use of the MapReduce paradigm 4 . DFS are very convenient for data preparation techniques, especially for high-volume, high-velocity and high-heterogeneity data. Moreover, availability is assured through replication. However, they do not contain relations between different datasets and are not designed to easily locate data. * Versioning systems are software tools that act like a folder uploaded to a server to avoid any possible loss, but that track not only the files included in the folder but also the changes from one version to another. Versioning systems are used for code development and for paper production. The naming convention in MUV will follow the tables of identified datasets explained in D6.1 and also in Annex I of this document. Moreover, metadata will be added to open data in order to classify it and facilitate location of the data. This metadata will be in the form of tags regarding the nature of the data, e.g. transport, environmental, etc. 
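The relational model described earlier (primary keys, foreign keys, and SQL operating over the relations) can be sketched with an in-memory SQLite database. The `client` and `client_order` tables and their fields are illustrative only, not MUV's actual schema:

```python
import sqlite3

# In-memory database for illustration; nothing is persisted.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce foreign-key checks

# Each row is uniquely identified by its primary key;
# fields are strongly typed, as in any relational schema.
conn.execute("""
    CREATE TABLE client (
        client_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        age       INTEGER
    )""")

# Each order row references its client through a foreign key.
conn.execute("""
    CREATE TABLE client_order (
        order_id  INTEGER PRIMARY KEY,
        client_id INTEGER NOT NULL REFERENCES client(client_id)
    )""")

conn.execute("INSERT INTO client VALUES (1, 'Alice', 34)")
conn.execute("INSERT INTO client_order VALUES (10, 1)")
conn.execute("INSERT INTO client_order VALUES (11, 1)")

# SQL lets us operate directly over the relations (join + aggregate).
rows = conn.execute("""
    SELECT c.name, COUNT(o.order_id)
    FROM client c JOIN client_order o ON c.client_id = o.client_id
    GROUP BY c.client_id
""").fetchall()
print(rows)  # [('Alice', 2)]
```

The join in the last query is exactly the kind of cross-table operation that the strong schema makes cheap to express, and which NO-SQL document stores trade away for flexibility.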
For code and research papers, well-known standards and best practices will be used. MUV identifies the following stakeholders of the data: * Users of the application, who will visualize processed data that helps them take more sustainable and healthier decisions. Their interests are in data directly involved with their interaction with the application, e.g. recommendations, pollution map, transit map, etc. * MUV developers, who will use MUV data as a base for further developments. Their interests are in data that measures the activity of the users, e.g. profiling of the users and their level of activity in the platform, impact of their activity, etc. They are also interested in the code of the project. * MUV evaluators / researchers, who will use MUV data to evaluate or measure the impact of the MUV platform. Their interests are in the impact of MUV in the cities, e.g. traffic, pollution, noise, etc. They are also interested in the research publications of the project. ## Making data openly accessible. As explained in D1.3, the MUV consortium will attempt to maximise the visibility and exploitation of the project and its long-term impact by providing as many publicly available results as possible that can be easily discovered and reused. Specifically, appropriately identified datasets (anonymized end-user data whenever possible according to the “Article 29 Data Protection Working Party” 5 , as well as, wherever possible, software releases) will be generated and collected to drive the project’s technical work. With this purpose, the project will identify which datasets can be made public, and which can only be re-used by project members. This will be treated on a case-by-case basis. **Open data** will be stored and made available through Symphony ( _Symphony_ ) . Moreover, all datasets regarding municipalities will also be made accessible through the municipalities’ open data portals. 
**Research data** will be made accessible through Zenodo ( _Zenodo_ ) . **Code** will be made accessible through GitHub ( _Github_ ) . All these datasets will be included in a table detailing the following fields: * Type of dataset * Format * File type * Owner * Open data set (Y/N) * Licensing * Comments When publishing the databases, any software tools needed to access the data will be included in the description of the database. Table 2 summarizes the level of access to each dataset. <table> <tr> <th> **Category** </th> <th> **Subcategory** </th> <th> **Openness** </th> <th> **Comments** </th> </tr> <tr> <td> Participants’ data </td> <td> Personal details </td> <td> Private </td> <td> Personal details of the participants will be private </td> </tr> <tr> <td> Outputs of the activities </td> <td> Confidential / public </td> <td> The outputs of the activities will drive the requirements and evaluation. Depending on their nature, the outputs will be kept confidential (within the consortium) or open </td> </tr> <tr> <td> Data for expert systems </td> <td> Users’ profile data </td> <td> Private / confidential </td> <td> Personal data will be kept private. Other types of data (profiling) will be confidential within the consortium for evaluation and improvement needs. </td> </tr> <tr> <td> Users’ behaviour data </td> <td> Private/confidential </td> <td> Same as previous </td> </tr> <tr> <td> Social media data </td> <td> Public </td> <td> Only public social media data will be used. </td> </tr> <tr> <td> Environmental data </td> <td> Public </td> <td> Environmental data is gathered from the open data portal </td> </tr> <tr> <td> MUV monitoring stations data </td> <td> Public </td> <td> Monitored data will be made public </td> </tr> <tr> <td> Public transport data </td> <td> Public </td> <td> Transport data will be made public </td> </tr> <tr> <td> Traffic data </td> <td> Public/confidential </td> <td> Publicly available data will be open. 
We envision the possibility to gather private traffic data (mainly from Google Maps), which will be confidential for use only inside the consortium </td> </tr> <tr> <td> Data for evaluation </td> <td> Recommendation system input data </td> <td> N/A </td> <td> This is a set of parameters to be used as input to the recommendation system. These are parameters from other already identified datasets and their confidentiality level will be maintained </td> </tr> <tr> <td> User engagement evaluation data </td> <td> N/A </td> <td> Same as before </td> </tr> <tr> <td> KPI evaluation data </td> <td> Public </td> <td> KPI data will be made public for evaluation and maximization of impact. </td> </tr> <tr> <td> Research publications data </td> <td> Public </td> <td> Research publications will be under gold or green open access. </td> </tr> <tr> <td> Code </td> <td> Code files </td> <td> According to CA </td> <td> Taking into account firstly the IPR and the Consortium Agreement, code will be made open when possible. </td> </tr> <tr> <td> </td> <td> Scripts and configuration files </td> <td> According to CA </td> <td> Same as before </td> </tr> </table> _Table 2: Levels of access for the MUV datasets_ We foresee MUV evaluation data being used actively and extensively for research purposes, including the publication of papers in conferences and journals. Data for expert systems may also be used for research, especially in more technical publications regarding data science and artificial intelligence. We do not foresee research use of participants’ data; if this situation arises, only aggregated (anonymized) results will be allowed to be published (according to Article 6 of the GDPR 6 ). Data published in open data portals (and identified in D6.3) will be kept at least 2 years beyond the life of the project. Other data sources will be tied to the life of the website of MUV (1 year after the end of the project). 
Any kind of research data will be held in persistent research repositories. No associated costs are envisioned for the data repositories. Both research data repositories and open data repositories are free of charge and maintained by non-profit organizations or administrations. Several activities are envisioned in the scope of T6.3 - Open data valorisation in order to preserve the datasets and to make them persistent and easy to locate and use. Aspects like open formats, use of standards and linked data will be taken into account. If an entity or individual from outside the consortium asks for access to personal or closed data, the General Assembly will decide whether to grant access or not. The proprietor of the dataset can exercise a veto as specified in the CA. In case there is a request to access private content by a third party/external entity, the question will be evaluated by the project coordinator and those partners involved. In case the request is internal, the party requesting access will contact the project coordinator, who will take it to the General Assembly (GA), and the GA will evaluate, jointly with the hosting partner, whether it is possible or not. All decisions will follow the _European Charter for_ _Researchers and Code of Conduct_ (EURAXESS), the Consortium Agreement and the Grant Agreement. ## Making data interoperable. For research data collected and generated in the project, a fit-for-purpose file naming convention will be developed in accordance with best practice for qualitative data, such as that described by the UK Data Archive (2011). This will involve identifying the most important metadata related to the various research outputs. Key information includes content description, date of creation, version, and location. To make the datasets in the platform easily findable, searchable tags will be added to the metadata. 
When uploading the data, the creator of the dataset also has the option to create new tags that correspond to the contents of the dataset, making it easier for other users to find and re-use the data. MUV is strongly committed to producing any kind of data in open formats. For files, Comma Separated Values (CSV) 7 files will be preferred, where possible including links to other files and hence promoting linked data 8 . Depending on the choice of repositories, we are however likely to follow the standard of the Data Catalog Vocabulary (DCAT) (Data Catalog Vocabulary, 2014), as it defines a standard way to publish machine-readable metadata about a dataset where appropriate. We also intend to use common ontologies and vocabularies for data. ## Responsibilities for data management. Each partner will be solely responsible for the quality and accuracy of the data it provides and for making the data accessible. The PC (PUSH) will provide support and advice to the consortium. Table 3 summarizes these responsibilities for each of the MUV datasets. 
<table> <tr> <th> **Category** </th> <th> **Subcategory** </th> <th> **Openness** </th> <th> **Responsible partner** </th> <th> **Partners with limited accessing rights** </th> </tr> <tr> <td> Participants’ data </td> <td> Personal details </td> <td> Private </td> <td> Amsterdam pilot ->AAU Barcelona pilot -> I2CAT Fundao pilot -> BAG Ghent pilot -> LUCA Helsinki pilot -> FVH Palermo pilot -> PUSH </td> <td> AMST IMI BCN FUNDAO GENT MUNI PAL </td> </tr> <tr> <td> Outputs of the activities </td> <td> Confidential / public </td> <td> Amsterdam pilot ->AAU Barcelona pilot -> I2CAT Fundao pilot -> BAG Ghent pilot -> LUCA Helsinki pilot -> FVH Palermo pilot -> PUSH </td> <td> All </td> </tr> <tr> <td> Data for expert systems </td> <td> Users’ profile data </td> <td> Private / confidential </td> <td> PUSH </td> <td> I2CAT, ISMB, WAAG, LUCA </td> </tr> <tr> <td> Users’ behaviour data </td> <td> Private/confidential </td> <td> PUSH </td> <td> I2CAT, ISMB, WAAG, LUCA </td> </tr> <tr> <td> Social media data </td> <td> Public </td> <td> ISMB </td> <td> PUSH, I2CAT, WAAG, LUCA </td> </tr> <tr> <td> Environmental data </td> <td> Public </td> <td> I2CAT </td> <td> All </td> </tr> <tr> <td> MUV monitoring stations data </td> <td> Public </td> <td> WAAG </td> <td> I2CAT, ISMB, PUSH, LUCA </td> </tr> <tr> <td> Public transport data </td> <td> Public </td> <td> I2CAT </td> <td> All </td> </tr> <tr> <td> Traffic data </td> <td> Public/confidential </td> <td> I2CAT </td> <td> PUSH, ISMB, WAAG, LUCA </td> </tr> <tr> <td> Data for evaluation </td> <td> Recommendation system input data </td> <td> N/A </td> <td> ISMB </td> <td> I2CAT, WAAG, PUSH </td> </tr> <tr> <td> User engagement evaluation data </td> <td> N/A </td> <td> ISMB </td> <td> I2CAT, WAAG, PUSH </td> </tr> <tr> <td> KPI evaluation data </td> <td> Public </td> <td> ISMB </td> <td> All </td> </tr> <tr> <td> Research publications data </td> <td> Public 
</td> <td> LUCA </td> <td> All </td> </tr> <tr> <td> Code </td> <td> Code files </td> <td> According to CA </td> <td> PUSH </td> <td> N/A </td> </tr> <tr> <td> </td> <td> Scripts and configuration files </td> <td> According to CA </td> <td> PUSH </td> <td> N/A </td> </tr> </table> _Table 3: Summary of the responsible partners and partners with accessing rights to the MUV datasets_ Regarding research data and research publications, prior to uploading any database to Zenodo, the responsible partner has to inform the GA 30 days in advance. Every six months the PC will request from the consortium pre-print manuscripts (accepted), slides and posters, and non-protected raw data supporting papers and deliverables, in order to update the project’s public repository on Zenodo. ## Data re-use. MUV data will be licensed from the creation of the dataset to ensure that it is used according to the defined terms. Any dataset re-used by MUV will respect the previous license of that dataset. **Research Publications**: Each publication will be licensed on a case-by-case basis, taking into account embargo periods if applicable. Questionnaires are licensed under Creative Commons Attribution-NonCommercial 4.0. Video footage is licensed under Creative Commons Attribution-NonCommercial 4.0 (it is strictly forbidden to use it, totally or partially, for commercial purposes). **Private data:** Private data will be licensed by the owner (partner) of the dataset. **Confidential data:** Confidential data will be licensed by the owner (partner) of the dataset, and the license has to include the rights allowing the partners with access rights to access such information. **Public data:** Public data will be licensed under the Open Data Commons Open Database License (ODbL) 9 , which is an open license for data (and databases) that requires attribution (the owner of the data must be credited when the data is reused) and share-alike (the same license must be applied when re-using the data). 
Regarding data quality, D6.1 specifically describes the processes that will be used to ensure data quality, including cleaning and enrichment (to remove or correct erroneous samples) and curation to ensure durability and easy classification of the datasets. For publications and research data, all content will be peer-reviewed. ## Data security Data security in MUV will be treated in a specific deliverable, D6.2 Detailed Data Management and Privacy Plan, which is a confidential deliverable, ensuring that security procedures are not disclosed publicly. # Conclusions MUV considers four types of data: Participants’ data, Data for expert systems, Data for evaluation and Code. In this deliverable we analyze these four types of data from several perspectives: firstly, their relation to the objectives of the project; secondly, the procedures and techniques that will be used to store them and assure their accessibility; finally, their nature (open/private/confidential), their license and their owners. It is important to note that, except for research data and code, which have very standardized procedures to define these aspects, all the other datasets are very heterogeneous (from a wide range of different sources) and have different privacy, preparation and ethics requirements. Hence, the following deliverables must be considered together with the Data Management Plan presented here to obtain the full picture of the data activities of MUV: firstly, D6.1 for data preparation techniques, including cleaning and curation; secondly, D6.2 for data privacy and security, including anonymization; finally, D10.1, D10.2 and D10.3 for ethics and legal requirements regarding data.
# 1\. EXECUTIVE SUMMARY According to the Guidelines on Open Access to Scientific Publications and Research Data for projects funded or co-funded under Horizon 2020, the Europe 2020 strategy underlines the central role of knowledge and innovation in generating growth. For these reasons the European Union strives to improve access to scientific information and to boost the benefits of public investment in research funded under the EU Framework Programme Horizon 2020. The present document constitutes the **second issue** of Deliverable D8.4 Data Management Plan in the framework of the RE 4 project, dedicated to Task T8.5 under work package WP8. The Data Management Plan (DMP) identifies the results that should be subject to RE 4 dissemination and exploitation, analyses the main data uses and users, and explores the restrictions related to IPR in accordance with the Consortium Agreement, defining the data assurance processes that are to be applied during and after the completion of the project. This document is prepared in compliance with the template provided by the Commission in Annex 1 of the “Guidelines on Data Management in Horizon 2020“. Main updates at month 18: \- Dataset description, data sharing, storage, preservation and responsibilities (Table 4) - Datasets shared publicly at month 18 (Table 5) # 2\. INTRODUCTION This document constitutes the **second issue** of the Data Management Plan (DMP) in the EU framework of the RE 4 project under Grant Agreement No 723583. The objective of the DMP is to establish the measures for promoting the findings during the project’s life and to detail what data the project will generate, whether and how it will be exploited or made accessible for verification and re-use, and how it will be curated and preserved. The DMP ensures the transferability of relevant project information and takes into account the restrictions established by the Consortium Agreement. 
In this framework, the DMP sets the basis for both the Dissemination Plan and the Exploitation Plan. The first version of the DMP was delivered at month 6; at month 18, the DMP is updated in parallel with the new versions of the Dissemination and Exploitation Plans. It is acknowledged that not all data types will be available at the start of the project; thus, if any changes occur to the RE 4 project due to the inclusion of new data sets, changes in consortium policies or external factors, the DMP will be updated in order to reflect the actual data generated and the user requirements as identified by the RE 4 consortium participants. The overall goal of the RE 4 project is to promote new technological solutions for the design and development of structural and non-structural prefabricated elements with a high degree of recycled materials and reused structures from partial or total demolition of buildings. The developed technology will aim at energy-efficient new construction and refurbishment, thus minimizing environmental impacts. The RE 4 project targets the demonstration of suitable design concepts and building elements produced from CDW in an industrial environment, considering prospective issues for the market uptake of the developed solutions. The technical activities will be supported by LCA and LCC analyses, certification and standardization procedures, demonstration activities, professional training, dissemination, commercialisation and exploitation strategy definition, business modelling and business plans. The overarching purpose is to develop a RE 4 prefabricated energy-efficient building concept that can be easily assembled and disassembled for future reuse, containing up to 65% by weight of recycled materials from CDW (ranging from 50% for medium replacement of the mineral fraction, up to 65% for insulating panels and concrete products with medium mineral replacement coupled with the geopolymer binder). 
The reusable structures will range from 15-20% for existing buildings to 80-90% for the RE 4 prefabricated building concept.

The RE 4 Project comprises seven technical work packages (WPs) as follows:

* WP1 - Mapping and analysis of CDW reuse and recycling in prefabricated elements
* WP2 - Strategies for innovative sorting of CDW and reuse of structures from dismantled buildings
* WP3 - Innovative concept for modular/easy installation and disassembly of eco-friendly prefabricated elements
* WP4 - Technical characterization of CDW-derived materials for the production of building elements
* WP5 - Development of precast components and elements from CDW
* WP6 - Pilot level demonstration of CDW based prefabricated elements
* WP7 - Life-cycle and HSE analysis and certification/standardization strategy definition

To facilitate the technical work there are three transversal work packages to coordinate all the work packages, disseminate and communicate project results, and ensure compliance with the ethics requirements:

* WP8 - Training, dissemination and exploitation
* WP9 - Project Management
* WP10 - Ethics requirements

This document has been prepared to describe the data management life cycle for all data sets that will be collected, processed or generated by the RE 4 Project. It outlines how research data will be handled during the project and after the project is completed. It describes what data will be collected, processed or generated and what methodologies and standards are to be applied. It also defines if and how this data will be shared and/or made open, and how it will be curated and preserved.

# 3\. OPEN ACCESS

Open access can be defined as the practice of providing on-line access to scientific information that is free of charge to the reader and that is reusable.
In the context of R&D, open access typically focuses on access to “scientific information”, which refers to two main categories:

* Peer-reviewed scientific research articles (published in academic journals).
* Scientific research data (data underlying publications and/or raw data).

It is important to note that:

* Open access publications go through the same peer review process as non-open access publications.
* As an open access requirement comes after a decision to publish, it is not an obligation to publish: it is up to researchers whether they want to publish some results or not.
* As the decision on whether to commercially exploit results (e.g. through patents or otherwise) is made before the decision to publish (open access or not), open access does not interfere with the commercial exploitation of research results.

Benefits of open access:

* Unprecedented possibilities for the dissemination and exchange of information due to the advent of the internet and electronic publishing.
* Wider access to scientific publications and data can help to accelerate innovation, foster collaboration and avoid duplication of effort, build on previous research results, involve citizens and society.

Figure 1. Open Access benefits

The EC capitalizes on open access and open science as it lowers barriers to accessing publicly-funded research. This increases research impact and the free flow of ideas, and facilitates a knowledge-driven society, at the same time underpinning the EU Digital Agenda (OpenAIRE Guide for Research Administrators - EC funded projects). The European Commission's open access policy is not a goal in itself, but an element in the promotion of affordable and easily accessible scientific information, both for the scientific community itself and for innovative small businesses.
## 3.1 Open Access to peer-reviewed scientific publications

Open access to scientific peer-reviewed publications has been anchored as an underlying principle in the Horizon 2020 Regulation and the Rules of Participation and is consequently implemented through the relevant provisions in the Grant Agreement. More specifically, Article 29: “Dissemination of results, Open Access, Visibility of EU Funding” of the RE 4 Grant Agreement establishes the obligation to ensure open access to all peer-reviewed articles produced by RE 4 .

## _Article 29.2 Open access to scientific publications in RE_ 4 _GA_

Each beneficiary must ensure open access (free of charge online access for any user) to all peer reviewed scientific publications relating to its results. In particular, it must:

1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications;
2. ensure open access to the deposited publication — via the repository — at the latest:
   1. on publication, if an electronic version is available for free via the publisher, or
   2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case;
3. ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.

The bibliographic metadata must be in a standard format and must include all of the following:

* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable;
* a persistent identifier.
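These requirements lend themselves to a simple machine-readable check at deposit time. The sketch below is illustrative only: the field names and the schema are hypothetical (the Grant Agreement does not prescribe one), and the record values are placeholders.

```python
# Hypothetical sketch: field names are illustrative, not prescribed by the GA.
REQUIRED_FIELDS = (
    "funding_terms",           # must mention "European Union (EU)" and "Horizon 2020"
    "action_name",             # name of the action
    "action_acronym",          # project acronym
    "grant_number",            # e.g. "723583"
    "publication_date",
    "persistent_identifier",   # e.g. a DOI
)

def missing_metadata(record):
    """Return the Article 29.2 fields that are absent or empty in `record`."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {
    "funding_terms": "European Union (EU), Horizon 2020",
    "action_name": "(full action title here)",          # placeholder
    "action_acronym": "RE4",
    "grant_number": "723583",
    "publication_date": "2018-03-01",                    # placeholder date
    "persistent_identifier": "10.5281/zenodo.0000000",   # placeholder DOI
    "embargo_months": 0,       # optional: length of embargo period, if applicable
}
assert missing_metadata(record) == []
assert missing_metadata({}) == list(REQUIRED_FIELDS)
```

A check of this kind could be run before submitting a record to a repository, so that incomplete bibliographic metadata is caught before publication rather than during a later audit.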
### 3.1.1 Green open access

Green open access is also called self-archiving and means that the published article or the final peer-reviewed manuscript is archived by the researcher in an online repository before, after or alongside its publication. Access to the article is often delayed (embargo period): publishers recoup their investment by selling subscriptions and charging pay-per-download/view fees during this exclusivity period. This model is promoted alongside the “Gold” route by the open access community of researchers and librarians, and is often preferred.

### 3.1.2 Gold open access

This type of open access is sometimes called open access publishing, or author-pays publishing, and means that a publication is immediately provided in open access mode by the scientific publisher. Associated costs are shifted from readers to the university or research institute to which the researcher is affiliated, or to the funding agency supporting the research. This model is usually the one promoted by the community of well-established scientific publishers.

## 3.2 Open Access to research data

“Research data” refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form.

## _Article 29.3 Open access to research data in RE_ 4 _GA_

Regarding the digital research data generated in the action (‘data’), the beneficiaries must: 1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following: 1.
the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible; 2. other data, including associated metadata, as specified and within the deadlines laid down in the ‘data management plan’ (see Annex 1 of the RE 4 GA); 2. provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).

This does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply.

The beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex 1, would be jeopardized by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access to third parties.

## 3.3 Dissemination & Communication and Open Access

For the implementation of the RE 4 Project, a complete set of dissemination and communication activities is scheduled, with the objective of raising awareness in the research community, industry and the wider public (e-newsletters, e-brochures, posters or events are foreseen for the dissemination of RE 4 to key groups potentially related to the exploitation of project results). Likewise, the RE 4 website, webinars, press releases or videos, for instance, will be developed for communication to a wider audience. Details about all these dissemination and communication elements are provided in deliverable D8.2 Communication and Dissemination Plan. The Data Management Plan and the actions derived from it are part of the overall RE 4 dissemination and communication strategy, which is included in the above-mentioned D8.2.

# 4\. OBJECTIVES OF DATA MANAGEMENT PLAN

The purpose of the RE 4 Data Management Plan (DMP) is to provide a management assurance framework and processes that fulfil the data management policy to be used by the RE 4 project partners with regard to all the dataset types that will be generated by the RE 4 project. The aim of the DMP is to control and ensure the quality of project activities, and to effectively and efficiently manage the material and data generated within the RE 4 project. It also describes how data will be collected, processed, stored and managed holistically from the perspective of external accessibility and long-term archiving. The content of the DMP is complementary to other official documents that define obligations under the Grant Agreement (GA) and associated annexes, and shall be considered a living document, subject to periodic updating as necessary throughout the lifespan of the project.

Figure 2. RE 4 Data Management Plan overview

# 5\. RE 4 PROJECT WEBSITE, STORAGE AND ACCESS

The RE 4 project website is used for storing both public and private documents related to the project and its dissemination, and is meant to be live for the whole project duration and a minimum of 2 years after the project end. The public section of the website contains mainly public deliverables, the brochure, the (roll-up) poster, presentations, scientific papers, newsletters, magazine articles, videos, photos, etc. The private section of the project website includes confidential deliverables and work-package-related documentation, and is used as the main channel for the exchange of information among the Project partners. The website www.re4.eu was launched on 1st December 2016; its design was done by dissemination leader FENIX, which is also in charge of website maintenance and regular updates. It is a dynamic and interactive tool ensuring clear communication and wide dissemination of project news, activities and results.
The website is of primary importance due to the expected impact on the target audiences. It was designed to give quick, simple and neat information. The website is regularly updated with news and events related to the RE 4 Project, press releases, magazine articles, scientific papers, etc. The website is available in English, but translation into partners’ languages is considered as well, in order to break the language barrier and enable wide and effective communication of project results at national level. To ensure the safety of the data, the partners will use their available local file servers to periodically create backups of the relevant materials. The RE 4 Project website itself already has its own backup procedures. In addition to the RE 4 Project website, the Project Coordinator established temporary FTP access for all project partners during the first period of the Project. A Dropbox folder was created for the RE 4 Project to manage living documents (e.g. the contact list). The RE 4 Project Coordinator (CETMA), along with the Dissemination and Exploitation Manager (FENIX), will be in charge of data management and all related issues.

Figure 3. RE 4 Project website

# 6\. DATA MANAGEMENT PLAN IMPLEMENTATION

Partners of the RE 4 Project demonstrate the management capabilities necessary to support and provide a major contribution to all the activities envisaged in the Project work. The general and technical Project management is handled by the Coordinator of the Project, CETMA. The main roles and instruments comprising the Project management structure include:

* _Project Management Committee_ (PMC): representatives of each partner; the highest decision board, whose main task is Project governance, with overall responsibility for all technical, financial, legal, administrative, ethical, and dissemination issues of the Project.
It encompasses the following main roles:

* Project Coordinator (PC): PMC chairman, responsible for the overall management, communication, and coordination of the entire Project (supervision and approval of reports and technical deliverables, first liaison and communication with the EU Institutions, monitoring of the progress of the Project according to the work-plan, ensuring that the technical objectives of the Project as a whole are met, budget controlling, reporting of major changes from the agreed work-plan to the PMC).
* Dissemination and Exploitation Manager (DEM): responsible for dissemination and communication (website, press releases, newsletters, etc.), for exploitation planning (support and liaison with companies, SMEs and industry), and for continuous assessment of the market potential of the know-how developed in the Project.
* Risk and Quality Manager (RQM): responsible for the assessment and – with the support of the PC – the management of administrative and technical risks, and for the development of the Quality Plan.
* _Scientific and Technical Committee_ (STC): under the control of and in compliance with the decisions of the PMC, responsible for the planning, execution and controlling of the Project as regards issues of both a scientific and technical nature. From a technical point of view, the Project is broken down into a number of work packages, each of them addressing a specific area of work. The STC encompasses the following roles:

* Scientific and Technical Manager (STM): ensures that the S&T objectives of the Project are met on quality and on time. The STM is expected to lead the S&T activities undertaken within the Project and is responsible for resolving any issues of an S&T nature that might occur.
* Work Package Leaders (WPL): responsible for managing their work package as a self-contained entity. Their tasks include, among others, coordinating, monitoring, and assessing the progress of the WP to ensure that output performance, costs, and timelines are met.
* Each WP is further subdivided into its larger component tasks, each of which is allocated a Task Leader responsible for coordination.

The Project management also encompasses an experienced Financial Responsible (FR) who is in charge of the financial and administrative Project management and supervision. Finally, an _End User and Interest Group_ (EIG) has already been named to provide input on product requirements and to evaluate the Project results and achievements. It includes external experts already identified and is chaired by Prof Hebel from the Swiss Federal Institute of Technology Zurich, who also currently holds the position of Assistant Professor of Architecture and Construction at the Future Cities Laboratory in Singapore. The main role of the EIG is to observe the work tackled in RE 4 and identify possible inconsistencies between market expectations and the technical work, to assure a high level of innovation and to find a suitable balance between the requirements of waste managers, architects and end-users and the developed technical solutions.

Figure 4. Management structure of the RE 4 Project

Table 1: RE 4 partners and their role in the project <table> <tr> <th> **#** </th> <th> **Partner short name** </th> <th> **Partner legal name** </th> <th> **Partner role in RE 4 project ** </th> </tr> <tr> <td> **1.** </td> <td> **CETMA** </td> <td> CENTRO DI RICERCHE EUROPEO DI TECNOLOGIE DESIGN E MATERIALI </td> <td> Project coordinator, mapping the current best practices related to reuse and recycling of CDW in prefabricated elements, diagnosis of CDW management in the EU, current status on policy measures and regulatory frameworks, development of materials incorporating CDW, Portland cement and alkali activated binders. </td> </tr> <tr> <td> **2.** </td> <td> **ACCIONA** </td> <td> ACCIONA INFRAESTRUCTURAS S.A.
</td> <td> Leader of the demonstration activities, in charge of assembling and testing some of the final components in real-scale pilot buildings. </td> </tr> <tr> <td> **3.** </td> <td> **CBI** </td> <td> CBI Betonginstitutet AB </td> <td> Scientific leader, development of suitable concrete formulations and concrete component development, in particular for façade applications, performance and durability testing of the prefabricated elements and of larger precast and timber elements, requirements concerning the quality and properties of materials for concrete and other cement-based products, LCA and LCC. </td> </tr> <tr> <td> **4.** </td> <td> **CDE** </td> <td> CDE GLOBAL LIMITED </td> <td> Developing an innovative separating system for CDW, based on weight criteria, and providing recycled materials for R&D and demo activities, innovative strategies and processes for separating CDW based on weight criteria, collection of representative samples of CDW sorted material. </td> </tr> <tr> <td> **5.** </td> <td> **CREAGH** </td> <td> CREAGH CONCRETE PRODUCTS LIMITED </td> <td> Production of RE 4 prefabricated components and their assembly into the demo building, manufacturing and testing of the prefabricated element prototypes, quality control and characterization, HSE analysis. </td> </tr> <tr> <td> **6.** </td> <td> **FENIX** </td> <td> FENIX TNT SRO </td> <td> Dissemination and exploitation leader, development of business modelling and business plans, IPR management, market assessment, data management.
</td> </tr> <tr> <td> **7.** </td> <td> **QUB** </td> <td> THE QUEEN'S UNIVERSITY OF BELFAST </td> <td> Technical characterization of recycled material for structural and non-structural elements, characterisation of mineral aggregates, assessment of variability effects and investigation on the alkali activation potential of ceramic waste, development of prefabricated components, refinement and production of pre-fab test elements, certification strategies, technical documentation and standardization. </td> </tr> <tr> <td> **8.** </td> <td> **ROS** </td> <td> ROSWAG ARCHITEKTEN GESELLSCHAFT VON ARCHITEKTEN MBH </td> <td> Design of innovative concept for modular/easy installation and disassembly of eco-friendly prefabricated elements, current status of construction of prefabricated elements with reused/recycled material, definition of sustainable strategies for the disassembly and reuse of structures and components from dismantled buildings. </td> </tr> <tr> <td> **9.** </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **10.** </td> <td> **STRESS** </td> <td> SVILUPPO TECNOLOGIE E RICERCA PER L'EDILIZIA SISMICAMENTE SICURA ED ECOSOSTENIBILE SCARL </td> <td> Life-cycle and HSE analysis and certification/standardization strategy definition, scaled-up processes, inputs related to S-LCA, support for the definition of data to be collected, development of the BIM-compatible DSS and platform for CDW estimation and management, refurbishment of residential and/or commercial buildings: installation of the panels/blocks on an existing façade. </td> </tr> <tr> <td> **11.** </td> <td> **NTUST** </td> <td> National Taiwan University of Science and Technology </td> <td> Demonstration of RE 4 technologies outside the EU. </td> </tr> <tr> <td> **12.** </td> <td> **VORTEX** </td> <td> VORTEX HYDRA S.R.L.
</td> <td> Extruded products (roof tiles, floor tiles and façade products) obtained using CDW materials, to supply a demo line to the consortium capable of producing extruded products using the CDW material, to assist the testing phase of the obtained products following the Standards and to use its experience in this field to achieve the final target, design and adapt the prefabricated elements production line. </td> </tr> <tr> <td> **13.** </td> <td> **ACR+** </td> <td> ASSOCIATION DES CITES ET DES REGIONS POUR LE RECYCLAGE ET LA GESTION DURABLE DES RESSOURCES </td> <td> Assessment of economic instruments of CDW management for European representative countries, dissemination and communication activities. </td> </tr> <tr> <td> **14.** </td> <td> **STAM** </td> <td> STAM Srl </td> <td> Study of innovative sorting solutions for the recycling of CDW and strategies for the reuse of structures from dismantled buildings, main features of materials resulting from total or partial demolition of buildings, innovative strategies and processes for sorting CDW based on advanced robotic system. </td> </tr> </table>

# 7\. RESEARCH DATA

“Research data” refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form. As indicated in the Guidelines on Data Management in Horizon 2020 (European Commission, Research & Innovation, October 2015), scientific research data should be easily:

1. DISCOVERABLE: the data and associated software produced and/or used in the project should be discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier).
2. ACCESSIBLE: information about the modalities, scope and licences (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.) under which the data and associated software produced and/or used in the project are accessible should be provided.
3. ASSESSABLE and INTELLIGIBLE: the data and associated software produced and/or used in the project should be easily assessable for, and intelligible to, third parties in contexts such as scientific scrutiny and peer review (e.g. the minimal datasets are handled together with scientific papers for the purpose of peer review; data is provided in a way that judgements can be made about their reliability and the competence of those who created them).
4. USEABLE beyond the original purpose for which it was collected: the data and associated software produced and/or used in the project should be useable by third parties even a long time after the collection of the data (e.g. the data is safely stored in certified repositories for long-term preservation and curation; it is stored together with the minimum software, metadata and documentation to make it useful; the data is useful for the wider public needs and usable for the likely purposes of non-specialists).
5. INTEROPERABLE to specific quality standards: the data and associated software produced and/or used in the Project should be interoperable, allowing data exchange between researchers, institutions, organisations, countries, etc.
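As a concrete illustration, the five criteria above can be mapped onto a machine-readable dataset descriptor. The record below is a hypothetical example: the field names and values are illustrative, not a prescribed RE 4 schema.

```python
# Illustrative only: field names and values are hypothetical examples.
dataset = {
    # 1. DISCOVERABLE: standard identification mechanism
    "doi": "10.5281/zenodo.0000000",        # placeholder DOI
    "title": "Example CDW aggregate characterisation data",
    # 2. ACCESSIBLE: modalities, licence, embargo
    "access_right": "open",                 # or "embargoed", "restricted"
    "license": "CC-BY-4.0",
    "embargo_date": None,
    # 3. ASSESSABLE and INTELLIGIBLE: provenance and documentation
    "creators": ["Example Partner"],
    "documentation": "README.txt",
    # 4. USEABLE beyond the original purpose: open format, long-term repository
    "file_format": "CSV",
    "repository": "Zenodo",
    # 5. INTEROPERABLE: declared metadata standard
    "metadata_standard": "DataCite",
}

def openly_usable(record):
    """True if access is open and a licence and identifier are declared."""
    return (record.get("access_right") == "open"
            and bool(record.get("license"))
            and bool(record.get("doi")))

assert openly_usable(dataset)
```

A descriptor of this shape keeps the open-access status of each dataset checkable by a script rather than by reading prose, which is useful when the number of datasets grows over the project lifetime.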
Some examples of research data include:

* Documents (text, Word), spreadsheets
* Questionnaires, transcripts, codebooks
* Laboratory notebooks, field notebooks, diaries
* Audiotapes, videotapes
* Photographs, films
* Test responses, slides, artifacts, specimens, samples
* Collections of digital objects acquired and generated during the process of research
* Database contents (video, audio, text, images)
* Models, algorithms, scripts
* Contents of an application (input, output, logfiles for analysis software, simulation software, schemas)
* Methodologies and workflows
* Standard operating procedures and protocols.

In addition to the other records to manage, some kinds of data may not be sharable due to the nature of the records themselves, or to ethical and privacy concerns (e.g. preliminary analyses, drafts of scientific papers, plans for future research, peer reviews, communication with partners, etc.). Research data also do not include trade secrets, commercial information, materials that must be held confidential by the researcher until they are published, or information that could invade personal privacy. Research records that may also be important to manage during and beyond the project are: correspondence, project files, technical reports, research reports, etc.

# 8\. RE 4 DATA SETS

Projects are required to deposit the research data - the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible; and other data, including associated metadata, as specified and within the deadlines laid down in a data management plan (DMP). At the same time, projects should provide information (via the chosen repository) about tools and instruments at the disposal of the beneficiaries and necessary for validating the results, for instance specialised software(s) or software code(s), algorithms, analysis protocols, etc. Where possible, they should provide the tools and instruments themselves.
The types of data included within the scope of the RE 4 Data Management Plan shall, as a minimum, cover the types of data considered complementary to material already contained within declared project deliverables. In order to collect the information generated during the project, the template for data collection will be circulated periodically, every 6 months. The scope of this template is to detail the research results that will be developed during the RE 4 Project, describing the kind of results and how they will be managed. The responsibility to define and describe all non-generic data sets specific to an individual work package lies with the WP leader.

## _Data set reference and name_

Identifier for the data set to be produced. All data sets within this DMP have been given a unique field identifier and are listed in Table 4 (Collection of project results and sharing strategy).

## _Data Set Description_

A data set is defined as a structured collection of data in a declared format. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set may comprise data for one or more fields. For the purposes of this DMP, data sets have been defined by generic data types that are considered applicable to the RE 4 project. For each data set, its characteristics have been captured in a tabular format as enclosed in Table 4 (Collection of project results and sharing strategy).

## _Standards & Metadata_

Metadata is defined as “data about data”. It is “structured information that describes, explains, locates, and facilitates the means to make it easier to retrieve, use or manage an information resource”.
Metadata can be categorised into three types:

* Descriptive metadata describes an information resource for identification and retrieval through elements such as title, author, and abstract.
* Structural metadata documents relationships within and among objects through elements such as links to other components (e.g., how pages are put together to form chapters).
* Administrative metadata manages information resources through elements such as version number, archiving date, and other technical information for the purposes of file management, rights management and preservation.

There are a large number of metadata standards which address the needs of particular user communities.

## _Data Sharing_

While the Project is live, the sharing of data shall be governed by the configuration rules defined in the access profiles for the project participants. Each individual project data set item shall be allocated a two-character “dissemination classification” for the purposes of defining the data sharing restrictions. The classification shall be an expansion of the system of confidentiality applied to deliverable reports provided under the RE 4 Grant Agreement:

* PU: Public
* RE: Restricted to a group specified by the consortium
* CO: Confidential, only for members of the consortium; Commission services always included.

The three levels above are linked to the “Dissemination Level” specified for all RE 4 deliverables. All material designated with a PU dissemination level is deemed uncontrolled. In case a dataset cannot be shared, the reasons should be stated (e.g. ethical, personal data rules, intellectual property, commercial, privacy-related, or security-related). Data will be shared when the related deliverable or paper has been made available in an open access repository. The expectation is that data related to a publication will be openly shared.
However, to allow the exploitation of any opportunities arising from the raw data and tools, data sharing will proceed only if all co-authors of the related publication agree. The Lead Author is responsible for obtaining approvals and then sharing the data and metadata on Zenodo (www.zenodo.org), a popular repository for research data. The Lead Author will also create an entry on OpenAIRE (www.openaire.eu) in order to link the publication to the data. OpenAIRE is a service that implements the Horizon 2020 Open Access mandate for publications and its Open Research Data Pilot, and may be used to reference both the publication and the data. A link to the OpenAIRE entry will then be submitted to the RE 4 Website Administrator (FENIX) by the Lead Author.

## _Data archiving and preservation_

Both Zenodo and OpenAIRE are purpose-built services that aim to provide archiving and preservation of long-tail research data. In addition, the RE 4 website, linking back to OpenAIRE, is expected to be available for at least 2 years after the end of the Project. At the formal project closure, all the data material that has been collated or generated within the Project and classified for archiving shall be copied and transferred to a digital archive (coordinator responsibility). The document structure and type definition will be preserved as defined in the document breakdown structure and work package groupings specified. At the time of document creation, the document will be designated as a candidate data item for future archiving. This process is performed by the use of codification within the file naming convention (see Section 10). The process of archiving will be based on a data extract performed within 12 weeks of the formal closure of the RE 4 Project. The archiving process shall create unique file identifiers by the concatenation of “metadata” parameters for each data type. The metadata index structure shall be formatted in the metadata order.
This index file shall be used as an inventory record of the extracted files, and shall be validated by the associated WP leader.

Figure 5. OpenAIRE website

Figure 6. ZENODO repository

# 9\. DATA SETS TECHNICAL REQUIREMENTS

The applicable data sets are restricted to the following data types for the purposes of archiving. The technical characteristics of each data set are described in the following sections. The copyright with respect to all data types shall be subject to the IPR clauses in the GA, but shall be considered royalty-free. The use of file compression utilities, such as “WinZip”, is prohibited. No data files shall be encrypted.

## 9.1 Engineering CAD drawings

The .dwg file format is one of the most commonly used design data formats, found in nearly every design environment. It signifies compatibility with AutoCAD technology. Autodesk created .dwg in 1982 with the launch of the first version of its AutoCAD software. It contains all the pieces of information a user enters, such as designs, geometric data, maps and photos.

## 9.2 Static graphical images

Graphical images shall be defined as any digital image irrespective of the capture source or subject matter. Images should be composed so that they contain only objects that are directly related to RE 4 activity and do not breach the IPR of any third parties. Image files are composed of digital data and come in two primary formats: “raster” and “vector”. It is necessary to represent data in the rasterized state for use on computer displays or for printing. Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate its colour equal to the colour depth of the device displaying it. The RE 4 project shall only use raster-based image files. The allowable static image file formats are JPEG and PNG. There is normally a direct positive correlation between image file size and the number of pixels in an image, the colour depth, or bits per pixel used in the image.
Compression algorithms can create an approximate representation of the original image in a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding decompression algorithm. Compression tools shall not be used unless absolutely necessary.

## 9.3 Animated graphical images

Graphic animation is a variation of stop motion, conceptually closer to traditional flat cel animation and paper drawing animation, but still technically qualifying as stop motion: it consists of the animation of photographs (in whole or in parts) and other non-drawn flat visual graphic material. The allowable animated graphical image file formats are AVI, MPEG, MP4, and MOV. The WP leader shall determine the most suitable choice of format based on equipment availability and any other factors. This applies mainly to the RE 4 project promo video, which is expected to contain animated graphical images, infographics and on-site interviews. Table 2: Video formats <table> <tr> <th> **Format** </th> <th> **File** </th> <th> **Description** </th> </tr> <tr> <td> MPEG </td> <td> .mpg .mpeg </td> <td> MPEG. Developed by the Moving Pictures Expert Group. The first popular video format on the web. Used to be supported by all browsers, but it is not supported in HTML5 (See MP4). </td> </tr> <tr> <td> AVI </td> <td> .avi </td> <td> AVI (Audio Video Interleave). Developed by Microsoft. Commonly used in video cameras and TV hardware. Plays well on Windows computers, but not in web browsers. </td> </tr> <tr> <td> WMV </td> <td> .wmv </td> <td> WMV (Windows Media Video). Developed by Microsoft. Commonly used in video cameras and TV hardware. Plays well on Windows computers, but not in web browsers. </td> </tr> <tr> <td> QuickTime </td> <td> .mov </td> <td> QuickTime. Developed by Apple. Commonly used in video cameras and TV hardware. Plays well on Apple computers, but not in web browsers.
(See MP4) </td> </tr> <tr> <td> RealVideo </td> <td> .rm .ram </td> <td> RealVideo. Developed by Real Media to allow video streaming with low bandwidths. It is still used for online video and Internet TV, but does not play in web browsers. </td> </tr> <tr> <td> Flash </td> <td> .swf .flv </td> <td> Flash. Developed by Macromedia. Often requires an extra component (plug-in) to play in web browsers. </td> </tr> <tr> <td> Ogg </td> <td> .ogg </td> <td> Theora Ogg. Developed by the Xiph.Org Foundation. Supported by HTML5. </td> </tr> <tr> <td> WebM </td> <td> .webm </td> <td> WebM. Developed by the web giants, Mozilla, Opera, Adobe, and Google. Supported by HTML5. </td> </tr> <tr> <td> MPEG-4 or MP4 </td> <td> .mp4 </td> <td> MP4. Developed by the Moving Pictures Expert Group. Based on QuickTime. Commonly used in newer video cameras and TV hardware. Supported by all HTML5 browsers. Recommended by YouTube. </td> </tr> </table>

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 723583.

## 9.4 Audio data

An audio file format is a file format for storing digital audio data on a computer system. The bit layout of the audio data (excluding metadata) is called the audio coding format and can be uncompressed, or compressed to reduce the file size, often using lossy compression. The data can be a raw bitstream in an audio coding format, but it is usually embedded in a container format or an audio data format with a defined storage layer. The allowable audio file formats are MP3 and MP4. This applies mainly to the RE 4 Project promo video, which is expected to contain interviews with key partners, voice-over and music.
Table 3: Audio formats <table> <tr> <th> **Format** </th> <th> **File** </th> <th> **Description** </th> </tr> <tr> <td> MIDI </td> <td> .midi .mid </td> <td> MIDI (Musical Instrument Digital Interface). Main format for all electronic music devices like synthesizers and PC sound cards. MIDI files do not contain sound, but digital notes that can be played by electronics. Plays well on all computers and music hardware, but not in web browsers. </td> </tr> <tr> <td> RealAudio </td> <td> .rm .ram </td> <td> RealAudio. Developed by Real Media to allow streaming of audio with low bandwidths. Does not play in web browsers. </td> </tr> <tr> <td> WMA </td> <td> .wma </td> <td> WMA (Windows Media Audio). Developed by Microsoft. Commonly used in music players. Plays well on Windows computers, but not in web browsers. </td> </tr> <tr> <td> AAC </td> <td> .aac </td> <td> AAC (Advanced Audio Coding). Developed by Apple as the default format for iTunes. Plays well on Apple computers, but not in web browsers. </td> </tr> <tr> <td> WAV </td> <td> .wav </td> <td> WAV. Developed by IBM and Microsoft. Plays well on Windows, Macintosh, and Linux operating systems. Supported by HTML5. </td> </tr> <tr> <td> Ogg </td> <td> .ogg </td> <td> Theora Ogg. Developed by the Xiph.Org Foundation. Supported by HTML5. </td> </tr> <tr> <td> MP3 </td> <td> .mp3 </td> <td> MP3 files are actually the sound part of MPEG files. MP3 is the most popular format for music players. Combines good compression (small files) with high quality. Supported by all browsers. </td> </tr> <tr> <td> MPEG-4 or MP4 </td> <td> .mp4 </td> <td> MP4. Developed by the Moving Pictures Expert Group. Based on QuickTime. Commonly used in newer video cameras and TV hardware. Supported by all HTML5 browsers. Recommended by YouTube. </td> </tr> </table> ## 9.5 Textual data A text file is structured as a sequence of lines of electronic text. These text files shall not contain any control characters including end-of-file marker. 
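The requirement above, that text files contain no control characters (including a legacy end-of-file marker), can be checked mechanically. A minimal sketch, assuming tab and line endings are allowed as ordinary line structure:

```python
# Sketch: flag disallowed control characters in text content.
# Tab, CR and LF are treated as legitimate line structure; everything else
# below 0x20, plus DEL (0x7F), is reported with its position.

ALLOWED = {"\t", "\r", "\n"}

def find_control_chars(text: str):
    """Return (line, column, char) tuples for disallowed control characters."""
    hits = []
    for line_no, line in enumerate(text.splitlines(keepends=True), start=1):
        for col_no, ch in enumerate(line, start=1):
            if ch in ALLOWED:
                continue
            # ord < 32 covers C0 controls such as the 0x1A end-of-file marker.
            if ord(ch) < 32 or ord(ch) == 127:
                hits.append((line_no, col_no, ch))
    return hits

clean = find_control_chars("plain text\nsecond line\n")  # clean == []
dirty = find_control_chars("data\x1a")  # legacy EOF marker is flagged
```

A file failing this check would be cleaned or re-exported before archiving.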
In principle, the least complicated form of textual file format shall be used as the first choice. On Microsoft Windows operating systems, a file is regarded as a text file if the suffix of the file name is "txt". However, many other suffixes are used for text files with specific purposes. For example, source code for computer programs is usually kept in text files whose file name suffixes indicate the programming language in which the source is written. Most Windows text files use "ANSI", "OEM", "Unicode" or "UTF-8" encoding. Prior to the advent of Mac OS X, the classic Mac OS system regarded a file as a text file when its resource fork indicated that the type of the file was "TEXT". Lines of classic Macintosh text files are terminated with CR characters. Being certified Unix, macOS uses the POSIX format for text files; the Uniform Type Identifier (UTI) used for text files in macOS is "public.plain-text". ## 9.6 Numeric data Numerical data is information that often represents a measured physical parameter. It shall always be captured in number form. Other types of data can appear to be in number form, e.g. a telephone number; however, this should not be confused with true numerical data that can be processed using mathematical operators. ## 9.7 Process and test data Standard Test Data Format (STDF) is a proprietary file format originating within the semiconductor industry for test information, but it is now a standard widely used throughout many industries. It is a commonly used format produced for/by automatic test equipment (ATE). STDF is a binary format, but can be converted either to an ASCII format known as ATDF or to a tab-delimited text file. Software tools exist for processing STDF-generated files and performing statistical analysis on a population of tested devices. RE 4 innovation development shall make use of this file type for system testing.
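Since STDF itself is binary, simple downstream analysis typically works on the tab-delimited export mentioned above. A minimal sketch of loading such an export and computing a pass rate; the column names ("part_id", "result") are hypothetical, as a real export's columns depend on the conversion tool:

```python
# Sketch: read a tab-delimited export of STDF test results and compute the
# fraction of tested parts that passed. Column names are illustrative.
import csv
import io

def pass_rate(tab_delimited_text: str) -> float:
    """Fraction of rows whose 'result' column equals 'PASS'."""
    reader = csv.DictReader(io.StringIO(tab_delimited_text), delimiter="\t")
    rows = list(reader)
    if not rows:
        return 0.0
    passed = sum(1 for row in rows if row["result"] == "PASS")
    return passed / len(rows)

sample = "part_id\tresult\n001\tPASS\n002\tFAIL\n003\tPASS\n"
rate = pass_rate(sample)  # 2 of 3 rows pass
```

In practice the same reader would be pointed at a file handle rather than an in-memory string.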
## 9.8 Microsoft Office Application Suite

RE 4 Project partners shall use the currently supported Microsoft operating system and convert files from any obsolete previous releases. The specific applications available within the current Microsoft Windows operating system shall be used to support all project activities in preference to any other software solutions. The data file types associated with these applications shall be saved in the default format and be named in accordance with the file naming convention specified in Section 10. At the Microsoft Office application level, the file properties shall be configured using the “document properties” feature. This is accessed via the “Info” dropdown within the “File” menu. The “properties” and “advanced properties” present a data entry box under the “Summary” tab, as shown in the figure below.

Figure 7. Data Entry Box – Summary

* Title: duplication of the name used for the data file name
* Subject: identifier for RE 4 work package discrimination, of the format RE4_WPxx
* Author: name of the person creating the document, with the surname stated first: surname_firstname_secondname
* Manager: name of the author’s immediate line manager, with the surname stated first: surname_firstname_secondname
* Company: company name of the author, stated as: companyname_RE4 participant number
* Keywords: free-format text containing key words relevant and useful to future data searches; all keywords should be in lower case and separated with commas
* Comments: description of the file contents in free-format text
* Hyperlink base: blank

The tick box indicating “Save Thumbnails for All Word Documents” shall be unticked.
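The property formats above can be generated consistently in code. A minimal sketch; the function names are illustrative, the zero-padding of the work-package number is an assumption about "RE4_WPxx", and the Company rule is read as company name followed by the RE 4 participant number:

```python
# Sketch: build the document-property strings described in Section 9.8.
# Function names are illustrative, not part of any project tooling.

def subject(wp_number: int) -> str:
    """Subject: RE4_WPxx (two-digit zero-padding is an assumption)."""
    return f"RE4_WP{wp_number:02d}"

def person(surname: str, firstname: str, secondname: str = "") -> str:
    """Author/Manager: surname first, underscore-separated."""
    parts = [surname, firstname] + ([secondname] if secondname else [])
    return "_".join(parts)

def company(name: str, participant_number: int) -> str:
    """Company: company name followed by the RE 4 participant number."""
    return f"{name}_{participant_number}"

def keywords(words) -> str:
    """Keywords: lower case, comma-separated."""
    return ",".join(w.lower() for w in words)
```

For example, `person("Smith", "John")` yields "Smith_John" and `keywords(["CDW", "Recycling"])` yields "cdw,recycling".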
## 9.9 Adobe Systems

Portable Document Format (PDF) is a file format developed by Adobe Systems for representing documents in a manner that is independent of the original application software, hardware, and operating system used to create them. A PDF file can describe documents containing any combination of text, graphics, and images in a device-independent and resolution-independent format. These documents can be one page or thousands of pages, very simple or extremely complex, with a rich use of fonts, graphics, colour, and images. PDF is an open standard, and anyone may write applications that read or write PDFs royalty-free. PDF files are especially useful for documents such as magazine articles, product brochures, or flyers in which the original graphic appearance is to be preserved online.

# 10. NAMING CONVENTION

All files, irrespective of the data type, are named in accordance with the following file naming convention:

[PROJECT]_[WORKPACKAGE]_[TASK]_[TITLE]_[VERSION]_[DISSEMINATIONCLASS]_[ARCHIVE]

Where:

* [PROJECT] is RE 4 for all document types (mandatory)
* [WORKPACKAGE] is the RE 4 project work package number, with WP as a prefix
* [TASK] is the RE 4 project task number, with T as a prefix
* [TITLE] represents the description of the data item contents, excluding capitalisation and punctuation characters (mandatory)
* [VERSION] is the version number, consisting of integer numbers only without leading zeros, prefixed with V (mandatory)
* [DISSEMINATIONCLASS] is the dissemination classification allocated to a document type that defines the data access post archiving; it consists of the characters CO and a suffix of a single number in the range 1 to 3
* [ARCHIVE] is a single character defining the allocation of the data item for future archiving, represented by Y or N

# 11. EXPECTED PROJECT RESULTS AND RESEARCH DATA

Expected RE 4 Project results, described by task, are listed in the table below.
The table template is circulated periodically (at month 18, month 36) in order to monitor the results and set the strategy for their sharing. Table 4: Collection of project results and sharing strategy <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP1 Mapping and analysis of CDW reuse and recycling in prefabricated elements </td> <td> CETMA </td> <td> Task 1.1 Diagnosis of CDW management in the EU </td> <td> M1-M6 </td> <td> CETMA </td> <td> Data collection on CDW </td> <td> Report outlining the current CDW management situation, not only in the participating countries but in all European countries, against the background of national waste management plans and prevention programmes. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Statistics assessment </td> <td> Report assessing the information and data collected into the deliverable D1.1 (Data collection on CDW), in order to compare them with data coming from other studies in progress at the time of the Project proposal writing. 
</td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Task 1.2 Current status of construction of prefabricated elements with reused/recycled material </td> <td> M1-M6 </td> <td> ROS </td> <td> Data collection on prefab construction with and without CDW </td> <td> Report providing an overview of the state of the art regarding construction of prefabricated elements with and without CDW and recycling technologies and plants of the different CDW </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Task 1.3 Current status on policy measures and regulatory frameworks </td> <td> M1-M9 </td> <td> CETMA </td> <td> Current status on policy measures and regulatory framework </td> <td> Report outlining EU policies and regulations about prefabricated elements integrating recycled materials from CDW. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Certification framework </td> <td> Report providing a template including all the possible certification issues related to RE 4 final products. </td> <td> xls </td> <td> public </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> Public reports will be shared on RE 4 project website public section, eventually on ZENODO. </td> <td> **Data Archiving and preservation** </td> <td> </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website, which already has its backup procedures.
</td> <td> **Data management Responsibilities** </td> <td> Sonia Saracino </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP2 Strategies for innovative sorting of CDW and reuse of structures from dismantled buildings </td> <td> STAM </td> <td> Task 2.1 CDW material specifications </td> <td> M1-M4 </td> <td> STAM </td> <td> CDW average composition among EU </td> <td> Qualitative and quantitative literature analysis on composition of CDW in different European areas and in different building typologies, with main reference to the material classes in which the CDW can be divided. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> CDE's separating system performance requirements </td> <td> Definition of how the weight-based separation system has to work and </td> <td> pdf </td> <td> public </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> which macro-classes of materials will have to be separated in Task 2.3. </th> <th> </th> <th> </th> </tr> <tr> <th> STAM's sorting system input material definition </th> <th> Definition of the status of separated CDW as required for the proper working of robot-based sorting system in Task 2.4 (material classes, sizes, shapes, etc). </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> STAM's sorting system performance requirements </th> <th> Definition of the required productivity, classes of materials to be sorted from CDW and output size of fragments for the robot-based sorting system to be developed in Task 2.4. 
</th> <th> pdf </th> <th> public </th> </tr> <tr> <th> </th> <th> Task 2.2 Definition of sustainable strategies for the disassembly and reuse of structures and components from dismantled buildings </th> <th> M4-M15 </th> <th> ROS </th> <th> Building Typology Identification </th> <th> Literature review of existing building stock in 5 European countries (Italy, Spain, UK, Sweden and Germany) with regards to numbers of buildings, construction typology and </th> <th> pdf </th> <th> confide ntial* </th> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> energy efficiency standards. </th> <th> </th> <th> </th> </tr> <tr> <th> Summary of requirements and standards with regards to building demolition in Italy, Spain, UK, Sweden and Germany </th> <th> Summary of standards and national codes in relation to the demolition of buildings for five European countries (Italy, Spain, UK, Sweden and Germany). </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> Sustainable Dismantling Strategy with focus on high reusability </th> <th> Literature review of state of the art demolition processes and development of innovative strategy based on experts knowledge. 
</th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> </th> <th> Task 2.3 Innovative strategies and processes for separating CDW based on weight criteria </th> <th> M4-M15 </th> <th> CDE </th> <th> Process flow diagram </th> <th> To be completed </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> Component size and selection </th> <th> To be completed </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> Throughput </th> <th> To be completed </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> Power usage </th> <th> To be completed </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> </th> <th> output material definition </th> <th> To be completed </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <th> Task 2.4 Innovative strategies and processes for sorting </th> <th> M4-M20 </th> <th> STAM </th> <th> Technical requirements for STAM's sorting system components </th> <th> According to the functional specifications for sorting system outlined </th> <th> pdf </th> <th> confide ntial* </th> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> CDW based on advanced robotic system </th> <th> </th> <th> </th> <th> </th> <th> in Task 2.1, definition of the detailed specification of each single component to be integrated. </th> <th> </th> <th> </th> </tr> <tr> <th> Sorting system design </th> <th> Detailed 3D design of the sorting system, including commercial and customized parts to be assembled. </th> <th> stp/dwg </th> <th> confide ntial* </th> </tr> <tr> <th> </th> <th> </th> <th> </th> <th> CDW classification algorithm compiled file </th> <th> Software algorithm to real-time extract material information from the data coming from sensors. </th> <th> TBD (dll or other) </th> <th> confide ntial* </th> </tr> <tr> <th> CDW NIR data </th> <th> Raw and processed data converted from the physical signals detected from sensors about hyperspectral response of different CDW fragments. </th> <th> TBD (csv, xls, etc.) 
</th> <th> confide ntial* </th> </tr> <tr> <th> Performance calculation and results </th> <th> Test campaign on the integrated sorting system in both laboratory and industrial environment, with relevant calculation of performance indexes (productivity, error rate, etc.). </th> <th> xls, MATLAB </th> <th> confide ntial* </th> </tr> <tr> <th> </th> <th> </th> <th> </th> <th> Evaluation and final considerations </th> <th> Collection and evaluation of results from the test campaign. </th> <th> pdf </th> <th> confide ntial* </th> </tr> <tr> <td> </td> <td> </td> <td> Task 2.5 BIM-compatible DSS and platform for CDW estimation and management </td> <td> M4-M24 </td> <td> STRESS </td> <td> Platform as a support for Collection plant managers, recyclers and construction/ demolition companies. </td> <td> Estimation of the types and quantities of CDW that will be generated during construction/ demolition, with possible utilization options and related logistic references. </td> <td> ICT tool; pdf for the tool guideline </td> <td> confide ntial* </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> All public documents in pdf will be shared on RE 4 project website public section and ZENODO, confidential reports on RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE4 project website which already has its backup procedures. 
</td> <td> **Data management** **Responsibilities** </td> <td> Tomasso Zerbi </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP3 Innovative concept for modular/easy installation and disassembly of eco- friendly prefabricated elements </td> <td> ROS </td> <td> Task 3.1 Definition of indicators for easy installation, disassembly, recycling and reuse for newly developed prefabricated elements </td> <td> M1-M12 </td> <td> ROS </td> <td> European design scenario - loads and material properties </td> <td> Report on housing stock, legal requirements and construction options Structural Calculations </td> <td> pdf, xls </td> <td> public </td> </tr> <tr> <td> List and rating of indicators for easy installation </td> <td> Catalogue of building components with specific data regarding design and dimensions </td> <td> pdf </td> <td> public </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <th> Concept design of dismountable building system </th> <th> Set of Drawings M1:100 Structural Calculations </th> <th> dwg, pdf, xls </th> <th> public </th> </tr> <tr> <th> Concept design of dismountable connections </th> <th> Set of Drawings M1:20 Details 1:5 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <th> Task 3.2 Design concept for prefabricated elements for the refurbishment of residential or commercial build </th> <th> M3-M30 </th> <th> ROS </th> <th> Strategy for the thermal optimisation of existing façade elements </th> <th> Report on housing stock, legal requirements and construction options </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> Design of façade/roof element for extensions </th> <th> Set of Drawings M1:50 Details 
1:10 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <th> Task 3.3 Design concept for the development of components for the new construction of residential or commercial buildings </th> <th> M3-M30 </th> <th> ROS </th> <th> Design of structural system </th> <th> Set of Drawings M1:50 Details 1:10 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <th> </th> <th> Concept design of foundations </th> <th> Set of Drawings M1:50 Details 1:10 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <th> Concept design of slab elements </th> <th> Set of Drawings M1:50 Details 1:10 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <th> </th> <th> Concept design of bearing / non-bearing walls </th> <th> Set of Drawings M1:50 Details 1:10 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <th> Concept design of facade elements </th> <th> Set of Drawings M1:50 Details 1:10 </th> <th> dwg, pdf </th> <th> public </th> </tr> <tr> <td> </td> <td> </td> <td> Task 3.4: Numerical modelling to support the prototypes design and to predict the prototypes performance </td> <td> M7-M33 </td> <td> STRESS </td> <td> FE models and results of numerical simulations (structural, thermal and fire resistance data) </td> <td> FE analysis results, including performances (thermal, structural and fire resistance) of the developed prefabricated elements </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> All public documents in will be shared on RE 4 project website public section and ZENODO, confidential reports on RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website which already has its backup procedures. 
</td> <td> **Data management** **Responsibilities** </td> <td> Andrea Klinge </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP4 Technical characterisation of CDW-derived materials for the production of building elements </td> <td> QUB </td> <td> Task 4.1 Collection of representative samples of CDW sorted material </td> <td> M3-M6 </td> <td> CDE </td> <td> Collection of North & South samples - mixed source </td> <td> Unsorted CDW samples collected from two different recycling plants. The first recycling plant located in UK (N-EU), the second one located in Southern France (S-EU) </td> <td> N/A </td> <td> N/A </td> </tr> <tr> <td> Delivery of samples to partners n/a </td> <td> Unsorted CDW samples from the above two recycling plants delivered to CDE and QUB. </td> <td> N/A </td> <td> N/A </td> </tr> <tr> <td> Physical assessment of CDW samples </td> <td> Manual sorting and sieving of unsorted CDW </td> <td> MS word </td> <td> confide ntial* </td> </tr> <tr> <td> Assessment methods </td> <td> Manual sorting and sieving of unsorted CDW in order to determine its precise composition in terms of the following: ▪ Silt/clay (< 0.075 mm) * Fine sand (0.075-0.6 mm) * Medium/coarse sand (0.6-4 mm) * Mixed concrete/mineral aggregate (4-8 mm) * Mixed concrete/mineral aggregate (8-16 mm) * Mixed concrete/mineral aggregate (16- 20 mm) * Mixed concrete/mineral aggregate (> 20 mm) * Mixed mortar/plaster (> 1.7 mm) * Ceramics (bricks & tiles > 1.7 mm) * Glass (> 1.7 mm) * Steel (nails, re-bars, hooks, tags etc. 
> 1.7 mm) Lightweight (mixed plastics/wood > 1.7 mm) </td> <td> MS word </td> <td> confide ntial* </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> Results and discussion </th> <th> Deliverable 4.1 </th> <th> MS word </th> <th> confide ntial* </th> </tr> <tr> <th> Task 4.2 Characterisation of CDW-derived materials </th> <th> M5-M11 </th> <th> QUB </th> <th> Chemical and durability characterisation of mineral fraction </th> <th> The following types of tests were performed: * Petrographic description * Constituent classification of coarse aggregate * Water-soluble chloride salt content * Water-soluble sulfate content * Carbonate content of fine aggregates * Organic matter (humus & fulvo acid content) * Water soluble components from recycled aggregate * Total/bulk chemistry of the material * Resistance to freezing & thawing * Resistance to weathering * Volume stability-drying shrinkage Alkali-silica reactivity </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> </th> <th> </th> <th> Geometrical and physical characterisation of mineral fraction </th> <th> The following types of tests were performed: * Grading * Flakiness index * Flow coefficient of fine aggregates * Resistance to fragmentation * Resistance to wear Particle density and water absorption </th> <th> pdf/xls </th> <th> public </th> </tr> <tr> <th> </th> <th> </th> <th> Lightweight fraction characterisation </th> <th> The following types of tests were performed on Wood&Plastic and Rigid Plastic fractions: * Grain size * Density * Water absorption The following types of tests were performed on Wood Flakes fraction: ▪ Grain size </th> <th> pdf </th> <th> public </th> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> ▪ Density Moisture content </th> <th> </th> <th> </th> </tr> <tr> <th> Fine fraction characterisation </th> <th> The following types of tests were performed: * Liquid limit * Plastic limit * Plasticity * 
XRF analysis * XRD analysis * FTIR Spectroscopy * DTA-TGA * Soluble components (humus, fulvo acid, sulfates & chlorides * Soluble components (alkalis) * Activity index Reactivity/isothermal calorimetry </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> Physical assessment of timber from CDW </th> <th> The following types of tests were described in detail: * In situ strength assessment * Visual on-site inspection * Assessment on chemical contamination * On-site separation In addition, a reprocessing exercise was performed </th> <th> pdf </th> <th> public </th> </tr> <tr> <td> </td> <td> </td> <td> Task 4.3 Variability of the chemical-physical features of CDW-derived materials and effect on technological properties of developed products </td> <td> M9-M18 </td> <td> QUB </td> <td> Variability of the chemicalphysical features of CDW aggregate and effect on compressive strength development assessment </td> <td> The following types of tests were performed: * Grading of fine and coarse aggregate * Constituent classification of coarse aggregate * Water absorption & particle density * Water-soluble chloride content * Water-soluble sulfate content * Slump of fresh OPC concrete * Fresh density of OPC concrete * Hardened density of OPC concrete </td> <td> pdf </td> <td> public </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> ▪ Compressive strength of OPC concrete Tensile strength of OPC concrete </th> <th> </th> <th> </th> </tr> <tr> <th> Variability of the chemicalphysical features and effect on the insulation assessment </th> <th> The following types of tests were performed on Wood&Plastic and Rigid Plastic fractions used for insulating mortars: * Grading * Water absorption & particle density * Consistency of insulating mortars * Fresh density of insulating mortars * Hardened density of insulating mortars * Flexural strength of insulating mortars * Compressive strength of 
insulating mortars * Thermal conductivity of insulating mortars * Specific heat capacity of insulating mortars * Water vapour resistance factor of insulating mortars The following types of tests were performed on Wood flakes fraction used for insulating wood-based panels: * Grading * Water absorption & particle density * Density of wood-based insulating panels * Thermal conductivity of wood-based insulating panels * Specific heat capacity of wood-based insulating panels * Water vapour resistance factor of wood-based insulating panels </th> <th> pdf </th> <th> public </th> </tr> <tr> <td> </td> <td> </td> <td> Task 4.4 Definition of quality classes for utilisation in different applications </td> <td> M16-M20 </td> <td> RISE (CBI) </td> <td> Quality classes </td> <td> Definition of quality classes for each CDW sorted fraction (mineral aggregate, lightweight aggregate & timber) </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Potential applications for recovered CDW-derived materials </td> <td> Classification of each CDW sorted fraction (mineral aggregate, lightweight aggregate & timber) according to the potential for structural and non-structural applications </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Task 4.5 Development of alkali activated binders from sorted brick and tiles waste </td> <td> M11-M20 </td> <td> QUB </td> <td> Investigate the potential for alkali activation of ceramic (bricks and tiles) fraction </td> <td> Development of alkali activated binder from ceramic CDW </td> <td> pdf </td> <td> confidential* </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> All public documents will be shared on the RE 4 project website public section and ZENODO, confidential reports on the RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website which already has its backup procedures.
</td> <td> **Data management Responsibilities** </td> <td> Konstantinos Grigoriadis </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP5 Development of precast components and elements from CDW </td> <td> RISE (CBI) </td> <td> Task 5.1: Development of materials incorporating CDW, Portland cement and alkali activated binders </td> <td> M2-M18 </td> <td> CETMA </td> <td> Formulations of concrete with CDW+OPC </td> <td> Mix designs of an ordinary concrete mix and self-compacting concrete mixes, based on 70-100% replacement of virgin aggregates with CDW aggregates, and using OPC as binder. Fresh and hardened concrete properties for both mix designs, including durability performance. </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> Formulations of concrete with CDW+AAB </td> <td> Mix designs of an ordinary concrete mix, based on up to 100% replacement of virgin aggregates with CDW aggregates, and using alternative materials as binders (fly ash and slag). Fresh and hardened concrete properties for both mix designs, including durability performance. </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> Formulations of lightweight concrete with CDW+OPC and CDW+AAB </td> <td> Formulations of lightweight concrete with CDW+OPC and CDW+AAB </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> Formulations of earth plaster and adhesive from CDW </td> <td> Formulations of earth plaster and adhesive from CDW </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> Task 5.2: Development of prefabricated components </td> <td> M6-M24 </td> <td> QUB </td> <td> Development of building blocks </td> <td> Blocks prepared with 100% recycled aggregate, varying the cement content and bulk density of the block to achieve a minimum </td> <td> pdf </td> <td> confidential* </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> strength of 7.3 MPa. Data is compared to control blocks whenever appropriate. </th> <th> </th> <th> </th> </tr> <tr> <th> Development of reconstituted tiles </th> <th> Reconstituted tiles will be developed using unsorted CDW material. A proper binder will be tested in order to optimise the process. Moulding and extrusion processes will be used for the above purpose. </th> <th> pdf </th> <th> confidential* </th> </tr> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> Development of timber beams and columns (structural support system) and weatherboarding </th> <th> The use of recycled laminated timber, laminated veneer lumber, plywood box and nail plated timber beams in new structural timber elements and systems will be investigated. </th> <th> pdf </th> <th> confidential* </th> </tr> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> Development of insulation panels </th> <th> Two different types of insulating panels will be investigated: wood fibre panels and composite plastic panels. </th> <th> pdf </th> <th> confidential* </th> </tr> <tr> <th> Task 5.3: Development of prefabricated elements </th> <th> M6-M31 </th> <th> RISE (CBI) </th> <th> Cladding panel for ventilated façade applications </th> <th> Design, dimensions and type of material to use. Production of lab-scale specimens. </th> <th> Blueprint/ Prototype </th> <th> confidential* </th> </tr> <tr> <th> </th> <th> Sandwich element with integrated insulation </th> <th> Design, dimensions and type of material to use. Production of lab-scale specimens. </th> <th> Blueprint/ Prototype </th> <th> confidential* </th> </tr> <tr> <th> </th> <th> Load bearing concrete element </th> <th> Design, dimensions and type of material to use. Production of lab-scale specimens. </th> <th> Blueprint/ Prototype </th> <th> confidential* </th> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Non-load bearing internal partition wall </td> <td> Design, dimensions and type of material to use. Production of lab-scale specimens. </td> <td> Blueprint/ Prototype </td> <td> confidential* </td> </tr> <tr> <td> Development of mix designs for up-scaled structural elements </td> <td> Refinement of mix designs for the production of up-scaled structural elements. </td> <td> Blueprint/ Prototype </td> <td> confidential* </td> </tr> <tr> <td> Task 5.4: Refinement and production of pre-fab test elements </td> <td> M18-M31 </td> <td> QUB </td> <td> Development of mix designs suitable for a range of applications </td> <td> Refinement of mix designs for the production of up-scaled non-structural elements such as building blocks. </td> <td> pdf </td> <td> confidential </td> </tr> <tr> <td> </td> <td> </td> <td> Development of formulations for building blocks with a reduced carbon footprint </td> <td> Test results on factory-scale prototypes </td> <td> pdf </td> <td> confidential* </td> </tr> <tr> <td> Task 5.5: Performance and durability testing of larger precast and timber elements </td> <td> M20-M31 </td> <td> RISE (CBI) </td> <td> Test results on sandwich elements </td> <td> Test results on factory-scale prototypes </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> Test results on load bearing elements </td> <td> Test results on factory-scale prototypes </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> </td> <td> </td> <td> Test results on cladding panels </td> <td> Design, dimensions and type of material to use. Production of lab-scale specimens. </td> <td> xls </td> <td> confidential* </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> All data produced within WP5 are confidential and will be shared with other project partners on the RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website which already has its backup procedures.
</td> <td> **Data management** **Responsibilities** </td> <td> Linus Brander </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP6 Pilot level demonstration of CDW based prefabricated elements </td> <td> ACCIONA </td> <td> Task 6.1 Design and adapt the prefabricated elements production line </td> <td> M23-M28 </td> <td> VORTEX </td> <td> Technical and technological information and data related to the line adjustment for the extrusion of concrete containing CDW for roof tiles/floor tiles/façade </td> <td> Description of the applied modifications of the production line for the usage of the CDW materials. Pictures/videos of the production process. Technical reports. </td> <td> pdf, dwg, jpeg </td> <td> confidential* </td> </tr> <tr> <td> Task 6.2 Manufacture and testing of the prefabricated elements prototypes, quality control and characterization </td> <td> M26-M32 </td> <td> CREAGH </td> <td> Production of the prefabricated elements to build the RE 4 demonstrator </td> <td> Industrial scale production of selected elements developed in WP5, namely concrete precast structural and non-structural façade panels and support structures, as well as timber frame structural and non-structural elements, extruded roof and floor tiles and façade elements. Furthermore, insulating panels with plastic and wood are to be manufactured. </td> <td> samples; pdf; jpg </td> <td> confidential* </td> </tr> <tr> <td> Test results on the prefabricated elements </td> <td> Standardized tests will be carried out by RE4 partners to control the quality of the produced </td> <td> xls </td> <td> confidential* </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> elements. All the results obtained from these tests will be collected and will feed Task 7.5. </th> <th> </th> <th> </th> </tr> <tr> <th> Task 6.3 New residential and/or commercial buildings made of a high ratio of waste: design and construction of the demonstrator </th> <th> M18-M36 </th> <th> ACCIONA </th> <th> Design of the RE 4 demonstrator </th> <th> Detailed design of the RE 4 demonstrators using the RE4 elements developed in WP5. </th> <th> pdf </th> <th> confidential* </th> </tr> <tr> <th> Construction of the RE 4 demonstrator </th> <th> Construction of the RE 4 demonstrators using the selected RE 4 elements </th> <th> pdf, jpg, avi </th> <th> public </th> </tr> <tr> <td> </td> <td> </td> <td> Task 6.4 Refurbishment of residential and/or commercial building: installation of the panels/blocks on an existing façade </td> <td> M31-M36 </td> <td> STRESS </td> <td> Results of tests: testing and validation of the RE4 elements for refurbishment at the STRESS facility. </td> <td> Description of the set-up of the installation of the façade panels. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Task 6.5 Disassembly demonstration of conventional building vs. 6.3 and 6.4 demonstrators </td> <td> M18-M40 </td> <td> ACCIONA </td> <td> Disassembly demonstration of RE4 demonstrator vs conventional building </td> <td> Demonstration and monitoring of the easy disassembly of the RE4 demonstrator and comparison with the demolition of a conventional building </td> <td> pdf, jpg, avi </td> <td> public </td> </tr> <tr> <td> </td> <td> </td> <td> Disassembly demonstration of the 6.3 demonstrator </td> <td> Demonstration and monitoring of the easy disassembly of the Task 6.3 demonstrator (the one refurbished) </td> <td> pdf, jpg </td> <td> public </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> Evaluation and quantification of the disassembly demonstration </th> <th> Evaluation and quantification of the disassembly efficiency using the indicators and the deconstruction strategy developed in WP3 and WP2 respectively, in order to be able to compare the processes properly in a standardized way. </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> Task 6.6 Monitoring and techno-economic analysis of the performance. Validation </th> <th> M36-M40 </th> <th> ACCIONA </th> <th> Monitoring strategy </th> <th> Definition of the monitoring plan (type and number of sensors, acquisition system, monitoring period, etc.) to properly monitor the energy efficiency of the RE 4 demonstrators. </th> <th> pdf </th> <th> confidential* </th> </tr> <tr> <th> Results of the monitoring. Validation of performance. </th> <th> Data analysis of all the measurements taken in order to validate the performance of the RE 4 demonstrator. </th> <th> pdf, xls </th> <th> confidential* </th> </tr> <tr> <th> Techno-economic analysis </th> <th> Validation of the RE 4 building from a techno-economic point of view, comparing the results obtained in the monitoring </th> <th> pdf, xls </th> <th> confidential* </th> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> with the model carried out in WP3. </td> <td> </td> <td> </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> Demonstration activities will be shared on the RE 4 project website and social network profiles. Public reports will be shared on the RE 4 project website public section; confidential reports and data on the private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT department. Data will be stored on the RE 4 project website which already has its backup procedures. </td> <td> **Data management** **Responsibilities** </td> <td> María Casado </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP7 Life-cycle and HSE analysis and certification/standardization strategy definition </td> <td> STRESS </td> <td> Task 7.1 Inputs related to scaled-up processes </td> <td> M28-M30 </td> <td> STRESS </td> <td> Scaled-up processes </td> <td> Industrial data collection on state-of-the-art competing products (to be utilized as a benchmark). Conceptual design of the RE4 processes scaled up at industrial scale (materials and energy balances, equipment list and layout, process flow diagrams…) </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Task 7.2 Goal & Scope definition </td> <td> M13-M24 </td> <td> STRESS </td> <td> Goal and Scope Report </td> <td> Development of a common assessment framework for LCA, LCCA and S-LCA, in order to compare, in an integrated and consistent way, the environmental, economic and social impacts of the RE4 technologies/products versus standard solutions.
</td> <td> pdf (LCA, LCCA and S-LCA); jpeg (graphic representation) </td> <td> public </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> Task 7.3 Inventory </th> <th> M18-M32 </th> <th> RISE (CBI) </th> <th> Metadata for all data: reference, year of inventory, data valid to year, regional validity, technical specification, modules included. </th> <th> Definition and collection of all data for the LCA/LCC/S-LCA analysis </th> <th> xlsx </th> <th> General life cycle data are public. No confidential data are known yet. </th> </tr> <tr> <th> </th> <th> </th> <th> If data quality is good, material impact data for modules A1-A5, B1-B6, C1-C4 according to EN 15804. </th> <th> xlsx </th> <th> General life cycle data are public. No confidential data are known yet. </th> </tr> <tr> <th> Building element data: type, weight, dimensions, service life, including materials, etc. </th> <th> xlsx </th> </tr> <tr> <th> Process environmental data: waste management, transport, construction, deconstruction, material upgrading, etc. </th> <th> xlsx </th> </tr> <tr> <th> </th> <th> </th> <th> Task 7.4 Assessment and interpretation </th> <th> M21-M40 </th> <th> RISE (CBI) </th> <th> LCA results in tables and figures; impact categories and modules according to EN 15804 </th> <th> Definition and interpretation of the LCA, LCC and S-LCA results. </th> <th> xlsx </th> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> for all scenarios and sensitivity analysis </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> </td> <td> </td> <td> Task 7.5 HSE issues analysis </td> <td> M21-M42 </td> <td> CREAGH </td> <td> Products HSE analysis report </td> <td> Assessing potential risks for the environment and for the workers’ health, due to the investigated processes and related to the possible presence or release of dangerous substances </td> <td> pdf </td> <td> confidential* </td> </tr> <tr> <td> </td> <td> </td> <td> Processes HSE analysis report </td> <td> pdf </td> <td> confidential* </td> </tr> <tr> <td> </td> <td> </td> <td> Task 7.6 Certification strategies, technical documentation and contribution to standardization </td> <td> M31-M42 </td> <td> QUB </td> <td> Development of technical documentation in the form of technical data sheets (TDS/DoP) </td> <td> Summarising the performance and other technical characteristics of the product, machine, component, material, subsystem or software in sufficient detail to be used by an engineer. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> </td> <td> </td> <td> Drafting of preliminary EPD documentation based on the existing product categories for the EPD (EN 15804) and complying with ISO 14025 Type III </td> <td> Communicating transparent and comparable information about the life-cycle environmental impact of the products, collected and quantified according to the criteria defined by international standards. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Analysis of the most convenient certification strategy for each expected RE4 product </td> <td> Considering the four steps in the process for certification: Application (including testing); Evaluation (qualification criteria); Decision (second </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> review); Surveillance (the product in the marketplace). </td> <td> </td> <td> </td> </tr> <tr> <td> **Data Sharing** </td> <td> </td> <td> All public documents in pdf, xls and jpg will be shared on the RE 4 project website public section and ZENODO; confidential reports and data on the RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website which already has its backup procedures. </td> <td> **Data management** **Responsibilities** </td> <td> Loredana Napolano </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP8 Training, dissemination and exploitation </td> <td> FENIX </td> <td> Task 8.1 Use of economic instruments and waste management performances </td> <td> M1-M9 </td> <td> ACR+ </td> <td> Use of economic instruments </td> <td> Assessment of economic instruments of CDW management for representative European countries. </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Task 8.2 Business modelling and business plans </td> <td> M18-M42 </td> <td> FENIX </td> <td> Business Models and Business Plans for RE 4 </td> <td> Business models based on the new value chain; business plans focused on the sustainability strategy and replicability.
</td> <td> pdf </td> <td> Confid ential </td> </tr> <tr> <td> Task 8.3 Market assessment, Exploitation and IPR management </td> <td> M1-M42 </td> <td> FENIX </td> <td> Market Assessment </td> <td> Preliminary market analysis of the European market of CDW recycling and reuse </td> <td> pdf </td> <td> public </td> </tr> <tr> <td> Exploitation Plan and IPR strategy </td> <td> Identification of the key exploitable results, exploitation forms, </td> <td> pdf </td> <td> Confid ential </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> competition, risk analysis, potential obstacles </th> <th> </th> <th> </th> </tr> <tr> <th> IPR manual </th> <th> Background knowledge and existing patents mapping, potentially overlapping IPR, optimal IPR protection </th> <th> pdf </th> <th> Confid ential </th> </tr> <tr> <th> </th> <th> Task 8.4 Dissemination </th> <th> M1-M42 </th> <th> FENIX </th> <th> Images: Images and logos from project partners, photos/videos from dissemination events, project promo videos consisting of animated graphical images, filming, voice over and music. Promo materials shared online. 
_The owner gives permission to FENIX to use images for dissemination purposes of RE 4 _ </th> <th> .eps,.jpe g,.png, mpeg, .avi, .mp4, pdf </th> <th> public </th> </tr> <tr> <th> Dissemination and Communication Plan </th> <th> Report identifying target audiences, key messages, communication channels, roles and timelines </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> Task 8.5 Data Management </th> <th> M1-M6 </th> <th> FENIX </th> <th> Data Management Plan </th> <th> Report analysing the main data uses and restrictions related to IPR according to the Consortium Agreement </th> <th> pdf </th> <th> public </th> </tr> <tr> <th> Task 8.6 Training </th> <th> M24-M42 </th> <th> FENIX </th> <th> Data sheets, videos for training purposes (webinars, workshops, social profiles) </th> <th> pdf </th> <th> public </th> </tr> <tr> <td> **Data Sharing** </td> <td> All dissemination and promo material will be shared on RE 4 project website, social network profiles, videos on YouTube, thematic portals. Public reports will be shared on RE 4 project website public section, confidential reports on private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website which already has its backup procedures. 
</td> <td> **Data management** **Responsibilities** </td> <td> Petra Colantonio </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP9 Project Management </td> <td> CETMA </td> <td> Task 9.1 Project organization and planning </td> <td> M1-M42 </td> <td> CETMA </td> <td> Strategic Action Plan </td> <td> Report defining the general approach to quality assurance and the procedures to be followed for the production of outcomes such as deliverables or reports. A risk management plan is included. </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> Task 9.2 Risk management </td> <td> M1-M42 </td> <td> CETMA </td> <td> Risk management plan </td> <td> N/A: it is included in the Strategic Action Plan </td> <td> N/A </td> <td> N/A </td> </tr> <tr> <td> Task 9.3 Monitoring and evaluation </td> <td> M1-M42 </td> <td> CETMA </td> <td> Monitoring and evaluation Plan </td> <td> Report defining strategies, methods and tools to manage the Project and track its progress with respect to the Strategic Action Plan (SAP) </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> Task 9.4 Reporting to the EC </td> <td> M1-M42 </td> <td> CETMA </td> <td> First short interim management report </td> <td> Report providing the status of activities performed in the RE 4 Project from M1 to M6 </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> Second short interim management report </td> <td> Report providing the status of activities performed in the RE 4 Project from M7 to M12 </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Third short interim management report </td> <td> Report providing the status of activities performed in the RE 4 Project from M19 to M24 </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> Final short interim management report </td> <td> Report providing the status of activities performed in the RE 4 Project from M30 to M36 </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> **Data Sharing** </td> <td> Reports will be shared on the RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments. Data will be stored on the RE 4 project website which already has its backup procedures. </td> <td> **Data management Responsibilities** </td> <td> Sonia Saracino </td> </tr> </table> <table> <tr> <th> **WP number and name** </th> <th> **WP** **lead** </th> <th> **Task number and name** </th> <th> **Duration** </th> <th> **Task lead** </th> <th> **Dataset name** </th> <th> **Dataset description** </th> <th> **Format** </th> <th> **Level** </th> </tr> <tr> <td> WP10 Ethics requirements </td> <td> CETMA </td> <td> N/A </td> <td> M1-M42 </td> <td> CETMA </td> <td> EPQ - Requirements </td> <td> Report providing further information about the possible harm to the environment caused by the research, stating the measures that will be taken to mitigate the risks and ensuring that appropriate health and safety procedures conforming to relevant local/national guidelines/legislation are followed for the staff involved in this Project. </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> NEC - Requirements </td> <td> Report examining ethical issues involved in the RE 4 Project, especially those related to the participation of a non-EU country in the research activities. </td> <td> pdf </td> <td> Confidential </td> </tr> <tr> <td> **Data Sharing** </td> <td> Reports will be shared on the RE 4 project website private section. </td> <td> **Data Archiving and preservation** </td> <td> Regular backup of data on server, managed by IT departments.
Data will be stored on the RE 4 project website which already has its backup procedures. </td> <td> **Data management Responsibilities** </td> <td> Sonia Saracino </td> </tr> </table>

*To be kept confidential at least until the end of the RE 4 project.

<table> <tr> <th> </th> <th> This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 723583 </th> <th> </th> </tr> </table>

# 12\. PUBLICATIONS

The RE 4 Consortium is willing to submit papers for scientific/industrial publication during the course of the RE 4 Project. In the framework of the Dissemination and Communication Plan agreed by the GA, R&D partners are responsible for the preparation of the scientific publications, while the Scientific and Technical Committee (Scientific and Technical Manager and WP leaders) is responsible for review and final approval.

As a general approach, the R&D partners are responsible for the scientific publications as well as for the selection of the publisher considered most relevant for the subject matter. Each publisher has its own policies on self-archiving:

* **Green open access** : researchers can deposit the final version of their published article (peer-reviewed manuscript) into a subject-based repository or an institutional repository before, after or alongside its publication. Access to this article is often delayed (embargo period). Publishers recoup their investment by selling subscriptions and charging pay-per-download/view fees.

* **Gold open access:** author-pays publishing; a publication is immediately provided in open access mode by the scientific publisher. Associated costs are shifted from readers to the university or research institute to which the researcher is affiliated, or to the funding agency supporting the research. (e.g. _http://www.springer.com/gp/_ , _https://www.elsevier.com/_ , _https://www.oasis-open.org/_ , _http://www.sherpa.ac.uk/romeo/index.php_ )

After the paper is published and the license for open access is obtained, the R&D partner will contact the Dissemination and Exploitation Manager (FENIX), who is responsible for RE 4 data management and who will upload the publication to the RE 4 project website and deposit it in the OpenAIRE or Zenodo repository, indicating in the metadata the project it belongs to. For adequate identification of accessible data, all the following metadata information will be included:

* Information about the grant number, name and acronym of the action: **European Union (EU), Horizon 2020 (H2020), Research and Innovation Action (RIA), RE 4 acronym, GA N° 723583**

* Information about the publication date and embargo period, if applicable: **Publication date, Length of embargo period**

* Information about the persistent identifier (for example a **Digital Object Identifier** , DOI), if any, provided by the publisher (for example an **ISSN number** )

More detailed rules and processes for OpenAIRE and Zenodo can be found in the FAQ at https://www.openaire.eu/support/faq.

_RE4_Deliverable D8.4_Data Management Plan update (M18)_FINAL_V3.0_ _© RE 4 Consortium - This document and the information contained are RE 4 consortium property and shall not be copied or disclosed to any third party without RE 4 consortium prior written authorisation_

# 13\. RE 4 DATA MANAGEMENT PLAN PROGRESS

Table 5 lists the datasets shared publicly at month 18.
The other public deliverables already available at M18 (D1.2, D1.3, D1.4, D1.5, D4.3 and the updated version of D8.4) will be shared publicly after the final approval of the European Commission. Table 5: Datasets shared publicly at month 18 <table> <tr> <th> **WP** </th> <th> **WP** **lead** </th> <th> **Dataset name** </th> <th> **Format** </th> <th> **Type** </th> <th> **Open access** </th> <th> **Links** </th> </tr> <tr> <td> WP1 </td> <td> CETMA </td> <td> RE4_D1.1_Data collection on CDW_Final_V1.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4_Valorization of construction and demolition wastes: RE4 building solutions_ AMAM2017_CETMA </td> <td> pdf </td> <td> Paper </td> <td> RE 4 website public, ProScience proceedings, ZENODO </td> <td> _https://www.scientevents.com/proscience/download/valorization-of-industrial-wastes-sus-con-building-solutions/_ (DOI:10.14644/amamicam.2017.001) _https://doi.org/10.5281/zenodo.1185180_ </td> </tr> <tr> <td> WP2 </td> <td> STAM </td> <td> RE4_D2.1_CDW specifications and material requirements for prefabricated structures_Final_V2.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public, ZENODO </td> <td> _http://www.re4.eu/documents/deliverables_ _https://doi.org/10.5281/zenodo.1175607_ </td> </tr> <tr> <td> RE4_Indexing and sorting robot based on hyperspectral and reflectance information for CDW recycling_HISER conference_STAM </td> <td> pdf </td> <td> Paper </td> <td> RE 4 website public, ZENODO </td> <td> _http://www.re4.eu/documents/publications/scientificpublications_ _https://doi.org/10.5281/zenodo.1175617_ </td> </tr> <tr> <td> WP4 </td> <td> QUB </td> <td> RE4_D4.1_Composition of materials from demolition and available volumes of sorted fractions_Final_V2.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public, ZENODO </td> <td> _http://www.re4.eu/documents/deliverables_ _https://doi.org/10.5281/zenodo.1185211_ </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> RE4_D4.2_Geometrical, physical and chemical characterisation of CDW-derived materials_Final_V2.0 </th> <th> pdf </th> <th> Deliverable </th> <th> RE 4 website public, ZENODO </th> <th> _http://www.re4.eu/documents/deliverables_ _https://doi.org/10.5281/zenodo.1185221_ </th> </tr> <tr> <td> WP8 </td> <td> FENIX </td> <td> RE4_D8.1_Website creation_FENIX_Final_V0.3 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4_D8.2 Communication and Dissemination Plan_Final_V2.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4_D8.3_Promo Material Design_Final_V2.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4_D8.4_Data Management Plan_Final_V2.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4_D8_5_UseOfEconomicInstruments_Final _V4.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4_D8_6_MarketAssessment_Final_V2.0 </td> <td> pdf </td> <td> Deliverable </td> <td> RE 4 website public </td> <td>
_http://www.re4.eu/documents/deliverables_ </td> </tr> <tr> <td> RE4-Folder-A4-EN-WEB (2) RE4-Infographic RE4_KoM_POSTER RE4_roll up poster </td> <td> Png Pdf Pdf Pdf </td> <td> Promo materials </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/promo-material_ </td> </tr> <tr> <td> Re4_first newsletter_Sep 2017 </td> <td> pdf </td> <td> e-Newsletter </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/promomaterial/newsletters_ </td> </tr> <tr> <td> La Gazzetta Del Mezzogiorno_Kick-off meeting RE4_14.09.16 RE4_CBI-nytt 2 2016 RE4_KoM_QuotidianoDiPuglia_14112016 RE4_LaGazzetta_KoM_09092016 RE4_LaRepubblica_Nov2016 </td> <td> Pdf Pdf Jpg Jpg jpg </td> <td> Publication </td> <td> RE 4 website public </td> <td> _http://www.re4.eu/documents/publications/popularized-publications_ </td> </tr> </table>

# 14\. CONCLUSION

This report contains the second release of the Data Management Plan and represents the status of the mandatory quality requirements at month 18. This report should be read in association with all the referenced documents and appendices, including the EC Grant Agreement and Consortium Agreement, annexes and guidelines. The report will be subject to revisions as required to meet the needs of the RE 4 project and will be formally reviewed at month 36 to ensure ongoing fitness for purpose.
At month 18, more detailed information about dataset description, sharing, archiving, preservation and responsibilities was updated by each WP leader; the outcomes can be seen in Table 4 (Collection of project results and sharing strategy). Data already shared publicly with open access for the RE 4 project are listed in Table 5 (Datasets shared publicly at month 18), with links where they can be accessed and downloaded. The other public deliverables already submitted (D1.2, D1.3, D1.4, D1.5, D4.3 and the updated version of D8.4) will be shared publicly after the final approval of the European Commission.
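The open-access records listed in Table 5 can also be tracked programmatically. The sketch below is illustrative only and not part of the RE 4 deliverable: the helper name is hypothetical, and the DOIs are those listed in the table above, normalized into resolvable `https://doi.org/` URLs.

```python
# Hypothetical helper, not part of the RE 4 deliverable: normalize the
# Zenodo DOIs listed in Table 5 into resolvable https://doi.org/ URLs.

ZENODO_DOIS = [
    "10.5281/zenodo.1185180",  # AMAM2017 paper (CETMA)
    "10.5281/zenodo.1175607",  # D2.1 (STAM)
    "10.5281/zenodo.1175617",  # HISER conference paper (STAM)
    "10.5281/zenodo.1185211",  # D4.1 (QUB)
]

def doi_url(doi: str) -> str:
    """Return the canonical DOI resolver URL for a plain or prefixed DOI."""
    doi = doi.strip()
    if doi.startswith("https://doi.org/"):
        doi = doi[len("https://doi.org/"):]
    return f"https://doi.org/{doi}"

for doi in ZENODO_DOIS:
    print(doi_url(doi))
```

Resolving the DOI rather than a repository-specific URL is what keeps such an index stable if the hosting platform reorganizes its links.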
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1166_Z-Fact0r_723906.md
# 2 Introduction

The amount of data generated is continuously increasing, while the use and re-use of data to derive new scientific findings remains relatively stable. Data remain useful in the future only if they are well documented according to accepted and trusted standards, which enable suitable data to be recognized through negotiated agreements on standards, quality levels and sharing practices. For this purpose, the DMP defines strategies to preserve and store data over a defined period of time in order to ensure their availability and re-usability after the end of the Z-Fact0r project. According to the Guidelines on the ORD Pilot in H2020, research data refers to information, in particular facts or numbers, collected to be examined and considered and to serve as a basis for reasoning, discussion, or calculation. The overall objective of the Z-Fact0r project is to develop zero-defect manufacturing strategies for on-line production. Z-Fact0r aims to contribute to the eradication of defects in manufacturing, providing better product quality, increasing flexibility, and reducing production costs. Thus, research activities are focused more on the production process and tools than on the production or observation of research data, so the amount of research data that will be produced within the project is limited, at least at this stage of the project.

## 2.1.1 Participation in the pilot on open research data

The EC is running a flexible pilot under H2020 called the ORD pilot. The ORD pilot aims to improve and maximize access to and re-use of research data generated by H2020 projects, while taking into account the need to balance openness and protection of scientific information, commercialization and IPR, privacy concerns and security, as well as data management and preservation issues. The 2017 work programme of the ORD pilot has been extended to cover all the thematic areas of Horizon 2020.
Following the recommendation of the EC, the Z-Fact0r project is participating in the ORD Pilot, and the DMP is deliverable D9.4, due in month 6 (M6) of the project. The DMP of the Z-Fact0r project has been prepared taking into account the document template of the “Guidelines on DMP in H2020”. This document will be updated and augmented with new datasets and results according to the progress of the activities of the Z-Fact0r project. The DMP will also be updated to include possible changes in the consortium composition and policies over the course of the project. The procedures that will be implemented for data collection, storage, access, sharing policies, protection, retention and destruction will follow the requirements of the national legislation of each partner and be in line with EU standards.

## 2.1.2 Building a DMP in the context of H2020

The EC has provided a document with guidelines for projects participating in the pilot. The guidelines address aspects such as research data quality, sharing and security. Following these guidelines, the DMP is developed with the aim of providing a consolidated data management policy for the Z-Fact0r partners that the project will follow. The consortium will comply with the requirements of Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. The consortium will preserve the right to privacy and the confidentiality of the survey participants’ data by providing them with two documents: the Participant Information Sheet and the Consent Form. These documents will be sent electronically and will explain how the answers will be used and what the purpose of the survey is. Participants will be assured that their answers will be used only for the purposes of the specific survey. The voluntary character of participation will be stated explicitly in the Consent Form.
Before conducting a survey, the consortium will examine and follow the requirements of the national legislation, in line with EU standards, and check whether the proposed data collection requires special local/national ethical/legal permission. An ethical approach will be adopted and maintained throughout the fieldwork process. The responsible partners will ensure that the EU standards regarding ethics and data management are fulfilled. Each partner will proceed with the survey according to the provisions of national legislation, adjusted in line with the respective EU Directives for data management and ethics. The consortium will follow a transparent recruitment process for the engagement of stakeholders, and the inclusion/exclusion criteria for all surveys will be explained in the Participant Information Sheet. Each partner will send an invitation (by mail) to participants/third parties that have neither a role in the Z-Fact0r project nor a professional relationship with the consortium to participate in the survey. The consortium will also examine whether personal data will be collected and, in such a case, how to secure its confidentiality. The Steering Committee of the project will also ensure that EU standards are followed. Regarding informed consent, for all survey procedures all participants will be provided with a Participant Information Sheet and a Consent Form with which to give their informed consent. The default position for all data relating to residents and staff will be anonymity.

## 2.2 Z-Fact0r Data Management Plan (DMP)

### 2.2.1 General description

This document outlines the first version of the project’s DMP. The DMP is presented as public deliverable D9.4 (Month 6) of WP9, DCE. The main purpose of the DMP is to provide an analysis of the main elements of the data management policy that will be used by the consortium with regard to all the datasets that will be generated by the project (e.g. numerical data, images, etc.).
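To make the dataset descriptions covered by this policy easier to exchange between partners, a register entry could be serialized as a simple machine-readable record. The sketch below is illustrative only and not part of the deliverable: the Python structure and field names are our own, loosely following the register fields used in this plan (reference, description, standards and metadata, sharing, archiving), and the example values are taken from the CETRI dataset table in Section 2.2.5.

```python
# Illustrative sketch only (not part of the Z-Fact0r deliverable):
# a dataset register entry serialized as JSON. Field names loosely
# follow the DMP register template; values come from the CETRI table.
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetEntry:
    reference: str           # dataset reference and name
    description: str         # what is generated/collected and its origin
    standards_metadata: str  # formats, vocabularies, estimated volume
    data_sharing: str        # access policy, repository, embargo
    archiving: str           # long-term preservation and backup plan

entry = DatasetEntry(
    reference="DS.CETRI.Z-Repair_AM_processing",
    description="Ink/paste formulations and printing protocols for AM repair.",
    standards_metadata="Spreadsheets and TIFF/JPG images; volume < 10 MB.",
    data_sharing="Confidential within the consortium; no embargo.",
    archiving="Stored on a CETRI computer and an external hard disk.",
)

print(json.dumps(asdict(entry), indent=2))
```

A flat record of this kind could be collected per partner and aggregated automatically when the DMP is updated, instead of re-editing the tables by hand.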
This document describes the research data with the attached metadata, and presents an overview of the datasets to be produced by the project, their characteristics, and the management processes to make them discoverable, accessible, assessable, usable beyond their original purpose, and disseminated among researchers. It also introduces the specifications of the dedicated Data Management Portal developed by the project in the context of the ORD Pilot, allowing efficient management of the project’s datasets and providing proper open access (OA) to them for further analysis and reuse. In addition, the DMP of the Z-Fact0r project reflects the current status of discussion within the consortium about the data that will be produced.

### 2.2.2 Activities of Data Management Plan

The DMP is a dynamic document, updated throughout the whole project lifecycle. The final version of this report will be delivered by the end of the project, reflecting on lessons learnt and describing the plans implemented for sustainable storage and accessibility of the data, even beyond the project’s lifetime. A Knowledge Management system will be developed which incorporates, in a structured way, the technical and business knowledge created during the project. The activities of Z-Fact0r concerning data management are planned as follows:

* Knowledge management – to be led by the DEM, within which the DMP will be delivered.
* A knowledge management document will be created, based on the DMP, describing how the acquired data and knowledge will be shared and/or made open, and how they will be maintained and preserved. The identifiable project data will be provided in a manner that defines the relevant knowledge, increases partners’ awareness, validates the results, and sets the timeframe of actions.
* Technology watch – all partners will be responsible for periodically updating the knowledge management system with outcomes of research work conducted by other groups and any new patents/patent applications, i.e. to ensure that ongoing relevant technological developments and innovations are identified, analysed, and, where possible, built upon during the course of the project.

### 2.2.3 Register on numerical datasets generated or collected in Z-Fact0r

The goal of the DMP is to describe the numerical model or observation datasets collected or created by Z-Fact0r during the runtime of the project. The register of numerical datasets has to be understood as a living document, which will be updated regularly during the project’s lifetime. The operational phase of the project started in October 2016, and no dataset had been generated or collected by the delivery date of this DMP (M6). However, this is not a fixed document, so it will be updated and augmented with new datasets and results over the duration of the Z-Fact0r project. The information listed below reflects the conception and design of the individual partners in the different work packages at the beginning of the project. The data register will deliver information according to the details given in _Annex 1 of the GA document_:

* Dataset reference and name: identifier for the dataset to be produced.
* Dataset description: description of the data that will be generated or collected, its origin or source (in case it is collected), its nature and scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
* Partners' activities and responsibilities: the partner owning the device, the partners in charge of data collection, data analysis and/or data storage, and the WPs and tasks involved.
* Standards and metadata: reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created. Format and estimated volume of data.
* Data exploitation and sharing: description of how data will be shared, including access procedures and policy, embargo periods (if any), outlines of technical mechanisms for dissemination and the software and other tools necessary for enabling re-use, and a definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.) and whether this information will be confidential (only for members of the CCS) or public. In case a dataset cannot be shared, the reasons should be mentioned (e.g. ethics, personal data protection rules, intellectual property, commercial, privacy-related or security-related reasons).
* Archiving and preservation (including storage and backup): description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate final volume is, what the associated costs are, and how these are planned to be covered.

### 2.2.4 Metadata for Data Management

An initial plan of research data has been explored in _Annex 1 of the GA_. The dataset list is provided in the table below, while the nature and details of each dataset are presented in the next section.

_Table 1. Research data that will be collected and generated during Z-Fact0r._ <table> <tr> <th> **Research Data** </th> <th> **Partners** </th> </tr> <tr> <td> Data structures with production machine signatures (healthy and deteriorated conditions) </td> <td> ATLANTIS </td> </tr> <tr> <td> Machine deterioration thresholds for predicting production of defective products </td> <td> ATLANTIS </td> </tr> <tr> <td> RCA data structures for identifying the root cause of a defect in upstream stages </td> <td> CERTH/ITI </td> </tr> <tr> <td> Data from the comparative assessment (i.e.
with and without Z-Fact0r) in the 3 use cases: difference in production cost/waste/scrap, in detection efficiency, in single-stage production defect rate, in average multi-stage production defect rate, in production output quality (qualified output / total output produced), in defect propagation to downstream stages </td> <td> MICROSEMI, INTERSEALS, DURIT </td> </tr> <tr> <td> Defect detection efficiency data: false alarm rate, precision, recall, F-Measure </td> <td> ALL PARTNERS </td> </tr> <tr> <td> Defect prediction efficiency data: positive prediction rate </td> <td> ALL PARTNERS </td> </tr> <tr> <td> Discrete Event Modelling – cost function generation to optimize production with green scheduling </td> <td> BRUNEL </td> </tr> <tr> <td> Validation and verification of KPIs to assess the direct impact of the system level on the final cost </td> <td> BRUNEL, EPFL </td> </tr> <tr> <td> Context-aware models and associated algorithms </td> <td> EPFL </td> </tr> <tr> <td> Additive manufacturing methodologies for rework and repair </td> <td> CETRI </td> </tr> <tr> <td> Improved functionalities of i-LiKe knowledge management and DSS suite </td> <td> HOLONIX </td> </tr> </table> Partners will characterize their research data and the associated software used in the project, stating whether these are discoverable, accessible, assessable and intelligible, usable beyond the project’s life, and interoperable. Specifically, research data can be made discoverable by means of an identification mechanism such as a Digital Object Identifier, and accessible by defining the modalities and the scope of the action, establishing the licenses and defining the IPR. Research data will also be assessable and intelligible, allowing third parties to make assessments. In addition, the datasets will be usable beyond the original purpose for which they were collected, and usable by third parties for long periods after the collection of the data (repositories, preservation and curation).
Finally, research data will be interoperable, adhering to specific quality standards and allowing data exchange between researchers, institutions, organizations and countries, re-combination with different datasets, and compliance with available software applications.

### 2.2.5 Data description

In order to collect the information about the research data that will be generated in the different activities of the Z-Fact0r project, we have elaborated a template to be completed by the consortium partners. This template includes the following information items:

* Dataset reference and name: name, homepage, publisher, maintainer
* Dataset description: description, provenance, usefulness, similar data, re-use and integration
* Standards and metadata: metadata description, vocabularies and ontologies
* Data sharing: license, URL dataset description, openness, software necessary, repository
* Archiving and preservation: preservation, growth, archive, size

_2.2.5.1 Dataset per partner_

All partners have identified the data that will be produced in the different project activities:

<table> <tr> <th> **DS.CETRI.Z-Repair_AM_processing** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Formulation of the inks or pastes for additive manufacturing repair processes & printing/deposition protocols. </td> </tr> <tr> <td> Source </td> <td> Various characterization techniques, e.g. microscopy, viscometer, printing station.
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> CETRI </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> CETRI </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> CETRI </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> CETRI </td> </tr> <tr> <td> WPs and tasks </td> <td> T2.4 in WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> The metadata include: 1. The characteristics of the materials to be deposited. 2. The user requirements as obtained from the end users. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> No standards apply. The format will be spreadsheets and images (TIFF or JPG). Estimated volume is < 10 MB. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The results of the study have the potential to be exploited by CETRI along with the Z-Fact0r end users MICROSEMI, DURIT and INTERSEALS, as well as by SIR, towards integrating new processes in their production lines. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> In general, the data will be confidential, with the exception of possible future publications in case the consortium permits such activities. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> CETRI will generate 3 sets of data for the 3 Z-Fact0r end-users (MICROSEMI, DURIT, INTERSEALS). Each set will be shared with the individual partners in the form of raw data and complete reports in order to receive feedback during the project implementation.
</td> </tr> <tr> <td> Embargo periods </td> <td> No </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored on a computer and an external hard disk and will be sent frequently to the individual end-users. The data will be stored permanently on a computer at CETRI facilities. </td> </tr> </table> <table> <tr> <th> **DS.CETRI.Z-Repair_laser_processing** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Measurements of the laser source coupled to measurements of the processed surface. </td> </tr> <tr> <td> Source </td> <td> Laser source. Laser power meter. Microscopy. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> CETRI </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> CETRI </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> CETRI </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> CETRI </td> </tr> <tr> <td> WPs and tasks </td> <td> T2.4 in WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> The metadata include: 1. The type/origin of the processed material. 2. The conditions of the experiments. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> No standards apply. The format will be spreadsheets and images (TIFF or JPG). Estimated volume is < 10 MB.
</td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The results of the study have the potential to be exploited by CETRI along with the Z-Fact0r end users MICROSEMI, DURIT and INTERSEALS, as well as by SIR, towards integrating new processes in their production lines. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> In general, the data will be confidential, with the exception of possible future publications in case the consortium permits such activities. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> CETRI will generate 3 sets of data for the 3 Z-Fact0r end-users (MICROSEMI, DURIT and INTERSEALS). Each set will be shared with the individual partners in the form of raw data and complete reports in order to receive feedback during the project implementation. </td> </tr> <tr> <td> Embargo periods </td> <td> No </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored on a computer and an external hard disk and will be sent frequently to the individual end-users. The data will be stored permanently on a computer at CETRI facilities. </td> </tr> </table> <table> <tr> <th> **DS.DURIT. Production line. Demo 3.** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Data collected: * Dimensions/shapes/surface and 3D details * Superficial defects such as cracks </td> </tr> </table> <table> <tr> <th> Source </th> <th> The data will be collected by different sensors and imaging devices such as cameras, installed in the production line after green machining and after finishing operations. Ideally, they would be installed in the machine for real-time inspection, although this is very difficult to implement at the current time.
</th> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> DURIT </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> DURIT </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> DURIT </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> DURIT </td> </tr> <tr> <td> WPs and tasks </td> <td> The data are going to be collected in WP5 and WP6. </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> The dataset will be accompanied by information regarding: * Drawings and sequence of operations. * Batch of material used. * Operators involved. * Date, time. * Temperature and relative humidity in the metallurgy section. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> Our tests will be performed on a specific type of piece. The volume of data depends on the quantity ordered. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Production process recognition and help during the different production phases, avoiding mistakes. Support of quality checks and production batch recalls. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> The full dataset will be confidential and only the members of the consortium will have access to it. Furthermore, if the dataset or specific portions of it (e.g. metadata, statistics, etc.) are to become widely open access, a data management portal will be created that should provide a description of the dataset and a link to a download section. Of course, these data will be anonymized, so as not to have any potential ethical issues with their publication and dissemination. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> Data sharing is dependent on DURIT, DURIT’s customers and partner requirements. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> All information belongs to the industrial partner that owns the shop floor. All data will respect the partner’s policies. All data have to be stored until the end of the life/warranty of the produced component. Probably also stored on DURIT servers in the cloud. </td> </tr> </table> <table> <tr> <th> **DS.EPFL.01_KMDSS** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Z-Fact0r Knowledge Management and Decision Support System Dataset. </td> </tr> <tr> <td> Source </td> <td> Device Manager, Event Manager, Semantic Context Manager, Z-Fact0r Repository. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> The device will be owned by the Z-Fact0r end-users (MICROSEMI, INTERSEALS, DURIT), where the data collection will be performed. </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Various partners related to the specific event and/or operation. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> Various partners related to the specific event and/or operation. </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> EPFL will store data related to the KMDSS (various partners can handle the rest of the data). </td> </tr> <tr> <td> WPs and tasks </td> <td> The data will be collected within the activities of WP2, WP3 and WP4.
</td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Indicative metadata include: input from the Sensor Network (through the Device Manager), the overall model of production activities (through the Z-Fact0r Repository), shop-floor events data from the Event Manager, and context-aware knowledge stemming from the Semantic Context Manager (Ontology). </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> Data can be available in XML or JSON format. The volume of data cannot be predicted in advance of real use of the technology at the shop-floor level. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The collected data will be used for better understanding of the processes and activities evolving on the shop floor, which will provide actionable knowledge in the form of a set of recommendations to (i) supervise and provide feedback for all the processes executed in the production line, (ii) evaluate performance parameters and respond to defects, keeping historical data, (iii) send alarms efficiently to initiate actions, filter out false alarms, increase confidence levels (through previously acquired knowledge) of early defect detection and prediction, etc. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Accessible to Z-Fact0r consortium members, including the commission services, as defined in the Z-Fact0r GA. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The sharing of this data is yet to be decided together with the industrial partners. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in a dedicated repository.
</td> </tr> </table> <table> <tr> <th> **DS.EPFL.02.SemanticContextManager** </th> </tr> <tr> <td> **Data Identification** </td> </tr> </table> <table> <tr> <th> Dataset description </th> <th> Context-aware shop-floor analysis and semantic model for the annotation and description of the knowledge to represent manufacturing system performance. </th> </tr> <tr> <td> Source </td> <td> Z-Fact0r repository. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> Z-Fact0r End-users (MICROSEMI, INTERSEALS, DURIT) </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> EPFL </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> EPFL </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> EPFL </td> </tr> <tr> <td> WPs and tasks </td> <td> The data will be collected within the activities of WP3 and in particular T3.5. </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Data from Z-Fact0r repository (data concerning machines, workers, actors, activities and processes, production data logs, etc.). </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> Generated output will be the semantic enrichment of shop-floor data for representation of processes, actors, alarms, actions, work-pieces/products, etc., e.g. as RDF Triplets. Standards: W3C-OWL, RDF. Less than 2GB. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Data is required for the Z-Fact0r ontology development. Ontology describes semantic models. The ontology will be used in order to drive the semantic framework. Furthermore, it will be used for data integration, visualization, inferencing /reasoning. 
The ontology will describe the basic entities of the project and model relevant structures of multi-stage manufacturing processes. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Accessible to Z-Fact0r consortium members including the commission services as defined in the Z-Fact0r GA. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The Ontology will be uploaded in a server where it will be accessible to Z-Fact0r consortium members including the commission services. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in a dedicated repository. No expiry date – revisions will be kept. </td> </tr> </table> <table> <tr> <th> **DS.HOLONIX.ProductionManagement** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Collections of data from industrial partner’s production plant, operators, and elaborated data from other Z-Modules. These collections of data contain information about machine conditions, plant conditions, process KPIs of an Industrial production plant. </td> </tr> <tr> <td> Source </td> <td> Industrial partners’ production plant with its operators and other Z-Modules. 
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> Industrial partners </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Industrial partners, presumably with the support of HOLONIX </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> HOLONIX and Z-Modules </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> HOLONIX </td> </tr> <tr> <td> WPs and tasks </td> <td> T3.2 in WP3 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> A set of RESTful APIs will be released with documentation of how to request data from the datasets. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> No estimation has been done so far. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Support for the monitoring of production machines, production performance and processes, both for operators and for other monitoring modules. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Collections of data of the Production Management module should be accessible to Z-Fact0r consortium partners only. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> Data sharing should not be possible with users outside of the project. A set of RESTful APIs will be implemented to share data between partners. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> The physical place to store production data is still to be decided; data will be stored for at least the duration of the project.
</td> </tr> </table> <table> <tr> <th> **DS.HOLONIX.Repository** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> The repository is a collection of datasets coming from various sources, including sensors, operator notes and the production lines installed at the industrial partners of the project, as well as data incoming from the Z-Modules as results of their calculations. </td> </tr> <tr> <td> Source </td> <td> Z-Fact0r industrial partners’ production plants and Z-Modules. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> Z-Modules’ responsible partner and industrial partners. </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Z-Modules and industrial partners. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> Various Z-Modules </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> HOLONIX </td> </tr> <tr> <td> WPs and tasks </td> <td> T3.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> A set of RESTful APIs will be released with documentation of how to request data from the datasets. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> JSON will be the data exchange format between the Repository and the Z-Modules. No estimation of data volume has been done so far. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The datasets collected should be used by the various modules of the project in pursuit of the zero-defect production objective. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Still to be clarified; given the nature of the datasets collected, only the members of the consortium should have the right to access the datasets, with an appropriate authorization/authentication policy.
</td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> This matter has not been discussed so far; the data should not be shared with entities outside the Z-Fact0r consortium. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> It is still to be decided where the collected datasets will ultimately be stored. </td> </tr> </table> <table> <tr> <th> **DS.CONFINDUSTRIA.Events & Roadmapping** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Information regarding the demand for zero-defect production and a global matching, coming from desk research based on the technology brokerage system available at CONFINDUSTRIA. List of potential trade fairs and events. </td> </tr> </table> <table> <tr> <th> </th> <th> List of potential customers/visitors of workshops and potential companies interested in the Z-Fact0r technology. </th> </tr> <tr> <td> Source </td> <td> Internet, specific DBs (none selected yet).
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> CONFINDUSTRIA </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> CONFINDUSTRIA </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> CONFINDUSTRIA </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> CONFINDUSTRIA </td> </tr> <tr> <td> WPs and tasks </td> <td> WP 7: **T7.3** Roadmap for wider adoption and take-up WP 8: **T8.2** Adoption Plan for increasing Awareness WP 9: **T9.2** To identify the relevant conference or event </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Only the available, non-confidential (public) data and documentation coming from Z-Fact0r results will be used in order to define our strategy and desk research. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> \-- </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Only the available, non-confidential (public) data and documentation coming from Z-Fact0r results will be used in order to define our strategy and desk research. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> * The project rules on confidentiality will be respected, by using and disseminating only public data * Our results will be public </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> * Our results will be public; they could be shared and re-used as a model: * Research method structure * Roadmap structure * Business Network created </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Original data and results
will be kept on our company server at least for the whole project duration and audit period. Results will also be shared with partners and kept in the project repository. </td> </tr> </table> <table> <tr> <th> **DS.ATLANTIS.ES-DSS** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Dataset for insufficient glue detection obtained by cameras and lasers at the glue application machine. The camera images will be processed and not saved anywhere, while the metadata on insufficient glue placement will be used for analysis and detection. Data will be used for early detection of failures. The metadata will be used to send notifications and alarms to the responsible control operators and glue workers. </td> </tr> <tr> <td> Source </td> <td> The dataset will be collected by using cameras and lasers at the glue machine. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> The device will be owned by the industrial partner (MICROSEMI), where the data collection is going to be performed. </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Various partners related to the specific incident and/or operation. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> Various partners related to the specific incident and/or operation. </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> ATLANTIS will store data related to ES-DSS (various partners can handle the rest of the data). </td> </tr> <tr> <td> WPs and tasks </td> <td> The data are going to be collected within activities of WP3 and more specifically within activities of T3.1, T3.2, T3.3 and T3.4. </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and </td> <td> The dataset will be accompanied by detailed documentation of its contents.
Indicative metadata include: </td> </tr> <tr> <td> storage dates, places, and documentation) </td> <td> 1. description of the experimental setup (e.g. location, date, etc.) and the procedure that led to the generation of the dataset, 2. annotated detection of insufficient glue, activity, business process, state of the monitored activity. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> The data will be stored in XML format and are estimated to amount to 1 GB per day. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The collected data will be used for the development of the activity analysis and incident detection methods of the Z-Fact0r project and all the tasks, activities and methods that are related to it. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> The full dataset will be confidential and only the members of the consortium will have access to it. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The sharing of this data is yet to be decided together with the industrial partners. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in a DB. RAID and other common backup mechanisms will be utilized to ensure data reliability and performance improvement and to avoid data losses. </td> </tr> </table> <table> <tr> <th> **DS.ATLANTIS.Evaluation** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Values of the KPIs for: 1. Technical indicators. 2. User/Stakeholder acceptance. 3. Indicators for assessing the impact of the project on the factories.
</td> </tr> <tr> <td> Source </td> <td> The dataset will be collected from Z-Fact0r industrial partners, technology-providing partners and users/stakeholders – the tool/solution beneficiaries. </td> </tr> </table> <table> <tr> <th> **Partners activities and responsibilities** </th> </tr> <tr> <td> Partner owner device </td> <td> The device will be owned by the consortium. </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> ATLANTIS with the respective responsible partners per toolkit/task/plant. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> ATLANTIS. </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> ATLANTIS will store the analysed data related to the solution evaluation. </td> </tr> <tr> <td> WPs and tasks </td> <td> The data are going to be collected through demonstrations in a relevant environment, specifically within the T5.3 activity in collaboration with WP6. </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Collected data from the execution of the demonstrations in the operational environment of the pilot sites (WP6), as well as the users’ acceptance and overall impact, will be analysed and documented in the Report on Solution Validation. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> Alphanumeric </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The solution validation will be synthesised and documented in the form of a report (deliverable). </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> The full dataset will be confidential; the reports will be public. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> Data will be shared among the involved partners.
</td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> To avoid data losses during the project and to ensure data reliability, the analysed data will be stored by ATLANTIS for up to two years after the end of the project. </td> </tr> </table> <table> <tr> <th> **DS.ATLANTIS.ReverseSupplyChain** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Dataset of data gathered during the manufacturing process, obtained by cameras, lasers and other measurement sensors. The camera images will be processed and not saved anywhere, while the metadata from all the other sensors will be used for analysis in order to activate the reverse-flow process in the Reverse Supply Chain. Data will be used for defect detection in the reverse supply chain. The metadata will be used to send notifications and alarms to the responsible machine operators for removal of the defective parts, special inspection, or return to a previous internal tier (upstream stage) or external tier (another production line or an external supplier). Standards and prototypes shall be included in the data for comparison with the defective parts and for setting acceptance levels. </td> </tr> <tr> <td> Source </td> <td> Cameras, lasers and measurement instruments at different points of the production lines </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> The device will be owned by the industrial partner where the data collection is going to be performed. </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Various partners related to the specific incident and/or operation. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> ATLANTIS will analyse the data in order to support reliable use of the Reverse Supply Chain.
</td> </tr> <tr> <td> Partner in charge of data storage </td> <td> ATLANTIS will store data related to the Reverse Supply Chain (various partners can handle the rest of the data). </td> </tr> <tr> <td> WPs and tasks </td> <td> The data are going to be collected within activities of WP2 and more specifically within activities of T2.5. </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> The dataset will be accompanied by detailed documentation of its contents. Indicative metadata include: (a) description of the experimental setup (e.g. location, date, etc.) and the procedure that led to the generation of the dataset, </td> </tr> <tr> <td> </td> <td> (b) annotated detection of a defective part in the production line, the cause of the defect, the acceptable standards and limits of the part, as well as the return point in the production process. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> The data will be stored in XML format and are estimated to amount to 100 MB per day. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The collected data will be used for the development of the activity analysis and defect detection methods in the production lines of the Z-Fact0r project plants and all the tasks, activities and methods that are related to it. The Reverse Supply Chain shall be able to use the data in order to decide whether or not a defective part should return to a previous tier. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> The full dataset will be confidential and only the members of the consortium will have access to it.
</td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The sharing of this data is yet to be decided together with the industrial partners </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in a DB. RAID and other common backup mechanisms will be utilized to ensure data reliability and performance improvement and to avoid data losses. </td> </tr> </table> <table> <tr> <th> **DS.DATAPIXEL.3DPointcloud** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> High-accuracy and high-resolution 3D Pointclouds of scanned parts. The Pointcloud is a list of 3D points, and can be structured or unstructured. </td> </tr> <tr> <td> Source </td> <td> DATAPIXEL 3D Scanner </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> DATAPIXEL and Z-Fact0r repository </td> </tr> <tr> <td> WPs and tasks </td> <td> WP2 and WP3 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Metadata includes part identification, date and time of data collection, and equipment. The Pointcloud is part of the information associated with the manufactured parts. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> An ASCII list of X Y Z coordinates is the most common format. Typically, Pointclouds have a size between 100 k and 10 M points, or 3 MB to 300 MB.
</td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The main use will be the automatic detection of defects by two methods: CAD-based inspection and GD&T analysis. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Confidential, except parts authorized by the industrial partners. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The data will be shared using the Sensor network manager, and stored in the repository for further future analysis. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in the Z-Fact0r repository and by the 3D Pointcloud analysis software. </td> </tr> </table> <table> <tr> <th> **DS.DATAPIXEL.CADModel** </th> </tr> <tr> <td> **Data Identification** </td> </tr> </table> <table> <tr> <th> Dataset description </th> <th> The CADModel is the description of the surfaces and geometries of the designed part. It represents the 3D information of the manufactured part model and will be utilized for CAD-based inspection. </th> </tr> <tr> <td> Source </td> <td> Industrial partners’ CAD modelling software and DATAPIXEL 3D Pointcloud Analysis software. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> Industrial partners (MICROSEMI, INTERSEALS, DURIT) and DATAPIXEL.
</td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Same </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> DATAPIXEL and Z-Fact0r repository </td> </tr> <tr> <td> WPs and tasks </td> <td> WP2 and WP3 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Metadata includes part identification, date and time of data generation. The CAD Model is part of the information associated with the manufactured parts. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> STEP format </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The main use will be the automatic detection of deviations based on local regions and the extraction of nominal values for GD&T analysis. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Confidential, except parts authorized by the industrial partners. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The data will be shared by the 3D Pointcloud Analysis module, and stored in the repository for further future analysis. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in the Z-Fact0r repository and by the 3D Pointcloud analysis software.
</td> </tr> </table> <table> <tr> <th> **DS.DATAPIXEL.DeviationMaps** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> The deviation map is a 3D representation of the surface deviations calculated between a captured Pointcloud and the reference CAD model. The deviation map is represented as a list of regions with their corresponding deviation. Typically, the regions are polygonal regions with their associated deviation. </td> </tr> <tr> <td> Source </td> <td> DATAPIXEL 3D Pointcloud Analysis software. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> DATAPIXEL and Z-Fact0r repository. </td> </tr> <tr> <td> WPs and tasks </td> <td> WP2 and WP3 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Metadata includes part identification, date and time of data generation. The Deviation Map is part of the information associated with the manufactured parts. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> A polygonal mesh with an associated deviation value in mm. The most common formats are STL with annotated deviations and PLY. Typically, deviation maps have a size between 100 k and 1 M polygons, or 10 MB to 100 MB. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The main use will be the automatic detection of defects based on local deviations. A deviation threshold can be defined to identify defects.
</td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Confidential, except parts authorized by the industrial partners. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The data will be shared by the 3D Pointcloud Analysis module, and stored in the repository for further future analysis. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in the Z-Fact0r repository and by the 3D Pointcloud analysis software. </td> </tr> </table> <table> <tr> <th> **DS.DATAPIXEL.MeasurementPlan** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> The MP is a definition of the GD&T to be measured in the Pointcloud. The MP contains a detailed definition of the geometrical elements and the tolerances associated with them. This information is the input to the Geometrical Feature Extraction module of the 3D Pointcloud Analysis software. </td> </tr> <tr> <td> Source </td> <td> DATAPIXEL 3D Pointcloud Analysis software. Normally the MP is extracted from the geometrical information contained in the CAD model. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> DATAPIXEL and Z-Fact0r repository </td> </tr> <tr> <td> WPs and tasks </td> <td> WP2 and WP3 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Metadata includes project identification, date and time of data generation. The MP is part of the project information.
</td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> The standard format for the MP can be QIF or DMO. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The main use will be the automatic measurement of dimensions and geometries based on nominal values and tolerances. This information will be used for defect detection and process analysis. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Confidential, except parts authorized by the industrial partners. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The data will be shared by the 3D Pointcloud Analysis module, and stored in the repository for further future analysis. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in the Z-Fact0r repository and by the 3D Pointcloud analysis software. </td> </tr> </table> <table> <tr> <th> **DS.DATAPIXEL.MeasurementResults** </th> </tr> <tr> <td> **Data Identification** </td> </tr> </table> <table> <tr> <th> Dataset description </th> <th> The Measurement Results are the set of measurement values extracted from the Pointcloud based on the MP. </th> </tr> <tr> <td> Source </td> <td> DATAPIXEL 3D Pointcloud Analysis software. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> DATAPIXEL and Z-Fact0r repository.
</td> </tr> <tr> <td> WPs and tasks </td> <td> WP2 and WP3 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> Metadata includes part identification, date and time of data generation. The Measurement Results are part of the information associated with the manufactured parts. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> The standard format for the Measurement Results can be QIF or DMO. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The main use will be the automatic detection of defects based on geometrical deviations. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Confidential, except parts authorized by the industrial partners. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> The data will be shared by the 3D Pointcloud Analysis module, and stored in the repository for further future analysis. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Data will be stored in the Z-Fact0r repository and by the 3D Pointcloud analysis software. </td> </tr> </table> <table> <tr> <th> **DS.CERTH/IRETETH.DataConditioning** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Data collected by DATAPIXEL’s laser system or other complementary data sources for defect detection. </td> </tr> <tr> <td> Source </td> <td> To be defined by DATAPIXEL. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> DATAPIXEL </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> DATAPIXEL + the relevant end user (manufacturer) depending on the use case.
</td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> IRETETH/CERTH </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> TO BE DEFINED </td> </tr> <tr> <td> WPs and tasks </td> <td> WP2 / T2.1 - T2.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> To be discussed between DATAPIXEL and IRETETH/CERTH. </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> To be determined by DATAPIXEL. At IRETETH/CERTH we are open to using different data formats, with a preference for raw data formats. As far as the volume is concerned, the more the better. Ideally, we would like to have hundreds of measurements per product (e.g. 500 per case, including defective and non-defective parts). </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> To be discussed among the relevant partners. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> To be discussed within the consortium and approved by the DEM. </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> To be discussed among the relevant partners. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> It remains to be determined who is responsible for the data storage task. </td> </tr> </table> <table> <tr> <th> **DS.SIR.RoboticCellData** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Collections of data from the SIR robotic deburring cell. These collections of data contain information about machine conditions, algorithms, machine data, plant conditions and process KPIs. </td> </tr> <tr> <td> Source </td> <td> SIR robotic deburring cell.
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> SIR </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> SIR with the support of the technological partners involved in the task. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> SIR with the support of the technological partners involved in the task. </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> SIR </td> </tr> <tr> <td> WPs and tasks </td> <td> T2.3 in WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> \-- </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> Mainly consisting of MS Office documents released using the following formats (.doc, .pptx and .xls files; images for visualizing and conceptualizing the use cases will be released as PDF files), UML documents, and machine algorithms (various types of programming languages: C#, RAPID, etc.). </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> The datasets collected should be used by SIR to achieve the objectives of T2.3. </td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> Confidential </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> This matter has not been discussed so far; the data should not be shared with entities outside the Z-Fact0r consortium. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Documents will be stored in the FREEDCAMP document management system. Machine data, algorithms and machine backups will be stored in the SIR internal repository.
</td> </tr> </table> <table> <tr> <th> **DS.SIR.IndustrialPartnersData** </th> </tr> <tr> <td> **Data Identification** </td> </tr> <tr> <td> Dataset description </td> <td> Collections of data from industrial partners. These collections of data contain information about machine conditions, plant conditions and process KPIs of an industrial production plant. </td> </tr> <tr> <td> Source </td> <td> Industrial partners’ production plants, internal reports, operators. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner device </td> <td> Industrial partners. </td> </tr> <tr> <td> Partner in charge of data collection </td> <td> Industrial partners with the support of SIR. </td> </tr> <tr> <td> Partner in charge of data analysis </td> <td> Task leader </td> </tr> <tr> <td> Partner in charge of data storage </td> <td> Task leader and SIR </td> </tr> <tr> <td> WPs and tasks </td> <td> WP1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (Production and storage dates, places, and documentation) </td> <td> \-- </td> </tr> <tr> <td> Standards, Format, Estimated volume of data. </td> <td> Mainly consisting of MS Office documents released using the following formats (.doc, .pptx and .xls files; images for visualizing and conceptualizing the use cases will be released as PDF files) and UML documents. The metadata standard proposed is CERIF. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose/use of the data analysis) </td> <td> Only for members of the CCS.
</td> </tr> <tr> <td> Data access policy / Dissemination level </td> <td> The information leading to the preparation of the following deliverables might be confidential, as these deliverables are marked as confidential: * D1.1 Z-Fact0r User requirements DURIT M3 * D1.3 Z-Fact0r system architecture EPFL M5 * D1.5 Report on Z-Fact0r strategy implementation and risk analysis EPFL M18 </td> </tr> <tr> <td> Data sharing, re-use and distribution </td> <td> For the time being, the data are expected to be used internally as input by the other WPs. However, D1.2 Report on the analysis of SoA, existing and past projects initiatives, due by CERTH at M2, and D1.4 Z-Fact0r Use Cases, due by INTERSEALS at M6, are expected to be released publicly. </td> </tr> <tr> <td> Embargo periods </td> <td> None </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (including backup) </td> <td> Documents are stored in the FREEDCAMP document management system. Data and documents will be kept for up to five years after project completion. Revisions will be stored in the FREEDCAMP document management system. </td> </tr> </table> _2.2.5.2 Dataset per task_ In addition, the data that will be generated in the different tasks has been identified by the partners: <table> <tr> <th> Task: **T1.1- T1.5, T2.1-T2.5, T6.1-T6.3** </th> </tr> <tr> <td> WP: WP1 + WP2 + WP6 </td> </tr> <tr> <td> WP Leader: SIR </td> </tr> <tr> <td> Author: G. Tinker (MICROSEMI) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection MICROSEMI's main aims are the improvement of the dispense process and its analysis. Other opportunities for the system and the data include learning how much glue is needed for a new die size (prediction) and checking LCP panels for surface defects prior to dispensing.
* Explain the relation to the objectives of the project/WP/Task The data collected will enable the KPIs to be monitored and will generate a history for prediction and correction of the process. #### 2) Types * Are the data digital/hard copies or both? Digital * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) MICROSEMI's preference for data would be: * .xls for sensor history data * .xls for volumetric measurements of the glue dispensed * .jpeg images of the surface * Is the data generated or collected from other sources under certain terms and conditions? TBC – not believed to be a requirement at this stage * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. TBC – point clouds from DATAPIXEL, micro-profilometry data generated by CERTH, or both may need to be utilised * State the expected size of the data (if known) Currently unknown, but the good IT infrastructure at MICROSEMI means data size should not be a constraint * Standards None #### 3) Ownership * Is another organization contributing to the data development? TBC – if yes, it will be a member of the Z-Fact0r project **4) Reuse of existing data** * Specify if existing data is being re-used (if any) No data are currently being collected, other than for a process improvement project that has already been completed. **5) Data use** * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful Much depends on who wants access, how often they want access, and how big the files are. MICROSEMI has an FTP site; terms of access will have to be agreed between MICROSEMI and the members of the Z-Fact0r project. **6) Dissemination Level of Data** * Confidentiality/ Sensitive data.
If data cannot be made available, explain why. Who will have access? TBC – this depends on the data being collected and whether it is deemed sensitive. #### 7) Storage and disposal * How will this data be stored? Probably on a local PC, with the option to back up data (depending on size) to the MICROSEMI servers at Caldicot. * How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project. <table> <tr> <th> </th> <th> Task: **T1.1 -T1.5** </th> <th> </th> </tr> <tr> <td> WP: 1 USER REQUIREMENTS – SPECIFICATIONS – USE CASE ANALYSIS </td> </tr> <tr> <td> WP Leader: SIR </td> </tr> <tr> <td> Author: Marcello Pellicciari (SIR) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection Qualitative and quantitative data will be produced: 1. WP1 data generated and collected are aimed at defining both the user and system requirements and the use cases (T1.1 and T1.4) 2. Bibliographic and database information (e.g. Cordis) for T1.2 State of the art, to analyse new, live and past projects and initiatives in the field 3. Workflow and UML diagrams and blueprints will be generated to design the Z-Fact0r architecture (T1.3) 4. Report on Z-Fact0r strategy and risk analysis (T1.5), to monitor the status of the manufacturing process in real time * Explain the relation to the objectives of the project/WP/Task Data are related to all tasks of WP1 (see above). #### 2) Types * Are the data digital/hard copies or both? Digital data and documents will be produced. * What types of data will the WP generate/collect?
Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) Data are preserved in their incoming format. Files generated and used will mainly be MS Office documents released in the following formats: .doc, .pptx and .xls files (images for visualizing and conceptualizing the use cases will be released as PDF files). * Is the data generated or collected from other sources under certain terms and conditions? No * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. The Unified Modelling Language (UML) will be used * State the expected size of the data (if known) Not yet known. * Standards Not at the moment. #### 3) Ownership * Is another organization contributing to the data development? To date, no external organization is contributing to the data development activities of WP1. #### 4) Reuse of existing data * Specify if existing data is being re-used (if any) For the time being, data are expected to be used internally as input by the other WPs. However, D1.2 Report on the analysis of SoA, existing and past projects initiatives, due by CERTH at M2, and D1.4 Z-Fact0r Use Cases, due by INTERSEALS at M6, are expected to be released publicly. #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful D1.2 Report on the analysis of SoA, existing and past projects initiatives, due by CERTH at M2, and D1.4 Z-Fact0r Use Cases, due by INTERSEALS at M6, are expected to be released publicly. #### 6) Dissemination Level of Data * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access?
The information leading to the preparation of the following deliverables might be confidential, as these deliverables are marked as confidential: * D1.1 Z-Fact0r User requirements DURIT M3 * D1.3 Z-Fact0r system architecture EPFL M5 * D1.5 Report on Z-Fact0r strategy implementation and risk analysis EPFL M18 #### 7) Storage and disposal * How will this data be stored? Documents are stored in the FREEDCAMP document management system. Content creators upload the relevant files. * How long is it required to keep the data? Expire date. Will revisions be kept? Data and documents will be kept for up to five years after project completion. Revisions will be stored in the FREEDCAMP document management system. <table> <tr> <th> Task: T.1-USER REQUIREMENTS </th> </tr> <tr> <td> WP: WP1 + WP6 </td> </tr> <tr> <td> WP Leader: SIR </td> </tr> <tr> <td> Author: E. Soares (DURIT) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection Automated quality control with a high accuracy level, and a predictive system for defect generation based on online continuous monitoring. * Explain the relation to the objectives of the project/WP/Task The data collected will enable the detection of probabilities or trends leading to defects that normally result in scrapping the parts. #### 2) Types * Are the data digital/hard copies or both? Both * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) * .xls for sensor history data * .jpeg images of the defects * Is the data generated or collected from other sources under certain terms and conditions? Possibly collected by sensors at a bench-top apparatus. * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. Optical and physical sensors, to be studied. * State the expected size of the data (if known) A few MB per type of part. Perhaps 1 GB per day.
* Standards #### 3) Ownership * Is another organization contributing to the data development? Only partners from Z-Fact0r **4) Reuse of existing data** * Specify if existing data is being re-used (if any) No data are currently being collected #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful System software, the cloud where the DURIT servers are hosted, and some local PCs. The data will be used mainly by the Quality Department **6) Dissemination Level of Data** * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? Z-Fact0r partners can have access during the project. On DURIT premises, access is limited to quality operators. #### 7) Storage and disposal * How will this data be stored? Probably on a local PC plus DURIT servers in the cloud. * How long is it required to keep the data? Expire date. Will revisions be kept? Five years minimum <table> <tr> <th> Task: **T1.4, T6.2** </th> </tr> <tr> <td> WP: WP1 (User Requirements), WP6 (Demonstration activities) </td> </tr> <tr> <td> WP Leader: SIR, INTERSEALS </td> </tr> <tr> <td> Author: Pierino Izzo (INTERSEALS) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection * Explain the relation to the objectives of the project/WP/Task The current INTERSEALS data are generated and exploited to plan and manage customer orders, to plan production, to get feedback from the production phase, and to manage maintenance. Z-Fact0r will be useful for all the objectives of the software itself: Z-DETECT, Z-PREDICT, Z-PREVENT, (Z-REPAIR), Z-MANAGE. This data could then be connected to the INTERSEALS ERP and Quality Data Management. #### 2) Types * Are the data digital/hard copies or both? The data are digital. * What types of data will the WP generate/collect?
Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files). They could be of several kinds: .xls files, SQL formats, and emails. * Is the data generated or collected from other sources under certain terms and conditions? For Z-Fact0r, this is not the case. * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. The data for Z-Fact0r will be generated by: * FT-IR (infrared spectroscopy): for material checking. * A system controller (CoMo by Kistler) that concentrates the data from the cavity pressure sensors. * Injection moulding machine parameters: these data can be obtained by connecting to the PLC of the injection machine (the PLC can be Siemens, Omron, Moog). * Data from the visual and dimensional checking machine (DATAPIXEL will be involved in this). * Data from the workers who work beside the production cell and will communicate with the software using augmented reality. * State the expected size of the data (if known) At the moment, our servers hold about 150 GB. * Standards SQL Server #### 3) Ownership * Is another organization contributing to the data development? No, we have all the competences needed to generate and manage the data. **4) Reuse of existing data** * Specify if existing data is being re-used (if any) The existing data are re-used where useful: * for making quotations * for process studies * for quality control * for traceability * for answering claims #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful See point 4 #### 6) Dissemination Level of Data * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? The data could be made accessible after signing the INTERSEALS NDA. #### 7) Storage and disposal * How will this data be stored? Workstation server, SQL Server * How long is it required to keep the data? Expire date.
Will revisions be kept? At least 6 months for the dynamic production data and five years for the static data. <table> <tr> <th> Task: **Task 1.3** </th> </tr> <tr> <td> WP: WP 1 </td> </tr> <tr> <td> WP Leader: SIR </td> </tr> <tr> <td> Author: EPFL </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/collection Development of the architecture of the Z-Fact0r system (i.e. functional view, information view, deployment view, etc.) and the definition and description of the main components * Explain the relation to the objectives of the project/WP/Task A complete description of the modules included in the detailed view is provided in order to point out the responsibilities of each module and their interactions with the global system architecture #### 2) Types * Are the data digital/hard copies or both? Digital * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) Emails, .doc files, .vpp files, etc. * Is the data generated or collected from other sources under certain terms and conditions? No * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. Visual Paradigm V14.0 for component diagrams * State the expected size of the data (if known) Not known * Standards UML for component diagrams **3) Ownership** * Is another organization contributing to the data development? All Z-Fact0r partners #### 4) Reuse of existing data * Specify if existing data is being re-used (if any) No #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful As described in the GA document. **6) Dissemination Level of Data** * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? As described in the GA document.
Accessible to Z-Fact0r consortium members, including the Commission Services. Based on further discussions and agreement between partners, part of the data (e.g. the overall approach and architecture) could be published in the form of an article or conference proceedings for dissemination purposes. #### 7) Storage and disposal * How will this data be stored? All the collected info/data will be delivered in deliverable D1.3 * How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project for further research; access and use of the data for academic purposes (e.g. teaching and publications) will require the consent of the consortium members <table> <tr> <th> Task: **Task 1.5** </th> </tr> <tr> <td> WP: WP 1 </td> </tr> <tr> <td> WP Leader: SIR </td> </tr> <tr> <td> Author: EPFL </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection Monitoring the application of the various Z-Fact0r strategies, together with risk analysis, will determine how successful the implementation of the strategies is in the use cases, in line with the project objectives. * Explain the relation to the objectives of the project/WP/Task Data collected and generated will support continuous real-time monitoring at part, machine and process level. Corrective actions will be suggested when errors occur. Re-evaluations of the deployed strategies will be conducted. Also, the analysis of manufacturing equipment, part and process status measurements will be adapted to provide the means for process validation. The Z-Fact0r strategies are developed according to these objectives. #### 2) Types * Are the data digital/hard copies or both? Digital * What types of data will the WP generate/collect?
Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) Data types could be: .doc files, emails, SQL DB programs, .xml files, etc. * Is the data generated or collected from other sources under certain terms and conditions? * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. Machine sensors, network infrastructure / middleware (device manager) / shop-floor (Z-Fact0r repository for machine processes, part condition and workers’ actions) * State the expected size of the data (if known) Not known * Standards #### 3) Ownership * Is another organization contributing to the data development? All Z-Fact0r partners #### 4) Reuse of existing data * Specify if existing data is being re-used (if any) Data will be reused for corrective actions on the deployed strategies, and actions will be suggested based on correlations by the automatic decision support mechanism. #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful As described in the GA document. **6) Dissemination Level of Data** * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? Confidential; only for members of the consortium and the Commission Services #### 7) Storage and disposal * How will this data be stored? * How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project. <table> <tr> <th> Task: **Task 2.2** </th> </tr> <tr> <td> WP: 2 </td> </tr> <tr> <td> WP Leader: EPFL </td> </tr> <tr> <td> Author: IRETETH/CERTH </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection Data needed for formulation of a data-driven model.
* Explain the relation to the objectives of the project/WP/Task Defect prediction from process inputs, correlated to the Z-DETECT module. **2) Types** * Are the data digital/hard copies or both? Digital * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) The model can accept data in any format (*.csv, *.xls, etc.). Data output will be in the form of MATLAB files (*.mat, *.m, etc.). * Is the data generated or collected from other sources under certain terms and conditions? Data are collected from the manufacturing processes (end users’ collection systems) * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. * State the expected size of the data (if known) * Standards #### 3) Ownership * Is another organization contributing to the data development? No #### 4) Reuse of existing data * Specify if existing data is being re-used (if any) No #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful As described in the GA document. **6) Dissemination Level of Data** * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? Confidential; only for members of the consortium and the Commission Services (CCS). #### 7) Storage and disposal * How will this data be stored? * How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project.
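The actual data-driven model is developed within T2.2 and is not reproduced here. Purely as an illustrative sketch of how CSV process inputs of the kind described above could be screened for defect-prone samples, a simple statistical outlier test might look as follows (the column, part identifier and function names are hypothetical, not part of the project specification):

```python
import csv
import io
import statistics

def flag_anomalous_parts(csv_text, column, part_col="part_id", k=1.5):
    """Flag parts whose sensor reading deviates more than k population
    standard deviations from the column mean -- a crude stand-in for
    the data-driven defect-prediction model developed in T2.2."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    # A reading is anomalous if it lies outside mean +/- k * stdev.
    return [r[part_col] for r, v in zip(rows, values)
            if stdev and abs(v - mean) > k * stdev]
```

In the real task, such a threshold rule would be replaced by the trained model, with its outputs exported in the MATLAB formats noted above.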
<table> <tr> <th> Task: **Task 2.5** </th> </tr> <tr> <td> WP: WP 2 </td> </tr> <tr> <td> WP Leader: EPFL </td> </tr> <tr> <td> Author: EPFL </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection Supervise and provide feedback for all the processes executed in the production line, evaluating performance parameters and responding to defects, keeping historical data. Efficiently send alarms to initiate actions, filter out false alarms, and increase confidence levels (through previously acquired knowledge) of early defect detection and prediction. * Explain the relation to the objectives of the project/WP/Task KMS refers to an information and communication technology system for managing knowledge in organizations, supporting the creation, capture, storage and dissemination of information. Facilitate the adoption of risk-based thinking (in line with ISO 9001:2015) at enterprise level by supporting faster and better decision making at the shop floor. Link the 5 intertwined zero-defect strategies (i.e. Z-PREDICT, Z-PREVENT, Z-DETECT, Z-REPAIR and Z-MANAGE). Implement the designed Z-MANAGE strategy and interface with MES and/or other high-level manufacturing systems in place. * Provide the inference engine with a second layer of autonomous decision support in relation to the 5 Z-Fact0r strategies. * Update the monitoring and inspection conditions and constraints of the ES-DSS. * Define rights for data sharing and exchange, internally with various enterprise systems and decision-making units, as well as externally with customers and suppliers #### 2) Types * Are the data digital/hard copies or both? Digital * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) Data should be available in XML or JSON format. * Is the data generated or collected from other sources under certain terms and conditions? * How is it generated/collected?
Specify the origin of the data and instruments/tools that will be used. It needs inputs from the sensor network (through the Device Manager), the overall model of production activities (through the Z-Fact0r repository) and context-aware knowledge stemming from the Semantic Context Manager (ontology) * State the expected size of the data (if known) Estimation of the volume of data can be done only by the source. * Standards #### 3) Ownership * Is another organization contributing to the data development? Yes #### 4) Reuse of existing data * Specify if existing data is being re-used (if any) No #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful Reaction to incident detection, re-adaptation of the production processes, and notification of the Z-Fact0r components that have subscribed to these events. #### 6) Dissemination Level of Data * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? As described in the GA document. #### 7) Storage and disposal * How will this data be stored? In a main KM server, and also on local terminals once the appropriate indications have been disseminated. * How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project. <table> <tr> <th> Task: **T2.5 / T3.4** </th> </tr> <tr> <td> WP: WP2 / WP3 </td> </tr> <tr> <td> WP Leader: CERTH / EPFL </td> </tr> <tr> <td> Author: Ziazios Konstantinos (ATLANTIS) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection * Early-stage decision support system * Reverse supply chain system * Explain the relation to the objectives of the project/WP/Task * Data will be used to model the early-stage DSS. * Models for the supply chain #### 2) Types * Are the data digital/hard copies or both? Digital.
* What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) * Images * CSV * JSON * Binary * Is the data generated or collected from other sources under certain terms and conditions? Collected from sensors. * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. * From laser scanning * From user input * Batch files * State the expected size of the data (if known) Several GBs per day. * Standards Not known at this stage. #### 3) Ownership * Is another organization contributing to the data development? Only partners of the consortium **4) Reuse of existing data** * Specify if existing data is being re-used (if any) No #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful * Used for modelling * To visualise processes * To create visual KPIs #### 6) Dissemination Level of Data * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? * Storage of sensitive data is avoided; data are always stored encrypted. * Access to confidential data is limited. #### 7) Storage and disposal * How will this data be stored? * In the cloud, only for the training/modelling period. * No production data will be stored outside the shop floor. How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project. <table> <tr> <th> Task: **T2.1, T2.2, T3.1** </th> </tr> <tr> <td> WP: WP2 / WP3 </td> </tr> <tr> <td> WP Leader: CERTH / EPFL </td> </tr> <tr> <td> Author: Toni Ventura (DATAPIXEL) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection * High-accuracy and high-resolution 3D point clouds of scanned parts.
* CAD model describing the surfaces and geometries of the designed part. * Deviation map: a 3D representation of the surface deviations calculated between a captured point cloud and the reference CAD model. * MP: definition of the GD&T to be measured in the point cloud. * Explain the relation to the objectives of the project/WP/Task Data generation of WP2 and WP3 will be connected with Z-DETECT activities, in particular with T2.1, T2.2 and T3.1. #### 2) Types * Are the data digital/hard copies or both? Digital * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) ASCII lists of X Y Z coordinates, STEP format, STL with annotated deviations, PLY, and QIF or DMO. * Is the data generated or collected from other sources under certain terms and conditions? Data will be stored in the Z-Fact0r repository and by the 3D point cloud analysis software. * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. DATAPIXEL 3D scanner, 3D point cloud analysis software and the industrial partners’ CAD modelling software. * State the expected size of the data (if known) Typically, point clouds have a size between 100 k and 10 M points (3 MB to 300 MB), and deviation maps between 100 k and 1 M polygons (10 MB to 100 MB). * Standards Metadata includes part identification, date and time of data generation and collection, and equipment. The point cloud, CAD model, deviation map and measurement results are part of the information associated with the manufactured parts. The MP is also part of the project information. #### 3) Ownership * Is another organization contributing to the data development? Only partners of the consortium #### 4) Reuse of existing data * Specify if existing data is being re-used (if any) No #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use?
Outline the data utility: to whom will it be useful The main use will be the automatic detection of defects by two methods: CAD-based inspection and GD&T analysis; also, the automatic measurement of dimensions and geometries based on nominal values, tolerances and local deviations. This information will be used for defect detection and process analysis. The data will be shared using the sensor network manager and the 3D point cloud analysis module, and stored in the repository for future analysis **6) Dissemination Level of Data** * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? Confidential, except for parts authorized by the industrial partners. #### 7) Storage and disposal * How will this data be stored? Data will be stored in the Z-Fact0r repository and by the 3D point cloud analysis software. * How long is it required to keep the data? Expire date. Will revisions be kept? Duration of the project, and potentially five years after the completion of the project. <table> <tr> <th> Task: **T3.2** </th> </tr> <tr> <td> WP: WP3 </td> </tr> <tr> <td> WP Leader: CERTH </td> </tr> <tr> <td> Author: Simone Parrotta (HOLONIX) </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/ collection Within this task, data from sensors will be integrated and stored in Z-Fact0r. The retrieved data will come from sensors and systems of the industrial partners. * Explain the relation to the objectives of the project/WP/Task The task will define and develop the middleware and related tools for the Z-Fact0r sensor data integration. #### 2) Types * Are the data digital/hard copies or both? Digital data will be stored in a cloud-based DB. * What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files) XML, JSON, CSV * Is the data generated or collected from other sources under certain terms and conditions?
Proper terms and conditions will be defined later during the project * How is it generated/collected? Specify the origin of the data and instruments/tools that will be used. Data will be collected from: new sensors placed on the shop floor to support process monitoring; PLCs; legacy systems. * State the expected size of the data (if known) * Standards XML, JSON, CSV #### 3) Ownership * Is another organization contributing to the data development? Z-Fact0r industrial partners will provide confidential data regarding their processes. **4) Reuse of existing data** * Specify if existing data is being re-used (if any) #### 5) Data use * How will this data be exploited and/or shared/made accessible for verification and re-use? They will be used internally for system testing and validation. * Outline the data utility: to whom will it be useful **6) Dissemination Level of Data** * Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access? Data confidentiality/sensitivity has already been addressed in the GA. Process data from the industrial consortium partners should be kept confidential, based on internal company policies. #### 7) Storage and disposal * How will this data be stored? Data will be stored within the Z-Fact0r repository. * How long is it required to keep the data? Expire date. Will revisions be kept? At least five years after the project ends. <table> <tr> <th> Task: **Task 3.5** </th> </tr> <tr> <td> WP: WP 3 </td> </tr> <tr> <td> WP Leader: CERTH </td> </tr> <tr> <td> Author: EPFL </td> </tr> </table> #### 1) Scope * State the purpose of the data generation/collection Data is required for the Z-Fact0r ontology development. The ontology describes semantic models and will be used to drive the semantic framework. Furthermore, it will be used for data integration, visualization and inferencing/reasoning.
* Explain the relation to the objectives of the project/WP/Task

Context-aware shop-floor analysis and a semantic model for the annotation and description of the knowledge to represent manufacturing system performance. The ontology will describe the basic entities of the project and model relevant structures of multi-stage manufacturing processes.

#### 2) Types

* Are the data digital/hard copies or both?

Digital

* What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files)

Required input will be data from the Z-Fact0r repository (data concerning machines, workers, actors, activities and processes, production data logs, etc.), e.g. in XML, CSV, etc. Generated output will be the semantic enrichment of shop-floor data for the representation of processes, actors, alarms, actions, work-pieces/products, etc., e.g. as RDF triples.

* Is the data generated or collected from other sources under certain terms and conditions?

Data from the Z-Fact0r repository (data concerning machines, workers, actors, activities and processes, production data logs, etc.)

* How is it generated/collected? Specify the origin of the data and instruments/tools that will be used.

Data will be stored in a dedicated repository.

* State the expected size of the data (if known)

Less than 1 GB

* Standards

W3C OWL, RDF

#### 3) Ownership

* Is another organization contributing to the data development?

Z-Fact0r end-users (MICROSEMI, INTERSEALS, DURIT)

#### 4) Reuse of existing data

* Specify if existing data is being re-used (if any)

No

#### 5) Data use

* How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful

The ontology will be used to drive the semantic framework. Furthermore, it will be used for data integration, visualization and inferencing/reasoning.

#### 6) Dissemination Level of Data

* Confidentiality/ Sensitive data.
If data cannot be made available, explain why. Who will have access?

Accessible to Z-Fact0r consortium members, including the Commission services.

#### 7) Storage and disposal

* How will this data be stored?

The ontology will be uploaded to a server where it will be accessible to Z-Fact0r consortium members, including the Commission services.

* How long is it required to keep the data? Expire date. Will revisions be kept?

Duration of the project, and potentially five years after the completion of the project.

<table> <tr> <th> Task: **T4.1 - T4.3** </th> </tr> <tr> <td> WP: 4 </td> </tr> <tr> <td> WP Leader: Brunel University London </td> </tr> <tr> <td> Author: Brunel </td> </tr> </table>

#### 1) Scope

* State the purpose of the data generation/ collection

The purpose of the data collection and generation is to facilitate the building of the event-based model and the green scheduler using the KPIs, the implementation of the scheduler, and the extraction of cost functions. The raw data will be collected from plants and the output will provide the metrics for process and control optimisation to minimise defects.

* Explain the relation to the objectives of the project/WP/Task

The fulfilment of the tasks will lead to achieving an event-based modelling platform and green scheduler to identify the key parameters that have the largest influence on the creation of defects in the production process, as well as on energy consumption and carbon emissions. They will assist in the customisation of the measurement of the KPIs for each industrial partner in the consortium, and build the framework for implementing control and optimisation solutions to minimise defects.

#### 2) Types

* Are the data digital/hard copies or both?

Mainly digital

* What types of data will the WP generate/collect?
Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files)

Mainly DB-driven files that can be converted to CSV, JSON, TXT, HTML, and XML

* Is the data generated or collected from other sources under certain terms and conditions?

The agreed T&C of the consortium

* How is it generated/collected? Specify the origin of the data and instruments/tools that will be used.

PLC, SCADA, Production Management Systems, Internet, and project Intranet.

* State the expected size of the data (if known)

Large but not known at this stage.

* Standards

Controller Area Network, TCP/IP.

#### 3) Ownership

* Is another organization contributing to the data development?

Members of the consortium.

#### 4) Reuse of existing data

* Specify if existing data is being re-used (if any)

N/A

#### 5) Data use

* How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful

Members of the consortium; in addition, the results of the R&D project will be disseminated according to the consortium agreement in the form of conference, journal, and specialist magazine/website outlets.

#### 6) Dissemination Level of Data

* Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access?

Sensitive and project-oriented data will remain within the boundaries of the consortium.

#### 7) Storage and disposal

* How will this data be stored?

In local data storage defined and designed specifically for the project. Brunel University SERG Laboratories will have a dedicated storage and computing facility for the project. The data will then be stored and utilised in accordance with the T&C of the consortium agreement.

* How long is it required to keep the data? Expire date. Will revisions be kept?
Duration of the project, and potentially five years after the completion of the project for further research; however, the consent of the consortium members will be required in case the data is to be accessed and used for the purpose of academic exercise (e.g. teaching and publications).

<table> <tr> <th> Task: **Task 4.2** </th> </tr> <tr> <td> WP: WP 4 </td> </tr> <tr> <td> WP Leader: BRUNEL </td> </tr> <tr> <td> Author: EPFL </td> </tr> </table>

#### 1) Scope

* State the purpose of the data generation/ collection

For the validation and verification of the KPI models, i.e. Productivity, Efficiency, Quality (Customer Satisfaction), Environmental Impact, and Inventory levels.

* Explain the relation to the objectives of the project/WP/Task

#### 2) Types

* Are the data digital/hard copies or both?

Digital

* What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files)

.doc files and formats for discrete event simulation (descriptive) models using off-the-shelf simulation packages

* Is the data generated or collected from other sources under certain terms and conditions?

* How is it generated/collected? Specify the origin of the data and instruments/tools that will be used.

Already installed actuators and sensors will be used for monitoring and evaluating the KPIs.

* State the expected size of the data (if known)

N/A

* Standards

#### 3) Ownership

* Is another organization contributing to the data development?

#### 4) Reuse of existing data

* Specify if existing data is being re-used (if any)

#### 5) Data use

* How will this data be exploited and/or shared/made accessible for verification and re-use?
Outline the data utility: to whom will it be useful

Based on the predicted results, and depending on the measurements received for the evaluation of the KPIs through the use cases, the KPI models will be fine-tuned for subsequent on-line, real-time application. Corrective actions will be considered for the production line based on the results received.

#### 6) Dissemination Level of Data

* Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access?

The final deliverable report associated with this task will be public. All the other data will be accessible to Z-Fact0r consortium members.

#### 7) Storage and disposal

* How will this data be stored?

* How long is it required to keep the data? Expire date. Will revisions be kept?

Duration of the project, and potentially five years after the completion of the project.

<table> <tr> <th> Task: **Task 4.4** </th> </tr> <tr> <td> WP: WP 4 </td> </tr> <tr> <td> WP Leader: BRUNEL </td> </tr> <tr> <td> Author: EPFL </td> </tr> </table>

#### 1) Scope

* State the purpose of the data generation/ collection

Design and development of the cost functions for each of the KPIs.

* Explain the relation to the objectives of the project/WP/Task

Models will be industry specific and will be defined as monetary loss functions due to loss of Productivity (OEE, OLE, Resource Utilisation), Efficiency (energy consumption per produced unit), Quality (process and product quality loss models), Environmental Loss (emissions of pollutants per produced unit), and Inventory (storage, and work-in-process).

#### 2) Types

* Are the data digital/hard copies or both?

Digital

* What types of data will the WP generate/collect? Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files)

.xls, .doc files

* Is the data generated or collected from other sources under certain terms and conditions?

* How is it generated/collected?
Specify the origin of the data and instruments/tools that will be used.

Financial data from end-users.

* State the expected size of the data (if known)

* Standards

#### 3) Ownership

* Is another organization contributing to the data development?

Z-Fact0r end-users.

#### 4) Reuse of existing data

* Specify if existing data is being re-used (if any)

No

#### 5) Data use

* How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful

At a second stage, a validation and verification process will compare the direct observations and experiments on the shop floor, and the direct measurement of costs against system state, with the simulated ones.

#### 6) Dissemination Level of Data

* Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access?

#### 7) Storage and disposal

* How will this data be stored?

* How long is it required to keep the data? Expire date. Will revisions be kept?

Duration of the project, and potentially five years after the completion of the project.

<table> <tr> <th> Task: **T7.3, T8.2, T9.2** </th> </tr> <tr> <td> WP: 7-8-9 </td> </tr> <tr> <td> WP Leader: INOVA+, CETRI </td> </tr> <tr> <td> Author: CONFINDUSTRIA </td> </tr> </table>

#### 1) Scope

* State the purpose of the data generation/ collection

Data generation and collection is an important requirement for developing activities/tasks, enabling analysis, measuring performance and assessing the achievement of objectives.

* Explain the relation to the objectives of the project/WP/Task

Data generation, and in particular data collection, will be useful for developing the market analysis and the Customer Adoption Plan. Generating and collecting data will also be important for the planned workshops, in which the achieved results will be shared (respecting the required privacy).
#### 2) Types

* Are the data digital/hard copies or both?

Mainly digital.

* What types of data will the WP generate/collect? Specify the types and formats of data generated/collected

.xls files, .ppt files, emails, .doc files

* Is the data generated or collected from other sources under certain terms and conditions?

Since we will have to use available information and results from other WPs, we will respect the required IP protection and confidentiality of data.

* How is it generated/collected? Specify the origin of the data and instruments/tools that will be used.

Data will be collected from websites and available DBs, and only non-confidential results/information that can be shared will be used to develop our tasks (Roadmapping, Customer Adoption Plan and DCE).

* State the expected size of the data (if known)

Unknown

* Standards

#### 3) Ownership

* Is another organization contributing to the data development?

#### 4) Reuse of existing data

* Specify if existing data is being re-used (if any)

We will use existing data coming from the web, available surveys or other WPs.

#### 5) Data use

* How will this data be exploited and/or shared/made accessible for verification and re-use? Outline the data utility: to whom will it be useful

Given their definitions and objectives, our tasks and results will be useful to all project partners, and some of them to potential customers, since our tasks involve sharing between partners and customers.

#### 6) Dissemination Level of Data

* Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access?

No confidentiality.

#### 7) Storage and disposal

* How will this data be stored?

Z-Fact0r shared digital folders.

* How long is it required to keep the data? Expire date. Will revisions be kept?

Duration of the project, and potentially five years after the completion of the project.
<table> <tr> <th> Task: **8.1-8.5 & 9.1-9.6** </th> </tr> <tr> <td> WP: 8, 9 </td> </tr> <tr> <td> WP Leader: CETRI </td> </tr> <tr> <td> Author: Dr Souzanna Sofou (CETRI) </td> </tr> </table>

#### 1) Scope

* State the purpose of the data generation/ collection

i) Website: project dissemination and product innovation delivery. ii) Innovation Management Strategy: form the IM strategy for the ultimate use and dissemination of project results. iii) Innovation Management Roadmap: design and implement WPs 8 and 9. iv) DMP Questionnaire: manage research data and metadata, before and after the project duration. v) Deliverables: [D8.1-D8.5], [D9.1-D9.5]. vi) Publications: dissemination and communication activities. vii) Z-Fact0r leaflet: communication activity for wider project acceptance. viii) Z-Fact0r poster: communication activity for wider project acceptance.

* Explain the relation to the objectives of the project/WP/Task

As explained above

#### 2) Types

* Are the data digital/hard copies or both?

_Digital:_ i) Website ii) Innovation Management Strategy iii) Innovation Management Roadmap iv) DMP Questionnaire v) Deliverables: [D8.1-D8.5], [D9.1-D9.5] vi) Publications (hard copies may also be sent for journal & conference publications)

_Hard Copies:_ vii) Z-Fact0r leaflet viii) Z-Fact0r poster

* What types of data will the WP generate/collect?
Specify the types and formats of data generated/collected (for example .xls files, .ppt files, emails, .doc files)

<table> <tr> <th> _i)_ Website </th> <th> developed with WordPress </th> </tr> <tr> <td> _ii)_ Innovation Management Strategy </td> <td> .ppt and .pdf file </td> </tr> <tr> <td> _iii)_ Innovation Management Roadmap </td> <td> .xls file </td> </tr> <tr> <td> _iv)_ DMP Questionnaire </td> <td> .doc file </td> </tr> <tr> <td> _v)_ Deliverables: [D8.1-D8.5], [D9.1-D9.5] </td> <td> .doc files, .pdf files, website </td> </tr> <tr> <td> _vi)_ Publications </td> <td> .doc files, .pdf files </td> </tr> <tr> <td> _vii)_ Z-Fact0r leaflet </td> <td> .cdr file, .ppt file, .pdf file </td> </tr> <tr> <td> _viii)_ Z-Fact0r poster </td> <td> .cdr file, .ppt file, .pdf file </td> </tr> </table>

* Is the data generated or collected from other sources under certain terms and conditions?

i), iii) Data taken also from the GA. iv) Data collected from participants. v) Data generated during the project duration; data from other deliverables will be used. vi) Research data generated. vii) Data taken also from the GA.

* How is it generated/collected? Specify the origin of the data and instruments/tools that will be used.

Not applicable for WP8 and WP9.

* State the expected size of the data (if known)

For digital files: less than 100 MB. For hard copies: Z-Fact0r leaflet: printed on both sides, A4 size; Z-Fact0r poster: printed on one side, according to conference restrictions.

* Standards

#### 3) Ownership

* Is another organization contributing to the data development?

According to the ownership model.

#### 4) Reuse of existing data

* Specify if existing data is being re-used (if any)

Data from other WPs might be used in dissemination and communication files.

#### 5) Data use

* How will this data be exploited and/or shared/made accessible for verification and re-use?
Outline the data utility: to whom will it be useful

All WP8 and WP9 data will be useful to the consortium for the ultimate use and dissemination of project results.

#### 6) Dissemination Level of Data

* Confidentiality/ Sensitive data. If data cannot be made available, explain why. Who will have access?

<table> <tr> <th> Website </th> <th> Public </th> </tr> <tr> <td> Innovation Management Strategy </td> <td> Private; the DEM has created the file, only the project partners have access </td> </tr> <tr> <td> Innovation Management Roadmap </td> <td> Private; the DEM has created the file, only the project partners have access </td> </tr> <tr> <td> DMP Questionnaire </td> <td> Private; only the project partners have access </td> </tr> <tr> <td> Deliverables: [D8.1-D8.5], [D9.1-D9.5] </td> <td> D8.1, D8.2, D8.3, D8.5, D9.2, D9.3, D9.5: Confidential; D8.4, D9.1, D9.4: Public </td> </tr> <tr> <td> Publications </td> <td> Public, dissemination rules apply </td> </tr> <tr> <td> Z-Fact0r leaflet </td> <td> Public, dissemination rules apply </td> </tr> <tr> <td> Z-Fact0r poster </td> <td> Public, dissemination rules apply </td> </tr> </table>

#### 7) Storage and disposal

* How will this data be stored?

All digital files will be stored in FREEDCAMP. All hard copy files will be stored by the DEM as well as by all consortium parties.

* How long is it required to keep the data? Expire date. Will revisions be kept?

Duration of the project, and potentially five years after the completion of the project.

_2.2.5.3 Dataset per WP_

The data information of the partners has been used to define the complete RDI that will be generated in each WP:

<table> <tr> <th> **WP1** </th> <th> **User requirements, specifications, use case analysis** </th> <th> WP leader: **SIR** </th> </tr> </table>

**Objective**: Development of the architecture of the Z-Fact0r system and the definition and description of the main components.
A complete description of the modules included in the detailed view, in order to point out the responsibilities of each module and their interactions with the global system architecture. Qualitative and quantitative data generated and collected are aimed at defining the user and system requirements and use cases, preparing the bibliographic and database information, designing the workflow and UML diagrams, and reporting on the Z-Fact0r strategy and risk analysis to monitor the status of the manufacturing process in real time.

**Data description:** The data being collected will enable the KPIs to be monitored and will generate history for prediction and correction of the process. Digital data and documents will be preserved in their incoming format; files generated and used will mainly consist of MS Office documents released in the following formats (.doc, .pptx, .vpp and .xls files, emails, SQL DB programs; images for visualizing and conceptualizing the use cases will be released as PDF files).

**Instrument and tools:** Unified Modelling Language; TBC for either point clouds or micro-profilometry data. Machine sensors, network infrastructure/middleware (device manager)/shop floor (Z-Fact0r repository for machine processes, part condition and workers' actions).

<table> <tr> <th> **WP2** </th> <th> **Production-process monitoring, defect life-cycle management and remanufacturing** </th> <th> WP leader: **EPFL** </th> </tr> </table>

**Objective**: Data generation will be needed for the formulation of a data-driven model for defect prediction from process inputs, correlated to the Z-DETECT module. Supervise and provide feedback for all the processes executed in the production line, evaluating performance parameters and responding to defects, keeping historical data. Send alarms efficiently to initiate actions, filter out false alarms, and increase confidence levels (through previously acquired knowledge) of early defect detection and prediction.
**Data description:** Data generated will be digital and any format can be accepted (.CSV, .XLS, etc.). However, data output will be in the form of MATLAB and related files (.XML, .JSON, .MAT, .M, etc.).

**Instrument and tools:** Inputs from the Sensor Network (through the Device Manager), the overall model of production activities (through the Z-Fact0r repository) and context-aware knowledge stemming from the Semantic Context Manager (ontology) will be necessary.

**Terms & Conditions of data generated:** Data are collected from the manufacturing processes (end users' collection systems).

<table> <tr> <th> **WP3** </th> <th> **Data management and early stage DSS for inspection and control** </th> <th> WP leader: **CERTH** </th> </tr> </table>

**Objective**: Data is required for the Z-Fact0r ontology development. The ontology describes semantic models for the annotation and description of the knowledge to represent manufacturing system performance. The ontology will be used to drive the semantic framework. Furthermore, it will be used for data integration, visualization, inferencing/reasoning. Data from sensors will be integrated and stored in Z-Fact0r. Retrieved data will come from sensors and systems from industrial partners.

**Data description:** Digital data will be stored in a cloud-based DB, and the required input will be data from the Z-Fact0r repository (data concerning machines, workers, actors, activities and processes, production data logs, etc.). Generated output will be the semantic enrichment of shop-floor data for the representation of processes, actors, alarms, actions, work-pieces/products, etc., e.g. RDF triples, .CSV, .XML, .JSON.

**Instrument and tools:** Data will be collected from new sensors placed in the shop floor to support the processes monitoring; PLC; legacy systems.

**Data re-use:** The ontology will be re-used to drive the semantic framework. Furthermore, it will be used for data integration, visualization, inferencing/reasoning.
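As a minimal illustration of the semantic enrichment described above, the sketch below maps a raw shop-floor log record onto RDF-style triples (serialized as N-Triples lines). The namespace, entity and property names are hypothetical and not taken from the actual Z-Fact0r ontology:

```python
# Illustrative sketch only: the namespace and the names Machine_12,
# hasProcess, raisedAlarm, observedAt are hypothetical placeholders.
NS = "http://example.org/zfact0r#"

def enrich(record):
    """Map a raw shop-floor log record to RDF-style triples (N-Triples lines)."""
    subject = f"<{NS}{record['machine']}>"
    triples = [
        (subject, f"<{NS}hasProcess>", f"<{NS}{record['process']}>"),
        (subject, f"<{NS}raisedAlarm>", f"\"{record['alarm']}\""),
        (subject, f"<{NS}observedAt>", f"\"{record['time']}\""),
    ]
    return [f"{s} {p} {o} ." for s, p, o in triples]

raw = {"machine": "Machine_12", "process": "Moulding",
       "alarm": "pressure_high", "time": "2017-03-01T10:15:00"}
for line in enrich(raw):
    print(line)
```

In practice such triples would be produced against the project's OWL vocabulary and loaded into a triple store, so that the reasoning and visualization components can query them uniformly.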
<table> <tr> <th> **WP4** </th> <th> **System modelling for fast forward cost functions** </th> <th> WP leader: **BRUNEL** </th> </tr> </table>

**Objective**: The purpose of the data collection and generation is to facilitate the building, validation and verification of the KPI models, the implementation of the scheduler, and the extraction of cost functions. The raw data will be collected from plants and the output will provide the metrics for process and control optimisation to minimise defects.

**Data description:** Data generated will be mainly digital; DB-driven files can be converted to CSV, XLS, TXT, HTML and XML.

**Instrument and tools:** Data and instruments that will be used are PLC, SCADA, Production Management Systems, Internet, and project Intranet. Already installed actuators and sensors, as well as financial data from end-users, will be used for monitoring and evaluating the KPIs.

**Data re-use:** Re-used data will be fine-tuned for subsequent on-line, real-time application, and corrective actions will be considered for the production line based on the results received.

<table> <tr> <th> **WP5** </th> <th> **Integration & Testing Validation** </th> <th> WP leader: **ATLANTIS** </th> </tr> </table>

**Objective**: A diverse set of technologies will be developed, and all s/w and h/w components and platforms will be integrated through a predefined integration methodology. The technology validation plan is to be defined and executed while applying corrective design and re-implementation on all detected errors. Furthermore, the methodology will be validated through demonstration in relevant environments, and the evaluation data and feedback that are going to be collected will be analyzed and documented.

**Data description:** Data generated will be made available from the Z-Fact0r components that will be integrated into the complete system.
Multiple formats will be available; however, all will be compatible with commonly agreed standards, most probably Business to Manufacturing Markup Language (B2MML) in XML form. Moreover, data from the evaluation part of the WP will be produced in XLS format.

**Instrument and tools:** Data will be provided by the Z-Fact0r components that form the 5 strategies. For this WP, primary, non-analysed data are not considered; rather, the results of their analysis by the Z-Fact0r tools and components are. Data related to end-user evaluation will most probably be collected using online forms and questionnaires that allow transfer into XLS files.

**Data re-use:** The collected data will be evaluated and considered in order to reach a better understanding of the processes and activities evolved in the shop floors. Combined with the input from the technical validation plan, this will lead to fine-tuning of the Z-Fact0r components, and the lessons learned could be transformed into actionable knowledge.

<table> <tr> <th> **WP6** </th> <th> **Demonstration activities** </th> <th> WP leader: **INTERSEALS** </th> </tr> </table>

**Objective**: Automated quality control, with a high accuracy level, and a predictive system for defect generation based on online continuous monitoring. The data collected will enable the detection of probabilities or trends that lead to defects that normally result in scrapping the parts.

**Data description:** Data generated will be both digital and hard copies; data formats will be .xls for sensor history data and volumetric measurements, .jpeg images of the defects, and also SQL formats, emails, etc.

**Instrument and tools:** Data will possibly be collected by sensors at a bench-top apparatus; optical and physical sensors are to be studied. FT-IR (infrared spectroscopy): for material checking. System control (CoMo by Kistler) that concentrates the data from the cavity pressure sensors.
Injection moulding machine parameters: obtained via the connection to the PLC of the injection machine (the PLC can be Siemens, Omron, Moog). Data from the visual and dimensional checking machine. Data from the workers who work beside the production cell and will communicate with the software using Augmented Reality.

**Data re-use:** Existing data are re-used, as they are very useful for making quotations, for process study, quality control, traceability and claim answers.

<table> <tr> <th> **WP 7,** **8 & 9** </th> <th> **Valorization, market replication,** **dissemination/ communication/exploitation** </th> <th> WP leader: **INOVA+, CETRI** </th> </tr> </table>

**Objective**: Data generation and collection is an important requirement to develop activities/tasks, allow analysis, measure performance and measure the achievement of WP objectives, such as: Website (dissemination and innovation delivery), DMP Questionnaire (managing research data and metadata before, during and after the project duration), Innovation Management Strategy (for the ultimate use and dissemination of results) and Roadmap (planning WP7, designing and implementing WPs 8 and 9), Publications (dissemination and communication activities), Z-Fact0r leaflet and poster (communication activities for wider acceptance), and the market analysis and Customer Adoption Plan.

**Data description:** Data generated will be digital for the Website, Innovation Management Strategy, Innovation Management Roadmap, DMP Questionnaire, Deliverables and Publications (hard copies may also be sent for journal & conference publications), and hard copies in the case of the Z-Fact0r leaflet and poster. The formats of the data generated will be .xls, .ppt, .pdf, .doc and .cdr files and emails.

**Instrument and tools:** Data will be collected from the website and available DBs, with only non-confidential results/information that can be shared for the development of WP activities (Roadmapping, Customer Adoption Plan and DCE).
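Several of the WP summaries above describe sensor readings being exchanged as XML (e.g. B2MML) and stored as JSON/CSV. As a minimal, purely illustrative sketch — the element and field names below are assumptions, not the B2MML schema or the actual Z-Fact0r repository format — a single reading could be serialized for both channels as follows:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical sensor record; field names are illustrative only.
reading = {"sensor": "cavity_pressure_01", "stage": "injection",
           "value": 412.7, "unit": "bar", "time": "2017-03-01T10:15:00"}

def to_xml(r):
    """Serialize a flat record to a simple XML element (not B2MML itself)."""
    root = ET.Element("SensorReading")
    for key, val in r.items():
        ET.SubElement(root, key).text = str(val)
    return ET.tostring(root, encoding="unicode")

print(to_xml(reading))      # XML form for exchange between components
print(json.dumps(reading))  # JSON form for storage in the repository
```

A real deployment would validate such payloads against the agreed schema before they enter the Z-Fact0r repository.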
In conclusion, data and data management-related challenges under Z-Fact0r are identified and addressed mainly within WP1 (T1.1, T1.4), WP2 and WP3. As described in the proposal, data to be used in the project will include on-line (nearly real-time) and historical data related to (i) the product (desired specifications, quality inspection results, etc.); (ii) the production equipment and environment (e.g. temperature, pressure, vibrations, etc.); (iii) manufacturing/production and maintenance (e.g. capacity, planning, etc.). The sources of these data will be existing sensors and actuators (such as sensors embedded in production machinery, quality inspection equipment, etc.), novel sensors and actuators (such as laser scanning, visual and/or IR cameras, non-contact profilometers, etc.), and enterprise systems. The types of sensors/actuators and data to be used will be defined and finalized per Z-Fact0r use case on the basis of the required metrics at product and workstation level, at a single manufacturing stage as well as at multiple stages.

Additionally, non-research data related to Innovation Management, such as the IPR registry that includes the IP strategy per result, are confidential and are only stored in the FREEDCAMP repository and the website private area, with no access rights for members outside the consortium.

### 2.2.6 Policies for access, sharing and re-use

Data generated during the Z-Fact0r project will be confidential. Ownership and management of intellectual property and access will be limited to the project consortium partners. For this purpose, policies for access, sharing, and re-use have been established:

_2.2.6.1 Partners Background_

Partners have identified their background for the action (data, know-how or information generated before they acceded to the Agreement), which will be accessible to the other partners for implementing their own tasks (subject to legal restrictions or limits previously defined in the CA).
The partners should be able to access, mine, exploit, reproduce and disseminate the data. This should also help to validate the results presented in scientific publications. The partners' background, acquired prior to the starting date of the project, will remain the sole property of the originating partner, provided that it was presented in the CA.

_2.2.6.2 Data Ownership and Access_

The full dataset will be confidential and only the members of the consortium will have access to it. Special consideration will be given to the project dissemination dataset (e.g. leaflets, brochures, posters, etc.), which will be considered public information. As described in the GA, the data generated are expected to be used internally as input by the other WPs. All the partners will have free access to the results generated during the project and to the information needed for implementing their own tasks under the action and for exploiting their own results. This information will also be available to EU institutions, bodies, offices or agencies for developing, implementing or monitoring EU policies; however, such access rights are limited to non-commercial and non-competitive use.

Regarding the ORD Pilot, the data to be made available in OA will be decided during the course of the project and can include: final peer-reviewed scientific research articles that will be published in the online repository after publication; research data, including data underlying publications, curated data and/or raw data; and public deliverables of the project (described in the GA). If any document or dataset is decided to become OA, a special section in the data management portal (FREEDCAMP) will be created, providing a description of the item and a link to a download section. Of course, these data will be anonymized, so as not to raise any potential ethical issues with their publication and dissemination.
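As an illustration of the anonymization step mentioned above, the sketch below applies a salted-hash pseudonymization to an identifying field before a record would be released as OA. The field names, the salt handling and the truncation length are assumptions for the example, not the project's actual procedure:

```python
import hashlib

# Hypothetical example: the salt would be kept by the data controller
# and never published alongside the anonymized dataset.
SECRET_SALT = b"project-secret"

def pseudonymize(record, id_field="operator_id"):
    """Return a copy of the record with the identifying field replaced
    by a truncated salted SHA-256 digest."""
    rec = dict(record)
    digest = hashlib.sha256(SECRET_SALT + rec[id_field].encode()).hexdigest()
    rec[id_field] = digest[:12]
    return rec

row = {"operator_id": "worker-042", "station": "S3", "defect_rate": 0.012}
print(pseudonymize(row))
```

Because the salt stays with the data controller, the mapping cannot be reversed from the published dataset alone, while identical operators still map to the same pseudonym and remain linkable across records.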
_2.2.6.3 Naming rules_

All data files will be saved using a standardized, consistent file naming protocol agreed by the project partners, which will include relevant metadata to ensure their accessibility. The proposed metadata standard is CERIF.

_2.2.6.4 Storage Information_

Documents of the dataset will be stored at the data management portal (**FREEDCAMP**) created and maintained by CERTH/ITI, while links to the portal will exist at the Z-Fact0r website. The Data Management Portal used by the project (**FREEDCAMP**), in the context of the ORD Pilot, allows the efficient management of the project's datasets and provides proper OA to them for further analysis and reuse. The dataset will remain at the data management portal for the whole project duration, as well as for at least 2 years after the end of the project. Finally, after the end of the project, the portal is going to be accommodated together with other portals on the same server, so as to minimize the costs needed for its maintenance.

_2.2.6.5 Data sharing and dissemination_

Data will be reused for corrective actions on the deployed strategies, and actions will be suggested based on correlations by the automatic decision support mechanism. Research data results will be disseminated according to the CA in the form of conferences, articles in journals, specialist magazine/website outlets or conference proceedings for dissemination purposes. All patent applications and all other publications will require prior agreement in respect of content and publication media. To this end, each partner should notify the consortium members about the content and material they wish to publish/disseminate, and a 21-day evaluation period will be provided, as stated in the CA.

_2.2.6.6 IPR management and security_

As an innovation action close to the market, the Z-Fact0r project covers high-TRL technologies and aims at developing marketable solutions.
The project consortium includes nine industrial partners from the private sector, in particular CETRI, ATLANTIS (Technical Management), HOLONIX, DATAPIXEL, SIR, INOVA, MICROSEMI (demonstrator/end user), INTERSEALS (demonstrator) and DURIT (demonstrator). These partners hold Intellectual Property Rights on their technologies and data, on which their economic sustainability is at stake. Consequently, the Z-Fact0r consortium will protect that data and obtain the approval of the partners concerned before every data publication. The data management portal will be equipped with authentication mechanisms, so as to record the identity of the persons/organizations that download data, as well as the purpose and the use of the downloaded dataset. _2.2.6.7 Data expiry date_ Copyright statements of the Z-Fact0r project will protect any written material produced during its lifetime. As described in the GA, the information and data supplied by all project partners and the documents produced during the project will be protected for a period of five years after project completion, unless there are agreements between the partners. After the end of the project, the partners should keep the original documents, digital and digitalized documents, records and other supporting documentation for five years, in order to prove the proper implementation of the action and the costs they declare as eligible. ## 2.3 Data currently being produced in Z-Fact0r This version of the DMP does not include the actual metadata about the data being produced in Z-Fact0r, because no dataset had been generated or collected by the delivery date of this deliverable (M6). Further details will be provided in the next updated version. # 3 Data Management related to Zero-defects Manufacturing The quality and performance data of the manufacturing enterprises will be considered private and will only be available after permission has been granted. 
On the other hand, the research data about modelling procedures, KPI validation, event modelling, inspection and real-time quality control, as well as system optimization, which will be collected/generated during Z-Fact0r, will be distributed freely. # 4 Data Management Portal (FREEDCAMP) The Data Management Portal, a web-based portal named “FREEDCAMP”, is being used within the Z-Fact0r project for the management of the various datasets that will be produced by the project, as well as for supporting the exploitation perspectives of each of those datasets. The FREEDCAMP Portal will need to be flexible in terms of the parts of datasets that are made publicly available. Special attention is going to be given to ensuring that the data made publicly available violate neither IPR related to the project partners, nor the regulations and good practices around personal data protection. ## 4.1 FREEDCAMP portal functionalities The FREEDCAMP Portal is accessed through a web-based platform which enables its users to easily access and effectively manage the various datasets created throughout the development of the project. Regarding user authentication, as well as the respective permissions and access rights, the following three user categories are foreseen: * **Admin;** the Admin has access to all of the datasets and the functionalities offered by the DMP and is able to determine and adjust the editing/access rights of the registered Members and users (OA area). Finally, the Admin is able to access and extract the analytics concerning the visitors of the portal. * **Member;** when someone successfully registers to the portal and is given access permission by the Admin, she/he is then considered a “registered Member”. All the registered Members will have access to and be able to manage most of the collected datasets. 
* **OA user;** apart from the Admin and the registered Members, an OA area will be available for users who do not need to register; they will have access to some specific datasets for knowledge sharing and to public documents, as well as to project outcomes. Figure 1 shows the Login page of the FREEDCAMP portal. Figure 1. Login Page of the FREEDCAMP Portal The FREEDCAMP portal will be easily and effectively managed by the Members. A variety of graphs, pie charts etc. is going to be employed to help Members easily understand and elaborate the data. In particular, the architecture of the portal presents special interfaces organized to structure the information. All tasks and datasets available in the DMP will be accompanied by a short description of the item (Figures 2 and 3). Figure 2. Data access page of the FREEDCAMP Portal. Figure 3. File access of the FREEDCAMP Portal. Datasets will be structured in three different folders within the FREEDCAMP portal: Tasks, Discussion, Files. Draft documents and deliverables, and other data, will be uploaded to specific task folders, and final-version documents will be uploaded to the Files section of the appropriate folder. In addition, technical and progress meetings will be scheduled in the FREEDCAMP portal calendar (Figure 4). Figure 4. Calendar access of the FREEDCAMP Portal. ## 4.2 Data Backup: Private Area of the Project Website As described in Deliverable 9.1, the website private area can be used by all the partners: i) for storage of files and confidential deliverables, ii) for providing feedback on work in progress, iii) for exchanging information about upcoming events, conferences, etc. The website Private Area is only accessible by the consortium partners using a username and password, and will be used as a backup repository to store data that are either confidential or that will be made public after a release date that has been identified by the data owner. 
## 4.3 Open Access Section A special free-access section of the data management portal (FREEDCAMP) will be created to upload the documents, data, datasets and other information that are selected for OA. The description of each item and the link to a download section will be available in this section. Of course, these data will be anonymized, so as not to raise any potential ethical issues with their publication and dissemination. # 5 Future Work A spreadsheet has been created that will be used throughout the project for the continuous logging of data and datasets, as well as the related information that has been previously presented. Figure 5 shows the spreadsheet of Z-Fact0r data and datasets. Figure 5. Spreadsheet of Z-Fact0r data and datasets. Additionally, the release date of the datasets that will be available in the open research data pilot will be defined in this spreadsheet. More specifically, after the IP protection route has been defined for each “result” in the IPR registry currently being developed, dissemination actions will take place for some of them (for example, for those for which a patent application has been submitted). As soon as a dissemination action is complete, the data can be uploaded to the open research data pilot. ## 5.1 Roadmap of actions to update the DMP This deliverable is a dynamic document and will be updated and augmented throughout the whole project lifecycle with new datasets and results according to the progress of the activities of the Z-Fact0r project. The DMP will also be updated to include possible changes in the consortium composition and policies over the course of the project. For that purpose, the final version of this report will be delivered 6 months before the end of the project (M36), reflecting on lessons learnt and describing the plans implemented for sustainable storage and accessibility of the data, even beyond the project's duration. 
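The spreadsheet-based logging and release-date gating described above can be mirrored in a few lines of code. The sketch below is illustrative only: the column names and example values are assumptions, not the actual Z-Fact0r registry layout. It filters a dataset log for items whose dissemination action is complete, which are not confidential, and whose release date has passed, i.e. those ready for upload to the open research data pilot.

```python
import csv
import io
from datetime import date


def ready_for_open_access(log_csv: str, today: date):
    """Return the dataset IDs that may be uploaded to the ORD pilot today.

    Expects a CSV with (hypothetical) columns:
    dataset_id, owner, confidential, dissemination_done, release_date.
    """
    ready = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        if row["confidential"].strip().lower() == "yes":
            continue  # IPR-protected data never goes to the open pilot
        if row["dissemination_done"].strip().lower() != "yes":
            continue  # e.g. a patent application is still pending
        if date.fromisoformat(row["release_date"]) <= today:
            ready.append(row["dataset_id"])
    return ready
```

A record is only released when all three conditions hold, which matches the rule above that data can be uploaded as soon as the corresponding dissemination action is complete.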
# 6 Conclusions This report includes the DMP and describes the RDI that will be generated during the Z-Fact0r project and the challenges and constraints that need to be taken into account for managing it. In addition, it describes the updated procedures and the infrastructure used in the project to efficiently manage the produced data, namely the FREEDCAMP Portal. The DMP is identified as a starting point for the discussion with the community about the Z-Fact0r data management strategy and reflects the procedures planned by the work packages at the beginning of the project. An elaborate questionnaire has been distributed among the consortium partners, asking them what kind of data they were expecting to produce and collect during the project. From this information, it has become clear that currently only work packages 1, 2 and 3 are planning to generate or collect data that can be classified as relevant information according to the definition of the European Commission. Nonetheless, the DMP is not a fixed document and this situation may evolve during the lifespan of the project. Thus, the DMP will be updated and augmented with new datasets and results twice during the project lifetime, with the Project Periodic Reports. Regarding storage, documents generated during the project will be stored in the FREEDCAMP Portal, which is the document management system of the project. This information and the data and documents produced during the project will be protected for a period of two years after project completion, as described in the GA. # 7 Glossary ## Participant Information Sheet The information sheet is an important part of recruiting research participants. It ensures that the potential participants have sufficient information to make an informed decision about whether to take part in your research or not ( _http://www.kcl.ac.uk/innovation/research/support/ethics/training/infosheet.aspx_ ). 
## Consent Form A form signed by a participant to confirm that he or she agrees to participate in the research and is aware of any risks that might be involved. ## Metadata Metadata is data that describes other data. Meta is a prefix that in most information technology usages means "an underlying definition or description." Metadata summarizes basic information about data, which can make finding and working with particular instances of data easier. ( _http://whatis.techtarget.com/definition/metadata_ or _http://www.data-archive.ac.uk/media/54776/ukda062-dps-preservationpolicy.pdf_ ) ## Repository A digital repository is a mechanism for managing and storing digital content. Repositories can be subject or institutional in their focus. ( _http://www.rsp.ac.uk/start/before-youstart/what-is-arepository/_ ) # 8 Bibliography * Guidelines on Data Management in Horizon 2020, Version 2.0, 30 October 2015: _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oapilotguide_en.pdf_ * Guidelines on OA to Scientific Publications and Research Data in Horizon 2020, Version 2.0, 30 October 2015: _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oapilotguide_en.pdf_ * Webpage of the European Commission regarding OA: _http://ec.europa.eu/research/science-society/open_access_
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1167_SUREAL-23_724136.md
# Introduction The focus of SUREAL-23 is on the particulate emissions from contemporary light-duty direct injection (DI) internal combustion (IC) engines (Diesel and gasoline) that will be addressed by homologation standards beyond Euro-6, especially for nanoparticles smaller than the current regulation cut-off limit of 23 nm, with a threshold of at least 10 nm. Within this context, the objectives of SUREAL-23 are: * to complement existing standard instrumentation by introducing extensive size and composition characterization of exhaust particles, especially for sizes below 23 nm, * to support future emissions compliance through technical development in real driving emissions measurement, * to fully characterize the nature of the particulate emissions which potentially evade current emission control technology and regulations, * to contribute to future definitions of particulate emission limits for “Super Low Emission Vehicles”. The aim of WP7 is to establish effective project management bodies and decision-making procedures, to monitor the progress of the project activities against important objectives & project milestones and to apply relevant corrective actions if necessary. Moreover, it includes the dissemination of the project's results and the maximization of their impact on the widest relevant audience, as well as the establishment of links with relevant projects in the EU, USA and Japan. Finally, an exploitation plan has to be created, defining the technologies which can contribute to current regulatory measurements and can be taken up into new products. This deliverable, _D7.7 “Data management plan (Final Version)”,_ aims to update the 2nd version of the DMP (D7.6 _“Data management plan (2nd Version)”_ ) regarding the management and accessibility of the project's data and findings, with emphasis on those that are accessible via the H2020 Open Research Data Project (OpenAIRE). 
The updated DMP includes descriptions of a) the data / reports generated and provided openly to third parties, b) the data sharing (taking into consideration any legitimate reasons for not sharing data) and c) the data archiving and preservation, for the second half of the project (18-39M). # Abbreviations list <table> <tr> <th> APTL </th> <th> Aerosol and Particle Technology Laboratory </th> </tr> <tr> <td> EC </td> <td> European Commission </td> </tr> <tr> <td> CA </td> <td> Consortium Agreement </td> </tr> <tr> <td> CERTH </td> <td> Centre for Research and Technology – Hellas </td> </tr> <tr> <td> CPERI </td> <td> Chemical Process and Energy Resources Institute </td> </tr> <tr> <td> DG-GRO </td> <td> Directorate-General "Growth" </td> </tr> <tr> <td> DMP </td> <td> Data Management Plan </td> </tr> <tr> <td> DOI </td> <td> Digital Object Identifier </td> </tr> <tr> <td> DoW </td> <td> Description of Work </td> </tr> <tr> <td> INEA </td> <td> Innovation and Networks Executive Agency </td> </tr> <tr> <td> NEDC </td> <td> New European Driving Cycle </td> </tr> <tr> <td> PMP </td> <td> Particle Measurement Program </td> </tr> <tr> <td> UNECE </td> <td> United Nations Economic Commission for Europe </td> </tr> <tr> <td> WLTP </td> <td> Worldwide harmonized Light vehicles Test Procedures </td> </tr> </table> # Short Project Overview A large proportion of the total number of particles emitted from direct injection engines are below 23 nm, and the EU aims to regulate those emissions and impose limits for new light-duty vehicles. Before SUREAL-23, accounting for sub-23 nm particles was not a straightforward choice due to the absence of accurate quantification methods, especially under real driving conditions. The project aimed to increase the knowledge regarding the nature of sub-23 nm particles from different engine/fuel combinations under different operating conditions. 
It also aimed to overcome the relevant barriers by introducing novel measurement technology for concentration, size and composition measurements. In parallel, state-of-the-art aerosol measurement techniques were advanced for better compatibility with sub-23 nm exhaust particles as well as on-board use. The developed instrumentation was used to assess sub-23 nm particle emissions from both Diesel and GDI vehicles, accounting for effects of the fuel, lubricants, after-treatment and driving conditions for existing and near-future vehicle configurations. The most suitable concepts for on-board use have been evaluated accordingly. The project provided measurement technologies that will complement and extend established particle measurement protocols, sustaining the extensive investments that have already been made by industry and regulation authorities. It delivered a systematic characterisation of sub-23 nm particles to facilitate future particle emission regulations, as well as to assess any potential trade-off between advances in ICE technology towards increased efficiency and emissions. The consortium consisted of European and US organizations which are leaders in the field of aerosol and particle technology. # OBJECTIVES The objectives of the deliverable _D7.7 “Data management plan (Final Version)”_ are to describe: a) the data and reports generated and provided openly to third parties, b) the data sharing and c) the data archiving and preservation during the second half of the project (18-39M), including provisions for data preservation beyond the project. SUREAL-23 is registered in the OpenAIRE Project - on a _voluntary basis_ \- in order to widely share its findings (Figure 1). For this purpose, specific information/outcomes were deposited in ZENODO ( _https://zenodo.org/_ ), the European repository for EU funded research. 
It is mentioned that the creation and update of the current DMP was facilitated by the OpenAIRE-proposed web tool _DMPonline_ ( _https://dmponline.dcc.ac.uk_ ). _Figure 1. SUREAL-23 registration in OpenAIRE | Explore_ # DATA SUMMARY The SUREAL-23 project generates: 1. experimental results and findings related to the assessment of the developed sub-23 nm particle sampling and measurement techniques (6 in total) and their demonstration, as described and discussed in reports delivered by M39; 2. experimental data and scientific findings based on a test matrix that combines sub-23 nm particle emission measurements from Gasoline Direct Injection (GDI), Diesel and CNG engines / vehicles with standard and under-development measurement and sampling techniques. The mature, up-to-date outcomes of the project were presented at relevant international conferences and published in scientific journals. Additionally, selected scientific reports and publications are available (in *.pdf format) in the _open access_ European repository ZENODO, and thus in OpenAIRE. The project's public deliverables are also available in ZENODO / OpenAIRE and on the project's website. Final data and reports are stored on the Coordinator's (APTL/CPERI/CERTH) data server. Completed deliverables and reports are also available in the common _private_ "members area" accessed through the project's website ( _http://sureal-23.cperi.certh.gr_ ) (Figure 2). _Figure 2. Common, private "members area" accessed through the project's website._ The project's scientific outcomes are expected to be useful to the scientific and academic community related to engine development, aerosol science, emission control and particle measurements. Moreover, the automotive industry and instrumentation developers could also be interested. 
Last but not least, vehicle particulate emission regulators are expected to pay special attention to the sub-23 nm emission measurements, as already expressed during the Clustering meetings of the GV-02-2016 research projects organized by INEA and the two public events organized by the project, i.e. the Joint Workshop (10-11 October 2018, Thessaloniki) and the Final Workshop (10 December 2019, Lyon) (see Deliverables _D7.2 “First Dissemination Report”_ and _D7.3 “Final Dissemination Report”_ ). # FAIR DATA ## Making data findable Identifiability is obtained for each report / deliverable by filling in a standard introductory table that includes information such as the author's and co-authors' names and affiliations, associated work package and technical activity (task), date (due and delivered), responsible partner and (if any) involved partners. During a document's preparation, a versioning strategy is followed in order to facilitate the document's exchange among partners. All relevant scientific publications have a Digital Object Identifier (DOI) issued by the journal itself, while the project's reports / deliverables selected to be deposited at the ZENODO repository receive a DOI as well. Additionally, each deposition at ZENODO is linked with related keywords that at least include the term “sub-23 nm particle emissions”. Item versioning is also sustained by ZENODO (all versions are citable using the same DOI). ## Making data openly accessible As already mentioned, selected Open Access project data / reports, deliverables, research papers and public presentations are deposited in ZENODO, the European repository for EC funded research. Consequently, items are accessible via OpenAIRE. During the whole duration of the project (1-39M), the selected items below were deposited at ZENODO with the authors' permission (Figures 3a & 3b), including the project's public deliverables with technical / scientific content. 
The rest of the project's public deliverables, relevant to management, such as dissemination reports, the DMP and the exploitation plan, are openly accessible via the project's website, in the “NEWS” section ( _http://sureal-23.cperi.certh.gr/public-deliverables/_ ). _Figure 3a. List of deposited SUREAL-23 items at the ZENODO repository (1/2)_ _Figure 3b. List of deposited SUREAL-23 items at the ZENODO repository (2/2)_ As clearly stated in the Consortium Agreement (CA), pre-existing data is each partner's intellectual property. Such data is subject to each project partner's policy governing intellectual property. In the case of "closed" and confidential data / reports / outcomes produced in the frame of the project, they will be available only after an invention disclosure or provisional patent is filed. This mainly concerns either the data of the novel measurement instrumentation or the emission data of engines or vehicles that the automotive industry provided under specific terms of use. Assessed research results were shared within the scientific community via presentations at conferences, workshops and other dissemination events (see Deliverables _D7.2 “First Dissemination Report”_ and _D7.3 “Final Dissemination Report”_ ). Whenever available, the DOI of each published item is given on the project's website, section “NEWS” (e.g. Figure 5). _Figure 5. Example of web announcement of a project publication, including information about its DOI._ ## Making data interoperable Interoperability of the produced reports and outcomes is obtained by following standard measurement protocols such as the Particle Measurement Programme (PMP) protocol, standard driving cycles for vehicle homologation (e.g. the New European Driving Cycle (NEDC), the Worldwide harmonized Light vehicles Test Procedure (WLTP)), as well as European and/or international certified measurement methods. Additionally, a standard technical and scientific vocabulary is used in order to document all research outputs. 
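The deposition metadata described in the subsections above (title, creators, DOI-linked versioning, and the mandatory keyword “sub-23 nm particle emissions”) can be assembled programmatically before a record is pushed to ZENODO. The sketch below builds a payload in the general shape used by Zenodo's public deposit API; treat the exact field names and the grant-linking convention as assumptions to verify against the current API documentation, and the example values as purely illustrative.

```python
def zenodo_metadata(title, description, creators, extra_keywords=()):
    """Build a Zenodo-style deposit metadata payload for a project deliverable.

    `creators` is a list of (name, affiliation) pairs. Every deposition is
    tagged with the project's standard keyword so that all SUREAL-23 items
    are retrievable with a single search term.
    """
    keywords = ["sub-23 nm particle emissions", *extra_keywords]
    return {
        "metadata": {
            "title": title,
            "upload_type": "publication",
            "publication_type": "deliverable",
            "description": description,
            "creators": [{"name": n, "affiliation": a} for n, a in creators],
            "keywords": keywords,
            # Assumed EC grant reference (SUREAL-23 grant agreement number)
            "grants": [{"id": "724136"}],
            "access_right": "open",
        }
    }
```

Keeping this payload construction in one place makes it easy to guarantee that every deposited item carries the common keyword and the grant reference.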
## Increase data re-use The open data / reports are available for re-use through the practices already described in paragraph 3.2, by the time each report is completed, delivered and approved by the EC. Stakeholders from research, industry and regulatory authorities - already mentioned in Chapter 2 - are expected to be interested in the data/report re-use. It is also mentioned that the EC, and especially the Directorate-General "Growth" (DG-GRO), as well as the United Nations Economic Commission for Europe (UNECE) Particle Measurement Programme, have already expressed their interest in re-using SUREAL-23 open reports for regulatory purposes. Re-use of the most remarkable findings of the project will also be ensured via a _joint publication_ of the three GV-02-2016 “sub-23 nm” clustering projects PEMS4NANO (G.A. 724145), DownToTen (G.A. 724085) and SUREAL-23, which is currently under preparation. The above provisions for data accessibility and re-use do not apply to the results and produced knowledge that are considered confidential or IP-protected. In such cases - mainly referring to parts or concepts of the developed measurement techniques - the involved partners are investigating ways of IP protection (see D7.4 _“Exploitation Plan”_ ). # ALLOCATION OF RESOURCES Experimental data and related reports are produced, prepared and assessed in the frame of the technical and dissemination work packages of the SUREAL-23 project, which cover the required costs of man-effort and any other required goods or services. Mrs. Eleni Papaioannou coordinates the project, as well as the Data Management activities, while Mr. Apostolos Tsakis is responsible for data storage and security and Mrs. Penelope Baltzopoulou for the OpenAIRE and ZENODO repositories. # DATA SECURITY With regard to data security and storage, final experimental data and completed scientific and technical reports are stored on the Coordinator's Data Server, which is mirrored, routinely backed up and daily versioned. 
Data security is ensured by the implementation of local (intra-laboratory) and central (CERTH's information department) firewalls. # ETHICAL ASPECTS No ethical issues are expected, as already assessed according to EU rules and regulations and described in the SUREAL-23 "Description of Work" (DoW) document.
1174_ULISENS_726499.md
# DATA SET REFERENCE, NAME AND DESCRIPTION This section gives a description of the information to be gathered and of the nature and scale of the data generated or collected during the project. These data are listed below: ULISENS project parameters and data are divided into Confidential and Non-Confidential Information: <table> <tr> <th> **Confidential Information** </th> <th> **Non-Confidential Information** </th> </tr> <tr> <td> * Names of the providers of prototypes and their industrial details: company names and addresses. * Technical files including all compulsory documents for demonstrators, with standards and applicable directives * Prototype testing results * Declaration of Conformity and Notified Body testing results * Pilot plant design, parameters and critical data for the upscaling of production. * Jobs and quality procedures. * Details of potential customers and interests related to the commercialization plan </td> <td> * Names of stakeholders at which the demonstrators are installed, including details such as names, addresses and others. * Results of the demonstration activities carried out * Contents of the training actions. </td> </tr> </table> # STANDARDS AND METADATA The main objectives of the ULISENS project are not scientific publications. However, _Open Access (OA) will be implemented in peer-review publications (scientific research articles published in academic journals), conference proceedings and workshop presentations carried out during and after the end of the project. In addition, non-confidential PhD or Master theses and presentations will be disseminated in OA._ The publications issued during the project will include the Grant Number, acronym and a reference to the H2020 Programme funding, including the following sentence: “Project ULISENS has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 726499”. 
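The funding acknowledgement and grant metadata requirements above could be checked programmatically before a manuscript is submitted. The helper below is hypothetical (neither the function nor the `project_reference` metadata key exists in the project); only the required sentence and the project reference string are taken verbatim from the text.

```python
GRANT_NUMBER = "726499"
FUNDING_SENTENCE = (
    "Project ULISENS has received funding from the European Union's Horizon 2020 "
    "research and innovation programme under grant agreement No " + GRANT_NUMBER
)


def check_publication(text: str, metadata: dict) -> list:
    """Return a list of compliance problems; an empty list means the paper is compliant."""
    problems = []
    if FUNDING_SENTENCE not in text:
        problems.append("missing funding acknowledgement sentence")
    if metadata.get("project_reference") != f"ULISENS H2020 {GRANT_NUMBER}":
        problems.append("missing 'ULISENS H2020 726499' project reference in metadata")
    return problems
```

Such a check could be run by the coordinator on each draft before it is forwarded to the EC with the progress report.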
In addition, all the documents generated during the project will indicate in their metadata the reference of the project: ULISENS H2020 726499. Each paper will include the terms Horizon 2020, European Union (EU), the name of the action, acronym and grant number, the publication date, the duration of the embargo period (if applicable) and a persistent identifier (e.g. DOI). The purpose of the requirement on metadata is to maximise the discoverability of publications and to ensure the acknowledgment of EU funding. Bibliographic data mining is more efficient than mining of full-text versions. The inclusion of information relating to EU funding as part of the bibliographic metadata is necessary for adequate monitoring, production of statistics, and assessment of the impact of Horizon 2020 [2]. # DATA SHARING All the scientific publications of the Horizon 2020 project will be automatically aggregated to the OpenAIRE portal (provided they reside in a compliant repository). Each project has its own page on OpenAIRE ( _Figure 1_ ) featuring project information, related project publications and datasets, and a statistics section. BIOTICA will ensure that any scientific papers derived from the ULISENS project are available as soon as possible in OpenAIRE, taking into account embargo periods (in case they exist). **_FIGURE 1_: ULISENS INFORMATION ON THE OPENAIRE WEB ( _WWW.OPENAIRE.EU_ )** BIOTICA will check periodically whether the list of publications is complete. In case there are articles not listed, it is necessary to notify the portal. The steps to follow to publish an article and the subsequent OA process are: 1. The final peer-reviewed manuscript is added to an OA repository. 2. The reference and the link to the publication are included in the publication list of the progress report. 3. When the publication is ready, the author sends it to the coordinator, who reports to the EC through the publication list included in the progress reports. 
Once the EC has been notified by the coordinator about the new publication, the EC will automatically aggregate it at the OpenAIRE portal. # ARCHIVING AND PRESERVATION In order to achieve efficient access to research data and publications in the ULISENS project, the Open Access (OA) model will be applied. Open access can be defined as the practice of providing on-line access to scientific information that is free of charge to the end-user. Open Access will be implemented in peer-review publications (scientific research articles published in academic journals), conference proceedings and workshop presentations carried out during and after the end of the project. In addition, non-confidential PhD or Master theses and presentations will be disseminated in OA. Open access is not a requirement to publish, as researchers will be free to publish their results or not. This model will not interfere with the decision to exploit research results commercially, e.g. through patenting [3]. The publications made during the ULISENS project will be deposited in an open access repository (including the ones that are not intended to be published in a peer-review scientific journal). The repository used by the project partners will be ZENODO. As stated in the Grant Agreement (Article 29.3): _“As an exception, the beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex I, would be jeopardized by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access”._ This rule will be followed only in some specific cases, in those where it is necessary to preserve the main objective of the project. 
_Figure 2. Research results in the context of dissemination and exploitation: the dissemination plan leads to publications (Gold OA or Green OA) and to depositing research data under the data management plan (with access and use either free of charge or restricted), alongside the decision to exploit/protect results, e.g. through patenting (or another form of protection)._ According to the “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020” [2], there are two main routes of open access to publications: **Self-archiving (also referred to as “green open access”):** in this type of publication, the published article or the final peer-reviewed manuscript is archived (deposited) by the author - or a representative - in an online repository before, alongside or after its publication. Some publishers request that open access be granted only after an embargo period has elapsed. **Open access publishing (also referred to as “gold open access”):** in this case, the article is immediately provided in open access mode as published. In this model, the payment of publication costs is shifted away from readers paying via subscriptions. The business model most often encountered is based on one-off payments by authors. These costs (often referred to as Article Processing Charges, APCs) can usually be borne by the university or research institute to which the researcher is affiliated, or by the funding agency supporting the research. In conclusion, the process involves two steps: firstly, BIOTICA will deposit the publications in the repositories, and then they will provide open access to them. Depending on the open access route selected, self-archiving (Green OA) or open access publishing (Gold OA), these two steps will take place at the same time or not. In the case of the self-archiving model, the embargo period will have to be taken into account (if any). ## GREEN OPEN ACCESS (SELF-ARCHIVING) This model implies that researchers deposit the peer-reviewed manuscript in a repository of their choice (e.g. ZENODO). 
Depending on the journal selected, the publisher may require an embargo period of between 6 and 12 months. The process to follow for the ULISENS project is:

1. BIOTICA prepares a publication for a peer-reviewed journal.
2. After the publication has been accepted for publishing, the partner will send the publication to the project coordinator.
3. BIOTICA will notify the publication details to the EC through the publication list of the progress report. Then, the publication details will be updated in OpenAIRE.
4. The publication may be stored in a repository (with restricted access) for a period of between 6 and 12 months (embargo period) as a requirement of the publisher.
5. Once the embargo period has expired, the journal gives Open Access to the publication and the partner can give Open Access in the repository.

**_FIGURE 3_: STEPS TO FOLLOW IN GREEN OPEN ACCESS PUBLISHING WITHIN ULISENS PROJECT** (flowchart: partner prepares the publication; partner notifies the project coordinator; partner stores the publication in a repository with restricted access during the embargo period; coordinator notifies the EC; publication in OpenAIRE; partner gives Open Access to the publication)

## GOLD OPEN ACCESS (OPEN ACCESS PUBLISHING)

When using this model, the costs of publishing are not assumed by readers but are paid by the authors; this means that these costs will be borne by the university or research institute to which the researcher is affiliated, or by the funding agency supporting the research. These costs can be considered eligible during the execution of the project. The process foreseen in the ULISENS project is:

1. The partner prepares a publication for a peer-reviewed journal.
2. When the publication has been accepted for publishing, the partner sends the publication to the project coordinator.
3. The coordinator will notify the publication details to the EC through the publication list of the progress report. Then, the publication details will be updated in OpenAIRE.
4. The partner pays the corresponding fee to the journal and gives Open Access to the publication. This publication will be stored in an Open Access repository.

_**FIGURE 4: STEPS TO FOLLOW IN GOLD OPEN ACCESS PUBLISHING WITHIN ULISENS PROJECT**_ (flowchart: partner prepares the publication; partner notifies the project coordinator; partner pays the fees and gives Open Access to the publication; coordinator notifies the EC; publication in OpenAIRE)

# BIBLIOGRAPHY

1. European Commission, “Guidelines on Data Management in Horizon 2020. Version 2.1”, 15 February 2016.
2. European Commission, “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020. Version 2.0”, 30 October 2015.
3. European Commission, Fact sheet: Open Access in Horizon 2020, 9 December 2013.
# Executive Summary

This deliverable aims to present a plan for the data management, collection, generation, storage and preservation related to CITADEL activities. In this action, we envision five different types of data: data related to the use cases, data related to the meta-analysis to be done in the social-sciences tasks, data coming from publications, public deliverables and open source software. The document presents, following the EC template [1], how these different types of data will be collected, who the main beneficiaries are, and how CITADEL will store them, manage them, and make them accessible, findable and re-usable. The text continues with the resources foreseen at this stage for this openness, and finalizes with the security and ethical aspects that will be taken into consideration in the context of CITADEL. This plan is the first version of the data management plan, which will be updated in subsequent versions (M12, M24 and M36) as part of the Technical Reports, having as input the work carried out in the use cases (WP5), the social and technical work packages (WP2 – WP4) and the dissemination activities (WP6).

# Introduction

## About this deliverable

This deliverable focuses on the management of the data in CITADEL. In CITADEL there will be two different strands of data: the first related to the publications generated as part of the research activities, and the second related to the data collected from citizens, users and non-users of digital public services, as well as from civil servants, that will be used as part of the implementation of the different key results established in the project.

## Document structure

The document follows the established H2020 template for a Data Management Plan (DMP) [1]. Section 2 presents a summary of what the purpose of the data collection and generation is in the case of CITADEL. Section 3 explains how the data and metadata will be made FAIR, and thus accessible, findable and reusable.
Section 4 briefly explains how the financial resources for this openness are envisioned at this stage to be allocated. Sections 5 and 6 focus on the security and ethical aspects, respectively. Section 7 presents the conclusions and future work.

# Data Summary

The purpose of the data to be collected in CITADEL is twofold. On the one hand, to understand why citizens are not currently using a certain digital public service, even if they do have the skills for it. On the other hand, to understand what the opinion is of citizens that are using a digital public service and how this digital service can be improved or created from scratch with the participation of the citizens. To realize this, CITADEL will develop the following Key Results (KR) [2]:

* **KR1: CITADEL Recommendations and guidelines to transform PA.** A set of guidelines derived from the use cases and surveys will focus on the re-definition of the public policies, processes and services to be adapted to citizens' and organizations' (users and non-users) expectations. The use of these CITADEL recommendations will allow PAs to save significant money in the re-design of their processes.
* **KR2: CITADEL Information monitoring service:** This service monitors and analyses available citizens' (users and non-users) data (e.g. feedback, open data, demographic statistics, preferences and so on). The objective is to extract, using big data, semantics and privacy-preserving technologies, the relevant information required for the formulation of the recommendations (KR1). This service is composed of two important assets:
  * Monitoring services asset: Based on semantics, this asset will select the relevant information in each moment and in each situation.
  * Analysis services asset: Based on the relevant information provided by the monitoring asset, this asset will provide KPIs (Key Performance Indicators) or, if possible, recommendations.
* **KR3: CITADEL tool-supported methodology for services co-creation.** This methodology will guide and support PAs in the co-creation process and will be customized to take into account the characteristics of each PA. The adaptation could be done based on different aspects such as legal systems, scope and audience, or the target group.
* **KR4: CITADEL Co-creation collaborative tool** that allows the PA, the Private Sector and citizens to co-create new public services at a conceptual level.
* **KR5: CITADEL Discovery service,** which allows discovering digital public services based on the citizen's profile and the result of semantic analysis of data such as preferences, utilization, opinions and so on.
* **KR6: CITADEL Assessment services** that are responsible for allowing those citizens that use digital public services to evaluate them, providing useful information for improving them.
* **KR7: CITADEL Security toolkit:** A set of assets integrated in CITADEL:
  * Dedicated asset for the integration of privacy regulations
  * Implementation of cloud- and device-based personal data privacy features
  * Privacy-by-Default features
* **KR8: CITADEL Ecosystem** is the main result of the project and aggregates both the social and technical aspects. The CITADEL Ecosystem will include not only the services provided by the other objectives of CITADEL (KR2: Information monitoring service; KR4: Co-creation collaborative tool; KR5: Discovery service; KR6: Assessment services; and KR7: Security toolkit) but also those outcomes focused on the transformation of the CITADEL Recommendations for the re-definition of the public processes (KR1: Recommendations to transform PA).

All the CITADEL services to enable both the pillars of ‘Understand to transform’ and ‘Co-create to transform’ will be integrated as part of the CITADEL ICT Enablers. The high-level architecture of the different components and their link to the CITADEL KRs is shown in the next figure.
**Figure 1.** CITADEL High-level architecture and the link to CITADEL's envisioned Key Results

Out of all the Key Results envisioned for CITADEL, the ones that are more data-oriented, either by using, generating or analyzing data, are KR2 - CITADEL Information monitoring service, KR5 - CITADEL Discovery service, KR6 - CITADEL Assessment services and KR7 - Security toolkit, which aims to apply privacy-by-design and security-by-design principles as well as anonymization and pseudo-anonymization techniques.

In CITADEL three distinct environments are envisioned, namely, a development environment, an integration environment and dedicated environments for each of the use cases. The development environment will be deployed in the corresponding sites of the technological partners (imec, FINCONS and TECNALIA). The integration environment will be deployed at TECNALIA's DevOps infrastructure and will include the different components developed by the technological partners. For testing purposes, these partners will use synthetic ('fake') data or 'persona' data, but never real data, anonymized or not, coming from the use cases. The third environment is the customized environment for the four distinct CITADEL use cases. The CITADEL platform and the already existing IT systems of the four different use cases will be integrated by means of REST APIs, and will be deployed on the PAs' own sites. The data that will be collected from these will follow the principles outlined throughout this deliverable, always fulfilling the GDPR requirements.

**Figure 2.** CITADEL Envisioned environments

Different types of data will be collected and generated in the context of the project. These can be summarized as follows:

1. Data related to the execution of the use cases, testing KR1 – KR8.
2. Data related to the results of the analysis and metadata analysis of the social-related work packages (WP2 and WP3), which will be embedded (partially or wholly) in KR1 – KR8.
3.
Data related to scientific publications.
4. CITADEL public deliverables.
5. CITADEL Open Source Software.

## Data related to the use cases

The aim of this section is to establish what the purpose is of the collection and generation of data within CITADEL, what the sources of these data are and how this is important to realize the Key Results of CITADEL. Furthermore, we also determine the baseline for the formats of the data, both collected and generated, and the main beneficiaries of the collection and generation of such data. Use cases are a key pillar of CITADEL as they will validate the work (in WP5) to be performed in the social and technological work packages (WP2-WP4). In principle, for the real execution of the use cases, CITADEL trials will involve real end users (i.e. data subjects) using a test bed designed to emulate operational services they are using or intend to use. Wherever possible, use cases will use synthetic (i.e. fictional but representative) data, avoiding the use of personal data from operational systems at the PAs. However, this does not mean that no personal data will be collected from participating citizens. In such cases, anonymized data from users of digital public services will be collected. These data can originate from the databases already in production at the Public Administrations where citizens' data are stored, which, in the event they are not anonymized, will be anonymized or pseudo-anonymized through a dedicated anonymization component of CITADEL, as part of the 'Security Management' module of Figure 1. In the event surveys are to be launched, as for instance in VARAM's use case, in order to capture and understand why digitally skilled citizens are not using the offered online governmental services, or to understand the civil servants' willingness to change, the users will be required to provide some personal data (such as age group or education level), but this will be stored in a confidential manner.
Data will be anonymised once data collection has been completed. This is especially relevant for the CITADEL Information monitoring service (KR2) and the CITADEL Evaluation Service (KR6). For the CITADEL Discovery service (KR5), in which citizens are recommended digital public services based on their profile, the data will be pseudo-anonymized, taking into consideration generic characteristics such as services already being used by that specific citizen as well as age, for instance in the case of ICTU. These data will be used following the European GDPR legislation for personal data. The generic sources of data for the CITADEL Information monitoring service (KR2), CITADEL Discovery Service (KR5) and CITADEL Evaluation Service (KR6) are envisioned as follows:

* **Open Data** coming, in a first phase, from EU-wide open data portals such as the European Data portal 1 and the EU open data portal 2 . In a second stage, it might be needed to consider other open data portals at EU, national, regional or local level. This will be updated in subsequent technical reports due M12, M24 and M36. The format of such data is preferably JSON, CSV, TSV (Tab-separated values) or, when relevant, SDMX for statistical data. The exact data sets to be used are part of the study to be performed in D2.1.
* **Synthetic data**, generated for testing purposes. This data will be preferably in JSON.
* **Citizens' data** stored in the governments participating in CITADEL, namely, Antwerp, Regione Puglia, the Netherlands and Latvia. As stated before, these data will not be shared with the technological partners. Furthermore, for the CITADEL Information monitoring service (KR2) and CITADEL Evaluation Service (KR6), data will be (pseudo-)anonymized. The preferred formats are JSON or CSV.
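The distinction drawn above, anonymisation for analytics (KR2/KR6) versus pseudo-anonymisation for profile-linked recommendations, can be illustrated with two standard techniques: dropping direct identifiers, and deriving a stable keyed-hash pseudonym. This is an illustrative sketch only, not the actual CITADEL anonymization component; function and field names are assumptions:

```python
import hashlib
import hmac

def pseudonymize(citizen_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an identifier.

    HMAC-SHA256 with a secret key held by the data controller: the same
    citizen always maps to the same pseudonym (so a profile can be linked
    across services), but the mapping cannot be reversed without the key.
    """
    return hmac.new(secret_key, citizen_id.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict,
                     direct_identifiers: tuple = ("name", "address", "email")) -> dict:
    """Drop direct identifiers entirely (anonymisation for KR2/KR6-style analytics)."""
    return {k: v for k, v in record.items() if k not in direct_identifiers}
```

In a real deployment the key would stay with the controller (the PA), so that the technological partners only ever see pseudonyms; note that keyed pseudonyms are still personal data under the GDPR as long as the key exists.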
The collected data will be curated and processed as part of the CITADEL Information monitoring service (KR2), CITADEL Discovery Service (KR5) and CITADEL Evaluation Service (KR6), with the main aim of providing recommendations (KR1) to the Public Administrations, which will be shown, mostly, in the form of KPI reports. The format for the generated metadata will be preferably JSON, RDF and XML. The size of the expected datasets is unknown at this stage of the project. Furthermore, each use case will need certain specific data for the execution of its own characteristics. These data will be useful only for the PA that owns / collects the data. Next, we proceed to describe, by use case, the data available at this stage of the project.

### Stad Antwerpen

For Stad Antwerpen, this is the currently available data:

* **Back-end data** of the Antwerp citizen platform (antwerpen.be):
  * These data are not anonymized, nor is Stad Antwerpen authorized to use the data for other purposes than the stated objective (i.e. delivery of the requested service). This is part of the citizen platform's general conditions and is confirmed by the legal partner, Time.Lex.
  * In January 2017, a censored back-end data sample was delivered to Tecnalia to give insight into the data structure. Conducting trials on the data is thus possible. Nevertheless, back-end data will always be stored on local servers and managed by Digipolis (the externalised IT department of Stad Antwerpen). This policy applies to the use case within CITADEL as well.
* **Front-end data:** Google Analytics data of the platform.
  * These data are anonymised and aggregated and can be used within the CITADEL project.
* **Marketing research data,** such as: the Antwerp Monitor, panel surveys, and surveys conducted at the local city desk, which are anonymised and aggregated and can be used within the CITADEL project.
A specific survey to measure usability within the use case can be conducted on Stad Antwerpen's own panel.

### VARAM

Currently available data in the use case of VARAM include:

1. Latvia's e-index results raw data;
2. National Citizens' portal www.latvija.lv, Google Analytics data;
3. Service catalogue and service descriptions of more than 2000 public services;
4. Public opinion evaluation on public sector services;
5. Regional development data;
6. HotJar (analytics and feedback application) installation in the portal.

All existing data are anonymous. There might be personalised data when CITADEL co-creation tools are used. However, as expressed before, the intention is to use fictional but representative data in order to evaluate the adequateness of CITADEL ICT Enablers for co-creation.

### ICTU

ICTU does not collect any data such as names or addresses, but the current system does collect IP addresses or other information by means of 'cookies'. ICTU, however, will generate data during the co-creation activities. These data will be gathered through focus groups of youngsters turning 18 or their parents/caretakers (needs analysis, prototype testing, etc.), customer satisfaction surveys and qualitative research (interviews, workshops) on the target audience and needs analysis (and validation). For such data, informed consent forms will be collected and signed.

### Regione Puglia

In the case of Regione Puglia, existing data include tourist flows, prices and services, personal data of business players, promotion data (photos, descriptions, and activities), special offers, leisure activities and events, pathways, attractors, social data, and Google Analytics data. These data are anonymized or (pseudo-)anonymized. In case this is not feasible, i.e. where participant subjects would be easily recognisable, fictional data will be used instead.
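The “fictional but representative” data mentioned for several of the use cases above could be produced along the following lines. This is a hypothetical sketch: the field names and value ranges are illustrative, not the actual use-case schemas, and JSON is used as it is the project's preferred format:

```python
import json
import random

# Illustrative categories only; real surveys would use the PAs' own codings.
AGE_GROUPS = ["18-29", "30-44", "45-64", "65+"]
EDUCATION = ["primary", "secondary", "tertiary"]

def synthetic_citizen(rng: random.Random) -> dict:
    """One fictional 'persona' record, deliberately carrying no real identifiers."""
    return {
        "persona_id": f"persona-{rng.randrange(10**6):06d}",
        "age_group": rng.choice(AGE_GROUPS),
        "education": rng.choice(EDUCATION),
        "uses_digital_services": rng.random() < 0.6,
    }

def synthetic_dataset(n: int, seed: int = 0) -> str:
    """A reproducible JSON test set: seeding makes test runs repeatable."""
    rng = random.Random(seed)
    return json.dumps([synthetic_citizen(rng) for _ in range(n)], indent=2)
```

Seeding the generator keeps integration tests deterministic while still exercising the same record shapes the real, protected data would have.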
## Data related to the analysis and metadata analysis of social-related work packages

In the context of the activities related to ‘Understand to transform’, vignette studies will be carried out to understand the willingness of civil servants to change. Vignette studies are mostly used for qualitative research studies and they can be defined as “short stories about hypothetical characters in specified circumstances, to whose situation the interviewee is invited to respond” [3]. In the context of CITADEL, these studies will be used as a complementary technique to other data collection methods. Before the vignette studies are carried out, civil servants participating in them will be required to sign informed consent forms. Their answers will be used, processed and stored in an anonymous manner. Moreover, in those activities for understanding the citizens, surveys and questionnaires will be launched to understand why citizens are or are not using digital public services. This is especially relevant in the case of VARAM. The data provided by the participants in these questionnaires will be treated in an anonymous way. Data allowing participants to be identified will be deleted prior to data analysis. In the context of the activities related to ‘Co-create to transform’, a metadata analysis of existing co-creation, co-production and co-design approaches will be performed. The origin of the data is theoretical and empirical studies, papers and projects' deliverables (e.g. from FP7 or H2020), to which the CITADEL partners own the rights, either because they are the authors, because of the access fees paid to editorial companies, because they are published following an open access policy, or because they are publicly available. For this metadata analysis, no standard format will be used, but rather an ad-hoc one, created for the purpose of CITADEL's needs.
The results will be exposed mainly in the form of a deliverable, a study, which will be published on the CITADEL website, and as part of the rules that will be used to customize the co-creation methodology. The data created as part of the previously explained (meta-)data analysis in the context of both ‘Understand’ and ‘Co-create’ will serve as input data for the implementation of the ICT enablers to be developed in the project. The exact format is at this stage still to be decided, but it will be customized for the needs of the CITADEL ICT Enablers. The transmission of these data will be done programmatically.

## Data related to scientific publications

As part of the dissemination activities, CITADEL will publish scientific publications in conferences and journals. Following the EC Mandate on Open Access [4], CITADEL adheres to the Open Access policy, choosing the most appropriate route for each case. CITADEL favours, whenever possible, the ‘green’ open access route, in which the published article or the final peer-reviewed manuscript will be deposited in an online repository, before, at the same time as, or after publication, ensuring that the embargo period requested by certain publishers has elapsed. The format in which the data related to the scientific publications will be accessible is PDF files. The metadata to be used will be compliant with that of the repository where the paper is to be deposited and with the format requested by OpenAIRE, so as to ease indexing. The data related to the scientific publications will be relevant to scientists in the field of innovation in the public sector as well as developers and technical-oriented communities.

## CITADEL public deliverables

All information and material related to the public, such as public deliverables, brochures, posters and so on, will be freely available on the project website in the form of accessible PDF files.
When the IPR of foreground knowledge needs to be protected, the corresponding disclosures will be published. All deliverables include a set of keywords and a brief description that are aimed to facilitate the indexing and search of the deliverables in search engines. The keywords in each deliverable aim to stress the main topics addressed in the document, be it a report or a software-related document. The audience of the public deliverables of CITADEL ranges from general audiences, interested in the activities performed in the project, to more specialized audiences, interested in social sciences, innovation in the public sector, ICT enablers for the Public Sector, or experiences gathered through the pilots.

## CITADEL Open Source Software

CITADEL will develop ICT Enablers as a means to aid Public Administrations in their transformation. The source code will be released, whenever the IPR of the partners is not breached, under a friendly open source licensing schema. CITADEL ICT Enablers will be developed in a variety of programming languages but deployed using a container-based approach following a micro-services [5] architecture. The size of the source code, the readme files, the user manual and technical specifications as well as the docker scripts cannot be known at the moment. The open source software is aimed at developers of ICT providers of the PAs and the PA providers' IT departments.

# FAIR Data

This section focuses on the feasibility and appropriateness of making data findable, openly accessible, interoperable and reusable in the context of CITADEL.

## Data related to the use cases

At this stage, for all use cases, both the collected and generated data, anonymized or fictional, are not envisioned to be made openly accessible. In principle, all data collected, stored and processed will be treated as strictly confidential, and kept for a specific period of time as stated on the consent form.
This time period shall be no longer than necessary to achieve the aims of the scenario and to validate the project objectives, and after this point the data will be destroyed as required. The openness of the fictional data for testing purposes is, however, an option that will be evaluated as part of the exploitation and sustainability strategy, and the final decision will be provided in subsequent updates of the Data Management Plan. In the event they are finally released, they will follow machine-readable formats such as CSV, RDF/XML or RDF/JSON, and, when appropriate, the DCAT-AP format [7]. Following that structure will immediately make the use cases' data identifiable, openly accessible, semantically interoperable and re-usable. Each metadata set will be accompanied by its licensing schema.

### Stad Antwerpen

As stated in Section 2.1.1, the back-end data from this scenario will always be stored on local servers and managed by the externalized IT department of Stad Antwerpen, Digipolis. The other data, anonymized or pseudo-anonymized, as well as the synthetic data that will be used when needed, will also be stored in the instance of CITADEL that will be created for the Stad Antwerpen scenario (see Figure 2. CITADEL Envisioned environments). Access to the data will be restricted to citizens under their pseudonyms as well as authorized members of the CITADEL implementation team who might require access to the data.

### VARAM

In the case of VARAM, and as represented in Figure 2. CITADEL Envisioned environments, the data related to this use case will also be stored on VARAM's premises, following the Latvian legislation procedures on data protection. The data gathered through the surveys as well as the citizens' data will be anonymized or, when not possible, pseudo-anonymized.
For the activities in which citizens' participation will be required in a proactive way, such as for the co-creation phase, the approach to be followed will be under a pseudo-anonym procedure. That is, real data of users will not be used or disclosed, but rather fictional although representative data will be favored. Access to the data will be restricted to citizens under their pseudonyms as well as authorized members of the CITADEL implementation team who require access to the data.

### ICTU

As in the previous cases, in ICTU the same approach for the usage of the data of the citizens participating in the co-creation activities will be sought, that is, carrying out these activities under a pseudo-anonym. Access to the data will be restricted to citizens under their pseudonyms as well as authorized members of the CITADEL implementation team who require access to the data. ICTU will also store its own data in its own CITADEL customized environment.

### Regione Puglia

Regione Puglia will also host a customized CITADEL environment on its own premises, following the Italian legislation on data protection. As in the previous use cases, access to the data will be restricted to citizens under their pseudonyms as well as authorized members of the CITADEL implementation team who require access to the data.

## Data related to the analysis and metadata analysis of social-related work packages

Data related to the metadata analysis to be performed in the social sciences work packages will be released mainly as statistical data, following, when feasible, standardized formats (e.g. SPSS). In the context of the work to be done in the strand of ‘Understand to transform’, the software AutoMap 3 is envisioned to be used to analyse the willingness of civil servants to change processes, policies or services, either in conjunction with citizens (co-design, co-create) or individually. AutoMap delivers as output a specific format, DyNetML, as well as standard formats such as CSV or XML.
The personal data of the civil servants participating in the questionnaires and vignette studies will not be disclosed, being therefore anonymized. The licensing schema for these data is currently under evaluation, but the project will favour its openness as much as possible. However, depending on the licensing schema selected, the reusability may be limited. The collected and generated data will be stored as part of the results of CITADEL on the project's website and will be available 3 years beyond the time frame of the project. In addition, where relevant for secondary research, data will be deposited in open access social data archives such as DANS or Gesis.

## Data related to scientific publications

The project will favour, whenever possible, the ‘green’ open access route, in which the published article or the final peer-reviewed manuscript will be deposited in an online repository, before, at the same time as, or after publication, ensuring that the embargo period requested by certain publishers has elapsed. The Consortium will ensure open access to the publication within a maximum of six months. CITADEL partners have the liberty to choose the repository where they will deposit their publications, although OpenAIRE-compliant and OpenAIRE-indexed repositories such as Zenodo [5] will be favoured. The partner TECNALIA will use its own repository, already indexed by OpenAIRE. In the case of the scientific publications, a persistent identifier will be provided when uploading the publications to the selected repository / repositories.

## Data related to deliverables

For the project's publications on the website, the naming convention to be used will be <<Dx.y Deliverable name _ date in which the deliverable was submitted.pdf>>. All deliverables include a set of keywords and a brief description that are aimed to facilitate the indexing and search of the deliverables in search engines.
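The naming convention quoted above can be pinned down with a small helper. This is a hypothetical sketch: the DMP does not fix the date format or character sanitisation, so an ISO date and a conservative character filter are assumed here:

```python
import re
from datetime import date

def deliverable_filename(number: str, name: str, submitted: date) -> str:
    """Build a '<<Dx.y Deliverable name _ date.pdf>>'-style file name.

    Assumes an ISO submission date; characters that are awkward in URLs
    and file systems are stripped from the deliverable name.
    """
    safe_name = re.sub(r"[^\w .-]", "", name).strip()
    return f"{number} {safe_name} _ {submitted.isoformat()}.pdf"
```

Applying one such helper consistently avoids the drift (mixed separators, ambiguous dates) that makes deliverables hard to find in search engines.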
The deliverables will be stored at TECNALIA's hosting provider for three years beyond the duration of the project.

## Open Source Software

CITADEL envisions a freemium business model for the ICT enablers and the CITADEL ecosystem, which implies a free version of the software as well as a premium one. The free versions of the open source components of the CITADEL ICT Enablers will be released as open source in a source code repository, namely GitHub, which will be stored at TECNALIA's premises. The free versions of these components will be accessible, findable and reusable by any developer that is interested in the CITADEL ICT Enablers. Furthermore, CITADEL will explore the possibility for developers outside of the consortium to experiment with such components, which will ensure the uptake and sustainability of the software results of the project. Alongside every software component, a readme file as well as a technical specifications document will be released and made available, and in addition the docker script, so as to be able to deploy the container in any desired infrastructure.

# Allocation of resources

CITADEL does not envision a need for additional resources beyond the duration of the project to handle data or to make the data FAIR. As expressed before, open access repositories will be favoured. In the case of open source software, the partner TECNALIA will ensure that the GitHub repository is available after the project duration, either by keeping it on its own premises or by transferring it to existing open source projects and communities, such as JoinUp [7].

# Data Security

CITADEL will develop the ICT enablers adhering to the security-by-design and privacy-by-design principles that allow the security and audit standards to remain consistent across multiple environments.
Furthermore, as part of the CITADEL Security toolkit (KR7) and the overall CITADEL ecosystem (KR8), the following components are at this stage foreseen in the architecture (see next figure):

* 1) Access Management, which grants authorized users the right to use a service, while preventing access by non-authorized users of CITADEL;
* 2) Credentials Management, to manage credential information such as user names and passwords; and
* 3) Anonymization, which is responsible for removing personally identifiable information from user data or pseudonymizing the personally identifiable information.

**Figure 3.** Security components envisioned in the CITADEL ecosystem as part of the ICT Enablers

Moreover, CITADEL will ensure compliance with the General Data Protection Regulation (GDPR), which will enter into force in May 2018, especially as regards the protection of private data. The security components shown above will be implemented with that principle in mind.

# Ethical Aspects

The basis of ethical research is the principle of informed consent. All participants in CITADEL use cases will be informed of all aspects of the research that might reasonably be expected to influence their willingness to participate. Moreover, project researchers will hold discussions before and after each practical exercise (e.g. interview, co-creation session, etc.) to maintain on-going consent. Participants will be recruited by each organization leading the use cases (VARAM, ICTU, Stad Antwerpen, and Regione Puglia) and other supporting organizations (e.g. InnovaPuglia, FINCONS) and will cover more than one type of citizen. If participants wish to withdraw from participation in the use cases at any time, they will be able to do so, and their data, even the pseudo-anonymized data, will be destroyed.
A specific task has been included in CITADEL to ensure that ethical principles are applied throughout the use cases, which are clustered together for management purposes in the work package related to the use cases (WP5). This task involves the performance of a privacy impact assessment per use case. CITADEL also has a task dedicated to analysing technical and operational issues related to privacy and identifying suitable solutions to be included in the CITADEL framework. Furthermore, one of CITADEL's key results is a Security Toolkit, which applies the security-by-design and privacy-by-design principles. # Conclusions This deliverable has presented the plan for the management of data in the CITADEL project. In this action, several types of data will be collected and generated. Specifically, we envision five different types of data, namely data coming from the use cases, from the metadata analysis to be performed in the social sciences work packages, from publications, from deliverables and from open source software. These data will be anonymized or pseudo-anonymized using, when relevant, an anonymization engine developed as part of the ICT enablers. In the event this engine cannot be used, such as in surveys of civil servants or of citizens directly, manual anonymization techniques will be applied. A combination of real data coming from the different Public Administrations with fictional but relevant data will be sought. GDPR compliance will be ensured. All data and metadata generated will follow machine-readable formats, such as CSV, RDF or JSON. Data from the use cases will be stored on the use cases’ premises, fulfilling the relevant legislation. Data from publications will be stored in OpenAIRE-indexed repositories, favouring the green model whenever possible. Other publications, such as deliverables, will be stored at TECNALIA’s hosting services.
This deliverable will be updated in subsequent releases, namely in M12, M24 and M36 as part of the technical reports. It is envisioned that in those versions the aspects that at this stage are not fully clear (e.g. naming conventions, versioning, dataset size) will be clarified as work progresses in all the work packages.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1177_FRAME_727073.md
# 1 Introduction The main contribution of D7.4 is to introduce the changes undergone in Work Package 5, which now relies on various data sources to achieve the new objectives described in the Grant amendment: on the one hand, providing new Total Factor Productivity estimates for European countries, and on the other hand, determining the respective roles of the drivers of unemployment in explaining why some European countries recovered faster than others after the Great Recession. To do so, Work Package 5 now involves different empirical estimations. Consequently, two Work Packages are currently using data. The report is organized in two main sections, related to Work Packages 5 and 6 respectively. The conclusion summarizes the key implications regarding the replicability of the project findings. # 2 Work Package 6: estimation of the R&D and technology adoption parameters The first set of estimations relates to R&D parameters. P1-P3 are estimated based on patent data and OECD data related to Research and Development (R&D) expenditures. P4 and P5 are based on previous work developed by Comin and Mestieri (2014), known as the CHAT dataset. Finally, P6 is estimated by aggregating and merging two sources of data: micro-data coming from ZEW and the confidential dataset summarizing all agreements between firms and the Fraunhofer institutes. ## 2.1 Data description: micro datasets ### 2.1.1 Mannheim Innovation Panel (MIP) The MIP is an annual survey conducted by ZEW on behalf of the German Ministry of Research and Education. The MIP provides information about the introduction of new products, services and processes, the expenditures for innovations, and the economic success achieved with new products, new services and improved processes. In addition, the survey gives information about the factors which promote, and also hinder, the innovation activities of enterprises. The innovation survey from ZEW lays an important basis for evaluating Germany’s technological performance.
It is also the basis for the German contribution to the Community Innovation Survey (CIS), which constitutes the EU science and technology statistics surveys carried out every two years by EU member states and a number of ESS member countries. ### 2.1.2 PATSTAT PATSTAT contains bibliographical and legal status patent data from leading industrialised and developing countries. This is extracted from the EPO’s databases and is provided as raw data or online. The PATSTAT product line forms a unique basis for conducting sophisticated analyses of bibliographic and legal status data. It has become a standard in the field of patent intelligence and statistics. The PATSTAT product line consists of three individual databases. They are available in raw data format or via PATSTAT Online, a web-based interface to the databases. For the conditions related to the different access modalities, see https://www.epo.org/searching-for-patents/business/patstat.html#tab-2. ## 2.2 Data description: macro datasets ### 2.2.1 GERD dataset - OECD The OECD data is available online and covers a wide range of OECD and non-OECD countries. Data is available from 1981 onwards. R&D data reported in this dataset have been collected according to the 2002 guidelines of the Frascati Manual, which have now been superseded by the Manual’s 2015 edition. The revised definitions are not expected to significantly revise the major indicators. Data are provided in million national currency (for the euro zone, pre-EMU euro or EUR), million current PPP USD and million constant USD (2010 prices and PPPs). This table contains research and development (R&D) expenditure statistics.
Data includes gross domestic R&D expenditures by sector of performance (business enterprise, government, higher education, private non-profit, and total intramural) and by source of funds (business enterprise, government - including public general university funds -, higher education, private non-profit and funds from abroad - including funds from enterprises and other funds from abroad). ### 2.2.2 CHAT dataset The Cross-country Historical Adoption of Technology (CHAT) dataset is an unbalanced panel with information on the adoption of over 100 technologies in more than 150 countries since 1800. It contains information on the diffusion of about 104 technologies in 161 countries during the last 200 years. It extends the data used in Comin and Hobijn (2004) and Comin, Hobijn, and Rovito (2006). Almost all of the source data is only available at an annual frequency. For some of the older technologies, like steamships, the data go back to the early nineteenth century. The last year in the sample is 2003. Some data, especially in the earlier part of the sample, are not available at an annual frequency. The technology measures in CHAT capture a similar intuition. They are either: (i) the number of capital goods specifically related to accomplishing particular tasks, (ii) the amounts of particular tasks that have been accomplished, or (iii) the number of users of a particular manner of accomplishing a task. For more details about CHAT, see http://www.nber.org/papers/w15319.pdf. ## 2.3 Access to the datasets ### 2.3.1 Micro-data: replication of results vs confidentiality The MIP data is composed of micro-data (German firms) and its use for scientific purposes is free of charge. The ZEW places value on the fact that the whole scientific community benefits from the MIP. The data is placed at the disposal of external users in an anonymous form (Scientific Use File / Education Use File) for scientific, non-commercial purposes.
At the moment more than 100 scientists use the Scientific Use Files, and over a dozen researchers visit ZEW’s Research Data Centre every year. For more information about the Data Centre, please see https://kooperationen.zew.de/en/zew-fdz/home.html. PATSTAT is widely used in the academic community to extract patent data. Even though the latter is proprietary, most universities with faculties in Economics and Economics of Innovation acquire licenses for its use. The scripts linked to the patent extraction can be found in the GitHub repository in order to replicate the results for each sector and country over time. The only exception to open access to the data concerns the micro-data about the confidential agreements between the Fraunhofer institutes and German firms to license, or to perform, R&D. The FRAME project benefits from a specific allowance to use the data within the FRAME project only (see Appendix A for more details). ### 2.3.2 Aggregated data: replication and public sources Besides the limitations previously mentioned, the other data sources are already publicly available: the GERD dataset is available online 1 , as is the CHAT dataset 2 . To ease the replication of the research results across Work Packages, the detailed calibrated estimates will be summarized on the project website. Moreover, the scripts used in extracting patent data are already available in the GitHub repository 3 . Doing so allows people with licenses to PATSTAT to replicate the queries and estimations of P1-P3. # 3 Work Package 5 The reference method developed by Basu, Fernald, and Kimball (2006) cannot be easily applied in the European context. The lack of available quarterly data does not allow a direct application of their methodology to the European case. However, the EU-KLEMS database provides an alternative by supplying series of annual TFP measures.
The main drawback is linked to the assumptions behind the estimation: constant returns to scale and no adjustment for changes in factor utilization. Work Package 5 proposes to extend these estimations by relaxing these assumptions, using EU-KLEMS as a starting point. ## 3.1 TFP estimation: EU countries vs the USA ### 3.1.1 Main datasets: description and access Work Package 5 uses output and input measures at the industry level (for further details, see O’Mahony and Timmer (2009) and Jäger (2017)). Two main sources of data are used to evaluate the added value of the developed methodology. First, EU KLEMS is used for EU countries to provide annual industry-level growth accounting data. Second, the World KLEMS data has been used to assess the relevance of the methodology for estimating the TFP of the USA. EU-KLEMS is maintained and updated on the official website of the initiative: http://www.euklems.net/. Therefore, the data is freely available and recently updated. The EU KLEMS updates in the new ISIC Rev. 4 industry classification are provided on a country-by-country basis. Similarly, the capacity utilization data is freely available on the European Commission website: https://ec.europa.eu/info/business-economy-euro/indicators-statistics/economic-databases/business-and-consumer-surveys/download-business-and-consumer-survey-data_en. Regarding the USA, the industry measures used to estimate the TFP come from another publicly available dataset, available online at http://www.worldklems.net/data.htm. The US capacity utilization data comes from the Federal Reserve Board’s monthly reports on Industrial Production and Capacity Utilization (G.17) 4 . The data is constructed by the Federal Reserve on the basis of an underlying Census Bureau survey of manufacturing firms, the Census Bureau’s Quarterly Survey of Plant Capacity (QSPC).
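Schematically, the extension amounts to a utilization-adjusted growth-accounting residual. The notation below is simplified from Basu, Fernald, and Kimball (2006) and is a sketch of the idea, not the exact WP5 specification:

```latex
% Standard industry-level growth-accounting residual for year t:
\Delta \log \mathrm{TFP}_t = \Delta \log Y_t - s_K \,\Delta \log K_t - s_L \,\Delta \log L_t
% Utilization-adjusted residual, with U_t the capacity-utilization series:
\Delta \log \mathrm{TFP}^{*}_t = \Delta \log \mathrm{TFP}_t - \beta \,\Delta \log U_t
```

Here $Y_t$ is output (value added), $K_t$ and $L_t$ are the capital and labour inputs with cost shares $s_K$ and $s_L$, and $\beta$ is estimated by instrumenting $\Delta \log U_t$ with exogenous oil-price, monetary and fiscal shocks.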
In order to tackle endogeneity, Work Package 5 relies on different variables to instrument the relationship between capacity utilization and TFP. Therefore, secondary datasets are also involved. ### 3.1.2 Secondary datasets: description and access Work Package 5 relies on different sources of shocks to tackle endogeneity: shocks in oil prices, monetary policy shocks, and fiscal policy shocks. For European countries, oil prices are computed by deflating the Brent Europe price of oil with each country’s GDP deflator. The latter relies on two publicly available data sources: the Brent Europe price (COILBRENTEU) from the World Bank 5 and the OECD GDP data, see https://data.oecd.org/gdp/gross-domestic-product-gdp.htm. For the USA, oil prices are retrieved from FRED (Federal Reserve Bank of St. Louis), available online at https://fred.stlouisfed.org/series/WPU114112153. The variable linked to uncertainty has been computed based on the number of journal articles per year and country, and is known as the Economic Policy Uncertainty (EPU) index. This data source is already widely used in the economics and finance academic communities 6 . The EPU database is maintained by various US universities; its use is free of charge and it is available online: www.policyuncertainty.com. The variable based on monetary shocks comes from the ECB policy announcements which are used in Jarocinski and Karadi (2018) but is not yet publicly available. For the moment, the study is under review and the summary of the monetary shocks has been communicated directly to the Work Package 5 team by Jarocinski and Karadi. Once the study is published, it will be possible to replicate the estimations with the specified monetary shocks. For the United States, the WP5 team uses the series of narratively identified monetary policy shocks from the seminal work of Romer and Romer (2004), as updated in Wieland and Yang (2016) and provided at an annual frequency in the latter paper 7 .
For fiscal shocks, WP5 mainly relies on a database of fiscal consolidation shocks compiled by Alesina et al. (2015), which identifies changes in taxes and government spending motivated by debt and deficit reduction concerns, and therefore arguably unrelated to productivity shocks. Their database, which builds on earlier efforts by Pescatori et al. (2011), is available at the annual level for all countries in our sample between 1978 and 2014 8 . For the United States, a measure of exogenous tax changes developed by Romer and Romer (2010) 9 is used, available at the quarterly level for the period 1945-2007. ## 3.2 Drivers of unemployment The aim is to evaluate how much of the actual dynamics of unemployment discount factors can explain, by comparing the unemployment rate predicted by the model to the actual data for each of the four countries we consider. To evaluate the extent to which variation in discounts can explain unemployment variability across EU countries, WP5 relies on realized country-specific returns on stock market data. Data is provided by WRDS. The estimations are also compared with the Leading Economic Indicators from the OECD, as these measures have been shown to predict stock market returns (see Zhu and Zhu, 2014, in Finance Research Letters) 10 . Before doing that, WP5 explores the qualitative predictions of the model by discussing the Impulse-Response Functions (IRFs), which involves different datasets due to the calibration exercises. ### 3.2.1 Data description Stochastic discount factors (SDFs) are estimated with data collected on stock market returns as a measure of risky returns in each country. WRDS offers measures of national stock market returns, computed (by WRDS) as weighted averages of the price variations of each stock in each stock exchange. The discussion of the IRFs involves different data sources to calibrate the model. First, to benchmark the model, the project team of WP5 tries to mimic the results of Hall (2017) by using US data.
The latter is about stock market prices and dividends. The data has been collected by Professor Robert Shiller. Secondly, the authors discuss the results in the light of previous contributions for EU countries, namely Elsby et al. (2013). The WP5 project team does not follow the same methodology as Elsby et al. (2013) but computes the productivity shocks by dividing the GDP of various countries by the total number of workers in these countries. D5.1 and D5.2 use the calibration existing in the literature, which is rather US-biased. In order to provide relevant estimates for the EU countries, the authors have calibrated the model with OECD data to characterize the labour market dynamics with: NRR (net replacement rates, for the new calibration), DUR_D (unemployed persons by duration, to calibrate job-finding and separation rates) and MEI (for quarterly measures of unemployment). ### 3.2.2 Access to the datasets The market returns are estimated based on the computation of a private database provider, Wharton Research Data Services (WRDS). WRDS offers a dataset called “World Indices by WRDS”. WRDS computes stock market returns at the country level; what WRDS does here is to aggregate the stock market returns of listed companies from the security issue level for a given country. Access to the data requires a license; the data can be accessed online: https://wrds-www.wharton.upenn.edu/pages/support/manuals-and-overviews/wrds-world-index/. The Euro OverNight Index Average (EONIA) is used as a measure of the net risk-free return. The data is provided by the Statistical Data Warehouse of the European Central Bank. All series are expressed in percent per annum and available at monthly frequency. The data collected by Robert Shiller is publicly available at http://www.econ.yale.edu/shiller/data/ie_data.xls. This dataset has monthly observations of prices Pt and dividends dt, which have been provided by Standard and Poor’s.
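The productivity-shock proxy mentioned above (GDP divided by the total number of workers) amounts to a one-line computation; the helper name and figures below are illustrative only:

```python
def labour_productivity(gdp: float, employed: float) -> float:
    """Output per worker, used as a simple productivity-shock proxy."""
    return gdp / employed

# Hypothetical national-accounts figures (GDP in million EUR, workers in thousands).
print(labour_productivity(gdp=650_000.0, employed=20_000.0))
```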
Finally, the productivity shocks are also based on another public source of data, see http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=nama_10_gdp&lang=en and http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=namq_10_pe&lang=en. The calibration of the labour market parameters is likewise, by definition, available for replication and for different countries, to fit researchers’ needs for replicability purposes as well. For more information about the data, please see https://stats.oecd.org/Index.aspx?DataSetCode=NRR. # 4 Conclusion Broadly speaking, the FRAME project will ease access to the data and scripts for replication within the scientific community. Most of the sources of data are publicly available, either by being in open access already, or by belonging to national statistical offices. Consequently, the use, archiving and preservation of these datasets follow the rules of the respective institutions. However, due to confidentiality agreements and copyrights, two micro sources of information will not be available (i.e. PATSTAT and the Fraunhofer Institutes licensing agreements). The confidentiality linked to the data does not impact the replication of the macro-findings across Work Packages, because the aggregated estimations used to define the calibration of the model will be provided. Only the micro-estimations from Work Package 6 will not be replicable. The archiving policy for the data and scripts involved in the FRAME project will follow the ZEW Data Protection Policy. Consequently, the latter will be available for replication and scientific purposes for the next 10 years at the ZEW data centre 11 . The upload of the different scripts linked to each type of modelling will take place after the related studies have been accepted by scientific journals. Doing so will ensure the publication of the FRAME results and their replication.
Access to the different datasets involved in the project is centralized on the website 12 to cover the largest spectrum of users. The project website links to the official sources of the data, but the datasets are not associated with official PIDs. It is therefore likely that the URLs used to cross-link the data sources might change over time. The Data Management Plan has been designed to be as user-friendly as possible by providing access to most of the data involved in the modelling approaches, as well as in calibrating the models. The repository will be updated over the submission process of the different deliverables. The deliverables will be available on the project website after their acceptance by the European Commission. The scripts linked to the different models will be made available for download once the project members receive the acceptance of their related articles in scientific journals. The scripts linked to the modelling approaches are based on the Dynare software, which is available for free. # A Agreement for the exploitation of the Fraunhofer dataset
1180_SHiELD_727301.md
# Executive Summary The objective of this deliverable is to present a data management plan for the SHiELD project. This document covers a wide set of activities such as data collection, generation, storage and preservation. In this action, we envision four different types of data: data related to the use cases, data coming from publications, public deliverables and open source software. The document presents, following the EC template [1], how these different types of data will be collected, who the main beneficiaries are, and how SHiELD will store them, manage them, and make them accessible, findable and re-usable. The text continues with the foreseen resources needed for the openness of the data, to finalize with the security and ethical aspects that will be taken into consideration in the context of SHiELD. This is the first version of the data management plan, which will be updated in subsequent versions (M18 and M36) as part of the Technical Reports, having as input the work carried out in the use cases (WP6), the social and technical work packages (WP2 – WP5) and the dissemination activities (WP7). # Introduction ## About this deliverable This deliverable focuses on the management of the data in SHiELD. In this context there are two different types of data: those related to the publications generated as part of research activities, and those related to the data collected from citizens, users and non-users of digital public services, as well as from civil servants, that will be used as part of the implementation of the different key results established in the project. ## Document structure The document follows the established H2020 template for a Data Management Plan (DMP) [1]. Section 2 presents the data summary of what the purpose of the data collection and generation is. Section 3 explains how the data will be made FAIR, and thus findable, accessible, interoperable and reusable.
Section 4 briefly explains how the financial resources for this openness are envisioned, at this stage, to be allocated. Sections 5 and 6 outline security and ethical aspects respectively. Finally, Section 7 presents the conclusions and future work. # Data Summary ## Purpose of the data collection/generation and its relation to the objectives of the project The following list of SHiELD‘s project objectives and related key results (KR) provides a description for each KR specifying the purpose of the data collection/generation (if any): • **(O1) Systematic protection of health data against threats and cyber-attacks**. * **KR01: Knowledge base of generic security issues that may affect a system**. The purpose is to create a knowledge base which captures threats that should be managed by the architecture and regulatory data protection requirements (supporting objective O4). This knowledge base captures neither users' health data nor user identities; it only manages threats and compliance issues in specific end-to-end applications. For the SHiELD use cases we will use fake data just to prove the benefits of the results. o **KR02: Tool that provides an automated analysis of data structures in order to identify sensitive elements that may be vulnerable to specific threats.** Data structures often have flaws and weaknesses during the storage or exchange of data. The purpose is to analyse/collect the schemas of these structures. SHiELD pilots will be used to identify sensitive data, which will be traced during the pilots to ensure its privacy aspects and that access rights requirements are kept. * **KR03: Security requirements identification tool**: this tool will allow models of end-to-end applications to be created, and security threats and compliance issues affecting those applications to be automatically identified. We will just list security threats and compliance issues according to ‘security by design’ principles.
* **(O2) Definition of a common architecture for secure exchange of health data across European borders.** * **KR04: SHiELD open architecture and open secure interoperability API:** the purpose is to create a SHiELD architecture composed of the results of the epSOS project together with tools brought by SHiELD partners, such as the anonymisation mechanisms. Furthermore, the health data interchanged are fake; we do not use real user data. SHiELD pilots will invent users for each scenario. Basically, the approach is to give citizens and healthcare providers the possibility of accessing their health data from other countries. * **KR05: SHiELD (Sec)DevOps tool:** the purpose is twofold. During development time, a set of architectural patterns (mainly in Java) is stored in order to check data protection security mechanisms. During run time, a set of tools provides monitoring facilities, alerting the operator of the system that a threat is likely to occur. * **(O3) Assurance of the protection and privacy of the health data exchange.** This objective is addressed mostly in WP5, led by IBM based on their expertise in novel data security mechanisms for securing the data exchanged among the different Member States. This data is protected before, during and after it is exchanged. * **KR06: Data protection mechanisms**: the purpose is to collect a suite of security mechanisms to address data protection threats and regulatory compliance issues in end-to-end heterogeneous systems. This includes (but is not limited to) tamper detection for mobile devices, data protection mechanisms, and consent-based access control mechanisms. o **KR07: Privacy protection mechanisms:** these privacy mechanisms address different aspects of privacy protection and regulation of data. These include methods for sensitive information identification.
The purpose is to use and develop methods to mask private sensitive information dynamically, on the fly, as well as methods able to anonymize data while enabling analysis on the data. * **(O4) To understand the legal/regulatory requirements in each member state, which are only partly aligned by previous EU directives and regulations, and provide recommendations to regulators for the development of new/improved regulations**. o **KR08: Legal recommendations report.** For this KR we are not going to use private data. The purpose is to create a common regulatory framework where the legal requirements regarding security among the Member States are aligned. * **(O5) Validation of SHiELD in different pilots across three Member States** o **KR09: Pilots:** the purpose is to test implementations which are deployed in three Member States, supporting the validation scenarios defined. The collected data will be used to prove that the scenarios are working. * **KR10: Best practices:** the purpose of the data used is to describe lessons learned and best practices for protecting health data. * **(O6) Dissemination of SHiELD results** o **KR11: Publications:** the purpose is to collect the scientific papers, white papers, popular press articles, media and social media we are producing. o **KR12: Take-up opportunities:** its purpose is to identify the main users, standards bodies and regulators. ## Types and formats During these first 6 months we are considering the format suggested in [2]: a Patient Summary is an identifiable “dataset of essential and understandable health information” that is made available “at the point of care to deliver safe patient care during unscheduled care [and planned care] with its maximal impact in unscheduled care”; it can also be defined at a high level as “the minimum set of information needed to assure healthcare coordination and the continuity of care” [2].
From a technical point of view, we will use machine-readable formats such as CSV, XML or JSON. Examples of the XML format are described in [3], which is the official Metadata Registry. The SHiELD project manages structured and unstructured data collected during current and past patient hospitalizations. For example: **Structured data** refers to kinds of data with a high level of organization, such as information in a relational database. For example: * SDO (discharge form), which contains 5 .txt files where each field is separated by “;” * ED (Emergency Department) dataset. o Medical images o Treatment forms. * Constant collection forms. * **Unstructured data** refers to information that either does not have a pre-defined _data model_ or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. Examples are: * Reports of complementary tests (radiology, pathological anatomy, endoscopy, etc.) * Discharge letter. o Monitoring of evolutions in external consultations. Both documents are typically uploaded in PDF format. ## Re-use of existing data We will reuse the existing and available data provided in epSOS (http://www.epsos.eu/home/epsos-results-outlook.html) just to check the feasibility of the solutions provided in SHiELD. ## Origin of the data The data is based on the scenarios provided in SHiELD, and more precisely on the requirements of the three member states (UK, Italy, Spain (Basque Country)). The data used in the scenarios are not real, and do not belong to nor describe any individual. Additionally, we follow the principle of informed consent (see section 6 “Ethical aspects”). The use of these data can help us demonstrate the technology developed. ## Expected size of the data At this stage of the project it is still hard to define precisely the data size and ingestion rate.
However, it can be useful to go into details regarding the dimensions of the most important data involved in the use cases: * **Medical images**: include all the bio-images such as ultrasound scans, MRI (magnetic resonance imaging) or CT (computed tomography) scans. Considering that **computed tomography** uses 3D x-rays to make detailed pictures of structures inside the body, it takes **pictures in slices**, like a loaf of bread. This means that each slice is a picture; the number of pictures can range from 30 for simple examinations to 1000+ for sophisticated examinations. The scan can be repeated several times (2-6) to reduce noise and to ensure high quality of the exam. In conclusion, we will have from 30 to 1000 images, each one with a size of 5 MB, times 2-6 series; we can say that a single CT examination for a patient will have a size of 300 MB – 30 GB depending on the kind of investigation; * **SDO and ED dataset**: around 1 KB per patient, since no images are included and the information is codified (.txt format). ## To whom might it be useful ('data utility')? The results of SHiELD might be useful for healthcare providers, governments, and patients. # FAIR DATA This data management plan follows the FAIR (Findable, Accessible, Interoperable, Reusable) principles. ## Data findable There are different types of data: * Data related to the use cases * Data coming from publications * Data coming from public deliverables * Open source software ### Data related to the use cases During the lifetime of the project, and especially during the trials execution, SHiELD partners expect several types of data to be generated, mainly health data, location data, personal data (“fake” names, addresses, contact information, IP addresses, etc.), pseudonymised data (user names, device identifiers, etc.), traffic data, as well as others.
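The back-of-the-envelope storage estimate for CT examinations quoted above (slices × series × ~5 MB per image) can be reproduced as follows; the helper name is ours, not part of any SHiELD tool:

```python
def ct_exam_size_mb(n_slices: int, n_series: int, slice_mb: float = 5.0) -> float:
    """Storage estimate for one CT examination: slices x series x size per image."""
    return n_slices * n_series * slice_mb

# The ranges quoted above: 30 slices x 2 series up to 1000 slices x 6 series.
print(ct_exam_size_mb(30, 2))            # lower bound, in MB
print(ct_exam_size_mb(1000, 6) / 1024)   # upper bound, in GB (roughly 30 GB)
```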
The first step in the development of the use case studies will be to produce a high-level outline of the scenario to be used in the project. Starting from the epSOS data exchange gateway, a set-up for subsequent validation experiments will be deployed. Since these experiments will involve some novel security mechanisms whose value is not yet proven, current patient data will not be used directly in the use cases. Instead, an equivalent test system will be implemented using synthetic patient data to verify that security is effective without compromising the data exchange interoperability requirements, and that SHiELD solutions are compliant with the European General Data Protection Regulation 679/2016. In the second step of the project, synthetic data sets will be created which may be sampled or combined randomly and associated with fictitious patients. This synthetic set of medical information will include the minimum patient summary dataset for electronic exchange developed in the epSOS project [4], defined from a clinical point of view and keeping in mind the medical perspective of the final users (medical doctors and patients). SHiELD WP6 deliverable D6.1 describes a set of scenarios; the digitalized data included in Electronic Health Records (EHR) comprise, for example:

* Patient's personal data
* Medical histories & progress notes
* Diagnoses
* Acute and chronic medications
* Allergies
* Vaccinations
* Radiology images
* Lab and test results
* Clinical parameters (blood pressure, heart rate, capillary glucose, ...)

For each scenario it will be necessary to establish the minimum clinical data needed to manage the patient in the most efficient way.
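A synthetic record covering the EHR categories listed above might look as follows; all field names and values are invented for illustration and do not follow the epSOS minimum dataset schema.

```python
import json

# Illustrative synthetic patient summary; field names and values are
# assumptions for demonstration, not the epSOS schema.
synthetic_summary = {
    "personal_data": {"name": "TEST PATIENT", "date_of_birth": "1970-01-01"},
    "diagnoses": [{"system": "ICD-10", "code": "I21.0"}],
    "medications": ["acetylsalicylic acid 100 mg"],
    "allergies": ["penicillin"],
    "vaccinations": ["tetanus"],
    "clinical_parameters": {"blood_pressure": "130/85", "heart_rate": 72},
}

# Serialised as JSON for exchange between test systems
payload = json.dumps(synthetic_summary)
assert json.loads(payload) == synthetic_summary  # round-trips losslessly
```

Such records can be sampled and combined randomly, as described above, to populate the test system without touching real patient data.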
On the one hand, it will be necessary to establish the sensitivity and security of the data; on the other hand, it is essential to provide health professionals with the minimum indispensable data to perform efficient and safe management of the patient. One of the goals of SHiELD is to establish the minimum data necessary for each scenario in order to improve the clinical management of foreign patients travelling across Europe. To this end we need to:

* Define the fields to include, their format, and the range of values they can adopt.
* Classify each field as part of the minimum set or as recommended, leaving to each Health Service the final decision to include it or not.
* Include the field and its value among the attributes of the document as a "tag", to identify the essential elements of its content without having to open (decrypt) the document.

To codify the different fields of the minimum dataset that will be exchanged, we have:

* **SNOMED CT** or **SNOMED Clinical Terms**: a systematically organized, computer-processable collection of medical terms providing codes, terms, synonyms and definitions used in clinical documentation and reporting. SNOMED CT is considered to be the most comprehensive, multilingual clinical healthcare terminology in the world. Its primary purpose is to encode the meanings that are used in health information and to support the effective clinical recording of data with the aim of improving patient care. SNOMED CT provides the core general terminology for electronic health records. Its comprehensive coverage includes clinical findings, symptoms, diagnoses, procedures, body structures, organisms and other etiologies, substances, pharmaceuticals, devices and specimens.
For exchange between different Health Systems we also have:

* **ICD-10**: the 10th revision of the International Statistical Classification of Diseases and Related Health Problems, a medical classification list by the World Health Organization. It contains codes for diseases, signs and symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or diseases. The code set allows more than 14,400 different codes and permits the tracking of many new diagnoses. The codes can be expanded to over 16,000 by using optional subclassifications.

This is just a brief list of medical data; it represents only a subset of the whole set of medical information that could be involved in the SHiELD project. From documents including sensitive information we will remove or hide all identifying data. For example, Figure 1 shows a dismissal letter in which sensitive information can be found: the **name and surname** of the patient and a **clinical record number** that is bound to the patient. All these data will be removed.

Figure 1: Document containing clinical record number and name.

Regarding medical images, Figure 2 represents a slice of a simulated patient. Some sensitive information is circled in blue:

* **FANTOCCIO** is the space dedicated to the patient's name and surname;
* **PID** is the internal patient ID, i.e. the code that identifies the patient within the hospital's internal system;
* **Acc.num** is a progressive number in the hospital's internal system.

Figure 2: Slice of a simulated patient

In addition to synthetic data regarding past patient hospitalizations, the SHiELD project can include mobile data that can be useful for diagnostic purposes.
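The removal of identifying fields described above can be sketched as a simple rule-based redaction over text reports; the labels and patterns below are hypothetical, and real documents (PDF, DICOM) would need format-specific, validated rules.

```python
import re

# Hypothetical redaction rules for a discharge letter: blank out the
# patient's name line and the clinical record number.
PATTERNS = [
    (re.compile(r"(?im)^(Name and surname:).*$"), r"\1 [REDACTED]"),
    (re.compile(r"(?i)(clinical record number:?)\s*\d+"), r"\1 [REDACTED]"),
]

def redact(text: str) -> str:
    for pattern, repl in PATTERNS:
        text = pattern.sub(repl, text)
    return text

letter = "Name and surname: Mario Rossi\nClinical record number: 1234567\nFindings: ..."
clean = redact(letter)
assert "Mario Rossi" not in clean and "1234567" not in clean
```

In practice such rules would be one layer among several, combined with the synthetic-data approach so that no real identifiers reach the test systems.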
Data could come from both mobile and wearable devices; some examples of datasets are:

* GPS tracks (e.g. localization);
* Posts (e.g. social registrations);
* Last known activities (e.g. SMS sent at time XX.XX);
* Weather data;
* Activity tracker data;
* Chronic patient monitoring;
* Drug therapy.

These data coming from wearable devices are not directly health-related but allow health-related conclusions to be drawn after processing.

### Metadata

All publications will be indexed using Digital Object Identifiers or similar mechanisms so that they can be discovered and identified. All papers in journals and magazines will use this identifier. Concerning the naming convention, we will use the following: <<Dx.y Deliverable name _ date in which the deliverable was submitted.pdf>>. Each paper or deliverable contains a keywords section that can be used to optimize possibilities for re-use. Each deliverable is tagged with a clear version number, as indicated in Figure 3, Figure 4 and Figure 5. This is part of the metadata that each deliverable contains.
Additionally:

* Editor(s): the main leaders of this document
* Responsible Partner: the partner(s) mainly responsible for this document
* Status-Version: draft, released, final
* Date: submission date
* Distribution level (CO, PU): confidential or public access according to the SHiELD proposal
* Project Number: SHiELD project number
* Project Title: SHiELD title
* Title of Deliverable
* Due Date of Delivery to the EC: date to be sent to the European Commission (EC)
* Workpackage responsible for the Deliverable
* Contributor(s): who have contributed
* Reviewer(s): reviewers
* Approved by: people who internally approved it for submission to the EC
* Recommended/mandatory readers
* Abstract: summarises the document
* Keyword List: a set of words providing an overview of the topic of this deliverable
* Disclaimer: copyrights, if any

Each document registers its revision history: version number, date, reason for the modification, and by whom it was modified.

Figure 3: Deliverable front page where the version is shown

Figure 4: Document description contains the version number

Figure 5: Page headers contain the version number

## Data openly accessible

Data related to the use cases are going to be accessible through the SHiELD deliverables, which will be published on the website (http://www.project-shield.eu/). All deliverables include a set of keywords and a brief description aimed at facilitating the indexing and search of the deliverables in search engines. Scientific publications are going to be published as Open Data; we will use OpenAIRE [3]-compliant repositories. For example, TECNALIA uses its own repository, already indexed by OpenAIRE. There are other repositories, such as Zenodo [4], that can be used.
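The deliverable naming convention given in the Metadata subsection can be applied programmatically; a small sketch (the ISO date format is an assumption, since the convention does not fix one):

```python
from datetime import date

def deliverable_filename(number: str, name: str, submitted: date) -> str:
    """Build '<<Dx.y Deliverable name _ submission date.pdf>>' per the stated convention."""
    return f"D{number} {name} _ {submitted.isoformat()}.pdf"

print(deliverable_filename("2.1", "Data Management Plan", date(2017, 6, 30)))
# D2.1 Data Management Plan _ 2017-06-30.pdf
```

Generating names this way keeps deliverable files consistent and sortable across partners.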
The deliverables will be stored at AIMES' hosting provider, and for three years beyond the time frame of the project. All data produced will be made available through deliverables, papers in journals/magazines/conferences, or repositories. The data used for proving functionalities are not real, and they are going to be distributed using open source repositories, easily accessible with a browser.

According to the SHiELD Grant Agreement (GA), page 15, "The SHiELD DevOps and solution will be as open source as possible (taking into account exploitation plans and the IPR issues that might arise from the usage of proprietary background)". Basically, all tools follow a freemium licensing schema, with a public version that can be released as open source and a commercial edition. All this software will be released at the end of the project, when it is mature enough. At this moment there are no specific arrangements or restrictions of use (apart from the GA), there is no data access committee, and licenses depend on each tool used in SHiELD.

## Data interoperable

The SHiELD project will produce a platform based on OpenNCP [6], which is interoperable with other software. The structures used for data exchange follow the eHealth DSI Interoperability Specifications [7]. Most of the vocabularies used follow traditional software engineering artefact descriptions, and for the eHealth domain we are using HL7 [8], whose specifications are available at no cost.

## Increase data re-use (through clarifying licences)

Data stemming from the use cases will be delivered through the appropriate deliverables. Our approach is to extend a branch of OpenNCP and to add the SHiELD functionalities. Once the project is finalised, we will integrate these functionalities into the OpenNCP community, which will maintain the platform. At the time of writing, we do not envision any embargo on data.
# Allocation of resources

SHiELD does not envision additional resources for handling data management. SHiELD will use open access repositories as much as possible for the following data:

* data related to the use cases
* data related to the meta-analysis
* data coming from publications
* data coming from public deliverables
* open source software

Obviously there is an indirect cost for making data FAIR in our project, but we consider it part of the activities of the SHiELD project. All partners in the SHiELD project are responsible for data management.

# Data security

SHiELD will ensure that the General Data Protection Regulation (GDPR), which will enter into force in May 2018, is complied with, especially as regards the protection of private data. In addition, the SHiELD project provides the following key results dealing with data security:

* [KR03] Security requirements identification tool
* [KR04] SHiELD open architecture and open secure interoperability API
* [KR06] Data protection mechanisms: a suite of security mechanisms that address data protection threats and regulatory compliance issues in end-to-end heterogeneous systems
* [KR07] Privacy tool: monitors data access attempts to ensure that only valid requests are accepted and only the data that is really needed is provided

# Ethical aspects

The basis of ethical research is the principle of informed consent, as stated in our proposal. All participants in SHiELD use cases will be informed of all aspects of the research that might reasonably be expected to influence their willingness to participate. Project researchers will clarify questions and obtain permission from participants before and after each practical exercise (e.g. interview, co-creation session, etc.) to maintain ongoing consent. Participants will be recruited by each organization leading the use cases (Osakidetza, FCSR, Lanc) and other supporting organizations (e.g. Ibermatica, AIMES) and will cover more than one type of citizen.
If participants wish to withdraw from participation in the use cases, they will be able to do so at any time, and their data, even pseudo-anonymized data, will be destroyed. WP1 contains a task entitled "Task 1.3 Ethical trials management", in which we ensure that ethical principles are applied throughout the use cases, which are clustered together for management purposes in the work package related to the use cases. Further explanations on ethical matters will be gathered in deliverable D1.6, Ethical protocols and approvals.

# Conclusions

This document describes the SHiELD data management plan according to the established H2020 template for a Data Management Plan (DMP) [1]. It is a living document throughout the whole project and will be updated on a regular basis. The Data Summary section indicates the purpose of the data collection and generation; this is a complex task, because several kinds of data are managed. Each data set will be made FAIR (findable, accessible, interoperable and reusable). The SHiELD project's key results are briefly presented, and the document explains how the financial resources for this openness are envisioned to be allocated at this stage. Sections 5 and 6 outline security and ethical aspects respectively, and finally Section 7 presents the conclusions and future work.
# INTRODUCTION AND OVERVIEW

## General overview of data collection activities in ECHOES

ECHOES is a mixed-methods project whose key methodological strength comes from combining qualitative and quantitative techniques with an interdisciplinary approach from the perspectives of the Social Sciences and Humanities, including sociology, psychology, political science, and economics. This allows us to study individual and collective energy choices and energy-related behaviour across the EU 28 plus Norway and Turkey, to identify and close knowledge gaps on factors that are drivers of or barriers to choices enabling sustainable energy transitions. In turn, this will allow us to formulate innovative policy advice with a toolbox for how to further the European Energy Union, providing insight into the interrelationships between the identified factors and their interrelations with technological, regulatory, and investment-related aspects to implement the SET-plan and meet the challenges specified in the SET-plan integrated roadmap. The work in the project is delivered in different work packages, which have their own methodological approaches.

## Purpose and scope of this document

This data management plan (DMP) first gives a comprehensive overview of all data types collected in ECHOES, including a detailed mapping of data collection activities to work packages and ECHOES partners (see Section 2). Afterwards, each partner's responsibilities with respect to data collection, data handling and analysis, as well as data storage are identified (Section 3). In the final section, the document defines ECHOES standards for all data collections (including research ethical standards), data storage and handling, data documentation, data anonymization, access to data for exploitation and future use, and data deletion. It needs to be noted that a data management plan is understood to be a dynamic document that will be adjusted and adapted during the course of the project.
It will be integrated into the ECHOES project handbook (deliverable D1.2), which will be implemented in ECHOES as a WIKI-based knowledge/procedures database that is constantly updated. In that respect, the DMP describes the ECHOES data management at the point in time it was delivered to the European Commission. The document was written with reference to the Guidelines to FAIR data management in Horizon 2020 (http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-datamgt_en.pdf).

## Future revisions of this document

A data management plan is a dynamic document which will be constantly updated. Since the DMP will become part of the dynamic project handbook (deliverable D1.2), it will be integrated into its revision cycles. However, a formal revision of the DMP will be provided on an annual basis, i.e. in January 2018 and January 2019.

# DATA COLLECTION

ECHOES is a mixed-methods project and its methodological strength comes from combining qualitative and quantitative techniques with an interdisciplinary approach. This allows us to study individual and collective energy choices and energy-related behaviour across the EU 28 plus Norway and Turkey to identify and close knowledge gaps on factors that are drivers of or barriers to choices enabling transitions to sustainable energy. ECHOES includes data collection and handling activities in all WPs (excluding WP1), which strongly depend on each other. This complexity demands strict coordination between the different tasks and WPs, as input from preceding tasks is required not only within the same WP but also in other WPs. Table 1 presents an overview of the various data collections that are part of ECHOES (see the first column) and indicates which WP(s) participate in each data collection or data handling. For example, the first data collection is "Literature search" and WPs 2-6 contribute to this data collection.
#### _Table 1:_ _Data collection methods used in different WPs_ <table> <tr> <th> **Method/WP** </th> <th> **WP1** </th> <th> **WP2** </th> <th> **WP3** </th> <th> **WP4** </th> <th> **WP5** </th> <th> **WP6** </th> <th> **WP7** </th> <th> **WP8** </th> </tr> <tr> <td> Literature search </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> ✓ </td> <td> ✓ </td> <td> ✓ </td> <td> Synthesizes data </td> <td> </td> </tr> <tr> <td> Document study </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> International survey </td> <td> </td> <td> ✓ </td> <td> Coordinates </td> <td> ✓ </td> <td> ✓ </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> Local surveys </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> Quantitative experiments </td> <td> </td> <td> </td> <td> ✓ </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Interviews </td> <td> </td> <td> ✓ </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> Case studies/site visits </td> <td> </td> <td> </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> Netnography </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> Workshop </td> <td> </td> <td> ✓ </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> ✓ </td> </tr> <tr> <td> Focus Groups </td> <td> </td> <td> </td> <td> </td> <td> ✓ </td> <td> ✓ </td> <td> </td> </tr> <tr> <td> Discussion events </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> ✓ </td> <td> </td> <td> </td> </tr> </table> ## Data types Table 2 presents key characteristics of each data collection in ECHOES. 
The first column indicates the type of data collection, the second column indicates where the data for each data collection type come from, the third column indicates how the data are collected, the fourth column indicates whether data from a given data collection type will be published in an open access mode at the end of the project, the next column lists the tasks and/or WPs that contribute to a given data collection type, and finally the last column names all partners involved in a given data collection. Note that the same data collection type may be used in several independent data collections in different WPs. ##### _Table 2:_ _Details of data collection_ <table> <tr> <th> **Type of data collection** </th> <th> **Source of data** </th> <th> **How are data collected** </th> <th> **Open access** </th> <th> **WP/task** </th> <th> **Partners (lead partners underlin ed)** </th> </tr> <tr> <td> Literature review data </td> <td> Published papers and reports (qualitative data and quantitative input to meta-analysis in WP4) </td> <td> Papers are located through databases such as Web of Science and Scopus using appropriate search words (e.g., “electric mobility”, “energy conservation”). Relevant information, such as what factors influence the purchase of electric vehicles, is retrieved from the identified papers. </td> <td> Y </td> <td> WP2, 3.1, 4.1, 5.1, 6.1, 7.1 1 </td> <td> _VTT,_ _NTNU,_ _ROMA3,_ _JR, IUE,_ _EI_ , ULEI, UACEG, TECN </td> </tr> <tr> <td> Document study data </td> <td> Documents published by relevant stakeholders, such as policy-makers and regulators, NGOs and professional organizations (qualitative data, quantitative data for WP2 database) </td> <td> Documents published by relevant stakeholders are located in the European Commission’s document database, and through search engines such as Google and Google Scholar using appropriate search words (e.g., “electric mobility”, “Energy Union”). 
In addition, relevant policy papers are located through databases such as Web of Science and Scopus. These policy papers can indicate additional potentially useful sources of information that will be subsequently explored as well. Relevant information, such as what policy measures were successful in increasing the uptake of electric vehicles, is retrieved from the identified documents. </td> <td> Y </td> <td> 2.1, 2.2, 3.2, WP5, WP6, 7.1 </td> <td> _VTT,_ _NTNU,_ _JR, IUE,_ _EI_ , TECN, EUAS, ENEL, ENEF, ROMA3 </td> </tr> <tr> <td> International survey data </td> <td> Responses of participants in the international survey (quantitative data) </td> <td> WP3 will coordinate the efforts related to the multi-national survey. Empirical WPs 4-6 will provide questionnaire items to be used in the survey. The survey will be subsequently implemented in EU-28 plus Norway and Turkey by a professional company that specializes in conducting surveys. The survey will cover at least 9000 households. </td> <td> Y </td> <td> WP2, 3.3, 4.3, 5.1, 5.2, WP6, 7.1 </td> <td> _VTT,_ _NTNU,_ _EI, JR,_ _IUE_ , ROMA3, TECN </td> </tr> </table> <table> <tr> <th> Local survey data </th> <th> Responses of participants in the local survey (quantitative data) </th> <th> WP3 will coordinate the efforts related to the national surveys. The national surveys will be implemented either as separate small-scale surveys, or as add- ons to the general survey with questions relevant only for the local analyses. Empirical WP 6 will provide questionnaire items to be used in the surveys. </th> <th> Y </th> <th> WP2, WP3, WP4, 6.1, 6.2, 7.1 </th> <th> _VTT,_ _NTNU,_ _ROMA3,_ _IUE, EI_ , UACEG, ULEI, TECN </th> </tr> <tr> <td> Quantitative experiments </td> <td> Responses of experimental participants (quantitative data) </td> <td> WP4 develops the design of psychological experiments, which are subsequently carried out in six countries (Bulgaria, Finland, Germany, Italy, Norway, Spain, and Turkey). 
The experiments will partly be conducted online as local additions to the international survey, partly offline in the laboratories of the participating partners. </td> <td> Y </td> <td> 3.3, 4.2, 7.1 </td> <td> _NTNU,_ _ULEI, EI_ , ROMA3, VTT, IUE, JR </td> </tr> <tr> <td> Interview data </td> <td> Responses of interviewees (qualitative data) </td> <td> Interviews with relevant stakeholders will be conducted as part of WPs 2 and 6. Interviewees will be selected according to availability in Turkey, Norway, Italy, Finland, Germany, Austria, Bulgaria and/or Spain. </td> <td> Y </td> <td> 2.1, 2.2, WP3, WP5, 6.2, 7.1 </td> <td> _VTT,_ _NTNU,_ _JR, IUE,_ _EI_ , ULEI, UACEG TECN, ROMA3 </td> </tr> <tr> <td> Case study data / Site visit data </td> <td> Observational data from site visits, available structural framework data (qualitative and quantitative data) </td> <td> Case studies will be conducted as part of WPs 5 and 6. Cases will be analysed based on their structural framework and available data about their success criteria and site visits. Cases will be selected according to availability in Turkey, Norway, Italy, Finland, Germany, Austria, Bulgaria and/or Spain. </td> <td> Y </td> <td> WP3, 5.5, 5.6, 6.2, 6.4 , 7.1 </td> <td> _NTNU,_ _ROMA3,_ _TECN,_ _IUE, EI_ , JR, ULEI, UACEG, VTT, STMK </td> </tr> <tr> <td> Netnography </td> <td> Publicly accessible information on people’s behaviour on the internet (qualitative and quantitative data) </td> <td> Ethnographic research of people’s publicly accessible behaviour on the internet will be performed as part of WP6. </td> <td> Y </td> <td> WP3, WP6, 7.1 </td> <td> _NTNU,_ _IUE, EI_ , VTT, JR, ROMA3 </td> </tr> <tr> <td> Workshop data </td> <td> Responses of workshop participants (qualitative data) </td> <td> Data on views of relevant experts and stakeholders are collected during an expert workshop, during two policy-makers workshops and during a workshop for EU member states organized by TECN. 
</td> <td> Y </td> <td> 2.4, WP3, WP5, 6.4, 7.1, 7.3, 7.4, 8.3 </td> <td> _VTT,_ _NTNU,_ _JR, IUE,_ _EI,_ _TECN_ , UACEG, ROMA3, ULEI, EUAS, ENEF, </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> GAP, STMK, ENEL </td> </tr> <tr> <td> Focus group data </td> <td> Responses of participants in focus group discussions (qualitative data) </td> <td> WPs 5 and 6 prepare the protocol for focus group discussions, which are subsequently carried out in six countries (Austria Bulgaria, Finland, Italy, Norway, Spain, and Turkey). </td> <td> Y </td> <td> 3.3, 5.2, 6.1, 6.2, 7.1 </td> <td> _NTNU,_ _JR, IUE,_ _EI_ , UACEG, ULEI, VTT, TECN, ROMA3, EUAS, GAP, STMK </td> </tr> <tr> <td> Discussion event data </td> <td> Responses of national stakeholders (qualitative data) </td> <td> JR organizes a discussion event with national stakeholders where their views and opinions will be discussed and recorded. </td> <td> Y </td> <td> 5.4, 7.1 </td> <td> _JR, EI_ , NTNU, TECN, UACEG, IUE, ROMA3, GAP, VTT </td> </tr> </table> # PARTNER RESPONSIBILITIES This section describes the data collections and data processing each WP is responsible for. The international survey as the largest data collection in ECHOES with the highest need for coordination is described in an extra chapter 3.8. For a complete overview of data collection responsibilities see _**Appendix I** _ (constantly updated during the project). ## WP2 (VTT) VTT (WP2 lead beneficiary) is responsible for the formulation of SSH database and SSH indicators. The SSH data will be collected on the national, regional and the EU level. The formulation of the database will start from analysing the existing national and EU level databases to evaluate the existing gaps in the quantitative data and EU policies, regulations and directives. 
VTT will first gather existing information on both quantitative and qualitative SSH data, relevant databases, missing data, ideas and suggestions on important literature via a web-based survey addressed to all consortium partners (tasks 2.1 and 2.2). VTT will also arrange an expert workshop, planned for July 2017, to get feedback and ideas from policy-makers and other stakeholders on future data requirements (task 2.4). VTT directly supports the formulation of which variables need to be measured and the collection of relevant new data, both quantitative and qualitative, based on the results from WP3-WP7. Those results will come from the international survey (inputs from WP2 itself, WP3, WP4, WP5 and WP6), local surveys (inputs from WP2 itself, WP3, WP4 and WP6), quantitative experiments (inputs from WP3 and WP4), interviews (inputs from WP2 itself, WP3, WP5, WP6 and WP7), case studies (inputs from WP3, WP5 and WP6) and focus groups (inputs from WP3, WP5 and WP6). The empirical work of WP4, WP5 and WP6 will be coordinated by WP3 and then transferred into the database created by WP2. VTT will then formulate indicators that support modelling, analysing and comparing the transition to sustainable energy systems (task 2.3). The final outcome will be a comprehensive database combining SSH-relevant quantitative and qualitative data from existing databases with newly collected quantitative and qualitative data. The database will be open source and offered under the regulations of the pilot on open research data in H2020, which will ensure its usefulness to a wide audience.

## WP3 (NTNU)

NTNU (WP3 lead beneficiary) is responsible for preparing the conceptual ground, defining, shaping and harmonizing the empirical work of WP4, WP5 and WP6, and coordinating the data collection across WP4, WP5 and WP6 in order to transfer the data into the database created in WP2.
WP3 will closely coordinate with WP7 to ensure a seamless transition between the pre-empirical, empirical and post-empirical stages covered in WP7. NTNU will start by synthesizing the state of the art from the three ECHOES perspectives (micro, meso and macro levels) on energy choices and energy transitions through the identification of existing knowledge and of knowledge gaps to be closed to foster the Energy Union (task 3.1). This prepares and supports the literature review in WP4, WP5 and WP6 and structures the final synthesis conducted in WP7. WP3 will then develop a strategy document to foster the implementation of existing knowledge and to allow targeted work on filling identified knowledge gaps in WP3, WP4, WP5, WP6 and WP7 (task 3.2). On the empirical side, WP3 ensures that all research questions are addressed, makes sure that existing knowledge is used, and together with WP4, WP5 and WP6 determines which questions are most relevant for the data collection. WP3 will coordinate the ECHOES data collection and produce a complete documentation of data collection activities (task 3.3). The data collection includes survey-based experiments, psychological experiments, focus groups among households and among social units and experts, case studies and netnography, and secondary analysis of official documents, reports and research literature. WP3 is also responsible for coordinating the conduct of the multinational survey (EU 28 + Norway and Turkey), accommodating all items/questions that are best dealt with in the form of questionnaires (for details see Subsection 3.8).

## WP4 (ROMA TRE)

ROMA3 (WP4 lead beneficiary) is responsible for gathering knowledge and data on individual energy choices in a group context. WP4 builds on WP3 (literature study, identification of research gaps and theoretical innovation) and will complement results from quantitative and qualitative case studies.
Firstly, a meta-analysis will be conducted with the aim of shedding light on identity processes and energy-related behaviours. The meta-analysis will provide conceptual and statistical guidelines for how to design subsequent surveys and experiments across the ECHOES consortium (task 4.1). Empirically, survey and experimental quantitative methods will be combined, and the obtained data will be subjected to statistical analysis (task 4.2). WP4 also contributes a set of items to the large-scale multinational survey (task 4.3). All data will be transferred to WP3.

## WP5 (JR)

JR (WP5 lead beneficiary) is responsible for gathering knowledge and data on collective behaviour driven by "energy cultures". WP5 will first develop a research typology on energy lifestyles, integrating the literature study carried out in WP3 (task 5.1). Quantitative and qualitative assessments of European lifestyles will be applied. The multinational survey will provide extensive quantitative information and will give an assessment of typical collective energy lifestyles. Qualitative research will be carried out through focus groups that will allow deep insight into the self-definition and key dimensions of selected energy lifestyle types (task 5.2). WP5 will secondly focus on the development of an "energy memory" research guide with the literature inputs from WP3 (task 5.3). Multinational studies such as discussion events and national reports on "energy memory" will be produced, complemented by the outputs of discussion events to be organized with national stakeholders (task 5.4). WP5 will then closely cooperate with WP6 in assessing the factors enabling collective energy consumer action through multinational studies and small national assessments (task 5.5). A qualitative assessment via case studies in selected partner countries will identify collective energy practices (task 5.6). All the data gathered through quantitative and qualitative assessments will then be transferred to WP3.
## WP6 (IUE)

IUE (WP6 lead beneficiary) is responsible for gathering knowledge and data on formal social units as drivers behind collective decision-making. WP6 will first identify the variables and dimensions that affect decision-making processes for target stakeholders through a literature review based on previous research. Based on the knowledge gathered through the literature review, a design for the empirical quantitative and qualitative research will be developed (task 6.1). Quantitative techniques will be implemented to quantify attitudes, opinions and other selected variables relevant to energy choices and energy-related behaviour within formal social units. Following that, a combination of several different qualitative formats such as focus groups, in-depth interviews, netnography, surveys (including the multinational survey) and case studies will be implemented in selected countries (see table 2). These methods will be applied to three types of social units: formal social units, collective decision-making units and individual consumers engaging in joint contracts (task 6.2). The evaluation of the results obtained from the empirical assessment and statistical analysis will then reveal best practices and successful implementations and will expose the lower-level dynamics affecting energy choices. A dissemination workshop will be organized among the EU member states and associated countries in order to spread the evidence from good practices and successful implementations (tasks 6.3 and 6.4). The final result is a synthesis study that delivers suggestions and recommendations derived from the analysis, and it will be used as a basis for the development of a decision tree algorithm as an instrument to establish integrated governance and energy conservation (task 6.5). All data will be transferred to WP3.
## WP7 (EI)

In the ECHOES project, EI (WP7 lead beneficiary) is responsible for providing a knowledge synthesis for the Energy Union and is thereby involved in the analysis and synthesis of all data produced in ECHOES. WP7 brings together the knowledge gathered in WP3 with the knowledge produced in WP4, WP5 and WP6 with the aim of identifying the driving factors of energy-related choices and behaviour that have importance and practical relevance (task 7.1). The identification is followed by a ranking of the driving factors according to their potential impact on energy-related choices and behaviours and according to their potential for being implemented as part of energy policy. Through an in-depth analysis of all relevant policy documents gathered in previous WPs, WP7 will gain a holistic, consolidated knowledge base (task 7.2). WP7 will then consult stakeholders from all relevant areas (market, politics, and environmental agencies) to scrutinize the scientific results obtained in ECHOES. For this purpose, two policy-maker workshops are foreseen (task 7.3). The creation of policy-ready recommendations represents the final outcome of WP7 and consists of suggesting how best to exploit the new knowledge in light of policy-making. This is the result of a continuous interaction with all stakeholder groups, including policy makers (task 7.4).

## WP8 (TECN)

In the ECHOES project, TECN (WP8 lead beneficiary) is responsible for the communication, dissemination and impact of the project’s results. WP8 has developed the Data Management Plan for ECHOES – this document. All information about the data collection standards, data coding, referencing and processing, and about the exploitation of the data during the project and beyond, is included in the Data Management Plan. As its next task, WP8 will bundle the data packages for dissemination and exploitation.
This will be done in close cooperation with the respective WP leaders, and the result will be the basis for the project's final exploitation strategy and corresponding business plan (task 8.2). WP8 will design a strategy for the communication and dissemination of results, taking into consideration different targets, dissemination channels and activities, and means of verification, in close cooperation with WP2. A joint workshop will be organized by TECN during the SET Plan Conference/EERA Joint Programme in order to communicate and disseminate the results (task 8.3). The strategy for the exploitation of the results represents the main outcome of WP8 and will support the consortium’s joint efforts to maximize the project’s impact as well as the business plans of individual partners (task 8.4).

## The international survey

The largest collection of primary data will come from a multinational survey that will collect views, opinions and answers in simulated choice experiments in the EU-28 plus Norway and Turkey. The survey will cover at least 9000 households spanning 30 countries and 27 languages. WP3 will accommodate all items/questions that are best dealt with in the form of questionnaires. WP3 then coordinates the conduct of the survey, regularly controls the progress of the data collection, and finally distributes the relevant data to the respective researchers/WPs. WP4 will contribute a question block to the survey with the objective of extending, validating and generalizing the findings on the factors driving energy-related decision-making at the individual level. WP5 will, through the multinational survey, collect data that will make it possible to identify the most important energy lifestyles and to see whether there are comparable patterns of energy lifestyles across Europe. WP6 will use the multinational survey to quantify attitudes, opinions and other selected variables relevant to energy choices and energy-related behaviour within formal social units.
The results of the ECHOES multinational survey on factors driving individual and collective energy choices will be coordinated by WP3 and then transferred to WP7. WP7 will exploit the data from the multinational survey for deriving policy-ready recommendations. The data gathered from the ECHOES multinational effort will also be available for future use as an open source in the SSH database created in WP2. The ECHOES multinational survey data collection will be carried out by an experienced data collection company that will have the responsibility for recruiting participants, translating the instructions/survey questions, hosting the online survey on secure servers, and collecting and cleaning the data.

# DATA MANAGEMENT

This chapter describes in detail the procedures applied in ECHOES for the different steps of data collection, management, storage, and publication.

## Formal ethics approval

The EU Commission Ethics summary report (Ref. Ares (2016)2334779 - 19/05/2016) identifies three ethics issues to be managed in ECHOES: 1) involvement of human participants, 2) data collection and processing and 3) involvement of non-EU countries.

### Involvement of human participants

There are three sub-points related to the involvement of human participants to be managed:

1. Details on the procedures and criteria that will be used to identify/recruit research participants must be provided.
2. Detailed information must be provided on the informed consent procedures that will be implemented for the participation of humans.
3. Templates of the informed consent forms and information sheet must be submitted on request.

###### Details on the procedures and criteria that will be used to identify/recruit research participants

Participants in the 30-country quantitative survey will be recruited from already registered members of a web-panel that the data collection institution operates. The participants will be sampled to be representative for each country.
Participants are 18 years or older and must be able to give informed consent. They will be informed about the aim of the study, the collected data, the data handling, storage and anonymization procedures, as well as the publication of the anonymized data and its inclusion in the open data pilot. By following the link to participate they explicitly give their consent to participate (for templates of the consent/information sheet see point 3). Participants will earn points in the point system of their panel operator as a reward for their participation. Participants in the other empirical work (experiments, focus groups) will be recruited locally from the general population 18 years and older, or from the population of experts/stakeholders relevant for the topic of the data collection. Only participants able to give informed consent will be recruited, and information sheets and consent forms according to the templates under point 3 will be used. Participants will be recruited through mailing lists, newspaper advertisements, snowball sampling, posters, or the like. To increase motivation, participants will take part in a lottery for a small gift card (e.g. 50 Euro) or receive other small incentives such as chocolate bars. Expenses incurred for participating will be reimbursed. In the netnography, only publicly available internet sources will be used. Wherever individuals are identifiable in the raw data, their consent to use the data for the analysis will be collected before the data is used. Usage of internet data will be conducted after clearance by the relevant data protection authorities. For observational data, the same standards will be applied.

###### Informed consent procedures

Before participation in the online survey, panel members will be invited to participate in the survey by e-mail. In the e-mail, the information from the informed consent form is presented and a link to the survey is included.
The participants are instructed that by clicking the link they consent to participate in the study as described in the information included in the mail. For the other empirical studies, information is presented in written form when participants are recruited. It is repeated immediately before the data collection starts, and the consent form is signed by the participants before they enter the experiments or focus groups. Participants are also informed that they can retract their consent until the data is anonymized, without any disadvantages and without having to give a reason.

###### Informed consent forms and information sheet

The information sheets and consent forms will be based on the standard form provided by the Norwegian Centre for Research Data (NSD) and will be in line with national regulations. Future updates of the data management plan will include documentation of all consent forms/information sheets used in ECHOES (see _**Appendix II**_).

### Data collection and processing

There are three sub-points related to data collection and processing:

1. Copies of opinion or confirmation by the competent Institutional Data Protection Officer and/or authorization or notification by the National Data Protection Authority must be submitted (whichever applies according to the Data Protection Directive (EC Directive 95/46, currently under revision) and the national law).
2. If the position of a Data Protection Officer is established, their opinion/confirmation that all data collection and processing will be carried out according to EU and national legislation should be submitted.
3. Detailed information must be provided on the procedures that will be implemented for data collection, storage, protection, retention and destruction, and confirmation that they comply with national and EU legislation.
###### Copies of opinion (sub-points 1 and 2)

All partners that collect and process data have confirmed they will do so according to Data Protection Directive 95/46 and the national law. All confirmations by the institutional or national data protection officers regarding the conduct of data collections in ECHOES in accordance with EU and national legislation will be collected in _**Appendix III**_ of the data management plan as soon as they are received. They will be provided at the earliest possible time point prior to starting the data collections.

**Information on the procedures for data collection, storage, protection, retention and destruction.** This document (D8.1), the Project Plan (D1.1) and the Project Handbook (D1.2) provide this information. All procedures are in accordance with the principles of the “Guidelines on Data Management in Horizon 2020”.

###### Further data collection and processing standards

The procedures in this section, on ECHOES data collection, storage, protection, retention and destruction, are based on the “Guidelines on Data Management in Horizon 2020”. The data collection, storage, protection, retention and destruction will be conducted according to national and EU legislation, as all partners dealing with data have declared. ECHOES includes different forms of data collection (a large quantitative survey based on a panel of a professional survey provider, local surveys, psychological experiments, workshops, netnography, and interviews). The main principles for all types of data collection are: to anonymize the datasets at the earliest possible point in the process by separating the identifying information from the rest of the data; to store all raw data on encrypted and password-secured servers cleared for the storage of medical research data; and to delete directly or indirectly identifying personal data as soon as the purpose of collecting and quality-controlling the data is served.
The conducting party will ensure that data collection and anonymization comply with EU legislation and the ECHOES project handbook. NTNU has the responsibility to curate all raw data. In more detail, the main quantitative survey data will already be anonymized when it is received by ECHOES from the survey company. ECHOES will only collaborate with survey companies that can document clearance by all national and EU legislative bodies relevant for the study. The contract with the survey company will specify the consent procedures (only panel members who have given their consent will be recruited; participants will be explicitly informed that clicking the link to the online survey equals consent to participate; this consent can be retracted at any point until the data is anonymized). The data set will be anonymized by the survey company, which will remove all directly or indirectly identifying information before sending it to the ECHOES research team. After a quality control by ECHOES, the survey company will immediately delete the coding tables coupling identifying information to the survey data. All informants and subjects in interviews and experiments will give written informed consent to their participation, either by receiving and signing a _consent and information_ letter or email, ensuring they understand what the consent concerns and what the consequences of participation will be. Identifying information will be stored separately from the study data. After quality control of the data, the identifying information will be deleted. Video and audio material from interview sessions will be deleted after transcription and quality control. All raw data will be stored and protected on ECHOES encrypted servers for secure data storage that meet the Act relating to Personal Data Filing Systems (Personregisterloven). Personal data directly linking person and data will always be kept separate from the datasets.
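The anonymization workflow described above (assign a pseudonym, move identifying information into a separate coding table, delete the table after quality control) can be illustrated with a minimal Python sketch. All record fields and function names are hypothetical and not part of any ECHOES tooling:

```python
import secrets

def pseudonymize(records, identifying_fields):
    """Split records into an anonymized dataset and a separate coding table.

    Each record gets a random pseudonym; identifying fields are moved into
    the coding table, which is stored separately and deleted after quality
    control (here: simply discarded by the caller).
    """
    anonymized = []
    coding_table = {}  # pseudonym -> identifying information
    for record in records:
        pseudonym = secrets.token_hex(8)
        coding_table[pseudonym] = {k: record[k] for k in identifying_fields}
        data = {k: v for k, v in record.items() if k not in identifying_fields}
        data["id"] = pseudonym
        anonymized.append(data)
    return anonymized, coding_table

# Hypothetical survey records
records = [
    {"name": "A. Example", "email": "a@example.org", "q1": 4, "q2": "agree"},
    {"name": "B. Example", "email": "b@example.org", "q1": 2, "q2": "disagree"},
]
anon, coding = pseudonymize(records, identifying_fields=["name", "email"])
# After quality control, delete the coding table to complete anonymization:
del coding
```

The key design point mirrored here is that the link between person and data exists only in the coding table, which is kept apart from the dataset and destroyed once quality control is served.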
Identifying personal data will be retained for a maximum of 1 year after completion of the collection to allow for thorough quality control. All such data will thus be deleted by March 2018 at the latest.

<table>
<tr> <th> **Procedures for data protection** </th> </tr>
<tr> <td>
* Data will be anonymized at the earliest possible point in time
* Non-anonymized data will be stored under the strictest precautions and under no circumstances made public
* Data produced in one WP will always be anonymized before it is shared with the other WPs
* Consent is sought before storing collected material in electronic form
* Interviews generate personal data; this data will be anonymized during transcription
* Any text produced based on research results will respect this principle of privacy and anonymity
* All secondary data sources shall contain already anonymized datasets, which do not allow identifying individuals directly or indirectly
* Activities carried out outside the EU will be executed in compliance with the legal obligations in the country where they are carried out
* The activities must also be allowed in at least one EU Member State
* All data transferred between project partners (within or outside the EU) will be restricted to anonymized data, and transfers will only be made in encrypted form via secured channels
</td> </tr>
</table>

### Involvement of non-EU countries

The ECHOES non-EU partners (NTNU, IUE and EUAS) have confirmed that the ethical standards and guidelines of Horizon 2020 will be rigorously applied, regardless of the country in which the research is carried out.

### Local data collections

As shown in Table 2, ECHOES will produce 10 categories of data in addition to the international survey (described separately in Section 4.1.5), involving all scientific partners and WPs 2-8.
The involved WPs are responsible for raw data production and monitoring, and for data preparation (including translation to English and transcription if necessary). Each partner producing raw data will upload the raw data (including audio/video files of interviews or focus groups) to the protected servers of the _Services for sensitive data_ (TSD), which is NTNU's solution for sensitive data including medical information (see also Section 4.3.1). Each partner is responsible for the anonymization of the data collected. WP3 is responsible for the overall curation of the collected raw data and the distribution of the anonymized data to the ECHOES partners. WP2 is responsible for the transfer and long-term storage of anonymized data, while all relevant WPs are responsible for utilizing the relevant data. See the continuously updated **_Appendix I_** for details. All informants and subjects in interviews and experiments will give written informed consent to their participation, either by receiving and signing a _consent and information_ letter or email, ensuring they understand what the consent concerns and what the consequences of participation will be. Identifying information will be stored separately from the study data. After quality control of the data, the identifying information will be deleted. Video and audio material from interview sessions will be deleted after transcription and quality control.

### International survey

The ECHOES survey effort is coordinated and administered in WP3, while the scientific design of the questionnaires and the ancillary choice experiments is defined in WPs 4-6. A professional service company is needed to a) translate the questionnaire into all 26 official languages (EU + Norway and Turkey), b) program the questionnaire in an adequate online-survey tool, c) recruit survey participants (according to the selection criteria defined by the consortium) and d) send out the links to this questionnaire to participants.
Emphasis is put on hiring a company that specializes in the above-mentioned tasks and has a proven track record of multinational efforts, to ensure the highest-quality data material. We will collect several quotes for the ECHOES survey service and choose the one offering the best value for money. Companies in question must document their routines and procedures for data management, sensitive information and personal data, and thus prove that all data collection services and processing will be carried out according to EU and national legislation. The chosen company will transfer anonymized data to the ECHOES project server through a secure file-transfer system. The conduct of the ECHOES survey follows a stepwise process:

<table>
<tr> <th> **Procedure for the international survey (all conducted by ECHOES partners)** </th> </tr>
<tr> <td>
1. Collection of (preliminary) results of the psychological laboratory experiments and the focus group discussions, each of which is implemented in 6 countries.
2. Derivation of key research questions that a) could not be conclusively answered by the experiments or group discussions, or b) for which high heterogeneity between the countries was identified, requiring the collection of additional data from all 30 countries.
3. The identified questions are then implemented in the ECHOES questionnaire, a) as questions that are equal for all respondents, or b) as design variables, i.e. ancillary information that varies between respondents to identify the impact of these variables on her/his answers in the following questions or choice experiments. An example of such a design variable is the (varying!) description of a certain energy policy instrument, along with the question whether the respondent would vote in favour of or against that policy in an opinion poll.
4. Econometric/statistical analyses of the data are then carried out to identify the effectiveness of the design variables (e.g. policy variables) on behaviour.
</td> </tr>
</table>

## Data collection procedures

In order to achieve quality assurance, quality control and consistency throughout the project, specific data collection procedures will be added to the DMP as they are developed by the involved partners ahead of the different data collections. All procedures will be developed to meet general scientific quality criteria for data collections, as indicated in the following table:

<table>
<tr> <th> **Quality standards** </th> </tr>
<tr> <td>
**Accuracy** Is the data collected correct and complete? Are the data entry procedures reliable?
**Efficiency** Are the resources used to collect data the most economical available to achieve those objectives?
**Effectiveness** Have the objectives been achieved? Have the specific results planned been achieved?
**Feasibility and timeliness** Can data be collected and analysed cost-effectively? Can it provide current information in a timely manner?
**Relevance** What is the relevance of the data/information/evidence to primary stakeholders? Is the data collection compatible with other efforts? Does it complement, duplicate or compete?
**Security** Is confidentiality ensured?
**Utility** Does the data provide the right information to answer the questions posed?
</td> </tr>
</table>

### Literature review

The ECHOES literature review is the first data collection to be conducted. The literature review covers all three perspectives and technology foci of ECHOES; WPs 3, 4, 5, 6 and 7 are involved, starting at different times and with adjoining scopes. To ensure quality and consistency and to avoid unnecessary overlap between partners and WPs, WP3 has a specific focus on coordination. All partners will use similar templates (see _**Appendix V**_), a local literature review database on the ECHOES SharePoint will store all references reviewed, and finally the following procedure shall be followed:

<table>
<tr> <th> **Procedure for the literature review** </th> </tr>
<tr> <td>
1.
Keep a log that tracks all steps taken while performing the literature review
2. Choose a topic, define relevant research questions
3. Define the scope of the review
4. Select the databases to be used for searches
5. Conduct searches; keep track of all search words and combinations of search words
6. Review the literature and fill out the relevant templates according to your findings
7. Register the references reviewed in the literature review database
8. Store the data in their respective folders on SharePoint
</td> </tr>
</table>

### Document study

The document study procedure will be developed ahead of the data collection and documented in an updated DMP.

### Local survey(s)

The local survey procedures will be developed ahead of the data collection and documented in an updated DMP.

### Quantitative experiments

Procedures for the quantitative experiments will be developed ahead of the data collection and documented in an updated DMP.

### Interview data

The interview data procedure will be developed ahead of the data collection and documented in an updated DMP.

### Case study/site visits

The case study procedure will be developed ahead of the data collection and documented in an updated DMP.

### Netnography

The netnography procedure will be developed ahead of the data collection and documented in an updated DMP.

### Workshop data

The workshop data collection procedure will be developed ahead of the data collection and documented in an updated DMP.

### Focus group data

The focus group data collection procedure will be developed ahead of the data collection and documented in an updated DMP.

### Discussion event data

The discussion event data collection procedure will be developed ahead of the data collection and documented in an updated DMP.

## Data documentation

All collected data shall include a metafile when stored on the ECHOES secure storage solution and/or the ECHOES SharePoint server. The file will later be made available for external users of the data.
This metafile shall describe the kind of data included, the involved personnel, the date and duration of the data collection, variable names/labels, recruiting procedures, response rates, whether or not the data is anonymized, the related WPs and tasks, and finally a summary. _**Appendix IV**_ provides two templates, for qualitative and quantitative data sets, that will be adapted during the course of ECHOES.

## Data storage and curation

All non-anonymized data will be stored and protected on the ECHOES encrypted server space for secure data storage, described in 4.4.1. Anonymized data will be stored in the ECHOES SharePoint solution described in 4.4.2 in encrypted form (see 4.4.3). WP3 (NTNU) is responsible for the curation of all data collected in ECHOES and its safe storage. The storage solutions for non-anonymized raw data and anonymized data include daily backup routines to prevent data loss. All data files will be assigned a persistent and unique Digital Object Identifier (DOI) by the end of the project through services such as Figshare, Zenodo or Dryad.

### Non-anonymized raw-data

The non-anonymized raw data will be stored on a secure server, hosted and operated by the University of Oslo (UiO) and their _Services for sensitive data_ (TSD) ( _https://www.uio.no/english/services/it/research/storage/sensitivedata/_ ), which meets the Act relating to Personal Data Filing Systems (Personregisterloven). This secure storage solution is NTNU's standard for sensitive data. ECHOES will establish a server space as soon as the first data is produced. The server used will be either a Windows Server 2012 or a Red Hat Enterprise Linux 6.0 server. Backup is performed by the nominal UiO backup system, using an encryption key that only exists on that terminal server and in two safes in two separate locations. Data import/export is provided by a "file-sluice" that carries data between a storage area inside the solution and the outside of the solution.
All users can import data to the server and work with the data on the server using tools such as R, PSPP, SPSS, STATA, MATLAB, SAS etc. Data can only be exported by the coordinator and administrators. Access to the server is provided first through an encrypted SSH tunnel to one of the login machines; then either RDP (Windows) or Spice (Linux) is used to access the ECHOES project. TSD uses a two-factor login: users will be given a one-time password by smartphone/Yubikey. When establishing a user account in the system, the user's identity has to be checked, for example by logging in through the Norwegian BankID or MinID system (for the administering users located in Norway) or similar systems in other countries. User guides are available at the TSD homepage. The folder structure on the server will be based on the ECHOES WP structure, with separate folders for each kind of data for all relevant WPs. Datasets that belong to multiple WPs will be stored in sub-folders of WP3.

### Anonymized data

All data collection and processing done during ECHOES will be carried out according to national legislation and the EU Directive 95/46/EC / General Data Protection Regulation (Regulation (EU) 2016/679). The consortium and the partners are responsible for following the ethical procedures in their respective countries (see Section 4.1). Anonymized data will be stored on the ECHOES SharePoint solution in encrypted and password-protected form (see Section 4.4.3). ECHOES partners have access to this solution through personal logins provided by NTNU. The overall folder structure is based on the ECHOES WP structure; each WP folder includes a data sub-folder, and these will include folders for the specific kinds of data produced.
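As an illustration, the WP-based folder layout described above (one folder per WP, a data sub-folder, and one folder per kind of data) could be generated as follows. The WP numbers and data categories in the example are assumptions for the sketch, not the definitive ECHOES layout:

```python
from pathlib import Path

def create_wp_folders(root, wps, data_kinds):
    """Create a WP-based folder layout: one folder per WP, each with a
    'data' sub-folder holding one folder per kind of data produced."""
    for wp in wps:
        for kind in data_kinds:
            (Path(root) / f"WP{wp}" / "data" / kind).mkdir(parents=True, exist_ok=True)

# Hypothetical example: WPs 3-7 with three illustrative data categories
create_wp_folders("ECHOES", wps=range(3, 8),
                  data_kinds=["survey", "focus_groups", "interviews"])
```

Datasets belonging to multiple WPs would, per the convention above, be placed in sub-folders under WP3.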
### Encryption standards and procedures

All data files will be transferred via secure connections and in state-of-the-art encrypted and password-protected form (for example with the open-source 7-Zip tool providing full AES-256 encryption: _http://www.7-zip.org/_ ). Passwords will not be exchanged via e-mail but in personal communication between the partners. The encryption solutions will be chosen in accordance with the ECHOES partners' IT support departments.

### File name standards

To ensure that data files, as well as any other file in ECHOES, have a clear name identifying their content, the following file name standards are used. All documents shall be numbered by their type of document and by the subsequent numbering assigned within each WP (first deliverable of WP1: **D1.1**, first deliverable of WP2: **D2.1**).

**XXX**: Identifies which main category the document belongs to. In order to always easily identify the files, the project name **ECHOES-** shall be included as a prefix to all document categories.
**YYY**: Will always be a number assigned subsequently for each new document in the XXX category and WP.
**ZZZ**: Issue number

<table>
<tr> <th> **XXX** </th> <th> **XXX explanation** </th> <th> **YYY** </th> <th> **ZZZ** </th> </tr>
<tr> <td> D </td> <td> Deliverable </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc. </td> </tr>
<tr> <td> MAN </td> <td> Management </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc </td> </tr>
<tr> <td> DAT </td> <td> Data files </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc </td> </tr>
<tr> <td> DOC </td> <td> Data documentation file </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc </td> </tr>
<tr> <td> NOT </td> <td> Notes </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc </td> </tr>
<tr> <td> MOM </td> <td> Minutes of meeting </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc </td> </tr>
<tr> <td> PRE </td> <td> Presentations </td> <td> 1.1, 1.2, 2.1, 2.2 etc.
</td> <td> 1,2,3,etc </td> </tr>
<tr> <td> PAP </td> <td> Journal paper manuscript </td> <td> 1.1, 1.2, 2.1, 2.2 etc. </td> <td> 1,2,3,etc </td> </tr>
</table>

The file name shall always consist of: document number, document title and issue (in this order). Underscores shall be used between the document number, issue number and document title. There shall be no spaces in the document title. Logical short versions of words can be used in the document title part of the filename in order to shorten the filename. If the document is a draft version, this is indicated by "DR" after the issue number and an underscore. Example (this document, first issue): **ECHOES-D8.1_DMP_1**

## ECHOES Data base

An open-access database will be created to gather both quantitative and qualitative SSH data. The data will be compiled from public and non-commercial databases. In addition, the database will compile and store data based on the results from WP3-WP7 of this project and on WP2's own data and information collection via web-based survey(s), workshops, and interviews. The researchers have a duty of transparency to fully inform participants how the data will be used and for what purpose. Thus, the ethically compliant data collection will be guided by proportionality (proportionate to the WP2 research aims) and follow the legal safeguards to minimize any risks related to unauthorized release of personal and private information. The empirical work of WP4, WP5 and WP6 will be coordinated by WP3 and then transferred into the database created by WP2. The data handling procedures defined in this document apply to the operation of the ECHOES database.

## Deletion of data

Identifying personal data will be retained for a maximum of 1 year after completion of the collection to allow for thorough quality control. All such data will thus be deleted by March 2018 at the latest. Anonymized data will not be deleted but stored and made available for future use through the open data pilot.
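The file name standard defined above is regular enough to be checked mechanically. A hypothetical validator sketch in Python follows; the set of allowed title characters is an assumption, since the standard only forbids spaces in the title:

```python
import re

# Pattern for the file-name standard: ECHOES-<XXX><WP>.<number>_<Title>_<issue>[_DR]
FILENAME_PATTERN = re.compile(
    r"^ECHOES-(D|MAN|DAT|DOC|NOT|MOM|PRE|PAP)"  # XXX: document category
    r"\d+\.\d+"                                  # YYY: WP-based numbering, e.g. 8.1
    r"_[^\s_]+"                                  # document title, no spaces (assumed: no underscores either)
    r"_\d+"                                      # ZZZ: issue number
    r"(_DR)?$"                                   # optional draft marker
)

def is_valid_filename(name):
    """Return True if the name follows the ECHOES file-name standard."""
    return FILENAME_PATTERN.match(name) is not None

assert is_valid_filename("ECHOES-D8.1_DMP_1")            # the example from the text
assert is_valid_filename("ECHOES-DAT3.2_SurveyData_2_DR")
assert not is_valid_filename("ECHOES-D8.1_Data Plan_1")  # spaces not allowed
```

Such a check could be run before files are uploaded to the SharePoint or TSD storage, catching naming mistakes early.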
## Dissemination and exploitation of the data

Although the dissemination and exploitation strategy of the project is still under development and will be published through deliverables D8.2 and D8.3, different types of data or data packages are likely to be used for dissemination and exploitation purposes:

* _Database (see Section 4.5)_ : the database is expected to be one of the main exploitable results of the ECHOES project. The database will be open source and offered under the regulations of the pilot on open research data in H2020. The compatibility of this open access character with ECHOES’ exploitation strategy will be analysed during the project (WP8); restrictions to open access use will be kept to a minimum and only implemented if strong exploitation benefits stand against it. In any case, the procedures for data management set in this document, particularly regarding WP2, will be followed and respected.
* _International survey_ : in addition to the use of data from the international survey (WP3) for deriving policy recommendations in WP7, data on user preferences and energy use strategies might be of interest for utilities and other commercial agents in the energy field. In this regard, the exploitation pathways will be analysed throughout the project, especially through the bundling of anonymized data packages. The procedures for data management set in this document, particularly regarding WP3, will be followed and respected.
* _ECHOES contact database_ : for project dissemination purposes an ECHOES contact database is being created in WP8. All the partners contribute to this database by providing information on persons and organisations to be informed or contacted during the project.
Once collected, this data will be stored on a secure server, hosted and operated by the University of Oslo (UiO) and their Services for sensitive data (TSD) ( https://www.uio.no/english/services/it/research/storage/sensitive-data/), which complies with the Act relating to Personal Data Filing Systems (Personregisterloven).

## Open data pilot

ECHOES provides access to all primary data collected and to secondary data aggregations where publication of the data does not collide with copyrights of the initial data providers. This is in line with the H2020 open research data pilot and OpenAIRE (see _https://www.openaire.eu/opendatapilot_ ). Data will be made available as soon as ECHOES primary research and publication interests are fulfilled. No embargo period is implemented once the ECHOES publications are finished, and no restrictions are foreseen on the re-use of the data at this point. WP2, with the ECHOES database, is responsible for providing open access to the data.

### General principles

All data in ECHOES shall be open access unless other important principles stand against it. In this respect, the Consortium Agreement and the Grant Agreement are binding; especially Section 9 (“Access rights”) and Section 10 (“Non-disclosure of information”) of the Consortium Agreement are relevant for determining a potential need for access restrictions to ECHOES data.

### Size of the data

The size of the data files has not yet been determined at this point of the ECHOES project. The international survey is expected to have at least 9000 participants; the focus group interviews will typically have between 6 and 12 participants. This section will be updated as soon as more information is available.

### Target group for the data use

The data provided in ECHOES will be of interest for policy makers, businesses in the energy sector, stakeholder groups and other researchers. They will be documented and presented in a way that makes them accessible for non-scientists.
### Access procedures

The data made available through the open data pilot will be fully accessible without any restrictions (unless exploitation benefits require an embargo period, see Section 4.7). To keep track of the use of the data and its outreach, ECHOES will implement a registration procedure in the WP2 database (see Section 4.5) where an interested external user has to register in the system with name, affiliation and the reason for wishing to access the data. Access will be granted to all interested users, but with a registration it will be possible to roughly track what the data is used for. The validity of the provided e-mail addresses is checked by sending a confirmation link before opening the database.

### Documentation procedures

All data files provided by ECHOES include a documentation of the content of the data file and the context the data was collected in (see Section 4.2.2). This is important to ensure the usefulness of the data for researchers and analysts not included in the data collection. The documentation procedures will be constantly updated during the ECHOES project.

### Securing interoperability

For social science data it is essential to document the use and source of the theoretical concepts leading to data collections, to ensure interoperability across different user groups. ECHOES will create a glossary defining key terms and concepts used in the project. This glossary will be part of the data documentation. Furthermore, sources for theoretical concepts and variable measures will be documented to ensure comparability with previous and future use. For quantitative data, the psychometric performance of the variables will be documented. The use of theoretical concepts will be standardized within ECHOES and aligned with previous use of the variables and concepts wherever possible (there will be a number of newly developed concepts in ECHOES, which cannot be aligned with previous work).
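The registration-and-confirmation procedure described above can be sketched as follows. The record fields and the HMAC-based confirmation token are illustrative assumptions, not a specification of the WP2 database:

```python
# Minimal sketch (assumed field names) of the external-user registration record
# and the e-mail confirmation token sent before database access is opened.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # would be kept server-side in practice

def registration_record(name: str, affiliation: str, reason: str, email: str) -> dict:
    # Name, affiliation and reason for access are the fields named in the text.
    return {"name": name, "affiliation": affiliation, "reason": reason,
            "email": email, "confirmed": False}

def confirmation_token(email: str) -> str:
    # The confirmation link mailed to the user would embed this token.
    return hmac.new(SERVER_SECRET, email.encode(), hashlib.sha256).hexdigest()

def confirm(record: dict, token: str) -> dict:
    # Constant-time comparison avoids leaking token prefixes.
    if hmac.compare_digest(token, confirmation_token(record["email"])):
        record["confirmed"] = True
    return record

user = registration_record("A. Researcher", "Example University",
                           "secondary analysis", "a@example.org")
user = confirm(user, confirmation_token("a@example.org"))
print(user["confirmed"])  # True
```

Only after confirmation would the record count as a valid registration for the access log.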
### Search keywords and data identification

Each data set will be assigned a unique and persistent Digital Object Identifier (DOI) to make it identifiable when stored in the ECHOES database or a data repository. Each file will be tagged with keywords for search purposes: ECHOES is always a keyword; in addition, keywords describe the type of the data (e.g., “interview”, “survey”), the participants (e.g., “representative sample EU”) and the topics included (e.g., “energy culture”, “identity”).

### File types

Each data file in ECHOES will be made available with an accompanying documentation of its content (see Section 4.2.2). Qualitative data such as interview transcripts will be made available in the form of text documents (in PDF, TXT, RTF and DOC format), in their original language and in English translation. Quantitative data will be made available in standard data formats for popular statistical program packages to make reuse as easy as possible (ASCII, R, STATA, SPSS).

### Curation of the data after the end of ECHOES

All data in ECHOES will be made available through the ECHOES database for at least 5 years after the end of ECHOES, which means until the end of October 2024. During this period, NTNU is responsible for curating the data and for hosting costs. After that time, the data files will be transferred into a data repository that is the state of the art at that time.

# APPENDIX I: DETAILED DATA COLLECTION RESPONSIBILITIES

The following table presents a complete summary of all data collections in ECHOES with the responsibilities indicated. The table will be constantly updated during the project as soon as data collections are started.

# APPENDIX II: CONSENT FORMS

This appendix collects all consent forms and information sheets used in ECHOES. The first included document is the template for ECHOES information sheets, based on the general template provided by the Norwegian Centre for Research Data (NSD).

(1) Consent form template from NSD.
#### Request for participation in research project

_**[This template serves as an example of an information letter. Delete the text and insert your own. NB - all information should be concise and easy to understand]**_

##### "[Insert _project title_ ]"

**Background and Purpose**

[ **Describe** the purpose of the project and briefly sketch the main research topics. Indicate whether the project is a master’s or Ph.D. project at the institution, whether it is implemented as commissioned research, in cooperation with other institutions, etc.]

[ **Describe** how the sample has been selected and/or why the person has been requested to participate.]

**What does participation in the project imply?**

[ **Describe** the main features of the project: data collection that requires active participation (surveys, interviews, observation, tests, etc., preferably describing approximate duration), and any collection of data about the participant from other sources (registers, records, student files, other informants, etc.). **Describe** the types of data to be collected (e.g. "questions will concern...") and the manner(s) in which data will be collected (notes, audio/video recordings, etc.). If parents are to give consent on behalf of their children, inform that they can request to see the questionnaire/interview guide etc. If multiple sample groups are to be included in the project, it must be explicitly indicated what participation entails for each group; alternatively, a separate information letter must be made for each group.]

**What will happen to the information about you?**

All personal data will be treated confidentially. [ **Describe** who will have access to personal data (e.g. only the project group, student and supervisor, data processor, etc.), and how personal data/recordings will be stored to ensure confidentiality (e.g. if a list of names is stored separately from other data).]
[ **Describe** whether participants will be recognizable in the publication or not.]

The project is scheduled for completion by [ **insert** date]. [ **Describe** what will happen to personal data and any recordings at that point. If the data will not be made anonymous by project completion: state the purpose of further storage/use, where data will be stored, who will have access, as well as the final date for anonymization (or information about personal data being stored indefinitely).]

**Voluntary participation**

It is voluntary to participate in the project, and you can at any time choose to withdraw your consent without stating any reason. If you decide to withdraw, all your personal data will be made anonymous. [For patients and others in dependent relationships, it must be stated that it will not affect their relationship with clinicians or others if they do not want to participate in the project, or if they at a later point decide to withdraw.]

If you would like to participate or if you have any questions concerning the project, please contact [ **Insert** name and telephone number of the project leader. In student projects contact information of the supervisor should also be inserted].

The study has been notified to the Data Protection Official for Research, NSD - Norwegian Centre for Research Data.

#### Consent for participation in the study

**[Consent may be attained in writing or verbally. If consent is obtained in writing from the participant, you can use the formulation below. If parents/guardians are to give consent on behalf of their children or others with reduced capacity to give consent, the consent form must be adapted, and the participant’s name should be stated.]**

I have received information about the project and am willing to participate

\------------------------------------------------------------------------------------------------------------- (Signed by participant, date)

**[Checkboxes can be used (in addition to signature) if the project is designed in such a way that the participant can choose to give consent to some parts of the project without participating in all parts (e.g. questionnaire, but not interview), or if information is to be obtained from other sources, especially when the duty of confidentiality must be set aside in order for the information about the participant to be disclosed. _Examples: - I agree to participate in the interview / - I agree that information about me may be obtained from teacher/doctor/register - I agree that my personal information may be published/saved after project completion_ ]**

# APPENDIX III: CONFIRMATIONS BY DATA PROTECTION OFFICERS

All confirmations regarding the conduction of data collection in accordance with national and international law, especially EC Directive 95/46 and the General Data Protection Regulation (Regulation (EU) 2016/679), will be collected in this appendix as soon as they are available.

# APPENDIX IV: DATA DOCUMENTATION TEMPLATES

The following two templates shall be used to document the necessary background of the data files for internal and external use in ECHOES.

1. Data documentation template for qualitative data in ECHOES
2.
Data documentation template for quantitative data in ECHOES _Data documentation template for qualitative data in ECHOES_ Name of the data set: ___________ Date the data set was finalized: _________________ Date/time period the data was collected: ____________ to ____________ . Responsible partner for the collection of the data: _________________________________ (name) ____________________________(institution) Data produced in WP: ____________ Task: ____________ Data anonymized on (date): _______________ by __________________________ _Information about the participants:_ Number: __________ Age: _________________ Sex: ___________________ Participants’ background: ____________________________________________________ Recruitment procedure: _____________________________________________________ Original language of the material: _____________________________________________ Data collected by (interviewer): _______________________________________________ Transcribed by: ___________________________________________________________ Transcription rules: ________________________________________________________ Translated to English by: ___________________________________________________ Ethically cleared by: _____________________________________ on (date): _______________ Interview guidelines (or the like): _______________________________________________________ Size of the data (e.g. number of words): _________________________________________________ Short summary: ______________________________________________ _Data documentation template for quantitative data in ECHOES_ Name of the data set: ___________ Date the data set was finalized: _________________ Date/time period the data was collected: ____________ to ____________ . 
Responsible partner for the collection of the data: _________________________________ (name) ____________________________(institution)

Data produced in WP: ____________ Task: ____________

Data anonymized on (date): _______________ by __________________________

_Information about the participants:_

Number: __________ Age: _________________ Sex: ___________________

Participants representative for which population: __________________________________________

Recruitment procedure: _____________________________________________________

Response rate: ___________________________________________________________

Original language of the material: _____________________________________________

Translated to English by: ___________________________________________________

Ethically cleared by: _____________________________________ on (date): _______________

Variables in the dataset:

<table> <tr> <th> **Variable name** </th> <th> **Variable type** </th> <th> **Variable label** </th> <th> **Answering format / value labels** </th> <th> **Comments** </th> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table>

_Variable types:_

* T = text
* D = date / time
* B = binary / dichotomous
* C = categorical
* O = ordered categorical / ordinal
* I = interval / ratio / Likert scales with 5 or more categories

Short summary: ______________________________________________

# APPENDIX V: LITERATURE REVIEW TEMPLATE

The following template is used in the ECHOES literature review. For each paper, the following fields are recorded:

* Paper #
* Title
* Details (Authors, Journal etc.)
* Keywords
* Main Technology Foci and sub-foci
* Type of Assessment (regional, national etc.)
* Type of Formal Social Unit (collective decision-making bodies etc.)
* Definitions (Terms etc.)
* Objective(s)
* Methodology
* Main Indicators, Dimensions or Variables used or identified
* Results
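As an illustrative sketch, the variable-type codes from the quantitative data documentation template above can drive a minimal codebook check. The field names and function are assumptions for illustration, not part of the ECHOES templates:

```python
# Sketch of the variable-type codes from the quantitative documentation
# template, used as a small codebook validator (field names are illustrative).
VARIABLE_TYPES = {
    "T": "text",
    "D": "date / time",
    "B": "binary / dichotomous",
    "C": "categorical",
    "O": "ordered categorical / ordinal",
    "I": "interval / ratio / Likert scale (5+ categories)",
}

def check_codebook(rows: list) -> list:
    """Return a list of problems found in codebook rows."""
    problems = []
    for row in rows:
        if row.get("variable_type") not in VARIABLE_TYPES:
            problems.append(f"{row.get('variable_name', '?')}: unknown type "
                            f"{row.get('variable_type')!r}")
    return problems

rows = [
    {"variable_name": "age", "variable_type": "I"},
    {"variable_name": "gender", "variable_type": "X"},  # invalid on purpose
]
print(check_codebook(rows))  # ["gender: unknown type 'X'"]
```

Such a check could run when a quantitative dataset is deposited in the ECHOES database.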
# Executive Summary

The document provides an initial data management plan concerning the data processed, generated and preserved during and after the KONFIDO Action, as well as related concerns arising from their usage. The deliverable aims to define a framework outlining KONFIDO’s policy for data management. In particular, this deliverable covers topics like information about the data, metadata content and format, policies for access, sharing and re-use, and long-term storage and data management. During the KONFIDO Action, health data will be processed, and the processing of this personal data is subject to strict data protection legal rules. In addition, the data management plan will be affected by future results of the work performed in WP2-WP7. Therefore, the initial framework presented in this deliverable will further evolve during the Action.

# Introduction

The document provides an initial data management plan concerning the data processed, generated and preserved during and after the KONFIDO Action, as well as related concerns arising from their usage. The deliverable aims to define a framework outlining KONFIDO’s policy for data management. In particular, this deliverable covers topics like information about the data, metadata content and format, policies for access, sharing and re-use, and long-term storage and data management. This deliverable is “living” and strongly linked with the work taking place in the Action’s Work Packages. We note here that throughout the duration of the Action, the KONFIDO consortium will closely follow the activities in the eHealth Network (eHN), the Joint Action to Support the eHealth Network (JAseHN) and the eHealth Digital Service Infrastructure (eHDSI).

## Action goal

The KONFIDO Action will advance the state of the art of eHealth technology with respect to security aspects by providing a holistic approach – i.e.
targeting all architectural layers of an IT infrastructure, namely _storage_ , _dissemination_ , _processing_ , and _presentation_ – to address the challenges of secure storage and exchange of eHealth data, protection and control over personal data, and security of health-related data gathered by mobile devices. KONFIDO will build on and extend the results of a best-of-breed selection of successful projects (namely: epSOS, STORK, DECIPHER, EXPAND and ANTILOPE). The KONFIDO approach will be implemented in a technological framework that will rely on six technology pillars, namely:

i. The new security extensions provided by some of the main CPU vendors;

ii. Physical Unclonable Function (PUF)-based security solutions based on photonic technologies;

iii. Homomorphic encryption mechanisms;

iv. Customised extensions of selected Security Information and Event Management (SIEM) solutions;

v. A set of disruptive logging and auditing mechanisms developed in other technology sectors – such as blockchain – and transferred to the healthcare domain; and

vi. A customised STORK-compliant eID implementation. Given the recent advances in the field, KONFIDO will consider a customised eIDAS-compliant cross-border eID implementation.

The usability of the proposed solutions will be tested in a realistic setup, deployed on top of a federated cloud-based infrastructure where data will be exchanged and services interoperate cross-border. Building on results that are already widely accepted and relying on a handful of complementary technologies (some of which are already at a high level of maturity), KONFIDO has dramatic potential in terms of cross-sector transfer of innovation in the field of coordinated care towards improved acceptance of healthcare solutions.

## Work Package goal

The deliverable is written in the context of WP1 – Project Management.
The main goal of this WP is to guarantee the successful realisation and conclusion of the Action, including project administration and control, risk management, problem handling and quality assurance on management levels. This WP makes sure that the Action runs to budget, is on time and achieves the expected results. This goal will be reached by ensuring the correct and efficient collaboration between partners.

## Deliverable goal

The deliverable aims to define a framework outlining KONFIDO’s policy for data management. The deliverable is based on the guidelines provided in “ _Guidelines on Data Management in Horizon 2020_ ” published by the European Commission and aims to answer the following issues:

* What types of data will the Action generate/collect?
* What standards will be used?
* How will this data be exploited and/or shared/made accessible for verification and re-use?
* How will this data be curated and preserved?

As mentioned above, the document is dynamic and will be periodically updated in parallel with the development work taking place in other WPs. Overall, a common policy will be defined indicating the procedures for data management during and after the Action, at both internal and external level.

## Deliverable structure

The deliverable is divided into the following sections:

* Section 2 describes the Action’s approach regarding open access to scientific publications;
* Section 3 refers to the Action’s approach regarding open access to research data;
* Section 4 describes the datasets that will be used in the Action, provided by SUNDHED and PAUSIL;
* Section 5 details any standards and metadata that will be utilised in the framework of KONFIDO;
* Section 6 includes a description of the data sharing policies regarding access, sharing, re-use and distribution;
* Section 7 refers to archiving and preservation in the context of KONFIDO; and
* Section 8 concludes the document.
# Open access to scientific publications

Open access to scientific publications refers to free-of-charge online access for users. Open access will be achieved through the following steps:

1. Any paper presenting the Action results will acknowledge the Action: “ _The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 727528-KONFIDO_ ” and display the EU emblem.
2. Any paper presenting the Action results will be deposited, at least by the time of publication, to a formal repository for scientific papers. If the organisation does not support a formal repository ( _https://www.openaire.eu/participate/deposit/idrepos_ ), the paper can be uploaded to the European-sponsored repository for scientific papers: _http://zenodo.org/_ .
3. Authors can choose to pay “author processing charges” to ensure open access publishing, but they still have to deposit the paper in a formal repository for scientific papers (step 2).
4. Authors will ensure open access via the repository to the bibliographic metadata identifying the deposited publication. More specifically, the following will be included:
   * The terms “ _European Union (EU)_ ” and “ _Horizon 2020_ ”;
   * “ _KONFIDO - Secure and Trusted Paradigm for Interoperable eHealth Services_ ”, Grant agreement number 727528;
   * Publication data, and the length of the embargo period if applicable; and
   * A persistent identifier.
5. Each case will be examined separately in order to decide on self-archiving or paying for open access publishing.

# Open access to research data

Open access to research data refers to the right to access and re-use digital research data generated by Actions. The EU expects funded researchers to manage and share research data in a manner that maximises opportunities for future research and complies with best practices in the relevant subject domain.
That is:

* The dataset has clear scope for wider research use;
* The dataset is likely to have long-term value for research or other purposes;
* The dataset has broad utility for reference and use by research communities;
* The dataset represents a significant output of the research project.

Openly accessible research data generated during the KONFIDO Action will be accessed, mined, exploited, reproduced and disseminated free of charge for the user. Specifically, the " _Guidelines on Data Management in Horizon 2020_ " clarify that the beneficiaries must:

_“Deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_

_i. The data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible; ii. Other data, including associated metadata.”_

The following section describes some sample datasets that we are planning to use in the framework of KONFIDO. The provided datasets are, at this early stage of the Action, possible examples which are probably subject to change with the evolution of the Action. For each dataset that we are going to share in the project lifetime, policies for access and sharing as well as policies for re-use and distribution will be defined and applied. A generic guideline is provided in Section 6 " _Data sharing_ " and Section 7 " _Archiving and preservation_ ".

# Datasets description

KONFIDO pilots involve the cross-border exchange of patient data. The end-user partners of the consortium have access to such types of data; however, it is not yet clear if the Action will require the use of real patient data for the pilots. In case this is needed, KONFIDO partners will follow best practices during pilot definition, including data anonymisation or pseudonymisation, ethical approvals for working with these data, informed consents of patients, etc.
All patient data will be handled according to national law and guidelines. This deliverable is a “live” document; therefore, it will be updated as the Action progresses and all relevant issues will be addressed. Below, we provide a brief overview of the datasets that can be provided to the KONFIDO pilots by the SUNDHED and PAUSIL partners, if needed.

## SUNDHED dataset

The 1st KONFIDO end-user workshop gave valuable input regarding the potential dataset. The typical scenario will involve a consultation or treatment situation where the general practitioner or hospital doctor will have limited time to get an overview of the patient. Therefore, the dataset needs to be short and to the point. The actual nature of the required data will vary a lot from patient to patient, so it will be very hard to pinpoint a one-size-fits-all solution. For example, a patient with a broken leg will primarily need X-rays, while a COPD patient might need data on current medication and lung function. In all these situations, the doctors at the end-user workshop agreed that Electronic Health Record (EHR) notes are not important for the vast majority of the patients. A minimum dataset that would cover most of the basic needs would include:

* Diagnosis data based on ICPC-2 and/or ICD-10 codes;
* Access to laboratory test results based on IUPAC codes;
* Current medication based on ATC codes;
* Open prescriptions for the patient – preferably usable at local pharmacies;
* X-ray pictures or scans, or descriptions if the bandwidth is limited or the images are very large;
* EHR notes should be made available but, in a situation with limited time on hand, some kind of filter would need to be used to help the doctor get an overview. Often the notes are generated by dictation and the data are not structured;
* Specific data for chronic patients, like INR values, lung function, HbA1c or similar.
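The minimum dataset listed above can be sketched as a structured record. The class and field names are illustrative assumptions, not a KONFIDO schema; only the coding systems (ICPC-2/ICD-10, IUPAC, ATC) come from the text:

```python
# Illustrative sketch of the SUNDHED minimum cross-border dataset as a
# structured record; class and field names are assumptions, not a KONFIDO schema.
from dataclasses import dataclass, field

@dataclass
class MinimumPatientDataset:
    diagnoses: list = field(default_factory=list)          # ICPC-2 / ICD-10 codes
    lab_results: dict = field(default_factory=dict)        # keyed by IUPAC code
    current_medication: list = field(default_factory=list) # ATC codes
    open_prescriptions: list = field(default_factory=list)
    imaging: list = field(default_factory=list)            # X-ray references or descriptions
    ehr_notes: list = field(default_factory=list)          # filtered notes, not the full record
    chronic_values: dict = field(default_factory=dict)     # e.g. INR, lung function, HbA1c

# Example: a COPD patient, per the scenario in the text.
patient = MinimumPatientDataset(
    diagnoses=["ICD-10:J44.9"],
    current_medication=["ATC:R03AC02"],
    chronic_values={"HbA1c": 42.0},
)
print(patient.diagnoses)  # ['ICD-10:J44.9']
```

Such a record keeps the dataset "short and to the point" while leaving room for patient-specific fields.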
It is very important that all members of the health data ecosystem agree on the measurement values and standards, so that there is no doubt on how a doctor will interpret the numbers.

## PAUSIL dataset

The PAUSIL Information System (IS) can be decomposed into three main areas: _Management and Accounting IS_ , _Clinical IS_ and _Communication and web services_ . For the purpose of the KONFIDO Action, the Clinical IS is described in Figure 1. The Clinical IS (CIS) is a computer-based infrastructure designed to collect, store and manipulate data related to the healthcare delivery process. CIS data consist of structured/unstructured text represented in a proprietary format. The CIS can, in turn, be divided into the Diagnostic IS and the Emergency Department IS (EDIS). The Diagnostic IS includes the Laboratory IS (Lab IS) and the Radiology Information Service (RIS). The Lab IS integrates and manages the results of patients’ tests with the Electronic Health Record (EHR) and Transfusion Medicine. Data managed by the RIS system are digital radiological images (DICOM format) that represent large volumes of data. The picture archiving and communications system (PACS) replaces paper and film archiving. It provides archiving, viewing, and distribution of medical images to radiologists, physicians of other specialties, and doctors in other hospitals, and it is integrated with EHRs too. All the system integrations are realised using the HL7 standard, which refers to a set of international standards for the transfer of clinical and administrative data between software applications used by various healthcare providers.

**Figure 1. PAUSIL Information System**

EDIS is based on a dedicated application that collects patient information from triage to discharge or admission, following the workflow described in Figure 2. Each dataset comes in specific formats, as specified in Table 1.

**Figure 2.
ED workflow**

<table> <tr> <th> **Data** </th> <th> **Description** </th> <th> **Size** </th> <th> **Type** </th> <th> **Structure** </th> </tr> <tr> <td> CIS </td> <td> Clinical information system </td> <td> ~15 GB/500 GB </td> <td> Text </td> <td> Structured/unstructured </td> </tr> <tr> <td> EDIS </td> <td> Emergency department information system </td> <td> ~5 GB </td> <td> Text </td> <td> Structured/unstructured </td> </tr> <tr> <td> RIS </td> <td> Radiology information system </td> <td> ~6.5 TB/130 GB </td> <td> Text </td> <td> Structured/unstructured </td> </tr> <tr> <td> PACS </td> <td> Picture archiving and communications system </td> <td> ~19.5 TB/8 TB </td> <td> Image </td> <td> Unstructured </td> </tr> <tr> <td> Lab IS </td> <td> Laboratory information system </td> <td> ~2 GB </td> <td> Text </td> <td> Structured/unstructured </td> </tr> <tr> <td> EHR </td> <td> Electronic health record </td> <td> ~9 TB </td> <td> Text </td> <td> Structured/unstructured </td> </tr> </table>

**Table 1. Emergency Department Dataset**

# Standards and metadata

Metadata is needed on the datasets. This will provide transparency and traceability to make security and auditing possible. The law and regulations demand a large degree of control when handling this type of data. Also, unique identifiers need to be in place for patients and doctors to make sure that data goes to the right people at the right time. Most of the data will come from the local systems with lots of metadata, and this will vary greatly from country to country, hospital to hospital and system to system.
Some of the base data needed will include:

* Time stamp for when the data was generated;
* Time stamp for the exchange of data;
* Health professional to contact regarding the data: organisation, e-mail and phone number;
* Log with accesses and actions regarding the dataset;
* Unique identifiers of the doctors who access the data;
* Unique identifier for the patient;
* Country code on data and access log;
* Data on who added or edited the data;
* Data on possible shielding of certain types of data.

Standards can be selected to help the data exchange, but the variation in the systems employed across Europe is very large, and there will be hundreds of standards and even big variations within the same standards and/or versions of these. Datasets should be as standardised as possible, but exchanging data purely through defined standards will be very hard. To use standards, one does not only need to invent new or further develop current standards, but also to implement them in the actual systems, which is a very difficult task: the technological support alone is not enough, the implementation also has to actually happen in each organisation. Moreover, the systems/organisations asking others to adapt to a standard will typically have neither the budget nor the governance authority within the other organisation. Experience shows that you need to be able to handle a multitude of standards if you want to integrate a large number of systems, and KONFIDO is aiming to potentially integrate a very large number of systems. A standard for integrating with KONFIDO should be defined, but it is a must that it can co-exist with the current standards and interfaces in the systems. Even on a European level it is hard to imagine all countries moving at the same pace.
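The base metadata fields listed above can be sketched as a record attached to each cross-border exchange. The class, field and method names are illustrative assumptions, not a fixed KONFIDO schema:

```python
# Sketch of the per-exchange metadata fields listed above (names illustrative).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExchangeMetadata:
    generated_at: datetime            # when the data was generated
    exchanged_at: datetime            # when the data was exchanged
    contact: dict                     # health professional: organisation, e-mail, phone
    patient_id: str                   # unique patient identifier
    country_code: str = ""
    accessed_by: list = field(default_factory=list)     # unique doctor identifiers
    access_log: list = field(default_factory=list)      # accesses and actions
    edited_by: list = field(default_factory=list)       # who added or edited the data
    shielded_fields: list = field(default_factory=list) # shielding of certain data types

    def log_access(self, doctor_id: str, action: str) -> None:
        # Record both the accessing doctor and the action in the audit log.
        self.accessed_by.append(doctor_id)
        self.access_log.append({"who": doctor_id, "action": action,
                                "when": datetime.now(timezone.utc).isoformat()})

now = datetime.now(timezone.utc)
meta = ExchangeMetadata(now, now,
                        {"name": "Dr. Example", "org": "Example Hospital"},
                        patient_id="PT-0001", country_code="DK")
meta.log_access("DOC-42", "read")
print(len(meta.access_log))  # 1
```

Keeping the audit log inside the metadata record is one way to make the traceability demanded by law and regulations concrete.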
Most of the existing data are privately held by KONFIDO data owners. For the time being, the consortium has not decided how the data/results generated within the Action will be treated. This will be an on-going process and is expected to be finalised later in the course of the Action, as it is closely related to the definition and the requirements of the KONFIDO use cases. The issues identified so far concern: * The definition of data owner(s); * The definition of incentives concerning the data providers; * The identification of user groups and the access policies concerning the data; * The definition of access procedures and embargo periods; * The compliance with corresponding legal and ethical issues. Open access to research data will be achieved in KONFIDO through the following steps: 1. Prepare the " _Data Management Plan_ " (current document) and update it as needed; 2. Select what data we will need to retain to support validation of the Action findings (the datasets described in Section 4); 3. Deposit the research data into an online research data repository. In deciding where to store Action data, the following options will be considered, in order of priority: * An institutional research data repository, if available; * An external data archive or repository already established in the KONFIDO research domain (to preserve the data according to recognised standards); * The European sponsored repository: _http://zenodo.org/_ ; * Other data repositories (searchable here: _http://www.re3data.org_ ), if the aforementioned ones are ineligible. 4. License the data for re-use (the Horizon 2020 recommendation is to use CC0 or CC BY); 5. Provide information on the tools needed for validation, i.e. everything that could help a third party in validating the data (workflow, code, etc.). 
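The repository selection order above amounts to picking the first available option from a fixed priority list. A minimal sketch, in which the short labels are illustrative assumptions rather than project-defined names:

```python
# Illustrative priority order for depositing Action data, mirroring the
# list above; the short labels are assumptions made for this sketch.
REPOSITORY_PRIORITY = [
    "institutional",  # institutional research data repository, if available
    "domain",         # established archive in the KONFIDO research domain
    "zenodo",         # the European sponsored repository (zenodo.org)
    "other",          # other repositories (see re3data.org)
]

def select_repository(available):
    """Return the highest-priority repository type that is available,
    or None if none of the options is eligible."""
    for option in REPOSITORY_PRIORITY:
        if option in available:
            return option
    return None
```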
Independent of the selected repository, the authors will ensure that the repository: * Gives the submitted dataset a persistent and unique identifier to make sure that research outputs in disparate repositories can be linked back to particular researchers and grants; * Provides a landing page for each dataset, with metadata; * Helps track whether the data has been used by providing access and download statistics; * Keeps the data available in the long term, if desired; * Provides guidance on how to cite the data that has been deposited. Even following the previously described steps, each case will be examined separately in order to select the most suitable online repository. ## Policies for access and sharing As suggested by the European Commission, the partners will deposit the research data needed to validate the results presented in the deposited scientific publications at the same time as those publications. This timescale applies to data underpinning the publications and the results presented. Research papers written and published during the funding period will be made available with a subset of the data necessary to verify the research findings. The consortium will then make a newer, complete version of the data available within 6 months of Action completion. This embargo period is requested to allow time for additional analysis and further publication of research findings to be performed. Other data (not underpinning the publications) will be shared during the Action life following a granular approach to data sharing, releasing subsets of data at distinct periods rather than waiting until the end of the Action, in order to obtain feedback from the user community and refine the data as necessary. An important aspect to take into account is who is allowed to access the data. It could be the case that part of a dataset should not be publicly accessible to everyone. 
In this case, control mechanisms will be established, including: * Authentication systems that limit read access to authorised users only; * Procedures to monitor and evaluate access requests one by one. A user must complete a request form stating the purpose for which they intend to use the data; * Adoption of a Data Transfer Agreement that outlines conditions for access and use of the data. Each time a new dataset is deposited, the consortium will decide who is allowed to access it. Generally speaking, anonymised and aggregate data will be made freely available to everyone, whereas sensitive and confidential data will only be accessed by specific authorised users. ## Policies for re-use and distribution A key aspect of data management is to define policies that let users learn of the existence of datasets and what they contain. People will not be interested in a set of unlabeled files published on a website. To attract interest, partners will describe accurately the content of each published dataset and, each time a new dataset is deposited, disseminate the information using the appropriate means (e.g., mailing list, press release, Facebook, website news) based on the type of data and the interested target audience. Research data will be made available in a way that can be shared and easily re-used by others. That means: 1. Sharing data using open file formats (whenever possible), so that they can be read by both proprietary and open source software; 2. Using formats based on an underlying open standard; 3. Using formats which are interoperable among diverse internal and external platforms and applications; 4. Using formats which do not contain proprietary extensions (whenever possible). Documenting datasets, data sources and the methodology used for acquiring the data establishes the basis for the interpretation and appropriate usage of the data. Each generated/collected and deposited dataset will include documentation to help users to re-use it. 
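A minimal sketch of the access rule described above (anonymised and aggregate data open to everyone; sensitive data only for authenticated users with an approved request and a signed Data Transfer Agreement); the dictionary keys are assumptions made for illustration:

```python
OPEN = "anonymised"       # freely available to everyone
RESTRICTED = "sensitive"  # specific authorised users only

def may_access(dataset: dict, user: dict) -> bool:
    """Apply the dataset access policy sketched in the text above."""
    if dataset.get("sensitivity") == OPEN:
        return True
    # Sensitive/confidential data: all three controls must hold.
    return (
        user.get("authenticated", False)         # authentication system
        and user.get("request_approved", False)  # per-request evaluation
        and user.get("dta_signed", False)        # Data Transfer Agreement
    )
```

The per-request evaluation and the Data Transfer Agreement are modelled here as simple boolean flags; in practice each would be backed by the procedures described above.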
As recommended, the license that will be applied to the data is CC0 or CC BY. If limitations exist for the generated data, these restrictions will be clearly described and justified. Potential issues that could affect how data can be shared and used may include the need to protect participant confidentiality, comply with informed consent agreements, protect Intellectual Property Rights, submit patent applications and protect commercial confidentiality. Possible measures that may be applied to address these issues include encryption of data during storage and transfer, anonymisation of personal information, development of Data Transfer Agreements that specify how data may be used by an end user, specification of embargo periods, and development of procedures and systems to limit access to authorised users only. # Archiving and preservation Datasets will be maintained for 5 years following Action completion. To ensure high-quality long-term management and maintenance of the datasets, the consortium will implement procedures to protect information over time. These procedures will permit a broad range of users to easily obtain, share and properly interpret both active and archived information, and they will ensure that information is: * Kept up-to-date in content and format so that it remains easily accessible and usable; * Protected from catastrophic events (e.g., fire and flood), user error, hardware failure, software failure or corruption, security breaches, and vandalism. Regarding the second aspect, solutions dealing with disaster risk management and recovery, as well as with regular backups of data and off-site storage of backup sets, are always integrated when using the official data repositories (e.g., _http://zenodo.org/_ ); the partners will ensure the adoption of similar solutions when choosing an institutional research data repository. 
Partners are encouraged to claim costs for resources necessary to manage and share data; these will be clearly described and justified. Arrangements for post-Action data management and sharing must be made during the life of the Action. Services for long-term curation and preservation, such as POSF (Pay Once, Store Forever) storage, will be purchased before the Action ends. # Conclusions The purpose of the Data Management Plan is to support the data management life cycle for all data that will be collected, processed or generated by the KONFIDO Action. The Data Management Plan is not a fixed document, but evolves during the lifespan of the Action. This document is expected to mature during the Action; more developed versions of the plan could be included as additional deliverables at later stages. The Data Management Plan will be updated at least by the mid-term and final reviews to fine-tune it to the data generated and the uses identified by the consortium, since not all data or potential uses are clear at this stage of the Action.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1185_CrowdHEALTH_727560.md
# 1\. Executive Summary This deliverable is the first version of the Data Management Plan (DMP) of the CrowdHEALTH project and compiles a first description of the datasets according to the FAIR template. The overall purpose of this document is to support the data management life cycle for all data that will be collected, processed or generated by the project. CrowdHEALTH participates in the Open Research Data Pilot of the H2020 Programme [1], and a DMP is required for all projects participating in this programme. The DMP is usually intended to be submitted by month 6 of the project, and the European Commission recommends using a template that follows the FAIR Data Management Plan Model [2] in order to make the research data **F** indable, **A** ccessible, **I** nteroperable and **R** e-usable. However, due to CrowdHEALTH's complexity with regard to data, which comes from the fact that the project deals with heterogeneous sets of data from different sources that are initially not interoperable, and taking into account the high level of privacy and confidentiality of most datasets, together with the large number of datasets identified, the FAIR Model was not appropriate for gathering relevant information from the pilots (Use Cases) as a starting point. The technicians needed more practical information about the datasets to understand what type of data will be made available and how it will be used by the pilots. Consequently, some extra time was needed to create appropriate questionnaires and collect relevant information. The whole procedure for gathering information about partners' datasets, and in particular those provided by the Use Cases who have access to them, has been described in the introduction of this deliverable, and the forms used to compile the information can be found in the annexes. 
The reason for sharing these steps and the procedure we designed prior to the elaboration of the first DMP is that we propose what can be seen as a good practice, which could be replicated, and probably improved, in other projects of similar complexity regarding the available data and the datasets to be researched and used by the developments. The project has identified a list of 13 datasets and a FAIR questionnaire has been completed for each one. The DMP is a living document that will be updated for the mid-term and the final review of the project. # 2\. Introduction ## 2\. 1 Objective The CrowdHEALTH DMP has been conceived to support the data management life cycle for all data that will be collected, processed or generated by the project. Moreover, the CrowdHEALTH DMP aims to identify best practices for gathering information about the variety of data to be used in the project that could optimise the development, to identify specific standards for the generated data, and to assess their suitability for sharing and reuse in accordance with official guidelines. ## 2\. 2 Document Structure Section 2 - this introduction - explains what a DMP contains, the datasets identified during the project, how the CrowdHEALTH consortium proceeded to gather the information, and why we started by completing simpler templates and finally completed the FAIR Data Management Plan Template. Section 3 contains inputs to the FAIR Model following the template for all project datasets. The annexes contain the templates of the questionnaires for the Use Case partners that were used prior to completing the FAIR Data Management Plan Template, plus some additional information. ## 2\. 3 What the Data Management Plan should contain as described in the Grant Agreement From the Grant Agreement p. 
193: “ _CrowdHEALTH Data Management Plan will comply with the EC Data Management Plan template_ [2] _and will specify how the generated data will be easily discovered and accessed, ensuring open access by adopting the adequate licensing scheme (e.g. Creative Commons License)._ _The plan will also generate wider interest towards improvements achieved by the project in order to facilitate and potentiate exploitation opportunities. CrowdHEALTH capitalizes on the development of a well-defined Data Management Plan (DMP), including: (i) Data Types, Formats, Standards and Capture Methods; (ii) Ethics and Intellectual Property, (iii) Access, Data Sharing and Reuse; (iv) Resourcing; (v) Deposit and Long-Term Preservation, and (vi) Short-Term Storage and Data Management._ _Moreover, the plan will describe quality evaluating tools/procedures, which will prove the data intelligibility, and will define the type of accompanying information in the form of metadata or short description to allow potential users to gain awareness on the data concepts and evaluate their suitability for future use._ ” ## 2\. 4 Key Performance Indicators of the Data Management Plan As stated in the Description of Action (DoA) [3], the following are the means of measurement and Key Performance Indicators (KPIs) for the first evaluation of the DMP: * _At least 5 open data management packages uploaded_ * _At least 5 repositories, where open data management packages are uploaded_ In case of not providing access to data, the DMP must contain the reasons for not giving access. It is too early to address the KPIs at this stage of the project development. This will be addressed in further versions of the DMP and towards the end of the project. ## 2\. 
5 Data classification as identified during the Kick-off meeting The CrowdHEALTH project is complex with regard to data, given the diversity, privacy and confidentiality of the collected data and the different sources it comes from, and as such it operates on very different and heterogeneous datasets. The first questions to the Use Case partners related to the types of datasets to be considered, their characteristics and descriptions. Answering those questions, and providing details regarding the available and potentially available data to be shared within the context of the project, was essential to start defining the high-level architecture of the system. During the kick-off meeting, the consortium identified two basic stages with respect to the availability of datasets: 1. Existing datasets _provided by the use cases_ or open datasets that we may use for the project purposes 2. Datasets of aggregated data _generated (i.e. collected) during the project pilots_ Subsequently, an initial and tentative classification guided the consortium towards the understanding of what the project will face in terms of data. <table> <tr> <th> 1) _**EHR Data** (Stockholm University Hospital and Hospital La Fe) _ </th> </tr> <tr> <td> Disease-specific (e.g. obesity, diabetes, etc.) in standardised formats (e.g. HL7, FHIR, etc) </td> </tr> <tr> <td> 2) _**Living Lab and Chronic Disease Management Data** (DFKI, BioAssist) _ </td> </tr> <tr> <td> a) Nutrition data, lifestyle, activities, biosignals </td> </tr> <tr> <td> b) Standardisation (ADLs, IADLs, Open mHealth, FHIR, PHRs) </td> </tr> <tr> <td> 3) _**Public Health Data** (Ministry of Education, Science and Sport, University of Ljubljana ) _ </td> </tr> <tr> <td> 1. Somatic and motor development data sets from SLOfit system (anthropometric measurements of children 1 ); 2. In a second step, nutrition, physical activity, sedentariness, sleep, resting heart-rate, socioeconomic status and parental physical activity. 
</td> </tr> <tr> <td> 4) _**Social Health Data** (Care Across, BioAssist) _ </td> </tr> <tr> <td> a) Information shared through online platforms with different stakeholders (e.g. physicians, patients, etc) regarding diagnosis, treatment, co- morbidities </td> </tr> <tr> <td> b) Other sources with information about health behaviours related to interactions between patients and patients-care givers </td> </tr> <tr> <td> 5) _**Non-Medical Data** (ALL) _ </td> </tr> <tr> <td> a) Open and linked data, e.g. World bank or WHO Statistics, Socio-Economic and Financial Data, etc. </td> </tr> </table> ## 2\. 6 Pilot on Open Research Data in Horizon 2020 CrowdHEALTH participates in the Pilot on Open Research Data in Horizon 2020 [1] and will offer open access to scientific results reported in publications, to relevant scientific data and to data generated throughout the project lifetime in its numerous demonstrators, provided that they are anonymized and fully respect national and EU privacy regulations. It will aim to improve and maximise access to and re-use of scientific data generated by the project i.e., system performance data, user validations, etc. For those projects that are participating in the Pilot on Open Research Data in Horizon 2020, the European Commission suggests a template for the DMP called FAIR Data Management Plan Template. However, this template has been designed to include information progressively, and after the kick-off meeting of the project, it was clear that gathering some initial information about the datasets not included in the FAIR template was needed, not only for the purposes of preparing the DMP but to provide some information to the project developers in order to avoid any delay in the definition of the project architecture and the work package technical development. ## 2\. 
7 Original questions asked by the technical WPs After the kick-off meeting, the leaders of the CrowdHEALTH technical WPs needed detailed information about the datasets to start their work. Their questions were gathered into a list that can be found in Annex 1\. ## 2\. 8 Questions to Use Case partners (UC) In reality, we used the pretext of the DMP deliverable – in which we need to determine, for each dataset, what the project intends to do with the data produced within the project – to concretise and better understand the CrowdHEALTH use cases' starting points in terms of available data. The original questions asked by the technical WPs were merged and reordered with some basic details that were also needed for this first version of the DMP. The questionnaire was distributed among the Use Case partners, and the replies allowed technical WP staff to understand how each Use Case operates in terms of policy making, what their prioritised policies are, how they evaluate them, and what is expected as project outputs for the creation and evaluation of policies. We understand now that not all inputs gathered at that initial stage are relevant and need to be included in the final version of the DMP. However, we considered that it would be easier for Use Cases to provide all initial information related to data, and how data is being used in their context, in one single document, which could then serve as a basis to be consulted by the technical partners. This questionnaire can be found in Annex 2. The replies to this questionnaire were compiled in a document restricted to the project participants - D2.1a, Questions to Use Cases for the DMP. This document served as an input to WP5 and WP6 deliverables. ## 2\. 9 The First Project Dataset Questionnaire form Annex 3 shows an initial _dataset form_ oriented to the DMP preparation. This form allowed the consortium to identify all the datasets, their owners, and the most important characteristics of each dataset. 
At the moment, the owners of the datasets are the 6 Use Cases. The replies not relevant to the DMP have been transferred to the WP5 and WP6 teams. ## 2\. 10 The initial list of datasets Thanks to the compiled forms, we identified the project’s first list of datasets (some of which will be combined to create the envisioned holistic health records – this is the case of HULAFE, DFKI, CRA): * DATASET 1: HULAFE * DATASET 2: Stockholm County Council/ Karolinska University Hospital/SwedeHeart * DATASET 3: Bio Data (DFKI) * DATASET 4: Nutritional Data (DFKI) * DATASET 5: Activity Data (DFKI) * DATASET 6: Biosignals and Activity Data (BIO) * DATASET 7: BIO-PHRs (Personal Health Records) (BIO) * DATASET 8: Medication (BIO) * DATASET 9: Allergies Dataset (BIO) * DATASET 10: Social Data dataset (BIO) * DATASET 11: CancerPatientData Dataset (CRA) * DATASET 12: SLOfit Dataset (ULJ) * DATASET 13: Lifestyle Dataset (ULJ) ## 2\. 11 The FAIR Data Management Template Questionnaire The initial forms and steps were required to understand the nature of the datasets. They allowed the consortium to design the project architecture and to start development with less risk of delay. But in order to share the information later with other researchers, the public, etc., and to start the preparation of the DMP draft, the consortium needed to fill in a more complete form: the FAIR Data Management Template. As described in the Guidelines on FAIR Data Management in Horizon 2020 [2], this template is a set of questions that the partners should answer with a level of detail appropriate to the project. For this first version it is not required to provide detailed answers to all the questions. The DMP is intended to be a living document in which information can be made available gradually through successive updates as the implementation of the project progresses. New versions will be produced for the periodic reviews. 
Section 3 shows the replies to the FAIR questionnaire of our use cases, which is the core of this deliverable and will be updated in future versions of this plan. # 3\. CrowdHEALTH FAIR Data Management Plan This section gathers all FAIR forms completed with information from the project datasets. ## 3\. 1 DATASET 1: HULAFE <table> <tr> <th> **DATASET 01** </th> <th> **NAME: Hospital La Fe Dataset (HULAFE)** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _The purpose of the data is mainly to:_ * _Allow the data-driven modelling partners to develop risk stratification models for the risk of obesity, the chances of improving the condition, and the identification of the population with obesity that has not been diagnosed yet (the under-diagnosis of obesity and overweight is a common and worldwide problem)._ * _Allow the forecasting modelling partners and causal modelling partners to develop forecasting and causal models for aiding in the improvement of public health policies for overweight and obesity, regarding the goal of promoting the systematic detection of obesity and overweight citizens._ * _To help policy makers in evaluating their actions by monitoring population trends in obesity on a clinical and regional level by using key performance indicators and the developed data-driven models._ * _To aid policy makers in creating and evaluating policies relating to adult obesity and overweight by performing big data analysis on collected data. 
HHRs will provide an opportunity for performing data analysis and building health risk predictive models._ * _To support the endocrinology service with information related to patients that are at medium/high risk of becoming obese to carry out preventive actions, as well as giving feedback related to the already diagnosed obese patients._ * _To improve the healthcare management, clinical and public health dashboards, introducing Key Performance Indicators to improve the treatment and follow-up of patients with obesity and overweight._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _The data are structured. There are numeric and categorical variables. The data are originally stored in SQL databases, and they are gathered through SQL queries and exported in Excel, CSV or any other text formats for research and data-analysis purposes._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _Existing data could NEVER be used for other research projects._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _The data comes from Business Intelligence datamarts that in turn come from some of the integrated databases of the Hospital La Fe Information Systems IRIS and ORION._ _Additionally, data from the Primary Health Care System will be available and integrated. This data comes from the databases of the Primary Care Information Systems ABUCASIS._ </td> </tr> </table> <table> <tr> <th> 1.5 Expected size </th> </tr> <tr> <td> _N > 10.000 individuals, D > 85 variables. 
_ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _Endocrinologists, the endocrinology service is the main supporter of this action as they need the information feedback to improve their health care attention on obese citizens and patients; Data analysts and researchers, might benefit from the data as they will provide answers on different research questions regarding the problem of overweight and obesity; Public Health Institutions, data might provide evidence regarding public health policy making on the promotion of physical activity, nutrition habits and systematic detection of obesity and overweight; Primary care professionals, who are the first health care professionals to diagnose and treat obesity; and hospital managers, who require information and indicators to take evidence-based decisions._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _The data uses a Unique Random Identifier, they are anonymized and de- identified. We use generalization and suppression mechanisms to guarantee de- identification._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _None._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _We use the date of extraction._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _None._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? 
</td> </tr> <tr> <td> _The dataset will only be available for the data analyst partners during the CrowdHEALTH project upon the signature of a Non-Disclosure Agreement._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _The data will be made accessible using a Hospital La Fe cloud repository._ </td> </tr> </table> <table> <tr> <th> 2.2.3 What methods or software tools are needed to access the data? </th> </tr> <tr> <td> _A web browser._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _N/A._ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Hospital La Fe Cloud._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _Encrypted and compressed files will be available to those partners who signed a Confidential NonDisclosure Agreement. After the signature a login and a password will be provided to these partners to get access to those files._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _To be described by the end of November._ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _User login and password._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? 
</td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _Only the information about diagnosis are described using ICD-9-CM codes._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _No._ </td> </tr> </table> <table> <tr> <th> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </th> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The data will not be licensed for any re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _Data will never be available for re-use._ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _The use of the data is limited to the framework of the project. Any re-use of the data after the project is forbidden by the Legal & Ethical Committee of Hospital La Fe and the Spanish law for data protection (Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal). Once the project is finished any data should be destroyed by any partner doing research with them. _ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? 
</td> </tr> <tr> <td> _Unknown._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions)._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _María Luisa Correcher Palau (head of the Business Intelligence Technologies of Hospital La Fe)_ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _Hospital La Fe has a Data Processing Center with both server and storage infrastructure racks and communications infrastructure, with a separate electrical panel area, with fire detection and protection systems, UPS and air conditioning units for the area's own air conditioning, and with RF-120 fire-resistant walls. Access to the data is allowed only to authorized staff via an active directory with user logins and encrypted passwords. The Data Processing Center also has disk-to-disk and tape backup systems._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Data from Electronic Health Records (EHRs) must be anonymized and de-identified to protect patients’ privacy and obey the data protection legislation, especially when the data is to be used for research purposes within any research project._ _Sensitive private information such as the name, surname, date of birth, address, zip codes, and identity numbers must be suppressed. 
A random unique identifier is needed to link the information of patients, without any relation to them, in such a way that backtracking to the patient is completely impossible._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _No questionnaires are going to be used in this Use Case._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> ## 3\. 2 DATASET 2: Stockholm County Council/ Karolinska University Hospital/SwedeHeart (KI) <table> <tr> <th> **DATASET 02** </th> <th> **NAME: Stockholm County Council/ Karolinska University Hospital/SwedeHeart** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _A patient diagnosed with one of the listed ICD-10 codes for cardiovascular diseases in Stockholm will be registered in two different databases. The first one is the Stockholm County Database (VAL), which is responsible for collecting information from all the hospitals in Stockholm County. The second one is the SwedeHeart quality registry, which collects information nationally, but only for patients with cardiovascular diseases. More specifically, SwedeHeart is a procedural and surgery-related registry which gathers information on the disease, the patient’s risk profile, medical treatment processes and laboratory results, among others. The two databases are independent, which means that different information is registered for the same patient._ _We expect that the extraction of information from the databases _for the same period_ (2006-2012) will give us the possibility to track the same patients. This assumption is well founded: it is based on other research projects that have carried out similar procedures (the SCREAM project). 
_ _Aims:_ * _Provide the relevant data to the health analytics partners in order to develop models that can be used by policy makers to identify causality relationships between events, forecast the development of cardiovascular diseases on a population-based sample, explore clinical pathways and produce risk stratification outcomes._ * _Create a new data-driven policy-making process, factoring in information from sources that currently function independently._ * _Support the cardiovascular and policy-making research community in Sweden._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _Both databases use the same format: .csv_ _The data will be static, meaning that we are going to get two extracts, one from the VAL database and one from the SwedeHeart Quality registry._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _As included in the databases described above_ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _Already mentioned._ </td> </tr> </table> <table> <tr> <th> 1.5 Expected size </th> </tr> <tr> <td> _80,000 unique patients._ * _Phase 1: 5,000 (December 2017)_ * _Phase 2: 25,000 (September 2018)_ * _Phase 3: 80,000 (March 2019)_ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _KI’s use case scenario stakeholders._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _None_ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _Yes. 
Extraction date_ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _None_ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _Samples of the dataset will be shared with the CrowdHEALTH partners responsible for the data analysis during the project upon the signature of a Non-Disclosure Agreement. The data should be used for training scripts, systems or algorithms. The partners should have already implemented the GDPR in their organizations._ _It must be stated that it is important to pseudonymise/code, and sometimes double-code, the data and ensure that it is shared in a secure manner. If the data is to be sent outside KI, it must be encrypted, or external partners may be given access to a secure server at KI._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _By deposition in a repository_ </td> </tr> </table> <table> <tr> <th> 2.2.3 What methods or software tools are needed to access the data? </th> </tr> <tr> <td> _Pull/Push methods_ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? 
</td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _ICD-10 codes_ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _Yes_ </td> </tr> </table> <table> <tr> <th> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </th> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The data will not be licensed for any re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _Data will never be available for re-use_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _No costs have been calculated for this purpose at the moment_ </td> </tr> <tr> <td> 3.2 How will these be covered? 
</td> </tr> <tr> <td> _If any costs are incurred, they will be covered by CrowdHEALTH_ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _LIME, Karolinska Institutet IT department. The IT department will assign three data security managers:_ * _Controller: the person who decides on the purpose of the personal data processing, irrespective of whether anyone else carries out the actual processing._ * _Processor: the person who carries out the personal data processing on behalf of the controller. In the GDPR the processor has direct liability and direct responsibilities. This means, for example, that agreements should be updated and that processors must themselves observe a large proportion of the requirements in the GDPR._ * _The Data Protection Officer (DPO) is a natural person who represents the liable instance in matters which relate to the GDPR. Requirements govern the appointment of representatives for authorities._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _Karolinska Institutet follows the General Data Protection Regulation (GDPR)._ _Data that will be used for the CrowdHEALTH project may only be gathered for specific, expressly stated and justified purposes and may not be subsequently processed in a manner which is incompatible with these purposes._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? 
</td> </tr> <tr> <td> _After receiving ethical approval to handle the data, it is important that all personal data is stored and worked on safely, and it needs to be protected against unauthorized access. Personal data may only be stored in systems and solutions that are approved for personal data at KI._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _We do not intend to use any questionnaires._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> ## 3\. 3 DATASET 3: Bio Data (DFKI) <table> <tr> <th> **DATASET 3** </th> <th> **NAME: Bio Data (DFKI)** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _This data is collected to gain insight into the physical stress a user experienced throughout the day._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _The collected data will be BioSnapshots (heart rate, step counter - once per minute) and sleep data (total sleep duration, list of individual sleep intervals (REM, deep sleep, light sleep, awake…) with their respective durations)._ _The data will be stored in an SQL database and made available through the HHRs communicated to the CrowdHEALTH platform._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _The data will be aggregated by fitness trackers, e.g. Fitbit devices._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _We expect to have data from up to 100 participants, entering about 1-2 sleep entries and up to 1440 BioSnapshot entries per day for at least a year. This amounts to roughly 1.8 GB of data._ 
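_A back-of-the-envelope check of this estimate (a hypothetical sketch; the per-entry size of roughly 35 bytes is an assumption for illustration, not a figure from the plan):_

```python
# Sanity check of the ~1.8 GB estimate for DATASET 3.
# bytes_per_entry is an assumed value (timestamp + heart rate + step count).
participants = 100
biosnapshots_per_day = 1440   # one BioSnapshot per minute
days = 365                    # "for at least a year"
bytes_per_entry = 35

total_entries = participants * biosnapshots_per_day * days
total_gb = total_entries * bytes_per_entry / 1e9
# ~52.6 million entries, ~1.84 GB: consistent with the stated 1.8 GB
```

_At roughly 35 bytes per row, the stated participant count and sampling rate reproduce the quoted 1.8 GB figure._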
</td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _The data might be useful to detect relations between food intake, physical activity, overall mood and sleep patterns, which are all collected from the same individuals over the same periods of time._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _Generally the Java naming conventions are followed._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No_ </td> </tr> </table> <table> <tr> <th> 2.1.4 Do you provide clear version numbers? </th> </tr> <tr> <td> _No. Because the data will be gathered over time, no versioning is possible._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _The whole metadata and interoperability layer will be added via HHR and the FHIR extensions._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _Information gathered in the DFKI use case will be anonymized and made available to CrowdHEALTH partners._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _For our raw data there will be pull interfaces and regular dumps. It will be made available to the partners in deployed CrowdHEALTH platform instances by being integrated into the HHRs of patients selected by the use case partners HULAFE and CRA._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? 
</td> </tr> <tr> <td> _The raw data will be provided as JSON and SQL dumps; the SQL dumps are created from a MariaDB 10 server._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _No software will be needed for the raw data; there will most likely be a form of data documentation._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Data can be accessed directly with an API key associated with each involved partner, but the best way to access it is via HHR._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _RESTful or RPC pull interface on the DFKI aggregation platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _No_ </td> </tr> </table> <table> <tr> <th> 2.2.10 Are there well described conditions for access? </th> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _The access will be controlled by API keys_ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _Yes, but only in the HHR/FHIR representation._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _FHIR and the proposed FHIR extensions_ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability? 
</td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _Yes, mappings to the source data vocabularies used will be made available_ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The raw data will only be available to partners in CrowdHEALTH; the data integrated from it into the HHR/FHIR format will be available under the same restrictions as the HULAFE and CRA data._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _As soon as it is aggregated it will be available._ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _The data will be reusable by third parties._ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _There will be no data maintenance from DFKI; the data is provided as-is._ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _The HHR/FHIR formatted data is FAIR by definition._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _Jan Janssen (DFKI)_ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? 
</td> </tr> <tr> <td> _The raw data will only be preserved for internal access by DFKI; the data integrated from it into the HHR/FHIR format will be under the same long-term preservation policies as any other data from HULAFE and CRA._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _Raw data will be stored encrypted in a server room within the DFKI premises with appropriate access restrictions._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _No, the raw data will not be safely stored for long term preservation and curation, at least not for public access._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _The collected raw data cannot be shared without raising ethical or legal issues._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _No_ </td> </tr> </table> ## 3\. 4 DATASET 4: Nutritional Data (DFKI) <table> <tr> <th> **DATASET 4** </th> <th> **NAME: Nutritional Data (DFKI)** </th> </tr> <tr> <td> **1\. 
Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _This data is collected to get insight into the nutritional behaviour of individuals, that is, food composition, food types, quantity and nutritional values._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> * _General nutritional information about ingredients, recipes, allergens, allergies and diets._ * _Manually logged nutritional habits of the patients._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _Partially._ _Some of the ingredient and recipe data is collected from third-party services, and some of those data will not be publicly available, but in the scope of the project all the data will be accessible._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> * _The nutritional information comes from public domain sources like the USDA Food Nutrition database._ * _Recipes are gathered from several public domain sources, and some proprietary sources._ * _LanguaL information is gathered from LanguaL._ * _Translations are made with the help of DeepL._ * _Lucene and Stanford NLP are used for matching purposes._ * _The source is always marked and a cleanup based on the source is always possible._ * _Only public domain data can be shared externally; in the scope of the project every consortium member gets access to all of the data._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _We expect to have data from up to 60 participants, entering about 2-10 dishes per day for a target duration of 1.5 years. The data will be stored at DFKI, but only aggregated nutritional information will be pushed into the CrowdHEALTH platform. 
The collected data with all details will amount to 68 GB, of which 7 GB of aggregated nutritional information will be pushed into the CrowdHEALTH platform by using optimal datatypes._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _The data might be useful to detect relations between food intake, physical activity, overall mood and sleep patterns, which are all collected from the same individuals over the same periods of time (please refer to information about the different datasets of DFKI regarding the data collected and correlated)._ </td> </tr> </table> <table> <tr> <th> **2 FAIR Data** </th> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _Generally the Java naming conventions are followed._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No. Because the data will be gathered over time, no versioning is possible._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _The whole metadata and interoperability layer will be added via HHR and the FHIR extensions._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _Every piece of information gathered in the DFKI use case will be available to all CrowdHEALTH partners._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _For the aggregated data there will be pull interfaces and regular dumps. 
It will be made available to the partners in deployed CrowdHEALTH platform instances by being integrated into the HHRs of patients selected by the use case partner HULAFE._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _The raw data will be provided as JSON and SQL dumps; the SQL dumps are created from a MariaDB 10 server_ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _No software will be needed for the raw data; there will most likely be a form of data documentation._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _-_ </td> </tr> </table> <table> <tr> <th> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </th> </tr> <tr> <td> _Data can be accessed directly with an API key associated with each involved partner, but the best way to access it is via HHR._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _RESTful or RPC pull interface on the DFKI aggregation platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _The access will be controlled by API keys_ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? 
</td> </tr> <tr> <td> _Yes, but only in the HHR/FHIR representation._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _FHIR and the proposed FHIR extensions._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _Yes, mappings to the source data vocabularies used will be made available_ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The aggregated data will only be available to partners in CrowdHEALTH; the data integrated from it into the HHR/FHIR format will be available under the same restrictions as the HULAFE data._ </td> </tr> </table> <table> <tr> <th> 2.4.2 When will the data be made available for re-use? </th> </tr> <tr> <td> _As soon as it is aggregated it will be available._ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _The data will be reusable by third parties._ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _There will be no data maintenance from DFKI; the data is provided as-is._ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? 
</td> </tr> <tr> <td> _The HHR/FHIR formatted data is FAIR by definition._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _Jan Janssen (DFKI)_ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _The aggregated data will only be preserved for internal access by DFKI; the data integrated from it into the HHR/FHIR format will be under the same long-term preservation policies as any other data from HULAFE._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _Raw data will be stored encrypted in a server room within the DFKI premises with appropriate access restrictions._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _No, the raw data will not be safely stored for long term preservation and curation, at least not for public access._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _The collected raw data cannot be shared without raising ethical or legal issues._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _No._ </td> </tr> </table> ## 3\. 5 DATASET 5: Activity Data (DFKI) <table> <tr> <th> **DATASET 5** </th> <th> **NAME: Activity Data (DFKI)** </th> </tr> <tr> <td> **1\. 
Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _This data is collected to gain insight into the amount, duration, intensity and kind of physical activities of individuals._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _Data on physical activities as detected by fitness trackers, extended by qualitative information provided by the participants about what activities (social or physical) they performed._ _Automatically and manually logged (or manually enhanced) information about the activities carried out._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _Partially._ _Some of the activity types are gathered from public domain sources._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _Several public domain terminologies, DFKI-added and user-added data._ _The source is always marked and a cleanup based on the source is always possible._ _Only public domain data can be shared externally; in the scope of the project every consortium member gets access to all of the data._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _We expect to have data from up to 100 participants, entering about 2-10 entries per day for a period of 1.5 years. This will then amount to approx. 1 GB of data using optimal datatypes._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _The data might be useful to detect relations between food intake, physical activity, overall mood and sleep patterns._ </td> </tr> </table> <table> <tr> <th> **2 FAIR Data** </th> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? 
</td> </tr> <tr> <td> _Generally the Java naming conventions are followed._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No. Because the data will be gathered over time, no versioning is possible._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _The whole metadata and interoperability layer will be added via HHR and the FHIR extensions._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _Every piece of information gathered in the DFKI use case will be available to all CrowdHEALTH partners._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _For our raw data there will be pull interfaces and regular dumps. It will be made available to the partners in deployed CrowdHEALTH platform instances by being integrated into the HHRs of patients selected by the use case partners HULAFE and CRA._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _The raw data will be provided as JSON and SQL dumps; the SQL dumps are created from a MariaDB 10 server_ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _No software will be needed for the raw data; there will most likely be a form of data documentation._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _-_ </td> </tr> </table> <table> <tr> <th> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? 
</th> </tr> <tr> <td> _Data can be accessed directly with an API key associated with every involved partner, but the best way to access it is via HHR._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _Via a RESTful or RPC pull interface on the DFKI aggregation platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _Access will be controlled by API keys._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _Yes, but only in the HHR/FHIR representation._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _FHIR and the proposed FHIR extensions._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _Yes, mappings to the source data sources used will be made available._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible?
</td> </tr> <tr> <td> _The raw data will only be available to the partners in CrowdHEALTH; the data integrated from it into the HHR/FHIR formatted data will be available under the same restrictions as the HULAFE and CRA data._ </td> </tr> </table> <table> <tr> <th> 2.4.2 When will the data be made available for re-use? </th> </tr> <tr> <td> _As soon as it is aggregated, it will be available._ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _The data will be reusable by third parties._ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _There will be no data maintenance from DFKI; the data is provided as-is._ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _The HHR/FHIR formatted data is already defined as FAIR._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _Jan Janssen (DFKI)_ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _The raw data will only be preserved for internal access by DFKI; the data integrated from it into the HHR/FHIR formatted data will be under the same long-term preservation policies as any other data from HULAFE and CRA._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security?
</td> </tr> <tr> <td> _Raw data will be stored encrypted in a server room within the DFKI premises with appropriate access restrictions._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _No, the raw data will not be safely stored for long term preservation and curation, at least not for public access._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _The collected raw data cannot be shared without raising ethical and legal issues._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _No._ </td> </tr> </table> ## 3.6 DATASET 6: Biosignals and Activity Data (BIO) <table> <tr> <th> **DATASET 6** </th> <th> **NAME: Biosignals and Activity Data** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _Chronic disease management calls for constant monitoring of specific vital signs, while physical activity levels are an important parameter for assessing a patient’s lifestyle and wellness. Proper analysis of these data can assist clinicians in effective decision-making and can provide policy makers with the means to measure the impact of relevant policies with respect to a population’s health, changes in patient behaviour and quality of life._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _The dataset includes historical data.
More specifically, the types of measurements included are:_ * _Oxygen saturation_ * _Blood pressure_ * _Blood glucose_ * _Spirometry_ * _Weight_ * _Step count_ * _Mood_ _Each measurement is accompanied by a timestamp. The data are currently structured in a custom format for internal use in the BioAssist platform and can be extracted in JSON or CSV format. However, they will be shared with CrowdHEALTH in FHIR format as Observations._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _The dataset comprises data collected within the frame of BioAssist’s piloting activities throughout the past year and is continuously growing. All data that are already available will be used and new data will be continuously fed to the CrowdHEALTH platform._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _The data are collected from patients participating in BioAssist’s piloting operations. Biosignal measurements are collected from a variety of sensors (pulse oximeter, blood pressure meter, glucometer, weighing scale) and activity data are collected from wearable activity trackers. This dataset also includes patients’ daily mood, which is self-reported via BioAssist’s app._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _Approximately 100 patients are enrolled in BioAssist’s pilots. Each patient performs daily measurements of at least two types of biosignals and mood self-report, and uses the platform’s communication features twice a week on average. Within the next couple of months, these patients will also be provided with activity trackers, which will allow them to record their daily step count.
This amounts to a dataset of more than 100,000 observations, which is continuously growing._ </td> </tr> </table> <table> <tr> <th> 1.6 Data utility </th> </tr> <tr> <td> _To the patients’ attending doctors, as well as potentially interested policy makers._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _Yes, the data include identifiers (patient name, observation ID), but after anonymisation they will not be identifiable._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _At the moment, the dataset utilises the SNOMED terminology, but any other standard can be applied if required (e.g. LOINC)._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _Biosignal measurements are accompanied by the sensor model and in some cases with metadata created by the sensor related to the quality of the measurement._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The dataset shall become available to the consortium after anonymization, when permission by the Greek Data Protection Authority is obtained. Legal restrictions do not permit open access to the data._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? 
</td> </tr> <tr> <td> _The data will be pushed to the CrowdHEALTH big data platform and underlying store._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _The platform provides an API for obtaining the data. This API can be used to access them._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _Not Required._ </td> </tr> </table> <table> <tr> <th> 2.2.5 Is it possible to include the relevant software? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Legal constraints do not permit us to provide open access to these data. The data will be deposited on a local deployment of the CrowdHEALTH platform and will be accessible only within the project._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _The data will only be accessible for use to authorised users of the platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _We do not foresee the need for a data access committee at the moment. This will be verified when the process for obtaining permission by the Greek Data Protection Authority is completed._ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _This will be handled by the CrowdHEALTH user authentication and access control components._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? 
</td> </tr> <tr> <td> _The data are currently in custom format, but will be shared within the project in FHIR format._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _At the moment, the dataset utilises the SNOMED terminology, but any other standard can be applied if required (e.g. LOINC)._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _There is no need to provide mappings._ </td> </tr> </table> <table> <tr> <th> **2.4 Increase data re-use (through clarifying licenses)** </th> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The data will not be available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _There may be costs related to e.g. legal support, but they cannot be estimated at the moment. All other known costs are already covered by the project._ </td> </tr> <tr> <td> 3.2 How will these be covered? 
</td> </tr> <tr> <td> _This will depend on the nature of the costs that will arise._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _BioAssist’s CTO._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _This has not been discussed yet._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _BioAssist has developed state-of-the-art security, business continuity and data retention plans which are currently in place._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _At the moment, all data are stored only on BioAssist’s database._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Ethical issues have already been resolved. Permission from the Greek Data Protection Authority is required before the (anonymised) dataset can be shared with the partners. Legal restrictions do not permit open access to this dataset._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes, all patients that are supplying data to this dataset have signed informed consent._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _Not at the moment._ </td> </tr> </table> ## 3.7 DATASET 7: BIO-PHRs (Personal Health Records) (BIO) <table> <tr> <th> **DATASET 7** </th> <th> **NAME: BIO-PHRs (Personal Health Records)** </th> </tr> <tr> <td> **1\.
Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _Medical test results of patients enrolled in BioAssist’s pilots are collected from primary healthcare providers, in order to provide their doctors with access to this information and support them in monitoring their patients’ health status. Analysis of such data can be useful in tasks such as assessment of clinical pathways._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _The dataset contains historical data. Each entry includes a timestamp, category, description, comments, result type (numeric/description), result value and unit, as well as a reference range. Currently the data are structured in a custom format for internal use in the BioAssist platform and can be extracted in JSON or CSV format. However, they will be shared with CrowdHEALTH in a FHIR format as Observations._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _The dataset comprises PHR data collected within the frame of BioAssist’s piloting activities throughout the past year and is continuously growing. All data that are already available will be used and new data will be continuously fed to the CrowdHEALTH platform automatically whenever new results are generated._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _The data are collected from primary healthcare providers via appropriate web services and stored on the BioAssist platform. Some entries may be manual inputs from the platform’s users._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _Approximately 100 patients are enrolled in BioAssist’s pilots. 
The average number of entries in a patient’s PHR is more than 200, although some patients’ PHRs may contain no entries yet._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _To the patients’ attending doctors, as well as potentially interested policy makers._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _No._ </td> </tr> </table> <table> <tr> <th> 2.1.2 What naming conventions do you follow? </th> </tr> <tr> <td> _At the moment, the dataset utilises the SNOMED terminology, but any other standard can be applied if required (e.g. LOINC)._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _Yes_ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _No metadata is created._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The dataset shall become available to all partners in the CrowdHEALTH project after anonymisation, when permission by the Greek Data Protection Authority is obtained. Legal restrictions do not permit open access to the data._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _The data will be pushed to the CrowdHEALTH GW._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? 
</td> </tr> <tr> <td> _All software tools required to access the data will be developed as components of the CrowdHEALTH platform._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _Not Required._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Legal constraints do not permit us to provide open access to these data. The data will be deposited on a local deployment of the CrowdHEALTH platform and will be accessible only within the project._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.2.8 If there are restrictions on use, how will access be provided? </th> </tr> <tr> <td> _The data will only be accessible for use to authorised users of the platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _We do not foresee the need for a data access committee at the moment. This will be verified when the process for obtaining permission by the Greek Data Protection Authority is completed._ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _This will be handled by the CrowdHEALTH user authentication and access control components._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? 
</td> </tr> <tr> <td> _The data are currently in a custom format, but will be shared within the project in FHIR format._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _At the moment, the dataset does not utilize any standard. A standard will be applied depending on the standard that will eventually be used for dataset 6._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _It is not certain at the moment._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _There is no need to provide mappings._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The data will not be available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.4.4 How long is it intended that the data remains re-usable? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _There may be costs related to e.g. legal support, but they cannot be estimated at the moment. 
All other known costs are already covered by the project._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _This depends on the nature of the costs that will arise._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _BioAssist’s CTO._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _This has not been discussed yet._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _BioAssist has developed state-of-the-art security, business continuity and data retention plans which are currently in place._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _At the moment, all data are stored only on BioAssist’s database._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Ethical issues have already been resolved. Permission from the Greek Data Protection Authority is required before the (anonymised) dataset can be shared with the partners. Legal restrictions do not permit open access to this dataset._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes, all patients that are supplying data to this dataset have signed informed consent._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _Not at the moment._ </td> </tr> </table> ## 3.8 DATASET 8: Medication (BIO) <table> <tr> <th> **DATASET 8** </th> <th> **NAME: Medication** </th> </tr> <tr> <td> **1\.
Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _Patients’ prescribed medications are recorded by their attending doctors on the BioAssist platform, in order to maintain a comprehensive profile for each patient and encourage medication adherence with reminders. This data can be used for patient profiling and assessment of clinical pathways._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _This dataset includes prescribed medications of patients enrolled in BioAssist’s pilots. Each entry includes active ingredient, brand name, drug form (e.g. tab, capsule, etc.), strength and dose time. It is currently structured in a custom format, but will be shared within CrowdHEALTH in FHIR format._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _Medication data are constantly updated. All modifications to the medication list are logged (with timestamps). The dataset includes data collected within the past year and new data are added continuously._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _Medication data are mainly collected from manual inputs by doctors._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _Prescribed medications are also recorded for all patients (approximately 100 people) enrolled in BioAssist’s pilots and this data can be linked to the other datasets. The average number of medication doses for each patient is more than 2 per day.
BioAssist will also pursue additions to the dataset with data from Greek national authorities following the respective approvals._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _To the patients’ attending doctors, as well as potentially interested policy makers._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _No metadata are created_ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The dataset shall become available to the consortium after anonymization, when permission by the Greek Data Protection Authority is obtained. Legal restrictions do not permit open access to the data._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _The data will be pushed to the CrowdHEALTH GW._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _All software tools required to access the data will be developed as components of the CrowdHEALTH platform._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? 
</td> </tr> <tr> <td> _Not Required._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Legal constraints do not permit us to provide open access to these data. The data will be deposited on a local deployment of the CrowdHEALTH platform and will only be accessible within the project._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _The data will only be accessible for use by authorised users of the platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _We do not foresee the need for a data access committee at the moment. This will be verified when the process for obtaining permission by the Greek Data Protection Authority is completed._ </td> </tr> </table> <table> <tr> <th> 2.2.10 Are there well described conditions for access? </th> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _This will be handled by the CrowdHEALTH user authentication and access control components._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _The data are currently in custom format, but will be shared within the project in FHIR format._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _At the moment, the dataset does not utilize any standard. 
A standard may be applied depending on the standard that will eventually be used for dataset 6._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _It is not certain at the moment._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _There is no need to provide mappings._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The data will not be available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _There may be costs related to e.g. legal support, but they cannot be estimated at the moment. All other known costs are already covered by the project._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _This will depend on the nature of the costs that will arise._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _BioAssist’s CTO._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? 
</td> </tr> <tr> <td> _This has not been discussed yet._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _BioAssist has developed state-of-the-art security, business continuity and data retention plans which are currently in place._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _At the moment, all data are stored only on BioAssist’s database._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Ethical issues have already been resolved. Permission from the Greek Data Protection Authority is required before the (anonymised) dataset can be shared with the partners. Legal restrictions do not permit open access to this dataset._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes, all patients that are supplying data to this dataset have signed informed consent._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _Not at the moment._ </td> </tr> </table> ## 3.9 DATASET 9: Allergies Dataset (BIO) <table> <tr> <th> **DATASET 9** </th> <th> **NAME: Allergies Dataset** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _Users of the BioAssist platform can manually input their allergies, adding useful information for attending doctors. These data can be utilized for patient profiling._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _This dataset includes recorded allergies of patients enrolled in BioAssist’s pilots.
Allergy data can be considered static. Entries are allergens in text format._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _This is a static dataset. Only existing data will be used._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _Allergy data are mainly collected via manual input by doctors._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _This dataset does not contain a large number of records, as very few patients enrolled in BioAssist’s pilots have reported allergies. However, these data will be useful when linked to the other datasets._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _Patients’ attending doctors._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _No naming conventions are followed at the moment._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _-_ </td> </tr> </table> <table> <tr> <th> 2.1.4 Do you provide clear version numbers? </th> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _No metadata are created._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The dataset shall become available to all partners after anonymization, when permission by the Greek Data Protection Authority is obtained._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? 
</td> </tr> <tr> <td> _The data will be pushed to the CrowdHEALTH GW._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _All software tools required to access the data will be developed as components of the CrowdHEALTH platform._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _Not Required._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Legal constraints do not permit us to provide open access to these data. The data will be deposited on a local deployment of the CrowdHEALTH platform and will be accessible only within the project._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _The data will only be accessible for use to authorised users of the platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _We do not foresee the need for a data access committee at the moment. This will be verified when the process for obtaining permission by the Greek Data Protection Authority is completed._ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _No._ </td> </tr> </table> <table> <tr> <th> 2.2.11 How will the identity of the person accessing the data be ascertained? </th> </tr> <tr> <td> _This will be handled by the CrowdHEALTH user authentication and access control components._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? 
</td> </tr> <tr> <td> _The data are currently in a custom format, but will be shared within the project in FHIR format._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _At the moment, the dataset does not utilize any standard. A standard may be applied depending on the standard that will eventually be used for dataset 6._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _There is no need to provide mappings._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _The data will not be available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _There may be costs related to e.g. legal support, but they cannot be estimated at the moment. All other known costs are already covered by the project._ </td> </tr> <tr> <td> 3.2 How will these be covered? 
</td> </tr> <tr> <td> _This will depend on the nature of the costs that will arise._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _BioAssist’s CTO._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _This has not been discussed yet._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _BioAssist has developed state-of-the-art security, business continuity and data retention plans which are currently in place._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _At the moment, all data are stored only on BioAssist’s database._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Ethical issues have already been resolved. Permission from the Greek Data Protection Authority is required before the (anonymised) dataset can be shared with the partners. Legal restrictions do not permit open access to this dataset._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes, all patients that are supplying data to this dataset have signed informed consent._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _Not at the moment._ </td> </tr> </table> ## 3\. 10 DATASET 10: Social Data dataset (BIO) <table> <tr> <th> **DATASET 10** </th> <th> **NAME: Social Data dataset** </th> </tr> <tr> <td> **1\. 
Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _Social data collected from the BioAssist platform provide insights into the patients’ behaviour, lifestyle and wellbeing. They can be utilised for patient profiling, as well as assessment of policies targeting patient behaviour._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _This dataset includes logs of the interaction of BIO’s pilot users with the platform’s social features. These logs provide information on contact management (e.g. number of people in contact list, number of invitations sent), videocalls (i.e. time, duration, frequency), events (e.g. notifications) and multimedia content shared (e.g. number of files uploaded by the patient, number of files uploaded by the patient’s contacts)._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _This dataset includes historical data which are continuously updated. All currently available data will be used, and new data will be periodically imported to the CrowdHEALTH platform throughout the course of the project._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _Social data are collected automatically by the BioAssist platform._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _The dataset contains hundreds of thousands of records._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _Policy makers and caregivers._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? 
</td> </tr> <tr> <td> _No naming conventions are followed at the moment._ </td> </tr> </table> <table> <tr> <th> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </th> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _-_ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The dataset shall become available to the consortium after anonymization, when permission by the Greek Data Protection Authority is obtained. Legal restrictions do not permit open access to the data._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _The data will be pushed to the CrowdHEALTH GW._ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _All software tools required to access the data will be developed as components of the CrowdHEALTH platform._ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _Not Required._ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _Legal constraints do not permit us to provide open access to these data. The data will be deposited on a local deployment of the CrowdHEALTH platform and will be accessible only within the project._ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? 
</td> </tr> <tr> <td> _The data will only be accessible for use to authorised users of the platform._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _We do not foresee the need for a data access committee at the moment. This will be verified when the process for obtaining permission by the Greek Data Protection Authority is completed._ </td> </tr> </table> <table> <tr> <th> 2.2.10 Are there well described conditions for access? </th> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _This will be handled by the CrowdHEALTH user authentication and access control components._ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _Currently the data are structured in a custom format for internal use in our systems. A new data model for sharing this data with CrowdHEALTH is being developed._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _At the moment, the dataset does not utilize any standard._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _This will be determined at a later stage._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _This will be determined at a later stage._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? 
</td> </tr> <tr> <td> _The data will not be available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _There may be costs related to e.g. legal support, but they cannot be estimated at the moment. All other known costs are already covered by the project._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _This will depend on the nature of the costs that will arise._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _The company’s CTO._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _This has not been discussed yet._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _BioAssist has developed state-of-the-art security, business continuity and data retention plans which are currently in place._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _At the moment, all data are stored only on BioAssist’s database._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Ethical issues have already been resolved. 
Permission from the Greek Data Protection Authority is required before the (anonymised) dataset can be shared with the partners. Legal restrictions do not permit open access to this dataset._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes, all patients that are supplying data to this dataset have signed informed consent._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _Not at the moment._ </td> </tr> </table> ## 3\. 11 DATASET 11: CancerPatientData Dataset (CRA) <table> <tr> <th> **DATASET 11** </th> <th> **CancerPatientData Dataset (CareAcross)** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _The data are collected from patients so that they can receive personalised coaching and medical information. These are central themes for our use case, in terms of coaching adherence & information impact._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _They are structured data about patients’ daily lives and the information they receive, mostly consisting of numbers of food portions or Boolean flags about specific diagnoses and treatments._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _Patients_ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _For each patient, we expect to collect about 30 data points. It is expected that 1,000 patients will have entered data throughout this project._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _On an individual level, to the patient. 
On a research level, to health care professionals and policy makers._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _Yes: each input is defined as a Question and the possible answers are recorded as Answers._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _All Questions and Answers have associated “value” fields._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No_ </td> </tr> </table> <table> <tr> <th> 2.1.4 Do you provide clear version numbers? </th> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The data subjects are patients themselves, who have agreed to provide data on the condition that their individual records are not shared beyond the purposes of this service or the confines of the company._ _Aggregated data will be made available to the CrowdHEALTH platform, but will not be open to the public or otherwise shared beyond the scope of this project._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? 
</td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.2.10 Are there well described conditions for access? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _No specific standard exists for the variety and breadth of data we are collecting. However, the structure is such that the data can be interoperable._ _On the other hand, as discussed, the individual data will be contained within the Use Case and only aggregated data will be shared with the central CrowdHEALTH platform._ _The aggregated data will be amenable to further analysis by the CrowdHEALTH platform. This data will be exported in a structure which will contain field-value pairs; the exact structure of that export cannot be determined in advance, because the level of detail and aggregation depends on the findings of the analysis performed within the Use Case._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _The extent of such standards followed for the aggregated data structures will depend on the findings. 
We expect these to evolve as the data gets richer._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _The data entered are structured but patient-oriented; therefore, mappings can be provided where necessary. However, such formal ontologies are usually created with a system orientation. For example, “dry mouth” and “mouth sores” mean completely different things to the patient, but may actually be reflected onto a single element (as opposed to two elements) within an ontology._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _Data will not be licensed but will remain the property of CareAcross._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </th> </tr> <tr> <td> _No_ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? 
</td> </tr> <tr> <td> _We estimate that the costs of constructing such datasets will be moderate, provided they are delivered no more frequently than already agreed with the partners managing the centralised repository (at most every 6 months)._ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _The effort for aggregated data included in an open access scheme will be covered as part of the corresponding grant portion; the rest will be covered by the company._ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _The company’s CEO (Thanos Kosmidis)._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _Data is preserved indefinitely._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _Best practices for data security are followed, using state-of-the-art commercial cloud providers, and frequent backups are scheduled. The current commercial cloud provider is Heroku (powered by Salesforce)._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _Data is safely stored in state-of-the-art cloud-based repositories._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _These data sets consist of sensitive patient data, contributed by the patients themselves. Our Terms of Service and Privacy Policy restrict sharing to the aggregate level only (as discussed above)._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? 
</td> </tr> <tr> <td> _Patients agree to the aforementioned Terms of Service and Privacy Policy as part of the registration process._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> ## 3\. 12 DATASET 12: SLOfit Dataset (ULJ) <table> <tr> <th> **DATASET 12** </th> <th> **NAME: SLOfit (ULJ)** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _The purpose of the SLOfit data collection is:_ * _to enable children and parents to compare their somatic and physical fitness development with the development of their peers and identify the needs for improvement_ * _to permit teachers to use the analysed data to identify children with special developmental needs, to follow the development of every individual child and adjust the teaching process to the needs and capabilities of children_ * _to equip school physicians and pediatricians with holistic information from multiple sources in order to enable tailoring of custom-made interventions as well as monitoring and evaluating the effects of small-scale interventions;_ * _to help policy makers in evaluating their actions by monitoring population trends in physical fitness and obesity on a national level (the SLOfit data serve as the scientific backbone for most policies related to improving the physical activity of children and youth and for policies related to school physical education)_ * _to create a collection of data on lifestyle and socio-economic environment alongside the records already collected through standard physical fitness testing. Linking the upgraded SLOfit database with the e-health system (eHR) within the CrowdHEALTH (CH) platform will provide an integrated view of the patient. 
This will allow more accurate forecasting of health risks for children and youth, the performance of causal analyses and, consequently, the shaping of an efficient public health policy framework and the provision of integrated healthcare services within the educational and healthcare systems._ * _to aid policy makers in creating and evaluating policies relating to childhood obesity and physical activity by performing big data analysis on collected data. HHRs will provide an opportunity for performing causal analysis and building health risk predictive models. Any intervention such as increasing physical activity, reducing obesity or general morbidity at the level of a class, school, municipality or region, or at the national level, could be evaluated. Moreover, it will constitute a useful tool to directly evaluate the trends among groups and an optimal means to promptly highlight the impact of the various policies._ * _to provide illustrative visualizations of the collected data on the individual, school, regional and national level. Via the CH platform, every child, his/her parents, school physician and policy makers will be able to see the status and development of the child’s physical fitness, physical activity and linked health risks on an individual and aggregated level. The integration of predictive models into this platform would also provide policy makers with a useful tool to forecast trends, simulate the effects of interventions and measure the impact of interventions and relevant policies in the context of physical fitness, physical activity and obesity._ </td> </tr> </table> <table> <tr> <th> 1.2 Types and formats of data </th> </tr> <tr> <td> _Data are stored in an MS SQL database, but will be exported to Excel._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _Yes, the SLOfit data already exist but data from every year of measurement are not linked. 
Therefore, for the purpose of CrowdHEALTH the data will be linked into cohort data and used in visualizations._ </td> </tr> <tr> <td> 1.4 Origin </td> </tr> <tr> <td> _SLOfit is a national surveillance system for somatic and physical fitness development of children and youth in Slovenia, which was formerly known as the Sports Educational Chart. The system was implemented in 1982 on a sample of Slovenian schools and, after 5 years of testing, it was introduced to all Slovenian primary and secondary schools. Therefore, SLOfit enables annual monitoring of the somatic and physical fitness status of children in all Slovenian schools from 1987 onwards. Every April, almost the entire Slovenian population aged 6 to 19 (220,000 students) is measured by 8 fitness tests and 3 anthropometric measurements (see _www.slofit.org_)._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _About 200,000 students, with a record for every year of measurement (between 1 and 14). Every record will have about 20 values (data columns)._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> * _For decision makers, the SLOfit data might provide the scientific backbone for most policies related to improving the physical activity of children and youth, as well as policies to combat child obesity. It might also provide suitable data for health risk predictive models based on physical fitness data, and for simulating the effects of physical activity interventions on physical fitness and the prevalence of obesity._ * _For researchers, the SLOfit data might provide answers on the patterns of physical and motor development and their interrelatedness, and the relation between physical activity, sedentariness, Socio-Economic Status (SES), parental physical activity and physical fitness. 
The data analysis might provide answers on how much physical activity is necessary for retaining an adequate level of physical fitness, and what ratios of physical activity and sedentariness still provide sufficient physical fitness. It might also provide suitable data for designing individual somatic and motor development forecasts._ * _For physicians, the SLOfit data on the individual level might provide a holistic view of the health of an individual schoolchild and of the effects of interventions on physical fitness as an important indicator of the child’s health._ * _For parents, the SLOfit data on the individual level might provide alerts, which currently do not exist, if the SLOfit data analyzed through the CrowdHEALTH platform suggest that recommendations for physical fitness or activity are not met._ </td> </tr> </table> <table> <tr> <th> **2 FAIR Data** </th> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.2 What naming conventions do you follow? </td> </tr> <tr> <td> _None._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _None._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _The SLOfit dataset will not be openly available due to voluntary restrictions. 
Over 30 years, the SLOfit team has invested substantial resources into data collection and dataset management, so open access would not be a fair way for the team to exploit the possibilities of the dataset. The SLOfit team wants to keep an overview of how the dataset is used and to collaborate in any analyses._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.2.3 What methods or software tools are needed to access the data? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.5 Is it possible to include the relevant software? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> **2.3 Making data interoperable** </th> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _Yes, at least within the consortium._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?
</td> </tr> <tr> <td> _No standard data vocabularies will be used._ </td> </tr> <tr> <td> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </td> </tr> <tr> <td> _No, we will create custom codes for our data and share them with the partners in the consortium._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _Copyright will be retained by the University of Ljubljana. The data will not be publicly available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.4.5 Are data quality assurance processes described? </th> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 3.2 How will these be covered? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _Gregor Starc, Faculty of Sport, University of Ljubljana._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _The cost of long term preservation of the SLOfit dataset is approximately EUR 3,000 per year.
The decision rests with the Faculty of Sport and the Ministry of Education. Currently, there is no time limit on the preservation of the dataset._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _The Faculty of Sport, University of Ljubljana operates a Data Center with both server and storage infrastructure, accessible only to authorized personnel and equipped with a fire detection system, video surveillance and air cooling. Access to the data is restricted to authorized staff via Active Directory user logins and encrypted passwords. The Data Center also has disk-to-disk backup systems._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Data are protected by personal data protection legislation. However, individual consents have been acquired that allow for sharing anonymized data._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _None._ </td> </tr> </table> ### 3.13 DATASET 13: Lifestyle Dataset (ULJ) <table> <tr> <th> **DATASET 13** </th> <th> **NAME: Lifestyle Dataset (ULJ)** </th> </tr> <tr> <td> **1\. Data Summary** </td> </tr> <tr> <td> 1.1 Purpose </td> </tr> <tr> <td> _The purpose of the Lifestyle data collection is:_ * _to enable children and parents to compare their lifestyle habits (i.e. physical activity, sedentariness, sleep) with the recommendations and to identify needs for improvement;_ * _to equip school physicians and pediatricians with holistic information from multiple sources in order to enable the tailoring of custom-made interventions as well as the monitoring and evaluation of the effects of small-scale interventions;_ * _to create a collection of data on lifestyle and the socio-economic environment alongside the records already collected through standard physical fitness testing. Linking the upgraded SLOfit database (the SLOfit + Lifestyle database) with the e-health system (eHR) within the CrowdHEALTH (CH) platform will provide an integrated view of the patient. This will allow more accurate forecasting of the predictive health risk of children and youth, the performance of causal analysis, and consequently the shaping of an efficient public health policy framework and the provision of integrated healthcare services within the educational and healthcare systems;_ * _to aid policy makers in creating and evaluating policies relating to childhood obesity and physical activity by performing big data analysis on the collected data. HHRs will provide an opportunity for performing causal analysis and building health risk predictive models. Any intervention, such as increasing physical activity or reducing obesity or general morbidity at the level of class, school, municipality, region or nation, could be evaluated;_ * _to provide illustrative visualizations of the collected data at the individual, school, regional and national level. Via the CH platform, every child, his/her parents, school physician and policy makers will be able to see the status and development of the child's physical fitness, physical activity and linked health risks at the individual and aggregated level.
The integration of predictive models into this platform would also provide policy makers with a useful tool to forecast trends, simulate the effects of interventions and measure the impact of interventions and relevant policies in the context of physical fitness, physical activity and obesity._ </td> </tr> <tr> <td> 1.2 Types and formats of data </td> </tr> <tr> <td> _Data will be stored in an MS SQL database, but will be exported to Excel._ </td> </tr> <tr> <td> 1.3 Re-use of existing data </td> </tr> <tr> <td> _No, the Lifestyle data will be collected within the CrowdHEALTH pilot environment, except for the physical fitness data, which will be a subset of the SLOfit database._ </td> </tr> </table> <table> <tr> <th> 1.4 Origin </th> </tr> <tr> <td> _The Lifestyle dataset will be implemented on a sample of 2,000 students within the CrowdHEALTH pilot environment in Slovenia (Škofja Loka municipality). It will include data about physical activity, sedentariness, SES, sleep, resting heart-rate, peers, academic achievement, height and body mass of parents, and parental physical activity. The data will be collected and entered by students and their parents. Besides these data, the Lifestyle dataset will also include physical fitness data extracted from the SLOfit dataset._ </td> </tr> <tr> <td> 1.5 Expected size </td> </tr> <tr> <td> _About 2,000 students with records for 2 years of measurement in the course of the CrowdHEALTH project (2018, 2019). Every record will have about 20 values (data columns)._ </td> </tr> <tr> <td> 1.6 Data utility </td> </tr> <tr> <td> _For decision makers, the Lifestyle data might provide a scientific backbone for most policies related to improving the physical activity of children and youth, as well as policies to combat childhood obesity. It might also provide suitable data for health risk predictive models based on physical fitness data, and for simulating the effects of physical activity interventions on physical fitness and the prevalence of obesity._ _For researchers, the Lifestyle data might provide answers about the patterns of physical and motor development and their interrelatedness, and about the relation between physical activity, sedentariness, SES, parental physical activity and physical fitness. The data analysis might show how much physical activity is necessary to retain an adequate level of physical fitness, and what ratios of physical activity and sedentariness still provide sufficient physical fitness. It might also provide suitable data for designing individual somatic and motor development forecasts._ _For physicians, the Lifestyle data combined with the SLOfit data on the individual level might provide a holistic view of the health of an individual schoolchild and of the effects of interventions on physical fitness as an important indicator of the child's health._ _For parents, the Lifestyle data on the individual level might provide alerts, which currently do not exist, if the Lifestyle data analyzed through the CrowdHEALTH platform suggest that recommendations for physical activity, sleep or sedentariness are not met._ </td> </tr> <tr> <td> **2 FAIR Data** </td> </tr> <tr> <td> **2.1 Making data findable, including provisions for metadata** </td> </tr> <tr> <td> 2.1.1 Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism? </td> </tr> <tr> <td> _No._ </td> </tr> </table> <table> <tr> <th> 2.1.2 What naming conventions do you follow? </th> </tr> <tr> <td> _None._ </td> </tr> <tr> <td> 2.1.3 Will search keywords be provided that optimize possibilities for re-use? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.4 Do you provide clear version numbers?
</td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.1.5 What metadata will be created? </td> </tr> <tr> <td> _None._ </td> </tr> <tr> <td> **2.2 Making data openly accessible** </td> </tr> <tr> <td> 2.2.1 Which data produced and/or used in the project will be made openly available as the default? </td> </tr> <tr> <td> _None._ _The Lifestyle dataset will not be openly available due to voluntary restrictions. The dataset will be an upgrade of the SLOfit database. Over 30 years, the SLOfit team has invested substantial resources into data collection and dataset management, so open access would not be a fair way for the team to exploit the possibilities of the dataset. The SLOfit team wants to keep an overview of how the dataset is used and to collaborate in any analyses._ </td> </tr> <tr> <td> 2.2.2 How will the data be made accessible? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.3 What methods or software tools are needed to access the data? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.4 Is documentation about the software needed to access the data included? </td> </tr> <tr> <td> _N/A_ </td> </tr> </table> <table> <tr> <th> 2.2.5 Is it possible to include the relevant software? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.6 Where will the data and associated metadata, documentation and code be deposited? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.7 Have you explored appropriate arrangements with the identified repository? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.8 If there are restrictions on use, how will access be provided? </td> </tr> <tr> <td> _Users will be able to access their own data via password. Aggregated data will be publicly accessible._ </td> </tr> <tr> <td> 2.2.9 Is there a need for a data access committee? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.10 Are there well described conditions for access?
</td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.2.11 How will the identity of the person accessing the data be ascertained? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> **2.3 Making data interoperable** </td> </tr> <tr> <td> 2.3.1 Are the data produced in the project interoperable? </td> </tr> <tr> <td> _Yes, at least within the consortium._ </td> </tr> <tr> <td> 2.3.2 What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? </td> </tr> <tr> <td> _No standard data vocabularies will be used._ </td> </tr> </table> <table> <tr> <th> 2.3.3 Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? </th> </tr> <tr> <td> _No, we will create custom codes for our data and share them with the partners in the consortium._ </td> </tr> <tr> <td> 2.3.4 In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **2.4 Increase data re-use (through clarifying licenses)** </td> </tr> <tr> <td> 2.4.1 How will the data be licensed to permit the widest re-use possible? </td> </tr> <tr> <td> _Copyright will be retained by the University of Ljubljana. The data will not be publicly available for re-use._ </td> </tr> <tr> <td> 2.4.2 When will the data be made available for re-use? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.3 Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? </td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> 2.4.4 How long is it intended that the data remains re-usable? </td> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 2.4.5 Are data quality assurance processes described?
</td> </tr> <tr> <td> _No._ </td> </tr> <tr> <td> **3 Allocation of resources** </td> </tr> <tr> <td> 3.1 What are the costs for making data FAIR in your project? </td> </tr> <tr> <td> _None._ </td> </tr> </table> <table> <tr> <th> 3.2 How will these be covered? </th> </tr> <tr> <td> _N/A_ </td> </tr> <tr> <td> 3.3 Who will be responsible for data management in your project? </td> </tr> <tr> <td> _Gregor Starc, Faculty of Sport, University of Ljubljana._ </td> </tr> <tr> <td> 3.4 Are the resources for long term preservation discussed? </td> </tr> <tr> <td> _There will be no additional costs for long term preservation if the Lifestyle data are integrated into the SLOfit database. The decision on data retention rests with the Faculty of Sport and the Ministry of Education. Currently, there is no time limit on the preservation of the dataset._ </td> </tr> <tr> <td> **4 Data security** </td> </tr> <tr> <td> 4.1 What provisions are in place for data security? </td> </tr> <tr> <td> _The Faculty of Sport, University of Ljubljana operates a Data Center with both server and storage infrastructure, accessible only to authorized personnel and equipped with a fire detection system, video surveillance and air cooling. Access to the data is restricted to authorized staff via Active Directory user logins and encrypted passwords. The Data Center also has disk-to-disk backup systems._ </td> </tr> <tr> <td> 4.2 Is the data safely stored in certified repositories for long term preservation and curation? </td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **5 Ethical aspects** </td> </tr> <tr> <td> 5.1 Are there any ethical or legal issues that can have an impact on data sharing? </td> </tr> <tr> <td> _Data are protected by personal data protection legislation. However, individual consents have been acquired that allow for sharing anonymised data._ </td> </tr> <tr> <td> 5.2 Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?
</td> </tr> <tr> <td> _Yes._ </td> </tr> <tr> <td> **6 Other issues** </td> </tr> <tr> <td> 6.1 Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? </td> </tr> <tr> <td> _No._ </td> </tr> </table> # 4\. Concluding remarks It is important to highlight that, in projects with complex and non-homogeneous datasets, using simpler questionnaires before the FAIR Data Management Plan template can be a best practice: it helps to prevent delays in starting the technical work of the project and, at the same time, helps the use case groups to reflect on some basic questions about their datasets for the Open Data Pilot in H2020 before completing the DMP template. This gradual effort took more time than foreseen in the first months of the project, but it is proving to be mutually beneficial for the work progress of both the technical and the use case groups. In November 2017, initial datasets (HULAFE and SLOfit) were already made available within the consortium for research and analysis. Additional datasets are expected to be available by the end of 2017, and a complete plan has been developed to make all datasets available during 2018.
# 1 Introduction ## 1.1 Purpose and scope of the document Heterogeneous data sources are integrated into the iASiS knowledge graph in order to offer an integrated view of data presented in different formats, e.g. clinical notes, images, publications, and web data. Diverse techniques are implemented for transforming these heterogeneous data into a unified schema, and for curating and integrating these data. Furthermore, techniques for traversing and exploring the iASiS knowledge graph are required in order to exploit its benefits. This document describes the data management techniques implemented in iASiS to transform data in diverse formats into the unified schema of the knowledge graph; these techniques are the result of task T5.4 of WP5. Moreover, this document presents the query processing techniques implemented by Ontario, a federated SPARQL query engine able to explore the iASiS knowledge graph and the knowledge graphs linked to it. This document is structured as follows: Section 2 presents RML, a mapping language for transforming raw data into RDF triples, as well as the SPARQL query language; the main characteristics of existing federated query approaches are also described. Section 3 presents the semantic enrichment process implemented in iASiS, as well as the main characteristics of the current version of the iASiS knowledge graph. Section 4 outlines the main features of Ontario and illustrates how it is integrated into the iASiS framework. Finally, the conclusions and outcomes reached during the execution of tasks T5.4 and T5.5 are outlined, as well as the next steps to be followed in WP5.
## 1.2 Relationship with other documents This document is related to the following deliverables of WP5: i) D5.1, where the iASiS data sources are defined; ii) D5.5, where the privacy-aware techniques are presented; iii) D5.6, where the final version of the privacy-aware strategies will be described; and iv) D5.7, D5.8, and D5.9, where the iASiS data management and analytics services will be reported. # 2 Background ## 2.1 RML, a Mapping Language for RDF Big Data is usually presented in different formats, e.g. images, unstructured text, or tabular data, which requires the definition of mapping rules in order to transform data in these diverse formats into a unified schema. RML 1 is one of the existing mapping languages; it expresses mappings to transform sources represented in different formats, e.g. CSV, JSON, or XML, into RDF. Each mapping rule in RML is represented as a **Triple Map** which consists of the following parts; Figure 1 presents an overview of the main concepts of the RML mapping language. * A **Logical Source** (rr:logicalSource) that refers to a data source from which data is collected; it is composed of the following components: 1. **Source** (rml:source) - the input source; it can be JSON, XML, or CSV; ○ **Iterator** (rml:iterator) - not required for tabular input sources, such as relational databases, but needed for hierarchical or structured data sources. The iterator (rml:iterator) determines the iteration pattern over the input source and specifies which data are extracted during each iteration; ○ **Reference Formulation** (rr:referenceFormulation) - since RML deals with different data serializations, each with its own way of referring to its elements, RML defines a reference formulation. This reference is specified based on the format of the input data file, e.g. for a JSON file the Reference Formulation would be "JSONPath", and for an XML file it would be "XPath".
* A **Subject Map** (rr:subjectMap) - defines the subject of the generated RDF triples. * Several **Predicate-Object Maps** (rr:predicateObjectMap), combining: 1. **Predicate Maps** (rr:predicate) expressing the predicate of the RDF triple; ○ **Object Maps** (rr:objectMap) expressing the object of the RDF triple; ○ A **Referencing Object Map**, indicating a reference to another **Triples Map** (rr:parentTriplesMap). **Figure 1: An Overview of the RML mapping schema. Based on Dimou et al** 2 **.** ## 2.2 RDF and SPARQL Query Language The Resource Description Framework (RDF) is a model developed by the W3C consortium for describing the metadata of resources. Its main goal is to present statements in a form that is equally well perceived by both humans and machines. RDF provides interoperability between applications that exchange machine-understandable information via the Web, and it supports tools that automatically process Web resources. RDF distinguishes three different types of values: * **IRI** (Internationalized Resource Identifier) – denotes an entity and acts as its identifier; the same IRI always represents the same resource; * **Literals** – denote concrete values, e.g. strings or numbers; * **Blank nodes** – represent a resource without a specific identifier. Finally, an RDF triple relates three elements: * **Subject** \- the described resource, represented by a URI reference or a blank node; ● **Predicate** \- the property of the resource, represented by a URI reference; * **Object** \- the property value, represented by a URI reference, a blank node or a literal. Thus, the subject can be represented by an IRI or a blank node, and the object can be an IRI, a blank node or a literal. Links between nodes are always specific, which is why a predicate can only be represented by an IRI. The basic building block in RDF is the triple "Subject-Predicate-Object". SPARQL 3 is a W3C standard query language used to define and manipulate data in the RDF format.
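The positional constraints just listed can be made concrete with a short sketch. This is illustrative code only, not part of the deliverable, and the prefix-based term classification is a deliberate simplification of real RDF term syntax:

```python
# Toy sketch of the RDF positional rules: subjects are IRIs or blank nodes,
# predicates are always IRIs, objects may be IRIs, blank nodes, or literals.

def term_kind(term):
    """Classify a term using a simplified surface syntax (an assumption)."""
    if term.startswith("_:"):
        return "blank"
    if term.startswith("http://") or term.startswith("https://"):
        return "iri"
    return "literal"

def is_valid_triple(s, p, o):
    """Check the subject/predicate/object constraints of an RDF triple."""
    return (term_kind(s) in ("iri", "blank")
            and term_kind(p) == "iri"
            and term_kind(o) in ("iri", "blank", "literal"))

ok = is_valid_triple("http://project-iasis.eu/Mutation/m1",
                     "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
                     "http://project-iasis.eu/vocab/Mutation")
bad = is_valid_triple("http://project-iasis.eu/Drug/d1",
                      "docetaxel",  # a literal cannot be a predicate
                      "http://example.org/x")
```

A full RDF stack would use a proper parser and data model; the point here is only the asymmetry between the three positions of a triple.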
SPARQL queries comprise triple patterns, including conjunction and disjunction operations. The main query forms supported by SPARQL are SELECT, CONSTRUCT, ASK, and DESCRIBE. The evaluation of a SPARQL query **Q** over an RDF graph **G** corresponds to the set of instantiations of the variables in the SELECT clause of **Q** against RDF triples in **G**. A SPARQL query can include different operators, e.g. JOIN, UNION, and OPTIONAL. Moreover, FILTER can be used to restrict the output to those instantiations of the variables of the SELECT clause of **Q** that meet a certain condition. The basic building block in the WHERE clause of a SPARQL query is the triple pattern, i.e. a triple with variables. A Basic Graph Pattern is the conjunction of several triple patterns, where a conjunction corresponds to the JOIN operator. Finally, Basic Graph Patterns can be connected with the JOIN, UNION, or OPTIONAL operators. The following SPARQL query expresses the “_Mutations of the type Confirmed somatic variant located in transcripts which are translated as proteins that interact with the drug Docetaxel_“.

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX iasis: <http://project-iasis.eu/vocab/>

SELECT DISTINCT ?mutation WHERE {
  ?mutation rdf:type iasis:Mutation .
  ?mutation iasis:mutation_chromosome ?chromosome .
  ?mutation iasis:mutation_start ?start .
  ?mutation iasis:mutation_NucleotideSeq ?nucleotideSeq .
  ?mutation iasis:mutation_isClassifiedAs_mutationType ?type .
  ?mutation iasis:mutation_somatic_status 'Confirmed somatic variant' .
  ?mutation iasis:mutation_cds ?cds .
  ?mutation iasis:mutation_aa ?aa .
  ?mutation iasis:mutation_isLocatedIn_transcript ?transcript .
  ?transcript iasis:translates_as ?protein .
  ?drug iasis:drug_interactsWith_protein ?protein .
  ?drug iasis:label 'docetaxel'
}
```

This query is composed of 12 triple patterns; each triple, e.g.
“?mutation **rdf:type** **iasis:Mutation**” corresponds to a triple pattern, where “?mutation” is a variable, “**rdf:type**” a predicate, and “**iasis:Mutation**” a class. Triple patterns are connected using the “.” operator, which corresponds to a JOIN. The 12 triple patterns in the WHERE clause of the query comprise a Basic Graph Pattern. ### 2.3 Federated Query Engines and Existing Approaches A federation of SPARQL endpoints is a set of RDF datasets that can be accessed via SPARQL endpoints. A SPARQL endpoint is a Web service that provides a Web interface to query RDF data following the SPARQL protocol 4 . RDF datasets comprise sets of RDF triples; the predicates of these triples can come from more than one Linked Open Vocabulary 5 , e.g. FOAF 6 or the DBpedia ontology. Additionally, proprietary vocabularies can be used to describe the RDF resources of these triples, and controlled vocabularies like VoID 7 can be used to describe the properties of the RDF data accessible through a given SPARQL endpoint. Queries against federations of SPARQL endpoints are posed through federated SPARQL query engines. A generic architecture of a federated SPARQL query engine is based on the mediator and wrapper architecture 8 . Wrappers translate SPARQL subqueries into calls to the SPARQL endpoints and convert endpoint answers into the query engine's internal structures. The mediator rewrites original queries into subqueries that can be executed by the data sources of the federation. Moreover, the mediator collects the answers obtained by evaluating the subqueries over the selected sources, merges the results, and produces the answer of a federated query; it is mainly composed of three components: * **Source Selection and Query Decomposition** \- decomposes queries into subqueries and selects the endpoints that are capable of executing each subquery. Simple subqueries comprise a list of triple patterns that can be evaluated against at least one endpoint.
* **Query Optimizer** \- identifies execution plans that combine subqueries and the physical operators implemented by the query engine. Statistics about the distribution of values may allow for the identification of combinations of subqueries and plans that reduce execution time. * **Query Engine** \- implements different physical operators to combine tuples from different endpoints. Physical operators implement logical SPARQL operators like JOIN, UNION, or OPTIONAL. FedX 9 , ANAPSID 10 , and MULDER 11 are exemplary federated SPARQL query engines able to execute queries over a federation of SPARQL endpoints. FedX implements source selection techniques that contact the SPARQL endpoints on the fly to decide which subqueries of the original query can be executed over the endpoints of the federation. Thus, FedX relies on zero knowledge about the content of the SPARQL endpoints to perform the tasks of Source Selection and Decomposition. ANAPSID exploits information about the predicates of the RDF datasets accessible via the SPARQL endpoints of the federation to select relevant sources, decompose the original queries, and find efficient execution plans. Moreover, ANAPSID implements physical operators able to adjust the scheduling of query executions to the current conditions of the SPARQL endpoints, i.e. if one of the SPARQL endpoints is delayed or blocked, ANAPSID is able to adjust the query plans in order to keep producing results in an incremental fashion. Finally, MULDER is a federated SPARQL engine that relies on the description of the properties and links of the classes in the RDF graphs accessible from SPARQL endpoints to decompose the original queries into the minimal number of subqueries required to evaluate the original query over the relevant SPARQL endpoints.
MULDER utilises RDF Molecule Templates (RDF-MTs) to describe the classes in an RDF graph and its links; it also exploits the physical operators implemented in ANAPSID to provide efficient executions of SPARQL queries, i.e. MULDER provides Source Selection and Query Decomposition and Query Optimizer components which effectively exploit the ANAPSID query engine. Ontario is the query engine implemented in the iASiS framework for accessing data from the iASiS knowledge graph. Similar to MULDER, Ontario relies on RDF Molecule Templates (RDF-MTs) for describing the RDF classes included in the iASiS knowledge graph. Additionally, Ontario maintains in the RDF-MTs metadata describing the data privacy and access control regulations imposed by the provider of the data used to populate the RDF classes of the knowledge graph. Moreover, Ontario relies on adaptive physical operators to be able to adjust query execution plans to the conditions of the SPARQL endpoints that make the iASiS knowledge graph accessible. More importantly, contrary to existing federated SPARQL query engines, Ontario is able to execute SPARQL queries over data sources that are not integrated into the knowledge graph and are stored in raw formats, e.g. CSV or JSON. This feature of Ontario allows for executing queries over both RDF graphs and data collections that are not physically integrated into the knowledge graph, thus providing a virtual integration of data sources. # 3 The Semantic Enrichment for Creating the iASiS Knowledge Graph In this section, the semantic enrichment component is described; it is able to transform and integrate heterogeneous data sources into the iASiS knowledge graph. First, the iASiS data-driven pipeline is explained. Then, the pipeline implemented by the semantic enrichment component is defined, as well as the RML mappings utilised to transform raw data into RDF. Finally, the main characteristics of version 1.0 of the iASiS knowledge graph are reported.
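Before turning to the pipeline, the interplay of the mediator components described in Section 2.3 (source selection, wrapper execution, mediator-side join) can be sketched with mock endpoints. Everything below is an invented, greatly simplified illustration, not Ontario's, ANAPSID's, or MULDER's actual code; the endpoint names, predicate metadata, and data are placeholders:

```python
# Each mock "endpoint" advertises the predicates it can answer (a crude stand-in
# for RDF-MT style source descriptions); the mediator routes every triple
# pattern to a matching endpoint and joins partial answers on shared variables.

ENDPOINTS = {
    "genomics": {
        "predicates": {"iasis:mutation_isLocatedIn_transcript",
                       "iasis:translates_as"},
        "triples": [("iasis:m1", "iasis:mutation_isLocatedIn_transcript", "iasis:t1"),
                    ("iasis:t1", "iasis:translates_as", "iasis:p1")],
    },
    "drugs": {
        "predicates": {"iasis:drug_interactsWith_protein"},
        "triples": [("iasis:d1", "iasis:drug_interactsWith_protein", "iasis:p1")],
    },
}

def select_source(pattern):
    """Source selection: route a pattern to the endpoint advertising its predicate."""
    for name, endpoint in ENDPOINTS.items():
        if pattern[1] in endpoint["predicates"]:
            return name
    raise ValueError("no endpoint answers " + pattern[1])

def execute(endpoint, pattern):
    """Wrapper: evaluate one triple pattern (variable subject/object) at one endpoint."""
    subject_var, _, object_var = pattern
    return [{subject_var: s, object_var: o}
            for s, p, o in ENDPOINTS[endpoint]["triples"]
            if p == pattern[1]]

def join(left, right):
    """Mediator-side JOIN: keep binding pairs that agree on shared variables."""
    return [{**l, **r} for l in left for r in right
            if all(l[v] == r[v] for v in l.keys() & r.keys())]

query = [("?mutation", "iasis:mutation_isLocatedIn_transcript", "?transcript"),
         ("?transcript", "iasis:translates_as", "?protein"),
         ("?drug", "iasis:drug_interactsWith_protein", "?protein")]

result = [{}]
for tp in query:
    result = join(result, execute(select_source(tp), tp))
```

Real engines contact remote SPARQL endpoints, use far richer source descriptions, and implement adaptive physical operators; the sketch only shows predicate-based routing followed by a mediator-side join.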
## 3.1 The iASiS data-driven pipeline The iASiS data-driven pipeline transforms Big clinical and biomedical data into actionable insights for the support of precision medicine and decision making; the main steps of the pipeline are presented in Figure 2. The pipeline receives Big data sets as input, e.g. electronic clinical notes, genomic data, medical images, and open data, and produces as output the iASiS knowledge graph. Moreover, knowledge discovery tasks are performed in order to detect potentially novel patterns and associations between entities in the iASiS knowledge graph, e.g. signatures of long-surviving lung cancer patients. First, the aim of the **knowledge extraction methods** is to create ontology-annotated data sets from data represented in different formats, e.g. clinical notes, medical images, and hospital-derived genomic data; biomedical ontologies are used for annotation. Clinical data is collected from the clinical partners and anonymisation techniques are applied in order to remove confidential attributes, e.g. national identifier, name, address, and profession. The **EHR text analysis** and **Image analysis** components are executed in order to extract relevant information from the clinical notes and medical images, e.g. treatments, diagnostics, and results of clinical tests. Biomedical vocabularies, e.g. UMLS or SNOMED, are used to represent the knowledge extracted from the clinical notes. Additional knowledge extraction tasks, i.e. genomic and open data analysis, are performed over open data collections; these techniques also annotate data using terms from biomedical ontologies, e.g. UMLS and the Human Phenotype Ontology (HPO).
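As a toy illustration of the annotation idea only: the actual EHR text analysis component relies on NLP tooling and vocabularies such as UMLS and SNOMED, whereas the lexicon and concept identifiers below are invented placeholders, not real UMLS CUIs:

```python
# Minimal dictionary-based annotator: look up lexicon terms in a clinical note
# and return (surface form, concept id, character offset) tuples.

LEXICON = {  # hypothetical terms mapped to placeholder concept ids
    "lung cancer": "UMLS:C_DEMO_1",
    "docetaxel": "UMLS:C_DEMO_2",
    "cough": "UMLS:C_DEMO_3",
}

def annotate(note):
    """Return annotations for the first occurrence of each lexicon term."""
    text = note.lower()
    annotations = []
    for term, concept_id in LEXICON.items():
        start = text.find(term)
        if start != -1:
            annotations.append((term, concept_id, start))
    return sorted(annotations, key=lambda a: a[2])  # order by offset

example = annotate("Patient with lung cancer, persistent cough; started docetaxel.")
```

A production annotator would handle morphological variants, overlapping mentions and multiple occurrences per term; this sketch records only the first occurrence of each lexicon entry.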
**Figure 2: The iASiS Pipeline for Knowledge Management and Discovery**

Once the annotated datasets are produced, knowledge graph creation tasks are performed in order to semantically describe and integrate the annotated data into the **iASiS knowledge graph**; the iASiS unified schema is utilised for describing the data. Annotations from biomedical ontologies, e.g. UMLS, SNOMED, and HPO, are exploited to curate and integrate the entities in the annotated datasets; also, **entity linking techniques** are used to connect these entities to equivalent entities in other knowledge graphs, e.g. DBpedia and Bio2RDF; DBpedia Spotlight 11 solves the tasks of entity extraction, disambiguation, and linking. Moreover, RML mapping rules are used by the Semantic Enrichment component in order to create the RDF triples that populate the iASiS knowledge graph. Once the iASiS knowledge graph is created, it can be explored and consulted using Ontario; moreover, Ontario allows for performing federated queries over the iASiS knowledge graph as well as the knowledge graphs linked to it, e.g. DBpedia and Bio2RDF. Results of executing a federated query can be used as input to the Data Analytics or Knowledge Discovery tasks, thus enabling the discovery of patterns among entities in the iASiS knowledge graph. Data privacy and access control policies are verified and enforced by all the components in the pipeline.

## 3.2 The Knowledge Graph Creation Process

Creating a knowledge graph from heterogeneous data sources requires describing the entities in the data sources using RDF vocabularies, as well as performing curation and integration tasks in order to reduce data quality issues, e.g. missing values or duplicates. Figure 3 depicts the pipeline followed during knowledge graph creation; it is composed of three main components:

* **Semantic Enrichment** \- transforms annotated data into RDF; it relies on rules in a mapping language, e.g.
RML, to generate the RDF triples that correspond to the semantic description of the input data. The iASiS unified schema, and properties from existing RDF vocabularies like RDFS and OWL, are utilised as predicates and classes. Annotations in the input data are also represented as RDF triples. The RDF representation of these annotations is linked to the corresponding entities in the knowledge graph, e.g. the resource of the UMLS annotation C00031149 is associated with the resource of the PubMed publication 28381756. Moreover, equivalences and semantic relations between annotations are represented in the knowledge graph. These relationships allow for detecting entities annotated with equivalent annotations that may correspond to the same real-world entities, i.e. they are duplicates; thus, equivalent annotations represent the input to the knowledge integration tasks.

* **Knowledge Creation & Integration** \- receives an initial version of the iASiS knowledge graph that may include duplicates, and outputs a new version of the knowledge graph from which duplicates have been removed. In order to detect whether two entities correspond to the same real-world entity, i.e. they are duplicates, similarity measures are utilised, e.g. GADES 12 or Jaccard 13; all the entities in an RDF class of the knowledge graph are compared pairwise. Then, a 1-1 perfect weighted matching algorithm is performed in order to identify duplicates in the class; thus, if two entities are matched, they are considered equivalent entities and merged in the knowledge graph. Fusion policies 14 are followed to decide how equivalent entities are merged in a knowledge graph; the fusion policies include:

○ **Union** \- creates a new entity with the union of the properties of the matched entities.

○ **Semantics-based Union** \- creates a new entity with the union of the properties of the matched entities.
Only the _most general properties_ are kept in case of properties related by the _subproperty_ relationship; furthermore, if two properties are equivalent, only one of them is kept in the resulting entity.

○ **Authoritative Merge** \- creates a new entity that keeps the properties of the entity whose data is provided by an authoritative source.

* **Interlinking** \- receives the iASiS knowledge graph and a list of existing knowledge graphs, e.g. DBpedia or Bio2RDF, and outputs a new version of the iASiS knowledge graph where entities are linked to equivalent entities in the input knowledge graphs. Entity linking tools like DBpedia Spotlight are used for linking; additionally, link traversal techniques are performed to further identify links with other knowledge graphs, e.g. Bio2RDF or DrugBank.

## 3.3 RML Mapping Rules to Create the iASiS Knowledge Graph

In order to transform data from heterogeneous data sources into version 1.0 of the iASiS knowledge graph, 66 RML mapping rules have been defined. These mapping rules are defined based on the classes of the iASiS unified schema, available as an instance of the ontology development tool VoCol 15; the mappings are available in the GitHub repository 16 of the LUH team. The RML mappings are executed by the Semantic Enrichment component defined above. Given the number of rules and the size of the data sources, optimisation techniques have been implemented with the aim of scaling up. Scalability has been evaluated empirically, and the Semantic Enrichment component is able to generate knowledge graphs in the order of Terabytes. Furthermore, the Semantic Enrichment component is able to detect data quality issues in the input data collections; it has been empowered with data curation capabilities that allow for detecting missing values, as well as malformed names and identifiers.

**Figure 3: The Pipeline for Knowledge Graph Creation**

The following is one exemplar RML mapping rule for defining the properties of a Drug.
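A minimal sketch of the shape of such a rule is given below; the file name (`drugs.csv`), column names, and URI templates are hypothetical, but the structure (logical source, subject map, predicate-object maps) follows the RML vocabulary:

```turtle
@prefix rml:   <http://semweb.mmlab.be/ns/rml#> .
@prefix rr:    <http://www.w3.org/ns/r2rml#> .
@prefix ql:    <http://semweb.mmlab.be/ns/ql#> .
@prefix iasis: <http://project-iasis.eu/vocab/> .

<#DrugMapping>
  rml:logicalSource [                 # data source and its format (CSV)
    rml:source "drugs.csv" ;
    rml:referenceFormulation ql:CSV
  ] ;
  rr:subjectMap [                     # Class Subject: one Drug URI per row
    rr:template "http://project-iasis.eu/Drug/{drug_id}" ;
    rr:class iasis:Drug
  ] ;
  rr:predicateObjectMap [             # Class Properties: a drug label ...
    rr:predicate iasis:label ;
    rr:objectMap [ rml:reference "name" ]
  ] ;
  rr:predicateObjectMap [             # ... and a link to an interacting protein
    rr:predicate iasis:drug_interactsWith_protein ;
    rr:objectMap [ rr:template "http://project-iasis.eu/Protein/{protein_id}" ]
  ] .
```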
This rule is used to transform data in CSV format into RDF data. The RML mapping rule describes the data source from which the data is extracted and the format of this data, i.e. CSV. Additionally, it specifies the URI template of the class subject; finally, two properties of the class Drug are defined. The Semantic Enrichment component receives the mapping and iterates over each row of the file, transforming the values of the columns of the raw file according to the mappings in the rules annotated with the Class Subject and Class Properties labels. Once the end of the file is reached, an RDF file is produced as output; this RDF file corresponds to the input of the knowledge integration component.

## 3.4 Example of Knowledge Graph Creation

In this section, the pipeline for creating the iASiS knowledge graph is illustrated with an example. Suppose input data describing a drug is received in a tabular format, e.g. as a CSV file; then, an RDF graph describing the drugs in the file is created. These RDF graphs are called **simple RDF molecules**, i.e. groups of RDF triples that share the same subject. RML mapping rules are defined and executed to transform the raw data into the RDF triples that comprise the resulting RDF molecules. Furthermore, these mapping rules indicate the format of the URIs of the resources that appear as subjects or objects of the RDF molecules created during their execution. In this case, three URIs are created, i.e. for the drug, publication, and variation. The same process is repeated for all the RML mappings that define the RDF classes in the iASiS knowledge graph in terms of the available data sources. In case several simple RDF molecules are defined for the same real-world entity, e.g. the drug Docetaxel, the process of _knowledge integration_ is executed. This process determines the RDF molecules that represent equivalent entities of a class, and merges them according to the available fusion policies.
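The merging step can be sketched in a few lines of Python; this is a minimal illustration of the Union fusion policy, with simple RDF molecules modelled as dictionaries and all property names and values hypothetical:

```python
# Sketch of the "Union" fusion policy: two simple RDF molecules (modelled
# here as dicts mapping a property to a set of values) describing the same
# real-world entity are merged into one complex molecule.
def union_fusion(molecule_a, molecule_b):
    merged = {}
    for molecule in (molecule_a, molecule_b):
        for prop, values in molecule.items():
            merged.setdefault(prop, set()).update(values)
    return merged

# Two hypothetical simple molecules for the drug Docetaxel
a = {"iasis:label": {"docetaxel"}, "iasis:externalLink": {"pubmed:28381756"}}
b = {"iasis:label": {"docetaxel"}, "iasis:drug_interactsWith_protein": {"ABCB1"}}
docetaxel = union_fusion(a, b)  # union of the properties of both molecules
```

The other policies differ only in how conflicting properties are resolved: Semantics-based Union would keep only the most general of two subproperty-related properties, and Authoritative Merge would prefer the values coming from the authoritative source.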
These simple RDF molecules are then merged or integrated into a complex RDF molecule that represents all the properties of the real-world entity that are represented in the different simple RDF molecules. Finally, entity linking techniques allow for discovering links between entities in the iASiS knowledge graph and equivalent entities in existing knowledge graphs, e.g. DBpedia. In this case, the resource representing the drug Docetaxel in the iASiS knowledge graph is linked to the resource that represents the same drug in DBpedia; the owl:sameAs property is utilised to represent this type of link. Linking the iASiS knowledge graph with other knowledge graphs not only allows for exploring properties that are not represented in the original knowledge graph (e.g. dbo:atcPrefix), but also enables the identification of data quality issues like missing values or duplicates.

# 4 Ontario: A Federated Query Engine

In this section, the characteristics of Ontario are presented. Ontario is a federated SPARQL query engine able to evaluate SPARQL queries over the iASiS knowledge graph and the knowledge graphs that are linked to it, e.g. DBpedia and Bio2RDF. Further, examples of federated queries are utilised to illustrate the main features implemented in Ontario in order to scale up to federations of large knowledge graphs. Finally, RDF Molecule Templates (RDF-MTs), i.e. the source description formalism of Ontario, are utilised as the basis for an analysis of the main properties of the iASiS knowledge graph.

## 4.1 SPARQL Query Processing in Ontario

Ontario is a federated query engine able to execute queries against a federation of data sources accessible via a Web access interface, e.g. SPARQL endpoints; Figure 4 depicts the main components of the Ontario architecture. First, Ontario maintains the description of the data sources in the federation in terms of RDF Molecule Templates (RDF-MTs).
Thus, a data source is described in terms of the RDF classes of the data in the data source and the properties of these classes; additionally, data privacy and access policies imposed by data providers are represented in RDF-MTs. An RDF Molecule Template (RDF-MT) 17 is characterised by the following properties:

* **rdfMT:hasClass** represents a class **C** of the entities described by the RDF-MT, e.g. iasis:Patient.
* **rdfMT:hasProperty** represents the properties of the class of the entities described by the RDF-MT, e.g. **iasis:hasBiopsy** and **iasis:hasTumorStage**.
* **rdfMT:hasPolicy** allows for representing the operations that can be performed over a property whose data is provided by a given data source. The object of **rdfMT:hasPolicy** is an entity that is an instance of the class **rdfMT:AuthorizationEntity**, which relates a property to the access rights authorised by a data provider **s** to an access consumer **c** during a given time period.
* **rdfMT:hasCardinality** represents the cardinality of the entities in the class **C**.
* **rdfMT:endpoint** corresponds to the Web access interface used to access entities of the class **C**.
* **rdfMT:linkedTo** represents the links between the class **C** and other classes in the same knowledge graph or in other knowledge graphs.

**Figure 4: The Ontario Architecture**

Given a SPARQL query Q, the **Source Selection and Query Decomposer** identifies a **query decomposition** of Q; it is composed of subqueries (SQi) of Q and the data sources where these subqueries can be executed, e.g. in Figure 4 subquery SQ2 is executed over @KG1, @KG2, and @KG3. Metadata represented in the RDF-MTs is exploited during source selection and query decomposition in order to identify the minimal number of data sources that have to be contacted to produce a complete answer to the query, i.e. a solution to the problem of finding the relevant sources for a federated query.
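RDF-MT-based source selection can be illustrated with a short Python sketch; the RDF-MT records, predicate sets, and endpoint URLs below are hypothetical simplifications of the metadata Ontario actually maintains:

```python
# Simplified source selection over RDF-MTs: a star-shaped subquery is routed
# to every endpoint whose molecule template covers all of its predicates.
RDF_MTS = [
    {"class": "iasis:Mutation",
     "properties": {"rdf:type", "iasis:mutation_somatic_status",
                    "iasis:mutation_isLocatedIn_transcript"},
     "endpoint": "http://iasis.example/sparql"},    # hypothetical endpoint
    {"class": "drugbank:Drug",
     "properties": {"drugbank:transporter", "drugbank:gene-name"},
     "endpoint": "http://bio2rdf.example/sparql"},  # hypothetical endpoint
]

def select_sources(subquery_predicates):
    """Return the endpoints of all RDF-MTs covering the subquery's predicates."""
    return [mt["endpoint"] for mt in RDF_MTS
            if subquery_predicates <= mt["properties"]]

relevant = select_sources({"iasis:mutation_somatic_status",
                           "iasis:mutation_isLocatedIn_transcript"})
```

Only the first endpoint is selected for this star-shaped subquery, which is the sense in which source selection minimises the number of sources contacted.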
Next, the **Query Optimizer** receives the query decomposition and finds an **efficient query plan**, i.e. a plan of Q over the relevant data sources that produces all the answers in minimal time. Different **physical operators** 18 are implemented by the Query Engine to collect the answers of the subqueries from the relevant data sources, e.g. SMJOIN or gJOIN, as well as to produce the query answer. In Figure 4, a multiway-join operator connects the subqueries over @KG1, @KG2, and @KG3. This operator is able to adjust query execution schedules to the conditions of the sources, e.g. data transfer delays and unexpected source workload; moreover, it allows for producing query answers incrementally; thus, users do not have to wait until all the subqueries are completely executed to receive the first answer, and can collect answers continuously. The techniques implemented in Ontario have been empirically evaluated and compared with state-of-the-art approaches. The observed experimental results suggest that Ontario is not only able to scale up to large data sources, but also outperforms the state of the art on queries over a large number of RDF data sources. The main features of Ontario are illustrated in Section 4.2 with an example. Furthermore, Section 4.3 presents an analysis of the characteristics of the iASiS knowledge graph based on the metadata encoded in the RDF-MTs and links. The results suggest that the current version of the iASiS knowledge graph represents a considerable number of connections between biomedical concepts, e.g. drugs, publications, variants, tumors, and annotations. Nevertheless, these connections need to be extended through the analysis and discovery of new links between the concepts represented in the iASiS knowledge graph, as part of future work in WP5.
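The incremental behaviour of such non-blocking operators can be sketched with a symmetric hash join; this is a textbook simplification, not Ontario's actual SMJOIN/gJOIN implementation, and the bindings below are hypothetical:

```python
from itertools import zip_longest

def symmetric_hash_join(left, right, key):
    """Yield joined bindings as soon as a match exists, without waiting for
    either input to be fully consumed (non-blocking, incremental answers)."""
    left_ht, right_ht = {}, {}
    for l_row, r_row in zip_longest(left, right):  # interleave arrivals
        if l_row is not None:
            left_ht.setdefault(l_row[key], []).append(l_row)
            for match in right_ht.get(l_row[key], []):
                yield {**l_row, **match}
        if r_row is not None:
            right_ht.setdefault(r_row[key], []).append(r_row)
            for match in left_ht.get(r_row[key], []):
                yield {**match, **r_row}

# Hypothetical bindings from two subqueries, joined on ?proteinName
sq1 = [{"proteinName": "ABCB1", "mutation": "m1"},
       {"proteinName": "ABCG2", "mutation": "m2"}]
sq2 = [{"proteinName": "ABCB1", "transporter": "t1"}]
answers = list(symmetric_hash_join(sq1, sq2, "proteinName"))
```

Because each arriving tuple is inserted into its own hash table and immediately probed against the other side, answers are emitted as soon as both matching tuples have arrived, independently of how slow either source is.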
## 4.2 Example of Federated Query Processing

Consider the following SPARQL query **Q** that produces “_Mutations of the type Confirmed somatic variant located in transcripts which are translated as proteins that are transporters of the drug Docetaxel_”.

**Q:**

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX iasis: <http://project-iasis.eu/vocab/>
PREFIX drugbank: <http://bio2rdf.org/drugbank_vocabulary:>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT DISTINCT ?mutation WHERE {
?mutation **rdf:type** **iasis:Mutation** .
?mutation **iasis:mutation_somatic_status 'Confirmed somatic variant'** .
?mutation **iasis:mutation_isLocatedIn_transcript** ?transcript .
?transcript **iasis:translates_as** ?protein .
?drug **iasis:drug_interactsWith_protein** ?protein .
?protein **iasis:label** ?proteinName .
?drug **iasis:label** **'docetaxel'** .
?drug **owl:sameAs** ?drug1 .
?drug1 **drugbank:transporter** ?transporter .
?transporter **drugbank:gene-name** ?proteinName .
}

To execute this query, both the iASiS and the Bio2RDF knowledge graphs need to be consulted. Ontario maintains metadata about these two knowledge graphs, represented in RDF Molecule Templates, and is able to select them as relevant sources for the query. Then, once the RDF Molecule Templates are selected, Ontario decomposes query **Q** into subqueries **SQ1** and **SQ2**, and executes them over the iASiS and Bio2RDF knowledge graphs, respectively.

**SQ1:**

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX iasis: <http://project-iasis.eu/vocab/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT DISTINCT ?mutation ?proteinName ?drug1 WHERE {
?mutation **rdf:type** **iasis:Mutation** .
?mutation **iasis:mutation_somatic_status 'Confirmed somatic variant'** .
?mutation **iasis:mutation_isLocatedIn_transcript** ?transcript .
?transcript **iasis:translates_as** ?protein .
?drug **iasis:drug_interactsWith_protein** ?protein .
?protein **iasis:label** ?proteinName .
?drug **iasis:label** **'docetaxel'** .
?drug **owl:sameAs** ?drug1 .
}

**SQ2:**

PREFIX drugbank: <http://bio2rdf.org/drugbank_vocabulary:>

SELECT DISTINCT ?proteinName ?drug1 WHERE {
?drug1 **drugbank:transporter** ?transporter .
?transporter **drugbank:gene-name** ?proteinName .
}

As a result, **24 mutations** of the protein **ABCB1** and **11 mutations** of the protein **ABCG2** are identified. It is important to highlight that without the integration of COSMIC data into the iASiS knowledge graph and the linking of the corresponding entities with Bio2RDF, this federated query could not be executed. Thus, these results illustrate not only the features of Ontario as a federated query engine, but also the benefits of semantically describing and integrating heterogeneous data sources into a knowledge graph.

## 4.3 Main Characteristics of the iASiS Knowledge Graph

The current version of the iASiS knowledge graph has 236,512,819 RDF triples, 26 RDF classes, and, on average, 6.98 properties per entity. Figure 5 shows the distribution of the number of entities per RDF-MT in the iASiS knowledge graph and in the RDF-MTs from DBpedia (with prefix _dbo:_) and Bio2RDF (with prefix _bio2rdf:_) that are connected to its RDF-MTs. As observed, some RDF-MTs are populated with a relatively large number of entities, e.g. the annotations of the publications and the mutations; on average, there are 86,934 entities per class.

Figure 5: Number of Entities per RDF-MT in the iASiS Knowledge Graph, and related RDF-MTs in DBpedia and Bio2RDF

The RDF-MTs of the iASiS knowledge graph and the connected knowledge graphs are used to describe the main characteristics of the integrated data and their connections. To conduct this analysis, an undirected graph with the RDF-MTs of the iASiS, DBpedia, and Bio2RDF knowledge graphs was built. Figure 6 shows this graph; the RDF-MTs correspond to 35 nodes in the graph, while 58 edges represent links among RDF-MTs.
It can be observed that all the RDF-MTs are connected to at least one other RDF-MT, i.e. there are no isolated classes.

**Figure 6: RDF-MTs in the iASiS, DBpedia, and Bio2RDF Knowledge Graphs**

Using network analysis, several graph measures were computed; Table 1 reports the results of these measures. The clustering coefficient measures the tendency of the neighbours of a node in a graph to be connected to each other. If the neighbourhood is fully connected, the clustering coefficient is 1.0, while a value close to 0.0 means that there are no connections in the neighbourhood. Transitivity measures whether RDF-MTs are transitively connected; values close to 1.0 indicate that almost all the RDF-MTs are related, while low values indicate that many RDF-MTs are not related. The clustering coefficient and transitivity are both relatively low, i.e. 0.224 and 0.23, respectively. This suggests that many more connections between RDF classes can be included in future versions of the iASiS knowledge graph.

**Table 1: Graph Analysis of the RDF-MTs of the iASiS Knowledge Graph**

<table> <tr> <th> RDF-MT graph property </th> <th> Value </th> </tr> <tr> <td> Number of RDF-MTs (nodes) </td> <td> 35 </td> </tr> <tr> <td> Number of connections (edges) </td> <td> 58 </td> </tr> <tr> <td> Clustering coefficient </td> <td> 0.224 </td> </tr> <tr> <td> Transitivity </td> <td> 0.23 </td> </tr> <tr> <td> Avg. number of neighbours </td> <td> 2.629 </td> </tr> </table>

# 5 Conclusions and Future Work

The outcomes of performing tasks T5.4 and T5.5 of WP5 have been presented in this document. The reported results allow for an understanding of the main data management techniques implemented in the iASiS framework. These techniques enable the transformation and curation of heterogeneous data sources into RDF, as well as their integration into the iASiS knowledge graph. Moreover, the Ontario federated query engine has been described in terms of its components.
The depicted examples illustrate the functionalities of the developed techniques, and the reported statistics of the iASiS knowledge graph facilitate the understanding of the amount of knowledge represented in version 1.0 of the iASiS knowledge graph, as well as the opportunities it offers for knowledge exploration and discovery. As the number of sources to be integrated in the iASiS knowledge graph increases, techniques for physically distributing the iASiS knowledge graph will be required. Furthermore, caching techniques that keep the results of the most frequent federated queries will facilitate the execution of queries at large scale. All these tasks are part of future work in WP5.
D4.1 Data Management Plan

# DATA SUMMARY

DELTA-FLU participates by default in the H2020 Open Research Data Pilot, which aims to improve and maximize access to and re-use of research data generated by H2020-funded projects. This initial Data Management Plan (DMP) details the management of the research data generated and collected during the project, in compliance with Article 29.3 of the H2020 Grant Agreement. The DMP follows the principles of FAIR data management as set out in the Guidelines on FAIR Data Management in H2020. This DMP includes information on the handling of research data during and after the project: what data will be collected, processed and/or generated, which methodology and standards will be applied, whether data will be shared/made open access, and how data will be curated and preserved. The initial DMP will be updated over the course of DELTA-FLU whenever significant changes arise, such as the generation of new data or changes in consortium composition. DELTA-FLU will be advised by the data access committees available at partner institutes and/or the ethical body overseeing consent-giving procedures regarding the appropriate management and use of the research data.

## PURPOSE OF DATA COLLECTION/GENERATION AND RELATION TO THE OBJECTIVES OF THE PROJECT

The primary aim of data collection and generation within DELTA-FLU is to advance knowledge on the key viral, host-related, and environmental factors that determine the dynamics of avian influenza (AI) in poultry and other host species, with the ultimate goal of improving prevention and control strategies against this disease. By establishing interdisciplinary research focusing on key questions of AI, DELTA-FLU will collect and generate data to determine 1) the potential for some highly pathogenic avian influenza viruses (HPAIV, e.g.
H5N8 clade 2.3.4.4) to be maintained in wild bird populations and spread over long distances, 2) key viral, host, and environmental factors for the incursion of HPAIV from wild birds into poultry holdings, 3) the roles of viral, host, and environmental factors in the transition of low pathogenic avian influenza virus to HPAIV in poultry, 4) the effect of flock immunity against AI on early detection and viral genetic drift, and 5) the viral genetic factors that allow reassortants of avian and mammalian influenza viruses to transmit efficiently among pigs. Data will be used to inform experimental design, analytic tools and methods, as well as modelling and simulation tools, in order to achieve these primary aims.

## TYPES AND FORMATS OF DATA TO BE GENERATED/COLLECTED

DELTA-FLU will collect and generate a range of data based on which targeted prevention and control strategies can be designed, as summarized in the table below. These data will largely originate from public databases, databases available from partners' own projects, and research activities in work packages 1-3. Standards for collecting, curating and preserving the data are integrated in the research activities of DELTA-FLU to ensure data is kept securely and protected from inappropriate use or disclosure. Where appropriate, DELTA-FLU will leverage existing standards, and continue to develop them further for data representation, for example for sequence-based experiments. The research data generated by DELTA-FLU will be curated in such a way that the formats will be compatible with publicly available data repositories.
<table> <tr> <th> Type of data </th> <th> Origin of data </th> <th> Format </th> <th> Expected size of data </th> </tr> <tr> <td> Virus sequences </td> <td> Public databases (GenBank: https://www.ncbi.nlm.nih.gov/genbank/ or GISAID: http://platform.gisaid.org) and work packages 1 to 3 </td> <td> FASTA </td> <td> Depends on amount of metadata (order of MB) </td> </tr> <tr> <td> Wild bird tracking data </td> <td> Public database (https://www.movebank.org/), work package 1 </td> <td> .csv, Movebank format </td> <td> Depends on number of birds and duration of tracking (order of GB) </td> </tr> <tr> <td> Animal trial data (tissue tropism of viruses, excretion dynamics of viruses, virulence of viruses in infected birds) </td> <td> Public databases (PubMed: https://www.ncbi.nlm.nih.gov/pubmed or Web of Science: http://apps.webofknowledge.com/), work packages 1 to 3 </td> <td> Word, Excel </td> <td> Order of MB </td> </tr> </table>

## RE-USE OF EXISTING DATA

In DELTA-FLU, partners will make use of specific existing datasets collected from public databases, databases available from partners' own projects and research activities, or datasets made available by one or more partners to the consortium. The re-use of existing data is necessary to contextualize and integrate the obtained results into the current state of knowledge.
These include the following data, sources, and re-use:

<table> <tr> <th> Data / source </th> <th> Re-use </th> <th> Expected size of data </th> </tr> <tr> <td> Virus sequences / GenBank </td> <td> Creation of phylogenetic trees </td> <td> Order of MB </td> </tr> <tr> <td> Virus sequences / GISAID </td> <td> Creation of phylogenetic trees </td> <td> Order of MB </td> </tr> <tr> <td> Animal trial data / PubMed, Web of Science, unpublished data of partners </td> <td> Evolution of genetic and phenotypic characteristics of viruses in different avian hosts </td> <td> Order of MB </td> </tr> </table>

Before making datasets available amongst partners or via open access data repositories, DELTA-FLU will review any agreement(s) with third parties to evaluate any restrictions on the use of the data by the consortium partners, and evaluate whether or not ethical and other restrictions on further use apply. In addition, it will be evaluated whether permissions for use of the data are needed from third parties.

## DATA UTILITY: TO WHOM WILL IT BE USEFUL

DELTA-FLU will collect and generate a range of data, as described above, based on which targeted prevention and control strategies can be designed, including policy strategies and risk assessment. These data might be useful to a wide range of actors involved in the prevention and control of avian influenza virus outbreaks. These actors include, but are not limited to:

* Research institutes and their individual scientists (such as in the areas of virology, epidemiology, molecular biology, pathology, ecology, ornithology, risk assessment and modelling, and animal and human health)
* International public and animal health and food safety authorities (e.g. EU, FAO, EFSA, WHO, ECDC, OIE, OFFLU)
* International nature conservation organizations (e.g. Wetlands International, Wildfowl & Wetlands Trust)
* National governments and international policy-making authorities (e.g.
EU)

* Poultry production industry, including poultry production companies, farmers, and international associations representing the industry (e.g. the European Live Poultry and Hatching Egg Association, and the Association of Poultry Processors and Poultry Trade in the EU).

# FAIR DATA

## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA

DELTA-FLU will ensure that all data stored are accompanied by sufficient information about each data set. Typically, this will include information about experimental protocols, samples, virus strains, reagents, experimental conditions, and data analysis models and approaches. DELTA-FLU will use internationally accepted naming conventions and indicators, such as the internationally accepted taxonomic names for pathogens, the common pathogen titration methodologies expressing doses such as TCID50, PFU, etc., developed and/or optimized specific detection and analysis methods (e.g. WHO or OIE standards), as well as modelling/simulation exercises. Scenario-based mathematical models will be described in a publication submitted to an appropriate (open access) peer-reviewed journal, which will be findable by its unique Digital Object Identifier. Sequence data will be uploaded with additional information/metadata. Search keywords will be provided, e.g. for data in publications as well as for data in public databases. In addition, version numbers will be provided to enable clear traceability of the currency of data and documents. There is a broad range of metadata which will be created, especially in connection with sequencing data (location, origin of samples, species sampled, etc.) but also with data collected from satellite tracking (time, location, species, and other tracking data). The metadata will be collected following the schemes provided within the work packages and also by related projects (e.g. the sequence metadata collection scheme of COMPARE).
## MAKING DATA OPENLY ACCESSIBLE

Before making datasets available amongst partners or via open access data repositories, DELTA-FLU will review any agreement(s) with third parties to evaluate any restrictions on the use of the data by the consortium partners, and evaluate whether or not ethical and other restrictions on further use apply. In addition, it will be evaluated whether permissions for use of the data are needed from third parties. In cases where transferred data generates new data, e.g. upon data analysis by one or more partners, the DELTA-FLU Consortium Agreement will address intellectual property issues that may arise from ownership of the processed data and use of the tools on these data. Prior to sharing of data, all researchers in DELTA-FLU will review the data obtained in the project for exploitation potential and adequate knowledge protection where appropriate, and communicate without delay any decision to protect data generated in DELTA-FLU to the Executive Board of DELTA-FLU. Adequate protection of knowledge shall take into account the need for proper dissemination of results and open sharing of data without any unjustified delay. DELTA-FLU will actively seek to publish the data in peer-reviewed scientific journals, and will strive to publish in open access journals (“Gold model”). In case publication in open access journals is not feasible, the publications will be made available via the DELTA-FLU website and the repositories of the participating universities and institutes, respecting any embargo periods of the journal (“Green model”). The availability of the data generated in DELTA-FLU has three main access levels. The first level is public data, in which data will be made permanently and freely available through, for example, open access publication and data storage in publicly accessible databases.
The second level is managed access data, in which data will be stored under strict security provisions and access to these data will be provided to (_bona fide_) researchers or actors able to demonstrate that they meet the requirements for privacy protection and security, as assessed by the relevant data access committees (typically drawn from the ethical body overseeing consent-giving procedures, or the data access committees available at partner institutes). The third level of access is private data, in which data will be made available only to defined closed consortia that will retain all access management functions. We anticipate that private data will comprise only a minor portion of all DELTA-FLU data sets. Typically, this concerns data for which potential dual use or biosecurity issues have been identified in the ethical assessment. Data sharing with third parties will be subject to a data-sharing agreement established by the DELTA-FLU Steering Committee. The agreement will specify the conditions of data use, criteria for access, and acknowledgements. Consortium partners who wish to withhold patentable or proprietary data can do so, and advice on this point will be given by the Steering Committee. Data sharing, data access, and related issues will in principle be discussed and decided upon in the Steering Committee, duly advised by the consortium partners' data management officers and the project's Scientific Advisory Board and Ethical Committee. If deemed appropriate and necessary, a data access committee will be established during the course of the project. In making data available, DELTA-FLU distinguishes between raw and final research data. Raw research data that will not be considered for a patent application by consortium partners, but has been used to generate scientific papers, will be made publicly available two years after the project end at the latest, and stored in a repository of the partner's choice.
Final research data will be made publicly available only after i) relevant scientific publications based on the data have been accepted, or ii) a patent application has been published, or iii) two years after the project end at the latest. The public data will be available following the access conditions of the used databases (e.g. EpiFlu or GenBank for sequence data, or Movebank for animal movement data). Restricted data will be stored locally and access will be granted by the responsible partners upon request, including information about necessary software tools. Movebank is a free online infrastructure created to help researchers manage, share, analyze and archive animal movement data. The Movebank project is hosted by the Max Planck Institute for Ornithology in coordination with the North Carolina Museum of Natural Sciences, the Ohio State University and the University of Konstanz. Movebank collaborates with numerous governmental agencies, universities and conservation organizations, and has received funding from the National Science Foundation, the German Aerospace Center, the German Science Foundation and NASA. Movebank has long-term (>20 years) funding through the Max Planck Society and the University of Konstanz and is intended to serve as a global data archive for animal movement data. Movebank is a resource open to all researchers and organizations regardless of species, study area or source of funding. Movebank users retain ownership of their data and can choose whether or not to make their data available to the public. The database is designed for datasets that include multiple locations of individual animals, commonly referred to as tracking data. It also supports data collected by other bio-logging sensors attached to animals as well as information about animals, tags and deployments.
During data collection, movement data are continuously added to the database; these data can be made visible to the public directly, via a map tool on the Movebank homepage. After completion of a specific project, data can be curated, metadata included, and the result published in the Movebank Data Repository with a specific DOI and under a Creative Commons license, enabling the data to be used by other researchers. GenBank is the NIH genetic sequence database, an annotated collection of all publicly available DNA sequences. GenBank is part of the International Nucleotide Sequence Database Collaboration, which comprises the DNA DataBank of Japan (DDBJ), the European Nucleotide Archive (ENA), and GenBank at NCBI. These three organizations exchange data on a daily basis. There are several ways to search and retrieve data from GenBank. The GenBank database is designed to provide and encourage access within the scientific community to the most up-to-date and comprehensive DNA sequence information. Therefore, NCBI places no restrictions on the use or distribution of the GenBank data. However, some submitters may claim patent, copyright, or other intellectual property rights in all or a portion of the data they have submitted. NCBI is not in a position to assess the validity of such claims, and therefore cannot provide comment or unrestricted permission concerning the use, copying, or distribution of the information contained in GenBank. Some authors are concerned that the appearance of their data in GenBank prior to publication will compromise their work. GenBank will, upon request, withhold release of new submissions for a specified period of time. GISAID is a non-profit initiative whose EpiFlu database provides public access to the most complete collection of genetic sequence data of influenza viruses and related clinical and epidemiological data. Furthermore, the database enables rapid sharing of influenza virus data.
“The Initiative involves public–private partnerships between the non-profit organization _Freunde von GISAID e.V._ [4] and the government of the Federal Republic of Germany, the official host of the GISAID EpiFlu™ database”. “The database access agreement of GISAID ensures that contributors of virus sequence data do not forfeit intellectual property rights to the data”. PubMed comprises over 28 million citations for biomedical literature from MEDLINE, life science journals, and online books. PubMed citations and abstracts include the fields of biomedicine and health, covering portions of the life sciences, behavioral sciences, chemical sciences, and bioengineering. PubMed also provides access to additional relevant web sites and links to the other NCBI molecular biology resources. PubMed is a free resource that is developed and maintained by the National Center for Biotechnology Information (NCBI), at the U.S. National Library of Medicine (NLM), located at the National Institutes of Health (NIH). Web of Science connects publications and researchers through citations and controlled indexing in curated databases spanning every discipline. It allows cited reference searches to track prior research and monitor current developments in over 100 years’ worth of content that is fully indexed, including 59 million records and backfiles dating back to 1898. It is maintained by Clarivate Analytics, which provides a comprehensive citation search. It gives access to multiple databases that reference cross-disciplinary research, which allows for in-depth exploration of specialized sub-fields within an academic or scientific discipline.

## MAKING DATA INTEROPERABLE

To ensure the interoperability of data, where appropriate, DELTA-FLU will leverage existing standards for data representation and continue to develop them.
DELTA-FLU will make the generated and collected research data available via appropriate, dedicated, and, where possible, open access archives and database repositories. In addition, DELTA-FLU will upload basic datasets in standardized forms in a primary database as required by the journal in which the consortium partners publish their results. As the project activities and the consortium partners span multiple scientific disciplines, partners will agree on standard vocabularies for all data types present in DELTA-FLU generated and/or collected data sets, as well as on common terminology, key words, units and formats to be applied to the data and other documents.

## INCREASE DATA RE-USE (THROUGH CLARIFYING LICENCES)

In DELTA-FLU, partners will make use of specific datasets collected from public databases, databases available from partners’ own projects and research activities, or datasets made available by one or more partners to the consortium. Therefore, the majority of the data will be stored by the partner who is performing the experiments or using the tools. In those cases, the data remain under the control of the partner who delivered the data, and management and ethical procedures will follow those of the original project under which the data were made available. Before making datasets available amongst partners, DELTA-FLU will review any agreement(s) with third parties to evaluate any restrictive use of the data by the consortium partners, and evaluate whether or not ethical and other restrictions on further use apply. In addition, whether permissions for use of the data are needed from third parties will be evaluated. In cases where transferred data generate new data, e.g. upon data analysis by one or more partners, the Consortium Agreement addresses IP issues that may arise from ownership of the processed data and use of the tools on these data.
Detailed arrangements on the data use and related knowledge management issues have been agreed upon in the Consortium Agreement. The data generated and collected in the project will be made openly available to third parties after the end of the project. Possible restrictions could be put in place due to the need to finalize patenting processes or publish scientific works based on the data. Final research data stored locally will remain re-usable until the end of the project and for an additional 10-year term if adequate financial sponsors are found (if necessary). Raw and final research data stored in other open repositories will remain re-usable for a minimum of 10 years. All partners of DELTA-FLU have committed to harmonizing methods and protocols to assure quality (milestones and deliverables). The majority of the applied diagnostic methods are standardized. In sum, only those methods will be used which have already been validated and accredited in the partners’ labs to ensure the highest quality standards. This ensures high quality data, which can be compared between different partners and other laboratories.

# ALLOCATION OF RESOURCES

Each partner in DELTA-FLU is responsible for data stored locally and for complying with their own standard operating procedures (SOPs), and any DELTA-FLU specific SOP that may be drawn up if deemed necessary, as well as for complying with the relevant legal and ethical requirements. Data sharing, data access and related issues will in principle be discussed and decided upon in the Steering Committee. Final research data stored locally will remain re-usable until the end of the project and for an additional 10-year term if adequate financial sponsors are found (if necessary). Raw and final research data stored in open repositories, such as EMBL-EBI ENA/GenBank and Movebank, will remain re-usable for a minimum of 10 years.
Consortium partners strive to publish research data as supporting materials together with their publications as much as possible, to facilitate future re-use of data by other researchers. Costs for open access publications are foreseen in the DELTA-FLU budget (around 2000 - 8000 euros per partner). Currently, we do not have a cost estimate for the staff time necessary to make data FAIR in this project.

# DATA SECURITY

As mentioned above, each partner in DELTA-FLU is responsible for data stored locally and for complying with their own standard operating procedures (SOPs), and any DELTA-FLU specific SOP that may be drawn up if deemed necessary, as well as for complying with the relevant legal and ethical requirements. The following general terms regarding data protection will be followed by the partners in DELTA-FLU with regard to raw and final research data which is not stored in open repositories:

* The amount of data collected is relevant and not excessive;
* Data are stored securely;
* Data are fairly and lawfully processed;
* Data accuracy is ensured (i.e. all reasonable efforts to ensure data accuracy are undertaken);
* Data are used only in ways that are compatible with the original consent or agreement;
* Relevant national and international regulations regarding data protection will be applied.

Data storage facilities will be maintained in accordance with manufacturers’ guidelines. Data will be backed up at regular intervals, and stored safely and securely, in accordance with the consortium partners’ organizational policy.

# ETHICAL ASPECTS

Ethical issues that may have an impact on data sharing will be assessed during the project as part of work package 4, task 4.3 (Management of the research data generated and collected during the project). Animal trial data collection follows all national and EU laws. DELTA-FLU will not generate, collect or deal with personal data.
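GenBank records of the kind referenced above can be retrieved programmatically through NCBI's public E-utilities service. The sketch below only constructs an `efetch` request URL; the helper name and the example accession are illustrative and are not part of any DELTA-FLU dataset (actually fetching the URL, e.g. with `urllib.request.urlopen`, is left to the reader).

```python
from urllib.parse import urlencode

# Public NCBI E-utilities endpoint (no API key required for light use).
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def efetch_url(accession: str, db: str = "nuccore", rettype: str = "fasta") -> str:
    """Build an E-utilities efetch URL for a public sequence record."""
    query = urlencode({"db": db, "id": accession, "rettype": rettype, "retmode": "text"})
    return f"{EUTILS_BASE}/efetch.fcgi?{query}"

# Example with a placeholder accession number:
url = efetch_url("CY121680")
print(url)
```

The same pattern applies to other record types by changing `db` and `rettype` (e.g. `rettype="gb"` for the full GenBank flat file).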
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1188_E2mC_730082.md
# INTRODUCTION

The Data Management Plan (DMP) is a deliverable required for all projects participating in the ORD Pilot (Open Research Data Pilot). In the previous work programmes, the ORD pilot included only some areas of Horizon 2020; now, under the revised version of the 2017 work programme, the Open Research Data pilot has been extended to cover all the thematic areas of Horizon 2020. It aims to encourage data management following the principle "as open as possible, as closed as necessary". “ _The ORD pilot aims to improve and maximize access to and re-use of research data generated by Horizon 2020 projects and takes into account the need to balance openness and protection of scientific information, commercialization and Intellectual Property Rights (IPR), privacy concerns, security as well as data management and preservation questions_ ” (from “Guidelines on FAIR Data Management in Horizon 2020”). The structure of the document follows the “Guidelines on FAIR Data Management in Horizon 2020”, provided by the Directorate-General for Research & Innovation of the European Commission ( _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_ ).

## Scope of the document

The final goal of this document is to show how the research data of the project are **findable, accessible, interoperable and reusable** (FAIR). This deliverable describes the data management life cycle for the data to be collected, processed and/or generated by the project.
It includes information on:

* the handling of research data during and after the end of the project
* what type of data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project)

It also reports some considerations on ethical aspects and data security, as well as an assessment of costs related to data management. The DMP is intended to be a living document and it will be updated over the course of the project whenever significant changes arise. In any case, it will be updated as a minimum in time with the periodic reviews of the project.

# OVERVIEW ON PROJECT DATA

## Data types, collection and generation

<table> <tr> <th> **Data types, collection and generation** </th> <th> </th> </tr> <tr> <td> ▪ What types and formats of data will the project generate/collect? </td> <td> The E2mC project will use data from crowdsourcing and social media (e.g. images, videos, text messages). From that data, the project will mainly generate maps in support of the emergency responders, which include a layer with the socially elaborated information. </td> </tr> <tr> <td> ▪ Purpose of the data collection/generation and its relation to the objectives of the project </td> <td> The main goal of the project is to produce better, higher-quality and more timely maps using information either collected from social media or solicited from the crowd, enriching the information already available from the satellite sources.
</td> </tr> <tr> <td> ▪ Expected size of the data </td> <td> The size of the data will be in the range of a few Terabytes </td> </tr> <tr> <td> ▪ Origin of the data </td> <td> Crowdsourcing and social media </td> </tr> </table>

## Existing data

<table> <tr> <th> **Existing data** </th> <th> </th> </tr> <tr> <td> ▪ Re-use of existing data </td> <td> Existing datasets from social media and crowdsourcing sources related to past disaster events will be re-used, as well as products delivered by the Copernicus Emergency Management Service. This reuse will enable the testing of the system on a set of “cold cases”, giving hints and suggestions on the set-up of the procedures and their reliability. </td> </tr> <tr> <td> ▪ Way to re-use, to what end </td> <td> Data will be re-used through reprocessing of social media and crowdsourced data in combination with satellite images and visual interpretation, in order to demonstrate the feasibility of deriving useful and geolocated information on the disaster location and extent from these unconventional sources. </td> </tr> </table>

## Data usability

<table> <tr> <th> **Data usability** </th> <th> </th> </tr> <tr> <td> ▪ To whom might it be useful? </td> <td> These data might be useful to the Service Provider of the Copernicus Emergency Management Service in order to provide an enhanced service to the European Commission and, ultimately, to the Civil Protection of the European Member States and to the Humanitarian Aid actors. In fact, the outcomes of the project and the released data and maps can also suggest other applications of the delivered system, by selecting other keywords and enabling the location and the mapping of other events of interest, also exploiting the multilingual capability of the system.
</td> </tr> </table>

# “FAIR” DATA APPROACH

## Data “Findability”

<table> <tr> <th> **Data “Findability”** </th> <th> </th> </tr> <tr> <td> ▪ Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)? </td> <td> E2mC deployed a master PostgreSQL/PostGIS database in which to store the data produced by the project (indexes over social media data streams, data contributed from crowdsourcing, satellite images …). All the data are saved to this database through a REST interface called DataAPI. The API is developed by using the Swagger/OpenAPI standard and the documentation is available _here_ . </td> </tr> <tr> <td> ▪ What naming conventions do you follow? </td> <td> The only naming convention adopted is the one related to the creation of each specific event workspace: dYYYY-MM-DD_NameOfTheEvent The values coming from the social networks do not follow any naming convention; nevertheless, the data findability for each kind of resource is guaranteed through the usage of well-defined tags. </td> </tr> <tr> <td> ▪ Will search keywords be provided that optimize possibilities for re-use? </td> <td> Yes, through search interfaces using the local cache/db and the existing service APIs made available by social networks </td> </tr> <tr> <td> ▪ Do you provide clear version numbers? </td> <td> Yes </td> </tr> <tr> <td> ▪ What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how </td> <td> We will use INSPIRE compliant metadata for the generated geospatial datasets and services </td> </tr> </table>

## Data “Accessibility”

<table> <tr> <th> **Data “Accessibility”** </th> </tr> <tr> <td> ▪ Which data produced and/or used in the project will be made openly available as the default?
If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions (Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions are made in the consortium agreement and are in line with the reasons for opting out) </td> <td> Maps and derived information will be made openly available. As for data from social media and crowdsourcing, it has to be decided which data can be shared; indeed, there could be ethical and legal issues related to this data that need to be further discussed with the ethical expert. </td> </tr> <tr> <td> ▪ How will the data be made accessible (e.g. by deposition in a repository)? </td> <td> A web-based access will be allowed for working on the project master database and the project Social&Crowd Platform. </td> </tr> <tr> <td> ▪ What methods or software tools are needed to access the data? </td> <td> A web browser to access the project Social&Crowd Platform and different FOSS SW packages to access the data generated (images, PDF documents, database) </td> </tr> <tr> <td> ▪ Is documentation about the software needed to access the data included? </td> <td> Yes, it will be part of the metadata </td> </tr> <tr> <td> ▪ Is it possible to include the relevant software (e.g. in open source code)? </td> <td> Yes, the SW of the Social&Crowd Platform will be released as open source software for Commission use only </td> </tr> <tr> <td> ▪ Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible. </td> <td> Yes, they are available through the project Social&Crowd Platform _here_ with the following usr/pwd: demouser/demouser The project code is stored on the e-GEOS Gitlab repository _here_ .
The credentials to access must be requested at the following email address: [email protected]_ </td> </tr> <tr> <td> ▪ Have you explored appropriate arrangements with the identified repository? </td> <td> It is not necessary to take any particular arrangement with Gitlab. </td> </tr> <tr> <td> ▪ If there are restrictions on use, how will access be provided? </td> <td> The administrator access to Gitlab will be granted to the Commission and code maintainers only </td> </tr> <tr> <td> ▪ Is there a need for a data access committee? </td> <td> No </td> </tr> <tr> <td> ▪ Are there well described conditions for access (i.e. a machine readable license)? </td> <td> No </td> </tr> <tr> <td> ▪ How will the identity of the person accessing the data be ascertained? </td> <td> The access to the master database and to the Social&Crowd platform will be strictly controlled and credentials will be generated/preconfigured through a service admin facility. Nonetheless, we have no means to ensure that the provided credentials are not used by a different person </td> </tr> </table>

## Data “Interoperability”

<table> <tr> <th> **Data “Interoperability”** </th> <th> </th> </tr> <tr> <td> ▪ Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)? </td> <td> Yes, intermediate and final data/products will be stored in the project storage with unique resource identifiers. In some cases (e.g. Social Data), there will be the need to respect the Terms & Conditions of the specific Social Network used. </td> </tr> <tr> <td> ▪ What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?
</td> <td> The project will use INSPIRE compliant metadata associated, where possible, with vocabularies such as GEMET </td> </tr> <tr> <td> ▪ Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability? </td> <td> As far as possible, standard vocabularies will be used, also exploiting progress made by other FP7/H2020 projects in the field of standardization and interoperability </td> </tr> <tr> <td> ▪ In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? </td> <td> Yes </td> </tr> </table>

## Data “Re-usability”

<table> <tr> <th> **Data “Re-usability”** </th> <th> </th> </tr> <tr> <td> ▪ How will the data be licensed to permit the widest re-use possible? </td> <td> Data and products generated within the Copernicus frame (CEMS) will be publicly available according to the Copernicus terms and conditions https://www.copernicus.eu/en/how/howaccess-data …. Data and products generated within exploitation frames other than Copernicus will be available under the license Creative Commons Attribution + ShareAlike (CC BY-SA) https://en.wikipedia.org/wiki/Creative_Commons_license Social data are accessible under the terms and conditions of the services (Twitter, Youtube and Flickr) </td> </tr> <tr> <td> ▪ When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible. </td> <td> Data will be made available for re-use at the end of the project. During the course of the project, some datasets may be released after validation. There could be some limitations due to the large amount of data.
</td> </tr> <tr> <td> ▪ Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why </td> <td> Yes </td> </tr> <tr> <td> ▪ How long is it intended that the data remains re-usable? </td> <td> The data will remain re-usable at least for one year after project completion </td> </tr> <tr> <td> ▪ Are data quality assurance processes described? </td> <td> Data quality assessment will be part of final maps preparation during the demonstration phases. The quality assessment of outputs is ultimately a project goal. </td> </tr> </table>

# ETHICAL ASPECTS AND DATA SECURITY

## Ethical aspects

<table> <tr> <th> **Ethical aspects** </th> <th> </th> </tr> <tr> <td> ▪ Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA). </td> <td> As stated in the DoA, part B section 5.1.1: _“The E2mC Project involves humans, but only for registering information relevant to an emergency crisis, not requiring sensitive information about identity, health, etc., and as volunteer contributor to a crowdsourcing platform.”_ So, the ethical/legal issues in place are: 1. Management of personal data (i.e. registering information) 2. Information related to the images and videos collected (via the crowdsourcing application) during a disaster, as persons or vehicles can potentially be detected. Detailed information on privacy/confidentiality and the procedures that will be implemented for data collection, storage, access, sharing policies, protection, retention and destruction, along with the confirmation that they comply with national and EU legislation, are provided in D7.1 and D7.2.
</td> </tr> <tr> <td> ▪ Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? </td> <td> The Digital Informed Consent form is filled when registering and creating an account on the platform, to agree to the Terms and Conditions and to the Privacy Policy. Details are provided in D7.1. No other use of personal data is performed by the platform service. </td> </tr> </table> ## Data security <table> <tr> <th> **Data security** </th> <th> </th> </tr> <tr> <td> ▪ What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)? </td> <td> The master database will be stored in a controlled cloud environment to ensure high data security and reliability </td> </tr> <tr> <td> ▪ Is the data safely stored in certified repositories for long term preservation and curation? </td> <td> No. See also 3.4 Data “Re-usability” </td> </tr> </table> # ALLOCATION OF RESOURCES ## Costs related to data management <table> <tr> <th> **Costs related to data management** </th> <th> </th> </tr> <tr> <td> ▪ What are the costs for making data FAIR in your project? </td> <td> Most of the FAIR Data Principles are already taken into account. Further evaluation will be made on the basis of the final data ensemble (code repository, demonstration phases) in terms of code, output data and source data. </td> </tr> <tr> <td> ▪ How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions) </td> <td> Data will be stored in a commercial cloud facility. Cost for storage and management are included in the project budget to cover the period of usage within the project lifecycle. </td> </tr> <tr> <td> ▪ Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)? 
</td> <td> The amount of data stored for each event is in the order of tens of MB. Given that, no rolling archive policies have been put in place. Nevertheless, the system database has been designed in a way that each activation is not related to another one. This separation allows the administrator to back up and put offline old data at any moment, if needed. </td> </tr> </table>

## Data management responsible

The nominated data management responsible is: Mariano Alfonso Biscardi (Software Developer and Architect at e-Geos).

# CONCLUSIONS

This DMP issue contains all information available at this point in time; further information will be included in the next issues of the document. The E2mC project relies on the collection and use of a variety of crowdsourced data relevant to an emergency crisis. Such data (e.g. images, video, text messages etc., not requiring sensitive information about identity, health, etc.) will be collected through social media, crowdsourcing means and through volunteer contributors. They will be further analysed, processed (e.g. geolocated) and arranged for exploitation in the project master database and in the Social&Crowd Platform developed by the project consortium. Further to this, any ethical and legal issues related to the usage and sharing of the collected data shall be carefully discussed with the ethical expert so as to put in place proper informed consent forms, disclaimers, protection and security measures etc., in line with personal data protection policies and applicable IPR, if any. Operators of the service (e.g., project partners, Commission personnel, Copernicus Emergency services providers…) will access the project master database and the Social&Crowd Platform on a controlled basis (username/pwd mechanism). All the data will be stored in an external Cloud facility for the duration of the project.
At the finalisation of the Service Design phase of the project, details on the identification of the repositories or archive as well as data management responsibilities and roles will be discussed and allocated to partners.
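The event-workspace naming convention quoted in the Findability section (dYYYY-MM-DD_NameOfTheEvent) can be captured in a small helper. This is a sketch under stated assumptions: the function name is ours, and collapsing the event name into a single CamelCase token is an assumption, since the DMP does not specify how event names are normalised.

```python
from datetime import date

def workspace_name(event_date: date, event_name: str) -> str:
    """Build an event workspace identifier following dYYYY-MM-DD_NameOfTheEvent."""
    # Collapse whitespace into a single CamelCase token (assumption, see above).
    compact = "".join(part.capitalize() for part in event_name.split())
    return f"d{event_date:%Y-%m-%d}_{compact}"

print(workspace_name(date(2017, 1, 18), "central italy earthquake"))
# → d2017-01-18_CentralItalyEarthquake
```

The zero-padded ISO-style date keeps workspace names lexicographically sortable by event date, which matches the stated pattern.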
1192_SCREEN_730313.md
# 1. Data Summary

What is the purpose of the data collection/generation and its relation to the objectives of the project?

_The regions participating in SCREEN sent data on their smart specialization strategies, with specific focus on the areas and sectors to be prioritized for the diffusion of the circular economy: focus sectors, R&D capabilities, education capabilities, and emerging ideas._

What types and formats of data will the project generate/collect?

_Excel files only, like the one shown in the following figure._

Will you re-use any existing data and how?

_Due to the scope of the Action, which is a Coordination Action, only existing data provided by the participating regions will be collected and analysed. The main scope is to collect data in a way that can be used for the project purposes, basically matchmaking between different regional initiatives and possible synergies._

What is the origin of the data?

_Regional databases or other regional sources, only from the participating regions_

What is the expected size of the data?

## Extremely variable, depending on their availability in each region; however, for the project purposes, they are contained in small Excel files like the following one

To whom might it be useful ('data utility')?

_To other regions willing to use the tools developed by the project for the analysis of potential cross-regional value chains. Also to relevant public and private stakeholders interested in being part of a circular economy value chain._

# 2. FAIR data

## **2.1 Making data findable, including provisions for metadata**

Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?

## Not applicable: only Excel files

What naming conventions do you follow?
_Name of the deliverable, according to the list in the Grant Agreement, with extension xls or xlsx._

Will search keywords be provided that optimize possibilities for re-use?

_No._

Do you provide clear version numbers?

_Yes, complying with the version of the deliverable or internal report._

What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.

_No metadata will be created._

## **2.2 Making data openly accessible**

Which data produced and/or used in the project will be made openly available as the default?

_Data on the smart specialization of the participating regions, related to the circular economy. Data on the mapping made by each region about potential circular economy value chains._

How will the data be made accessible (e.g. by deposition in a repository)?

_All the technical reports are public and available on the project website http://www.screenlab.eu/Deliverables.html . These reports also contain the above-mentioned Excel files._

What methods or software tools are needed to access the data?

_Any tool able to read Excel files._

Is documentation about the software needed to access the data included?

_No need of any documentation._

Is it possible to include the relevant software (e.g. in open source code)?

_Free software available on the Internet (such as Open Office and similar)._

Where will the data and associated metadata, documentation and code be deposited?

_Not applicable._

Have you explored appropriate arrangements with the identified repository?

_Not applicable._

If there are restrictions on use, how will access be provided?

_No restrictions._

Is there a need for a data access committee?

_No._

Are there well described conditions for access (i.e. a machine readable license)? How will the identity of the person accessing the data be ascertained?

_Not applicable._

## **2.3 Making data interoperable**

Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?

_Yes, Excel file format (xls or xlsx), compliant with open software applications._

What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?

_No metadata; the standard adopted is the Excel file one._

Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability? In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?

_Not applicable._

## **2.4 Increase data re-use (through clarifying licences)**

How will the data be licensed to permit the widest re-use possible?

_Not applicable._

When will the data be made available for re-use?

_Immediately after the publication of the related SCREEN deliverable._

Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.

_Yes, without any restriction._

How long is it intended that the data remains re-usable?

_3 years or more._

Are data quality assurance processes described?

_No._

# 3\. Allocation of resources

What are the costs for making data FAIR in your project?

_No specific cost, these being public deliverables containing Excel files._

How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).
_Not applicable, covered by the project._

Who will be responsible for data management in your project?

_The Project Coordinator._

Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?

_No._

# 4\. Data security

What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?

_Back-up is assured by the service provider hosting the SCREEN web site, plus the deliverable repository managed by the European Commission._

Is the data safely stored in certified repositories for long term preservation and curation?

_Not applicable, see the previous answer._

# 5\. Ethical aspects

Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA).

_No._

Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?

_Yes._

# 6\. Other issues

Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones?

_No._
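The naming and versioning convention stated in section 2.1 (deliverable name according to the Grant Agreement list, a version number matching the deliverable version, and an xls/xlsx extension) could be checked mechanically. The sketch below assumes a hypothetical concrete scheme such as `D2.1_v1.0.xlsx`; the exact pattern is an illustration, not the project's documented rule.

```python
import re

# Hypothetical filename scheme for SCREEN deliverables: "D<WP>.<num>_v<major>.<minor>.<xls|xlsx>".
# The real Grant Agreement naming list may differ; this pattern is an assumption.
PATTERN = re.compile(
    r"^D(?P<wp>\d+)\.(?P<num>\d+)_v(?P<major>\d+)\.(?P<minor>\d+)\.(?P<ext>xlsx?)$"
)

def parse_deliverable(filename):
    """Return the deliverable's name components, or None if it does not conform."""
    m = PATTERN.match(filename)
    return m.groupdict() if m else None

print(parse_deliverable("D2.1_v1.0.xlsx"))
print(parse_deliverable("notes.pdf"))  # non-conforming -> None
```

A check like this makes the "clear version numbers" answer above auditable: any file that fails to parse is flagged before upload to the project website.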
1195_R2PI_730378.md
# EXECUTIVE SUMMARY

The Data Management Plan (DMP) is a deliverable of the R2PI project, which is funded by the Horizon 2020 Programme with Grant Agreement number 730378\. The Transition from Linear 2 Circular: Policy and Innovation (R2PI) project is participating in the Horizon 2020 pilot action on Open Access to Research Data. Following the Guidelines on FAIR Data Management in Horizon 2020 1 , this Data Management Plan (DMP) describes the data management life cycle of the R2PI project.

Figure 1 [Data Management Lifecycle]: creating, processing, analyzing, presenting, giving access to, and reusing data.

The DMP includes an overview of the datasets to be generated by R2PI and the specific conditions attached to them. Specifically, the DMP outlines the handling of research data during and after the project, including

* types of data to be collected and/or generated and processed
* methodology and standards to be applied
* whether data will be shared/made openly accessible and
* how data will be curated and preserved (including after the end of the project)

In combination with Deliverables 10.1 & 10.2 Ethical Requirements, the DMP specifies measures to ensure data are properly shared while ensuring the privacy of informants and respondents. This DMP first describes the R2PI project and the overall principles of data collection and access. It then introduces the R2PI dataset overview. The DMP closes with the introduction of the specific datasets to be collected, following the template in the Guidelines on FAIR Data Management in Horizon 2020. This is the 1st version of the DMP. The DMP is a living document, which will be elaborated or updated along with project progress or when significant changes occur.
# INTRODUCTION OF R2PI AND ITS RESEARCH METHODOLOGY

R2PI is an EU-funded project under the Horizon 2020 programme, which examines the shift from the broad concept of a Circular Economy (CE) to one of Circular Economy Business Models (CEBM), by tackling both market failure (business, consumers) and policy failure (conflicts, assumptions, unintended consequences). The goal of the R2PI project is to develop sustainable business models that would facilitate the circular economy and to propose policy packages and business guidelines that will support the implementation of these business models. To achieve the concept’s ambitions, the research design employs mixed methods, with a strong emphasis on case studies but also including desktop research, feasibility assessments (including surveys where applicable), policy formulation and stakeholder involvement. The ultimate goal of the project is to see the widespread implementation of the CE based on successful business models, to ensure sustained economic development, to minimize environmental impact and to maximize social welfare.

# GENERAL DATA MANAGEMENT PRINCIPLES

The DMP of R2PI will follow the principle "as open as possible, as closed as necessary", as suggested in the EC Open Research Data (ORD) pilot. Meanwhile, considering the ethical requirements, the open data plan should not represent risks of compromising the privacy of informants participating in the different interviews, surveys or case studies. The DMP will therefore assess when, what and how data can be shared within a sound research ethical framework. The figure below illustrates the policy to ensure the R2PI project has open access to research data and publications.

Figure 2 [R2PI research data and publication]

# R2PI DATASET SUMMARY

## Overview

R2PI seeks to combine the formal analysis of business models for circularity with on-going commercial practice.
Specifically, it draws on case studies of real enterprises in a breadth of sectors to illustrate, and in some cases refute or correct, those elements considered to be important for successful Circular Economy business models. To achieve the concept’s ambitions, the project will focus on case studies as the core of its work. In particular, R2PI will generate data designed to study the success factors of circular business model transition through 15 in-depth case studies and policy analysis. The R2PI hybrid methodology framework is built upon a combination of

1. **scientific methods and tools** , based on a broad understanding of systems innovation approaches, including technological and non-technological innovation, such as business models’ evaluation for circularity, behavioural economics and consumer behaviour analysis, rigorous environmental measurements and ex-post policy and regulation analysis and policy formation (policy packages methodology), and
2. **on-going “on the ground” work** through interactive case studies, to investigate and foster circular business model good practices.

## Research data in R2PI

**Research data** refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form (European Commission 2017). R2PI has 3 main stages to collect research data:

### (1) General conceptualization and scoping

To achieve the project ambitions and objectives, R2PI will focus on case studies as the core of its work. A preliminary phase (Work Package 2 – WP2) will set the scene in terms of a conceptual framework and the policy context, through desktop research methodology, intended to ground further research in time-space context.
While defining the elements of Business Models for Circular Economy and thereby selection criteria for the case-studies, it will determine the boundaries of the scope. This conceptual and contextual framework will therefore further support the consortium in identifying and selecting relevant case-studies (WP3). ### (2) Case study selection and analysis R2PI will select 15 case studies (WP3) and conduct in-depth analysis, including on-going (from WP4) and piloted (from WP5) circular business models. The individual analysis will take into account the value-chain of each specific case: suppliers, customers, stakeholders, regulators and where the initiative lay (e.g. entrepreneurial, corporate, municipal, etc.), intended and unintended consequences of the wider policy package, as well as any nudges that may have been in place – whether by design or by default. The project shall inventory the case-studies and their results, extracting commonalities, differences and key factors of success (WP6). ### (3) Policy packages and business guidelines development The ex-post policy analysis is conducted in Task 2.4 to evaluate the extent to which policy measures to promote circular economy both induce economic, environmental and social impacts, and facilitate the uptake and functioning of these business models. This task promotes two of the project's objectives. First, the evaluation of individual policy instruments is part of building systematic understanding of the role of policy, the way it may incentivize and/or hinder the road to success. Second, it is a necessary first step in building policy packages to promote circular economy business model (CEBM). This evaluation will be carried out in the context of WP2 using qualitative methods. Further, WP7 will develop policy packages for enabling policymakers and business guidelines for enabling businesses to implement effective measures to facilitate the shift to circular economy and benefit from its advantages. 
The policy packages methodology is based on a mix of tools complementing one another, including a policy instruments’ relation matrix, causal mapping technique, actors’ relation analysis and more.

The data management overview, including data collection and deposition summaries, is summarized below.

Figure 3 [R2PI research data management method]

Based on the 3 stages, the key R2PI datasets to be collected and processed are listed in the table below. The descriptions of each dataset, following the FAIR principle, are provided in the following sections. This list is indicative and may be adapted in the next versions of the DMP, taking into consideration project developments. <table> <tr> <th> No. </th> <th> Dataset name </th> <th> Brief introduction </th> <th> Responsible partners </th> <th> Related work package(s) </th> </tr> <tr> <td> 1 </td> <td> Economic Actor Survey </td> <td> Investigating the drivers, success factors and impacts of CEBM </td> <td> * University of Malta * Sapir Academic College </td> <td> WP4 </td> </tr> <tr> <td> 2 </td> <td> CEBM Case Study </td> <td> Investigating the drivers, success factors and impacts of CEBM </td> <td> * University of Malta * University of Santiago de Compostela * Carbon Trust * Business Models Inc. * CSCP </td> <td> WP4 </td> </tr> </table>

# SPECIFIC R2PI DATASETS DESCRIPTION

The DMP template suggested in the Guidelines on FAIR Data Management in Horizon 2020 is used to describe the key datasets and the specific conditions attached, according to the current plans for gathering and analysis of data as well as the methods and processes foreseen to be applied to ensure compliance with ethics requirements.

## Economic Actor Survey

<table> <tr> <th> **DMP component** </th> <th> **Issues to be addressed** </th> </tr> <tr> <td> 1\.
Data summary </td> <td> * **Purpose and relation to the project objective** : The Economic Actor Survey will seek to gauge the extent to which firms engage in circular economy practices and measure their degree of circularity, as well as to understand the key motivators, barriers and policies that influence firms’ level of circularity. The ultimate aim is to assist businesses in their route to circularity and the adoption of new circular economy business models, and inform policies both at the national and EU-level. * **Data formats:** * Data will be collected via an online survey, and will consist of three key sections, namely questions related to business’s circular economy activities and/or policies (e.g. waste minimisation initiatives, environmental auditing, etc.), the motives, barriers and policies that influence the uptake of circular activities, and basic business characteristics (e.g. firm size, industry). No personal information regarding company or respondent name, address or contact details will be collected; * Where possible, we will use online and/or electronic archives. This will involve extracting and processing quantitative and qualitative data, including participants, objectives and outcomes; * Data will be input and stored in a spreadsheet format (e.g. Excel), to ensure accessibility to partners and researchers; * No personal information will be stored in the project database, and any final reports or publications based on this survey will present information only in terms of averages or ranges. * **Origin of the data and if existing datasets are being reused:** The University of Malta and Sapir Academic College, in collaboration with the partners, will formulate the survey questions. The project partners will conduct the survey with the economic actors of circular economy. The targets of the survey are businesses operating in Europe.
The businesses already form part of existing business directories and databases which are held by the project consortium partners. * WP4 will include the identification and re-use of existing databases for addressing all the relevant issues and stakeholders. The survey will not however build on existing databases. * **Data utility:** The data might be useful for the research community (academics) on investigating circular economy, especially on business models and their market uptake. The data will also be useful for policymakers in order to analyse different barriers and policies that influence circularity. </td> </tr> <tr> <td> 2\. FAIR Data </td> <td> R2PI will develop metadata that is compliant with the Data Documentation Initiative (DDI), the most relevant international standard for describing the data produced by </td> </tr> <tr> <td> 2.1. Making data findable, including provisions for metadata </td> <td> surveys and other observational methods in the social, behavioural, economic, and health sciences. The Guidelines on FAIR Data Management in Horizon 2020 has listed suggestions on further support for developing the DMP. Following the listed support, the Consortium has chosen ZENODO 2 as the scientific publication and data repository for the project outcomes. All metadata will be stored on ZENODO in JSON-format according to a defined JSON schema. Metadata can be exported in several standard formats such as MARCXML, Dublin Core, and DataCite Metadata Schema (according to the OpenAIRE Guidelines). These types of metadata will be produced and archived through ZENODO: * Data Citation with Digital Object Identifier (DOI) * Keywords to facilitate search and optimize possibilities for re-use </td> </tr> <tr> <td> 2.2 Making data openly accessible </td> <td> * R2PI will make selected data available on the project website. 
* The R2PI survey data and metadata will be deposited in ZENODO, providing open access to data files and metadata over standard protocols such as HTTP and OAIPMH. </td> </tr> <tr> <td> 2.3. Making data interoperable </td> <td> • The data will be made available on ZENODO under a CC-BY licence. A 24-month embargo period following the completion of the project will be applied to allow research findings to be written up. </td> </tr> <tr> <td> 2.4. Increase data re-use (through clarifying licences) </td> <td> * Data objects will be deposited in ZENODO under open access to data files and metadata, permitting its use and reuse, as well as protecting privacy of its users. * The aggregated dataset will be disseminated as soon as possible through the project website and other R2PI dissemination activities (WP9). In the case of the underlying data of a publication, this might imply an embargo period for green open access publications. </td> </tr> <tr> <td> 3\. Allocation of resources </td> <td> * Overall, the project coordinator (CSCP) will direct the data management process, with the Executive Management Board of the project responsible for ensuring that metadata production, cross-checks, back-up and other quality control activities are maintained. The lead researchers of the respective tasks will be responsible for routine supervision of the dataset development. * In principle, all partners are responsible for survey data generation, metadata production and data quality, coordinated by the task leaders (UoM and Sapir). * Dataset storage, backup, archiving and sharing will be in the majority of cases the responsibility of the partners who own the data and/or the servers in which they will be stored. </td> </tr> <tr> <td> 4\. 
Data security </td> <td> The data will be backed up regularly, including * sharing data through ZENODO, where data files and metadata are backed up nightly and replicated into multiple copies in the online system; * uploading research data to Freedcamp 3 , the project management system used </td> </tr> </table> <table> <tr> <th> </th> <th> by R2PI consortium; * regular email sharing with the partners to ensure up-to-date versions are stored on partners’ server; * requesting the task leaders to back up qualitative data on a regular basis and include clear labelling of versions and dates in metadata </th> </tr> <tr> <td> 5\. Ethical aspects </td> <td> * **Informed consent** : Prior to the survey, informants will be given enough information about their involvement and what it means for them to be able to make an informed decision as to whether they wish to participate. The principle of informed consent will also apply to the storage of their data and the use of their data in any analyses or reports/publications. * **Confidentiality** : throughout all stages of the survey process (including all stages of data sharing, analysis, report-writing etc.) the identity of participants will be concealed. * All research and all data storage will be closely monitored for compliance with the appropriate ethical guidelines. Each researcher will be responsible for reviewing the ethical concerns that could be raised in relation to each and every element of their research. In doing so they will have regard to the ethical guidelines of their professional organisation, their institution, and pan-European guidelines. In all cases, where there are a number of different guidelines, all research will conform to the more stringent of the available criteria. * As it is impossible to predict with 100% certainty the ethical issues that will arise, the Executive Board will encourage debate within and between work packages on ethical standards in research and what they require in this project. 
</td> </tr> <tr> <td> 6\. Other </td> <td> N/A </td> </tr> </table> ## Circular Economy Business Model Case Studies <table> <tr> <th> **DMP component** </th> <th> **Issues to be addressed** </th> </tr> <tr> <td> 1\. Data summary </td> <td> • **Purpose and relation to the project objective** : R2PI combines the formal analysis of business models for circularity with on-going commercial practice. 15 case studies of circular economy business models form the core of the project work. Specifically, the case studies will contribute to the following objectives: * To collect empirical data to understand the factors driving (hindering) the success of Circular Economy Business Models (CEBM); * To collect empirical data that would support and contribute to the definition and categorisation of CEBM; * To analyse success in terms of environmental assessment, social and economic parameters; * To understand the interests and roles played by different actors along the value chain; * To identify the role of innovation and the knowledge infrastructure in the implementation of CEBM; * To understand the role of Information and Communication Technology (ICT) in supporting the development of CEBM; </td> </tr> </table> <table> <tr> <th> </th> <th> * To quantify the economic, social and environmental effects of specific CEBM and to estimate its expanding impact; * To identify potential policies to support the transition to a Circular Economy * **Data formats:** The case studies will generate various datasets from diverse research methods combined, such as desk research, interviews, surveys, and interactive workshops, etc. Where possible, we will use online and/or electronic archives. Data will be recorded in various formats, such as * Microsoft Word 2007 for text based documents (e.g. transcript of interviews); * MP3 or WAV for audio files; * Quicktime or Windows Media Video for video files (e.g.
video of workshops and interviews); * SAV files for storing quantitative data analysis; These file formats have been chosen because they are accepted formats in widespread use. Files will be converted to open file formats where possible for ensuring long term storage. * **Origin of the data and if existing datasets are being reused:** 15 cases will be selected based on defined criteria, which should be sufficient to allow making generic recommendations in terms of policy making and business models transferability at the level of the EU. After defining the list of cases, desk research, in-depth interviews, dynamic group discussions and surveys, etc. will be conducted. Data will also be collected by the core team of R2PI through on-site visits and in-depth stakeholder interviews for those cases demonstrating particular innovation or circularity. After the data collection, a case study inventory database will be developed, incorporating the case studies analysed. The categorical attributes of each case study will be documented in this inventory to enhance the transparency of the findings and strengthen the repeatability and upscale-ability of the research and best practices identified. * **Data utility** : the data might be useful for researchers on investigating circular economy, especially on business model drivers, success factors and impacts. </th> </tr> <tr> <td> 2. FAIR Data 2.1. Making data findable, including provisions for metadata </td> <td> R2PI will develop metadata that is compliant with the Data Documentation Initiative (DDI), which is the most relevant international standard for describing the data produced by surveys and other observational methods in the social, behavioural, economic, and health sciences. The Guidelines on FAIR Data Management in Horizon 2020 has listed suggestions on further support for developing DMP. 
Following the listed support, the Consortium has chosen ZENODO as the scientific publication and data repository for the project outcomes. All metadata is stored on ZENODO in JSON-format according to a defined JSON schema. Metadata can be exported in several standard formats such as MARCXML, Dublin Core, and DataCite Metadata Schema (according to the OpenAIRE Guidelines). These types of metadata will be produced and archived through ZENODO: * Data Citation with Digital Object Identifier (DOI) * Keywords to facilitate search and optimize possibilities for re-use </td> </tr> <tr> <td> 2.2 Making data openly accessible </td> <td> * An inventory of cases will be developed. This inventory will consist of a tabular collection of the main attributes/context of each case study. It will synthesize the characteristics of the CEBM selection, with respect to features including size, impact level, composition, industrial sector, impact on environment, society and the economy and upscale-ability. * Full data access policy will be restricted to WP4 participants, in order to protect the sensitive information of the companies. Access to the audio and video recordings of interviews will only be provided to bona fide researchers under a data sharing agreement. * Besides, the R2PI survey data and metadata will be deposited in ZENODO, providing open access to data files and metadata over standard protocols such as HTTP and OAI-PMH. </td> </tr> <tr> <td> 2.3. Making data interoperable </td> <td> * The data will be stored in widely applied formats to allow long-term and wide use. * The data will be made available on ZENODO under a CC-BY licence. A 24-month embargo period will be applied to allow research findings to be written up. </td> </tr> <tr> <td> 2.4. Increase data re-use (through clarifying licences) </td> <td> * Data objects will be deposited in ZENODO under open access to data files and metadata, permitting its use and reuse, as well as protecting privacy of its users. 
* The aggregated dataset will be disseminated as soon as possible through the project website and other R2PI dissemination activities (WP9). In the case of the underlying data of a publication, this might imply an embargo period for green open access publications. </td> </tr> <tr> <td> 3\. Allocation of resources </td> <td> * Overall, the project coordinator (CSCP) will direct the data management process, with the Executive Management Board of the project responsible for ensuring that metadata production, cross-checks, back-up and other quality control activities are maintained. The lead researchers of the respective tasks will be responsible for routine supervision of the dataset development. * In principle, all partners are responsible for research data generation, metadata production and data quality, coordinated by the task leader (Carbon Trust). * Dataset storage, backup, archiving and sharing will be in the majority of cases the responsibility of the partners who own the data and/or the servers in which they will be stored. </td> </tr> <tr> <td> 4\. Data security </td> <td> The data will be backed up regularly, including * sharing data through ZENODO, where data files and metadata are backed up nightly and replicated into multiple copies in the online system; * uploading research data to Freedcamp 4 , the project management system used by R2PI consortium; * regular email sharing with the partners to ensure up-to-date versions are stored on partners’ server; * requesting the task leaders to back up qualitative data on a regular basis and include clear labelling of versions and dates in metadata Extra resources, such as physical storage media and cloud, are needed to </td> </tr> </table> <table> <tr> <th> </th> <th> accomplish the storage and maintenance activities described above. </th> </tr> <tr> <td> 5\. 
Ethical aspects </td> <td> * **Informed consent** : Prior to the interviews, survey and workshops, informants will be given enough information about their involvement and what it means for them to be able to make an informed decision as to whether they wish to participate. The principle of informed consent will also apply to the storage of their data and the use of their data in any analyses or reports/publications. * **Confidentiality** : throughout all stages of the research process (including all stages of data sharing, analysis, report-writing etc.) the identity of participants will be concealed, unless prior consent to reveal these names is granted beforehand by participating firms. * All research and all data storage will be closely monitored for compliance with the appropriate ethical guidelines. Each researcher will be responsible for reviewing the ethical concerns that could be raised in relation to each and every element of their research. In doing so they will have regard to the ethical guidelines of their professional organisation, their institution, and pan-European guidelines. In all cases, where there are a number of different guidelines, all research will conform to the more stringent of the available criteria. * As it is impossible to predict with 100% certainty the ethical issues that will arise, the Executive Board will encourage debate within and between work packages on ethical standards in research and what they require in this project. </td> </tr> <tr> <td> 6\. Other </td> <td> • N/A </td> </tr> </table>
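The metadata handling described in sections 2.1–2.2 of both dataset tables (JSON records on ZENODO carrying a DOI citation and keywords for re-use) can be illustrated with a minimal record. Field names below follow ZENODO's public deposition-metadata vocabulary (`title`, `upload_type`, `creators`, `keywords`, `license`, `access_right`); the concrete values are invented for illustration and are not an actual R2PI deposit.

```python
import json

# Minimal sketch of a ZENODO-style deposition metadata record.
# All values are hypothetical; only the field names follow ZENODO's vocabulary.
metadata = {
    "metadata": {
        "title": "R2PI Economic Actor Survey (aggregated dataset)",
        "upload_type": "dataset",
        "description": "Aggregated survey responses on circular economy practices.",
        "creators": [{"name": "R2PI Consortium"}],
        "keywords": ["circular economy", "business models", "survey"],
        "license": "cc-by",       # matches the CC-BY licence stated above
        "access_right": "open",   # open access to data files and metadata
    }
}

# Serialize for deposit, then parse back to confirm the record round-trips.
payload = json.dumps(metadata, indent=2)
record = json.loads(payload)
print(payload)
```

Because the record is plain JSON, it can later be exported by the repository into standard formats such as Dublin Core or the DataCite Metadata Schema, as the tables above note.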
1196_EUCalc_730459.md
# Executive Summary

This deliverable outlines the first version of the EU calculator Data Management Plan. The DMP as outlined only deals with process data (not personal data gathered during the EU calculator workshops). Accordingly, this document provides:

1. the characterization of the main types of data expected to be collected and produced during the time frame of the project,
2. the list of licences detailing data re-use policies and intellectual property rights to be adopted,
3. the list of repositories and strategies to make the data accessible after the termination of the project,
4. the first version of the metadata standards to be adopted by the EU calculator both for collected and produced data,
5. the first outline of model documentation guidelines.

# Introduction

## Importance of the plan and objectives

The appropriate management of data is an essential, often forgotten, undertaking of responsible research. At the start of a new research project, it is important to lay down the basic principles relating to data and data management. The creation of a shared Data Management Plan (DMP) at the beginning of the project meets the requirement of informing the involved partners about important procedures and processes regarding data collection, processing, storage and distribution. Furthermore, the DMP is pivotal in guaranteeing that the data collected and processed can be easily and properly shared within the consortium and distributed beyond the project lifetime. A data management plan helps achieve optimal handling, organizing, documenting and enhancing of research data. It is particularly important for facilitating data sharing, ensuring the sustainability and accessibility of data in the long term, and allowing data to be reused for future research. For maximum effectiveness, the DMP must start when research is being designed and needs to consider both how data and information will be managed during the research and how they will be shared afterwards.
This involves thinking critically about how research data can be shared, what might limit or prohibit data sharing, and whether any steps can be taken to remove such limitations. In the context of the European Calculator project, plans regarding the usage of data and model documentation started during its preparation phase. The time and thought devoted to data management issues were preliminary, and hence it is now time to undertake a more concerted effort in elaborating the DMP for the European Calculator project. The DMP for a research project in the H2020 programme should elaborate on the following aspects (without any particular order of importance):

* Handling of research data during and after the end of the project
* What data will be collected, processed and/or generated
* Which methodology and standards will be applied
* Whether data will be shared/made open access and how data will be curated and preserved (including after the end of the project)

These aspects are a mandatory requirement in the Programme Guidelines on FAIR Data Management in Horizon 2020 1 . The guidelines from the European Commission on the requirements for the DMP focus on the issues of data documentation, standards applied and data re-use. During the lifetime of the European Calculator project the DMP will be updated in D11.7 (Month 35). The DMP described in the sections below is to be taken as a first-iteration document to synchronize the knowledge of partners on their data-related responsibilities and to outline the major guiding principles of the EU calculator project in regard to its data policy. In addition, future iterations will also explore potential synergies of the EU calculator DMP with those being developed by INNOPATHS and REINVENT.

## FAIR data principles

The open data policy of the European Union favours the implementation of the G8 Open Data Charter 2 and the FAIR Guiding Principles 3 for scientific data. 
The latter highlights core principles as enumerated below:

**To be Findable:**

* Data are assigned a globally unique and persistent identifier
* Data are described with rich metadata
* Metadata clearly and explicitly include the identifier of the data they describe
* Data are registered or indexed in a searchable resource

**To be Accessible:**

* Data are retrievable by their identifier using a standardized communications protocol
* The protocol is open, free, and universally implementable
* The protocol allows for an authentication and authorization procedure, where necessary
* Metadata are accessible, even when the data are no longer available

**To be Interoperable:**

* Data use a formal, accessible, shared, and broadly applicable language for knowledge representation
* Data use vocabularies that follow FAIR principles
* Data include qualified references to other data

**To be Reusable:**

* Data are richly described with a plurality of accurate and relevant attributes
* Data are released with a clear and accessible data usage license
* Data are associated with detailed provenance
* Data meet domain-relevant community standards

The EU calculator project will adhere to these principles during the elaboration and fulfilment of its DMP.

# Data origin and type

The European calculator project will make use of a number of heterogeneous data sources that can be broadly classified as external and internal to the project. By external it is meant that datasets are acquired from institutions external to the European calculator consortium. By internal it is meant that the data are owned by the partners or generated within the European Calculator activities. As a rule, external open datasets that are regularly updated will be favoured, although one cannot rule out the possibility of the project requiring particular external datasets subject to licensing. 
## External data

### Open sources

The European calculator project will connect to open data sources that are regularly maintained and updated by European and global institutions. For example, it is foreseen to rely substantially on Eurostat and OECD data, both as model baseline and as input for statistical relations to quantify some of the model dynamics. External data repositories such as the Database on Shared Socioeconomic Pathways hosted at IIASA 4 and climate data from initiatives like CORDEX 5 and the ODYSEE database 6 are also foreseen to be accessed and data retrieved during the running time of the project. Databases from existing and closed EU-funded projects such as the Heat Road Maps 7 project will also be scanned for their potential to supply the European calculator project with data. For example, datasets from the TRACCS 8 project or the EU Buildings Database 9 are also expected to be used during the project.

### Paying sources

At the time of writing there is not yet a clear indication that datasets falling into this category are required. This section will be updated in the next iteration of the ECDMP.

## Internal data

Internal datasets refer mostly to the data generated during the lifetime of the project and are foreseen to comprise mostly model outputs and expert opinions gathered during the sectoral expert-consultation workshops. Regarding the latter, a common procedure is in place to guarantee confidentiality of personal data (see Del. 12.1). The expert consultation workshops will supply the team of the European Calculator with opinions on the modelling approaches undertaken and evidence-based support for the choice of particular ambition levers. To a lesser extent, datasets and model processes owned by the institutions comprising the European Calculator project will also be used. 
For example, published datasets like the BPIE Building observatory database or dietary patterns and decarbonisation pathways will be used to inform on the development of the model. ## Data types The following section describes the current data types forecast to be generated during the project. Note that this is a preliminary judgement based on the author’s understanding of the EU Calculator model. This section will be updated periodically as part of the ECDMP life cycle. <table> <tr> <th> **Identifier** </th> <th> **Label** </th> <th> **Description** </th> <th> **Type** </th> <th> **Main responsible** </th> <th> **Access** </th> </tr> <tr> <td> **OTS** </td> <td> Observed time series (historical data) </td> <td> Observations of climate, socio- economic variables and emissions. </td> <td> Numeric </td> <td> WP leaders of the respective modules. </td> <td> Public </td> </tr> <tr> <td> **FTS** </td> <td> Future time series (generated data) </td> <td> Projections of climate, socio- economic variables and emissions. </td> <td> Numeric </td> <td> WP leaders of the respective modules. </td> <td> Public </td> </tr> <tr> <td> **LL** </td> <td> Levels of levers </td> <td> Levels of technology or lifestyle ambition. </td> <td> Numeric </td> <td> WP leaders of the respective modules. </td> <td> Public </td> </tr> <tr> <td> **CP** </td> <td> Constant parameters </td> <td> Time-invariable parameters required for the model. </td> <td> Numeric </td> <td> WP leaders of the respective modules. </td> <td> Public </td> </tr> <tr> <td> **MC** </td> <td> Model/module code </td> <td> Source code of modules and model. </td> <td> Code </td> <td> WP leaders of the respective modules & CLIMACT. </td> <td> Public </td> </tr> <tr> <td> **MD** </td> <td> Model/module documentation </td> <td> Documentation of model/module code </td> <td> Text </td> <td> WP leaders of the respective modules & CLIMACT. 
</td> <td> Public </td> </tr> </table> _Table 1 – Types of data considered in the European Calculator project._ # Metadata standards and data formats All the data used within the project will be available using non-proprietary formats and documented accordingly via the use of extensive metadata descriptions and EU-calculator naming conventions. The metadata descriptions will contain the required elements to guarantee that data are easily discovered. Table 2 enumerates and describes the foreseen metadata elements to be used when documenting Observed Time Series (OTS), Future Time Series (FTS), Level of Levers (LL) and Constant Parameter (CP) data in the EU calculator project. <table> <tr> <th> **Attribute name** </th> <th> **Description** </th> </tr> <tr> <td> _**ID** _ </td> <td> Unique identifier of the dataset. </td> </tr> <tr> <td> _**Title** _ </td> <td> Dataset title. </td> </tr> <tr> <td> _**Summary** _ </td> <td> Abstract related to the title attribute. </td> </tr> <tr> <td> _**Variable** _ </td> <td> Short variable name. </td> </tr> <tr> <td> _**Unit** _ </td> <td> Unit of the variable. </td> </tr> <tr> <td> _**Activity** _ </td> <td> Name of the project. </td> </tr> <tr> <td> _**Tags** _ </td> <td> List of keywords commonly used to describe the subject. </td> </tr> <tr> <td> _**Frequency** _ </td> <td> Time frequency of the variable. </td> </tr> <tr> <td> _**Period and reference** _ </td> <td> Time period for which the variable was calculated and respective reference year. </td> </tr> <tr> <td> _**Institution** _ </td> <td> URL of the home page of the institution compiling or producing the data. </td> </tr> <tr> <td> _**Contact** _ </td> <td> Email contact of the main responsible for the data as compiled or produced for the EU calculator project. </td> </tr> <tr> <td> _**Contributors and role** _ </td> <td> Any name of person contributing for the data compiled or produced for the EU calculator project, as well as the respective role. 
</td> </tr> <tr> <td> _**Methods summary** _ </td> <td> Brief description of the methodology used to compile/calculate the data. </td> </tr> <tr> <td> _**Data filling** _ </td> <td> Description of the approach to fill in missing country data. </td> </tr> <tr> <td> _**Source data** _ </td> <td> Any relevant sources used to compile or produce the data. </td> </tr> <tr> <td> _**Quality control** _ </td> <td> Description of the quality control process before data publication. </td> </tr> <tr> <td> _**Comment** _ </td> <td> Miscellaneous information about the data/methods used to derive the dataset. </td> </tr> <tr> <td> _**References** _ </td> <td> Any additional references. </td> </tr> <tr> <td> _**Date created** _ </td> <td> Date of data creation (YYYY-MM-DD). </td> </tr> <tr> <td> _**Data type** _ </td> <td> Type of data. </td> </tr> <tr> <td> _**Workpackage and task** _ </td> <td> Project WP and task from which the data originates. </td> </tr> <tr> <td> _**Version status** _ </td> <td> Version of the data and its status for usage. </td> </tr> </table> _Table 2 – Metadata elements for OTS, FTS, LL and CP data._

Datasets on the Levels of Levers (LL), Observed/Future Time Series (OTS/FTS) and Constant Parameters (CP) will be made publicly available following the CSV or XLS tabular data standard. OTS refers to historical data collected from sources. The difficulty is to find data for every country and to fill the gaps. These data are used as input for the EU calculator model. In this respect it should be noted that 1) only credible sources should be used that have primary/secondary access to the data, i.e. that own/oversee the system producing/recording the data or are legally obliged to collect data (statistical offices), and 2) where possible, data should be taken from EU-level/international bodies that gather the data from member-state organizations by law, such as Eurostat, the IEA or the World Bank. Data coded as FTS are generated by the model. 
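As a minimal illustration of how the Table 2 elements could accompany a published CSV dataset, the sketch below writes a JSON sidecar record. The attribute names follow Table 2, but all values (identifier, title, URL, contact) are hypothetical placeholders, not real project data, and JSON as the sidecar format is an assumption rather than a prescribed project standard.

```python
import json

# Hypothetical metadata record using a subset of the Table 2 attributes;
# every value below is illustrative only.
record = {
    "ID": "EUCALC-OTS-0001",
    "Title": "Final energy demand of households",
    "Summary": "Observed final energy demand per EU country.",
    "Variable": "energy_demand_households",
    "Unit": "TWh",
    "Activity": "EU Calculator",
    "Tags": ["energy", "households", "demand"],
    "Frequency": "annual",
    "Period and reference": "1990-2015, reference year 2015",
    "Institution": "https://example.org",   # placeholder URL
    "Contact": "data.owner@example.org",    # placeholder contact
    "Data type": "OTS",
    "Version status": "v0.1 draft",
    "Date created": "2018-01-01",
}

# Write the record as a JSON sidecar next to the CSV data file.
with open("energy_demand_households.meta.json", "w") as fh:
    json.dump(record, fh, indent=2)
```

Stored this way, each CSV/XLS dataset carries its own machine-readable description, which supports the discoverability goals stated above.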
FTS data form the "matrix of possibilities" that will be used by the Web application; hence, these data are an output of the model and an input of the Web application. LL data are created by each WP in cooperation with the stakeholders. The LL data refer to scenarios built for each lever and are therefore inputs of the model. Finally, CP data refer to constants (physical, geographic, etc.; for example country entity, mass-to-kcal conversion) that are required as input for the model. The documentation of these data follows a template whose first version is described in section 8. Module Documentation (MD) data will be made available in PDF format. As for the Model Code (MC), this will be made available in the KNIME and Python formats.

* KNIME is an open source data analytics platform ( _https://www.knime.com/_ ). It integrates various components for machine learning and data mining and possesses a graphical user interface that allows the assembly of nodes for data preprocessing (e.g., extraction, transformation, etc.), modeling, data analysis and visualization.
* Python is a high-level programming language for general-purpose programming. It is widely used in the scientific community given its scalability and philosophy emphasizing code readability.

# Policy of data re-use

## Open source licence

The data produced in the European Calculator project will be stored as a comprehensive database and are therefore eligible for intellectual property rights (IPR) and subject to licensing. When it comes to intellectual property rights and licences for data, the central notion is the database as “a collection of independent works, data or other materials arranged in a systematic or methodological way and individually accessible by electronic or other means” 10 . The database notion is not restricted to a data collection stored in traditional database management systems; it also relates to data stored in a file and organized in a well-structured manner.
The data resulting from the European Calculator will be published under the Creative Commons Attribution International License **CC-BY-4.0** ( _https://creativecommons.org/licenses/by/4.0/_ ) and the Open Data Commons Attribution Licence **ODC v1.0** ( _https://opendatacommons.org/licenses/by/_ ). These licences are permissive and do not include a copyleft clause 11 . They allow sharing (copying, redistributing and using the data) as long as the user entity gives appropriate credit.

# Data storage and preservation

## Model data storage

Data used in the development of the EU calculator model will be stored online using the Amazon S3 ( _https://aws.amazon.com/en/s3/_ ) solution. Inputs for the model, covering the data described in section 3 (Data origin and type) and documented according to the standards described in section 4, will be stored in a dedicated EU calculator repository. The details on how to upload both the data and metadata will be made available in the next iteration of the DMP.

## Long-term storage plan

The data of the European Calculator project will be stored in a research data repository like PANGAEA ( _www.pangaea.de_ ). PANGAEA is a free publishing repository for environmental data (although some small fees might apply in the case of data from big projects) with a Digital Object Identifier (DOI) service. From this generic storage place for data, the EU calculator team will reach out to other data repositories in order to increase the visibility and secure long-term preservation of our outputs by exploring the following possibilities:

* Linking the data sets produced in the European calculator to OpenEI.org ( _http://en.openei.org/wiki/Main_Page_ ). The Open Energy Information (OpenEI.org) initiative is a free, open source knowledge-sharing platform created to facilitate access to data, models, tools, and information that accelerate the transition to clean energy systems through informed decisions. 
OpenEI strives to make energy-related data and information searchable, accessible, and useful to both people and machines.

* Linking the data sets produced in the European calculator to the European Union's Open Data Portal ( _https://data.europa.eu/euodp/en/data/_ ). The European Union Open Data Portal is the single point of access to a growing range of data from the institutions and other bodies of the European Union. The portal aims to promote their innovative use and unleash their economic potential. It also aims to help foster the transparency and the accountability of the institutions and other bodies of the European Union. The Open Data Portal is managed by the Publications Office of the European Union.
* Uploading our datasets to the Open Energy Modelling Initiative ( _www.openmod-initiative.org/_ ). The Open Energy Modelling Initiative is more in line with the thematic focus of the European Calculator and hence the project outputs could gain more visibility in the community if announced through this channel.

## Model documentation

One of the most important criteria of the EU Calculator is the transparency of the model. This can only be achieved by allowing anyone to understand the model and to find the source of all data we use. The data we collect evolve quickly, so it is important to keep track of the version of every dataset we use. Finally, every assumption/hypothesis we make influences the final result of the calculator, so we need to track and explain them in detail.

* For Observed Time Series data collected as input of the model, we need to document where we found them, the date, the owner, and the method used to fill missing data. 
* For Future Time Series built on expert opinion and stakeholder consultations, we need to document the critical hypotheses made to build our matrices of scenarios;
* For Levels of Levers data, it is crucial to track the assumptions made to define the lifestyle ambition or the levels of technology;
* For Model Code, everyone should be able to understand the code (KNIME or Python) without having to know how to code. This is only possible by documenting the code with thorough Model Documentation (MD) inside the code itself.

### Model versioning

The EU Calculator is a large project with several complex modules/components on which multiple developers work together at the same time. It is simply not feasible to keep track of every modification and to be able to go back to an older version (if needed) without version control. Multiple Version Control Systems (VCS) are on the market. We propose to use the popular and open source Git system in the EU Calculator project and to host our code using the Bitbucket solution ( _https://bitbucket.org/product_ ). "Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows." ( _https://git-scm.com/_ ). Git will allow us to keep track of the changes in the code, but it is the task of every developer to document their code and the version they are pushing to the server. The code of the European Calculator can be found at the following address (you need to be registered to access it): _https://bitbucket.org/eucalcmodel/_

# Responsibilities and duties

Each work-package leader institution is responsible for supervising the creation, compilation and adequate documentation of data during the lifetime of the project. 
This includes making the data available to the rest of the team, as well as guaranteeing that the data compiled or generated adhere to the metadata and format standards described in the ECDMP. PIK will support partners with questions regarding the metadata standards. Each work package is also responsible for keeping the module documentation for the EU calculator model up to date and compliant with the template provided. CLIMACT is responsible for supervising the model documentation, curating the code and making it available in the predefined format.
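The "Data filling" metadata element of Table 2 asks each dataset owner to document how missing country data are filled. As a purely illustrative sketch, the interpolation approach and the numbers below are assumptions for illustration, not the project's prescribed method; interior gaps in an annual series could, for instance, be filled linearly:

```python
# Sketch of one possible "Data filling" approach for OTS country data:
# linear interpolation over interior gaps (None values) in a yearly series.
def fill_gaps(series):
    """Linearly interpolate interior None gaps in a list of floats."""
    filled = list(series)
    n = len(filled)
    for i in range(n):
        if filled[i] is None:
            # Find the nearest known values before and after the gap.
            lo = i - 1
            while lo >= 0 and filled[lo] is None:
                lo -= 1
            hi = i + 1
            while hi < n and filled[hi] is None:
                hi += 1
            if lo >= 0 and hi < n:
                frac = (i - lo) / (hi - lo)
                filled[i] = filled[lo] + frac * (filled[hi] - filled[lo])
    return filled

# Invented annual demand series with two missing years.
demand = [10.0, None, None, 16.0, 18.0]
print(fill_gaps(demand))  # -> [10.0, 12.0, 14.0, 16.0, 18.0]
```

Whatever method is actually chosen per dataset (interpolation, proxy countries, statistical imputation), the point is that it must be stated explicitly in the "Data filling" field so that users can judge the filled values.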
Deliverable record: https://phaidra.univie.ac.at/o:1140797 (Horizon 2020)
1197_EU-MACS_730500.md
# 1\. INTRODUCTION

## The study

To support further product development and effective widespread uptake of climate services, as a means to boost mitigation of and adaptation to climate change as well as capabilities to cope with climate variability, the European Commission has included several actions in its current research programme Horizon 2020 (H2020). Essentially these actions follow from the logic of implementing the European Research and Innovation Roadmap for Climate Services (cf. European Commission, 2015). EU-MACS and its twin project MARCO deal with analysis of the climate services market. In addition, demonstration calls were launched on the added value of climate services for supposedly high-value-added sectors with hitherto little uptake of climate services (SC5-01-2016-2017), while other actions focus more on networking activities to better interlink relevant players, such as the Coordination and Support Action (SC5-05b-2015) called Climateurope. In addition, the ERA-NET for climate services (ERA4CS) is a programme that contains both testing of particular types of climate services in selected sectors and exploration of suitable climate service types for selected sectors. An extremely important sub-programme in H2020 is the COPERNICUS Climate Change Service (C3S) programme, which aims to generate a very comprehensive, coherent and quality-assured climate data set meant to support mitigation and adaptation planning, implementation and monitoring. In due course, capabilities to cope with (current) climate variability are also addressed. In this framing, EU-MACS – European Market for Climate Services – will analyse market structures and drivers, obstacles and opportunities from scientific, technical, legal, ethical, governance and socioeconomic vantage points. The analysis is grounded in economics and social sciences, embedding innovation theories on how service markets with public and private features can develop, and how innovations may succeed. 
## The scope and remit of this report

The project EU-MACS does not produce new climate services itself, nor is it engaged in the processing of climate data or impact data as such. Instead, EU-MACS collects different types of (meta)information regarding the ‘field of climate services’ and the ‘climate services market’ (the market being a subset of the field). Apart from literature study, the typical ways of collecting information are surveys, interviews, workshops and internet-based interaction formats. In addition, the project assembles stakeholder lists based on snowballing among the consortium partners and some key contacts, contacts during outreach activities, and stakeholder feedback via the website and Twitter. Furthermore, purpose-designed interaction formats, such as Living Lab embedded workshops and web-based explorations on climate service design preferences, are used. Given the nature of these data, not only maximization of access to research data is an issue, but also data protection considerations. As regards data protection, two types of potential limitations on openness play a role: (1) personal contact information, and (2) attribution of characteristics or opinions to participating organisations or their representatives. The implication is that for a part of the collected information only meta-information can be provided freely, whereas the actual research files (e.g. interviews) can only be made available under stringent conditions (such as ‘view only’) and only for the academic purpose of peer review and duplication. This report is meant to give the consortium partners guidance on how to collect, store, share and describe data with the aim of promoting the objective of open access to research data, under the conditions of the Grant Agreement and the Consortium Agreement, while abiding by the EU regulation on data protection, (EU) 2016/679 1 . 
In addition to providing guidelines, the project EU-MACS also pursues an open data pilot in order to tangibly promote access to research data via EUDAT 2 . This Deliverable is a living document, which may be revised during the project. The first version will be formally submitted as a Deliverable to the EC Project Portal.

## Structure of the report

Chapter 2 briefly reviews the principal guidelines regarding open data access and data protection for this project. Chapter 3 gives a detailed account of the collected and processed information. Chapter 4 presents the guidelines for properly assembling and storing the project’s datasets. Chapter 5 describes the implementation steps planned in order to effectively support open data access by means of EUDAT, while honouring applicable data protection principles. Annex 1 provides an insight into the description of the metadata.

# 2\. PRINCIPAL GUIDELINES

## Open data

* Information generated by the project EU-MACS, either as intermediate or as final product, should as much as possible be made available for reuse by third parties;
* To make availability meaningful for third parties the information should be:
  * Easy to find and identify
  * Described in accompanying meta-information in order to adequately facilitate the judgement of a potential user regarding the usefulness of the considered data; this should include information on the origin, observation technology, and post-processing of the information
  * Clearly organised in datasets with adequate use of object/variable labels and definitions
  * Submitted to a commonly agreed quality control prior to definitive submission
  * Accurate (at the time of its collection)
* Information planned to be made accessible for third parties should be reviewed with respect to data protection regulation _prior to its submission_ to facilities meant for retrieval by third parties and should be sufficiently anonymised or, conversely, sections of the considered datasets should not be submitted to 
such facilities; in the latter case meta-information disclosing that such information exists and is in possession of one or more consortium partners can be submitted to such facilities; * All consortium parties should inform the consortium coordinator about the data collected and processed in the EU-MACS project by using a preformatted table provided by the coordinator; this meta-information table will be made available to all consortium partners; the coordinator should see to it that the meta-information table is regularly updated during the project; * If there are no objections from the point of view of the Grant Agreement and of the Consortium Agreement, and if the considered data are not subject to sharing limitations due to Regulation EU 2016/679, collected and processed data are supposed to be made available for third parties by means of the Open Data Pilot (see below). An exception can be made for those data of which a majority of the consortium is of the opinion that there is no value added for third parties or if the manager of the data repository service is not interested in the submission; * For data that cannot be shared, meta-information will be made available for third parties, as much as possible; * After submission of data for the Open Data Pilot, and after quality control has shown that all conditions are met, consortium parties are not obliged to provide further support free of charge with respect to possible maintenance of the submitted data. An exception to this principle applies when preventable inadequacies are detected after submission and quality control. 
* Files to be made accessible for third parties will be declared under the Creative Commons Licensing system; the recommended options are 3 :
  * Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
  * Attribution 4.0 International (CC BY 4.0)

## Open data pilot

* In the Open Data Pilot, data generated in the EU-MACS project will be uploaded to the EUDAT facility ( _https://www.eudat.eu/_ ) and/or to the Finnish Social Science Data Archive (FSD; _http://www.fsd.uta.fi/en/data/index.html_ ) depending on the nature of the files, while an effort will be made to advertise the availability of these data to third parties;
* It is the ambition of the Open Data Pilot to upload all eligible data generated in the EU-MACS project, where eligibility refers to limitations following from the Grant Agreement, the Consortium Agreement, or the Regulation EU 2016/679;
* In order to be able to upload eligible data in a systemized way, a set of preparatory actions should be carried out related to selection, quality assurance, data protection review, meta-information, consent from third parties, and formatting of datasets;
* In order to formulate guidelines for selection, quality assurance, data protection review and meta-information, the Executive Board will nominate an Open Data Pilot (ODP) committee; the ODP committee will oversee the adequate implementation of the Open Data Pilot, with support from the project’s information integrity officer;
* The ODP committee will manage a table of potentially eligible data in the common workspace of the EU-MACS project, and ensure it is regularly updated regarding the declaration of eligible datasets;
* The ODP committee can delegate the actual technical preparations and uploading of datasets to the designated partner(s).

# 3\. TYPES OF DATASETS IN EU-MACS

## Survey data

Up to now, one survey has been held by means of an online questionnaire. The questionnaire contains 42 questions. Some questions are multistage and others allow for more than one entry. 
The questionnaire addresses both climate service providers and users, for whom partly common and partly distinct questions are presented. Apart from respondent characterization, all questions have to do with awareness of, experiences with, and pre-conditions for climate services and their development. The online questionnaire starts with an explanation of the purpose of the questionnaire and the characteristics of the EU-MACS project. The response types include numerical information, predetermined choice options, and text. The current number of valid questionnaires is about 150, but we aim to raise this to at least 200. The questionnaire results have been collected in an EXCEL file. The respondents are anonymous, and only needed to indicate the type of organisation and country from which they operate. In case of future surveys with identifiable respondents, the publishable file contains a respondent sequence number, while the actual respondents’ identities are stored separately. More information on the survey and its results can be found in Deliverable 1.1, and to some extent in Deliverable 1.2 ( _http://eu-macs.eu/outputs/#_ >> Reports).

## Interviews

Various consortium partners have conducted interviews, mostly by phone or video connection, sometimes face-to-face. Most interviews are of a semi-structured nature. Interviewees are informed in advance (by email) about the purpose and context of the interview, and are requested to sign and return an informed consent form (scanned pdf). Interview summaries are sent to the interviewees for comments and approval. Interviews have also been recorded (voice). Neither the recordings nor the individual interview summaries are public research material, but they may be made accessible for academic peer review purposes. Syntheses of clusters of interviews can be considered for publication. 
Also the basic sets of questions for the semi-structured interviews can be published. Synthesizing interpretations of the interviews are presented in various WP1 deliverables.

## Workshops

The Work Packages 2, 3, and 4 (hereafter WP2, WP3, WP4) organize workshops with representatives from the focus sectors: finance (WP2), tourism (WP3) and urban planning (WP4). The programme and activity descriptions (assignments) of these workshops are available as internal documents. The same applies to the internal detailed reports of the workshops. Summaries (impressions) of the workshops are published in the project’s communication channels, such as the Newsletter. The programme, activity descriptions (assignments) and the published summaries are eligible for publication and submission of copies to third parties. As with the interviews, the detailed internal reports are only available for academic peer review purposes. Some highlights of the results and their interpretation will be presented in WP2, WP3 and WP4 deliverables.

## Web based interactive explorations (bidding games)

For WP2 and WP3, internet-based serial questionnaires are used to elicit information on inclinations and tendencies of different types of users with respect to climate service product features and provision modes. The anonymized results will be made available for third-party use, provided the respondent numbers are large enough. Summaries of the results and their interpretation will be presented in WP2 and WP3 deliverables.

## Stakeholder registry

The stakeholder registry combines the collected contact information from interviews, workshops, encounters in events, and other contacts, with the exception of the internet-based survey responders. The stakeholder registry is only for internal project purposes and is not supposed to be accessible to people outside the EU-MACS consortium. A protected version of the stakeholder registry is available in the protected common workspace of the project. 
The stakeholder registry will be deleted after the conclusion of the project.

# 4\. PRACTICAL GUIDELINES FOR ASSEMBLY AND STORAGE OF DATASETS

## Survey data

#### Data format

Even though original source data need to be kept, in case errors are found later, the target dataset discussed here is a database format which enables easy use, both as such (e.g. as EXCEL® or ACCESS®) and as a source file for another data handling programme (SAS®, MATLAB®, etc.), and also enables metadata specification.

**Meta data can in principle contain the following variables:**

* Creation date / period
* Creator (name; email)
* Owner(s) (organisation(s) + prime contact)
* Original purpose; project
* Level of openness; allowed types of re-use (incl. license version of Creative Commons 4.0)
* Number of variables
* Variable list (incl. applied units, resolution, observation method, post-processing steps)
* Observation or measurement period
* Number of observations
* Number of respondents (if not same as number of observations)
* Anonymization method
* Kind of quality check (none; visual; simple checks on allowable ranges and plausible scores; …)
* Handling of missing data (none; …)

#### Storage

* During the project: in the common workspace
* In the Finnish Social Science Data Archive (FSD) and linked from the EUDAT service at the latest starting at the end of the project

## Interviews

#### Data format

* Audio and audiovisual files of recorded interviews
* Transcripts of interviews (text file)

#### Meta data

Interview datasets are in principle not public, but their archiving should nevertheless allow for traceability in case of later needs for peer review. For each audio- and audiovisual file, a separate small (text) document has to be made, containing the metadata of the file of interest. In addition, a summary metadata overview of all (audio)visual files and transcripts belonging to the same interview collection can be made.
* Creation date / period
* Creator (name; email)
* Owner(s) (organisation(s) + prime contact)
* Purpose; project
* Level of openness; allowed types of re-use (see Creative Commons 4.0) – if any
* Interviewee(s) – if allowed to disclose
* Length of audio-recording in minutes
* Length of transcript (number of words)
* Used questionnaire (if any)
* Informed consent form

#### Storage

* During the project: transcript text files in the (protected) common workspace; (audio)visual files offline
* Files should be kept available for at least 5 years after the end of the project in case of peer review requests

## Workshops

#### Data format

* Audio and audiovisual files of recorded workshop sessions
* Transcripts of workshop discussions (text file)

#### Meta data

Workshop datasets are in principle not public, but their archiving should nevertheless allow for traceability in case of later needs for peer review. See interviews for suggested metadata contents. For workshop files the precise relevant content is case-specific.

#### Storage

* During the project: in the (protected) common workspace
* Files should be kept available for at least 5 years after the end of the project in case of peer review requests

## Web based interactive explorations (bidding games)

**Data format**

Easily accessible and usable format, i.e. EXCEL®, ACCESS®, or comparable

**Meta data**

Same guidelines as for surveys.

**Storage**

Same guidelines as for surveys.

## Stakeholder registry

**Data format**

Same guidelines as for surveys.

**Meta data**

None, except an indication of openness, i.e. only for project-internal use.

#### Storage

* During the project: in the protected common workspace
* After the project: the dataset should be discarded

# 5\. IMPLEMENTATION STEPS

## Organising data sets and their storage during the project

* WP leaders in cooperation with the coordinator and the information integrity officer will review that the relevant datasets are properly uploaded to the common workspace.
Where necessary, protected subsections and/or keyword protection by file will be applied.

* The established situation will be reviewed by the information integrity officer.
* First, the situation for already existing files will be reviewed in January and February 2018. Subsequently the same procedure will be applied to newly created files.
* The evolution of the stored datasets will be monitored by the information integrity officer.

## Archiving datasets after the project has ended

* Designated eligible files will be transferred to the Finnish Social Science Data Archive (FSD) and announced to EUDAT by the coordinator.
* Designation of eligible files will be prepared by the Open Data Pilot (ODP) committee and approved by the General Assembly of the project. These decisions should be made before the termination of the project (31.12.2018).
* The agreements between FMI, FSD, and EUDAT will be reviewed by the information integrity officer.

# 6\. SUMMARY OF DATA GENERATED DURING THE PROJECT

## Brief description of generated data by type of information

In the various EU-MACS work packages, information was collected and evaluated on the basis of a web survey, interviews and workshops.

#### Web-Surveys

The web survey aimed to collect information from providers and users of climate services on barriers related to the development and uptake/use of climate services in political, economic, social, technological, ethical and legal/regulatory respects. It also addressed topics such as resourcing and quality issues. The survey was organised in two branches, one provider-specific and one user-specific; each branch itself was organised in three loops. The survey consists of 121 questions in total, including open and different forms of closed questions (e.g. multiple choice, rating on 5-point Likert scales, etc.). Participants were routed through the survey based on their responses. None of the participants had to go through all questions.
169 participants took part in the survey, of which 109 considered themselves a provider (which also includes intermediary providers, or so-called purveyors) and 60 a user. None of the participants had to provide any kind of personalized data. The results of the survey are available as an Excel file. In WP2 and WP5, web surveys failed to attract notable numbers of respondents. The modest material from these surveys has been used to provide further support to certain statements and insights in D2.1 and D5.2, but there is no point in uploading these files to a repository.

#### Interviews

Interviews were mainly conducted in the three case study sectors finance, tourism and urban planning, with both providers and users of climate services. Additional interviews were conducted in WP1, related to quality assurance, with climate services providers only, and in WP5 to receive additional views on the conclusions drawn. In many cases the interviews provided a forum to also discuss confidential details, only to be shared in a safe environment and with the knowledge that the shared information will only be used in an aggregated manner. Furthermore, the informed consent form for the interviewees promises confidentiality of the interview. Nevertheless, all interviews have been either recorded (with approved transcripts created afterwards) or transcribed. The information collected during interviews is stored as recorded files and/or (approved) transcribed text documents. In only very few cases the documentation consists only of some notes taken during the interviews.

#### Workshops

Workshops were conducted in two out of three case study sectors, i.e. tourism and urban planning. Participants represented mostly users of climate services, but also a few climate services providers. Main parts of the workshops were dedicated to Constructive Technology Assessment (CTA).
The CTA part of the workshop offered a set of specific viewpoints for considering scenarios of using climate services, while at the same time giving ample space for discussion of aspects stakeholders find important. Another element of the workshops was a discussion of typical business cases for climate services use in the two sectors. Even though proper documentation of the workshops has been done, the workshops did not generate material meriting general availability beyond the processed products and syntheses found in the respective deliverables of WP3 and WP4.

## Proposition for data to be submitted to an open data repository

Not all information collected in EU-MACS through a variety of interaction formats can be made accessible, for the reasons given above. The original files are stored for five years by the organisations that collected the information. Access can be granted upon request to the owner of the information, under conditions in accordance with the Consortium Agreement. However, the original files will under no circumstances be made available for commercial purposes. While the survey does not contain any personalized or confidential information, the interview records and transcripts do. The same applies to proper workshop documentation. Therefore, the following information on the various interactions will be made available:

* The survey will be made available both as a file providing some meta-information and as the Excel file providing all responses from all 169 participants.
* For interviews and workshops, only meta-information will be provided. The meta-information follows the guidelines and principles presented in chapter 4 and is made available as pdf files (see also annex 2), which will be uploaded to the Finnish Social Science Data Archive and announced to EUDAT by the coordinator.
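As an illustration of the chapter 4 guidelines, the meta-information for the survey dataset could be assembled as a simple record. This is a hypothetical sketch, not the project's actual tooling: the field names and most values are assumptions, with only the question count (121) and respondent count (169) taken from this report.

```python
# Hypothetical sketch of a chapter 4 meta-information record for the survey
# dataset; field names and most values are illustrative assumptions.

REQUIRED_FIELDS = [
    "creation_period", "creator", "owners", "purpose", "openness",
    "n_variables", "n_observations", "anonymization_method",
    "quality_check", "missing_data_handling",
]

survey_metadata = {
    "creation_period": "2017",                        # assumed
    "creator": "EU-MACS consortium",                  # assumed contact
    "owners": ["FMI (prime contact)"],                # assumed
    "purpose": "EU-MACS web survey on climate service barriers",
    "openness": "open; Creative Commons 4.0 license version to be specified",
    "n_variables": 121,                               # questions in the survey
    "n_observations": 169,                            # valid responses
    "anonymization_method": "no personalized data collected",
    "quality_check": "simple checks on allowable ranges",       # assumed
    "missing_data_handling": "routing-based skips left blank",  # assumed
}

def missing_fields(record):
    """Return the required meta-information fields absent from a record."""
    return [f for f in REQUIRED_FIELDS if f not in record]
```

A completeness check of this kind could be run before a dataset is uploaded to the archive, flagging records whose chapter 4 fields are still missing.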
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1198_OPEUS_730827.md
# Background This is the deliverable report D9.3 “Data Management Plan” for OPEUS project’s Work Package 9, as required by the project’s Grant Agreement number 730827. It defines the data management processes that will be followed throughout the project’s lifetime. Some of these processes may change or more data types may be collected, which will then be documented in an update of this data management plan as the project evolves. As such, this deliverable is a living document that will be updated and will officially be released to the public as D9.4 “Data Management Implementation” at M30 (i.e. April 2019). # Objective/Aim This deliverable aims to: * demonstrate compliance with all applicable legislation in relation to the data being collected and used * report on the data to be collected for use in the delivery of the OPEUS project * present consortium decisions with respect to making the data FAIR and the respective mechanisms to support these decisions * discuss the security processes to be applied, including data recovery as well as secure storage and transfer of sensitive data * summarise any related ethical aspects e.g. with respect to (sensitive) personal data, informed consent, etc. ## Intended audience This Data Management Plan (DMP) is for all the researchers within OPEUS and it is considered to be a living document. Being a public document, this Deliverable may also serve as a guide for other researchers working on similar initiatives beyond the specific project. # Data Summary The purpose of the data collection/generation and the relationship of the data to the objectives of the OPEUS project: The aim of OPEUS is to develop a simulation methodology and accompanying modelling tool to evaluate, improve and optimise the energy consumption of rail systems, with a particular focus on in-vehicle innovation. 
The main objectives of OPEUS comprise:

* the definition of a simplified but universal energy requirements outlook for European urban rail systems
* the development of a comprehensive rail energy usage simulation methodology
* the development of an energy consumption simulation modelling tool for assessment purposes, applicable to urban, regional, high speed and freight duty cycles
* the further assessment of the role and optimisation potential of driver advisory systems (DAS) in relation to control strategies for different representative duty cycles and traction types
* the further assessment of the potential for energy usage optimisation of novel technologies (e.g. next generation ESSs) and strategies (e.g. engine power-off at low loads)
* the provision of a critique of the energy consumption outlook for railway systems.

To do this, three components can be defined within the OPEUS approach:

* the energy simulation model
* the energy use requirements
* the energy usage outlook and optimisation strategies recommendation.

To fulfil its objectives and achieve its aim, OPEUS needs to access, collect, create, process and manage various data types. As the project develops, it may be necessary to involve additional data; this will be included in the next version of this deliverable, D9.4 “Data Management Implementation”, due for completion at M30 (the final month of OPEUS project delivery).

Types of data collected and/or generated:

* _Data from previous EU-funded collaborative projects:_ The OPEUS concept builds upon an extensive range of knowledge and outcomes generated by a number of key EU-funded, collaborative projects including MERLIN, OSIRIS, CleanER-D, RailEnergy and ModUrbanRoll. The details of their anticipated key inputs to be used are included in the DoA Part B (page 9). The OPEUS consortium does not control datasets from these previous projects, but it will exploit them as may be appropriate to augment the delivery of OPEUS’s project activities.
* _Data provided by / generated by OPEUS consortium partners:_ In delivering an improved simulation tool and testing it within a relevant environment, data from consortium partners will be used, e.g. operational data from STAV.

Dataset description: The following template, _Table 1: Dataset Description Template_, will be used to collate information about, and for describing, the datasets that are used and/or produced by the OPEUS project. The compiled descriptions of these datasets will be provided in D9.4 “Data Management Implementation” at M30 (i.e. April 2019):

_Table 1: Dataset Description Template_

<table> <tr> <th> **Dataset Reference** </th> <th> **OPEUS_WPX_TX.X_vXX:** Each dataset will have a reference generated by combining the name of the project, the Work Package and Task in which it is generated, and its version (for example: OPEUS_WP3_T3.2_v01) </th> </tr> <tr> <td> **Dataset Name** </td> <td> Name of the dataset </td> </tr> <tr> <td> **Dataset Description** </td> <td> Each dataset will have a full data description explaining the data provenance, origin and usefulness. Reference may be made to existing data that could be reused. </td> </tr> <tr> <td> **Standards and metadata** </td> <td> * The metadata attributes list * The used methodologies </td> </tr> <tr> <td> **File format** </td> <td> All formats in which the data is defined </td> </tr> <tr> <td> **Data Origin** </td> <td> Specify the origin of the data (including whether created or collected) </td> </tr> <tr> <td> **Data Size** </td> <td> State the expected size of the data </td> </tr> <tr> <td> **Data Sharing** </td> <td> Explanation of the sharing policies related to the dataset, one of the following options: * **Open** : Open for public disposal * **Embargo** : It will become public when the embargo period applied by the publisher is over. In case it is categorized as embargo, the end date of the embargo period must be written in DD/MM/YYYY format. 
* **Restricted** : Only for project internal use. </td> </tr> <tr> <td> **Data Utility** </td> <td> Outline to whom the dataset could be useful – potential secondary users </td> </tr> <tr> <td> **Archiving and Preservation** </td> <td> The preservation guarantee and the data storage during and after the project (for example: databases, institutional repositories, public repositories, etc.) </td> </tr> <tr> <td> **Re-used existing data** </td> <td> Yes / No. If “Yes”, state the re-used data and how/from where they were retrieved </td> </tr> </table>

Open Access approach: The OPEUS consortium has agreed to follow an “Open Access” approach (as far as possible depending on the specific data type) following the respective Horizon 2020 guidelines, to ensure that the results of the project provide the greatest impact possible. OPEUS will ensure open access to all peer-reviewed scientific publications relating to its results and will provide access to the research data needed to validate the results presented in deposited scientific publications. Publications and research data made available to third parties will not contain any personal information. The following lists the minimum fields of metadata that should come with an OPEUS project-generated scientific publication in a repository:

* The terms: “European Union (EU)”, “Horizon 2020”
* Name of the action (Research and Innovation Action)
* Acronym and grant number (OPEUS, 730827)
* Publication date
* Length of embargo period, if applicable
* Persistent identifier

When referencing open access data, OPEUS will include at a minimum the following statement demonstrating EU support (with relevant information being included into the repository metadata): “The OPEUS project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 730827”. The OPEUS consortium will strive to make its datasets open access.
When this is not the case, the “Data Sharing” element for that particular dataset will describe why access has been restricted. The specific repositories where OPEUS datasets will be held during and after the project will be noted in the “Archiving and Preservation” field of Table 1 (above). In cases where the project partners maintain institutional repositories, these will be listed in OPEUS deliverable D9.4 “Data Management Implementation” at M30 (i.e. April 2019). The project scientific publications and, in some instances, research data will be deposited on the institutional repository, depending primarily on the creator of the publication and on the data in question. In cases where the project partners do not operate publicly accessible institutional repositories, they will use either a domain-specific repository or the EU-recommended service OpenAIRE ( _http://www.openaire.eu_ ) as an initial step in finding relevant repositories for depositing the scientific publications and the respective data. The repository will also include information regarding the software, tools and instruments that were used by the dataset creator(s) so that secondary data users can access and then validate the results. In summary, as a baseline OPEUS partners will deposit:

* Scientific publications – on their respective institute repositories in addition (when relevant) to a public OPEUS repository (such as ZENODO)
* Research data – to the public OPEUS repository collection (when possible)
* Other project output files – to the OPEUS public repository collection (as relevant).

This DMP does not include the actual metadata about the research data being produced in the OPEUS project. Details about building a repository and accessing this metadata will be provided in deliverable D9.4 “Data Management Implementation” at M30.
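A Table 1 dataset description could be captured as a plain record, with the dataset reference checked against the stated convention OPEUS_WPX_TX.X_vXX. This is a sketch only: the exact validation rules and the example field values (other than the reference example OPEUS_WP3_T3.2_v01 from Table 1) are assumptions.

```python
import re

# Sketch of validating the OPEUS dataset reference convention
# OPEUS_WPX_TX.X_vXX; the precise rules (digit counts etc.) are assumptions.
REFERENCE_RE = re.compile(r"^OPEUS_WP(\d+)_T(\d+\.\d+)_v(\d{2})$")

def parse_reference(ref):
    """Return (work_package, task, version) for a valid reference, else None."""
    m = REFERENCE_RE.match(ref)
    if m is None:
        return None
    wp, task, version = m.groups()
    return int(wp), task, int(version)

# An illustrative Table 1 record; field values other than the reference
# example are invented for demonstration.
dataset = {
    "Dataset Reference": "OPEUS_WP3_T3.2_v01",  # example given in Table 1
    "Dataset Name": "Example duty-cycle data",  # invented
    "Data Sharing": "Open",                     # one of Open / Embargo / Restricted
    "File format": "CSV",                       # invented
}

parsed = parse_reference(dataset["Dataset Reference"])
```

Such a check could be applied when compiling the dataset descriptions for D9.4, rejecting references that do not follow the convention.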
# FAIR Data OPEUS will in principle participate in the Open Research Data Pilot (ORDP), but data marked as “Restricted” or under an “Embargo” period will be excluded. To this end, the data that will be generated during and after the project and included in the ORDP should be ‘FAIR’, that is Findable, Accessible, Interoperable and Reusable. These requirements do not affect implementation choices and do not necessarily suggest any specific technology, standard, or implementation solution. The FAIR principles were generated to improve the practices for data management and data curation, aiming to describe the principles to be applied to a wide range of data management purposes, whether for data collection or the data management of larger research projects, regardless of scientific discipline. With the endorsement of the FAIR principles by H2020 and their implementation in the guidelines for H2020, they serve as a template for lifecycle data management and ensure that the most important components of the lifecycle are covered. This is intended as an implementation of the FAIR concept rather than a strict technical implementation of the FAIR principles: * Making data _Findable_ , including provisions for metadata: ▫ The datasets will have very rich metadata to facilitate findability ▫ All the datasets will have a Digital Object Identifier provided by an OPEUS public repository (e.g., ZENODO) ▫ The reference used for the dataset will follow the format: OPEUS_WPX_TX.X_vXX, including clear indication of the related WP, activity and version of the dataset ▫ The standards for metadata will be defined in the “Standards and metadata” section of the dataset description table (see Table 1, above). * Making data openly _Accessible_ : <table> <tr> <th> ▫ </th> <th> Datasets openly available are marked as “Open” in the “Data Sharing” section of the dataset description table (see Table 1). 
</th> </tr> <tr> <td> ▫ </td> <td> The repository in which each dataset is stored, including open access datasets, is mentioned in the “Archiving and Preservation” section of the dataset description table (see Table 1) </td> </tr> <tr> <td> ▫ </td> <td> The “Data Sharing” section of the dataset description table (Table 1) will also include information with respect to the methods or software used to access the data of each dataset </td> </tr> <tr> <td> ▫ </td> <td> The data and their associated metadata will be deposited either in a public repository or in an institutional repository </td> </tr> <tr> <td> ▫ </td> <td> The “Data Sharing” section of the dataset description table (see Table 1) will outline the rules to access the data, if restrictions exist. </td> </tr> </table> * Making data _Interoperable_ : ▫ Metadata vocabularies, standards and methodologies will depend on the repository to be hosted (incl. public, institutional, etc.) and will be provided in the “Standards and metadata” section of the dataset description table (Table 1). * Making data _Reusable_ : <table> <tr> <th> ▫ </th> <th> All the data producers will license their data, if applicable, to allow the widest reuse possible. More details about license types and rules will be provided in OPEUS deliverable D9.4 “Data Management Implementation” at M30. </th> </tr> <tr> <td> ▫ </td> <td> The “Data Sharing” section of the dataset description table (see Table 1) is the field where the data sharing policy of each dataset is defined. By default, the data will be made available for reuse. If any constraints exist, an “Embargo” period or “Restricted” flag will be explicitly raised in this section of Table 1. </td> </tr> <tr> <td> ▫ </td> <td> The data producers will make their data available for third parties within public repositories only for scientific publication validation purposes. 
</td> </tr> </table>

# Allocation of Resources

In order to face the data management challenges efficiently, all OPEUS partners have to respect the policies set out in this DMP, and datasets have to be identified, created, managed and stored appropriately. The OPEUS roles related to the management of the data are:

* The _data controller_, who acts as the point of contact for data protection issues and will coordinate the actions required to liaise between different beneficiaries and their affiliates, as well as their respective data protection agencies, in order to ensure that data collection and processing within the scope of OPEUS will be carried out according to EU and national legislation. The data controller must ensure that data are shared and easily available.
* The _data producer_, which is any entity that produces data within OPEUS’s scope. Each data producer is responsible for the integrity and compatibility of its data during the project lifetime. The data producer is responsible for sharing its anonymised datasets through open access repositories, according to the principles and mechanisms defined in the current document, and is in charge of providing the latest version.
* The _data manager_, who will coordinate the actions related to data management and will be responsible for the actual implementation of the successive versions of the DMP and for compliance with the Open Research Data Pilot (ORDP) guidelines.

As the OPEUS open data will be hosted either by institutional databases or by an open, free-of-charge platform (e.g. ZENODO), no additional costs will be required for hosting the data.

Responsibilities: UNEW, as project coordinator, is responsible for implementing the DMP. In principle, all partners are responsible for data generation, metadata production and data quality. Specific responsibilities are to be assigned depending on the data and the internal organisation in the WPs and Tasks where data is created.
Dataset storage and backup, dataset archiving and sharing will be, in the majority of cases, the responsibility of the partner(s) who own the data and/or the servers on which they will be stored.

# Data Security

Data sharing, storage, backup/recovery and long term data preservation

A “repository” is a mechanism used by a project consortium to make its project results (i.e., publications and scientific data) publicly available and free of charge for any user. Several options are considered/suggested by the EC in the frame of the Horizon 2020 programme to this aim:

* For depositing scientific publications: ▫ Institutional repository of the research institutions ▫ Subject-based/thematic repository ▫ Centralised repository
* For depositing generated research data: ▫ A research data repository which allows third parties to access, mine, exploit, reproduce and disseminate free of charge ▫ Centralised repository

The Consortium is aware of the mandate for open access of publications in H2020 projects and of the participation of the project in the ORDP. The academic institutions participating in OPEUS have their own repositories available, which are in fact linked to OpenAIRE. These institutional repositories will be used to deposit the publications generated by them. A scientific publication and data repository, such as ZENODO (the repository set up by the EC’s OpenAIRE initiative in order to unite all the research results arising from EC-funded projects), will be used for the sharing and preservation of project outcomes. The Consortium will ensure that scientific results that will not be protected and can be useful for the research community will be duly and timely deposited in the chosen scientific results repository, free of charge to any user. Considering ZENODO as the repository, the short- and long-term storage of the research data will be secured, since the data are stored safely in the same cloud infrastructure as research data from CERN's Large Hadron Collider.
It uses digital preservation strategies to store multiple online replicas and to back up the files (data files and metadata are backed up on a nightly basis). Therefore, this repository fulfils the main requirements imposed by the EC for data sharing, archiving and preservation of the data generated in OPEUS.

# Ethical Aspects

There are no ethical issues related to data management in OPEUS, as the project does not handle personal (sensitive) data.

# Conclusions

This deliverable provides an overview of the data that the OPEUS project will produce, together with related data processes and requirements that need to be taken into consideration. The document gives an overview of the anticipated dataset types, defines a set of attributes to be used for describing each dataset, and presents the open access aspects to be followed by the consortium. These include a description, standards, methodologies, sharing and storage methods. The decisions with respect to making the data FAIR and the respective mechanisms to support them are described, and the allocation of resources, data security and ethical aspects are presented. This deliverable is a living document which will be updated during the project lifetime as needed, including more detailed information regarding the collected/generated data. The next official updated version will be released as deliverable D9.4 “Data Management Implementation”, at M30 (i.e., April 2019), providing more information on aspects such as:

* The descriptions of the different datasets, including their reference, file format, standards, methodologies, metadata and repository to be used
* Institutional repositories, in cases where the project partners maintain such a repository
* The use of an appropriate Open Access repository to enable the sharing and reuse of data from the project
* How the data is being curated and will be preserved.
1199_RINGO_730944.md
# 1\. DATA SUMMARY

_State the purpose of the data collection/generation_

ICOS collects observational data on greenhouse gas concentrations and fluxes in atmosphere, ecosystem, and marine environments. This data is targeted to scientific use, to increase our knowledge of the greenhouse gas cycles and the budget of Europe and surrounding regions.

_Explain the relation to the objectives of the project_

The RINGO project aims at improving the methods and data used and generated by ICOS. The improved methods will increase the quality, amount, and FAIRness of ICOS data.

_Specify the types and formats of data generated/collected_

The RINGO project collects several different types of data:

* CO2 ambient mole fraction and ecosystem flux data from the pre-ICOS period that will be reprocessed into higher quality data, very similar to the current ICOS Level 2 data and INGOS datasets for CH4 and N2O. This data is stored in the WMO and Fluxnet community defined comma- and tab-separated data format as clear ASCII text.
* Observation data of vertical profiles of greenhouse gas mole fractions from observations using air cores is stored in a community defined ASCII format; the development, definition, and documentation of this processing is part of RINGO.
* Additional raw data is usually also some comma- or tab-separated ASCII file.

_Specify if existing data is being re-used (if any)_

Raw instrument pre-ICOS data is used to generate the higher quality Level 2 data as described in the previous section.

_Specify the origin of the data_

All raw data is generated by approved pre-ICOS or candidate ICOS instrumentation.

_State the expected size of the data (if known)_

Raw data volume per site is about 40 MB (ecosystem, 20 sites), 20 MB (atmosphere, 10 sites) or several kB (marine, five sites) per day. The Level 2 data products are about 1-20 MB per year.
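The tab-separated ASCII formats described above can be read with standard tooling. The following is a minimal sketch; the column names and numbers are invented for illustration and do not reproduce the WMO/Fluxnet format definitions.

```python
import csv
import io

# Illustrative sketch of parsing a tab-separated ASCII data file of the kind
# described above; column names and values are invented, not the real format.
sample = (
    "timestamp\tco2_mole_fraction_ppm\n"
    "2016-01-01T00:00\t401.3\n"
    "2016-01-01T01:00\t402.1\n"
)

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
values = [float(r["co2_mole_fraction_ppm"]) for r in rows]
```

The same reader handles the comma-separated variant by setting `delimiter=","`, which is one reason these community formats are easy to re-use.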
_Outline the data utility: to whom will it be useful_ :

* Scientists in many fields, from climatology, biogeochemistry, biology, agriculture, forestry, etc.
* General public.
* Scholars, students.
* Policy makers.

## 2.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA [FAIR DATA]

_Outline the discoverability of data (metadata provision)_

All raw data and Level 2 data products from RINGO will be published together with the relevant metadata through the ICOS Carbon Portal and follow its FAIR principles and mechanisms. The metadata is exposed through the CP search app, the B2FIND service, through Datacite via DOIs of the collections of the datasets, and in the future other portals of portals (e.g. EOSC, GEOSS).

_Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?_

At ingestion, all data objects get assigned a Persistent Identifier (PID) based on the handle system. This PID contains the sha-256 checksum of the data object and resolves into a landing page that contains the relevant metadata and a link to the data object. Collections of data, per station and per release of one or more years, will be generated and minted a Datacite DOI, and the metadata of these collections, including attribution information, will be stored at Datacite.

_Outline naming conventions used_

Each data object has an object specification that links to the metadata that describes the data format and the data content ontology (data columns, variables, units). File names are preserved but are considered redundant for machine-to-machine interpretation and not interpreted. Each community follows its own conventions for file naming to support internal processing.

_Outline the approach towards search keywords_

Search keywords will be linked to the data object specification.
All metadata elements can be searched for as keywords using the open SPARQL endpoint and the Carbon Portal search app.

_Outline the approach for clear versioning_

Part of the metadata for each data object is the data version. Each data object of higher version links to its previous (and eventual next) version.

_Specify standards for metadata creation (if any). If there are no standards in your discipline, describe what metadata will be created and how_

All metadata is open linked data based on RDF, accessible through an open SPARQL endpoint (W3C standards). Naming of the metadata entries follows ISO 19115 where possible. Collections and their metadata will also be findable through the Datacite search engine.

## 2.2 MAKING DATA OPENLY ACCESSIBLE [FAIR DATA]

_Specify which data will be made openly available? If some data is kept closed provide rationale for doing so_

All raw and higher level data products originating from the RINGO project are provided following the ICOS data licence, which is CC4BY. Raw data (Level 0) objects are not directly downloadable through the web pages: there we ask users to contact the thematic center first, so that we can provide them with the best service to interpret the data and learn about their actual data requirements, but access is allowed and will be given without further conditions. Experimental data from the RINGO project is, in principle, kept closed for users from outside the relevant RINGO task until the end of the project, but will be provided with open access according to the ICOS Data Policy after the project ends. Descriptive metadata will always be openly available for all data.

_Specify how the data will be made available_

All data is made available through the ICOS Carbon Portal via standard HTTPS transfer. The PIDs of the data can be found through the interactive search interface at the Carbon Portal, by queries through the open SPARQL endpoint, or through other portals (of portals) like B2FIND or the GEOPortal.
All PIDs resolve through the Handle (or DOI) system into a landing page that contains all relevant metadata and a link to the actual data object for direct download.

_Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?_

All metadata and data can be accessed through RESTful APIs with standard web browsers and internet tools such as wget or curl, and JavaScript or Python code. All ICOS Carbon Portal code is open and licensed under GPL v3 through GitHub.

_Specify where the data and associated metadata, documentation and code are deposited_

* Metadata: at the Carbon Portal server _https://meta.icos-cp.eu_ as RDF and at B2FIND (CKAN, OAI-PMH)
* Datacite for collections of data
* Data: at the Thematic Centers, the Carbon Portal data service _https://data.icos-cp.eu_ and B2SAFE at CSC (Finland) and FZ Jülich (Germany)
* Code: https://github.com/ICOS-Carbon-Portal

_Specify how access will be provided in case there are any restrictions_

While access to RINGO raw data (Level 0) objects is open, and not subject to any restrictions or conditions, these data are not directly downloadable via the corresponding landing pages. Instead, interested users are requested to contact the relevant ICOS thematic center, which will provide the data as well as assistance on how to best use and interpret it.

## 2.3 MAKING DATA INTEROPERABLE [FAIR DATA]

_Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability._

RINGO and ICOS data and metadata follow standards that are well-anchored in the Earth Science community, including relevant ISO specifications and the INSPIRE directive.
This ensures smooth uptake and usage of the data products by all designated scientific user communities, including support for automated harvesting of data and metadata by machine-driven processing, e.g. in the cloud. The metadata associated with all data objects stored in the ICOS Carbon Portal is kept in an RDF database and described by an open OWL ontology that is part of that database. Read-only access to the metadata repository is given through an open SPARQL endpoint. All metadata is also exported to the B2FIND repository, where it is linked with the PIDs of the data objects in B2SAFE through CKAN. The B2FIND repository is in turn linked to the GEOPortal for global access to the metadata from other portals and portals of portals. The landing pages of the data objects will allow for content negotiation to deliver the metadata in the format and vocabulary of the respective community standards. This translation using equivalences will be dynamic and online, anchored in the ontology and thus open and easy to maintain and update.

_Specify whether you will be using standard vocabulary for all data types present in your data set, to allow interdisciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?_

As described in the previous item, we plan to support all relevant standard vocabularies by mapping their ontologies to the ICOS standard dynamically.

## 2.4 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES) [FAIR DATA]

_Specify how the data will be licenced to permit the widest reuse possible_

Data will in general be provided according to the ICOS Data Policy and under the Creative Commons Attribution 4.0 International (CC4BY) licence.

_Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed_

In general, the data is available directly after ingestion/generation. Some data, e.g.
related to development work being performed as part of the RINGO DoA, will be made available after the end of the project; see section 2.2 above.

_Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why_

All data that falls under the CC4BY license is available to all third parties. RINGO experimental data results are restricted to the consortium until the end of the project. They will become available under CC4BY within 2 years after the end of the project.

_Describe data quality assurance processes_

Quality assurance is at the heart of ICOS and the reason for the existence of the research infrastructure. The quality assurance procedures are described in the relevant papers and reports published by ICOS, the Thematic Centers and their contributors:

* ETC Guidelines and instructions: http://www.icos-etc.eu/icos/documents/instructions
* ATC ICOS station specifications: https://icos-atc.lsce.ipsl.fr/node/99/27248
* ATC Data processing: Hazan, L., Tarniewicz, J., Ramonet, M., Laurent, O., and Abbaris, A.: Automatic processing of atmospheric CO2 and CH4 mole fractions at the ICOS Atmosphere Thematic Centre, Atmos. Meas. Tech., 9, 4719-4736, _https://doi.org/10.5194/amt-9-4719-2016_ , 2016
* OTC: _https://otc.icos-cp.eu/data-levels-quality-access_

_Specify the length of time for which the data will remain re-usable_

ICOS is a long term infrastructure that is foreseen to exist for at least 20-25 years. This would guarantee operation and data availability until 2040. Following OAIS recommendations, periodic consultations with the designated scientific end-user communities are planned to ensure that the data remain fully FAIR throughout this time.

# 3\. ALLOCATION OF RESOURCES

_Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs_

Most of the work to make ICOS and RINGO data FAIR is performed at the Carbon Portal and the ICOS Thematic Centers and concerns at least 50% of their cost. Total cost is thus roughly 3 M€ per year. Hardware costs are about 10 k€ per year. EUDAT/CDI services like B2FIND and B2SAFE are about 50 k€ per year.

_Clearly identify responsibilities for data management in your project_

The data management responsibilities are clearly described in the ICOS Data Policy document and the ICOS Data Lifecycle document. The latter is under continuous development. In general, the Carbon Portal is responsible for data identification and the minting of PIDs and DOIs, the publishing of data and metadata, and archiving to the trusted repository. The Thematic Centers of ICOS are responsible for data curation and provenance. The project participants and task leaders are responsible for the provenance of the raw data and in some cases for the curation of the experimental results.

_Describe costs and potential value of long-term preservation_

The trusted repositories at B2SAFE from EUDAT at CSC and FZ Jülich will preserve all ICOS data objects for the foreseeable future. B2SAFE will be part of the EOSC service portfolio. The cost is covered from the ICOS budget, which is secured for the long term (>20 years).

# 4\. DATA SECURITY

_Address data recovery as well as secure storage and transfer of sensitive data_

Transfer of sensitive data is not applicable. Data is backed up at all individual instances of ICOS; these locations are the stations, experiments, thematic centers, and the Carbon Portal. All raw data and higher-level data is streamed at ingestion to a trusted repository from the EUDAT CDI (B2SAFE) that replicates the data over two centers in Europe (in Finland and Germany), each of which also provides a full backup.
All data objects are identified with a persistent identifier minted by ICOS CP that contains the SHA-256 checksum of the data for unique identification and consistency checking. Note that the (primary) repository in this context is the Carbon Portal (which operates the “ICOS repository”), not the EUDAT data center.

# 5\. ETHICAL ASPECTS

_To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former_

None

# 6\. OTHER

_Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)_

Documentation for the Atmosphere, Ocean and Ecosystem communities: see section 2.4. All other information is at _https://github.com/ICOS-Carbon-Portal/meta_
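The checksum-bearing PID scheme this plan describes (sections 2.1 and 4: a handle-based PID embedding the object's SHA-256 checksum, resolving into a landing page) can be sketched in a few lines. The handle prefix and the hex encoding of the checksum are placeholders for illustration, not the actual ICOS values:

```python
import hashlib
from urllib.parse import quote

HANDLE_RESOLVER = "https://hdl.handle.net"  # the standard Handle resolver
PREFIX = "PREFIX"  # placeholder, not the real ICOS handle prefix

def object_checksum(data: bytes) -> str:
    """SHA-256 checksum used for unique identification and consistency checks."""
    return hashlib.sha256(data).hexdigest()

def make_pid(data: bytes) -> str:
    """Build a handle-style PID embedding the object checksum.

    Embedding the full hex digest directly is an assumption for
    illustration; only the use of SHA-256 is stated in the plan.
    """
    return f"{PREFIX}/{object_checksum(data)}"

def landing_page(pid: str) -> str:
    """Resolve a PID through the Handle system to its landing page URL."""
    return f"{HANDLE_RESOLVER}/{quote(pid, safe='/')}"

def verify(data: bytes, pid: str) -> bool:
    """Consistency check: do the downloaded bytes still match the PID?"""
    return pid.endswith(object_checksum(data))

pid = make_pid(b"example data bytes")
```

A client would fetch the landing page, follow the data-object link it contains, and re-run `verify` on the downloaded bytes.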
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1200_VetBioNet_731014.md
## 1\. Data Summary

### 1.1 Purpose of data collection/generation and its relation to the objectives of the project

The H2020 topic under which VetBioNet has been funded (INFRAIA-01-2016-2017) participates by default in the open access to research data pilot, which aims to improve and maximize access to and re-use of research data generated by H2020 projects. This data management plan is part of this pilot and describes the data VetBioNet will generate, whether and how they will be exploited or made accessible for verification and re-use, and how they will be curated and preserved. VetBioNet will collect and generate data to advance research on epizootic and zoonotic diseases, with the final objective of strengthening the present European capacity and competence to meet the challenges of (re)emerging animal infectious diseases. VetBioNet will collect data generated from its Joint Research Activities (JRA), which are designed to improve the scientific and technological standards of the integrated services provided by the network infrastructures. For this, the _FAIR principles_ (Findability, Accessibility, Interoperability, and Reusability) and the _ARRIVE guidelines_ for scientific reporting will be adopted. Moreover, VetBioNet will also collect data from Transnational Access (TNA) research projects on a voluntary basis, namely when the owners of the data at stake agree to make them available, and with a possible delay of 2 years after the end of their TNA projects.

### 1.2 Types and formats of data generated/collected

VetBioNet JRA will generate data of several types, including phenotypic, genotypic and sequencing data from pathogens and hosts. JRA and Networking Activities (NA) will also deliver novel guidance documents, training tools, harmonized protocols and other scientific information of interest for the research community.
The final research data produced by JRA will be curated in a harmonized way to fit the purpose and stored in publicly available standard data repositories (e.g. Gene Expression Omnibus, Sequence Read Archive, etc.) or in the dedicated VetBioNet database, facilitating the sharing of information among project partners and finally the wider community of scientists and end-users through specifically designed interfaces.

Research data (phenotypic, genotypic, sequencing and other kinds of data) generated by researchers outside the VetBioNet consortium through TNA Activities will be covered by data management on a case-by-case basis, as these data may be required to be protected by confidentiality terms specific to each researcher selected for conducting a TNA project within VetBioNet. To stimulate the public sharing of these data, specific agreements will be sought with researchers conducting TNA projects, allowing them to openly share only a part of their research data and with a possible delay of 2 years after the end of their TNA projects. Further details on the types and formats of data to be generated or collected by VetBioNet are included in the table below:

<table>
<tr> <th> **Type** </th> <th> **Source** </th> <th> **Volume** </th> <th> **Format** </th> </tr>
<tr> <td> Spreadsheet </td> <td> JRA and TNA Activities (Laboratory experiments / simulation / observation / compilation) </td> <td> Small files, up to a few megabytes per experiment </td> <td> XLSX, CSV </td> </tr>
<tr> <td> Image </td> <td> JRA Activities (Laboratory experiments) </td> <td> A few terabytes per experiment </td> <td> RAW, TIFF </td> </tr>
<tr> <td> Sequencing data </td> <td> JRA Activities (Laboratory experiments) </td> <td> 200 GB per sequence </td> <td> FASTQ, FASTA, SAM/BAM, GFF/GTF, BED, VCF </td> </tr>
</table>

### 1.3 Existing data re-used, origin of data, expected size and who will find data useful

VetBioNet will use existing genomic data and related publications from NADIR (FP7 project, GA n.
228394). In particular, data generated by NADIR experiments on fish, chickens and sheep could be re-used. These data will not be available for use outside the VetBioNet network unless the owners give permission. Overall, the data collected, generated and made openly available by the project will originate from VetBioNet JRA and NA, and possibly from VetBioNet TNA Activities and NADIR final results. It is expected that VetBioNet will not publicly share some experimental data to be obtained through WP7 ( _Experimental models for animal infectious diseases_ ) and WP8 ( _Development of novel analytical tools and reagents to help interrogate the host pathogen interaction_ ) activities, because these data could be included at a later stage in scientific publications or used for pursuing patents. It is expected that, due to the nature of the studies, mainly small data sets will be collected, with the exception of imaging files, sequencing data and data from instrumented behavioral and physiological monitoring. The largest amount of raw data is expected to come from imaging and NGS (next generation sequencing) genomic data. The data that VetBioNet will make openly available will be useful for the veterinary field, particularly for the scientific community interested in animal infectious diseases, characterizing models and livestock production. The data will also be useful to policy makers, funders and industry, who have an interest in biosecurity for animal diseases and zoonoses through sustainable and safe production of livestock species for the public good.

## 2\. FAIR Data

### 2.1. Making foreground data findable, including provisions for metadata

VetBioNet final research data will be stored in publicly available standard data repositories or in the project dedicated database, which will be accessible through the VetBioNet website ( _http://www.vetbionet.eu/_ ).
The data stored in the VetBioNet database will be searchable with metadata, and a standard identification mechanism will be used to describe the data. The naming convention followed will be:

[work package].[task].[JRA/TNA id].[text description].[version].[format]

Search keywords will be provided to optimize possibilities for re-use of data, and raw data will be curated accordingly. The keywords chosen will relate to the animal species and pathogens analyzed by the project and the methodologies followed. Clear version numbers will be provided. Metadata will be produced to describe how the data were analyzed and to summarize important portions of the data. A simple form will be designed by WP7 and filled in by the partners involved in the development of experimental models for animal infectious diseases. Researchers will use the internationally accepted taxonomic names for pathogens and the common pathogen titration methodologies for expressing doses, such as TCID50, pfu, etc. The form will include the following fields: Animal species, Pathogen, Doses, Route of inoculation, Biosecurity requirements (following OIE classification), Mortality percentage, Clinical signs, Pathogenesis, Immunity and Reference to published data. This form could summarize the main project achievements and could facilitate the search for information about animal models related to a particular pathogen, animal species, etc.

### 2.2. Making foreground data openly accessible

It is the goal of VetBioNet to share final research data (i.e., factual data on which summary statistics are based). Data sharing with third parties will be subject to a data-sharing agreement established by the IPUDC (Intellectual Property Use and Dissemination Committee). The agreement will indicate the conditions of use, criteria for access, and acknowledgements. Project participants who wish to withhold patentable or proprietary data can do so, and advice on this point will be given by the IPUDC.
Final research data will be made openly available only after one of the 3 following criteria is met:

* relevant scientific publications based on the data at stake have been accepted;
* a patent application has been published;
* 2 years have passed since the project end.

Biosecurity reasons will be considered before making data openly available, as well as possible IP protection measures that will allow further exploitation of the produced data. As explained in previous paragraphs, VetBioNet will also collect and publicly share data from TNA research projects on a voluntary basis, with the consent of the owners of the data. Finally, VetBioNet partners are committed to giving public access to the raw data that will not be subject to a patent application, at the latest two years after the end of the project. VetBioNet partners will autonomously store the raw data used to generate scientific papers in a repository of their choice.

A part of the data made openly available by VetBioNet will be accessible by means of MS Office applications. To access the statistical data publicly shared by VetBioNet, GraphPad Prism or similar software will be needed. Documentation about the software needed to access the data will be provided upon request. All freeware programs needed to access the project open data will be provided in the data repositories used by VetBioNet. Where possible, open data and associated metadata, documentation and code will be deposited in certified repositories which support open access. Access to open data stored in the VetBioNet database and other (e.g. institutional) repositories will be provided to registered users (name, affiliation and email will be requested) who have accepted the terms of use. Data that could raise societal concerns (such as biosecurity risks) will not be shared in open access mode. The VetBioNet Executive Committee (i.e.
the decision-implementing body of the project) will act as data access committee, and will be supported by two experts in social sciences and ethical evaluation.

### 2.3. Making data interoperable

Formats of data will be decided by the leader of each Joint Research Activity following, when possible, the guidelines already developed in the NADIR project and in accordance with the work of Networking Activity 3 (Best practices for biosafety, biosecurity and quality management in farmed animal high containment facilities). To ensure interoperability of data across the project and where applicable (e.g., data on gene sequences), project participants will upload basic datasets in standardized forms in a primary database as required by the journal in which they publish their results. Where adequate, data generated in VetBioNet will be defined according to the Animal Trait Ontology for Livestock (ATOL: _http://www.atolontology.com/index.php/en/_ ). ATOL is an on-going initiative and project participants will contribute to the development of the ontology when relevant. Most partners in the project are ISO 9001 certified. Partners will agree on common terminology, key words, units and formats and apply those to the data and other documents to be incorporated in the consortium’s database. Standard vocabularies for all data types present in VetBioNet data sets will be employed, to allow interdisciplinary interoperability.

### 2.4. Increase data re-use (through clarifying licences)

FAIR principles will be applied, also for optimal formatting of data. Final research data will be made openly available through publicly available standard data repositories or the dedicated VetBioNet database only after one of the 3 following criteria is met:

* relevant scientific publications based on the data at stake have been accepted;
* a patent application has been published;
* 2 years have passed since the project end.
The data produced and/or used in the project will be made openly available to third parties after the end of the project. Possible restrictions could be put in place due to the need to finalize patenting processes or to publish scientific works based on project data. Final research data stored in the project dedicated database will remain re-usable until the end of the project and for an additional 10-year term if adequate financial sponsors are found. Raw and final research data stored in other open repositories will remain re-usable for a minimum of 10 years. Each animal experiment conducted by VetBioNet will require approval from an ethical committee. In order to ensure data quality, guidelines will be produced by INRA and shared among all project participants to establish common procedures for acquiring, storing and amending data. Finally, the _ARRIVE guidelines_ will be adopted for ensuring metadata quality.

## 3\. Allocation of resources

All data produced will initially be recorded in numbered workbooks, signed and dated and retained at each respective site, or stored on network-attached file systems that will be regularly archived and automatically backed up. Data will be checked for quality and accuracy and all protocols recorded and adapted to clear guidelines. Where necessary, data will be archived in each institution’s archive system. Data will be made fully available to the consortium in a timely manner after passing each partner’s quality control criteria and after Intellectual Property considerations have been taken into account. To enable long-term accessibility and validation, data will be stored in formats that are open, non-proprietary, and in common use by the research community. Each partner of VetBioNet will be responsible for uploading its own final research data to publicly available standard data repositories or the dedicated VetBioNet database.
Management of final research data uploaded to the VetBioNet database is the responsibility of EAAP. Once final research data are collected and stored in the VetBioNet database (which will have a maximum capacity of 5 terabytes), there will be a monthly preservation cost of about 750 Euro. Therefore, final research data will be preserved in one of the following databases:

1. publicly available standard data repositories (e.g. Gene Expression Omnibus, Sequence Read Archive, etc.);
2. the VetBioNet database, until the end of the project and for an additional 10-year term if adequate financial sponsors are found.

Open raw data will be stored and preserved by the project partners who generated/collected them. Raw data will be retained and remain accessible for at least 10 years after completion of the project. Data storage facilities will be maintained in accordance with the manufacturer’s warranty and guidelines, and data will be backed up at regular intervals and stored safely and securely. Moreover, project participants are encouraged to publish research data as supporting materials together with their publications, to facilitate the preservation of data for future re-use by other projects or research initiatives.

## 4\. Data security

If not stored in publicly available standard data repositories, VetBioNet final research data will be stored in the project dedicated database (primary site) and in a secondary site, which will be geographically distant from the primary one. Data backups will then be run from the secondary site, without any impact on the primary project database.

## 5\. Ethical aspects

Ethical issues that can have an impact on data sharing will be analyzed in WP4 ( _Ethical aspects, 3Rs and social impact_ ). The project will not launch questionnaires dealing with personal data.

## 6\. Other issues

Currently, VetBioNet partners do not make use of other national/funder/sectorial/departmental procedures for data management.
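The naming convention given in section 2.1 can be made machine-checkable. A sketch, where the token shapes (WP/T prefixes, a "v" before the version number) are assumptions for illustration; only the order of the six fields comes from the plan:

```python
import re

# Field order from the plan:
# [work package].[task].[JRA/TNA id].[text description].[version].[format]
# Token shapes (WP7, T7.2, JRA1, v1, ...) are assumed, not specified.
NAME_RE = re.compile(
    r"^WP\d+\.T\d+(\.\d+)?\.(JRA|TNA)\d+\.[A-Za-z0-9_-]+\.v\d+\.\w+$"
)

def make_name(wp, task, activity, description, version, fmt):
    """Compose and validate a dataset name per the VetBioNet convention."""
    name = ".".join([wp, task, activity, description, version, fmt])
    if not NAME_RE.match(name):
        raise ValueError(f"name does not follow the convention: {name}")
    return name

# Hypothetical example values.
name = make_name("WP7", "T7.2", "JRA1", "pathogen-titration", "v1", "xlsx")
```

Validating names at upload time would keep the database searchable by the very fields the convention encodes.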
1201_EUROVOLC_731070.md
metadata formats are _SEED_ and _QuakeML_ ). In other fields, like _GNSS_ , the data format ( _RINEX – Receiver Independent Exchange Format_ ) has been established, but the metadata structure is being created within _EPOS_ . In fields where standards do not exist (e.g. in geochemistry and for some hazard products), they are defined within the relevant harmonization group of _EPOS_ . _EUROVOLC_ will adopt the existing data and metadata standards implemented in _EPOS_ , and the _EUROVOLC_ Data Management Plan assumes that wherever possible, the implementation of data access will utilize the _EPOS VO-TCS_ , which also forms one of the Virtual Access Activities of _EUROVOLC_ (WP20). These data, which follow the data and metadata standards defined in _EPOS_ and for which services have already been created, will be Findable, Accessible and Re-usable, and through further development of the _EPOS_ _Integrated Core Services_ ( _ICS_ ) the data are expected to become interoperable during the lifetime of _EUROVOLC_ . _EUROVOLC_ thus adheres to the _FAIR_ data principles. For data and products where standards do not exist, _EUROVOLC_ will define and implement standards, emphasizing their full implementation within the _EPOS VO-TCS_ . For some data, however, it is to be expected that either the data or the metadata, or even both, will need to be managed by the suppliers themselves. Many of the _EUROVOLC_ partners are active partners in the _EPOS VO-TCS_ , where they contribute either as (i) _Service Providers_ , responsible for aggregating, collecting and ensuring access to the _DDSS_ for the _EPOS ICS_ and the _EPOS VO-TCS_ , or as (ii) _Data Suppliers_ , providing the _DDSS_ to the Service Providers and granting the rights of redistribution through _EPOS_ by signing a supplier letter. In _EUROVOLC_ these roles are expected to be maintained to ensure the continued provision of access beyond the duration of _EUROVOLC_ .
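As an illustration of why an established format like _RINEX_ lends itself to this kind of harmonization: it is a fixed-column ASCII format whose header records carry their label in columns 61-80, so generic tooling can index the header without knowing each record's internal layout. A minimal sketch (the sample content is fabricated):

```python
def parse_rinex_header(lines):
    """Collect RINEX header records into a dict keyed by record label.

    Each RINEX header line stores its record label in columns 61-80;
    reading stops at the END OF HEADER record. The per-record field
    layouts are deliberately not interpreted here.
    """
    header = {}
    for line in lines:
        label = line[60:80].strip()
        if label == "END OF HEADER":
            break
        header.setdefault(label, []).append(line[:60].rstrip())
    return header

# Fabricated two-record header for illustration.
sample = [
    "     3.04           OBSERVATION DATA    M".ljust(60) + "RINEX VERSION / TYPE",
    "EXAMPLE00XXX".ljust(60) + "MARKER NAME",
    "".ljust(60) + "END OF HEADER",
]
hdr = parse_rinex_header(sample)
```

Metadata harvesting for such formats then reduces to mapping the extracted labels onto the structure being defined within _EPOS_.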
In general, the management of data networked in _EUROVOLC_ falls into three main categories:

1. Data and metadata made accessible through the _EPOS VO-TCS_ .
2. Metadata made accessible through the _EPOS VO-TCS_ , but access to the data themselves managed by the Data Supplier.
3. Access to data and metadata managed by the Data Supplier.

_EUROVOLC_ will emphasize data management under category 1, but some _DDSS_ will still fall under categories 2 and 3. The _EUROVOLC_ project has three types of activities: i) _Networking_ , ii) _Joint Research_ , and iii) _Trans-national_ and _Virtual Access_ . Although there may be some restricted access, the focus of many of the Networking activities in _EUROVOLC_ (WP4, 5 and 6) is to open access to existing volcanological data from the _Volcano Observatories_ ( _VO_ ) and _Volcanological Research Institutions_ ( _VRI_ ), constructing new databases and/or applying the metadata structures defined in _EPOS_ to make the data findable and accessible. The _Networking_ activities will also harmonize and define best practices for data collection and the curation of datasets to serve as testbeds for the testing and validation of new models developed in future research, as well as in _EUROVOLC's_ _Joint Research_ activities. These include, for example, models for volcanic plume-atmosphere interaction, volcanic ash transport and dispersion, and new eruption early warning processes (WP8 and WP9). The _Trans-national Access_ ( _TA_ ) activities (WP13-WP19) will open access to the observational infrastructures of the _VOs_ and _VRIs_ to enable the collection of new data and modeling results. The data collected within the _TA_ activities will be made accessible, whenever possible, through the _EPOS_ services. These data may be subject to time-limited embargoes while the researchers are processing the results for publications.
The data policy statement for the TA projects is published in the 1st TA call for applications on the EUROVOLC website ( _www.eurovolc.eu_ ). The _Virtual Access_ activities (WP18 and WP20-25) will develop and provide access services (some in real time) to data, data products, software and modeling tools at the providers' websites, which will be prominently linked from the _EUROVOLC_ web site ( _www.eurovolc.eu_ ). At month 6, the data are not foreseen to be otherwise distributed through _EUROVOLC_ .

**_Specific data and products to be networked in _EUROVOLC_ : _**

# Atmospheric gas and aerosol observations

Access will be opened to existing _VO_ data on eruptions and volcano-atmosphere interaction, mainly from Italian and Icelandic volcanic areas and also from French overseas volcanic areas (WP4.1). These data products include:

* ( **a** ) Construction of a database of Icelandic tephra layers and its implementation within the _Catalogue of Icelandic Volcanoes_ ( _CIV_ ) ( _www.icelandicvolcanoes.is_ ), an open-access web resource maintained by IMO for long-term access for the Icelandic Volcano Observatory, stakeholders and general users. The tephra database will be accessible to registered users through the _CIV_ . This work includes the definition of data and metadata standards and will draw on experience from the _DynVolc_ database ( _http://wwwobs.univ-bpclermont.fr/SO/televolc/dynvolc/index.php_ ), which contains a multi-disciplinary dataset for explosive eruptive events in French territories (including magmatic textures, porosity and crystal size distribution). Expected data volume is <100 GB.
* ( **b** ) Access to selected data from other existing tephra datasets from Deception Island, Eyjafjallajökull, Sakurajima, Campi Flegrei and others, definition of standards and construction of a database. Details are to be defined during the project and provided in a later update to the DMP. The expected volume of the data is unknown at month 6.
* Access to a remote-sensing database for real-time monitoring of volcanic activity, early warning and the determination of volcanic plume parameters. The database includes quantitative parameters (plume top height and velocity, grain-size distribution, Mass Eruption Rate and fine ash concentration in the plume) retrieved from processed satellite data for several volcanoes (Icelandic, Italian, French) and following Earth-Observation standards for data and metadata. Also access to eruption data from high-speed, visible and thermal infrared cameras, a Lidar system (UVVIS), FTIR, a video surveillance system, L-Band Doppler radars, ASHER and PLUDIX instruments, Optical-Particle and Chemical-Particle Counters, as well as ash and gas data from Unmanned Aerial Vehicles (UAV). Access and data policy will follow those of _EPOS_ . More specific definitions of these datasets, their standards and their volume will be provided in later updates to the DMP.
* Access to ash/tephra datasets (plume height, initial velocity, mass flux rates, etc.) obtained from meteo/volcano observations of recent eruptions of Icelandic and Italian volcanoes, made available through the _European Catalogue of Volcanoes_ ( _ECV_ ). The _ECV_ is an open-access web resource to be constructed in WP11 of _EUROVOLC_ . The tephra dataset will be a complete, official and multidisciplinary database and test bed that can be used for benchmarking all current models, from 1D-column models, such as _PPM_ , to _VATD_ numerical forecast models, such as _NAME_ . Data and metadata standards have not yet been specified. The expected volume of the data is <100 GB.

# Geochemical gas monitoring across VOs

Standards and protocols will be defined for volcanic gas observations and access opened to new and existing gas observations (WP5). These include:

* Datasets collected in field surveys carried out in _EUROVOLC_ to determine best practice standards.
* Atmospheric gas data from volcanic areas across Europe, which will be made available through the _EPOS VO-TCS_. The use cases for integrating the geochemical data in _EPOS_ will be summarized at the end of the project. Details of the data and metadata standards and implementation of services will be provided in a later update to the DMP. Expected volume of the data is <100 GB. # Volcano geophysical and geochemical observations of sub-surface processes Access will be provided to volcanic data not yet implemented in _EPOS-IP_ and the data will be standardized according to _EPOS_ standards (WP6.1). * The networked data include: seismic and infrasound array parameters, processed continuous gravity, borehole strainmeter data, geodetic campaign surveys, collections of rocks, volcanological parameters (e.g. lava flow thickness, area, volume, effusion rate), chemical and isotope analyses of volcanic glass and minerals, and analyses of pyroclastites (e.g. grain-size distribution, component abundance analysis, tephra isopach maps). Which datasets will be selected for networking will be determined by month 18, and the expected volume of the data will be evaluated at that time. * Services will be implemented to provide access to case studies from recent eruptions in Italy and Iceland. The datasets will contain multidisciplinary data from specific eruptions of Mt. Etna, as well as the Grímsvötn 2011 and Eyjafjallajökull 2010 eruptions. The volume will be defined later in the project and provided in an update to the DMP. Access will be initiated to multidisciplinary observations from the Krafla Volcano Laboratory in the Krafla caldera (WP6.2), where magma has been penetrated in boreholes at shallow level and several wells have detected a superheated environment below 1800-2000 m depth.
These data, which are largely owned by the Landsvirkjun power company, include: * (i) borehole data (loggings and geothermal fluids) in Landsvirkjun’s database, (ii) seismic data, in a _SeisComp3_ database, from permanent surface and borehole stations, and seismic data from field experiments, (iii) geodetic data, (iv) surface measurement data including resistivity (TEM and MT), gravity and magnetic data, and (v) surface manifestation monitoring data. Wherever possible, access to the data will be through the _EPOS VO-TCS_. Which data will be networked, and their volume, will be determined by month 18, when an update to the DMP will be made. * A platform to access data not accessible through the _EPOS VO-TCS_ will be implemented at Landsvirkjun’s website and will be managed by them. Access to the database will be open to project members under conditions that fulfil the company’s security requirements. # **Data and products compiled or generated in Joint Research Activities** _European Catalogue of Volcanoes and Volcanic Areas (ECV) and related volcanic hazards – Version 1.0_ Existing databases of European _VOs_ on active volcanoes and volcanic areas will be unified for the purpose of creating the first coherent pan-European volcano catalogue (WP11). The _ECV_ will contain detailed information, graphs, maps and references on the characteristics and hazards of selected volcanoes. It will be an open-access web resource for scientists at _VOs_ and _VRIs_, stakeholders and the public. Access to the integrated data will be through the web interface of the catalogue and will follow the same construction as the Icelandic catalogue, _CIV_ ( _www.icelandicvolcanoes.is_ ). Initially the _ECV_ will be implemented and maintained at the _IMO_. Long-term hosting and management of the _ECV_ will be decided during EUROVOLC, possibly at the _EPOS VO-TCS_. Data volume will be estimated in the next update of the DMP.
## Assimilation of geophysical data to initialize Volcanic Ash Transport and Dispersal Models Ash and volcanic plume data (grain-size parameters, mass eruption rate, etc.) will be compiled to test algorithms developed in _EUROVOLC_ (WP8). These include data from ground-based observations such as weather radars, thermal and visible imaging cameras, ASHERs, ash samples from field surveys and infrasound, as well as satellite data. The datasets may somewhat overlap with data to be networked in WP4. Future updates of the DMP will specify which data from WP8 will be prepared for open access. ## Tools and techniques for rapid characterization of volcanic plumes Plume observations and derived products from the mobile _Icelandic Atmospheric Observatory_ ( _IAO_ ) will be made available in near-real time to researchers and operational practitioners on the _IAO_ website, which will be constructed in WP22. The data set includes observations with Lidar, ceilometer, weather station, visual cameras and wind measurements. Descriptions of access and estimated volume of the data will be provided in the next update of the DMP. ## Database construction for testing and quality control of Eruption Early Warning algorithms To test algorithms for Eruption Early Warning developed in _EUROVOLC_ (WP9.7), multiparametric data (including seismic, infrasound, GNSS, tilt and InSAR) preceding the eruptions of Eyjafjallajökull 2010, Grímsvötn 2011 and Bárdarbunga 2014 will be compiled, as well as already available data from over 25 eruptions of Etna and Piton de la Fournaise. These datasets may somewhat overlap with those planned for networking in WP6.1. Future updates of the DMP will specify which data from WP9 will be prepared for open access. ## Integration of petrological and real-time monitoring data Cutting-edge petrological studies will be combined with real-time monitoring data (deformation, gas chemistry and other geophysical phenomena) from recent eruptions (WP10.2).
The monitoring data may be part of the datasets of WP6.1. Future updates of the DMP will specify which data from WP10 will be prepared for open access. **_Standards defined in EUROVOLC_** EUROVOLC will produce summaries of best practices and standards, as well as summaries of lessons learned and information on tools and guidelines for managing volcanic hazards. All of these will be submitted as deliverable reports (in pdf format). The reports will be made accessible as a type of _EUROVOLC_ standards product through the _EPOS VO-TCS_ in a service similar to the existing _EPOS WP11-DDSS-031_ service for reports on volcanic activity (see Appendix I). The service will be built during _EUROVOLC_ and will include the following reports:

1. Standards
   * EUROVOLC standard of best practices (D2.3)
   * Best practice for direct sampling of fumarolic gases (D5.1)
   * Best practice for ground-based, remote-sensing measurements of volcanic plumes (D5.2)
   * Best practice in petrological monitoring of eruptions (D10.4)
   * Guidelines to assess the monitoring level of European volcanoes (D11.3)
2. Information and lessons learned for response to and management of volcanic hazards
   * Forensic examination of multidisciplinary data from past volcanic crisis events (D2.2)
   * EUROVOLC outreach box (D3.5)
   * Inception of a Scientific Advisory Group (D7.2)
   * On-line searchable catalogue of pre-existing volcanic hazard assessment tools (D12.1)
3. Guidelines on volcano community – stakeholder interaction
   * Outcome of the VAAC workshop (D4.4)
   * Consultation document on European Civil Protection needs and inventory of hazard communication styles (D7.1)

During the first year of EUROVOLC, surveys are being carried out in the Networking work packages (WP4 and WP6) to catalogue the available and suitable data to be networked.
The results of these efforts will be available by month 12, at which time an update of the DMP will be made defining the specific data sets and products to be networked in EUROVOLC, how they will be managed (category 1, 2 or 3) and estimates of their volume. 1. **MAKING DATA FINDABLE** _(dataset description: metadata, persistent and unique identifiers e.g., DOI)_ <table> <tr> <th> _EUROVOLC_ will, wherever possible, adopt the existing metadata structures/standards defined in _EPOS_ ( _https://www.epos-ip.org/_ ). This entails mapping metadata for _EUROVOLC_ data and products to the existing _EPOS_ standards and making the metadata accessible to the _EPOS Integrated Core Service_ ( _ICS_ ) and the _EPOS Volcanological Thematic Core Service_ ( _VO-TCS_ ). By doing so, the data will be _findable_ through the _EPOS_ services. − The corresponding data themselves may also be made _accessible_ through existing _EPOS_ services (category 1), in which case they will either be assigned _Digital Object Identifiers_ ( _DOIs_ ) by the Data Suppliers themselves, or through _EPOS_. − For data which are findable through _EPOS_ services, but only accessible on the Data Suppliers' websites (category 2), links will be provided on the _EPOS VO-TCS_ site. These data may or may not be assigned _DOIs_ by the Data Suppliers. − Data and metadata which will only be accessible on the Data Suppliers' sites (category 3) may or may not adhere to _EPOS_ data and metadata standards, and may or may not be assigned _DOIs_. Domain names and key words have been defined for the main thematic fields within _EPOS_. The domain name for volcanological data is _Volcano Observations_.
To facilitate searches and discovery of volcanological data, the following key words have already been defined for the main DDSS: Volcanic activity, Faults, Satellite, Volcanic plume, Soil fluxes, Radar, Volcanic tremor, Thermal anomaly, SAR, Vent opening, Chemical analysis, Aviation Colour Code, Lava flow, Isotope, BET, Tephra fallout, Thin sections, HASSET, magmatic rocks, Infrared, QVAST. As more types of data and products are networked through _EPOS_ in the future, additions to these key words will be implemented in later, updated versions of the _EPOS_ _Volcano Observations_ standards. First versions of standards definitions specifically made within the _Volcano Observations_ domain are maintained on GitLab, where updated versions are also expected to be maintained. The final metadata format required by the _EPOS ICS_ is the _EPOS DCAT-AP_ format. All metadata from the _Volcano Observations_ domain are mapped into this format to make the data searchable and discoverable within _EPOS_. Standards for data and their corresponding metadata which fall outside the domain of _Volcano Observations_ (e.g. geodetic, seismic, satellite data) are maintained by their respective domains in _EPOS_. _EUROVOLC_ data falling under these other domains will, wherever possible, be mapped to the corresponding community standards to make them findable through the _EPOS_ services and accessible through the avenues decided by the Data Suppliers. Additional metadata standards are expected to be required for some of the data networked in _EUROVOLC_, e.g. the tephra databases in WP4. These metadata structures will be defined as required during the project and specified in future updates of the _EUROVOLC_ DMP. </th> </tr> </table> 2.
**MAKING DATA OPENLY ACCESSIBLE** _(which data will be made openly available and, if some datasets remain closed, the reasons for not giving access; where the data and associated metadata, documentation and code are deposited (repository?); how the data can be accessed (are relevant software tools/methods provided?))_ <table> <tr> <th> In general, the data access policy of _EUROVOLC_ will follow that of _EPOS_: making the data access as open as possible and as closed as necessary. _EPOS's_ classification of user access is: a) anonymous access; b) registered (identified) access; c) authorized access (identified and authenticated, requiring specific permissions). _EPOS's_ classification of access rights is: d) open access (freely accessible for download or use); e) restricted access (available under restrictions set by the Service Providers and Data Suppliers); f) embargoed access (available after a predefined, limited time not exceeding 3 years). Access to metadata for all three data-access classes is always freely open. As detailed in section 1 of this DMP, efforts will be made to make the metadata for the networked _EUROVOLC_ data and products (from Work Packages 4, 5 and 6) conform to the _EPOS_ standards and therefore findable through the _EPOS_ services. In addition, access to large amounts of data and information will also be provided through new or existing web services, like the Catalogue of Icelandic Volcanoes ( _www.icelandicvolcanoes.is_ ) and the European Catalogue of Volcanoes (website to be established). Access to the data and products gathered or generated within _EUROVOLC_ will, in general, be open, but may have initial embargoes to protect publication of research findings and writing of student theses. Where embargoes apply, this status and the time frame will be clearly indicated on the _EPOS VO-TCS_ service.
Wherever possible, access to data will be provided through the _EPOS VO-TCS_, where the access will be according to definitions a – f above, as stipulated by each Data Supplier. In cases where only the metadata are findable on the _EPOS VO-TCS_, the data themselves will need to be accessed at the websites of the Data Suppliers or by special arrangement with them. Rules of access to these data (anonymous, registered, or authorized) will be determined by the Data Suppliers. Limitations and/or restrictions to data access, as specified by the _EUROVOLC_ partners, are detailed in the _EUROVOLC_ Consortium Agreement (Attachment 1: Background included, p. 41-51), accessible at: _https://public.3.basecamp.com/p/mmLkXiB2CXvG1tYCcid4AQNJ_ Data and products from the Krafla caldera owned by the Landsvirkjun power company (WP6.2) will only be accessible to registered users during the project at the website of the Data Supplier, Landsvirkjun. Specifics of the access to the data selected to be networked in _EUROVOLC_ will be developed during the project. Data generated and used within the TA activities will be accessible at the end of the TA projects or after a reasonable embargo period, and definitions of their data and metadata standards are a required part of the final report from the TA users. Wherever possible, these standards will adhere to _EPOS_ standards. Access to software developed within _EUROVOLC_ will be defined in later stages of _EUROVOLC_ and the details of the access provided in an update to the DMP. </th> </tr> </table> 3. **MAKING DATA INTEROPERABLE** _(which standard or field-specific data and metadata vocabularies and methods will be used)_ _EUROVOLC_ aims to apply the data formats and metadata standards specified in the thematic domains of _EPOS_ and make the metadata for all networked data accessible on the _EPOS VO-TCS_, thus making the data findable through the _EPOS ICS_ and _VO-TCS_ services.
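To make the mapping step concrete, the sketch below builds a DCAT-AP-style JSON-LD metadata record from a plain dataset description. This is a hedged illustration only: the record, the DOI, the theme value and the field selection are invented, and the actual _EPOS DCAT-AP_ application profile defines the authoritative fields and vocabularies.

```python
import json

# Hypothetical sketch (not EPOS code): map a simple dataset record to a
# DCAT-AP-style JSON-LD dictionary, of the kind harvested by a metadata
# catalogue. The dcat/dct prefixes are the public W3C/Dublin Core ones;
# all field values are invented for illustration.

def to_dcat(record):
    """Map a plain dataset record to a DCAT-AP-like JSON-LD structure."""
    return {
        "@context": {
            "dcat": "http://www.w3.org/ns/dcat#",
            "dct": "http://purl.org/dc/terms/",
        },
        "@type": "dcat:Dataset",
        "dct:title": record["title"],
        "dct:publisher": record["publisher"],
        "dct:identifier": record.get("doi", ""),  # DOI, if one is assigned
        "dcat:keyword": record["keywords"],
        "dcat:theme": "Volcano Observations",     # EPOS domain name
    }

record = {
    "title": "Tephra layers of Icelandic volcanoes",
    "publisher": "IMO",
    "keywords": ["Tephra fallout", "Volcanic activity"],
    "doi": "10.0000/example",  # placeholder, not a real DOI
}

metadata = to_dcat(record)
print(json.dumps(metadata, indent=2))
```

A record shaped like this can be validated against and harvested by DCAT-aware catalogues; the real mapping would cover many more mandatory and recommended DCAT-AP properties.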
The vocabularies used in the metadata descriptions will be those defined by the different thematic domains of _EPOS_, which will also make the _EUROVOLC_ data interoperable with other _EPOS_ datasets. For data where metadata standards do not exist in _EPOS_, _EUROVOLC_ will define the required standards and harmonize them with other metadata standards in the _Volcano Observations_ domain and other relevant domains which use the same data. Later DMP updates will specify which datasets will require standards to be defined and implemented. 4. **INCREASE DATA RE-USE** _(what data will remain re-usable and for how long, is embargo foreseen; how the data is licensed; data quality assurance procedures)_ The data distributed by the _EPOS_ infrastructure will be owned by the original Data Suppliers, who will sign a Memorandum of Understanding (MoU) with Service Providers regarding their rights to redistribute the data and service the data access to the _EPOS ICS_ and _EPOS VO-TCS_. The Service Providers will sign a contract with the _EPOS-ERIC_ ( _EPOS European Research Infrastructure Consortium_ ) legal entity, specifying their responsibilities regarding the long-term maintenance of the data access services. Several of the partners in _EUROVOLC_ are already committed to becoming Service Providers to the _EPOS-ERIC_ and, where needed, are expected to take over this responsibility for the data and products generated and networked by _EUROVOLC_. Statements describing the level of quality checks and procedures for all the metadata, data and products networked in _EUROVOLC_ will be expected from all the Data Suppliers. The dissemination of this information will be decided during the project. _EPOS_ assumes that the responsibility for quality control rests with the Data Supplier. 5.
**ALLOCATION OF RESOURCES and DATA SECURITY** _(estimated costs for making the project data open access and potential value of long-term data preservation; procedures for data backup and recovery; transfer of sensitive data and secure storage in repositories for long-term preservation and curation)_ The costs of making both pre-existing and new data openly accessible within _EUROVOLC_ will be covered by the project. This includes mapping the metadata and data into the required _EPOS_ standards or, when standards do not exist, defining and implementing them. Long-term data preservation is the responsibility of the Data Suppliers. Long-term maintenance of the services provided through the Data Suppliers' own websites will also be the responsibility of the Suppliers and at their own cost. Long-term maintenance of data services accessible through the _EPOS_ infrastructure will follow the _EPOS_ convention: the Data Suppliers will sign an MoU with the Service Providers, who will then sign a contract with _EPOS-ERIC_ to guarantee long-term maintenance of the data service. The costs of maintaining the services will be covered by the Service Providers, _EPOS-ERIC_ or a combination of the two. The specifics will be determined in the final year of EPOS, 2019, and will be provided in a later update of the DMP. Descriptions of data preservation and archiving will also be provided in later updates of the DMP. # **Appendix I DDSS elements to be produced within EPOS-IP project (2015-2019)** List of Data, Data products, Software and Services (DDSS) to be delivered in the EPOS VO-TCS (WP11 of the EPOS-IP project) during the Implementation phase of EPOS. Standards for data formats and structures for metadata are defined, and interoperable services made to make the data findable and accessible. The EPOS IDs of the elements are shown in the first column, where WP11 refers to the _EPOS-IP_ work package hosting the VO-TCS.
The category, or scientific field, of each element is shown in the second column and the name of the product in the third. The high-priority elements will be implemented before the end of 2018, while the medium-priority ones are planned for implementation during 2019-2020. <table> <tr> <th> **DDSS - ID** </th> <th> **Category** </th> <th> **DDSS Name** </th> </tr> <tr> <td> </td> <td> **High priority (end of 2018)** </td> </tr> <tr> <td> WP11-DDSS-001 </td> <td> Seismological data </td> <td> Velocity seismic waveforms </td> </tr> <tr> <td> WP11-DDSS-002 </td> <td> Seismological data </td> <td> Acceleration /Accelerometer waveforms </td> </tr> <tr> <td> WP11-DDSS-003 </td> <td> Geodetic data </td> <td> GNSS raw data (Rinex Data) </td> </tr> <tr> <td> WP11-DDSS-004 </td> <td> Geodetic data </td> <td> Borehole strainmeter and pressure data colocated with strain meter </td> </tr> <tr> <td> WP11-DDSS-005 </td> <td> Geodetic data </td> <td> Tiltmeter </td> </tr> <tr> <td> WP11-DDSS-006 </td> <td> Geodetic data </td> <td> Tide gauge </td> </tr> <tr> <td> WP11-DDSS-007 </td> <td> Geodetic data </td> <td> Continuous gravity </td> </tr> <tr> <td> WP11-DDSS-018 </td> <td> Satellite data </td> <td> RAW SAR data and SAR SLC </td> </tr> <tr> <td> WP11-DDSS-019 </td> <td> Satellite data </td> <td> VIS/IR Sensors onboard polar orbiting satellites (AVHRR, MODIS) </td> </tr> <tr> <td> WP11-DDSS-023 </td> <td> Ground-based remote sensing data&products </td> <td> Ground-based visible and thermal / IR camera </td> </tr> <tr> <td> WP11-DDSS-024 </td> <td> Ground-based remote sensing data&products </td> <td> Ground-based doppler radar near-source eruptive parameters </td> </tr> <tr> <td> WP11-DDSS-031 </td> <td> Volcanological /petrological </td> <td> Reports on volcanic activity </td> </tr> <tr> <td> WP11-DDSS-032 </td> <td> Volcanological /petrological </td> <td> Aviation colour codes for volcanoes </td> </tr> <tr> <td> WP11-DDSS-036 </td> <td> Geochemical /petrological </td>
<td> Chemical analysis and physical properties of gas, water and rocks </td> </tr> <tr> <td> WP11-DDSS-047 </td> <td> Satellite data </td> <td> Volcanic Plume (Ash + SO2) </td> </tr> <tr> <td> WP11-DDSS-049 </td> <td> Satellite data </td> <td> Thermal anomaly (lava flow) </td> </tr> <tr> <td> WP11-DDSS-050 </td> <td> Satellite data </td> <td> Wrapped Differential Interferograms (Phase and Amplitude) </td> </tr> <tr> <td> WP11-DDSS-056 </td> <td> Geohazards </td> <td> Spatial probability analysis/maps </td> </tr> <tr> <td> WP11-DDSS-057 </td> <td> Geohazards </td> <td> Lava flow invasion hazard maps </td> </tr> <tr> <td> WP11-DDSS-058 </td> <td> Geohazards </td> <td> Tephra fallout hazard maps for explosive volcanoes </td> </tr> <tr> <td> WP11-DDSS-059 </td> <td> Geohazards </td> <td> PDCs hazard maps (EH;GVF; Deception) </td> </tr> <tr> <td> WP11-DDSS-060 </td> <td> Geohazards </td> <td> Probabilistic volcanic hazard assessment (maps) </td> </tr> <tr> <td> WP11-DDSS-064 </td> <td> Geohazards </td> <td> Effects on health and recommendations for response to SO2 from volcanic eruptions </td> </tr> <tr> <td> WP11-DDSS-065 </td> <td> Geohazards </td> <td> Daily ash/gas forecasting maps </td> </tr> <tr> <td> WP11-DDSS-070 </td> <td> Data services </td> <td> Software catalogue for petrological to geophysical modelling </td> </tr> <tr> <td> WP11-DDSS-072 </td> <td> Satellite data </td> <td> Mean LOS velocity </td> </tr> </table> <table> <tr> <th> **Medium priority (2019- 2020)** </th> </tr> <tr> <td> WP11-DDSS-008 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> River stage data </td> </tr> <tr> <td> WP11-DDSS-009 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> Electrical conductivity in river </td> </tr> <tr> <td> WP11-DDSS-010 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> River temperature </td> </tr> <tr> <td> WP11-DDSS-011 </td> <td> Physico-chemical 
parameters in water (river water or groundwater) </td> <td> Piezometer data </td> </tr> <tr> <td> WP11-DDSS-012 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> Groundwater electrical conductivity </td> </tr> <tr> <td> WP11-DDSS-013 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> Groundwater temperature </td> </tr> <tr> <td> WP11-DDSS-014 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> Atmospheric temperature </td> </tr> <tr> <td> WP11-DDSS-015 </td> <td> Physico-chemical parameters in water (river water or groundwater) </td> <td> Atmospheric pressure </td> </tr> <tr> <td> WP11-DDSS-016 </td> <td> Geochemical data </td> <td> Fumarole temperature </td> </tr> <tr> <td> WP11-DDSS-017 </td> <td> Geochemical data </td> <td> CO2 concentration in groundwater </td> </tr> <tr> <td> WP11-DDSS-022 </td> <td> Ground-based remote sensing data&products </td> <td> Ground-based radar data </td> </tr> <tr> <td> WP11-DDSS-025 </td> <td> Ground-based remote sensing data&products </td> <td> Ground-based UV scanner spectra </td> </tr> <tr> <td> WP11-DDSS-026 </td> <td> Rock sample properties </td> <td> Collections of magmatic rocks </td> </tr> <tr> <td> WP11-DDSS-027 </td> <td> Seismological data </td> <td> Earthquake parameters (hypocentral or magnitude) </td> </tr> <tr> <td> WP11-DDSS-028 </td> <td> Seismological data </td> <td> Tremor parameters (amplitude information) </td> </tr> <tr> <td> WP11-DDSS-029 </td> <td> Seismological data </td> <td> ShakeMaps </td> </tr> <tr> <td> WP11-DDSS-030 </td> <td> Geodetic data </td> <td> GNSS time series </td> </tr> <tr> <td> WP11-DDSS-033 </td> <td> Volcanological /petrological </td> <td> Catalogue of eruptions </td> </tr> <tr> <td> WP11-DDSS-034 </td> <td> Volcanological /petrological </td> <td> Maps of recent and past lava flows </td> </tr> <tr> <td> WP11-DDSS-035 </td> <td> Volcanological /petrological </td> <td> Maps of 
faults </td> </tr> <tr> <td> WP11-DDSS-041 </td> <td> Geochemical data </td> <td> Soil CO2 fluxes </td> </tr> <tr> <td> WP11-DDSS-043 </td> <td> Satellite data </td> <td> Brightness temperature and/or surface temperature by Optical Satellite (Visible, IR) </td> </tr> <tr> <td> WP11-DDSS-045 </td> <td> Ground-based remote sensing data&products </td> <td> Processed atmospheric Lidar Data for mapping airborne ash </td> </tr> <tr> <td> WP11-DDSS-046 </td> <td> Ground-based remote sensing data&products </td> <td> SO2 flux </td> </tr> <tr> <td> WP11-DDSS-051 </td> <td> Satellite data </td> <td> InSAR lava flow maps </td> </tr> <tr> <td> WP11-DDSS-052 </td> <td> Satellite data </td> <td> Eruptive physical parameters (e.g. MER *, altitude) </td> </tr> <tr> <td> WP11-DDSS-053 </td> <td> Ground-based remote sensing data&products </td> <td> Ground-based Doppler radar spectra </td> </tr> <tr> <td> WP11-DDSS-054 </td> <td> Geohazards </td> <td> SO2 concentration probabilistic hazard maps </td> </tr> <tr> <td> WP11-DDSS-055 </td> <td> Geohazards </td> <td> Volcanic hazard Event Tree </td> </tr> <tr> <td> WP11-DDSS-061 </td> <td> Software </td> <td> Spatial probability Tool (QVAST) </td> </tr> <tr> <td> WP11-DDSS-062 </td> <td> Software </td> <td> Event Tree HASSET tool </td> </tr> <tr> <td> WP11-DDSS-063 </td> <td> Software </td> <td> Bayesian Event Tree (BET) tools: computation and visualization of probabilistic long- and short-term volcanic hazard </td> </tr> <tr> <td> WP11-DDSS-067 </td> <td> Data services </td> <td> Station information </td> </tr> <tr> <td> WP11-DDSS-073 </td> <td> Seismological data </td> <td> Infrasound waveforms </td> </tr> </table> WP11-DDSS-1, -3, -5, -6, -7, 18, -23, -31, -36, -57, -58, -60, initially made available by some partners in 2018, will be made available by additional partners in 2019-2020.
# SUMMARY This report describes the first updated version of the data management plan (DMP) for the OpenRiskNet e-infrastructure project. The current DMP covers the general aspects of OpenRiskNet data management based on the FAIR (findable, accessible, interoperable and reusable) guidelines, ethics considerations for re-sharing of public datasets, and the first examples of shared data sources, including diXa, BridgeDb, WikiPathways, AOP-Wiki and ToxCast/Tox21. More specific data sources and clearly defined measures will be added in parallel to their integration into the infrastructure, which will follow the time plan enforced by the case study requirements on data availability. # INTRODUCTION The European Commission is running a flexible pilot under Horizon 2020 called the Open Research Data Pilot (ORD Pilot). The ORD Pilot aims to improve and maximise access to and re-use of research data generated by Horizon 2020 projects, and takes into account the need to balance openness and protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns, and security, as well as data management and preservation questions [1]. Open data is data that is free to access, re-use, repurpose and redistribute. The Open Research Data Pilot aims to make the research data generated by selected Horizon 2020 projects accessible with as few restrictions as possible, while at the same time protecting sensitive data from inappropriate access [2]. Projects starting from January 2017 are by default part of the ORD Pilot, and research infrastructures (including e-infrastructures) are required to participate.
Since one of the main aims of OpenRiskNet is to allow simpler, more harmonised access to public, open data sources and workflows, and to enrich these with semantic annotation to improve their interoperability with each other and with predictive toxicology and risk assessment software, OpenRiskNet fully supports the ORD Pilot, is developing best-practice approaches, and aims to act as a role model for data management and sharing. To help optimise the potential for future sharing and re-use of data, the OpenRiskNet Data Management Plan (DMP) helps the partners to consider any problems or challenges that may be encountered and helps them to identify ways to overcome these. This DMP is a "living" document developed using an online tool and given as a snapshot of the current status (November 2018) in this document. It outlines how the research data collected or generated, including redistribution of existing data sources as well as results from the _in silico_ investigations performed as part of the case studies, are handled during and after a research project. It follows the Guidelines on FAIR Data Management in Horizon 2020 [1] and is based around the resources available to the project partners in a realistic way, taking current knowledge into account. The ongoing activities to keep the DMP up to date follow an online, distributed approach, as outlined in the Guidelines for creating an online DMP (see Figure 1) [3]. Here, we summarise the concepts for the description of data sets as well as the data sharing and archiving approaches adopted in the DMP, followed by the relevant version of the DMP at the time of writing. Since the OpenRiskNet case studies, which define the integrated data sources, are under ongoing development, the current DMP covers the general aspects of OpenRiskNet data management but also specific and clearly defined measures for the first data sources integrated.
Additional sources will be added in parallel to the data integration. **Figure 1.** DMP tool interface used to create the OpenRiskNet plan [4] ## DATA SET DESCRIPTION This section gives the general concepts of what is considered data in OpenRiskNet and a listing of what kind of data the project collects, makes available for redistribution and sharing, or generates as part of the case studies (_in silico_ only), and to whom it might be useful later. More information and specific details are given in the DMP below. Data refers to: * Data generated in _in vivo_, _in vitro_, _in chemico_ and _in silico_ experiments broadly related to toxicology and risk assessment, in the form of raw, processed and summary data as well as metadata describing the type of data and how it was produced (protocols and method descriptions for the experimental, processing and analysis procedures); * More specifically, data and metadata needed to execute the case studies and validate results in scientific publications; and * Other curated and/or raw data and metadata that may be required for validation purposes or that have reuse value. The metadata provided with the datasets allow users to answer questions that enable data to be found and understood, ideally according to the particular standards applied. Such questions include but are not limited to: * What is the data about? * Who created it and why? * In what forms is it available? * What standards were applied?
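As a minimal, hypothetical illustration of a record answering these questions (every name and value below is invented for illustration, not an actual OpenRiskNet metadata record):

```python
# Invented example: a small metadata record answering the questions above
# (what the data is about, who created it and why, available forms,
# standards applied). ISA-Tab is named only as an example of a standard.

dataset_metadata = {
    "title": "Example cytotoxicity screen",   # what is the data about?
    "creator": "Example Lab",                 # who created it?
    "purpose": "hazard characterisation",     # and why?
    "formats": ["csv", "json"],               # in what forms is it available?
    "standards": ["ISA-Tab"],                 # standards applied
}

def describe(md):
    """Render a short, human-readable summary of a metadata record."""
    return f'{md["title"]} by {md["creator"]} ({", ".join(md["formats"])})'

summary = describe(dataset_metadata)
print(summary)
```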
Finally, the **metadata**, **documentation** and **standards** will help in making the data FAIR (Findable, Accessible, Interoperable and Re-usable), not only by providing the technical requirements such as global, persistent identifiers, clear access protocols (data application programming interfaces, APIs) and licenses, but also by harmonising and improving the scientific interoperability of the data through semantic annotation, allowing combination and enrichment of data sets using linked-data approaches (combined OpenAPI and JSON-LD descriptions of the data APIs). Data that are integrated, in the process of being integrated, or produced can be grouped into the following areas: * Existing toxicology, chemical properties and bioassay databases for redistribution * Existing omics databases for redistribution * Existing knowledge bases for redistribution, and information extracted by data mining * Intermediate or final results of _in silico_ studies performed as part of the case studies ## DATA SHARING According to the ORD Pilot programme, by default as much of the resulting data as possible should be archived as Open Access. Most data handled in OpenRiskNet is provided by international publicly funded projects, not-for-profit consortia, or governmental and regulatory agencies. These data sources are already available under open-data licenses and will be redistributed by OpenRiskNet in a restructured and enriched form under the same licenses. Newly generated data are results from the improved processing, analysis and modelling workflows developed as examples in the case studies, and OpenRiskNet is fully committed to making these publicly available, either as part of the publicly shared workflows or, if they have value outside the case studies, as a separate information and knowledge source.
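A hedged sketch of the combined OpenAPI/JSON-LD idea mentioned above: a data service response carries a JSON-LD `@context` so that clients can resolve each plain field name to a semantic term. The ontology IRIs, field names and values below are placeholders, not the annotations actually used by OpenRiskNet services.

```python
import json

# Illustrative only: attach a JSON-LD context to an otherwise plain API
# payload. The http://example.org/... IRIs are placeholders standing in for
# real ontology terms; the payload values are invented.

def annotate(payload, context):
    """Return the payload with a JSON-LD @context prepended."""
    return {"@context": context, **payload}

payload = {"compound": "caffeine", "assay": "cytotoxicity", "value": 12.3}
context = {
    "compound": "http://example.org/ontology/compound",  # placeholder IRI
    "assay": "http://example.org/ontology/assay",
    "value": "http://example.org/ontology/measuredValue",
}

response = annotate(payload, context)
print(json.dumps(response))
```

A JSON-LD-aware client can expand such a response so that `"assay"` becomes the full IRI, making responses from independently developed services comparable term by term.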
Working together with associated partners, especially from the commercial sector (service providers and end users from SMEs and larger industry), might put some restrictions on the sharing of data generated in these collaborations. Such legitimate reasons for not sharing resulting data will be explained in the DMP in the rare cases they have to be applied. Additionally, OpenRiskNet is committed to protecting personal data and IPR agreements and to responsible data sharing, and is taking all steps reasonably necessary to ensure that data is treated securely and in accordance with the OpenRiskNet privacy policy (see section below on the Privacy Policy). No personal data will be transferred to an organization or a country unless adequate controls are in place, including the security of data and personal information. Complementing these general data sharing policies, the DMP describes any ethical or legal issues that can have an impact on data sharing. Since in many cases the data production is not under the control of the OpenRiskNet partners, who only redistribute the data, the obligation to guarantee that the data is generated from high-quality, ethical research and can be shared under an open license lies with the original data provider or the primary data distributor. This includes the obligation to operate in conformity with the requirements of their own institution and to fulfil all necessary national and international regulatory and ethical requirements. OpenRiskNet is working together with the original data providers as well as ethics experts on producing workflows and checklists (see attachments) for the ethics evaluation, on documenting the measures adopted during the data generation process (ethical approval of the *in vivo* and *in vitro* experiments by the relevant authorities), and on the protection of personal data, e.g. by anonymization of data before sharing.
Whenever agreed on by the data provider and technically feasible, this data is made available to the OpenRiskNet user as part of the data service description.

## ARCHIVING AND PRESERVATION

To ensure that publicly funded research outputs can have a positive impact on future research, on policy development, and on societal change, it is also important to assure the availability of data for a long period beyond the lifetime of a project. This refers not only to storage in a research data repository, but also to the continued usability of the data. One of the main goals of the infrastructure created by the OpenRiskNet project is to harmonise data, make it interoperable and sustainable and, in some cases, even enable data sharing or replace existing data sharing solutions. Therefore, the project has a special obligation to preserve not only data produced in the project but also data from other projects redistributed by OpenRiskNet, together with any software or code produced to perform specific analyses or to render the data, and to be clear about any proprietary or open source tools that will be needed to validate and use the preserved data. OpenRiskNet is built upon software engineering and infrastructure components developed, supported and adopted by a large community, guaranteeing, on one hand, some stability and sustainability of the data sharing, accessing and processing solutions provided, even in the relatively quickly changing field of microservice architectures and deployments. On the other hand, the containerization approach adopted by OpenRiskNet allows for the storage of the data and software in the version used during the execution of the analysis and modelling workflows, allowing for complete and exact repeatability using the same code and for improved reproducibility due to better documentation. It has to be noted here that many of the data sources are only redistributed by OpenRiskNet. The primary data providers for e.g.
diXa and ToxCast are large European infrastructures or US agencies, ELIXIR and the US EPA, respectively. For these, data archiving and preservation have to be guaranteed by those institutions. However, OpenRiskNet, and more specifically the OpenRiskNet partner responsible for the integration into the OpenRiskNet infrastructure, is in charge of maintaining and updating the alternative method to access the data (the OpenRiskNet-compliant data API), guaranteeing that the data available within OpenRiskNet is at the same technical and curation level and at the same version as in the primary source, and sustaining the solution beyond the OpenRiskNet project. The same is true for data sources where OpenRiskNet also takes the responsibility of hosting the data and thus becomes the primary data source. In the latter case, archiving and preservation of the data source containers is of utmost importance, since otherwise there is the danger that the data is lost completely. Negotiations with the Birmingham Environment for Academic Research (BEAR) are underway to provide archiving and preservation facilities for at least the next 5 years for containerised data and software services for which this cannot be guaranteed by the service provider.

# DATA MANAGEMENT PLAN (DMP)

This data management plan addresses all data-related problems or challenges that may be encountered by partners during the execution of the project. It consists of general guidelines and project-internal rules and regulations dealing with the type of data collected, data sharing following the FAIR principles, and hard- and software resources, as well as with data security, privacy and ethics. Additionally, more details on all these aspects will be provided for specific data sources whenever necessary.

## 1\.
DATA SUMMARY

<table> <tr> <th> **Summary of the data addressing the following issues:** * State the purpose of the data collection/generation * Explain the relation to the objectives of the project * Specify the types and formats of data generated/collected * Specify if existing data is being re-used (if any) * Specify the origin of the data * State the expected size of the data (if known) * Outline the data utility: to whom will it be useful </th> </tr> </table>

### 1.1 Purpose of the data collection

The main purpose of collecting and using data and metadata in the OpenRiskNet project is to fulfil its main objectives of providing and improving solutions for data availability to the risk assessment scientific community, data quality, interoperability, standardization and sustainability, and to overcome some of the data-related issues, e.g.:

* Fragmentation of data across different databases;
* Low quality due to insufficient data curation;
* Poor explanation and insufficient detail on the experimental design and protocols applied;
* Data available in different formats and with different annotations.

Another goal is to generate guidelines and templates for data exchange, harmonise the use of ontologies, and develop criteria and solutions for controlling the quality of a dataset or *in silico* tool, for quantifying the uncertainty of predictive models, and for improving the repeatability and reproducibility of processing, analysis and modelling workflows.
### 1.2 Relation to the objectives of the project

The OpenRiskNet project aims to establish the infrastructure and service functions providing a centralised and standardised set of data and computing resources, accompanied by standardised operating procedures and guidance:

* Provision of quality sources of data to facilitate more accurate evaluation of toxicity;
* A data infrastructure offering a centralised repository for data created during other research programs, including the import of relevant research *in vitro*, *in vivo* and human data from other sources;
* Well-designed data import facilities to support ongoing data collection according to quality guidance;
* Use and further development of data annotation and exchange standards for describing toxicity data, based on application programming interfaces, in order to reduce errors and enable data integration from different laboratories, including data sources outside the program;
* Integration of regulatory reporting requirements with respect to metadata and documentation details and completeness, as well as export options into file formats like ISA, ToxML and OECD harmonised templates.

The OpenRiskNet project also aims to develop and optimise computational models and automated, reliable analysis workflows in order to increase the mechanistic understanding of toxicity:

* The models will permit identification of mechanistic links between omics data at different levels of functional organisation;
* The models will help to advance the understanding of the relationship between toxicity, architecture, function and risk;
* Computational sensitivity analysis components will aid in identifying the most sensitive parameters relevant to toxicity and guide further data acquisition and experiments towards increased chemical safety.
The data sources integrated during the project are highly relevant to the predictive toxicology and risk assessment community and are therefore used to showcase and evaluate the concepts and solutions provided by OpenRiskNet and how these address the aims just mentioned. Additionally, they are used in the case studies to provide example workflows on how to apply and combine the different tools for effective problem solving for the different aspects of risk assessment.

### 1.3 Types and formats of data

OpenRiskNet is structured around the concept of semantically annotated application programming interfaces, which will also be used to search and access data from OpenRiskNet-compliant data sources. As the serialised exchange format, JSON, or its semantically annotated form JSON-LD, is recommended and enforced whenever possible. These formats will mainly cover the metadata associated with the data and, in the case of small numbers of readouts (experimental toxicology endpoints) per sample, also the data itself. Especially for omics data or imaging techniques, these files will be accompanied by data in standard file formats to keep compatibility and interoperability with tools developed in these areas, such as gene- and pathway-enrichment approaches or image recognition software, respectively. Additionally, to be able to integrate legacy data and provide the final results in the format required for e.g. regulatory reporting, OpenRiskNet supports the import and export of standard file formats like ISA (-tab or -json), ToxML and OECD harmonised templates. However, such technical file format conversion solutions require a scientific harmonization of the metadata completeness and data description levels, which are better defined for the summary data used for regulatory purposes than for raw data, where the reporting style is very dependent on the individual data provider and application area.
Therefore, we are working together with other large projects (EU-ToxRisk and NanoCommons) to define the amount and content of metadata that has to be provided for each experimental assay or computational investigation, and data formats where no standards exist so far, as well as providing the means for future additions and adaptations based on flexible data schema specifications. However, only the strict usage of ontologies in these data and metadata descriptions can guarantee that the information is easily understood by the user or automatically transferred between services.

### 1.4 Reuse of data

For data and computational models, we use, as much as possible, existing data and software tools that are open and readily available to all partners. We will aim at re-usable and extensible tools. OpenRiskNet is not producing any new experimental data but is reusing data from publicly available data sources or, if absolutely needed for interaction with an associated partner or a case study, data from other projects not yet under an open license. It is the clear goal of the project to provide all input data, independent of the original source, as well as the results from the processing, analysis and modelling workflows under an open-data license, and to provide it in an easy way for reuse by others. On one hand, making sharing, accessing and reusing of data easier is the main goal of the data solutions provided by OpenRiskNet and of the integration of the reference data sources. On the other hand, results from the *in silico* investigations are considered equally valuable for sharing and reuse, especially with the goal of improving the evaluability, repeatability and reproducibility of these computational studies. Full documentation of the workflows, including intermediate results, and permanent storage of the final outcomes, richly annotated by metadata describing the procedures, is therefore another central goal of the OpenRiskNet infrastructure.
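To make the recommendation of JSON-LD as the exchange format concrete, the sketch below builds a minimal, semantically annotated metadata record in Python. The vocabulary (schema.org) and all field values are illustrative assumptions, not the normative OpenRiskNet schema.

```python
import json

# A minimal, illustrative JSON-LD record for a dataset's metadata.
# The vocabulary (schema.org) and the field choices are assumptions
# for demonstration, not the actual OpenRiskNet data schema.
record = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example transcriptomics study",
    "identifier": "https://example.org/dataset/0001",  # hypothetical persistent ID
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "measurementTechnique": "microarray transcriptomics",
    "variableMeasured": ["gene expression"],
}

# Serialise for exchange between services; the @context makes every
# key resolvable to a shared vocabulary term.
serialized = json.dumps(record, indent=2)
print(serialized)
```

Because the `@context` maps plain keys to globally defined terms, two services that agree on the vocabulary can combine such records without bespoke field mapping, which is the linked-data benefit the text describes.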
### 1.5 Origin of the data

As mentioned before in many places, the main data integrated, used and provided for easy reuse by OpenRiskNet comes from other publicly funded research and infrastructure projects or institutions and is already in the public domain or will be made publicly available soon. However, users might also want to access commercial data services provided by associated partners or use their in-house data as part of the infrastructure and partly share it with selected users under a specific license. These considerations lead to three different classes grouping the origins of the data:

* Data and models owned and provided by OpenRiskNet partners and associated partners as part of the project work under an open-data license;
* Open source data and models provided under the license specified by the owners;
* Data from third parties, including associated partners and commercial services of OpenRiskNet partners, not yet available in existing open databases, provided under the conditions specified by the data owner and included in a formal agreement.

For all these data sources, the original license of data usage has to be considered and also applied (in the original or a more restricted form) to the version integrated in OpenRiskNet environments. To prevent unauthorised data access even in virtual environments shared by multiple users, like the reference environment, an authentication and authorisation service is integrated in the OpenRiskNet infrastructure, which also handles the license management. Commercial software, or free software requiring a registration, is handled in the same way.

### 1.6 Expected size of the data

As described above, the idea of the OpenRiskNet infrastructure is not to combine data from different sources into one data warehouse but to access the data from its original source and use the interoperability layer added to the data services to harmonise them.
In this way, no additional capacity for storage of the original data is needed. However, two aims of the OpenRiskNet project might lead to additional requirements on data storage:

1. Some of the data sources considered for integration are not yet available in openly accessible databases, cannot be accessed via application programming interfaces, or don't comply with the FAIR principles. In such cases, OpenRiskNet will negotiate with the data owners whether the data should be transferred to standard data repositories or whether the existing solution should be improved within the framework of the associated partner programme.
2. Most if not all of the data sources should also be provided in a form suitable for in-house deployment. Even if the user's system administrator setting up the in-house virtual research environment (VRE) is responsible for providing the required resources for such deployments, the data sources have to be containerised and provided to the users for download via the OpenRiskNet service catalogue.

To assess the storage space needed for the containers and to give the users guidance on the computational resources needed for the VRE, expected sizes for all data services are given in section 1.8 below.

### 1.7 Utility of data and models

OpenRiskNet solutions will work towards making data available to its main stakeholders, researchers, risk assessors and regulators, in an easily accessible, standardised and harmonised way, in order to be able to base conclusions and recommendations about the safety of a chemical, drug, cosmetic ingredient or nanomaterial on all available evidence. The same principles are applied to the data processing, analysis and modelling tools involved in risk assessment.
Access by industry to the data infrastructure part of OpenRiskNet has the merit of providing a wide spectrum of data with which industry could perform parts of its research and development activities, lowering the barriers to real innovation resulting in new products, processes and services. Close cooperation with the regulatory agencies is also key to push the regulatory acceptance of the integrated tools and workflows. Possible beneficiaries of the data, computational models and e-infrastructure:

* Industry, represented by chemicals, pharma, food, cosmetics or other consumer products companies, which is required to use all available information, to address the '3Rs' principles, and to report on alternative methods used (including *in silico*);
* Regulatory agencies (e.g. ECHA, EMA, EFSA);
* SMEs, as they frequently do not have in-house tools and knowledge resources for the regulatory risk assessment requirements;
* The R&D community: the translation of these methods to industrial and regulatory science will result in a deeper understanding of biological responses to perturbations, supporting e.g. better designed and safer drugs and clinical practice;
* Consumers: OpenRiskNet aims to support the integration of apps that consumers can use on their mobile phones to support everyday activities, such as obtaining knowledge on the ingredients in products they are purchasing or using.

### 1.8 Specific information on individual shared data sources

In this section, we address specific information and requirements of individual data sources provided by OpenRiskNet with respect to their purpose, origin, relationship to the project, data type and format, size, and potential users. Sections 2.5 and 5.3 below fulfil the same purpose for issues of FAIR data sharing and ethics. These are meant as additions or clarifications to the general descriptions, relevant only for the specific data set / database.
Points completely covered by the general remarks will not be repeated here, and thus some of the subsections above will not appear in the database descriptions.

#### 1.8.1 diXa

Like OpenRiskNet, diXa was an e-infrastructure project for collecting data from different research projects and making them publicly available via a common interface. Thus, its goals and objectives fit those of the OpenRiskNet project. As part of the integration of the diXa Data Warehouse into the OpenRiskNet infrastructure, the data access will be semantically annotated and further harmonised with other OpenRiskNet services.

1.8.1.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The Data Infrastructure for Chemical Safety (diXa) project (http://www.dixa-fp7.eu/) was funded by EU FP7 to provide a single resource for the capture of toxicogenomics data produced by past, present and future EU research projects, and to ensure sustainability of such a resource for use by the wider research community. For this purpose, the diXa Data Warehouse was established (http://wwwdev.ebi.ac.uk/fg/dixa/index.html). Data from the diXa Data Warehouse as well as other sources (i.e. NCBI GEO and EBI ArrayExpress) is currently being used in a meta-analysis for genotoxicity prediction using data from multiple *in vitro* cell models as part of the TGX case study. The results using only the human data have been presented as a poster at EUROTOX 2018 (https://doi.org/10.1016/j.toxlet.2018.06.608).

1.8.1.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The diXa Data Warehouse comprises 95 studies, 29,609 samples and 469 compounds (including solvents).
*In vitro* human and rodent data and *in vivo* rodent data were collected from the EU FP6 projects carcinoGENOMICS, PredTox, NewGeneris and Predictomics, the EU FP7 projects ESNATS and Predict-iv, the Dutch project of the Netherlands Toxicogenomics Centre, the Japanese project Open TG-GATEs and the US project DrugMatrix. Most studies consist of transcriptomics data, whereas a few also contain metabolomics and/or proteomics data. It should be noted that the two *in vivo* human studies of the EU FP7 Envirogenomarkers project do not contain data, as these have been retracted based on objections from the Swedish biobank with regard to personal data protection. In addition, 188 human disease transcriptomics data sets have been added to the data warehouse. Metadata for all studies and disease data sets are captured in the ISA-tab format. In addition to this omics data collection, links to other globally available chemical/toxicological databases were provided. The diXa Data Warehouse has been further used in the EU FP7 project HeCaToS, coordinated by UM. In this project the data warehouse has become part of EBI's BioStudies (https://www.ebi.ac.uk/biostudies/) and the data generated in HeCaToS were directly uploaded to BioStudies. Upon public release of these data they can also be used within OpenRiskNet.

1.8.1.3 Additional details to 1.6 Expected size of the data

The raw data of the ~30,000 samples are at least 400 GB in size.

#### 1.8.2 BridgeDb

The BridgeDb project was set up to provide both identifier mapping data and a general framework that provides an API to access identifier mapping data [5]. BridgeDb is used in smaller and larger projects, the latter including WikiPathways, Cytoscape and Open PHACTS [6]. It is available in various forms, including an OpenAPI web service, Java library, Docker image, and BioConductor package. The platform supports two kinds of identifiers. The first are simple identifier-data source combinations.
The second is Internationalised Resource Identifiers, for use in semantic web technologies.

1.8.2.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

Data interoperability requires identifier mappings. The mapping data is collected by the BridgeDb project and reshared in OpenRiskNet (possible because of the open licenses). The availability of identifier mappings allows simplifications of workflows. The data is part of the BridgeDb Docker services and is either preloaded (as in the current OpenRiskNet services) or loaded when the service is started (this approach is currently not actively used in OpenRiskNet).

1.8.2.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

BridgeDb identifier mapping databases are commonly available in two formats: Derby data files and link sets. The two formats have been developed for different use cases. BridgeDb identifier mapping databases are available under open licenses or CC-Zero. Identifier mapping is essential to data set interoperability. Existing identifier mapping databases suffice for the current needs, but mapping databases are expected to be needed for other entities, like nanomaterials and AOP entities (e.g. stressors, key events, outcomes).

1.8.2.3 Additional details to 1.6 Expected size of the data

Identifier mapping databases are released under the data management plan of the BridgeDb project. Data is shared in different ways depending on the type of entity. Metabolite identifier mapping databases are released on Figshare, and gene-variant databases are planned to be released on Figshare or Zenodo. The gene/protein and interaction mapping databases are currently still released using a custom approach, using a download server, and are not actively archived yet. The sizes of these databases vary, but typically are in the order of 500 MB to 1 GB. An exception are the gene-variant databases, which are much larger.
All sizes are still well within the scope of what archival websites allow.

1.8.2.4 Additional details to 1.7 Utility of data and models

Identifier mapping is essential to data set interoperability, since there are multiple competing identifier systems available for labeling e.g. chemical compounds, genes and pathways. Existing identifier mapping databases suffice for the current needs, but mapping databases are expected to be needed for other entities, like nanomaterials and AOP entities (e.g. key events, outcomes). Additionally, access to these tools from other services, for e.g. cross-database searches and data curation and enrichment, will be facilitated by the OpenRiskNet integration.

#### 1.8.3 WikiPathways

WikiPathways is a molecular pathway database established by the WikiPathways team, a collaboration between the Department of Bioinformatics of Maastricht University and the Gladstone Institutes, San Francisco. Its purpose is to facilitate the contribution and maintenance of pathway information by the biology community, utilizing the open, collaborative platform of WikiPathways.

1.8.3.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The contents of WikiPathways comprise molecular pathways, consisting of nodes annotated for genes, proteins, and metabolites, which can be utilised for omics data analysis through pathway analysis in PathVisio. The WikiPathways database captures biological knowledge in pathway diagrams supported by scientific literature. Because molecular pathways can describe processes in any field of biology, the database is relevant for toxicological risk assessment workflows. Pathways describe the connections between biological entities and show how a disturbance by a chemical or nanomaterial could cause downstream effects.
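The pathway analysis of omics data mentioned above is commonly an over-representation test: how unlikely is it that a list of differentially expressed genes overlaps a pathway's gene set by chance? A pure-Python sketch of that test, using the hypergeometric distribution, is shown below; the gene counts are toy numbers, and real analyses (e.g. in PathVisio) add identifier mapping and multiple-testing correction.

```python
from math import comb

def hypergeom_pvalue(overlap, pathway_size, hits, universe):
    """P(X >= overlap) when drawing `hits` genes from a universe of
    `universe` genes that contains `pathway_size` pathway members."""
    total = comb(universe, hits)
    p = 0.0
    for k in range(overlap, min(pathway_size, hits) + 1):
        p += comb(pathway_size, k) * comb(universe - pathway_size, hits - k) / total
    return p

# Toy numbers (purely illustrative): a 40-gene pathway, 100 differentially
# expressed genes of which 10 fall in the pathway, 20,000-gene universe.
p = hypergeom_pvalue(overlap=10, pathway_size=40, hits=100, universe=20000)
print(f"enrichment p-value: {p:.2e}")
```

A small p-value flags the pathway as enriched in the gene list, which is what makes pathway databases useful for the biological interpretation of omics data.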
1.8.3.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The molecular pathways in WikiPathways are developed and curated by researchers and are based on scientific literature. Pathways are available in multiple formats, including but not limited to the original Graphical Pathway Markup Language (GPML), Resource Description Framework (RDF), gene lists (GMT format), and nanopublications. The CC-Zero license puts no restrictions on reuse.

1.8.3.3 Additional details to 1.6 Expected size of the data

The complete collection of GPML files is less than 100 MB.

1.8.3.4 Additional details to 1.7 Utility of data and models

Biological pathways are used for data analysis, biological interpretation of omics data, and data integration.

#### 1.8.4 AOP-Wiki

The AOP-Wiki is the primary repository of qualitative, mechanistic Adverse Outcome Pathway (AOP) knowledge. It was developed by the Organisation for Economic Co-operation and Development (OECD), representing a collaboration between the European Commission DG Joint Research Centre and the US Environmental Protection Agency. The AOP-Wiki is part of the AOP Knowledge Base, which was launched by the OECD to allow everyone to build AOPs.

1.8.4.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The AOP-Wiki data comprises mechanistic toxicological knowledge relevant for risk assessment. While most of the knowledge is present as free-text, literature-supported descriptions, essential aspects, such as biological processes, objects, cell types, and stressor chemicals that cause a disturbance, are annotated with ontologies and chemical identifiers. Therefore, the AOP-Wiki serves as a knowledge base for toxicological effects related to a variety of chemicals, summarising the relevant literature.
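Structurally, an AOP is a chain of Key Events running from a molecular initiating event (MIE) to an adverse outcome (AO). The toy Python sketch below represents such a chain as a directed graph and walks it downstream; all event names are invented for illustration, whereas real AOP-Wiki entries carry ontology annotations and literature support.

```python
# Toy representation of an AOP as key-event relationships (directed edges).
# Event names are hypothetical; this only illustrates the MIE -> KE -> AO
# structure described in the text, not actual AOP-Wiki content.
key_event_relationships = {
    "MIE: receptor binding": ["KE: gene expression change"],
    "KE: gene expression change": ["KE: cell proliferation"],
    "KE: cell proliferation": ["AO: tumour formation"],
}

def downstream_path(event, edges):
    """Follow the (linear, in this toy case) chain of key events."""
    path = [event]
    while edges.get(path[-1]):
        path.append(edges[path[-1]][0])
    return path

path = downstream_path("MIE: receptor binding", key_event_relationships)
print(" -> ".join(path))
```

Because the Key Events are chemical-agnostic, the same intermediate events can be reused by several AOPs, which is the re-usability benefit the text attributes to the AOP-Wiki.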
1.8.4.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

Knowledge in the AOP-Wiki is stored partly as free text and partly as ontology annotations and chemical identifiers. The data originates from the AOP-Wiki database and is supported by scientific literature that is gathered and written up by researchers. The contents of the AOP-Wiki are reviewed by the OECD Extended Advisory Group on Molecular Screening and Toxicogenomics (EAGMST). Nightly exports of the AOP-Wiki contents are available, but only quarterly downloads are stored and maintained permanently on the Wiki, which allows citation when the information is reused.

1.8.4.3 Additional details to 1.6 Expected size of the data

While the contents of the AOP-Wiki are increasing rapidly on a daily basis, the latest permanent download of the data (October 2018) does not exceed 12 MB.

1.8.4.4 Additional details to 1.7 Utility of data and models

In order to perform risk assessment, one has to gather all relevant knowledge about the mechanistic effects of the compound that requires assessment. The AOP-Wiki allows for reusing mechanistic knowledge of toxicological events upon disturbance by a stressor, often a chemical. As AOPs are developed in a way that separates knowledge into biological events (called Key Events) that are chemical-agnostic, their major purpose is the re-usability of toxicological knowledge. Therefore, the contents of the AOP-Wiki can be relevant for every risk assessment workflow, providing mechanistic information about biological processes and linking these together.

#### 1.8.5 ToxCast

The United States Environmental Protection Agency Toxicity Forecaster (ToxCast) 1 has generated toxicity screening data on thousands of chemicals in commerce and of interest to the agency and the general public. The project also uses computational approaches to prioritise and rank chemicals for risk assessments and regulatory decision making.
The data is publicly available, widely distributed, and can be annotated to fit into the OpenRiskNet data harmonisation and integration framework.

1.8.5.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The ToxCast research project generates data from high-throughput *in vitro* toxicity screens for a variety of chemicals and biological targets. One of the goals of the project is to prioritise and evaluate the potential human health risk of chemicals in a cost-efficient way. The data generated also includes computational and predictive models to predict the toxicity potential of the chemicals in humans. The results of these analyses are being used actively to inform decision making in contexts such as endocrine disruptor screening. The data can be integrated with other data services of the OpenRiskNet infrastructure, such as ontology mapping, pathway identification and mapping, and AOP development tools. Users of the OpenRiskNet service will be able to take advantage of the information gaps filled through the integration of these datasets and models to develop predictive toxicology and risk assessment models, e.g. read-across models. Examples of such uses are created in the case studies: collecting evidence from all available data sources to create profiles of specific compounds, complementing omics data in bioinformatics workflows, data-driven development and validation of AOPs, and model building based on chemical and biological data. The US EPA is also making the data publicly available and accessible through various means. As the project progresses and more chemicals are screened, the agency makes periodic updates to the public release as more data is generated.
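High-throughput screens of this kind are typically summarised by fitting concentration-response curves, from which potency values such as an AC50 (the concentration giving half-maximal activity) are derived. The sketch below illustrates the idea on synthetic data with a Hill curve and a coarse grid search; it is a didactic simplification, not the actual ToxCast analysis pipeline.

```python
# Simplified sketch: fit a Hill curve  r(c) = top / (1 + (ac50 / c)**n)
# to synthetic concentration-response data by grid search over AC50.
# Data, fixed top and Hill slope are all illustrative assumptions.

def hill(conc, top, ac50, n):
    return top / (1.0 + (ac50 / conc) ** n)

concs = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]          # micromolar, synthetic
responses = [2.0, 5.0, 18.0, 52.0, 88.0, 97.0]    # percent activity, synthetic

def fit_ac50(concs, responses, top=100.0, n=1.5):
    """Return the AC50 from a coarse log-spaced grid minimising squared error."""
    candidates = [10 ** (i / 50.0) for i in range(-150, 150)]  # ~0.001 to ~1000
    def sse(ac50):
        return sum((hill(c, top, ac50, n) - r) ** 2
                   for c, r in zip(concs, responses))
    return min(candidates, key=sse)

ac50 = fit_ac50(concs, responses)
print(f"estimated AC50 ~ {ac50:.2f} uM")
```

Potency values obtained this way are what makes the screening data usable for ranking and prioritising chemicals, as described above.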
1.8.5.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The data currently available on the ToxCast dashboard includes over 9,076 chemicals tested in 1,192 assays (as at November 26, 2018) that map to hundreds of genes in both humans and rats. The chemicals screened span various uses, including industrial, individual, food additive and potentially safer alternatives to already existing older chemicals. The assays tested are usually of two types: i) cell-based assays, which measure changes in cellular response to the test substances; and ii) biochemical assays, which measure the activity of a biological macromolecule. The cells typically used may be human or rat primary cells and cell lines. To inform chemical safety decisions, the computational toxicology research group at the US EPA makes both the archived and current versions of the data available to the public through 1) a database called invitroDB, which is a MySQL download of all the data, 2) summary data in flat-file format (e.g. comma-separated value files and tab-separated files), 3) concentration-response plots in PDF format, and 4) a ToxCast dashboard web application, which serves as a portal for users to search and query the data. No personal data is collected during the data generation process, as all data in this set is *in vitro* and commercial cell lines are used for the testing.

1.8.5.3 Additional details to 1.6 Expected size of the data

The current version of the invitroDB database download is less than 8 GB. The accompanying summary files, analysis pipeline and concentration-response plots have a cumulative size of less than 30 GB.

1.8.5.4 Additional details to 1.7 Utility of data and models

The high-throughput screening toxicity data and models available in ToxCast cover a wide chemical and biological space useful for risk assessment, and as such are of great value to the OpenRiskNet stakeholders.
Integrating this dataset into the OpenRiskNet infrastructure will allow for easier access to the data by users who may not have the background or expertise to set up and run the local databases and modelling pipelines. In addition, being able to access this data from the OpenRiskNet service will also create greater utility for the data, as it can be directly cross-referenced and used for modelling or analysis with other data in the service.

## 2\. FAIR DATA

### 2.1 Making data findable, including provisions for metadata

<table> <tr> <th> _●_ </th> <th> Outline the discoverability of data (metadata provision) </th> </tr> <tr> <td> _●_ </td> <td> Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers? </td> </tr> <tr> <td> _●_ </td> <td> Outline naming conventions used </td> </tr> <tr> <td> _●_ </td> <td> Outline the approach towards search keyword </td> </tr> <tr> <td> _●_ </td> <td> Outline the approach for clear versioning </td> </tr> <tr> <td> _●_ </td> <td> Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how </td> </tr> </table> OpenRiskNet is integrating existing data sources and making them easier to find, access and interoperate. This is based, on the one hand, on the metadata provided by the data sources and, on the other hand, on the interoperability layer, which harmonises these metadata into service descriptions and data schemata that can be queried through the OpenRiskNet discovery service. The description of the capabilities of a database and the data schema allows for: * Accessing specific search functionality; * Identifying the data fields to be searched (e.g.
where information on the biological assays is stored); * Finding the best format for data exchange; * Understanding all the data and tools, with transparent access to metadata describing the experimental setup or computational approaches. In the case that the original data sources do not provide all features required by the FAIR principles, such as unique identifiers, the interoperability layer added to the service in the context of OpenRiskNet can add these features and in this way improve the quality of the data source.

### 2.2 Making data accessible

<table> <tr> <th> _●_ </th> <th> Specify which data will be made openly available? If some data is kept closed provide rationale for doing so </th> </tr> <tr> <td> _●_ </td> <td> Specify how the data will be made available </td> </tr> <tr> <td> _●_ </td> <td> Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)? </td> </tr> <tr> <td> _●_ </td> <td> Specify where the data and associated metadata, documentation and code are deposited </td> </tr> <tr> <td> _●_ </td> <td> Specify how access will be provided in case there are any restrictions </td> </tr> </table> The OpenRiskNet approach will enable the early and transparent sharing and analysis of data between organisations involved in many sectors and programs. The OpenRiskNet APIs and the transfer formats used were openly released immediately after their definition had reached a first stable form, and updates will be made available throughout the project. In the prioritization of services to be integrated, open-source tools are favoured for use in the case studies and the reference workflows targeting specific questions, but commercial services will be equally important to sustain the infrastructure in the long run.
However, these commercial services are also required to openly share their API definitions and data formats to allow for integration and combination with other tools. Open standards applied: * Data and models are stored and served using well-developed and widely applied standards and technologies that promote data reuse and integration, such as JSON-LD, RDF and related semantic web technologies; * OpenRiskNet resources are aligned with activities of toxicology communities like OpenTox in developing open standards for predictive toxicology resources; * Tools to access study data and metadata descriptions in standard file formats such as ISA, already in use in a number of omics, toxicogenomic and nanosafety resources (e.g. ToxBank, diXa, eNanoMapper), further simplify the integration; * Model descriptions are encoded following suitable open standards (e.g. QMRF, BEL, SBML) and annotated according to appropriate minimal information standards (MIRIAM) for dissemination through appropriate repositories (e.g. BioModels) to cover the extended requirements of the semantic interoperability layer of OpenRiskNet. OpenRiskNet does not propose to create new file standards but rather to employ existing approaches to define a core set of information which the scientific community agrees is important to document, but which can also be modified and extended if necessary for a specific application. For defining this core set, regulatory file formats like **OECD harmonised templates (OECD HT)** [7] and **Standard for Exchange of Nonclinical Data (SEND)** [8] are considered. Even though these file formats are too limited and do not have the flexibility to be used outside regulatory purposes, especially for early-stage research and method development, the OpenRiskNet partners developing the guidelines are including as much of the information needed for these reports as possible in the data transfer templates.
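To illustrate the JSON-LD-based annotation mentioned above, a dataset description might look like the following sketch. The context terms borrow from schema.org; this is an illustrative example under that assumption, not the actual OpenRiskNet annotation schema.

```python
import json

# Illustrative JSON-LD-style dataset annotation. The vocabulary terms come
# from schema.org; the OpenRiskNet service descriptions use their own schema.
annotation = {
    "@context": {
        "schema": "https://schema.org/",
        "name": "schema:name",
        "license": "schema:license",
        "measurementTechnique": "schema:measurementTechnique",
    },
    "@type": "schema:Dataset",
    "name": "Example in vitro screening dataset",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "measurementTechnique": "cell-based reporter assay",
}

# Serialising to JSON yields a document that JSON-LD tools can expand
# against the embedded @context, linking each field to a shared vocabulary.
serialized = json.dumps(annotation, indent=2)
```

Because the `@context` maps every local key to a vocabulary term, two services using different key names can still be recognised as describing the same property after JSON-LD expansion.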
### 2.3 Making data interoperable

<table> <tr> <th> _●_ </th> <th> Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability </th> </tr> <tr> <td> _●_ </td> <td> Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies? </td> </tr> </table> The OpenRiskNet interoperability layer opens possibilities to provide data schemata, which describe the format of the data using a controlled vocabulary: * Metadata standards and data documentation approaches consider the existing standards that can be consolidated and the equivalent data that can be retrieved independent of the file format; * Developments towards the integration of ontologies under a single framework are ongoing together with partner projects, mainly from the EU NanoSafety Cluster, which will contribute to the goal of automatic harmonisation and annotation of datasets. The goal is not to develop new ontologies. Instead, already existing ontologies (e.g. OBI, ChEBI, PATO, UO, NCIT, EFO, OAE, eTOX, eNanoMapper, MPATH, etc.) are consolidated and integrated into application ontologies for the toxicology community and specifically for the requirements of OpenRiskNet service annotation. * Other requirements for establishing the comprehensive use of ontologies, and in this way fostering the interoperability not only of the major data sources but also of user-provided data, are user-friendly capturing frameworks supporting the selection of ontology terms during data curation and an ontology mapping service resolving issues of using synonyms from different ontologies (e.g.
CAS numbers can be annotated using the National Cancer Institute Thesaurus, the EDAM ontology or even the Chemical Information Ontology reused in the eNanoMapper ontology, where it is available under the term “CAS registry number”). OpenRiskNet is working with experts in the field to integrate such tools in the infrastructure. * Allowing mapping between related items in different databases (e.g. different gene identifiers, linking genes to proteins or RNA identifiers, or mapping between equivalent chemical structures in different databases). BridgeDb, which can perform such mappings and is already part of the OpenRiskNet services, is thus a core interoperability service. Additionally, we provide guidelines and training on the usage of standard data transfer/sharing formats and ontologies in the context of OpenRiskNet: * Best-practice examples like diXa and ToxBank are used to create templates for data storage and sharing; * Data schemata for different endpoints, as already available in file formats like ISA and ToxML, will be transformed into a more flexible data transfer approach able to accommodate modifications needed due to changed and enhanced experimental and computational protocols. An important part of this procedure is searching and accessing data from different sources, supported by the semantic annotation of the data sources based e.g. on the Bioschemas and BioAssay ontologies: * The databases will be accessible by the OpenRiskNet APIs (similar to the computational tools) including the interoperability layer; * Searches throughout multiple databases will be possible, removing the need to search in each one independently; * The interoperability layer can be used to inspect the data schema and find out if the needed information is available from the databank and if it can be provided in a form suitable for further analysis.
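As a small illustration of such identifier mapping, the sketch below parses the tab-separated response of a BridgeDb xrefs call into a per-datasource dictionary. The endpoint pattern and the example identifiers are stated from memory and should be verified against the BridgeDb web service documentation before use.

```python
# Assumed endpoint pattern of the public BridgeDb web service
# (verify against the BridgeDb documentation before relying on it):
BRIDGEDB_URL = "https://webservice.bridgedb.org/{organism}/xrefs/{system_code}/{identifier}"

def parse_xrefs(tsv_text):
    """Parse a BridgeDb xrefs response: one 'identifier<TAB>datasource' per line."""
    mappings = {}
    for line in tsv_text.strip().splitlines():
        identifier, datasource = line.split("\t")
        mappings.setdefault(datasource, []).append(identifier)
    return mappings

# Abridged example response (identifiers illustrative):
example = "ENSG00000142208\tEnsembl\nP31749\tUniprot-TrEMBL"
```

Grouping by datasource makes it straightforward to pick, say, all Ensembl identifiers for a gene when cross-referencing records between two databases.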
OpenRiskNet provides its potential data managers or developers complete control over the provided data and associated functionalities, but requires them to describe the interfaces and transfer formats in a generally understandable, OpenRiskNet-compliant way through the interoperability layer. This work is based on and extends: * OpenTox APIs, which were designed to cover the field of QSAR-based predictive toxicology with dataset generation, model building, prediction and validation; * Open PHACTS APIs, which handle knowledge collection and sharing; * Various other APIs for accessing databases like BioStudies, EGA, ToxBank, and PubChem.

### 2.4 Increase data re-use (through clarifying licenses)

<table> <tr> <th> _●_ </th> <th> Specify how the data will be licenced to permit the widest reuse possible </th> </tr> <tr> <td> _●_ </td> <td> Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed </td> </tr> <tr> <td> _●_ </td> <td> Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why </td> </tr> <tr> <td> _●_ </td> <td> Describe data quality assurance processes </td> </tr> <tr> <td> _●_ </td> <td> Specify the length of time for which the data will remain re-usable </td> </tr> </table> Most of the data sources are already available in the public domain. OpenRiskNet will redistribute the data using the same license as the original data provider or, if this is demanded by the data provider, in a more restricted form. New data will be made available through the OpenRiskNet access methods as soon as it is released by the original data provider, i.e. no additional embargo period will be enforced by the OpenRiskNet project.
Thus, users can access the same data, with respect to the number of datasets and the version of each dataset, either from the original service provider, e.g. via a web interface specifically designed for the data warehouse, or through the OpenRiskNet mechanism, where the latter has the advantage of easy integration into workflows and interoperability with other data sources and software tools. Besides this simpler access, OpenRiskNet aims to improve the quality of the data with the following measures: Data quality assurance processes: * Tools for performing automatic validation, analysis and (pre)processing are developed to find inconsistencies in the data and databases and, in this way, improve the quality of the source, and are made available in OpenRiskNet, e.g. _http://arrayanalysis.org/_ and _https://github.com/BiGCAT-UM_. Additionally, efforts to establish a general, cross-database data curation framework, in which users can flag possible errors in the data and semantic annotation, are supported. * Some partners (e.g. UM) developed their own pipelines for quality control and analysis of sequencing data (RNA-seq and MeDIP-seq). * We also integrate tools for manual curation of datasets. The modified datasets are stored (similar to the pre-reasoned datasets) in the original databases as a new version of the dataset or in other OpenRiskNet-compliant databases with a link to the original source. Quality assurance in the processing, analysis and modelling tools: * Protocolling of the performed calculations, increasing the repeatability and reproducibility of the studies, is supported by the automatic logging and auditing functionalities of modern microservice frameworks as well as the integrated workflow management systems. * Validation of the services is enforced by the consortium and appropriate measures of uncertainty are requested for all models.

### 2.5 Specific information on individual shared data sources

#### 2.5.1 diXa

The diXa Data Warehouse is open access.
Data retrieval as well as upload takes place via the diXa Data Warehouse webpage _http://wwwdev.ebi.ac.uk/fg/dixa/index.html_ using the ISA-Tab format for the metadata. There is no API available for diXa. Since the diXa Data Warehouse has become part of BioStudies, diXa’s data can also be accessed there. Data uploaded via BioStudies using the PageTab format (Page layout Tabulation format) will be part of BioStudies and therefore will not be visible via the diXa Data Warehouse. The same is true the other way around. BioStudies also provides other formats, such as JSON and XML. Furthermore, BioStudies is part of the ELIXIR infrastructure and a REST API is available.

#### 2.5.2 BridgeDb

The BridgeDb software is available under the OSI-approved Apache License 2.0. Identifier mapping files are available under open licenses too, following the open licenses of the upstream resources (Ensembl, Rhea) or CCZero in the case of the metabolite mapping database. The BridgeDb web service and data for identifier mappings are made available on the OpenRiskNet cloud using an OpenAPI specification wrapped around a REST service.

#### 2.5.3 WikiPathways

All contents of WikiPathways are licenced with the Creative Commons CC0 waiver, which states that all contents of the database are free to share and adapt. WikiPathways adopts a customised quality assurance protocol to curate the database, which is done on a weekly basis.

#### 2.5.4 AOP-Wiki

The AOP-Wiki provides quarterly downloads of the complete database, which are permanently maintained by the OECD. The AOP-Wiki does not provide licence information, but states that the data can be reused. All AOPs undergo review by EAGMST to ensure the quality of the contents of the AOP-Wiki. <table> <tr> <th> **FAIR Principles** </th> <th> **WikiPathways** </th> <th> **AOP-Wiki** </th> </tr> <tr> <td> F1. (Meta)data are assigned globally unique and persistent identifiers </td> <td> 2 </td> <td> 1 </td> </tr> <tr> <td> F2.
Data are described with rich metadata </td> <td> 1 </td> <td> 1 </td> </tr> <tr> <td> F3. Metadata clearly and explicitly include the identifier of the data they describe </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> F4. (Meta)data are registered or indexed in a searchable resource </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> A1.1. The protocol is open, free and universally implementable </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> A1.2. The protocol allows for an authentication and authorization where necessary </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> A2. Metadata are accessible, even when the data are no longer available </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> I1. (Meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> I2. (Meta)data use vocabularies that follow FAIR principles </td> <td> 1 </td> <td> 1 </td> </tr> <tr> <td> I3. (Meta)data include qualified references to other (meta)data </td> <td> 2 </td> <td> 1 </td> </tr> <tr> <td> R1.1 (Meta)data are released with a clear and accessible data usage license </td> <td> 2 </td> <td> 1 </td> </tr> <tr> <td> R1.2. (Meta)data are associated with detailed provenance </td> <td> 1 </td> <td> 1 </td> </tr> <tr> <td> R1.3. (Meta)data meet domain-relevant community standards </td> <td> 1 </td> <td> 2 </td> </tr> </table> **Table 1.** Compliance with the FAIR principles [9] by AOP-Wiki and WikiPathways. Score meanings: 1 = partial compliance, 2 = compliance

#### 2.5.5 ToxCast

All data produced by the U.S. EPA, including ToxCast, is by default in the public domain (U.S. Public Domain license) and is not subject to domestic copyright protection under 17 U.S.C. § 105.
This allows anyone to reproduce the work in print or digital form, create derivative works, perform the work publicly, display the work and distribute copies or digitally transfer the work to the public by sale or other transfer of ownership, or by rental, lease, or lending. The release currently used in OpenRiskNet is a REST API developed by Douglas Connect on top of the official summary data release made available as downloadable CSV files. These were restructured to fit the OpenRiskNet concept and to make them fit for semantic annotation.

## 3\. ALLOCATION OF RESOURCES

<table> <tr> <th> **Explain the allocation of resources, addressing the following issues**: * Estimate the costs for making your data FAIR. Describe how you intend to cover these costs * Clearly identify responsibilities for data management in your project * Describe costs and potential value of long term preservation </th> </tr> </table> Making data FAIR is a central task of the integration of data sources into the OpenRiskNet infrastructure. Many of the sources considered for integration already follow the FAIR principles and no additional costs are foreseen other than the effort needed to make the sources OpenRiskNet-compliant. For data sources not at a sufficiently high level, the OpenRiskNet partners owning the data or responsible for its integration cover the costs of the integration from their allocated budget. In the case of third-party data sources, the integration will be performed in collaboration with the associated partners, partly financially supported through the Implementation Challenge, and an OpenRiskNet partner designated as main contact point of the associated partner.

## 4\.
DATA SECURITY

<table> <tr> <th> _●_ </th> <th> Address data recovery as well as secure storage and transfer of sensitive data </th> </tr> </table> The OpenRiskNet approach to data recovery, secure storage and transfer of sensitive data includes: * Responsible and secure management processes for personal data, including anonymisation, encryption, logging of data usage as well as data deletion after usage, are implemented; * To ensure that all ethical guidelines are followed by all OpenRiskNet partners and associated partners and implemented in every step of the infrastructure, a privacy-by-design approach is followed in the project, documented in the OpenRiskNet privacy policy (see below) and controlled by an independent Data Protection Officer; * The most sensible way to protect sensitive data offered by the OpenRiskNet infrastructure is to bring the virtual environment and all data sources behind a company’s firewall by in-house deployment.

## 5\. ETHICAL ASPECTS

<table> <tr> <th> _●_ </th> <th> To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former </th> </tr> </table> The ethical aspects are covered by the Protection of Personal Data (POPD) requirements and submitted deliverables: * D6.1 - NEC - Requirement No. 3: Statements regarding the full compliance of third-country participants with the H2020 rules * D6.2 - POPD - Requirement No. 4: Copies of the previous ethical approvals of data to be collected and used in the project, approvals which must also allow the possible secondary use of data * D6.3 - POPD - Requirement No. 5: Consent forms and information sheets before interviews and surveys and any other data and personal data collection activity in the project * D6.4 - POPD - Requirement No. 6: A statement by third-party testers that they will comply with the applicable EU law and H2020 rules * D6.5 - POPD - Requirement No.
7: Data Protection Officer Report 1 And the two future deliverables: * D6.6 - POPD - Requirement No. 8: Data Protection Officer Report 2 * D6.7 - POPD - Requirement No. 9: Data Protection Officer Report 3 Based on the ethics report received as part of the evaluation of the proposal and the first report of the Data Protection Officer, summarising the relevant regulations and legislation for the different data types and test models, OpenRiskNet has worked together with an external expert to develop elements of an ethics review framework and an ethics requirements checklist, which have to be considered in the project and the infrastructure implementation before integrating a data source. One important component for working on the project case studies and for the future usage of the e-infrastructure is access to existing data provided by other European projects in this field or from international consortia. The OpenRiskNet platform is based on a data management system that will make existing data sources, mainly from in vitro human and animal and in vivo animal experiments, available to all stakeholders in a harmonised way. In this context, one important additional task of the DMP, becoming even more relevant with the new EU General Data Protection Regulation (GDPR) in effect since May 2018, is to support and give recommendations on achieving the highest impact without jeopardising the ethical integrity of the OpenRiskNet infrastructure. To fulfil the requirements from the ethics review, a step-by-step decision process was developed addressing how important legacy data sources need to be handled by the project. On top of the workflows provided in the first review of the Data Protection Officer, a hierarchical data source analysis and evaluation of the ethical implications for OpenRiskNet was performed.
Different categories of data sources have been analysed, including references to the legislation in place and the conditions for primary, secondary and tertiary data collection and use. Special measures that need to be considered for some specific cases are also included. Data sources considered: I. Human Biomaterial Use, Collection or Storage (Donors) II. Primary Results Data Processing (Clinics) III. a) Compound Storage/Processing (Commercial cell line data providers), and b) Secondary Results Processing (Experimentalists) IV. Tertiary Use - Storage/Provider - No Processing (Database) Based on the assessment of each of these categories, a checklist is proposed for each data source to be included and/or used in the OpenRiskNet platform.
# SUMMARY

This report is based on the initial data management plan (DMP) (Deliverable 3.1) [1] and includes the final DMP for the OpenRiskNet e-infrastructure project. The current document covers the aspects of the OpenRiskNet data management based on the FAIR (findable, accessible, interoperable and reusable) guidelines, ethics considerations for re-sharing of public datasets, and details on the data sources made available and sustained after the project as OpenRiskNet data sources. These are: alternative access to US EPA ToxCast/Tox21 data; intensities and fold changes obtained by processing the TG-GATEs and DrugMatrix transcriptomics data; BridgeDb; semantically annotated versions of WikiPathways, AOP-Wiki and AOP-DB in RDF format; ToxicoDB, including the dataset for the TGX case study; the nano Daphnia dataset; ToxPlanet; SCAIview; and, as a specific feature of OpenRiskNet, the library of risk assessment workflows in the form of Jupyter notebooks. Sustainability of OpenRiskNet developments and achievements other than data will be covered in the separate sustainability plan being part of the second Periodic Report.

# INTRODUCTION

The European Commission is enforcing the openness of data produced in all projects under Horizon 2020 based on the Open Research Data Pilot (ORD Pilot). The ORD Pilot aims to improve and maximise access to and re-use of research data and takes into account the need to balance openness and protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns, security, as well as data management and preservation questions [2]. Open data is data that is free to access, re-use, repurpose, and redistribute. The Open Research Data Pilot aims to make the research data generated by selected Horizon 2020 projects accessible with as few restrictions as possible, while at the same time protecting sensitive data from inappropriate access.
Projects starting from January 2017 are by default part of the Open Data Pilot, and research infrastructures (including e-infrastructures) are required to participate in the ORD Pilot. Since one of the main aims of OpenRiskNet is to allow for a simpler, more harmonised access to public, open data sources and workflows and to enrich these with semantic annotation to improve their interoperability with each other and with predictive toxicology and risk assessment software, OpenRiskNet is fully supporting the ORD Pilot, developing best-practice approaches and trying to act as a role model for data management and sharing [2]. To help optimise the potential for future sharing and re-use of data, the OpenRiskNet Data Management Plan (DMP) helped the partners to consider any problems or challenges that may be encountered and helped them to identify ways to overcome these. Throughout the project, the DMP was handled as a “living” document and is available here in its final version as of November 2019. It outlines how the research data collected or generated, including redistribution of existing data sources as well as results from the in silico investigations performed as part of the case studies, were handled by the OpenRiskNet consortium and associated partners. It follows the Guidelines on FAIR Data Management in Horizon 2020 [2] and the OpenAIRE guidelines [3], and is based around the resources available to the project partners in a realistic way, taking the current knowledge into account. The activities of all OpenRiskNet partners to keep the DMP up to date followed an online, distributed approach as outlined in the above-mentioned guidelines for creating an online DMP (see Figure 1). In this report, the general concepts for the description of datasets, as well as the data sharing and archiving approaches adopted in the DMP, are described first. This is then followed by the actual DMP in its final version as of the time of this writing.
It covers the general aspects of the OpenRiskNet data management but also specific and clearly defined measures for the data sources integrated by the project into the OpenRiskNet infrastructure, specifically also including the library of toxicology and risk assessment workflows.

**Figure 1.** DMPonline tool used to guide the creation of the OpenRiskNet plan ( _https://dmponline.dcc.ac.uk/plans/16261_ )

## DATA SET DESCRIPTION

This section gives the general concepts of what is considered data in OpenRiskNet and a listing of what kind of data and information the project collected, made available for redistribution and sharing or generated as part of the case studies (in silico only), and to whom they might be useful later. More information and specific details are given in the DMP below. Data refers to: * Data generated in in vivo, in vitro, in chemico and in silico experiments, the first three coming from third parties including other projects and associated partners, broadly related to toxicology and risk assessment, in the form of raw, processed and summary data as well as metadata describing the type of data and how it was produced (protocols and method descriptions for the experimental, processing and analysis procedures, including the computational workflows); * More specifically, data and metadata which form part of an OpenRiskNet service or were needed to execute the case studies and validate results in scientific publications, and * Other curated and/or raw data and metadata that may be required for validation purposes or with reuse value. The metadata provided with the datasets allows questions to be answered that enable the data to be found and understood, according to the particular standards applied. Such questions include but are not limited to: * What is the data about? * Who created it and why? * In what forms is it available? * Which standards were applied?
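A minimal machine-readable metadata record answering the questions above could look as follows; the field names are illustrative, not a prescribed OpenRiskNet schema.

```python
# Illustrative dataset metadata record; field names are hypothetical and
# would be mapped onto the project's actual metadata standard (e.g. ISA).
dataset_metadata = {
    "title": "Processed transcriptomics fold changes (example study)",   # what is it about?
    "creator": "Example consortium partner",                             # who created it?
    "purpose": "Generated for an OpenRiskNet case study",                # and why?
    "formats": ["text/tab-separated-values", "application/json"],        # available forms
    "standards": ["ISA-Tab study metadata"],                             # standards applied
}
```

Keeping each answer as an explicit field means the same record can be validated automatically and rendered for human readers.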
Finally, the **metadata**, **documentation** and **standards** were selected and are now provided to make the data FAIR (findable, accessible, interoperable and re-usable) as well as to guide future projects. They not only satisfy technical requirements like global, persistent identifiers, clear access protocols (data application programming interfaces (APIs)) and licenses, but are an attempt at harmonizing and improving the scientific interoperability of the data through semantic annotation, allowing combination and enrichment of datasets using linked-data approaches (combined OpenAPI and JSON-LD description of data APIs). Data integrated can be grouped into the following areas: * Existing toxicology, chemical properties and bioassay databases for redistribution; * Existing omics databases for redistribution; * Existing knowledge bases for redistribution and information extracted by data mining; * Document libraries; * Computational workflows for redistribution and adaptation to specific scientific questions; and * Intermediate or final results of in silico studies performed as part of the case studies, as part of the Jupyter notebooks.

## DATA SHARING

According to the ORD Pilot programme, the resulting data should be archived as Open Access by default as much as possible. Most data handled in OpenRiskNet is provided by international publicly funded projects, not-for-profit consortia or governmental and regulatory agencies. These data sources are already available under open-data licenses and will be redistributed by OpenRiskNet in a restructured and enriched form under the same license. Newly generated data are results from the improved processing, analysis and modelling workflows developed as examples in the case studies, including the workflows themselves, and OpenRiskNet makes these publicly available either as part of the publicly shared workflows or, if they have value outside the case studies, as a separate information and knowledge source.
Working together with associated partners, especially from the commercial sector (service providers and end users from SMEs and larger industry), has put some restrictions on the sharing of data generated in these collaborations. The legitimate reasons for not sharing resulting data are explained in the DMP for the one case where they had to be applied (the ToxPlanet chemical hazard and toxicology literature repository). OpenRiskNet was fully committed to protecting personal data and IPR agreements and to responsible data sharing, and has taken all steps reasonably necessary to ensure that data is treated securely and in accordance with the OpenRiskNet privacy policy (see the section below on the Privacy Policy). No personal data was transferred to an organization or a country unless there were adequate controls in place, including the security of data and personal information. Complementing these general data sharing policies, the DMP below describes any ethical or legal issue that has an impact on data sharing of specific data sources. Since in many cases the data production was not under the control of the OpenRiskNet partners and the data was only redistributed by them, the obligation to guarantee that the data is generated from high-quality, ethical research and can be shared under an open license is in the hands of the original data provider or the primary data distributor. This includes the obligation to operate in conformity with the requirements of their own institution, and to fulfil all necessary national and international regulatory and ethical requirements.
OpenRiskNet consortium members were working, and will continue to work, together with the original data providers as well as with ethical experts on producing workflows and a checklist (see the attachments for a draft version) for the ethics evaluation, on documenting the measures adopted during the data generation process (ethical approval of the in vivo and in vitro experiments by the relevant authorities), and on protecting personal data, e.g. by anonymisation of data before sharing. Whenever provided by the data provider and when technically feasible, this licensing, legal and ethics information was made available to the OpenRiskNet user as part of the data service description.

## ARCHIVING AND PRESERVATION

To ensure that publicly funded research outputs can have a positive impact on future research, on policy development, and on societal change, it is also important to assure the availability of data for a long period beyond the lifetime of a project. This refers not only to storage in a research data repository, but also to the continued usability of the data. One of the main goals of the infrastructure created by the OpenRiskNet project was to harmonise data, make it interoperable and sustainable, and in some cases even enable data sharing or replace existing data sharing solutions. Therefore, the project had a special obligation to preserve not only data produced in the project but also data from other projects redistributed by OpenRiskNet, together with software or any code produced to perform specific analyses or to render the data, as well as to be clear about any proprietary or open-source tools that will be needed to validate and use the preserved data.
OpenRiskNet was building on software engineering and infrastructure components developed, supported and adopted by a large community, guaranteeing, on the one hand, stability and sustainability of the data sharing, accessing and processing solutions provided, even in the relatively quickly changing field of microservice architectures and deployments. On the other hand, the containerisation approach adopted by OpenRiskNet allows for the storage of the data and software in the version used during the execution of the analysis and modelling workflows, allowing for complete and exact repeatability using the same code, and improved reproducibility due to better documentation. It has to be noted here that many of the data sources are only redistributed by OpenRiskNet. The primary data providers for, e.g., ToxicoDB, ToxCast/Tox21 and TG-GATEs are international research groups or agencies. Raw data archiving and preservation have to be guaranteed by these institutions. However, OpenRiskNet, and more specifically the OpenRiskNet partner responsible for the integration into the OpenRiskNet infrastructure, is in charge of maintaining and updating the alternative method to access the data (the OpenRiskNet-compliant data API), guaranteeing that the data available within OpenRiskNet is on the same technical and curation level and at the same version as in the primary source, and sustaining the solution beyond the OpenRiskNet project. The same is true for data sources where OpenRiskNet also takes the responsibility of hosting the data and thus becomes the primary data source. In the latter case, archiving and preservation of the data source containers is of utmost importance, since otherwise there is the danger that the data is lost completely. Short-term sustainability (2 years) is secured by the reference infrastructure running at Johannes-Gutenberg Universität Mainz.
Negotiations with the Birmingham Environment for Academic Research (BEAR) are underway to provide mid- to long-term archiving and preservation facilities for containerised data and software services for at least the next 5 years.

# DATA MANAGEMENT PLAN (DMP)

This data management plan addresses all data-related measures adopted by the OpenRiskNet project, including problems or challenges that were encountered by partners during the execution of the project. It consists of general guidelines and project-internal rules and regulations dealing with the type of data collected, data sharing following the FAIR principles, hard- and software resources, as well as data security, privacy and ethics. Additionally, more details on all these aspects are provided for specific data sources whenever necessary.

## 1\. DATA SUMMARY

<table> <tr> <th> **Summary of the data addressing the following issues:** * State the purpose of the data collection/generation * Explain the relation to the objectives of the project * Specify the types and formats of data generated/collected * Specify if existing data is being re-used (if any) * Specify the origin of the data * State the expected size of the data (if known) * Outline the data utility: to whom will it be useful </th> </tr> </table>

### 1.1 Purpose of the data collection

The main purpose of collecting and using data and metadata in the OpenRiskNet project was to fulfil its main objectives of providing and improving solutions for data availability to the toxicology and risk assessment scientific community, for data quality, interoperability, standardisation and sustainability, and of overcoming some of the data-related issues, e.g.:

* Fragmentation of data across different databases without common ways to query the data;
* Low quality, interpretability and reusability due to insufficient data curation;
* Poor explanation and insufficient details on experimental design and protocols applied;
* Data available in different formats and with
different annotations.

Another goal was to generate guidelines and templates for data exchange, semantic annotations and harmonised use of ontologies, as well as to develop criteria and solutions for controlling the quality of a dataset or in silico tool, for quantifying the uncertainty of predictive models, and for improving the repeatability and reproducibility of processing, analysis and modelling workflows.

### 1.2 Relation to the objectives of the project

The OpenRiskNet project aimed to establish an e-infrastructure and service functions providing a centralised and standardised set of data and computing resources, accompanied by standardised operating procedures and guidance:

* Provision of quality sources of data to facilitate a more accurate evaluation of toxicity;
* A data infrastructure offering a centralised repository for data created during other research programmes, including the import of relevant in vitro, in vivo and human research data from other sources;
* Well-designed data import facilities to support ongoing data collection according to quality guidance;
* Use and further development of data annotation and exchange standards for describing toxicity data based on application programming interfaces, in order to reduce errors and enable data integration from different laboratories, including data sources outside the programme;
* Integration of regulatory reporting requirements with respect to metadata and documentation details and completeness.
The OpenRiskNet project also aimed to develop and optimise computational models and automated, reliable analysis workflows in order to increase the mechanistic understanding of toxicity:

* Models permitting identification of mechanistic links between omics data at different levels of functional organisation;
* Models helping to advance the understanding of the relationship between toxicity, architecture, function and risk;
* Computational sensitivity analysis components aiding in identifying the most sensitive parameters relevant to toxicity and guiding further data acquisition and experiments towards increased chemical safety.

The data sources integrated during the project are highly relevant to the predictive toxicology and risk assessment community and are therefore used to showcase and evaluate the concepts and solutions provided by OpenRiskNet and how these address the aims just mentioned. Additionally, they were used in the case studies to provide example workflows on how to apply and combine the different tools for effective problem solving for the different aspects of risk assessment.

### 1.3 Types and formats of data

OpenRiskNet was structured around the concept of semantically annotated application programming interfaces, which can be used to search and access data from OpenRiskNet-compliant data sources. As serialised exchange format, JSON or its semantically annotated form JSON-LD is recommended and enforced whenever possible. These formats cover mainly the metadata associated with the data and, in the case of small numbers of readouts (experimental toxicology endpoints) per sample, also the data itself. Especially for omics data or imaging techniques, these files will be accompanied by data in standard file formats to keep compatibility and interoperability with tools developed in these areas, such as gene- and pathway-enrichment approaches or image recognition software, respectively.
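The role of the JSON-LD `@context` can be illustrated with a minimal sketch. The vocabulary terms, identifiers and field names below are hypothetical examples chosen for illustration, not the actual OpenRiskNet annotation scheme:

```python
import json

# Minimal sketch of a semantically annotated assay record as JSON-LD.
# All terms and identifiers here are hypothetical, not the real
# OpenRiskNet vocabulary; the point is that the @context maps plain
# JSON keys to globally resolvable linked-data terms.
record = {
    "@context": {
        "schema": "https://schema.org/",
        "endpoint": "schema:name",
        "value": "schema:value",
        "compound": "schema:identifier",
    },
    "@id": "https://example.org/assay/0001",  # placeholder identifier
    "@type": "schema:Dataset",
    "compound": "CHEMBL25",
    "endpoint": "cell viability",
    "value": 0.82,
}

serialised = json.dumps(record, indent=2)

# Plain JSON tooling can still consume the document unchanged; only
# linked-data consumers need to interpret the @context.
roundtrip = json.loads(serialised)
print(roundtrip["endpoint"], roundtrip["value"])
```

Because the annotations ride along as ordinary keys, the same payload serves both a REST client that ignores the `@context` and a semantic-web client that expands the keys to full IRIs.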
Working together with other large projects (EU-ToxRisk and NanoCommons), OpenRiskNet defined the amount and content of metadata that has to be provided for each experimental assay or computational investigation, and created data and protocol/test method description formats where standards did not yet exist, providing the means for future additions and adaptations based on flexible data schema specifications to cover scientific advances. However, only the strict usage of ontologies in these data and metadata descriptions can guarantee that the information is easily understood by the user or automatically transferred between services.

### 1.4 Reuse of data

Since OpenRiskNet aimed at improving the reusability of data and software tools, it was a major goal of the project to provide all input data, independent of the original source, as well as the results from the processing, analysis and modelling workflows, under an open-data license and to offer them in an easy way for reuse by others. On the one hand, making sharing, accessing and reusing of data easier was achieved by the data solutions provided by OpenRiskNet and the integration of the reference data sources. On the other hand, results from the in silico investigations are considered equally valuable for sharing and reuse, especially with the goal of improving the evaluability, repeatability and reproducibility of these computational studies. Full documentation of the workflows, including intermediate results, and permanent storage of the final outcomes, richly annotated with metadata describing the procedures, was therefore organised and promoted for adoption by other projects, using the capacities of workflow management tools like Jupyter to store not only the computational workflows but also the produced results.
### 1.5 Origin of the data

As mentioned before, the main data integrated, used and provided for easy reuse by OpenRiskNet comes from other publicly funded research and infrastructure projects or institutions and is already in the public domain or will be made publicly available soon. However, users might also want to access commercial data services provided by associated partners, or use their in-house data as part of the infrastructure and partly share it with selected users under a specific license. These considerations lead to three different classes grouping the origins of the data:

* Data and models owned and provided by OpenRiskNet consortium members and associated partners as part of the project work, under an open-data license;
* Open-source data and models provided under the license mentioned by the owners;
* Data from third parties, including associated partners and commercial services of OpenRiskNet partners, not yet available in existing open databases, provided under the conditions specified by the data owner and included in a formal agreement.

For all these data sources, the original license of data usage has to be considered and applied (in the original or a more restricted form) to the version integrated in OpenRiskNet environments. To prevent unauthorised data access even in virtual environments shared by multiple users, like the reference environment, an authentication and authorisation service was integrated into the OpenRiskNet infrastructure, which also handles the license management. Commercial software, or free software requiring a registration, is handled in the same way. In cases where even more protection was needed, the data services continued to be operated by the data provider, and only restricted but harmonised access using the OpenRiskNet authentication and authorisation service was integrated into the infrastructure.
In this way, the data provider keeps complete control over the data and can shield it against attacks aimed at obtaining unauthorised access, which would be easier if the containerised data were deployed into virtual environments on local machines.

### 1.6 Expected size of the data

The idea of the OpenRiskNet infrastructure was not to combine data from different sources into one data warehouse, but to access the data from its original source and use the interoperability layer added to the data services to harmonise it. In this way, no additional capacity for storage of the original, mainly raw, data was needed. However, two aims of the OpenRiskNet project led to additional requirements on data storage:

1. Some of the data sources considered for integration are not yet available in openly accessible databases, cannot be accessed via application programming interfaces from their original sources, or do not comply with the FAIR principles.
2. Data sources were made available in a form suitable for in-house deployment.

Even if the user's system administrator setting up the in-house virtual environment (VE) is responsible for providing the required resources for such deployments, the data sources have to be containerised and provided to the users for download via the OpenRiskNet service catalogue. Sizes for all data services are given in section 1.8 below, together with other more specific details. This information can be used by OpenRiskNet users to assess the needed storage space for the containers and to give guidance on the needed computational resources for the VE.

### 1.7 Utility of data and models

OpenRiskNet solutions make data available to its main stakeholders (researchers, risk assessors and regulators) in an easily accessible, standardised and harmonised way, in order to be able to base conclusions and recommendations about the safety of a chemical, drug, cosmetic ingredient or nanomaterial on all the available evidence.
The same principles are applied to the data processing, analysis and modelling tools involved in risk assessment. Access to the data infrastructure part of OpenRiskNet by academia, industry, risk assessors and regulators has the merit of providing a wide spectrum of data, with which users can perform parts of their research and development activities, and lowers the barriers to real innovation resulting in new products, processes and services. Close cooperation with the regulatory agencies is key to pushing the regulatory acceptance of the integrated tools and workflows. Possible beneficiaries of the data, computational models and e-infrastructure are:

* Software developers in academia and industry developing advanced risk assessment approaches based on the data and providing these to risk assessment experts;
* Industry, represented by chemicals, pharma, food, cosmetics or other consumer products companies, which are required to use all available information, to address the ‘3Rs’ principles and to report on alternative methods used (including in silico);
* Regulatory agencies (e.g. ECHA, EMA, EFSA);
* SMEs, as they frequently do not have in-house tools and knowledge resources for the regulatory risk assessment requirements;
* The R&D community, as the translation of these methods to industrial and regulatory science will result in a deeper understanding of biological responses to perturbations, supporting e.g. better designed and safer drugs and clinical practice;
* Consumers, as the OpenRiskNet infrastructure supports the integration and development of apps that can be used by consumers on their mobile phones, supporting everyday activities such as obtaining knowledge on ingredients in the products they are purchasing or using.
### 1.8 Specific information on individual shared data sources

In this section, specific information and requirements of individual data sources provided and sustained by OpenRiskNet, with respect to their purpose, origin, relationship to the project, data type and format, size and potential users, are summarised. Section 2.5 below fulfils the same purpose for issues on FAIR data sharing. These are meant as additions or clarifications to the general descriptions, relevant only for the specific dataset/database. Points completely covered by the general remarks are not repeated here, and thus some of the subsections above will not appear in the following descriptions. The OpenRiskNet-sustained data sources are:

* Library of OpenRiskNet computational workflows
* BridgeDb
* WikiPathways
* AOP-Wiki
* AOP-DB
* ToxCast/Tox21
* TG-GATEs
* DrugMatrix
* Nano Daphnia dataset
* ToxicoDB
* ToxPlanet
* SCAIView annotated document corpora

#### **1.8.1 Repository of OpenRiskNet computational workflows**

OpenRiskNet has created a large number of computational workflows in relation to the work performed in the case studies and to demonstrate the functionality of single OpenRiskNet tools and how they can be combined to address complex risk assessment tasks, benefiting from the improved harmonisation and interoperability. These form an important part of the achievements of OpenRiskNet and need to be sustained as an information and training resource.

1.8.1.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

As just described, the workflows form a central piece of the OpenRiskNet activities and achievements. They not only document the performed work but are also a way to offer the workflows, composed of the computational procedure, the intermediate data and the results, for reuse by others.
Users of this resource can rerun the workflows to show repeatability and, more importantly, modify them to specifically address their own risk assessment questions.

1.8.1.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The workflows are in the specific formats used to exchange Jupyter, Squonk and Nextflow workflows. Since they are highly related to the service development and integration, they are stored in a way in which they can be linked to other development resources and disseminated to computational toxicologists and tool developers. GitHub serves exactly this purpose. An OpenRiskNet organisation has been set up in GitHub covering multiple repositories designated for the development, support and dissemination activities of the project. Besides source code control, issue tracking and developer documentation, one of the repositories is dedicated to sharing and disseminating workflows, mainly in the form of Jupyter or Squonk notebooks, and can be accessed at https://github.com/OpenRiskNet/notebooks. All OpenRiskNet workflows have been uploaded there and are publicly available under open-source licences to allow reuse, modification and any publication of derived work. Sustainability of the content is guaranteed by the GitHub environment.

1.8.1.3 Additional details to 1.6 Expected size of the data

The overall size of the workflows is well below 1GB and is covered by the standard offering of GitHub.

1.8.1.4 Additional details to 1.7 Utility of data and models

Computational workflows offer the optimal way to present the functionality of OpenRiskNet services and the e-infrastructure as a whole. They help users get started easily by allowing them to rerun the predefined workflows (since all needed data and service access routes are defined) and adapt them to the questions at hand.
Examples for specific risk assessment tasks are already available, and the coverage of the full risk assessment framework will be continuously improved by additional workflows provided by the OpenRiskNet consortium members after the end of the project, and by opening up the workflows repository (read and write access) to all users so that they can provide and share their workflows and even let others comment on and improve them.

#### **1.8.2 BridgeDb**

The BridgeDb project was set up to provide both identifier mapping data and a general framework that provides an API to access identifier mapping data [4]. BridgeDb is used in smaller and larger projects, the latter including WikiPathways, Cytoscape and Open PHACTS [5]. It is available in various forms, including an Open API web service, a Java library, a Docker image, and a BioConductor package. The platform supports two kinds of identifiers. The first are simple identifier-data source combinations. The second are Internationalised Resource Identifiers, for use in semantic web technologies.

1.8.2.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

Data interoperability requires identifier mappings. The mapping data is collected by the BridgeDb project and reshared in OpenRiskNet (possible because of the open licenses). The availability of identifier mappings allows simplification of workflows. The data is part of the BridgeDb Docker services, and is either preloaded (as in the current OpenRiskNet services) or loaded when the service is fired up (an approach currently not actively used in OpenRiskNet).

1.8.2.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

BridgeDb identifier mapping databases are commonly available in two formats: Derby data files and link sets. Both formats have been developed for different use cases. BridgeDb identifier mapping databases are available under open licenses or CC-Zero.
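The kind of lookup such mapping data supports can be sketched with a toy in-memory table. The table itself is a hypothetical stand-in; a real deployment would query the BridgeDb web service or a Derby mapping file, and the system codes and gene identifiers below are illustrative examples only:

```python
# Toy sketch of identifier mapping as BridgeDb provides it: translate an
# identifier from one data source ("system") into its equivalents in
# others. The in-memory dictionary is a hypothetical stand-in for the
# Derby files / web service of a real BridgeDb deployment.
MAPPINGS = {
    ("En", "ENSG00000012048"): {   # example Ensembl gene identifier
        "H": ["BRCA1"],            # HGNC symbol
        "L": ["672"],              # Entrez (NCBI) gene identifier
    },
}

def map_identifier(system_code: str, identifier: str, target_system: str) -> list:
    """Return identifiers in target_system equivalent to the given one."""
    return MAPPINGS.get((system_code, identifier), {}).get(target_system, [])

print(map_identifier("En", "ENSG00000012048", "L"))
```

A workflow can then stay agnostic about which identifier scheme an upstream dataset uses, translating on demand instead of hard-coding one scheme.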
Identifier mapping is essential to data set interoperability. Existing identifier mapping databases suffice for the current needs, but mapping databases are expected to be needed for other entities, like nanomaterials and AOP entities (e.g. stressors, Key Events, outcomes).

1.8.2.3 Additional details to 1.6 Expected size of the data

Identifier mapping databases are released under the data management plan of the BridgeDb project. Data is shared in different ways depending on the type of entity. Metabolite identifier mapping databases are released on Figshare, and gene-variant databases are planned to be released on Figshare or Zenodo. The gene/protein and interaction mapping databases are currently still released using a custom approach, via a download server, and are not actively archived yet. The sizes of these databases vary, but are typically in the order of 500MB to 1GB. Exceptions are the gene-variant databases, which are much larger. All sizes are still well within the scope of what archival websites allow.

1.8.2.4 Additional details to 1.7 Utility of data and models

Identifier mapping is essential to data set interoperability, since multiple competing identifier systems are available for labeling e.g. chemical compounds, genes and pathways. Additionally, access to these tools from other services, e.g. for cross-database searches and for data curation and enrichment, is facilitated by the OpenRiskNet integration.

#### **1.8.3 WikiPathways**

WikiPathways is a molecular pathway database, established by the WikiPathways team, a collaboration between the Department of Bioinformatics of Maastricht University and the Gladstone Institute, San Francisco.
Its purpose is to facilitate the contribution and maintenance of pathway information by the biology community through the open, collaborative platform of WikiPathways.

1.8.3.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The contents of WikiPathways comprise molecular pathways, consisting of nodes annotated for genes, proteins, and metabolites, which can be utilised for omics data analysis through pathway analysis in PathVisio. The WikiPathways database captures biological knowledge in pathway diagrams, supported by scientific literature. Because molecular pathways can describe processes in any field of biology, they are relevant for toxicological risk assessment workflows. Pathways describe the connections between biological entities and show how a disturbance by a chemical or nanomaterial could cause downstream effects.

1.8.3.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The molecular pathways in WikiPathways are developed and curated by researchers, and are based on scientific literature. Pathways are available in multiple formats, including but not limited to the original Graphical Pathway Markup Language (GPML), Resource Description Framework (RDF), gene lists (GMT format), and nanopublications. The CC-Zero license puts no restrictions on reuse.

1.8.3.3 Additional details to 1.6 Expected size of the data

The complete collection of GPML files is less than 100MB.

1.8.3.4 Additional details to 1.7 Utility of data and models

Biological pathways are used for data analysis, biological interpretation of omics data, and data integration.

#### **1.8.4 AOP-Wiki**

The AOP-Wiki is the primary repository of qualitative, mechanistic Adverse Outcome Pathway (AOP) knowledge.
It was developed by the Organisation for Economic Co-operation and Development (OECD), representing a collaboration between the European Commission DG Joint Research Centre and the US Environmental Protection Agency. The AOP-Wiki is part of the AOP Knowledge Base, which was launched by the OECD to allow everyone to build AOPs.

1.8.4.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The AOP-Wiki data comprises mechanistic toxicological knowledge relevant for risk assessment. While most of the knowledge is present as free-text, literature-supported descriptions, essential aspects, such as biological processes, objects, cell types, and stressor chemicals that cause a disturbance, are annotated with ontologies and chemical identifiers. Therefore, the AOP-Wiki serves as a knowledge base for toxicological effects related to a variety of chemicals, which summarises the relevant literature.

1.8.4.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

Knowledge in the AOP-Wiki is stored partly as free text and partly as ontology annotations and chemical identifiers. The data originates from the AOP-Wiki database, and is supported by scientific literature that is gathered and written by researchers. The contents of the AOP-Wiki are reviewed by the OECD Extended Advisory Group on Molecular Screening and Toxicogenomics (EAGMST). Nightly exports of the AOP-Wiki contents are available, but only quarterly downloads are stored and maintained permanently on the Wiki, which allows citation when the information is reused. For the OpenRiskNet service, the AOP-Wiki has been transformed into Turtle syntax, which describes the data in RDF, providing semantic annotations of the database and its contents, and includes persistent identifiers to improve interoperability with external tools and resources.
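A query against such a SPARQL endpoint can be sketched as follows. The endpoint URL is a placeholder, and the `aopo:` prefix and `dc:title` predicate are assumptions for illustration rather than the exact terms of the AOP-Wiki RDF schema:

```python
from urllib.parse import urlencode

# Hypothetical SPARQL query listing Key Events and their titles. The
# aopo: prefix and dc:title predicate are illustrative assumptions, not
# necessarily the exact vocabulary used in the AOP-Wiki RDF.
QUERY = """
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX aopo: <http://aopkb.org/aop_ontology#>
SELECT ?ke ?title WHERE {
  ?ke a aopo:KeyEvent ;
      dc:title ?title .
} LIMIT 10
"""

# A Virtuoso endpoint accepts the query as an HTTP GET parameter; the
# URL below is a placeholder, not the actual OpenRiskNet deployment.
ENDPOINT = "https://aopwiki.example.org/sparql"
request_url = ENDPOINT + "?" + urlencode(
    {"query": QUERY, "format": "application/sparql-results+json"}
)
print(request_url[:80])
```

Requesting JSON-formatted results keeps the response easy to consume from notebooks and automated workflows without an RDF toolkit.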
1.8.4.3 Additional details to 1.6 Expected size of the data

While the contents of the AOP-Wiki are increasing rapidly on a daily basis, the latest permanent download of the data (October 2019) does not exceed 15MB. The RDF that is exposed in the Virtuoso SPARQL endpoint on the OpenRiskNet e-infrastructure has a size of approximately 6MB.

1.8.4.4 Additional details to 1.7 Utility of data and models

In order to perform risk assessment, one has to gather all relevant knowledge about the mechanistic effects of a compound that requires assessment. The AOP-Wiki allows for reusing mechanistic knowledge of toxicological events upon disturbance by a stressor, often a chemical. As the AOPs are developed in a way that separates knowledge into biological events (called Key Events) and are chemical-agnostic, their major purpose is the reusability of toxicological knowledge. Therefore, the contents of the AOP-Wiki can be relevant for every risk assessment workflow, providing mechanistic information about biological processes and linking these together.

#### **1.8.5 AOP-DB**

The AOP-DB (Adverse Outcome Pathway Database) serves to link molecular targets identified as molecular initiating events (MIEs) and key events (KEs) in the AOP-Wiki (https://aopwiki.org) to publicly available data (e.g. gene-protein, pathway, species orthology, chemical, disease), in addition to ToxCast assay information.

1.8.5.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The AOP-DB service provides information related to AOPs that extends the existing AOP-Wiki service. Currently, the AOP-DB SPARQL endpoint contains a variety of resource types, all of which are linked to genes that are present in the AOP-Wiki. It links those genes with diseases, ToxCast assays, and protein-protein interactions.
The integration of these different types of information from several databases allows for convenient extensions of the knowledge captured in AOPs.

1.8.5.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The AOP-DB is a SQL database that integrates many types of data from several databases and repositories. The AOP-DB RDF developed in the OpenRiskNet project contains only a small section of the complete AOP-DB. The data captured in the AOP-DB RDF originates from a variety of public data sources, including NCBI genes, DisGeNET, the Comparative Toxicogenomics Database (CTD), and EPA ToxCast data available through the EPA Chemistry Dashboard (https://comptox.epa.gov/dashboard).

1.8.5.3 Additional details to 1.6 Expected size of the data

As of November 2019, the data loaded into the Virtuoso SPARQL endpoint on the OpenRiskNet e-infrastructure has a size of approximately 320MB.

1.8.5.4 Additional details to 1.7 Utility of data and models

Similarly to the AOP-Wiki, the AOP-DB provides knowledge related to AOPs, which can be used to inform risk assessments. However, it can also generate hypotheses by creating links between chemical interactions and adverse effects through the integration of the various resources that it captures. Therefore, the AOP-DB serves multiple purposes, for several types of user communities.

#### **1.8.6 ToxCast/Tox21**

The United States Environmental Protection Agency (US EPA) Toxicity Forecaster (ToxCast, https://www.epa.gov/chemical-research/toxicity-forecasting) has generated toxicity screening data on thousands of chemicals in commerce and of interest to the agency and the general public. The project also uses computational approaches to prioritise and rank chemicals for risk assessments and regulatory decision making.
Toxicology in the 21st Century (Tox21) was created to continue supporting regulations by developing better toxicity assessment methods and publicly sharing the generated data. It is a federal collaboration among the US EPA, the NIH, including the National Center for Advancing Translational Sciences and the National Toxicology Program at the National Institute of Environmental Health Sciences, and the Food and Drug Administration (FDA). The number of tested environmental chemicals was increased to 10,000 (called the Tox21 10K library), while the number of endpoints was reduced to 200, coming from about 70 quantitative high-throughput screening assays. Both data sources are publicly available from the EPA as a MySQL database dump with additional supporting CSV files containing information on the assays and chemicals used, are widely distributed and applied in risk assessment, and can be annotated to fit into the OpenRiskNet data harmonisation and integration framework.

1.8.6.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The ToxCast research project data was generated from high-throughput in vitro toxicity screens for a variety of chemicals and biological targets. One of the goals of the project was to prioritise and evaluate the potential human health risk of chemicals in a cost-efficient way. Processing and analysis workflows as well as computational and predictive models are also provided to estimate the toxicity potential of the chemicals in humans. The results of these analyses are actively used to inform decision-making contexts such as endocrine disruptor screening. To further expand these screening activities, the US EPA, the National Toxicology Program (NTP) headquartered at the National Institute of Environmental Health Sciences, the National Center for Advancing Translational Sciences (NCATS), and the Food and Drug Administration (FDA) formed the Tox21 Consortium.
Tox21 is a US federal research collaboration focused on driving the evolution of toxicology in the 21st century by developing methods to rapidly and efficiently evaluate the safety of commercial chemicals, pesticides, food additives/contaminants, and medical products. 3 To date, the Tox21 Consortium has been successful in generating data on pharmaceuticals and thousands of data-poor chemicals, developing a better understanding of the limits and applications of in vitro methods, and enabling the new data generated to be incorporated into regulatory decisions. Tox21 data is publicly available through the National Library of Medicine's PubChem 4, the EPA's CompTox Dashboard 5, which also provides the ToxCast data, and NTP's Chemical Effects in Biological Systems 6. Although the first can be accessed via APIs, PubChem does not offer the rich metadata and mechanistic annotation available from the US EPA. The other two sources are currently only accessible via web frontends and no APIs are available at the moment, even if the US EPA is planning to release an API to their CompTox database in the future. To allow the data to be integrated with other data services of the OpenRiskNet infrastructure, such as ontology mapping, pathway identification and mapping, and AOP development tools, OpenRiskNet transferred the MySQL database provided by the US EPA, an alternative access point for computational toxicologists, into a data management solution that is easy to access for less experienced users and automated workflows.

2 https://www.epa.gov/chemical-research/toxicity-forecasting
3 https://tox21.gov/wp-content/uploads/2019/02/Tox21_FactSheet_Oct2018.pdf
4 https://pubchem.ncbi.nlm.nih.gov
5 https://comptox.epa.gov/dashboard
6 https://tripod.nih.gov/tox21
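As an illustration of the kind of reshaping such an accessible data solution enables for automated workflows, the sketch below pivots long-format assay rows into a per-chemical hit-call lookup. The column names loosely follow invitroDB's naming (`chnm` for chemical name, `aenm` for assay endpoint name, `hitc` for hit call), but the rows themselves are invented:

```python
from collections import defaultdict

# Invented example rows, shaped like a simplified invitroDB export:
# one row per (chemical, assay endpoint) pair with a binary hit call.
rows = [
    {"chnm": "Bisphenol A", "aenm": "ATG_ERa_TRANS_up", "hitc": 1},
    {"chnm": "Bisphenol A", "aenm": "NVS_NR_hTRb", "hitc": 0},
    {"chnm": "Caffeine", "aenm": "ATG_ERa_TRANS_up", "hitc": 0},
]

def hit_matrix(rows):
    """Pivot long-format assay rows into a chemical -> {assay: hit} mapping."""
    matrix = defaultdict(dict)
    for r in rows:
        matrix[r["chnm"]][r["aenm"]] = r["hitc"]
    return dict(matrix)

print(hit_matrix(rows)["Bisphenol A"]["ATG_ERa_TRANS_up"])  # 1
```

The same pivot is what a user would typically perform after retrieving JSON records from the API, regardless of the backing store.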
Users of the OpenRiskNet service are able to take advantage of the information gaps filled through the integration of these datasets and models to develop predictive toxicology and risk assessment models, e.g. read-across models. Examples of such uses are created in the case studies: collecting evidence from all available data sources to create profiles of specific compounds, complementing omics data in bioinformatics workflows, data-driven development and validation of AOPs, and model building based on chemical and biological data.

1.8.6.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The ToxCast and Tox21 data as provided by the US EPA includes over 9076 chemicals tested for 1473 toxicity endpoints (as of November 2019) that map to hundreds of genes, biological pathways and cellular mechanisms in both humans and rats. The chemicals screened span various uses, including industrial and individual use, food additives and potentially safer alternatives to already existing older chemicals. The assays are usually of two types: (i) cell-based assays, which measure changes in cellular response to the test substances; and (ii) biochemical assays, which measure the activity of a biological macromolecule. The cells used are typically human or rat primary cells and cell lines. To inform chemical safety decisions, the computational toxicology research group at the US EPA makes both archived and current versions of the data available to the public through (1) a database called invitroDB, which is a MySQL download of all the data, (2) summary data in flat-file format (e.g. comma-separated and tab-separated value files), (3) concentration-response plots in PDF format, and (4) the CompTox Dashboard (replacing the ToxCast dashboard), which serves as a portal for users to search and query the data. As the Tox21 project progresses and more chemicals are screened in more assays, the US EPA makes periodic updates to the public release.
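For orientation, the summary flat files can be consumed with nothing more than the standard CSV module; the sketch below filters a toy file for active chemicals. The column headers are illustrative, not the exact invitroDB export headers:

```python
import csv
import io

# Toy stand-in for one of the summary flat files; an empty AC50 field
# indicates no activity was observed (illustrative convention).
flat_file = io.StringIO(
    "chemical,assay_endpoint,ac50_uM,hit_call\n"
    "Bisphenol A,ATG_ERa_TRANS_up,1.2,1\n"
    "Caffeine,ATG_ERa_TRANS_up,,0\n"
)

def active_chemicals(fh, endpoint):
    """Return chemicals with a positive hit call for the given endpoint."""
    return [row["chemical"]
            for row in csv.DictReader(fh)
            if row["assay_endpoint"] == endpoint and row["hit_call"] == "1"]

print(active_chemicals(flat_file, "ATG_ERa_TRANS_up"))  # ['Bisphenol A']
```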
At the time of this writing (November 2019), the invitroDB MySQL database is available in version 3.2 (released 5 August 2019). The data is released under the US government public domain license 1.0 and is not subject to domestic copyright protection under 17 U.S.C. § 105. It states that unless the work falls under an exception, anyone may, without restriction under U.S. copyright laws, reproduce the work in print or digital form, create derivative works, perform the work publicly, display the work, and distribute copies or digitally transfer the work to the public by sale or other transfer of ownership, or by rental, lease, or lending. Therefore, the redistribution of the data by OpenRiskNet under the same conditions is completely covered by the license. With respect to ethics aspects, no personal data is collected during the data generation process, as all data is from in vitro experiments and commercial cell lines are used for the testing. OpenRiskNet has transferred the data into the OpenRiskNet-compliant EdelweissData management solution, with the advantage of providing the data via APIs in a semantically annotated form. For this, data was extracted from the MySQL dump of the invitroDB in versions 3.1 and 3.2. Additional information on the assays and compounds was extracted from the provided CSV files and enriched with additional chemical identifiers taken from the PubChem service. The data was then transferred into tabular format, the metadata into JSON format, and the data schema was semantically annotated. These steps prepared the data for upload to the EdelweissData system, which is fully integrated into the OpenRiskNet infrastructure and from which data and metadata can be accessed via the APIs in JSON format for re-use in OpenRiskNet services and workflows.

1.8.6.3 Additional details to 1.6 Expected size of the data

The current version of the invitroDB database available from EdelweissData has a total size of approximately 50 GB of data and metadata.
1.8.6.4 Additional details to 1.7 Utility of data and models

The high-throughput screening toxicity data and models available in ToxCast cover a wide chemical and biological space useful for risk assessment and are as such of great value to the OpenRiskNet stakeholders. Integrating this dataset into the OpenRiskNet infrastructure allows for easier access to the data by users who may not have the background or expertise to set up and run local databases and modelling pipelines. In addition, being able to access this data from the OpenRiskNet service creates greater utility for the data, as it can be directly cross-referenced and used for modelling or analysis with other data in the service.

#### **1.8.7 TG-GATEs**

Open TG-GATEs [6] is a Japanese public toxicogenomic database resulting from two joint government-private sector projects [7][8] organized by the National Institute of Biomedical Innovation, the National Institute of Health Sciences and multiple pharmaceutical companies. The Japanese Toxicogenomics Project (TGP) generated gene expression and toxicity data in rats and the primary cultured hepatocytes of rats and humans following exposure to 170 compounds (mainly pharmaceutical products). The follow-up Toxicogenomics Informatics Project (TGP2) discovered over 30 different safety biomarkers using the data and generated additional data for verifying the biomarkers and analyzing their mechanisms.

1.8.7.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

Open TG-GATEs is the public toxicogenomics database developed so that a wider community of researchers can utilize the fruits of TGP and TGP2 research. This database provides public access to data on 170 of the compounds catalogued in TG-GATEs. Data searching can be refined using either the name of a compound or the pathological findings by organ as the starting point.
Gene expression data linked to phenotype data in pathology findings is available for download as CEL files. Such data have to be run through several processing steps before they can be used in risk assessment, e.g. to run biological pathway enrichment analysis to identify areas of concern and gain mechanistic insight. Although these are standard procedures performed by bioinformaticians and are used to optimize the information available in the data, risk assessors first want to have a quick look at the data to understand the cellular mechanisms caused by a chemical. Therefore, OpenRiskNet provides processed data (intensities and fold changes) generated using a standardized approach, which can directly be used for gene, pathway and mechanism analysis and linked to AOPs using tools like the AOP-Wiki and AOP-DB described above.

1.8.7.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

Open TG-GATEs datasets are based on the Affymetrix platforms Rat230-2 (rat) and HG-U133_Plus_2 (human), using in vivo and in vitro samples exposed to the chemicals at different concentrations and timepoints. More information on the exposure scenarios and the number of compounds is given in the table below. <table> <tr> <th> **Organ or cell type** </th> <th> **Organism** </th> <th> **Study type** </th> <th> **Dose type** </th> <th> **Dose level** </th> <th> **No.
of compounds tested** </th> </tr> <tr> <td> Liver </td> <td> Rat </td> <td> In vivo </td> <td> Repeat dose </td> <td> Control, low, middle and high </td> <td> 143 </td> </tr> <tr> <td> Liver </td> <td> Rat </td> <td> In vivo </td> <td> Single dose </td> <td> Control, low, middle and high </td> <td> 158 </td> </tr> <tr> <td> Kidney </td> <td> Rat </td> <td> In vivo </td> <td> Repeat dose </td> <td> Control, low, middle and high </td> <td> 41 </td> </tr> <tr> <td> Kidney </td> <td> Rat </td> <td> In vivo </td> <td> Single dose </td> <td> Control, low, middle and high </td> <td> 41 </td> </tr> <tr> <td> Liver </td> <td> Rat </td> <td> In vitro </td> <td> Single dose </td> <td> Control, low, middle and high </td> <td> 145 </td> </tr> <tr> <td> Liver </td> <td> Human </td> <td> In vitro </td> <td> Single dose </td> <td> Control, low, middle and high </td> <td> 158 </td> </tr> </table> For the generation of the processed data to be provided by OpenRiskNet, the raw data were downloaded from ftp.biosciencedbc.jp/archive/open-tggates/LATEST. Data was stored and processed locally using the R scripting language, which contains well-maintained and documented libraries for gene expression analysis. Generally, the workflow consisted of two independent parts: extraction of intensity readouts and calculation of fold changes. The steps for intensity data file generation include (see also Figure 2 for a schematic representation): 1. For every CEL file, **the intensity readouts and probe IDs were extracted** and stored as a new dataset in the form of a CSV file. The CSV file contains only two columns: probe ID and intensity. 2. For every CEL file, the relevant **metadata associated with the particular assay was extracted** (such as organism, organ, study type, compound, dose level, dose type, route of exposure, duration, vehicle) and stored in the form of a JSON file.
This is crucial information for easy and efficient searching/filtering in any subsequent data analysis. 3. Both databases provide only compound names as chemical identifiers. In order to support easier searching/filtering through compounds, we **extracted additional chemical identifiers from the PubChem and CACTUS** (NIH/NCI) services, such as CAS number, SMILES string, InChI, InChI key, IUPAC name and PubChem ID. This information also became part of the metadata file. 4. For every CEL file, the **intensity data and metadata files were combined and uploaded to EdelweissData** as a separate dataset. Along with that, a short description in a human-readable format was generated from the metadata and uploaded together with the dataset to EdelweissData. **Figure 2.** Workflow to generate intensity readouts for DrugMatrix and Open TG-GATEs data and upload to EdelweissData. Critical steps in the workflow are numbered according to the description in the main text. Fold changes were produced using the following workflow (see also Figure 3 for a schematic representation): 1. Firstly, **every CEL file was normalized** using the single-channel array normalization function of the SCAN.UPC library available through Bioconductor [9][10]. The latter has been shown to have the same or better performance as other competing methods (such as RMA), while providing a crucial advantage due to the one-at-a-time normalization of CEL files (hence there is no need to reprocess all the Affymetrix datasets when an existing database is updated) [9]. 2. For every unique set of conditions (compound, dose, organ or cell type, study type, vehicle, route, duration) the corresponding **treatment and control CEL files were identified**. 3.
Normalized data of treatment and control CEL files were used as input for the **differential expression analysis of microarray data** using the well-known limma library available through Bioconductor [11]. The empirical Bayes statistics for differential expression were used to calculate the t- and p-values for every probe ID. Note that for the in vivo studies we used the usual t-test/ANOVA, as it considers two independent groups of samples and fits a linear model to the expression data of each gene, while for the in vitro studies we used the paired t-test statistics, which consider dependent groups of samples. 4. Additionally, to aid the further analysis of fold changes, we converted the probe ID column to the corresponding **gene identifiers** (gene symbol, Entrez ID, Ensembl ID) using the AnnotationDbi library and the rat2302.db or hgu133plus2.db array annotation data available through Bioconductor. Processed data has been stored in the form of CSV files with multiple columns (probe ID, gene symbol, Ensembl ID, Entrez ID, logarithm of fold change, average expression, t, p-value, adjusted p-value, B). 5. For every processed file (i.e. for every set of conditions) the relevant **metadata associated with the particular assay was extracted** (such as organism, organ, study type, compound, dose level, dosing type, route of exposure, vehicle, duration) and stored in the form of a JSON file. As for the intensities, this is crucial for easy and efficient searching/filtering in any subsequent data analysis. 6. The **additional chemical identifiers** generated in step 3 of the previous workflow were again used as part of the metadata file. 7. For every set of conditions the **processed data and metadata files were combined and uploaded to EdelweissData** as a separate dataset. **Figure 3.** Workflow for the calculation of fold changes of DrugMatrix and Open TG-GATEs datasets.
Critical steps in the workflow are numbered according to the description in the main text.

1.8.7.3 Additional details to 1.6 Expected size of the data

The overall size of the TG-GATEs processed data is approximately 160 GB. The much larger raw data in CEL format is not managed by OpenRiskNet but can be accessed at the original source.

1.8.7.4 Additional details to 1.7 Utility of data and models

Processed data created using standardized normalization and processing procedures offer the advantage that they can be directly combined with similar data from other sources. At the moment, this can be done for TG-GATEs and DrugMatrix but will be extended to other sources in the future. The information can then be combined and re-used in OpenRiskNet services for gene and pathway enrichment analysis using tools like WikiPathways, as well as linked to AOPs based on the knowledge covered e.g. in the AOP-Wiki and AOP-DB.

#### **1.8.8 DrugMatrix**

DrugMatrix is one of the world's largest toxicogenomic reference resources, provided by the National Toxicology Program of the US Department of Health and Human Services. It provides access to the toxicogenomic profiles of over 600 different compounds generated with Affymetrix and CodeLink microarrays. While both types of microarray cover liver, kidney, thigh muscle, heart and cultured hepatocytes, the CodeLink microarrays additionally cover bone marrow, spleen, intestine and brain.

1.8.8.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

As for TG-GATEs, OpenRiskNet provides processed data (intensities and fold changes) generated using a standardized approach for normalization and processing starting from the DrugMatrix transcriptomics data, which can directly be used for gene, pathway and mechanism analysis and linked to AOPs using tools like the AOP-Wiki and AOP-DB described above via simple-to-use APIs.
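Although the production workflows for TG-GATEs and DrugMatrix use R (SCAN.UPC, limma), the core fold-change step can be illustrated in a few lines of Python. This toy sketch uses a plain pooled-variance t statistic rather than limma's moderated t (which additionally shrinks the variance estimate across genes), and the intensity values are invented and assumed to be already on the log2 scale:

```python
import math
import statistics as st

# Invented log2-normalized intensities for one probe:
# three treated and three control samples.
treated = [10.1, 10.4, 10.2]
control = [8.0, 8.2, 8.1]

def log2_fold_change(treated, control):
    """Difference of group means on the log2 scale (data already log2)."""
    return st.mean(treated) - st.mean(control)

def t_statistic(a, b):
    """Plain two-sample t statistic with pooled variance; limma's moderated
    t differs by borrowing variance information across all probes."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

print(round(log2_fold_change(treated, control), 2))  # 2.13
```

In the real workflow this calculation runs per probe across the whole array, followed by multiple-testing adjustment of the p-values.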
1.8.8.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

Only data from the Affymetrix microarrays is available as an OpenRiskNet service so far. All datasets are based on the Affymetrix microarray platform RG230-2.0 and male Sprague Dawley rats. <table> <tr> <th> **Organ or cell type** </th> <th> **Study type** </th> <th> **Dose type** </th> <th> **Dose level** </th> <th> **No. of compounds tested** </th> </tr> <tr> <td> Heart </td> <td> In vivo </td> <td> Repeat dose </td> <td> Control, low and high </td> <td> 88 </td> </tr> <tr> <td> Kidney </td> <td> In vivo </td> <td> Repeat dose </td> <td> Control, low and high </td> <td> 139 </td> </tr> <tr> <td> Liver </td> <td> In vivo </td> <td> Repeat dose </td> <td> Control, low and high </td> <td> 200 </td> </tr> <tr> <td> Thigh Muscle </td> <td> In vivo </td> <td> Repeat dose </td> <td> Control, low and high </td> <td> 21 </td> </tr> <tr> <td> Cultured Hepatocytes </td> <td> In vitro </td> <td> Single dose </td> <td> Control and high </td> <td> 125 </td> </tr> </table>

1.8.8.3 Additional details to 1.6 Expected size of the data

For the generation of the processed data to be provided by OpenRiskNet, the raw data were downloaded from https://ntp.niehs.nih.gov/results/drugmatrix/index.html, then processed using the workflows described for TG-GATEs above and uploaded to the EdelweissData system for easy access in OpenRiskNet workflows. The overall size of the DrugMatrix datasets is 40 GB.

1.8.8.4 Additional details to 1.7 Utility of data and models

See section 1.8.7.4 for a general description of the utility of processed transcriptomics data.

#### **1.8.9 KIT Daphnia data on nanoparticles**

The "KIT Daphnia data on nanoparticles" dataset is based on the study "Meta-analysis of Daphnia magna nanotoxicity experiments in accordance with test guidelines" [12] and contains the raw "original_daphnia" data file and its eight derived processed files.
1.8.9.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The "original_daphnia" data is compiled from research articles in which nanotoxicity toward Daphnia magna was measured according to the test guidelines from the OECD and the US EPA. Toxic responses caused by nanomaterials have been assayed; however, the assay outcomes varied between research articles. Therefore, this data set was compiled for a meta-analysis to identify possible causes of the heterogeneity between the diverse assay outcomes. The data contains physicochemical properties of nanomaterials, experimental conditions for the assays and toxic response measurements. Most of the physicochemical properties of the nanomaterials were taken from the research articles without considering measurement details, since such details were often absent in the articles. Therefore, further details can only be found in the referenced research articles. Since nanomaterials aggregate easily in media, dispersion methods were applied to keep the nanomaterials as nano-sized particles. Centrifugation, stirring, sonication and filtration were the four dispersion methods used across the research articles. The eight processed files were prepared by the authors and contain numeric values obtained from the quantitative values of the raw "original_daphnia" data. They are "carbon_pec50", containing the carbon-based nanomaterials with EC50 values; "coated_m_pec50" and "coated_m_class", containing the coated metal nanoparticle data; "fullerene_class", containing the fullerene nanomaterials data; "metal_pec50" and "metal_class", containing the metal nanoparticle data; and "meox_pec50" and "meox_class", containing the metal oxide nanoparticle data. The eight processed files are used to build models: the ones with the label "pec50" are used for regression models and the ones with the label "class" for classification models.
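As a minimal sketch of how such "pec50" values relate to raw EC50 measurements, and how the file labels route to modelling tasks: the snippet below assumes pEC50 is the negative decadic logarithm of a molar EC50, which is the usual convention but has not been verified against the original study.

```python
import math

def pec50(ec50_molar):
    """Usual convention: pEC50 = -log10(EC50 in mol/L); higher = more potent."""
    return -math.log10(ec50_molar)

def task_for(filename):
    """Route a processed file to its modelling task by the label in its name."""
    return "regression" if "pec50" in filename else "classification"

print(round(pec50(1e-6), 2))    # a 1 µM EC50 gives a pEC50 of 6.0
print(task_for("metal_pec50"))  # regression
print(task_for("metal_class"))  # classification
```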
These nano datasets are very important for the standardization and validation of test methods for regulatory use and for identifying reasons for discrepancies in results even where test guidelines exist, and they are a good example of a relatively small data source benefiting greatly from the data management solutions integrated in the infrastructure and the harmonization and interoperability effort of OpenRiskNet. Integration into OpenRiskNet made the dataset publicly available following the FAIR principles.

1.8.9.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The Daphnia data as used in the meta-analysis (Shin, Hyun Kil & Seo, Myungwon & Shin, Seong & Kim, Kwang-Yon & Park, June-Woo & No, Kyoung Tai. (2018)) was available only from the authors and is composed of the raw "original_daphnia" data file and its eight derived processed files, all in tabular CSV (comma-separated values) format. By providing it on the EdelweissData system, the data and associated metadata are now available through the APIs in JSON format and can be reused in an interoperable way in combination with other data sources. Although the data sources provided by the OpenRiskNet consortium cover nanomaterials only to a very small extent, the uptake of OpenRiskNet solutions, e.g. by the NanoCommons infrastructure, will result in new interoperable resources in the near future. Models for nanomaterials using the data are already part of the infrastructure.

1.8.9.3 Additional details to 1.6 Expected size of the data

The total size of the data and metadata of this small data source is below 1 MB.

1.8.9.4 Additional details to 1.7 Utility of data and models

Even though this is a very small data source, it provides important information on the reproducibility of nanomaterial hazard data generated following a validated OECD guideline.
It can now guide researchers performing equivalent experiments on new nanomaterials, and encourage them to provide the results at the same level of detail and preferably also to deposit the data on an OpenRiskNet data solution, allowing combined analysis and monitoring of progress with respect to reproducibility.

#### **1.8.10 ToxicoDB**

ToxicoDB is a web application to mine large- and small-scale toxicogenomics datasets. To better understand the molecular mechanisms underlying compound toxicity, great efforts have been made by various groups in the screening of drugs/chemicals to generate datasets such as Open TG-GATEs and DrugMatrix.

1.8.10.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The inspiration for this application comes from the need for a common ground for analyzing toxicogenomics datasets with maximum overlap and consistency. ToxicoDB will provide an intuitive interface for all users (including users who are not computationally savvy) to mine the complex toxicogenomic data. This is in line with OpenRiskNet's objectives.

1.8.10.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

ToxicoDB will provide curated toxicogenomics datasets, which include gene expression data and toxicological information on the drugs and chemicals used to generate these gene expression data. These datasets are and will be obtained from publicly available resources, such as the diXa data warehouse, NCBI GEO and EBI's ArrayExpress. All datasets are and will be preprocessed and checked for quality.

1.8.10.3 Additional details to 1.6 Expected size of the data

As a resource provided by an associated partner of OpenRiskNet, data management and sustainability are in the hands of this institution. As work on ToxicoDB is still ongoing, it is unclear what the size of the data will be. Currently, only data from TG-GATEs and DrugMatrix are available.
However, by integrating the service into the OpenRiskNet infrastructure, OpenRiskNet guarantees that the metadata will be publicly available.

1.8.10.4 Additional details to 1.7 Utility of data and models

Users of the database can find drug and gene annotations, visualize gene expression data for datasets as well as drugs of interest, and will be able to download these data.

#### **1.8.11 ToxPlanet**

ToxPlanet aggregates and curates toxicology and chemical hazard information from over 500 sources. The commercial service can be accessed via a web-based GUI as well as via APIs.

1.8.11.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

ToxPlanet gives access to regulatory reports and other structured and unstructured information sources. In this way, it is a comprehensive and well-organized source for legacy data and documents the current state of regulation for a large number of compounds. This information can be used in all application areas of OpenRiskNet and can be partly extracted automatically by the text-mining workflows developed in OpenRiskNet.

1.8.11.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The data is mainly in the form of reports available in PDF format and originates from all major regulatory agencies and literature sources. Most of the documents are in the public domain, while the search and browsing features of ToxPlanet are proprietary and their use requires a commercial license.

1.8.11.3 Additional details to 1.6 Expected size of the data

Since the service is commercial, the data is managed by the provider. However, by integrating the service into the OpenRiskNet infrastructure, OpenRiskNet guarantees that the metadata will be publicly available.
1.8.11.4 Additional details to 1.7 Utility of data and models

OpenRiskNet provides text-mining workflows to extract data from the ToxPlanet repository, under the restriction that the user needs to acquire a license first.

#### **1.8.12 SCAIView**

The information retrieval system SCAIView allows for semantic searches in large text collections by combining free-text searches with the ontological representations of entities derived by JProMiner. SCAIView gives answers to questions such as "Which genes/proteins are related to a certain disease, pathway or epigenetics?". SCAIView's key features are: * A user-friendly search environment with a query builder supporting semantic queries with biomedical entities * Fast and accurate search and retrieval, based on the newest technologies of semantic search engines * Visualization and ranking of the most relevant entities and documents * Export of the search results in various file formats Documents are retrieved by precisely formulated questions using ontological representations of biomedical entities. The entities are embedded in searchable hierarchies and range from genes, proteins and associated single-nucleotide polymorphisms to chemical compounds and medical terminologies. SCAIView supports the selection of suitable entities by an autocompletion functionality and a knowledge base for each entity. This includes a description of the entity, structural information, pathways and links to relevant biomedical databases like EntrezGene, dbSNP, KEGG, GO and DrugBank. SCAIView presents the search results using color-coded highlighting of the different entity classes, statistical search results and various ranking functions.
The selected biomedical entities are found by an approximate search algorithm implemented in the Fraunhofer-Gesellschaft information extraction tool JProMiner®, which additionally disambiguates synonyms of entities to unique identifiers in publicly available entity databases. The SCAIView data collection comprises three different public document collections and an index of biomedical entities extracted from these documents.

1.8.12.1 Additional details to 1.1 Purpose of the data collection and 1.2 Relation to the objectives of the project

The main purpose of the data collection is to enable semantic search functionality for researchers over publicly available full-text collections. The documents are retrieved via free-text queries in combination with semantic or ontological searches for biomedical entities of interest. The biomedical entities are embedded in searchable hierarchies and range from genes, proteins and associated SNPs to chemical compounds and medical terminology. With ontological filtering, it is possible to restrict the result to a subset, e.g. genes on a KEGG pathway or in a cytoband region. Advanced retrieval technology allows answering complex queries such as: * Which genes/proteins are related to a certain context (e.g. disease/pathway/epigenetics)? * Give me an overview of relevant biomedical concepts in my subcorpus * Which drugs are relevant for this context? * To which diseases is my gene associated? * Which chromosomes show linkage to the disease? * Which variations are mentioned in the context of the disease and can they be found in dbSNP? * What other diseases possibly co-occur with my relevant disease? The collection has been used in the Data Cure case study to find relevant information on chemical compounds and their cancer hazard to humans.
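Conceptually, such a semantic query combines ontology-typed entity constraints with free text. The sketch below builds one as a plain string; the bracket syntax is purely illustrative and not SCAIView's actual query language:

```python
def build_query(free_text, entities):
    """Join entity constraints of the form [Class:"name"] with a free-text
    term. The syntax is invented here for illustration only."""
    parts = ['[%s:"%s"]' % (cls, name) for cls, name in entities]
    parts.append('"%s"' % free_text)
    return " AND ".join(parts)

query = build_query("carcinogenicity",
                    [("Chemical", "benzene"), ("Disease", "leukemia")])
print(query)  # [Chemical:"benzene"] AND [Disease:"leukemia"] AND "carcinogenicity"
```

The point of the ontological part is that "benzene" is resolved to a unique entity identifier, so the search also matches synonyms rather than only the literal string.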
1.8.12.2 Additional details to 1.3 Types and formats of data, 1.4 Reuse of data and 1.5 Origin of the data

The data collection has been derived from publicly available XML collections from the U.S. National Library of Medicine (NLM: PubMed, PMC under the Terms & Conditions of NLM) and the United States Patent and Trademark Office (USPTO: PatFT). The annotated files are converted to JSON format and can be downloaded via the SCAIView API.

1.8.12.3 Additional details to 1.6 Expected size of the data

The size of the raw XML data is: PubMed 40 GB, PMC 61 GB, Patents 79 GB. The processed data comprises: PubMed 664 GB, PMC 431 GB, Patents 179 GB. The publication numbers increase constantly every year in the NLM and the USPTO. The processed data grows at an even larger rate, since more terminologies and ontologies are indexed each year. The processed data is loaded into a Solr enterprise search engine and is hosted and regularly updated by Fraunhofer SCAI under the corresponding Terms & Conditions.

## 2. FAIR DATA

### 2.1 Making data findable, including provisions for metadata

<table> <tr> <th> _●_ </th> <th> Outline the discoverability of data (metadata provision) </th> </tr> <tr> <td> _●_ </td> <td> Outline the identifiability of data and refer to standard identification mechanisms. Do you make use of persistent and unique identifiers such as Digital Object Identifiers? </td> </tr> <tr> <td> _●_ </td> <td> Outline naming conventions used </td> </tr> <tr> <td> _●_ </td> <td> Outline the approach towards search keywords </td> </tr> <tr> <td> _●_ </td> <td> Outline the approach for clear versioning </td> </tr> <tr> <td> _●_ </td> <td> Specify standards for metadata creation (if any).
If there are no standards in your discipline, describe what metadata will be created and how </td> </tr> </table> OpenRiskNet integrated existing data sources and made them more easily findable, accessible and interoperable. This was based, on the one hand, on the metadata provided by the data sources and, on the other hand, on the interoperability layer and the exploitation of semantic web standards (Resource Description Framework, RDF), which harmonise these metadata, leading to data service descriptions and data schemata that can be queried through the OpenRiskNet discovery service. The description of the capabilities of a database and of the data schema allows for: * Accessing specific search functionality; * Identifying the data fields to be searched (e.g. where information on the biological assays is stored); * Finding the best format for data exchange; * Understanding all the data and tools, with transparent access to metadata describing the experimental setup or computational approaches. In the case that the original data sources do not provide all the features required by the FAIR principles, e.g. unique persistent identifiers or clear access protocols, OpenRiskNet worked together with the data providers to either integrate these into the original service, transfer the data to more advanced data management solutions provided by OpenRiskNet, or provide the missing features as part of the interoperability layer added to the service in the context of OpenRiskNet, all leading to an improved quality of the data source.

### 2.2 Making data accessible

<table> <tr> <th> _●_ </th> <th> Specify which data will be made openly available. If some data is kept closed, provide the rationale for doing so </th> </tr> <tr> <td> _●_ </td> <td> Specify how the data will be made available </td> </tr> <tr> <td> _●_ </td> <td> Specify what methods or software tools are needed to access the data. Is documentation about the software needed to access the data included?
Is it possible to include the relevant software (e.g. in open source code)? </td> </tr> <tr> <td> _●_ </td> <td> Specify where the data and associated metadata, documentation and code are deposited </td> </tr> <tr> <td> _●_ </td> <td> Specify how access will be provided in case there are any restrictions </td> </tr> </table>

The OpenRiskNet approach enables the easy and transparent sharing and analysis of data between organisations involved in many sectors and programmes. OpenRiskNet APIs and the transfer formats used were released openly as soon as their definitions had reached a stable form at the end of the project. However, updates might be necessary to follow scientific and technical advances. In the prioritisation of services to be integrated, open-source tools were favoured for use in the case studies and the reference workflows targeting specific questions, but commercial services are equally important to build in the specific requirements of restricted access rights and to help sustain the infrastructure in the long run. However, these commercial services also had to openly share their API definitions and data formats as well as provide features to make metadata even of such restricted data findable, in order to allow for integration and combination with other tools and to comply with the FAIR principles.

Open standards applied:

* Data and models are stored and served using well-developed and widely applied standards and technologies that promote data reuse and integration, such as JSON-LD, RDF and related semantic web technologies;
* OpenRiskNet resources are aligned with activities of toxicology communities like OpenTox, NanoCommons and EU-ToxRisk in developing open standards for predictive toxicology resources;
* Tools to access study data and metadata descriptions in standard file formats already in use in a number of omics, toxicogenomics and nanosafety resources (e.g.
ToxBank, diXa, eNanoMapper), further simplify the integration;
* Model descriptions are provided in encodings guided by suitable open standards (e.g. QMRF, BEL, SBML) and annotated following appropriate minimal information standards (MIRIAM) for dissemination through appropriate repositories (e.g. BioModels), to cover the extended requirements of the semantic interoperability layer of OpenRiskNet.

OpenRiskNet did not create new file standards but rather employed existing approaches to define a core set of information, on which the scientific community agrees that it is important to document, but which can also be modified and extended if necessary for a specific application. For defining this core set, regulatory file formats like the **OECD harmonised templates (OECD HT)** [13] and the **Standard for Exchange of Nonclinical Data (SEND)** [14] were included when collecting the requirements for file transfer. Even if these file formats are too limited and do not have the flexibility to be used outside regulatory purposes, and especially for early-stage research and method development, the guidelines for data and metadata management proposed by OpenRiskNet, which continue to be developed in NanoCommons and other projects, ensure that all relevant metadata and information needed for regulatory reporting is included in the data transfer templates.

### 2.3 Making data interoperable

<table> <tr> <th> _●_ </th> <th> Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability </th> </tr> <tr> <td> _●_ </td> <td> Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies? </td> </tr> </table>
The OpenRiskNet interoperability layer opens possibilities to provide data schemata that describe the format of the data using a controlled vocabulary:

* Metadata standards and data documentation approaches consider the existing standards that can be consolidated and the equivalent data that can be retrieved independent of the file format;
* Developments towards the integration of ontologies under a single framework have been started and are ongoing together with partner projects, mainly from the EU NanoSafety Cluster, which will contribute to the goal of automatic harmonisation and annotation of datasets. The goal is not to develop new ontologies. Instead, already existing ontologies (e.g. OBI, ChEBI, PATO, UO, NCIT, EFO, OAE, eTOX, eNanoMapper, MPATH, etc.) are consolidated and integrated into application ontologies for the toxicology community, and specifically for the requirements of OpenRiskNet service annotation.
* Another requirement for establishing the comprehensive use of ontologies, and in this way fostering the interoperability not only of the major data sources but also of user-provided data, is user-friendly capturing frameworks supporting the selection of ontology terms during data curation, together with an ontology mapping service resolving issues of using synonyms from different ontologies (e.g. CAS numbers can be annotated using the National Cancer Institute Thesaurus, the EDAM ontology or even the Chemical Information Ontology reused in the eNanoMapper ontology, where it is available under the term “CAS registry number”). OpenRiskNet was working with experts in the field to integrate such tools in the infrastructure and has partly integrated them into data management services provided by OpenRiskNet partners. However, since OpenRiskNet was not a major primary data provider, it is even more important that these tools are now made available and easily accessible to the community for integration in new and existing data sources of EU-funded research projects.
* Allowing mapping between related items in different databases (e.g. different gene identifiers, linking genes to proteins or RNA identifiers, or mapping between equivalent chemical structures in different databases). BridgeDb, which can perform such mappings and is part of the OpenRiskNet services, is thus a core interoperability service. Additionally, we provide guidelines and training on the usage of standard data transfer/sharing formats and ontologies in the context of OpenRiskNet.

Best-practice examples like ToxCast, AOP-Wiki and AOP-DB show how semantic annotation can be applied, either directly by providing the data as RDF, via semantically annotated OpenAPI definitions, or both, and used to make the data and information understandable and, in this way, easier to integrate and re-use.

An important and heavily used part of improved data management throughout the OpenRiskNet project is searching and accessing data from different sources, supported by the semantic annotation of the data sources based e.g. on the Bioschemas and BioAssays ontologies:

* The databases are accessible via the OpenRiskNet APIs (similar to the computational tools), including the interoperability layer;
* Searches across multiple databases are possible, removing the need to search in each one independently;
* The interoperability layer can be used to inspect the data schema and find out whether the needed information is available from the database and whether it can be provided in a form suitable for further analysis.

All this work was based on and extended:

* OpenTox APIs, which were designed to cover the field of QSAR-based predictive toxicology with dataset generation, model building, prediction and validation;
* Open PHACTS APIs, which handle knowledge collection and sharing;
* Various other APIs for accessing databases like BioStudies, EGA, ToxBank, and PubChem;

which led to a fast uptake by the community due to existing familiarity with the underlying concepts.
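As an illustration of how such an identifier-mapping service can be consumed, the sketch below builds a request URL for a BridgeDb-style cross-reference lookup and groups a tab-separated response by data source. The base URL, endpoint path and response format are assumptions modelled on the public BridgeDb web service, not project-specific code; no live HTTP call is made here.

```python
from urllib.parse import quote

# Base URL of a public BridgeDb instance (an assumption for illustration;
# an OpenRiskNet deployment would use its own service URL).
BRIDGEDB_BASE = "https://webservice.bridgedb.org"

def xrefs_url(organism, system_code, identifier):
    """Build the URL for a BridgeDb-style cross-reference lookup.
    `system_code` is a BridgeDb data-source code, e.g. 'L' for Entrez Gene."""
    return f"{BRIDGEDB_BASE}/{quote(organism)}/xrefs/{quote(system_code)}/{quote(identifier)}"

def parse_xrefs(body):
    """Group a tab-separated 'identifier<TAB>datasource' response body
    into {datasource: [identifiers]} (the response format is an assumption)."""
    mappings = {}
    for line in body.strip().splitlines():
        if line:
            identifier, source = line.split("\t")
            mappings.setdefault(source, []).append(identifier)
    return mappings

# Example with a canned response instead of a live HTTP call:
sample = "ENSG00000171862\tEnsembl\nP60484\tUniprot-TrEMBL\n"
url = xrefs_url("Human", "L", "5728")
groups = parse_xrefs(sample)
```

Grouping the mapped identifiers by data source mirrors how such results are typically joined with other annotated datasets in a workflow.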
### 2.4 Increase data re-use (through clarifying licenses)

<table> <tr> <th> _●_ </th> <th> Specify how the data will be licensed to permit the widest reuse possible </th> </tr> <tr> <td> _●_ </td> <td> Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed </td> </tr> <tr> <td> _●_ </td> <td> Specify whether the data produced and/or used in the project is usable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why </td> </tr> <tr> <td> _●_ </td> <td> Describe data quality assurance processes </td> </tr> <tr> <td> _●_ </td> <td> Specify the length of time for which the data will remain re-usable </td> </tr> </table>

Most of the data sources are already available in the public domain. OpenRiskNet redistributes the data using the same license as the original data provider or, if this is demanded by the data provider, in a more restricted form. New data is made available through the OpenRiskNet access methods as soon as it is released by the original data provider, i.e. no additional embargo period was enforced by the OpenRiskNet project. Thus, users can access the same data, with respect to both the number of datasets and the version of each dataset, either from the original service provider, e.g. via a web interface specifically designed for the data warehouse, or through the OpenRiskNet mechanism, where the latter has the advantage of easy integration into workflows and interoperability with other data sources and software tools. Besides this simpler access, OpenRiskNet improved the quality of the data with the following measures:

Data quality assurance processes:

* Tools for performing automatic validation, analysis and (pre)processing are developed to find inconsistencies in the data and databases and, in this way, improve the quality of the source, and are made available in OpenRiskNet, e.g.
_http://arrayanalysis.org/_ and _https://github.com/BiGCAT-UM_. Additionally, efforts to establish a general, cross-database data curation framework, in which users can flag possible errors in the data and semantic annotation, are supported.
* Some partners (e.g. UM) developed their own pipelines for quality control and analysis of sequencing data (RNA-seq and MeDIP-seq).
* We also integrate tools for automatic or manual curation of datasets as well as for deriving processed data. The modified datasets are stored (similar to the pre-reasoned datasets) in OpenRiskNet-compliant databases with a link to the original source. Discussions were started, are underway and will continue with the original data providers to transfer the curated datasets back into the original database, so that users preferring to use the data from the primary source also profit from the curation effort.

Quality assurance in the processing, analysis and modelling tools:

* Protocolling of the performed calculations, which increases the repeatability and reproducibility of the studies, is supported by the automatic logging and auditing functionalities of modern microservices frameworks as well as by the integrated workflow management systems.
* Validation of the services was enforced by the consortium and appropriate measures of uncertainty were requested for all models.

### 2.5 Specific information on individual shared data sources

#### 2.5.1 Workflows

All OpenRiskNet workflows created as part of the case studies or demonstrating the functionalities of individual services or combinations of services are available under open licenses from the GitHub repository. These can be downloaded, re-executed to demonstrate repeatability, and modified to answer specific scientific questions of the user.

#### 2.5.2 BridgeDb

The BridgeDb software is available under the OSI-approved Apache License 2.0.
Identifier mapping files are available under open licenses too, following the open licenses of the upstream resources (Ensembl, Rhea) or CCZero in the case of the metabolite mapping database. The BridgeDb web service and the data for identifier mappings are made available on the OpenRiskNet cloud using an OpenAPI specification wrapped around a REST service.

#### 2.5.3 WikiPathways

All contents of WikiPathways are licensed under the Creative Commons CC0 waiver, which states that all contents of the database are free to share and adapt. WikiPathways adopts a customised quality assurance protocol to curate the database, which is done on a weekly basis.

#### 2.5.4 AOP-Wiki

The AOP-Wiki provides quarterly downloads of the complete database, which are permanently maintained by the OECD. The AOP-Wiki does not provide licence information, but states that the data can be reused. All AOPs undergo review by EAGMST to ensure the quality of the contents of the AOP-Wiki.

<table> <tr> <th> **FAIR Principles** </th> <th> **WikiPathways** </th> <th> **AOPWiki** </th> </tr> <tr> <td> F1. (Meta)data are assigned a globally unique and persistent identifier </td> <td> 2 </td> <td> 1 </td> </tr> <tr> <td> F2. Data are described with rich metadata </td> <td> 1 </td> <td> 1 </td> </tr> <tr> <td> F3. Metadata clearly and explicitly include the identifier of the data they describe </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> F4. (Meta)data are registered or indexed in a searchable resource </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> A1.1. The protocol is open, free and universally implementable </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> A1.2. The protocol allows for an authentication and authorization where necessary </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> A2. Metadata are accessible, even when the data are no longer available </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> I1.
(Meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> I2. (Meta)data use vocabularies that follow FAIR principles </td> <td> 1 </td> <td> 1 </td> </tr> <tr> <td> I3. (Meta)data include qualified references to other (meta)data </td> <td> 2 </td> <td> 1 </td> </tr> <tr> <td> R1.1 (Meta)data are released with a clear and accessible data usage license </td> <td> 2 </td> <td> 1 </td> </tr> <tr> <td> R1.2. (Meta)data are associated with detailed provenance </td> <td> 1 </td> <td> 1 </td> </tr> <tr> <td> R1.3. (Meta)data meet domain-relevant community standards </td> <td> 1 </td> <td> 2 </td> </tr> </table>

**Table 1.** Compliance with the FAIR principles [15] by AOP-Wiki and WikiPathways. Score meanings: 1 = partial compliance, 2 = compliance

#### 2.5.5 AOP-DB

The AOP-DB provides data that originates from a variety of data sources. The OpenRiskNet service that exposes the AOP-DB RDF contains data from NCBI, ToxCast, CTD and DisGeNET. NCBI and ToxCast are produced by the U.S. Government and the information is by default in the public domain. The DisGeNET database is made available under the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. CTD is subject to the terms of reuse by the MDI Biological Laboratory and NC State University (_http://ctdbase.org/about/legal.jsp_).

#### 2.5.6 ToxCast

All data produced by the U.S. EPA, including ToxCast/Tox21, is by default in the public domain (U.S. Public Domain license) and is not subject to domestic copyright protection under 17 U.S.C. § 105. This allows anyone to reproduce the work in print or digital form, create derivative works, perform the work publicly, display the work, and distribute copies or digitally transfer the work to the public by sale or other transfer of ownership, or by rental, lease, or lending.
The release currently provided as part of OpenRiskNet uses the EdelweissData system, including a REST API developed by Edelweiss Connect, and is based on the MySQL database dump and additional downloadable CSV files. The data was extracted, restructured to fit the OpenRiskNet concept and semantically annotated.

#### 2.5.7 TG-GATEs

Open TG-GATEs is provided as a public source by the Japanese National Institutes of Biomedical Innovation, Health and Nutrition. OpenRiskNet used standard procedures to normalize and process the data and now provides it under the Attribution CC BY 4.0 license based on the EdelweissData platform, which is designed to comply with the FAIR principles.

#### 2.5.8 DrugMatrix

DrugMatrix is provided as a public source by the National Toxicology Program of the US Department of Health and Human Services. As for TG-GATEs, OpenRiskNet used standard procedures to normalize and process the data and now provides it under the Attribution CC BY 4.0 license based on the EdelweissData platform, which is designed to comply with the FAIR principles.

#### 2.5.9 KIT Daphnia data on nanoparticles

The KIT Daphnia dataset on nanoparticles is provided by the Department of Predictive Toxicology, Korea Institute of Toxicology. OpenRiskNet provides the data under the Attribution CC BY 4.0 license based on the EdelweissData platform, which is designed to comply with the FAIR principles.

#### 2.5.10 ToxicoDB

ToxicoDB is provided by the University Health Network, Toronto, Canada and is available under the GNU Lesser General Public License 3 (LGPLv3.0). The database provides toxicological data and toxicogenomics datasets (incl. TG-GATEs and DrugMatrix) from publicly available sources, thereby complying with the FAIR principles.

#### 2.5.11 ToxPlanet

ToxPlanet is available under a commercial license. However, metadata is publicly available to comply with the FAIR principles.
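Several of the datasets above are served through REST APIs, in particular via the EdelweissData platform. Purely as an illustration of how a client can page through such a dataset API, the sketch below builds the request URLs for a row-range download; the `/datasets/{id}/data` path and the `offset`/`limit` parameters are hypothetical placeholders, not the actual EdelweissData API.

```python
from urllib.parse import urlencode

def page_urls(base_url, dataset_id, total_rows, page_size=100):
    """Yield one request URL per page needed to fetch `total_rows` rows.
    The endpoint path and query parameters are illustrative assumptions."""
    for offset in range(0, total_rows, page_size):
        query = urlencode({"offset": offset, "limit": page_size})
        yield f"{base_url}/datasets/{dataset_id}/data?{query}"

# 250 rows at 100 rows per page -> 3 requests
urls = list(page_urls("https://api.example.org", "toxcast-v3", total_rows=250))
```

Fetching in fixed-size pages like this keeps individual responses small, which matters for the multi-gigabyte toxicogenomics datasets mentioned above.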
#### 2.5.12 SCAIView

The main resources on which SCAIView builds (namely PubMed and US patents) are very metadata-rich, and SCAIView makes this metadata searchable and accessible via an API. SCAIView harmonizes the metadata of the different sources into a common JSON schema and therefore makes the data more interoperable and reusable. SCAIView adds further metadata from text mining by adding semantic annotations. These annotations are derived from publicly available ontologies (e.g. the OBO Foundry). Each annotation contains a referable identifier (URI), a source, a preferred label and workflow provenance on how the annotation has been produced. The generated data is thus FAIR by design.

## 3\. ALLOCATION OF RESOURCES

<table> <tr> <th> **Explain the allocation of resources, addressing the following issues**: * Estimate the costs for making your data FAIR. Describe how you intend to cover these costs * Clearly identify responsibilities for data management in your project * Describe costs and potential value of long term preservation </th> </tr> </table>

Making data FAIR was a central task of the integration of data sources in the OpenRiskNet infrastructure. Many of the sources integrated already follow the FAIR principles at least to some extent, and limited additional budget was needed beyond the effort required to make the sources OpenRiskNet-compliant. For data sources not yet at a sufficiently high level, the OpenRiskNet partners owning the data or responsible for its integration covered the costs of the integration from their allocated budget. In the case of third-party data sources, the integration was performed in collaboration with the associated partners, partly financially supported through the Implementation Challenge, with an OpenRiskNet partner designated as the main contact point of the associated partner.

## 4\. DATA SECURITY

<table> <tr> <th> _●_ </th> <th> Address data recovery as well as secure storage and transfer of sensitive data </th> </tr> </table>

The OpenRiskNet approach to data recovery, secure storage and transfer of sensitive data included:

* Responsible and secure management processes for personal data, including anonymisation, encryption, logging of data usage as well as data deletion after usage, are implemented;
* To ensure that all ethical guidelines are followed by all OpenRiskNet partners and associated partners and implemented in every step of the infrastructure, a privacy-by-design approach was followed in the project, documented in the OpenRiskNet privacy policy (see below) and controlled by an independent Data Protection Officer;
* The most sensible way to protect sensitive data offered by the OpenRiskNet infrastructure was to bring the virtual environment and all data sources behind a company’s firewall by in-house deployment;
* All these data protection measures were documented and distributed as part of the terms of usage, to guide existing and future data providers and research projects in need of secure and ethical data management.

## 5\. ETHICAL ASPECTS

<table> <tr> <th> _●_ </th> <th> To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former </th> </tr> </table>

The ethical aspects are covered by the Protection Of Personal Data (POPD) requirements and deliverables:

* D6.1 - NEC - Requirement No. 3: Statements regarding the full compliance of third country participants to the H2020 rules
* D6.2 - POPD - Requirement No. 4: Copies of the previous ethical approvals of data to be collected and used in the project, approvals which must also allow the possible secondary use of data
* D6.3 - POPD - Requirement No.
5: Consent forms and information sheets before interviews and surveys and any other data and personal data collection activity in the project
* D6.4 - POPD - Requirement No. 6: A statement by third party testers that they will comply with the applicable EU law and H2020 rules
* D6.5 - POPD - Requirement No. 7: Data Protection Officer Report 1
* D6.6 - POPD - Requirement No. 8: Data Protection Officer Report 2
* D6.7 - POPD - Requirement No. 9: Data Protection Officer Report 3

Based on the ethics report received as part of the evaluation of the proposal and the three reports of the **Data Protection Officer (DPO)** (D6.5, D6.6 and D6.7), summarising the relevant regulations and legislation for the different data types and test models, OpenRiskNet has also worked together with an external expert to develop elements of an ethics review framework, and a document in the form of a “Proposal for ethics requirements for data providers to European e-infrastructures like OpenRiskNet” was generated (version 3 in **Annex 1**). This document was further reviewed and improved based on the recommendations received from the DPO.

The OpenRiskNet platform is based on a federated data management system that makes existing data sources, provided by other European projects in this field or by international consortia, mainly from in vitro human and animal and in vivo animal experiments, available to all stakeholders in a harmonised way. Such a federated system imposes the highest demands on data management and sharing with respect to data security, data integrity and ethics, becoming even more relevant with the EU General Data Protection Regulation (GDPR) in effect since May 2018\. Thus, not only OpenRiskNet but the complete scientific community is in need of support and recommendations to achieve the highest impact without jeopardising the ethical integrity of the e-infrastructure.
As a starting point, and to fulfil the requirements from the ethics review, a step-by-step decision process was first designed addressing how important legacy data sources need to be handled by the project. On top of the workflows provided in the reviews of the Data Protection Officer, a hierarchical data source analysis and an evaluation of the ethical implications for OpenRiskNet were performed. Different categories of data sources have been analysed, including references to the legislation in place and the conditions for primary, secondary and tertiary data collection and use. Special measures that need to be considered for some specific cases are also included. Data types and additional aspects considered:

* General requirements
* Additional Requirements for Human Data ○ Basic requirements for ALL types of human data ○ Further requirements according to type of biomaterial being provided ■ Commercial Cell Lines ■ Research Using, Producing or Collecting Human Cells and Tissues (EXCLUDING Embryonic Stem Cells (hESC) and Human-Induced Pluripotent Stem Cells (hiPSC)) ■ Research using Human Embryonic Stem Cell (hESC) and Human-Induced Pluripotent Stem Cell (hiPSC) Lines ■ Research using Clinical Trial Data
* Additional Requirements for Animal Data

**Reasoning**

The European Commission is enforcing strong data management guidelines following the FAIR principles (Findable, Accessible, Interoperable and Re-usable) on all funded projects, together with the open, public sharing of all data generated in these projects. However, this sharing of research data needs to follow all ethics requirements and the highest standards of privacy protection. To support the definition of such standards, OpenRiskNet has developed this checklist to help data providers understand the requirements for privacy-protecting data sharing and to guide them in fulfilling the existing guidelines and the ethics requirements for the specific data supplied.
The checklist has general requirements that apply to all data types, and additional requirements specifically for data used in the toxicology and risk assessment fields. The final goal is to allow for an evaluation following the relevant criteria and to generate a statement confirming that the applicable requirements are fulfilled before a dataset can be used. Since data management and sharing is becoming a more centralised, European or even global effort spanning different infrastructures and disciplines, recommendations and guidelines adopted by OpenRiskNet cannot be developed autonomously and have to be aligned and harmonised with the ongoing discussions on the EU level and with changing regulations. Therefore, this document needs to be considered only as an initial attempt to cover issues in the specific scientific area, and it will be provided to working groups on FAIR data requirements established within the governance of the European Open Science Cloud (EOSC). Until additional feedback is collected and other players (i.e. EOSC) are consulted, the information included in Annex 1 will not yet be implemented as an online tool linked to the OpenRiskNet infrastructure, but kept as an input document for the e-infrastructure community. The current draft of the checklist proposes specific ethics evaluation steps for each data source to be included and/or used in the OpenRiskNet platform, based on its assignment to one of the data categories listed above (animal in vitro and in vivo, human in vivo, ...), aligned with specific regulatory and ethical requirements.
Even if the obligation to fulfil all necessary national and international regulatory and ethical requirements, including obtaining legal and ethics clearance for all experiments and operating in conformity with the institutional regulations, ultimately lies in the hands of the original data producer, OpenRiskNet is committed to continue providing the framework for the ethics evaluation described above to all providers of data services (OpenRiskNet internal, associated partners and other third parties) and supported the execution and documentation of the data source evaluation. However, as already mentioned, all scientific disciplines are now facing the same challenge of how to document ethical and privacy-protecting generation, management and sharing of research data within the open research data and open science goals of the European Union. Therefore, we did not enforce the adoption of the checklist as a requirement for data services on OpenRiskNet, since this would be very specific to this e-infrastructure while the services are also of interest in a more general setting such as the European Open Science Cloud. This makes it clear that discussions on ethics standards have to be pushed to a higher level and harmonised across all relevant European infrastructures, or even better globally, to avoid data providers having to deal with multiple partly contradictory, incomplete or outdated requirements, guidelines and checklists depending on the setting their services are used in.
To funnel our experience, know-how and the results of the ethics reviews into these discussions, we have now opened the checklist, available in a draft version, for comments; raised the importance of clear licenses, privacy protection and ethics guidelines at various project meetings and, even more importantly, at ELIXIR and EOSC meetings (including the “Building EOSC through the Horizon 2020 projects: current status and future directions” workshop, 9-10 Sep 2019 in Brussels); and engaged with interested parties and the EOSC-Secretariat to improve and extend the OpenRiskNet guidelines and checklist, to make them fit for multiple disciplines and infrastructures and to foster their uptake by the scientific community.

### 5.1 Privacy Policy

The privacy policy 2 (version 3 from 20 November 2019) is implemented and discloses the ways the OpenRiskNet website manages content, personal data and analytics on website usage. Specifically, the disclaimer implemented refers to the following aspects related to the Personal Data Protection and Privacy Policy:

* Website content disclaimer
* External links disclaimer
* Copyright and acknowledgement of sources
* Data Protection and Privacy Policy ○ Responsibility for the processing of Personal Data ○ Categories of Personal Data processed by OpenRiskNet ○ Storage of Personal Data ○ Sharing of Personal Data ○ Data Protection Rights under the General Data Protection Regulation (GDPR) ■ Service Providers ■ Analytics ○ Security and Integrity ○ Cookies ○ SSL or TLS encryption ○ Email communication ○ Changes of the Data Protection and Privacy Policy
* Legal effect of disclaimer
* Contact details

The latest version of the Privacy Policy is included in **Annex 2**.

### 5.2 Terms of use

The terms of use 3 (version 3 from 20 November 2019) regulate the use of the OpenRiskNet infrastructure and are structured as follows:

* Definition of terms used
* About the OpenRiskNet e-infrastructure
* Data providers and data users of the OpenRiskNet e-infrastructure
* Use of
OpenRiskNet e-infrastructure
* Confirmation of Acceptance of the terms of use

The latest version of the OpenRiskNet e-infrastructure terms of use is included in **Annex 3**.

# GLOSSARY

The list of terms and abbreviations, with definitions, used in the context of the OpenRiskNet project and the e-infrastructure development is available at: _https://github.com/OpenRiskNet/home/wiki/Glossary_
https://phaidra.univie.ac.at/o:1140797
# Executive Summary

This document is the first version of the Data Management Plan (DMP) for data collected and created by DESIR. It describes the datasets generated during the course of the project and how the data will be produced and analysed. It also details how the data generated will be shared, disseminated and preserved.

<table> <tr> <th> </th> <th> </th> <th> Nature of the deliverable </th> </tr> <tr> <td> </td> <td> R </td> <td> Document, report </td> </tr> <tr> <td> </td> <td> DEM </td> <td> Demonstrator, pilot, prototype </td> </tr> <tr> <td> </td> <td> DEC OTHER </td> <td> Websites, patent filings, videos, etc. </td> </tr> <tr> <td> ✓ </td> <td> ORDP </td> <td> Open Research Data Pilot </td> </tr> <tr> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> Dissemination level </td> </tr> <tr> <td> ✓ </td> <td> P </td> <td> Public </td> </tr> <tr> <td> </td> <td> CO </td> <td> Confidential, only for members of the consortium (including the Commission Services) </td> </tr> <tr> <td> </td> <td> EU-RES </td> <td> Classified Information: RESTREINT UE (Commission Decision 2005/444/EC) </td> </tr> <tr> <td> </td> <td> EU-CON </td> <td> Classified Information: CONFIDENTIEL UE (Commission Decision 2005/444/EC) </td> </tr> <tr> <td> </td> <td> EU-SEC </td> <td> Classified Information: SECRET UE (Commission Decision 2005/444/EC) </td> </tr> </table>

Disclaimer

DESIR has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 731081. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

# Introduction

The DESIR project sets out to strengthen the sustainability of DARIAH and firmly establish it as a long-term leader and partner within arts and humanities communities.
By DESIR’s definition, sustainability is an evolving 6-dimensional process, divided into the following challenges: 1. Dissemination: DESIR will organise a series of dissemination events, including workshops in the US and Australia, to promote DARIAH tools and services and initiate collaborations. 2. Growth: DESIR sets out to prepare the ground for establishing DARIAH membership in six new countries: the UK, Finland, Spain, Switzerland, Czech Republic and Israel. 3. Technology: DESIR will widen the DARIAH research infrastructure in three areas, vital for DARIAH’s long-term sustainability: entity-based search, scholarly content management, visualization and text analytic services. 4. Robustness: DESIR will make DARIAH’s organizational structure and governance fit for the future and develop a detailed business plan and marketing strategy. 5. Trust: DESIR will measure the acceptance of DARIAH, especially in new communities, and define mechanisms to support trust and confidence in DARIAH. 6. Education: Through training and teaching, DESIR will promote the use of DARIAH tools and services. Funded under the Work Programme part European Research infrastructures (including e-Infrastructures), DESIR participates in the Open Research Data Pilot (ORDP). As such, this document introduces the first version of the project Data Management Plan (DMP). The DMP describes the data management life cycle for the data to be collected, processed and/or generated by the DESIR Consortium. It includes information on how the research data will be handled during and after the end of the project, what data will be collected or generated, which methodology and standards will be applied, how the data will be disseminated and shared, and how the data will be curated and preserved during and after the end of the project. The DMP is to be considered as a living document: the information contained therein will be updated following the implementation of the project and when significant changes occur.
Moreover, in a second phase, the Consortium intends to deliver a version containing recommendations to the community, specifically tailored to Digital Humanities. Research data generated and processed in DESIR will be FAIR (findable, accessible, interoperable and reusable). # What kind of data is considered in the DMP According to the _Guidelines to the Rules on Open Access to Scientific Publications and Open Access to Research Data in Horizon 2020_ (Version 3.0, 26 July 2016), research data “ _refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation._ _In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form._ ” DESIR focuses on the sustainability of the DARIAH ERIC and will develop Actions consisting primarily of accompanying measures such as standardisation, dissemination, awareness-raising and communication, networking, coordination, support services, policy dialogues and mutual learning exercises. As such, we expect that limited research data will be generated during the course of the project. # Structure of the template Each dataset will cover issues identified in the template below. The data will follow best practices defined in the _Guidelines on FAIR Data Management in Horizon 2020_ to make the DESIR research data findable, accessible, interoperable and re-usable 1 .
<table> <tr> <th> Data Summary </th> <th> • </th> <th> State the purpose of the data collection/generation </th> </tr> <tr> <td> </td> <td> • </td> <td> Explain the relation to the objectives of the project </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify the types and formats of data generated/collected </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify if existing data is being re-used (if any) </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify the origin of the data </td> </tr> <tr> <td> </td> <td> • </td> <td> State the expected size of the data (if known) </td> </tr> <tr> <td> </td> <td> • </td> <td> Outline the data utility: to whom will it be useful </td> </tr> <tr> <td> 2. FAIR Data 2.1. Making data findable, including provisions for metadata </td> <td> • • • </td> <td> Outline the discoverability of data (metadata provision) Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers? </td> </tr> <tr> <td> </td> <td> • </td> <td> Outline naming conventions used </td> </tr> <tr> <td> </td> <td> • </td> <td> Outline the approach towards search keyword </td> </tr> <tr> <td> </td> <td> • </td> <td> Outline the approach for clear versioning </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how </td> </tr> <tr> <td> 2.2 Making data openly accessible </td> <td> • </td> <td> Specify which data will be made openly available? If some data is kept closed provide rationale for doing so </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify how the data will be made available </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? 
Is it possible to include the relevant software (e.g. in open source code)? </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify where the data and associated metadata, documentation and code are deposited </td> </tr> <tr> <td> </td> <td> • </td> <td> Specify how access will be provided in case there are any restrictions </td> </tr> <tr> <td> 2.3. Making data interoperable </td> <td> * Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability. * Specify whether you will be using standard vocabulary for all data types present in your dataset, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies? </td> </tr> <tr> <td> 2.4. Increase data re-use (through clarifying licences) </td> <td> * Specify how the data will be licenced to permit the widest reuse possible * Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed * Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why * Describe data quality assurance processes * Specify the length of time for which the data will remain reusable </td> </tr> <tr> <td> 3\. Allocation of resources </td> <td> * Estimate the costs for making your data FAIR. Describe how you intend to cover these costs * Clearly identify responsibilities for data management in your project * Describe costs and potential value of long term preservation </td> </tr> <tr> <td> 4\. Data security </td> <td> • Address data recovery as well as secure storage and transfer of sensitive data </td> </tr> <tr> <td> 5\. Ethical aspects </td> <td> • To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. 
Include references and related technical aspects if not covered by the former </td> </tr> <tr> <td> 6\. Other </td> <td> • Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any) </td> </tr> </table> # Datasets description The DESIR Consortium identified the datasets that will be produced during the different phases of the project. The list is, however, to be considered indicative and datasets may be adapted, added or removed following the evolution of the project. ## Datasets 1. Source code of software developed in WP4 2. Survey conducted in WP6 <table> <tr> <th> Dataset 1: Source code of software developed in WP4 </th> </tr> <tr> <td> Data Summary </td> <td> Computer source code generated as part of the programming activities in WP4, through development activities by the project partners contributing to WP4 and through the Code Sprint planned for Summer 2018\. The efforts will follow the recommendations to encourage best practices in research software: _https://softdev4research.github.io/recommendations/_ </td> </tr> <tr> <td> 2\. FAIR Data 2.1. Making data findable, including provisions for metadata </td> <td> The code will be licenced under OSI approved licenses. When no other choice is preferred (e.g. when contributing to an existing ecosystem), the default license will be Apache-2.0 or EUPL-1.2 (In each case the decision for or against Copyleft and other implications can be readdressed.). The source code will be designed, formatted and commented according to the established standards and practices of the corresponding programming languages. Additional documentation will be provided in text-based form along with the code. The identifiers, versions, licenses and contributors will be collected according to existing open source standards. The code repositories will be registered via DataCite or Zenodo. 
</td> </tr> <tr> <td> 2.2 Making data openly accessible </td> <td> The source code will be made publicly available. The code will be licenced under OSI approved licenses. When no other choice is preferred (e.g. when contributing to an existing ecosystem), the default license will be Apache-2.0 or EUPL-1.2 (In each case the decision for or against Copyleft and other implications can be readdressed.). </td> </tr> <tr> <td> 2.3. Making data interoperable </td> <td> By using the established Github platform and adhering to each programming language’s individual software design standard, interoperability with third- party development will be ensured. </td> </tr> <tr> <td> 2.4. Increase data re-use (through clarifying licences) </td> <td> Code specifically developed within DESIR will be published on GitHub using the responsible partner's or the DARIAH ERIC's organisational account, using appropriate OSI approved licenses, see 2.2. Where existing solutions are extended, applicable platforms will be chosen on a per-case basis, and efforts will be undertaken to merge </td> </tr> <tr> <td> </td> <td> improvements back upstream. This will allow re-use and discoverability and constitutes the publication. </td> </tr> <tr> <td> 3\. Allocation of resources </td> <td> Publication of source code on GitHub is part of the development process and free of charge. Archiving of code repositories will be done by consortium members. </td> </tr> <tr> <td> 4\. Data security </td> <td> Source code will be archived in institutional repositories at the end of the project, on Zenodo and/or the DARIAH repository. </td> </tr> <tr> <td> 5\. Ethical aspects </td> <td> n/a </td> </tr> <tr> <td> 6\. Other </td> <td> n/a </td> </tr> </table> <table> <tr> <th> Dataset 2: Survey conducted in WP6 </th> </tr> <tr> <td> Data Summary </td> <td> This survey aims to identify cross-disciplinary DARIAH communities and new core groups, considering gender and diversity as main variables. 
It also aims to explore to what extent DARIAH is reaching such research communities in terms of use and access. It is expected to analyse to what extent these communities perceive DARIAH as a reliable, trustworthy and sustained infrastructure. Generated data will be the source for the empirical study of DARIAH’s usage in new communities and for defining new strategies; the data will be exported in XLS format. Original data will be collected from researchers defined within target groups (three cross-disciplinary communities and core groups: i) early career researchers, including MA and PhD students; ii) academics without permanent institutional affiliations (no reliable access to RIs); iii) academics with cross-disciplinary backgrounds and research interests (not clearly associated merely with one academic discipline)). Data size cannot be specified yet and no specific data re-use is expected within this data collection. The collected data will be mostly useful for the WP6 study. The analysed data will be, nonetheless, useful for future comparative studies. </td> </tr> <tr> <td> 2\. FAIR Data 2.1. Making data findable, including provisions for metadata </td> <td> Files will have common identifiers (respondents’ identification number and country codes) and identification numbers will be consistent across all data files. Users will be given access to contextual, multilevel and thematic data. </td> </tr> <tr> <td> 2.2 Making data openly accessible </td> <td> In accordance with data protection, only anonymous data will be available to users. No specific methods or software tools are needed to access the data to be made available. </td> </tr> <tr> <td> 2.3. Making data interoperable </td> <td> n/a </td> </tr> <tr> <td> 2.4.
Increase data re-use (through clarifying licences) </td> <td> Analysed data and final report will be published under Creative Commons Attribution 4.0 License, allowing data re-use by third parties after the end of the project. </td> </tr> <tr> <td> </td> <td> Data quality will be assured through quality assessment, including the quality and comparability of measurement instruments, the assessment of target-groups sample composition and the output quality of the survey. </td> </tr> <tr> <td> 3\. Allocation of resources </td> <td> The data collection and analysis is part of WP6 tasks within the project and no additional costs are expected. </td> </tr> <tr> <td> 4\. Data security </td> <td> Data will be archived in institutional repositories at the end of the project and/or the DARIAH repository. </td> </tr> <tr> <td> 5\. Ethical aspects </td> <td> Research data will be generated from the participation of humans through a survey. Respondents, who will be anonymous and will participate on a voluntary basis, will be fully aware of the nature and purpose of the research, what their role in it will be, and how the data they provide will be subsequently used. The template of the survey has been included in D6.1 and will be provided on request. </td> </tr> <tr> <td> 6\. Other </td> <td> n/a </td> </tr> </table>
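The survey dataset's findability rests on identifiers that stay consistent across all data files (respondents' identification numbers and country codes). A minimal sketch of what such consistent keys enable — joining anonymised files without exposing personal data — using hypothetical file contents and column names, not the actual survey schema:

```python
import csv
import io

# Hypothetical example files sharing the common identifiers described
# above (respondent ID + country code). Contents are illustrative only.
demographics = io.StringIO(
    "respondent_id,country,career_stage\n"
    "R0001,DE,early_career\n"
    "R0002,ES,cross_disciplinary\n"
)
responses = io.StringIO(
    "respondent_id,country,uses_dariah\n"
    "R0001,DE,yes\n"
    "R0002,ES,no\n"
)

def index_by_id(fh):
    """Index rows by the shared (respondent_id, country) key."""
    return {(r["respondent_id"], r["country"]): r for r in csv.DictReader(fh)}

demo = index_by_id(demographics)
resp = index_by_id(responses)

# Because the identifiers are consistent, the files are joinable while
# every record stays anonymous.
merged = {k: {**demo[k], **resp[k]} for k in demo.keys() & resp.keys()}
print(merged[("R0001", "DE")]["uses_dariah"])  # yes
```

The same join works across any number of contextual, multilevel or thematic files, as long as the identification numbers remain consistent.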
1207_VESSEDIA_731453.md
# Chapter 1 Introduction The H2020 programme has implemented a pilot action on open access to research data. VESSEDIA, as a project participating in this pilot action, is required to develop a **Data Management Plan (DMP)** . This DMP has been identified in the Description of Action (DoA) as VESSEDIA deliverable D6.2. This document is drafted according to the **“Horizon 2020 FAIR** **Data Management Plan template”** . The major aim of the DMP is to ensure that our research data is FAIR – **F** indable, **A** ccessible, **I** nteroperable, and **R** e-usable. Thus, in the DMP we aim to address data set references and names, data set descriptions, standards and metadata, data sharing and archiving and preservation (including storage and backup) on a dataset-by-dataset basis. The DMP is intended to be a **living document** . It will be periodically revised to reflect changes in the data that may be made available by the project, and to provide additional information on the datasets as this information is developed during the specification of the experimental phases. All partners have contributed to the document, particularly through the use of a project-wide questionnaire. Since each partner will generate and use data, the document is organized with one section per partner. Each section is structured following the **6-point structure** described thereafter: 1. **Data summary** gives a description of the data. 2. **FAIR data** ensures that data are **F** indable, **A** ccessible, **I** nteroperable, and **R** e-usable. 1. **Making data findable, including provisions for metadata** describes how the data are made findable. 2. **Making data openly accessible** explains how the data are made openly accessible. 3. **Making data interoperable** assesses the interoperability of the data. 4. **Increase data re-use (through clarifying licences)** discusses how the reusability of the data is ensured. 3.
**Allocation of resources** refers to costs and other resources for making the data FAIR. 4. **Data security** addresses data recovery as well as secure storage and transfer of sensitive data. 5. **Ethical aspects** clarifies whether there are any ethical or legal issues that can have an impact on data sharing. In the following, we will address the methodology that was used to set up the deliverable. Furthermore, we will provide a Data summary, followed by a summary of the use of FAIR data in the VESSEDIA project. Finally, allocation of resources, management of data security, as well as ethical aspects will be elaborated. # Chapter 2 Methodology In order to compile the data management plan, a questionnaire was first elaborated, covering the main questions that need to be answered in the template provided by the European Commission. In a second phase, each project partner responded to the questionnaire, filling it with as much detail as possible at this stage of the project. Completed questionnaires were stored for analysis and traceability in the project’s SVN repository. In a third phase, the Data Management Plan was created as a synthesis of the questionnaire results, attempting to take advantage of commonalities between responses, in order to provide a simple view of data management procedures within the consortium. Further revisions of the document will be based on updates to partner questionnaires. Therefore, the DMP will be updated at least by the mid-term and final review to be able to fine-tune it to the data generated and the uses identified by the consortium. In addition, a confidential index of datasets will be created and maintained in the project when the datasets are created. The DMP itself is a confidential document. Therefore, the information about the datasets will only be provided to the EU and the reviewers. The VESSEDIA project will consider open licenses and open availability for the datasets.
The reasons for not offering open access will be documented in the partner questionnaires and in the appendix describing the datasets. # Chapter 3 Data Summary ## 3.1 Generalities VESSEDIA is a research project that produces mainly data through **Research and software Development (R&D)** . The partners will use background R&D data, such as original research articles and other projects’ reports and tools, to do research, and produce new data of various kinds, including reports and articles, source and binary code (mainly for x86_64 platforms running Linux, MS Windows and MacOS X, as well as IoT target platforms), archives (containing various kinds of files), log files (containing execution traces and results, proof traces, intermediate results, etc.), text files (containing documentation, traces, etc.), script files (e.g. with compilation, installation and execution procedures), etc. **Various tools will be used and developed** by the project: * Software development tools (commonly called CASE tools), * Software V&V tools (for the testing and analysis of code and models), * Modelling tools (for the development of software models), * Documentation writing tools (such as text editors), * Configuration management tools (mainly GIT, SVN and Zenodo), * Workflow management tools (often internal to partner organisations), * etc. The second category of tools is the main purpose of the project on which the partners will focus. **The data will be formatted** as much as possible using standard format types (and respective naming conventions): * Reports are mostly in HTML, Markdown, Office Open XML, OpenDocument, and LaTeX format, and therefore files have the corresponding extensions (.docx, .pptx, .xlsx, .tex, .html, .md, .pdf, etc.). * Source files of tools are plain text files using standard naming conventions corresponding to the programming languages and computing platforms used: .c, .cpp, .ml, .java, bash, sh, perl, .py, etc. Some text documentation (e.g.
README) and script files (e.g. Makefile) are often distributed with these source files (they have no specific extension). These lists are not exhaustive. **Documentation** is produced alongside research activities, either when new results are found or when planned deliverables need to be produced. Metadata, as stated above, are automatically generated by the tools that serve to prepare the documentation (text editors, MS Office, LibreOffice, LaTeX, etc.), or by the **software development tools** (gcc, MS Visual C++, OCaml, etc.). The **metadata types** for documentation have been chosen by the project among the most widespread standards. The metadata types for software files are imposed by the development tools used on Linux, MS Windows and MacOS computing platforms. The size of these kinds of data is neither known nor bounded. The data will be **useful to several parties** : * The Computer Science community, especially to those interested in Formal Methods and Testing, Embedded systems, and IoT. This includes all possible kinds of organisations, ranging from academia to SMEs and industry, mainly those producing safety and security critical software. * The project members as well as members of other similar research projects (either national or European) for cross-fertilisation purposes. **Their purpose** is manifold and includes: * Research * Training: users, students, etc. * Consultancy * Applications developments: tools will be used on various kinds of use-cases to demonstrate their effectiveness and usage on real cases * Tools developments: new and improved tools * Certification: data is useful for CC evaluation and certification purposes. ## 3.2 Partners specificities Below are the specificities of the project partners in terms of data types, collection and generation. Accessibility of data will further be dealt with in section 4.2. **TEC:** As the project Coordinator, Technikon will generate data for the Quality Management of the project.
This data will contain text, tables, graphs, electronic measurements (e.g. SRAM start-up behaviour) and possibly also some code. Data formats are chosen accordingly, to make the data shareable and accessible in the long term. Data will be reused from scientific publications, conclusions from conference papers, technical data sheets from manufacturers and from other research projects, such as UNIQUE, HECTOR, certMILS, etc. **CEA:** The Frama-C V&V platform, the Papyrus Eclipse project, the Diversity tool and the 6LowPan use-case are part of the background material. **DA:** DA is building its use-cases and possibly complementary analysis tools. DA will use an internal development methodology for this purpose. Its use-case will be partly available publicly, possibly on the project’s website. Confidential parts will not be available. **SLAB:** SLAB data consists of reports and evaluations and will be publicly accessible. **FOKUS:** FOKUS data consists mainly of reports, specifications, some source code, and scientific articles that are produced by original research activities of the project. **INRIA:** The Contiki source code is part of the background material; it is available publicly on GitHub. **TUAS:** The Economic rationale, as a tool for supporting decision making, can be used by a wide range of products/systems' owners/designers/builders, and is not limited to products/systems close in nature to the VESSEDIA pool of use-cases. So the Economic rationale has a lot of potential for being integrated and re-used. **KU Leuven:** The VeriFast V&V tool is part of the background material. **FD:** FD will be using the SPEC benchmark for testing the efficiency of its tools. **AMO:** AMO will use an internal development methodology. # Chapter 4 FAIR Data In this chapter we describe what is done to make the data FAIR ( **F** indable, **A** ccessible, **I** nteroperable, and **R** e-usable).
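The extension-based identification relied on throughout the project (the report, source and script suffixes listed in Chapter 3) can be exercised with standard tooling. A minimal sketch, using hypothetical file names rather than real project files:

```python
import mimetypes
from pathlib import Path

# Minimal sketch: classify project files by their standard suffixes.
# The file names below are hypothetical examples, not real project data.
files = ["D6.2_DMP.pdf", "report.docx", "analysis.py", "notes.md", "main.c"]

for name in files:
    mime, _ = mimetypes.guess_type(name)  # extension-based lookup
    print(f"{name}: suffix={Path(name).suffix}, type={mime or 'unknown'}")
```

Operating systems rely on the same extension-to-type association to pick a default application, which is exactly the benefit the naming conventions above aim for.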
## 4.1 Making data findable, including provisions for metadata ### 4.1.1 Generalities The project’s data is discoverable. As described above, the standards chosen for identifying data correspond to naming conventions (file extensions) at the data file level. Such files are generally compact, easy to manage and can be managed by standard applications, as follows: * HTML files (suffixes .html, .htm) can be opened by any web browser * Markdown files (suffix .md) can be handled by any text editor (e.g. emacs) * Microsoft Office documents (suffixes .docx, .xlsx, .pptx, etc.) must be handled by MS Office * Microsoft Project documents (suffix .mpp) can be edited using MS Project or any compatible tool on Linux * OpenOffice documents (suffixes .odt, .xml, .rtf, etc.) can be handled by OpenOffice and LibreOffice * LaTeX format (suffix .tex) must be handled by LaTeX * Acrobat PDF format (suffix .pdf) can be opened by Adobe Acrobat or any other PDF reader. * Source files of tools (suffixes .h, .c, .cpp, .ml, .mli, bash, sh, .bat, .perl, .py, etc.) must be handled by the development tools (e.g. MS Visual C++, gnu tools, etc.) or by any text editor (e.g. emacs) * Assembly files (suffixes .s, .asm), mainly for Intel x86 and ARM architectures, will be handled by assemblers or assembly code analysers. * Proof script files must be handled by proof tools (such as Qed, Why3, PVS, AltErgo, Z3 and Coq) * Binary files are mostly the result of the development tool chains (suffixes .exe, .com, .o, .so, .a, .dll, .lib, etc.) and can be handled using some debugger (e.g. gdb) or profiling tools or binary code analysers. These file naming conventions have the benefit of allowing Operating Systems to identify them easily as well as associating some tool(s) for opening them automatically. This association can be changed easily in each OS. ### 4.1.2 Partners specificities Below are the specificities for each project partner: **TEC:** None.
**CEA:** Versions of Frama-C will be named after an element of the periodic table of chemical elements and a version number. Papyrus follows the Eclipse release train and has a persistent identifier according to the release. Plugins for Papyrus, developed within VESSEDIA, may have their own persistent identifier but it is loosely correlated to the Papyrus identifier. Diversity’s components will be made available at different moments of the project according to their maturity. They will have an incremental persistent identifier. **DA:** Data will be available at the delivery date in the public deliverable reports. No specific identifiers are used. **SLAB:** None. **FOKUS:** The report on formal methods will have a version number that also refers to the version number of Frama-C. **INRIA:** None. **TUAS:** Online e-training materials will also be generated by TUAS. Notice that the volume of data of TUAS is hard to assess at this point. For example, the ISO standard documents are large documents (approximately 80 pages). **KU Leuven:** Platform/software version information is used too. **FD:** None. **AMO:** None. ## 4.2 Making data openly accessible ### 4.2.1 Generalities The major part of the project data is accessible. The project produces whenever possible: 1. Software components with an open-source licence (e.g. LGPL, Berkeley, GPL and MIT), and 2. Public documents (such as deliverables or white papers or thesis reports). When such components are deemed to be of sufficiently high quality, open source software components can be downloaded from a public web site, from the project web site or from GitLab, or through some Linux package manager (such as Synaptic on Ubuntu). Other software components can be made available to partners in a prototype state, depending on the author’s decision. They may then be made available on the internal and/or external project web site or may be transmitted directly between partners.
Finally, when data is classified as confidential, it will not be available outside of the project. Data embedded in some public document is public data and becomes visible. In order to access public data, web browsers are the most suitable tools. Users may also use, when necessary, some FTP, GIT or SVN client software (e.g. FileZilla, RapidSVN, etc.) to transfer the data to a local computer. In terms of locations, potential users can find the project’s public data in the following places: * Documents, dissemination of reports, open source prototype tools, test cases, white papers and demonstration videos can be found on the project’s web site: _https://vessedia.eu_ . Additionally, a contact page of the project website will allow interested persons to contact the project coordination team through a contact form * In the case of conference papers, they can be obtained as part of conference proceedings, if their organisers decide to let the material be publicly available. Otherwise, they can be obtained within the paper versions of these proceedings. ### 4.2.2 Partners’ specificities Here are some specificities for each project partner: **TEC:** It is possible that some public data will also become available through a thesis developed within the scope of the VESSEDIA project. The thesis may be closed for a few years after its finalization. Any relevant information (research data like SRAM readouts or statistical evaluations) will be shared with the partners internally via the project subversion repository ( _https://vessedia.technikon.com_ ). Other information, which is also relevant for the general public, will be shared via the project website, e.g. the “results” section: https://vessedia.eu/results. **CEA:** Frama-C can be found on the public web site _http://www.frama-c.com_ , in the repositories of the major Linux distributions such as Ubuntu, as well as in the repository of the OCaml package manager ( _https://opam.ocaml.org_ ).
Papyrus is an Eclipse project and is available at _http://www.eclipse.org/papyrus_ . The source code of Papyrus is available in an Eclipse repository, available on the Papyrus website. Diversity is distributed open source as an Eclipse project and can be found at the public website of EFM _https://projects.eclipse.org/projects/modeling.efm_ and associated Eclipse repositories. Sharing other components that are not part of the distribution will be done directly by CEA according to some agreements. The 6LowPan use-case provided by CEA is confidential and its source code will not be made public. However, it can be transferred to project partners after an NDA has been signed with CEA. Some further data may come from PhD students, master’s degree or postdoc theses. **DA:** None. **SLAB:** None. **FOKUS:** Data will be available and useful for the wider public but is not usable for non-specialists. **INRIA:** The verified Contiki OS source code will be provided on a dedicated, publicly hosted repository. **TUAS:** Data is discoverable and accessible to a certain degree. TUAS will share data with project partners and also with students of its university. **KU Leuven:** Versions of VeriFast, together with its documentation, will be public and available on the author’s server at _https://people.cs.kuleuven.be/~bart.jacobs/verifast/_ **FD:** Data produced by FD will be available to people inside the Program Analysis field, and more generally to computer scientists. **AMO:** Most AMO data is public but some reports are confidential and will thus not be available outside of the project. **All:** To guarantee open access to scientific publications and research data, several repositories were selected (see above). They are convenient to access and also easy to use. Those repositories allow easy sharing of the long tail of small research results in a wide variety of formats including text, spreadsheets, audio, video, and images across all fields of science.
Further, each uploaded publication and dataset receives a persistent identifier (DOI), which ensures long-term preservation. ## 4.3 Making data interoperable The project’s reports are trivially interoperable as they can be edited and reused with any compatible editor (see above). The software produced by the project is not easily interoperable with other software tools, as reuse generally requires the same tool chain. Software source code can be reused by installing the same development tools used for its production, in order to read, edit and recompile it. Verification data (predicates, lemmas, axioms, proofs, proof scripts, etc.) is generally added to the source code and can only be processed by the same specialised tools as used in the project (Frama-C, VeriFast, Flinder, Why3, etc.), and is thus not interoperable with other tools. * Frama-C state and script files can only be read by Frama-C. * Software models stored in XMI (XML Metadata Interchange) format are also specialised data, but can be exchanged with other tools accepting XML. XMI data is compliant with EMF-UML2 1 , an implementation of UML2 by EMF. * Diversity has its own language, and the UML format can be used to exchange data. Other data in text format, such as log or trace files, can be edited and reused. ## 4.4 Increase data re-use (through clarifying licences) Data reuse is an issue specific to each partner who produces data. Below is described how each partner plans to manage data reuse within the VESSEDIA project: **TEC:** Management and project quality data do not need any licence and there is no restriction on the reuse of third-party data, nor any restrictions on sharing it. The quality of the data produced is ensured by internal checks such as peer reviews. 
**CEA:** * **Licencing issues** * In terms of licensing, an open source (LGPL 2.1) version of Frama-C has been released since the beginning of the VESSEDIA project, and new versions will be published during the course of the project, which will incorporate extensions made within the project. The latest release, version 15 - Phosphorus, was published on May 31, 2017. New components that will be developed for the Frama-C platform and distributed will get an open-source licence whenever possible. * Papyrus and all of its plugins are open source and distributed under the Eclipse Public License 1.0 (EPL 1.0). Papyrus implements open-source standards provided by the Object Management Group (OMG). Within VESSEDIA, new plugins developed for Papyrus will continue to be open-source and licensed under EPL 1.0. * Diversity follows an open source process as an Eclipse project, the “Eclipse Formal Modelling Project” (EFM for short), licensed under the Eclipse Public License 1.0 (EPL 1.0): a first open source version of Diversity will be released at the beginning of the VESSEDIA project. New versions with upgrades will be released during the project in the same context. New formal analysis modules and gateways of Diversity that will be developed will be licenced under EPL and distributed through the EFM project whenever possible. * The CEA 6LowPan application is confidential and will only be shared within the project between partners that have signed an NDA with CEA. * **Sharing** * Frama-C will be shared only when CEA considers it mature enough, by means of new public versions (as for version 15 - Phosphorus). Some new components may be restricted when they are not part of the Frama-C distribution or when they are subject to an agreement with a third party. * New plugins developed for Papyrus within VESSEDIA will be shared only when CEA considers them mature enough. 
Some new components may be restricted if they are subject to an agreement with a third party. * Diversity will be shared only when CEA considers it mature enough, by means of new public versions (under the EFM project, EPL licence). Some new components may be restricted if they are not part of the Diversity distribution or if they are subject to an agreement with a third party. * When data is part of an agreement with another company, access will be defined specifically by both parties. * Otherwise, if data is not part of the Frama-C distribution, it can be accessed through particular agreements with CEA. * In the case of Papyrus, if the data is not part of the Papyrus distribution, it can be accessed through particular agreements with CEA. * For Diversity: when data is part of an agreement with another company, access will be defined specifically by both parties. Otherwise, if data is not part of the Diversity distribution, it can be accessed through particular agreements with CEA. * **Data quality** is ensured at CEA in several ways: articles and deliverables are peer reviewed internally within the project in order to ensure a high quality standard; software components are intensively tested and peer reviewed within the development team (whose members understand the internals of the tools and master the programming languages used). Furthermore, bug tracking systems and discussion lists ensure that users can report bugs, discuss issues and contact the development team to solve problems. See https://frama-c.com/support.html for instance. **DA:** Parts of the use case shared on the public project site, and public reports (project deliverables), unless duly mentioned, have no particular licence; there is therefore no restriction on the reuse and sharing of these unlicensed data. **SLAB:** Same as for DA. **FOKUS:** FOKUS will continue updating a report on formal methods that is licenced under _https://creativecommons.org/licenses/by-nc-sa/3.0_ . 
There are a few restrictions, see _https://creativecommons.org/licenses/by-nc-sa/3.0/_ . Potential users can find reports and data on the VESSEDIA website and on the website of Fraunhofer FOKUS _https://www.fokus.fraunhofer.de_ . The manual “ACSL by Example” is accessible at _https://gitlab.fokus.fraunhofer.de/verification/open-acslbyexample.git_ **INRIA:** Concerning Contiki OS, the new source code will be part of Contiki and will adopt the same BSD license. There is no restriction on the reuse of third-party data, nor any restrictions on sharing it. New versions are available on the _http://www.contiki-os.com_ main site. The quality of Contiki is ensured by peer review and checks. New distributions of the OS are intensively tested before release. Bugs are centralized through the Contiki support (at _http://www.contiki-os.org/community.html_ ) in order to be processed. **TUAS:** ISO/IEC will licence only the final version of the standard document. All the draft versions of ISO/IEC standards will stay available for any expert willing to give comments through a national body or a liaison organisation. **KU Leuven:** KUL uses a permissive open source licence (MIT) and a permissive Creative Commons licence. **FD:** FD data will be open-source. The precise licence is still to be determined. **AMO:** Most AMO reports are public, but some are confidential and will thus not be available outside of the project. There are no restrictions on the reuse, sharing or access of the public data. # Chapter 5 Allocation of resources ## 5.1 Generalities The costs for making the data FAIR are different for each partner and are discussed below. In terms of **data management** , each partner is responsible for handling its internal data: * The management of data shared in the project, such as on the project’s SVN, is done by TEC and the partners. The administrator is TEC, who established rules for partners to use this facility to store, share, modify and query data. 
Data volumes are not limited so far and are expected to remain reasonable. * Data made public on the project’s web site is a shared responsibility, even though the management of this server is done by TEC. * Other published data, such as articles, white papers and tools, are the responsibility of the authoring partners as well as of the entire project, which approves such data before publication. ## 5.2 Partners’ specificities In terms of **long term preservation** , each partner has a different policy. **TEC:** Costs are incurred for server provision and maintenance (expected costs of about 2000€). TEC plans to retain the project’s data for 3 years after the project ends. **CEA:** The Frama-C website is hosted on a rented server. Costs for renting the server are part of the lab’s operating costs, and are shared among all projects and industrial contracts that use Frama-C. The same applies to Papyrus. There are no costs for Diversity. Components of Frama-C and Papyrus that are not released will be kept until mature enough, unless a cooperation agreement is signed with another organisation that makes use of them. Frama-C as well as its public plug-ins are archived on the public web site _http://www.frama-c.com._ Papyrus and its plugins are available at _http://www.eclipse.org/papyrus_ and the source code repository is available on the website. For Diversity, data that is not released will be kept until mature enough, unless a cooperation agreement is signed with another organisation that makes use of it. Diversity source code is available at the following public repositories: _http://git.eclipse.org/c/efm/org.eclipse.efmmodeling.git_ (for the GUI and modelling part) and _http://git.eclipse.org/c/efm/org.eclipse.efmsymbex.git_ (for the formal analysis modules). In terms of data preservation, tools will be kept for several years (including released versions and tool prototypes). 
Released tools survive for years, whereas prototypes decay unless researchers take them up in the scope of a new project. This is the case for Frama-C, Papyrus and Diversity. **DA:** There are no extra costs as data is stored on the project’s sites. **SLAB:** There are no extra costs as data is stored on the project’s sites. **FOKUS:** No cost, as data is hosted by Fraunhofer FOKUS. Maintenance of the server is guaranteed by the basic funding of Fraunhofer FOKUS. There are no plans to limit the availability. **INRIA:** No costs are incurred to make data FAIR. The data will be publicly hosted on GitHub on a free account and retained as long as GitHub provides free public repositories. Application-specific data as well as non open-source modules will be retained. **TUAS:** No costs are incurred to make data FAIR. TUAS data will be discoverable and accessible to a certain degree. When this is the case, it will be usable beyond the original purpose; e.g. the project’s ISO standard will enable building new ISO standards, new tools, etc. **KU Leuven:** KUL data will be kept indefinitely. KUL will use Zenodo for storage. The data is already prepared for review by the reviewers of the scientific publications. The extra effort for preservation is expected to be minimal. **FD:** There are no extra costs as data is stored on the project’s sites. **AMO:** There are no extra costs as data is stored on the project’s sites. # Chapter 6 Data Security Data security concerns the secure handling, storage, loss prevention, recovery and transfer of project data, especially sensitive data. Data security is managed differently by each partner, as follows: **TEC:** Data is secured by saving it regularly (daily) on an external server, which also prevents losses. Data can be recovered by retrieving the latest copy from the external server (at most 24 hours of work are lost). Sensitive data is handled inside the SVN repositories. 
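TEC's daily-copy scheme can be sketched as a small script. All paths, the backup location and the file contents below are illustrative assumptions, not TEC's actual setup; a real deployment would mirror to a remote server (typically with a tool such as rsync) rather than to a local directory:

```shell
# Hedged sketch of a daily-copy backup and recovery cycle.
# Everything here is hypothetical stand-in data.
set -e
WORKDIR=$(mktemp -d)
DATA="$WORKDIR/project-data"        # working copy
BACKUP="$WORKDIR/external-server"   # stand-in for the external server
mkdir -p "$DATA" "$BACKUP"
echo "deliverable draft" > "$DATA/D1.1.txt"

# Daily job: copy the working data to the backup location.
cp -R "$DATA/." "$BACKUP/"

# Recovery after loss: restore the latest copy
# (at most 24 hours of work are lost).
rm -rf "$DATA"
mkdir -p "$DATA"
cp -R "$BACKUP/." "$DATA/"
cat "$DATA/D1.1.txt"
```

The key property is that the restore step only depends on the most recent backup copy, which bounds the possible loss to one backup interval.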
**CEA:** Frama-C related data is stored on a server which runs a regularly updated Linux distribution and is only accessible through SSH by a few administrators. Data on the server is part of a git repository that is mirrored in several places, physically distinct from the main server. Data recovery can be done by simply cloning from one of the mirror repositories. The 6LowPan application is sensitive data; it is stored on CEA internal servers and its security is compliant with the CEA security policies. **DA & AMO:** The security of DA publicly available data relies on the project’s server managed by TEC. Data is duplicated into the DA internal repository, including official deliverables (following ISO 9001 quality management systems - requirements). Sensitive data will be handled in accordance with the management of the project’s public web site. **SLAB:** The security of SLAB data relies on the project’s server managed by TEC. Data is backed up into an internal SLAB repository. Sensitive emails are encrypted with PGP. Secure data can be stored in an offline network. **FOKUS:** Security will be maintained by Fraunhofer’s IT infrastructure. Backups are maintained. Git repositories can simply be fully cloned, making recovery from a clone very easy. **INRIA:** The data will be as secure as GitHub allows. Loss of data is prevented by GitHub, which offers cutting-edge redundancy services. All contributors have copies of the repository on their workstations. Data recovery can be done by simply cloning from one of the mirror repositories. There is no sensitive data. **TUAS:** Internal TUAS security measures enable data to be stored securely and avoid any loss of data. Sensitive data will be handled according to the national and EU-wide privacy regulations in effect. **KU Leuven:** Security and recovery procedures are dealt with by Zenodo. There is no sensitive data. 
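The mirror-and-clone recovery scheme described for CEA can be illustrated end to end with local repositories; the repository names and the file below are hypothetical stand-ins for the actual servers:

```shell
# Hedged local sketch of mirroring a git repository and recovering
# from the mirror after the main server is lost.
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Main server: a repository with one commit.
git init -q main-server
cd main-server
git config user.email dev@example.org
git config user.name "Dev"
echo "source" > main.c
git add main.c
git commit -qm "initial import"
cd ..

# Mirror kept in a physically distinct place (here: another directory).
git clone -q --mirror main-server mirror.git

# Disaster: the main server is lost.
rm -rf main-server

# Recovery is a plain clone from the mirror.
git clone -q mirror.git recovered
cat recovered/main.c
```

Because `--mirror` copies all refs, the recovered clone carries the full history, not just the latest snapshot.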
**FD:** The security of FD data relies on the project’s server managed by TEC as well as on the Git servers. # Chapter 7 Ethical Aspects To the best of our knowledge, neither the project, nor its later use, nor other planned exploitation of the results has an impact on ethics; the project complies with the general social rules protecting citizens’ rights over their own data and the purpose of their communications. The partners will conform to their current national legislation and regulations. # Chapter 8 Summary and Conclusion The Data Management Plan of VESSEDIA describes the activities of the partners related to datasets and is a key element of good data management. In the above sections we described the methods used for these activities and provided an extensive definition of the data and their formats. We described how data is made FAIR in the VESSEDIA project as well as the choices made by the partners for the allocation of resources and data security. We have structured all these items into Generalities and Partners’ Specificities. This document contains a summary of all the information available as of 9 June 2017. All partners intend to create data and make it available within the consortium. The DMP needs to be updated in the course of the project whenever significant changes arise, such as new data, changes in consortium policies (e.g. new innovation potential, a decision to file for a patent), and changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving). This will be done within the periodic reports in M18 and M36. The partners’ questionnaires have helped to understand how each partner uses and manages data. Through the questionnaire, we have also been able to establish that a set of common practices and tools already exists to manage data, allowing data to be shared and reused easily.
# Executive Summary This deliverable aims to present a plan for the data management, collection, generation, storage and preservation related to DECIDE activities. In this action, we envision five different types of data: data related to the use cases, data related to the meta-analysis to be done in the social-sciences tasks, data coming from publications, public deliverables and open source software. The document presents, following the EC template [1], how these different types of data will be collected, who the main beneficiaries are, how DECIDE will store and manage them, and whether the project will make them accessible, findable and re-usable. The text continues with the resources foreseen for this openness, and finishes with the security and ethical aspects that will be taken into consideration in the context of DECIDE. This plan is the first version of the data management plan, which will be updated in subsequent versions (M18 and M36) as part of the Technical Reports, having as input the work carried out in the use cases (WP6), the technical work packages (WP2 – WP5) and the dissemination activities (WP8). # Introduction ## About this deliverable This deliverable focuses on the management of the data in DECIDE. In DECIDE there will be two different data strands. The first strand relates to the publications generated as part of the research activities, and the second strand relates to the data collected from the use cases, which will be used as part of the implementation of the different key results established in the project. ## Document structure The document follows the established H2020 template for a Data Management Plan (DMP) [1]. Section 2 presents a summary of the purpose of the data collection and generation in the case of DECIDE. Section 3 explains how the data and metadata will be made FAIR, and thus accessible, findable and reusable. 
Section 4 briefly explains how the financial resources for this openness are envisioned, at this stage, to be allocated. Sections 5 and 6 focus on the security and ethical aspects respectively. Section 7 presents the conclusions and future work. # Data Summary DECIDE aims to create a set of tools to design, deploy and operate multi-cloud aware applications in an ecosystem of reliable, interoperable and legally compliant services. DECIDE will not generate any data beyond those of the public deliverables and open access publications. DECIDE will use data in the use cases. AIMES’ use case with its clinical trials application may seem especially relevant, but it should be noted that all data used shall be fictional. To realize the main goal of the project, DECIDE will develop the following Key Results (KR): * **KR1: Multi-cloud native applications DevOps framework:** This Key Result integrates KR2–KR5, explained next, in addition to the integration and extension of existing tools in the Open Source (OS) communities covering development, continuous integration (CI), continuous quality (CQ) and continuous delivery (CD). * **KR2: DECIDE ARCHITECT:** ARCHITECT will provide architectural patterns and modelling practices for the implementation, optimization and deployment of multi-cloud native applications. Apart from the theoretical description of the patterns and explanations of how they can be implemented, DECIDE ARCHITECT will provide a supporting tool, with suggestions on which pattern is to be applied and in which order. * **KR3: DECIDE OPTIMUS:** DECIDE will address multi-cloud deployment simulations as well. DECIDE OPTIMUS aims to simulate the behaviour, in stressful conditions, of the profiled and classified components of a multi-cloud native application deployed on multiple CSPs and using multiple cloud services, so as to provide the most adequate candidate deployment topologies for that application, based on a predefined prioritized set of requirements (e.g. 
legal compliance, performance and cost vs. cost, security and legal awareness) defined by the developer. The candidate topologies will be calculated by making use of big data optimization algorithms such as Dandelion codes, genetic algorithms, or Harmony search. * **KR4: Advanced Cloud Service (meta-) intermediator (ACSmI):** ACSmI will provide the means to assess continuously, through real-time verification, the fulfilment by the cloud services of their non-functional properties and their legal compliance. ACSmI will also provide a cloud services store where companies and developers across Europe can easily access centrally negotiated deals of compliant and accredited cloud services for applications developed by the software sector. * **KR5: DECIDE ADAPT:** ADAPT is DECIDE’s self-adaptation tool for multi-cloud native applications. It is a software tool to deploy, monitor and (semi-)automatically self-adapt multi-cloud native applications. Out of all the Key Results envisioned for DECIDE, the one that is most data-related is the ACSmI (KR4). In the timeframe of the project, the ACSmI will use fictional user data to contract the services from the CSPs and it will provide encryption techniques for login-related information. The ACSmI will be the DECIDE tool involved in the contracting of the CSP services, but the data relevant for the execution of the use cases will not be stored in the ACSmI but rather in the services offered by the selected CSPs, and are therefore external to DECIDE’s tools. Following DECIDE’s vision and goal of contracting only legally compliant services, the ACSmI will only contract CSP services from CSPs that state that they comply with the laws and regulations relevant for the domain of DECIDE’s use cases. This is especially relevant for the case of the AIMES study. In DECIDE two distinct environments are envisioned, namely an integration environment and a production environment. 
The integration environment will be deployed at Innovati’s DevOps infrastructure and will include the different components developed by the technological partners. For testing purposes, these partners will use synthetic, fictional data or ‘persona’ data, but never real data, anonymized or not, coming from the use cases. The production environment’s final location has not yet been decided at this stage. Different types of data will be collected and generated in the context of the project. These can be summarized as follows: 1. Data related to the execution of the use cases, testing KR1–KR5. 2. Data related to the ACSmI 3. Data related to scientific publications. 4. DECIDE public deliverables. 5. DECIDE Open Source Software ## Data related to the use cases The aim of this section is to establish what the purpose of the collection and generation of data within DECIDE is, what the sources of this data are, where relevant, and how this is important to realize the Key Results of DECIDE. Use cases are a key pillar of DECIDE as they will validate the work (in WP6) to be performed in the social and technological work packages (WP2-WP5). Next we describe the data available at this stage of the project. ### AIMES With respect to the AIMES use case, the following considerations with respect to data need to be stated: * No personal or identifiable data will be used as part of the AIMES use case for the DECIDE project * The streamline use case and other eHealth services will not use, host or process patient-identifiable data as part of DECIDE * Any eHealth-related services which need to be accessed as part of demonstrating the viability and usability of the tools will require the creation of a separate instance which is entirely separate from any service which hosts sensitive/personal data. In the event data is required to test the DECIDE tools, this data will be ‘fake’ and randomly generated. 
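The 'fake', randomly generated test data mentioned above could be produced by a trivial script. The field names, record count and value scheme here are pure illustration, not the actual AIMES schema, and the deterministic formulas merely stand in for a random generator:

```shell
# Hedged sketch: generate a small CSV of fake, non-identifiable
# test records. Field names and values are hypothetical; in a real
# generator the values would be drawn at random.
set -e
OUT=$(mktemp)
echo "record_id,trial_arm,measurement" > "$OUT"
for i in 1 2 3 4 5; do
  echo "rec-$i,arm-$((i % 2)),$((i * 17 % 100))" >> "$OUT"
done
cat "$OUT"   # header plus 5 fake rows
```

Because no field is derived from a real person, records like these can circulate freely between the testing environments without anonymization concerns.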
While the AIMES use case does not involve the processing of personal data as a constitutive part, it can never be completely excluded that some accidental processing of personal data might take place, especially in relation to the use case partner’s own employees. Any such processing shall however be minor and will take place in accordance with applicable law. ### ARSYS Data used and stored in the ARSYS use case are typically not personal, but rather non-personal data concerning the business continuity of ARSYS. While the ARSYS use case does not involve the processing of personal data as a constitutive part, it can never be completely excluded that some accidental processing of personal data might take place, especially in relation to the use case partner’s own employees. Any such processing shall however be minor and will take place in accordance with applicable law. ### INNOVATI INNOVATI’s data needed for the execution of the use case is fictional, used only for testing purposes, aiming to simulate the behaviour of the system in production. INNOVATI stores only fictional data for the use case. While the INNOVATI use case does not involve the processing of personal data as a constitutive part, it can never be completely excluded that some accidental processing of personal data might take place, especially in relation to the use case partner’s own employees. Any such processing shall however be minor and will take place in accordance with applicable law. ## Data related to scientific publications DECIDE will publish scientific publications in conferences and journals as part of the planned dissemination activities. Following the EC Mandate on Open Access [2], DECIDE adheres to the Open Access policy, choosing the most appropriate route for each case. 
Whenever possible, DECIDE favours the ‘green’ open access route, in which the published article or the final peer-reviewed manuscript is deposited in an online repository, before, at the same time as, or after publication, ensuring that the embargo period requested by certain publishers has elapsed. The data related to the scientific publications will be made available as accessible pdf files. The metadata to be used will be compliant with the format requested by OpenAIRE as well as with that of the repository where the papers are to be deposited, so as to ease indexing. DECIDE’s partners will use Zenodo [3] for their joint publications. Some institutions, like TECNALIA, have developed their own OpenAIRE-compliant repositories, to which TECNALIA researchers have to upload their contributions. When TECNALIA’s researchers do so, these publications are automatically indexed in OpenAIRE. ## DECIDE public deliverables All information and material related to the public, such as public deliverables, brochures, posters etc., will be freely available on the project website in the form of accessible pdf files. When the IPR of foreground knowledge needs to be protected, the corresponding disclosures will be published. All deliverables include a set of keywords and a brief description, which are meant to facilitate the indexing and search of the deliverables in search engines. The keywords in each deliverable aim to stress the main topics addressed in the document, be it a report or a software-related document. 
The audience of the public deliverables of DECIDE ranges from general audiences, interested in the activities performed in the project, to more specialized audiences such as developers and operators of multi-cloud applications or those who wish to learn about the benefits of DECIDE through the experiences gathered through the pilots. ## DECIDE Open Source Software DECIDE will develop the ICT tools mentioned above. The source code will be released, whenever the IPR of the partners is not breached, under a friendly open source licensing schema still to be decided as part of the exploitation activities. DECIDE tools will be developed in a variety of programming languages but deployed using a container-based approach following a micro-services [4] architecture. The size of the source code, the readme files, the user manual and technical specifications as well as the docker scripts cannot be known at the moment. The open source software is aimed at developers and operators of multi-cloud native applications. # Fair Data This section focuses on the feasibility and appropriateness of making data findable, openly accessible, interoperable and reusable in the context of DECIDE. ## Data related to the use cases At this stage, for all use cases, neither the collected nor the generated data, anonymized or fictional, are envisioned to be made openly accessible. In principle, if needed, all data collected, stored and processed will be fictional and treated as strictly confidential, and kept for a specific period of time as stated on the consent form. This time period shall be no longer than necessary to achieve the aims of the scenario and to validate the project objectives; after this point, the data will be destroyed as required. ### AIMES AIMES’ use case data will not be made accessible, and thus not ‘FAIR’ 1 . ### ARSYS ARSYS’ use case data will not be made accessible, and thus not ‘FAIR’. 
### INNOVATI INNOVATI’s use case data will not be made accessible, and thus not ‘FAIR’. ## Data related to scientific publications The project will favour, whenever possible, the ‘green’ open access route, in which the published article or the final peer-reviewed manuscript is deposited in an online repository, before, at the same time as, or after publication, ensuring that the embargo period requested by certain publishers has elapsed. The Consortium will ensure open access to the publication within a maximum of six months. DECIDE partners are free to choose the repository where they will deposit their publications, although OpenAIRE-compliant and OpenAIRE-indexed repositories such as Zenodo [3] will be favoured. The partner TECNALIA will use its own repository, already indexed by OpenAIRE. For the scientific publications, a persistent identifier will be provided when uploading the publications to the selected repository or repositories. ## Data related to deliverables For the project’s publications on the website, the naming convention to be used will be << _Dx.y Deliverable name _ date in which the deliverable was submitted.pdf_ >>. All deliverables include a set of keywords and a brief description that aim to facilitate the indexing and search of the deliverables in search engines, in accordance with the defined template. The deliverables will be stored at TECNALIA’s hosting provider for three years beyond the time frame of the project. ## Open Source Software DECIDE has envisioned a freemium business model for the project’s Key Results, which implies a free version of the software as well as a premium one. The free versions of the open source components of the DECIDE KRs will be released as open source in a source code repository, namely GitHub, and will be stored at Innovati’s premises. 
The free versions of these components will be findable and reusable by any developer who is interested in the DECIDE Key Results, in agreement with the open source licenses of the components. Furthermore, DECIDE will explore the possibility for developers outside of the consortium to experiment with such components, which will ensure the uptake and sustainability of the software results of the project. For every software component, a readme file as well as the technical specification document will be released. Moreover, a docker script will also be released with the aim of facilitating the deployment of the DECIDE containerized components in any desired infrastructure. # Allocation of resources DECIDE does not foresee additional needs for resources beyond the duration of the action to handle data or make the data FAIR. As expressed before, open access repositories will be favoured. In the case of open source software, the partner TECNALIA will ensure that the GitHub repository is available after the project duration, either by keeping it on its own premises or by transferring it to existing open source projects and communities. # Data Security Out of the Key Results envisioned in DECIDE, the one that is most affected by security requirements is the ACSmI. To this end, a Security management component has been included. This component is in charge of designing and developing the means to guarantee the secure operation of the ACSmI, including identity propagation and federated authentication and authorization. The sub-modules included in this component are: * The _User management_ is responsible for gathering information to create, delete and modify users. * The _Policy Manager_ is responsible for creating, deleting and modifying the policies of the ACSmI and, when a new user is created, for assigning the policies that apply to this new user and properly updating the user registry. 
* The _Role Manager_ is responsible for creating, deleting and modifying the roles of the ACSmI. When a new user is created, it also assigns the roles to this user and properly updates the user registry.
* The _Authentication Manager_ is responsible for the authentication of the users of the ACSmI (this activity will be carried out at DECIDE framework level if the ACSmI is integrated). If the credentials are valid, the authentication manager provides the console with an identification token for propagating the identity to the rest of the modules when appropriate.
* The _User Registry_ is a database where all the information related to the users is stored.
* The _Data Encryption_ module is responsible for encrypting the data within the ACSmI in order to keep these data secure in case of cyber-attacks.
* The _Back-up_ module is responsible for carrying out incremental back-ups in order to allow the recovery of the ACSmI's data should this be necessary.
* The _Communication security_ module is responsible for providing secure communication using SSL transport layer encryption, both between client and platform and between platform and cloud infrastructures.

Moreover, DECIDE will ensure compliance with the General Data Protection Regulation (GDPR), which enters into force in May 2018. The security components shown above will be implemented with that goal in mind.

# Ethical Aspects

The basis of ethical research is the principle of informed consent. All participants in the evaluation of the DECIDE use cases will be informed of all aspects of the research that might reasonably be expected to influence their willingness to participate. Moreover, project researchers will hold discussions before and after each practical exercise (e.g. interview, co-creation session, etc.) to maintain ongoing consent.
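Where such exercises produce participant-level data, pseudonymization and aggregated reporting can be sketched as follows. This is a purely illustrative sketch with hypothetical identifiers, scores and salt, not the project's actual procedure:

```python
import hashlib
from statistics import mean

def pseudonymize(participant_id: str, secret_salt: str) -> str:
    """Replace a direct identifier with a salted hash. The salt stays with
    the data controller, so published pseudonyms cannot be linked back to
    participants by third parties. (Illustrative only.)"""
    digest = hashlib.sha256((secret_salt + participant_id).encode()).hexdigest()
    return digest[:12]

def aggregate_scores(coded_responses: dict) -> dict:
    """Report only aggregated figures, never individual rows."""
    return {"n": len(coded_responses),
            "mean_score": mean(coded_responses.values())}

# Hypothetical questionnaire scores keyed by participant:
salt = "project-internal-secret"
raw = {"participant-a": 4, "participant-b": 5, "participant-c": 3}
coded = {pseudonymize(p, salt): score for p, score in raw.items()}
print(aggregate_scores(coded))
```

Deleting a participant's row and, ultimately, the salt removes any remaining link between pseudonyms and people.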
Participants will be recruited by each organization leading the use cases (AIMES, ARSYS and INNOVATI) to perform the planned qualitative assessment of the DECIDE Key Results. For instance, the evaluation of the usability of the tools will be performed through questionnaires (see D6.1). These data will be anonymized and reported as aggregated data (when relevant) in the documents related to the evaluation of the DECIDE outcomes. Participants may withdraw from the use cases at any time, in which case their data, even anonymized data, will be destroyed.

# Conclusions

This deliverable has presented the plan for the management of data in the DECIDE project. In this action, mostly data related to scientific publications will be generated. Data coming from fictional users logging into the ACSmI will be encrypted and will not be made available under the FAIR principles. Furthermore, data coming from the use cases will also be fictional, especially in the case of INNOVATI and AIMES, and in principle will likewise not be disclosed as FAIR. Data coming from publications will be stored in OpenAIRE-indexed repositories, favouring the green model whenever possible. Other publications, such as deliverables, will be stored at TECNALIA's hosting services. This deliverable will be updated in subsequent releases, namely in M18 and M36, as part of the technical reports. It is envisioned that those versions will clarify the aspects that are not yet fully clear as work progresses in all the work packages.

References

1. European Commission, "Data Management," July 2016. [Online]. Available: http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/openaccess-data-management/data-management_en.htm#A1-template. [Accessed 9 January 2017].
2. European Commission, "Guidelines on Open Access to Scientific Publications and Research Data," July 2016. [Online].
Available: http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf. [Accessed December 2016].
3. Zenodo, "Zenodo," [Online]. Available: www.zenodo.org. [Accessed 2016].
4. C. Richardson, "Microservice architecture patterns and best practices," 2017. [Online]. Available: https://microservices.io/. [Accessed March 2017].
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1209_REASSURE_731591.md
# Introduction

## REASSURE objectives

Implementing cryptography on embedded devices is an ongoing challenge: every year new implementation flaws are discovered and new attack paths are used by real-life adversaries. Whilst cryptography can guarantee many security properties, it crucially depends on the ability to keep the keys in use secret even in the face of determined adversaries. Over the last two decades a new type of adversary has emerged, able to obtain side-channel leakage from the cryptographic implementation, such as recordings of response times, power or EM signals, etc. To account for such adversaries, sophisticated security certification and evaluation methods (Common Criteria, EMVCo, FIPS...) have been established to give users assurance that security claims have withstood independent evaluation and testing. Recently the reliability of these evaluations has come into the spotlight: the Taiwanese citizen card proved to be insecure, and Snowden's revelations about the NSA's tampering with FIPS standards eroded public confidence. REASSURE will:

1. improve the efficiency and quality of all aspects of certification using a novel, structured detect-map-exploit approach that will also improve the comparability of independently conducted evaluations,
2. cater for emerging areas such as the IoT by automating leakage assessment practices in order to allow resistance assessment without immediate access to a testing lab,
3. deliver tools to stakeholders, such as reference data sets and an open-source leakage simulator based on instruction-level profiles for a processor relevant for the IoT,
4. improve existing standards by actively pushing the novel results to standardization bodies.

REASSURE's consortium is well placed to tackle such ambitious tasks.
It features two major circuit manufacturers (NXP, MORPHO), a highly respected side-channel testing lab (Riscure), an engaged governmental representative (ANSSI), and two of the most prominent research institutions in this field (UCL, University of Bristol).

## Data sharing policy

Very early in the construction of the REASSURE project, it was decided that not all leakage traces would be made publicly available (the consortium explicitly opted out of the open access pilot plan). Indeed, some of the power traces manipulated by industrial partners correspond to evaluations of their own, or their customers', products, and these data sets are critical both from a security point of view and for customer confidence. Similarly, some elements, such as the exact identification of the product being evaluated, might not always be fully described in order not to expose company-specific products. Yet, as discussed in Section 2.2, it is the conviction of the consortium that widely sharing experimental data is paramount to improving the comparability and quality of evaluations. Consequently, whilst REASSURE will not commit to sharing all experimental data, we will in practice provide as many practically useful data sets as possible.

# Data summary

## Purpose of data collection/generation

Data generated in the framework of REASSURE consist mainly of data sets related to side-channel analysis, i.e. leakage traces (e.g. power or electromagnetic traces) acquired or simulated when performing side-channel analysis of a given device. These data will then be processed in an attempt to expose the cryptographic keys manipulated by the device or, on the contrary, to assess the efficiency of side-channel countermeasures by showing that these keys cannot be recovered.
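As a toy illustration of what such leakage data look like, the sketch below simulates power traces under a simple Hamming-weight model. This textbook model is an assumption made here for illustration only; it is far simpler than the instruction-level simulator REASSURE plans to deliver:

```python
import random

def hamming_weight(value):
    """Number of set bits in an 8-bit value."""
    return bin(value & 0xFF).count("1")

def simulate_trace(data_byte, key_byte, n_samples=100, leak_index=50,
                   noise_sd=0.5, rng=None):
    """One simulated power trace: Gaussian noise at every sample, plus a
    Hamming-weight leak of the key-dependent value (data XOR key) at a
    single point in time. All parameters are illustrative choices."""
    rng = rng or random.Random()
    trace = [rng.gauss(0.0, noise_sd) for _ in range(n_samples)]
    trace[leak_index] += hamming_weight(data_byte ^ key_byte)
    return trace

# With the noise switched off, the leak is plainly visible:
quiet = simulate_trace(0xAB, 0xCD, noise_sd=0.0)
print(quiet[50])  # 4.0 (Hamming weight of 0xAB ^ 0xCD = 0x66)
```

Real acquisitions differ in every respect (alignment, noise profile, multiple leaking instructions), which is precisely why sharing concrete trace sets matters.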
## Purpose of data sharing

The efficiency of a side-channel attack depends on a large number of factors, including the quality of the physical data (which in turn depends on the measuring equipment, the amount of noise, and the skills and knowledge of the person performing the measurement), the knowledge of the implementation and the device's behaviour, the quality of the exploitation strategy, and so on. As a consequence, comparing different attack techniques is very difficult: when a paper describing a new attack technique appears in the literature, it is not always easy to decide whether it yields better results because of its intrinsic quality or due to more favourable initial data. For the same reason, comparing the efficiency of different countermeasures is quite hard to achieve. By sharing leakage traces at various stages of their processing, REASSURE aims to:

* Allow experiments to be reproduced, either to validate claims or as a tool facilitating learning.
* Allow a fair comparison of specific substeps of attacks, by making it possible to compare the efficiency of different methods when exploiting _the same data_.
* Provide a common reference basis: in the future, it is our hope to see new results documented by applying them to openly accessible reference leakage traces.
* Remove the burden of actually acquiring physical leakage traces. This should prove useful for researchers working on side-channel evaluation who have a mathematical/algorithmic background (with strong expertise in signal processing, information extraction or statistical evaluation), but little to no knowledge of physical measurements or access to laboratories allowing them to practically generate and collect traces.
## Target audience

REASSURE-related data are expected to be of use for:

* academic researchers;
* device manufacturers;
* implementers of embedded cryptographic algorithms;
* evaluators assessing side-channel resistance;
* working groups and standardization bodies, such as the JHAS, ANSSI or BSI, to help them decide on the relevance of newly published attacks.

## Types and formats of data

Data include:

1. Raw data acquired when performing physical measurements of a given chip using specific laboratory equipment.
2. Simulated data, generated by running a simulator emulating a chip's behaviour based on a more or less sophisticated model.
3. Post-processed data, generated from raw or simulated data by applying some post-treatment (e.g. point-of-interest identification, template modeling...). Post-treatment is an iterative process, so various levels of post-processed data may exist.

Raw and simulated data typically consist of a set of one-dimensional (e.g. power consumption) or multidimensional (e.g. electromagnetic emanations) information collected per unit of time. Post-processed data may take a similar form, or be the output of a statistical treatment (and, more generally, of any mathematical function) applied to the initial data. Typical examples are reduced data sets after point-of-interest identification or dimensionality reduction, or a probability density function when a template attack is performed.

## Expected size of data

Depending on the sampling frequency and measurement precision, the size of leakage data can vary greatly, from a couple of megabytes to several gigabytes.

# Data sharing principles

Data generated by REASSURE must be shared according to the FAIR principles, meaning that they should be findable, accessible, interoperable and re-usable. We describe below the general principles that will be followed to enforce this. These principles will be further refined as the project evolves.
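As an illustration of the post-processing mentioned above, point-of-interest identification can be sketched as ranking sample indices by their correlation with a known intermediate value (e.g. a Hamming weight). This is a minimal sketch of one common approach, not a REASSURE deliverable:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient; 0.0 for constant inputs."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def points_of_interest(traces, labels, top_k=3):
    """Rank sample indices by |correlation| between the measurements at
    that index (one per trace) and the per-trace intermediate values."""
    scores = []
    for i in range(len(traces[0])):
        column = [trace[i] for trace in traces]
        scores.append((abs(pearson(column, labels)), i))
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]

# Toy data: only sample index 2 depends on the labelled value.
labels = [1, 2, 3, 4]
traces = [[0.0, 0.0, float(v), 0.0] for v in labels]
print(points_of_interest(traces, labels, top_k=1))  # [2]
```

The output of such a step (the selected indices, or the reduced traces) is exactly the kind of post-processed data set that can be shared alongside the raw material.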
## Sharing policy

As our objective is to allow as widespread a use of the data as possible, open licensing schemes similar to the Creative Commons licences are being considered. Consortium members, especially academic partners, are considering aligning themselves with the University of Bristol "Open Access Software Licence", which seems well in line with REASSURE's intents. The compliance of this licence with partners' institutional regulations is being investigated.

## Data localization

The exact storage process to be used by REASSURE is still under evaluation. It will most likely be a mixture of partners' institutional storage facilities and, especially for very large data pieces, mass storage solutions, e.g. cloud-based ones. Data replication will be considered to ensure both easy access and disaster recovery.

## Data referencing

Data will be made findable mostly through REASSURE's dissemination activities. Links to data repositories will be provided on the project's website, as well as in all related scientific publications, position papers and deliverables. In particular, scientific publications will clearly identify the data used to obtain results and, whenever not prohibited by confidentiality reasons, will provide direct access to these data.

## (Meta-)Data format

Each data set will be accompanied by a clear description of:

* The device under evaluation.
* The equipment (hardware and/or software) used for data acquisition or generation.
* The treatment applied to the data.
* The identity of the involved team(s).
* The format of the data file.
* Any additional information deemed relevant.

Considering the large number of possible configurations and the variability of potentially relevant parameters in the measurement setup, no standard naming convention or metadata format will be used. Instead, the description will consist of a text describing the device, equipment and measurement setup in a clear and unambiguous way.
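Such an accompanying free-text description might, for instance, look like the following (every detail below is hypothetical):

```
Data set:    example-traces-001
Device:      8-bit microcontroller, exact product withheld; 8 MHz bus clock
Equipment:   digital oscilloscope at 125 MS/s, probe on a VCC shunt resistor
Treatment:   raw traces; no alignment or filtering applied
Team:        <acquiring partner>
File format: one trace per row, 16-bit little-endian samples
Notes:       10,000 traces, random plaintexts, fixed key
```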
As described in the introduction, some elements, such as the exact identification of the product being evaluated, might not always be fully described in order not to expose company-specific products. More generic information, such as the bus size and operating frequency, will then be provided.

## Data lifetime

Nothing in the nature of the data (such as privacy-preserving reasons) makes it necessary to destroy them after a given period of time, hence no action will be taken to delete them or to limit their replication. The consortium's goal is to maintain data availability for at least three years after the project's end.

Appendices

# A University of Bristol - Open Access Software Licence

Copyright (c) 2016, The University of Bristol, a chartered corporation having Royal Charter number RC000648 and a charity (number X1121) and its place of administration being at Senate House, Tyndall Avenue, Bristol, BS8 1TH, United Kingdom. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Any use of the software for scientific publications or commercial purposes should be reported to the University of Bristol ([email protected] and quote reference REASSURE, H2020 project 731591). This is for impact and usage monitoring purposes only. Enquiries about further applications and development opportunities are welcome. Please contact elisabeth.[email protected]
1210_SPECIAL_731601.md
# Introduction

Processes related to quality, risk and data management within SPECIAL are summarised as follows in the Description of Work:

---

**T7.3 Quality, risk and data management** (Lead: ERCIM; Participants: WU; Duration: M1-M36) Quality Assurance and risk management is intended to ensure the production of concrete and high-quality results in line with the project work plan. To achieve this goal, a Quality Assurance Team is appointed to:

* Define and widely distribute the Quality Plan, to be a reference for all project participants;
* Encourage and verify that standards, procedures and metrics are defined, applied and evaluated;
* Adopt a procedure for identifying, estimating, treating and monitoring risks;
* Perform monthly Quality and Risk Reviews communicated to the General Assembly for appropriate action;
* Define a statement on the promotion of gender equality within SPECIAL practices and procedures;
* Produce a Data Management Plan (DMP) in accordance with [DM_H2020], as described in Section 3.2.5 below.

---

This deliverable reports on the processes put in place by the consortium to ensure that all contractual reports are delivered to the expected level of quality and timeliness. It also revisits and updates the risk table and mitigation measures proposed in the original Description of Work at proposal time, taking into account the experience gained during the first six months of operation. Finally, the Data Management Plan (DMP), contractually due at M6, constitutes the third and final chapter of deliverable D7.3.

# 2 Quality

# 2.1 Overview

This chapter details the internal Quality Assurance processes put in place at the start of the project to ensure the production of quality deliverables throughout the lifetime of project SPECIAL.
In the Description of Work, the quality plan is one of the outputs of Task 7.3 'Quality, Risk and Data Management'. T7.3 is under the responsibility of the project coordinator (ERCIM), with the active participation of the scientific coordinator (WU). Task T7.3 is active for the full duration of the project. The quality processes described here will apply to all SPECIAL deliverables listed in Annex I of this document, as well as to possible additions to this list that may result from annual project reviews.

# 2.2 Quality Assurance process

## 2.2.1 Definition

As per the above definition of Task 7.3 'Quality, risk and data management', Quality Assurance is intended to guarantee the production of concrete and high-quality results in line with the project work plan. To achieve this goal, a Quality Assurance team is appointed to define and widely distribute the Quality Plan, to be a reference for all project participants, and to encourage and verify that standards, procedures and metrics are defined, applied and evaluated. With this in mind, the consortium has defined a Quality Assurance process based on a timeline and a set of actions to be repeated for each project deliverable. In graphical terms, this is the sequence of events that will ensure proper internal review of the SPECIAL deliverables:

**Figure 2.1 - The SPECIAL Quality Assurance process**

Two internal reviewers for each project deliverable were appointed by the Project Steering Committee (PSC) in the early stages of the project. Table 2.1 below shows the detailed list of project deliverables together with their assigned internal reviewers.
In a nutshell, the review process described in Figure 2.1 requires that:

* The author of the deliverable supplies a table of contents for review at least four weeks before submission, and a first draft of the deliverable at the latest two weeks before submission;
* Internal reviewers write their review report using the internal review checklist described in Annex 2 and send it to the author within a week;
* The author implements the changes and sends the final version back to the reviewers, to the WP leader and to the scientific coordinator, no later than two days before the deadline;
* Once the last comments are resolved among all players and taken on board by the author, the deliverable is submitted to the EC by the project coordinator via the continuous reporting tool on the participant portal.

The end result of the implementation of this process is expected to be a set of quality deliverables delivered on time.

## 2.2.2 Implementation

_2.2.2.1 Internal reviewers_

At the kick-off meeting of the project, which took place at the European Commission in Luxembourg on 16-17 January 2017, the consortium discussed in further detail and confirmed the Quality Assurance procedures described in the Grant Agreement. As a concrete result, a session at the kick-off meeting was dedicated to reviewing each single deliverable of Year 1, to reach a common understanding of what exactly had to be delivered, and to identify the most suitable partners to review Year 1 deliverables. This exercise yielded the following table of internal reviewers for Year 1 deliverables, as shown in this slide extracted from the WP7 presentation at the kick-off meeting:

**Figure 2.2 - Initial nomination of internal reviewers at the kick-off meeting**

All SPECIAL internal reviewers were made aware of this internal review process and agreed to comply with the timeline described in 2.2.1 above.
This initial list of internal reviewers was quickly extended after the kick-off meeting to cover all deliverables over the full duration of the project. The final list of deliverables and associated internal reviewers is captured in the table below (sorted by delivery date, then by WP), which is available on the project repository to all project participants, ensuring the clear and transparent implementation of our Quality Assurance process:

**Table 2.1 - List of Internal Reviewers for SPECIAL Deliverables**

_2.2.2.2 Continuous monitoring_

The SPECIAL consortium has planned to hold a minimum of one plenary telco or one quarterly face-to-face meeting every month of the project. Each of these physical or virtual meetings will be the occasion to review upcoming deliverables and to ensure at management level that progress is on schedule and that potential issues are under control. Monitoring and control actions from the coordination team have been further described in the earlier deliverables D7.1 'Technical/Scientific coordination plan' and D7.2 'Administrative management and support plan'. Should any unplanned event arise that negatively impacts a submission deadline, the coordinator will contact the project officer at once to justify the delay and to request in writing an approval to postpone the deliverable by a reasonable amount of time.

# 2.3 Supporting Tools

With a consortium of nine beneficiaries, the Quality Assurance process for deliverables does not call for highly sophisticated tools. Simple yet efficient methods will ensure the consistent quality of SPECIAL deliverables.

## Deliverable templates

With the first deliverables due at M2, WP7 produced an initial deliverable template in the first month of the project. This MS Word template includes the logo of the project, which was produced during the kick-off meeting at M1.
This template is available to partners on the BSCW project repository, and it is the one used here for this deliverable. Approaching the more technical deliverables of M5 and M6, the scientific partners opted for collaborative writing based on the LaTeX document preparation system 1 . A LaTeX template is now available to partners who need or prefer the superior functionality offered by LaTeX for collaborative editing. The outcome of the collaborative editing, whether done in MS Word or in LaTeX, is a SPECIAL deliverable in PDF format that looks the same regardless of the software that produced it.

## E-Mail

In line with the internal review process described in 2.2.1, partners circulate via e-mail the successive versions of the draft and final deliverables.

## BSCW

The final version of each deliverable is made available to the consortium after submission to the EC, via a shared folder on the project repository (BSCW).

**Figure 2.3 - SPECIAL deliverables on the project repository**

Shortly before each project review, a dedicated BSCW folder will be created for the reviewers and populated with the submitted versions of the deliverables for the period. Reviewers will be granted secure access to this location and invited to download from it.

## Web site

Once approved by the project officer at the project review, public deliverables for the reviewed period will be made available on the project web site 2 .

# 2.4 Current State

At M6 of SPECIAL, the relevant procedures are well in place to ensure that the project will produce quality deliverables, on time and in a controlled fashion. For the full duration of SPECIAL, each project deliverable has been assigned two internal reviewers, and all project participants know their role in supporting the successful implementation of the deliverable QA processes. Should there be any doubt during the course of the project, all necessary documentation is available on the project repository.
The consortium is committed to following this procedure and to submitting all SPECIAL deliverables on time and to the best possible level of quality.

# 3 Risks

# 3.1 Overview

This chapter details the internal framework for risk management put in place at the start of the project to ensure the smooth running of the SPECIAL project. In the Description of Work, the risk management plan is one of the outputs of Task 7.3 'Quality, Risk and Data Management'. T7.3 is under the responsibility of the project coordinator (ERCIM), with the active participation of the scientific coordinator (WU). In this chapter we: (i) outline the sound and practically applicable risk management methodology we will employ throughout the project; (ii) define procedures and role assignments to tackle risk identification, evaluation, monitoring and mitigation; and (iii) provide an overview of the risk register which will be used to support SPECIAL's workforce and governance structures in contributing to and following the entire process. The framework for Risk Management described here will apply to all risks identified to date, as well as to possible additions that may be made as the project progresses.

# 3.2 Risk Management Methodology

The PRINCE2 risk management procedure 3 provides guidelines to project managers with respect to the identification and evaluation of potential risks and their ongoing monitoring. Although in the DOW we focused on negative risks, it is worth noting that in PRINCE2 risks can have either a negative or a positive impact on the success of the project. The risk management methodology described herein and depicted in _Figure 3.1_ is closely aligned with PRINCE2's five steps to risk management: 1. Risk identification involves the identification of the source, cause and effect of the risk should it materialise. Potential threats are recorded in a risk register (i.e.
a list of potential risks, including proposed mitigation measures). 2. Risk assessment involves estimating both the impact and the probability of the risk materialising (probability is ranked as low, medium or high). 3. Risk planning involves proposing risk mitigation measures and assessing said mitigation strategies for secondary risks. 4. Implementation is concerned with monitoring the risks, taking action if need be and monitoring the effectiveness of any actions taken. 5. Communication is a continuous process that aims to ensure all stakeholders are kept up to date with respect to potential risks and risk management activities.

**Figure 3.1 - The PRINCE2 Risk Management Procedure** **(Source http://prince2.wiki/File:Slide44.PNG)**

# 3.3 Roles and Responsibilities

## Task leader responsibilities

* Communicate potential risks to the work package leader
* Assist the work package leader with risk management activities, namely identification, assessment, planning and implementation
* Communicate risk status updates to the work package leader

## Work package leader responsibilities

* Communicate potential risks to the project co-ordinators
* Perform risk management activities (i.e. identification, assessment, planning and implementation) for work package specific risks
* Assist the project co-ordinators with risk management activities (i.e. identification, assessment, planning and implementation) across the various work packages
* Communicate risk status updates to the project co-ordinators

## Project Co-ordinators (Scientific/Technical and Administrative)

* Provide feedback to the work package leaders on risk identification, assessment, planning and implementation
* Perform risk management activities (i.e.
identification, assessment, planning and implementation) for risks that span multiple work packages
* Communicate risk status updates to the General Assembly

# 3.4 Current State

At a minimum the risk register should include the following information:

* Unique risk number for each risk
* Author
* Date registered
* Risk category (e.g. managerial, implementation, impact)
* Detailed description of the risk
* Impact and probability
* Relevant work package(s)
* Proposed risk-mitigation measures
* Risk owner (e.g. task leader, work package leader, technical/scientific co-ordinator, administrative co-ordinator)
* Status

_Table 3.1_ provides a list of the risks initially identified in the DOW, along with the measures foreseen to mitigate them; these will be migrated into the risk register and expanded to include additional risks identified during the first six months of the project.

**Table 3.1 - Critical Risks for Implementation**

<table> <tr> <th> **Risk num** </th> <th> **Author** </th> <th> **Date Registered** </th> <th> **Category** </th> <th> **Description of risk** </th> <th> **Impact and probability** </th> <th> **WPs involved** </th> <th> **Proposed risk-mitigation measures** </th> <th> **Owner** </th> <th> **Status** </th> </tr> <tr> <td> 1 </td> <td> All </td> <td> Jan 2017 </td> <td> _Managerial_ </td> <td> IPR, licensing or other legal / ethics related issues arise among partners. </td> <td> _Internal Risk; Low Probability_ </td> <td> WP7 </td> <td> The consortium foresees a number of measures to proactively face such issues, including a Consortium Agreement listing background/foreground. To name some measures: alternative licensing of specific S/W elements is allowed to remove related barriers, a clear IPR to inventor(s) policy is adopted, bodies exist that can handle stages of conflict before they escalate into a problem, and lastly, most partners have been cooperating in various contexts and already have well-established relationships.
</td> <td> ERCIM </td> <td> Open </td> </tr> <tr> <td> 2 </td> <td> All </td> <td> Jan 2017 </td> <td> _Managerial_ </td> <td> A partner / body underperforms, defaults or faces other severe operation issues. </td> <td> _Internal Risk; Low Probability_ </td> <td> WP1, WP2, WP3, WP4, WP5, WP6, WP7, WP8 </td> <td> Governance structure and procedures (meetings, teleconferences, collaboration tools, etc.) allow close monitoring of partner activities so that any turbulence can be spotted </td> <td> ERCIM </td> <td> Open </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> promptly. Moreover, measures are foreseen under the Grant and Consortium Agreement terms for the handling of any defaulting or underperforming partner. </th> <th> </th> <th> </th> </tr> <tr> <td> 3 </td> <td> All </td> <td> Jan 2017 </td> <td> _Managerial_ </td> <td> Lack of required competences for the completion of the project's tasks. </td> <td> _Internal Risk; Medium Probability_ </td> <td> WP1, WP2, WP3, WP4, WP5, WP6, WP7, WP8 </td> <td> The consortium members were selected specifically to provide the expertise required in the project. Any partner underperforming in technical or other activities owing to lack of expertise will be identified early in the project and immediate rectification measures will be taken. </td> <td> ERCIM </td> <td> Open </td> </tr> <tr> <td> 4 </td> <td> All </td> <td> Jan 2017 </td> <td> _Implementation_ </td> <td> Policy framework and engine do not scale to the demanded volume and velocity </td> <td> _Internal Risk; Low Probability_ </td> <td> WP1, WP2, WP3, WP4 </td> <td> The technology partners, led by the architects of the Big Data Europe project (TF), will from the start of the project onwards focus on solutions that scale.
Scalability demands and requirements will be oriented by the use case partners and developed in an agile manner with the combined expertise of TUB, WU and CeRICT on </td> <td> WU </td> <td> Open </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> distributed systems, scalable Linked Data query answering and reasoning, and reasoning about policies. </th> <th> </th> <th> </th> </tr> <tr> <td> 5 </td> <td> All </td> <td> Jan 2017 </td> <td> _Implementation_ </td> <td> Dependency chains between related tasks </td> <td> _Internal Risk; Low Probability_ </td> <td> WP1, WP2, WP3 </td> <td> WP2’s tasks related to the development of a policy language depend on the requirements elicited from the use cases. In turn, WP3 depends partly on WP2’s inputs. So any delay or variation in the definition of the use cases may have a domino effect on some of the core technical work packages. If necessary, WP2 may start with a generic approach, by assessing the scalability of the main policy constructs introduced in the literature, studying scalable implementations whenever possible. The agile development process adopted by SPECIAL will help to compensate for any delayed specific input coming from the use cases, allowing WP2 and WP3 to start their work in advance, and integrate use-case-specific 
</th> <th> </th> <th> </th> </tr> <tr> <td> 6 </td> <td> All </td> <td> Jan 2017 </td> <td> _Implementation_ </td> <td> Introduction of new standards, laws and certifications affecting the success of the project activities </td> <td> _External Risk; Low Probability_ </td> <td> WP1, WP2, WP3, WP4, WP5 </td> <td> As with technological changes, the consortium will monitor pertinent standards and the responsible partners will provide recommendations on how to address these in the project in case changes arise. The fact that there are two rounds of design and development in the project also allows for appropriate adjustments. </td> <td> WU </td> <td> Open </td> </tr> <tr> <td> 7 </td> <td> All </td> <td> Jan 2017 </td> <td> _Implementation_ </td> <td> Issues obtaining quality simulated data </td> <td> _Internal Risk; Low Probability_ </td> <td> WP3, WP5 </td> <td> The use case partners will take all necessary actions to ensure that the simulated data is both realistic and representative of the real data used in the proposed use-case scenarios. </td> <td> WU </td> <td> Open </td> </tr> <tr> <td> 8 </td> <td> All </td> <td> Jan 2017 </td> <td> _Impact_ </td> <td> Applicability of project results </td> <td> _Internal Risk; Low Probability_ </td> <td> WP5 </td> <td> We have chosen use cases from diverse domains (telecom and financial sectors) that expose complementary critical aspects of the challenges we want to solve with respect to privacy-aware solutions that scale to Big Data. Moreover, </td> <td> ERCIM </td> <td> Open </td> </tr> </table> <table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> we will actively (by having allocated resources in our work plan) liaise with other ICT-14 projects and the CSA in the ICT-18 call to guarantee applicability of results and impact. 
</th> <th> </th> <th> </th> </tr> <tr> <td> 9 </td> <td> All </td> <td> Jan 2017 </td> <td> _Impact_ </td> <td> Insufficient end-user engagement in the pilot hacking challenges. </td> <td> _Internal Risk; Low Probability_ </td> <td> WP5 </td> <td> The project team will use all available dissemination channels to promote awareness, e.g. through standardisation activities, existing networks, social media outlets. Rewarding services could be included to increase the level of user engagement. </td> <td> ERCIM </td> <td> Open </td> </tr> </table> <table> <tr> <th> **4** </th> <th> **Data Management Plan** </th> </tr> </table> The data management plan described herein relates to synthesised data ONLY. PROX is the only partner in the project that will collect data via user studies. This data will NOT be made available (even in an anonymized format). Specific guidance with respect to the assessment of ethical issues concerning the collection, processing and disclosure of personal data is described in deliverable D.8.1 SPECIAL Ethics Guidelines. # 4.1 Data Summary _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ * All three industry partners will provide their expertise and domain knowledge in order to enable the generation of simulated data, which we will use in our public challenges. * Additionally, synthesised data (e.g. datasets, queries and policies) will be generated and used for benchmarking and stress testing purposes throughout the project. _What types and formats of data will the project generate/collect?_ The following types of data will be generated: * Synthesised ledger entries containing processing and sharing events – who processed/shared, what data, for what purpose, with whom, under what usage conditions. * Synthesised usage policies – stipulating what data can be used for what purpose by whom and under what usage conditions. * Synthesised domain specific data (e.g. 
telecoms and financial data) – encrypted simulated telecoms and financial data and associated metadata. * Ontologies used to describe ledger entries, usage policies, domain specific data and metadata (e.g. temporal, provenance, permissions, obligations). * Certain aspects of the General Data Protection Regulation (GDPR) that are necessary for compliance checking within the remit of SPECIAL will be made available in a machine readable format. All public data will be published according to the 5-star deployment scheme for Open Data 4 : Tim Berners-Lee, the inventor of the Web and initiator of the Linked Data project, suggested a 5-star deployment scheme for Linked Data. The 5 Star Linked Data system is cumulative. Each additional star presumes the data meets the criteria of the previous step(s). ☆ Data is available on the Web, in whatever format. ☆☆ Available as machine-readable structured data (i.e., not a scanned image). ☆☆☆ Available in a non-proprietary format (i.e. CSV, not Microsoft Excel). ☆☆☆☆ Published using open standards from the W3C (RDF and SPARQL). ☆☆☆☆☆ All of the above and links to other Linked Open Data. **Figure 4.1 - 5 Star Linked Data** _Will you re-use any existing data and how?_ * Where possible existing ontologies will be used to describe both data and metadata (e.g. temporal, provenance, permissions, obligations). PROV 5 and OWL-Time 6 ontologies can be used to represent provenance and temporal information respectively. Additionally there are a number of general event vocabularies such as the Event 7 ontology and the LODE 8 ontology that could potentially be adapted/extended in order to model our data processing events. Likewise, vocabularies for expressing policies such as the Open Digital Rights Language 9 , which the W3C’s Permissions and Obligations working group hopes to standardise in the near future, could be adapted/extended. 
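To make the intended vocabulary reuse concrete, a synthesised ledger entry could be expressed with PROV-O and ODRL terms. The sketch below is an illustration only: the `ex:` namespace, the IRIs, and the property choices (in particular `ex:purpose`) are assumptions, not the project's final modelling.

```python
# Minimal sketch: render one data-processing event as Turtle, reusing
# PROV-O (activity/agent/entity) and ODRL (action) terms.
# The ex: namespace and ex:purpose property are hypothetical.

PREFIXES = """\
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix ex:   <https://data.specialprivacy.eu/ledger/> .
"""

def ledger_entry(entry_id, controller, data_iri, purpose):
    """Render one processing event as a Turtle snippet."""
    return (
        f"ex:{entry_id} a prov:Activity ;\n"
        f"    prov:wasAssociatedWith <{controller}> ;\n"
        f"    prov:used <{data_iri}> ;\n"
        f"    odrl:action odrl:use ;\n"
        f'    ex:purpose "{purpose}" .\n'
    )

entry = PREFIXES + ledger_entry(
    "evt-0001",
    "https://example.org/controller/acme",            # assumed controller IRI
    "https://data.specialprivacy.eu/location/usr42",  # assumed data item IRI
    "network-optimisation",
)
print(entry)
```

A real deployment would validate such snippets against the chosen ontologies rather than emit strings directly; the point here is only how the PROV and ODRL terms fit together.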
_What is the origin of the data?_ * Synthesised ledger entries, usage policies and domain specific data will be generated by the SPECIAL team. * PROV 5 , OWL-Time 6 , Events 7 , LODE 8 and ODRL 9 are ontologies that are openly available for reuse. Any adaptations/extensions developed by SPECIAL will likewise be given back to the community. _What is the expected size of the data?_ At this stage of the project it is difficult to quantify. Size can have many meanings: * number of data sets * number of data fields * size of data fields (an image is usually bigger than a binary field) _To whom might it be useful ('data utility')?_ * Computer Science, Semantic Web and Privacy researchers can make use of some/all of the data for research and benchmarking purposes. * Companies and researchers may leverage and extend the ontologies developed by SPECIAL. * Legislators, companies, and researchers may benefit from the subset of the GDPR that will be made available in a machine readable format. # 4.2 Findable, accessible, interoperable and reusable (FAIR) data This section is based on the guidelines for effective data management in the course of a Horizon 2020 project, provided by the European Commission 10 . The primary objective is to ensure that research data is findable, accessible, interoperable and re-usable (FAIR). ## 4.2.1 Making data findable, including provisions for metadata Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)? All synthesised data will be made available as Linked Data. Underpinning the Linked Data Web is a set of best practices for publishing and interlinking structured data, known as the Principles of Linked Data. These principles are defined by Tim Berners-Lee 11 as follows: ” 1. Use URIs as names of things. 2. Use HTTP URIs so that people can look up those names. 3. 
When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL). 4. Include links to other URIs, so that they can discover more things. ” The term thing is used to refer to both real world entities and abstract concepts, commonly referred to as resources. The Linked Data Web (LDW) builds on the existing web infrastructure, by using HyperText Transfer Protocol (HTTP) URIs to identify things, as well as documents. However, URIs only support a subset of the American Standard Code for Information Interchange (ASCII) character set. Later the W3C introduced Internationalised Resource Identifiers (IRIs), which provide support for the richer Unicode character set. Although the principles defined by Berners-Lee refer to URIs, as there is a mapping from IRIs to URIs, it is also possible to use IRIs. However, it is not enough to simply use URIs to refer to things. According to the Linked Data principles, it should be feasible to use the URI to return a description of the resource (commonly referred to as dereferencing). As URIs often represent real world entities, it is common practice to use different URIs to represent the resource and the document that describes it. Two different strategies can be used to dereference URIs, namely 303 redirects and hash URIs. In the case of 303 redirects, when a client attempts to dereference a resource, the server responds with a 303 See Other, and a URI for the document that describes this resource. The client subsequently uses this new URI to retrieve the description of the resource. In the case of hash URIs, by contrast, a # separator is used to append an identifier, which identifies the resource, to the end of the URI. Prior to attempting to dereference the resource, the client strips off the # and the identifier, making it possible to distinguish between the physical resource and the document that describes it. The final principle refers to the linking of URIs. 
Just like the web of documents uses reference links to enable humans and machines to navigate web pages, the web of data is constructed in a similar fashion. By using RDF to describe resources, it is possible not only to link structured data, but also to describe complex relations between resources in a machine readable format. What naming conventions do you follow? All classes, properties and instances will be provided with a unique IRI that will be accessible via the SPECIAL web server. The prefix used for all public data items will be https://data.specialprivacy.eu/. As the data management plan is a living document, more specific naming conventions will be worked out at a later stage if needs be. Will search keywords be provided that optimize possibilities for re-use? SPECIAL will adopt the Comprehensive Knowledge Archive Network (CKAN) 11 web-based open source management system, which is developed by the Open Knowledge Foundation, for the storage and distribution of open data. CKAN is already the tool of choice for many national and local governments, research institutions, and other organisations which collect a lot of data. CKAN provides powerful search, faceting and browsing over distributed data sources. Do you provide clear version numbers? The CKAN ckanext-datasetversions 12 extension provides support for different versions of a dataset. What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how. CKAN provides a rich set of metadata for each dataset. The following information can be found on the CKAN website 13 : “ * _Title – allows intuitive labelling of the dataset for search, sharing and linking._ * _Unique identifier – dataset has a unique URL which is customizable by the publisher._ * _Groups – display of which groups the dataset belongs to if applicable. 
Groups (such as science data) allow easier data linking, finding and sharing amongst interested publishers and users._ * _Description – additional information describing or analysing the data. This can either be static or an editable wiki which anyone can contribute to instantly or via admin moderation._ * _Data preview – preview .csv data quickly and easily in browser to see if this is the dataset you want._ * _Revision history – CKAN allows you to display a revision history for datasets which are freely editable by users (as is thedatahub.org)_ * _Extra fields – these hold any additional information, such as location data (see geospatial feature) or types relevant to the publisher or dataset. How and where extra fields display is customizable._ * _Licence – instant view of whether the data is available under an open licence or not. This makes it clear to users whether they have the rights to use, change and re-distribute the data._ * _Tags – see what labels the dataset in question belongs to. Tags also allow for browsing between similarly tagged datasets in addition to enabling better discoverability through tag search and faceting by tags._ * _Multiple formats (if provided) – see the different formats the data has been made available in quickly in a table, with any further information relating to specific files provided inline._ * _API key – allows access every metadata field of the dataset and ability to change the data if you have the relevant permissions via API. “_ If needs be, CKAN allows for the specification of additional metadata items in the form of name value pairs. ## 4.2.2 Making data openly accessible Which data produced and/or used in the project will be made openly available as the default? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions. 
* All three industry partners will provide their expertise and domain knowledge in order to enable the generation of simulated data, which we will use in our public challenges. PROX is the only partner in the project that will collect data via user studies. This data will NOT be made available (even in an anonymized format). * Additionally, synthesised data (e.g. datasets, queries and policies) will be generated and used for benchmarking and stress testing purposes throughout the project. Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions are made in the consortium agreement and are in line with the reasons for opting out. By default, commercially sensitive data belonging to SPECIAL use-case partners Prox, TLabs and TR will be closed. How will the data be made accessible (e.g. by deposition in a repository)? As stated in section 4.2.1, synthesised data will be made available as Linked Data. What methods or software tools are needed to access the data? Data will be accessible via HTTP and queryable using RDF (Resource Description Framework) query languages such as SPARQL. Is documentation about the software needed to access the data included? Resource Description Framework (RDF) is a standard model for data interchange on the Web. Relevant documentation is provided by the W3C. Is it possible to include the relevant software (e.g. in open source code)? A pointer to documentation on the relevant standards can be included in open source code. Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible. As stated in section 4.2.1, synthesised data will be accessible via the SPECIAL web server. The prefix used for all public data items will be https://data.specialprivacy.eu/. Have you explored appropriate arrangements with the identified repository? Not applicable. 
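To make the HTTP/SPARQL access path concrete, the sketch below assembles a SPARQL Protocol GET request using only the Python standard library. The endpoint IRI is hypothetical (the plan only fixes the https://data.specialprivacy.eu/ prefix), and no network request is actually sent here.

```python
# Illustration only: the URL and headers a standards-compliant client
# would use to query a SPARQL endpoint over HTTP. Endpoint is assumed.
from urllib.parse import urlencode

ENDPOINT = "https://data.specialprivacy.eu/sparql"  # hypothetical endpoint

query = """\
PREFIX odrl: <http://www.w3.org/ns/odrl/2/>
SELECT ?policy ?action WHERE {
  ?policy a odrl:Policy ;
          odrl:permission/odrl:action ?action .
} LIMIT 10
"""

def sparql_request(endpoint, query):
    """Return (url, headers) for a SPARQL Protocol GET request."""
    url = endpoint + "?" + urlencode({"query": query})
    headers = {"Accept": "application/sparql-results+json"}
    return url, headers

url, headers = sparql_request(ENDPOINT, query)
print(url[:80], headers)
```

In practice the returned URL would be passed to any HTTP client; the `Accept` header asks the endpoint for results in the standard SPARQL JSON results format.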
If there are restrictions on use, how will access be provided? All synthesised data will be publicly available. Is there a need for a data access committee? Not at the moment. Are there well described conditions for access (i.e. a machine readable license)? The default license for SPECIAL public data will be CC-BY. How will the identity of the person accessing the data be ascertained? Not applicable. ## 4.2.3 Making data interoperable Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)? Where possible data will be published as Resource Description Framework (RDF). RDF is a standard model for data interchange on the Web. Relevant documentation is provided by the W3C. What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? By default, SPECIAL will adopt the RDF data model, and metadata languages such as RDFS and OWL. Like RDF, both RDFS and OWL are W3C specifications. Additionally, we will reuse existing RDF ontologies such as PROV 5 , OWL-Time 6 , Events 7 , LODE 8 and ODRL 9 . Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability? Yes, where possible we will reuse standard vocabularies. In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies? Should existing ontologies and vocabularies not meet our needs we will be sure to publish our extensions or new ontologies so that others can reuse them. ## 4.2.4 Increase data re-use (through clarifying licences) How will the data be licensed to permit the widest re-use possible? 
The default license for SPECIAL public data will be CC-BY. When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible. Generally speaking, data will be made available when papers are published or releases of the SPECIAL software are deployed. Are the data produced and/or used in the project usable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why. All synthesised data will be publicly available. How long is it intended that the data remains re-usable? The specialprivacy.eu domain name has been reserved by ERCIM/W3C – the project coordinator – for 5 years. It was registered on 17/01/2017 and will expire on 17/01/2022, overrunning the project lifetime by two years. A sustainability plan will be put in place during year 3 of the project. Are data quality assurance processes described? Data quality assurance processes will be described before the first public release. # 4.3 Allocation of resources What are the costs for making data FAIR in your project? As we aim to use the existing web infrastructure, at the current point in time no additional costs are foreseen. How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions). Not applicable. Who will be responsible for data management in your project? The technical/scientific co-ordinator. Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)? A sustainability plan will be put in place during year 3 of the project. # 4.4 Data security What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)? 
For security reasons, six months after the end of the project and with the approval of the project coordinator, ERCIM will copy the whole website to a “static” version which will replace the online dynamic version. Content will still be available online but will stop being editable. The servers use RAID hardware and redundant power supplies to ensure high efficiency and availability of our services. Those machines are hosted in a secured machine room in Sophia Antipolis, France. It is a limited-access facility, with air conditioning and uninterruptible power supplies. The systems are backed up daily and are up and running 24/7. Is the data safely stored in certified repositories for long term preservation and curation? A sustainability plan will be put in place during year 3 of the project. # 4.5 Ethical aspects Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA). Given that we will only share synthesised data, there are no foreseen ethical or legal implications. Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data? Given that we will only share synthesised data, informed consent is not a consideration. # 4.6 Other issues Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? No <table> <tr> <th> **5** </th> <th> **Conclusion** </th> </tr> </table> As early as Project Month 1, the SPECIAL consortium put procedures in place to ensure that the project will deliver quality deliverables, on time and in a controlled fashion. At Project Month 6, those procedures are completely documented, well publicised within the consortium, and adhered to by all project participants. 
The good teamwork on each deliverable between its owners, its participants, its internal reviewers and the coordination team reinforces the good spirit within the consortium, keeps everyone aware of issues and progress, and ensures that all partners remain committed to the global success of the project. For a project and a consortium the size of SPECIAL, we feel that the processes and tools put in place to limit risk and ensure quality are adequate and will prove sufficient and successful over the project lifetime.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1211_GOAL_731656.md
1. Introduction to the background of the research; 2. Objectives of the evaluation study; 3. Detailed description of the different phases of the study, and the participant’s expected actions and contributions; 4. Statement on the eligibility criteria of the study; 5. Clear description of possible risks in participation; 6. Description of the benefits (for the participant or researcher) of participating in the study; 7. Confidentiality clause (i.e. who has access to data); 8. Statement of voluntary participation (i.e. participants are always able to drop out of the study without any argumentation given); 9. Possible compensation of travel costs or other (financial) compensation for participating in the study; 10. Signature page. ## 5.1 Informed Consent for Final Demonstration Below in Figure 1 and Figure 2, the final informed consent forms used for the GOAL final demonstration are presented in English and Dutch respectively. _Figure 1: Informed Consent form used in the GOAL Final Demonstration (English)._ _Figure 2: Informed Consent form used in the GOAL Final Demonstration (Dutch)._ These informed consent forms were provided to all users in the four phases of the final GOAL demonstration. For phase 1 and 2 an English information leaflet was provided additionally (see Figure 3). _Figure 3: Information about the GOAL project in Phase 1 of the final demonstration._ The information leaflet on the previous page for Phase 1 and 2 was updated and translated to Dutch for the Dutch phase 3 participants (see Figure 4 below). _Figure 4: Information leaflet used in Phase 3 of the GOAL final demonstration (Dutch)._ Finally, the leaflet was updated once again for the fourth and final phase of the evaluation (see Figure 5 below). 
_Figure 5: Information leaflet for the fourth and final phase of the GOAL final demonstration (Dutch)._ Additionally, in phase four, users were provided with an information “cheat sheet” to help them navigate through the different applications, and specifically for working with the Fitbit activity trackers: _Figure 6: Fitbit and GOAL Application "Cheat Sheet" Page #1/2._ _Figure 7: Fitbit and GOAL Application "Cheat Sheet" Page #2/2._ # 6 Other issues N/A
1212_WaterSpy_731778.md
# Introduction This document deals with the research data produced, collected and preserved during the project. This data can either be made publicly available or not according to the Grant Agreement and to the need of the partners to preserve the intellectual property rights and related benefits derived from project results and activities. Essentially, the present document will answer the following main questions: * What types of data will the project generate/collect? * What data is to be shared for the benefit of the scientific community? * What data cannot be made available? Why? * What format will the shared data have? * How will this data be exploited and/or shared/made accessible for verification and re-use? * How will this data be curated and preserved? The data that can be shared will be made available as Open access research data; this refers to the right to access and re-use digital research data under the terms and conditions set out in the Grant Agreement. Openly accessible research data can typically be accessed, mined, exploited, reproduced and disseminated free of charge for the user. The WaterSpy project abides by the European Commission's vision that information already paid for by the public purse should not be paid for again each time it is accessed or used, and that it should benefit European companies and citizens to the full. This means making publicly-funded scientific information available online, at no extra cost, to European researchers, innovative industries and citizens, while ensuring long-term preservation. The Data Management Plan (DMP) is not a fixed document, but evolves during the lifespan of the project. 
The following are basic issues that will be dealt with for the data that can be shared: * **Data set reference and name** The identifier for the datasets to be produced will have the following format: WaterSpy_[taskx.y]_[descriptive name]_[progressive version number]_[date of production of the data] * **Data set description** Description of the data that will be generated or collected will include: * Its origin, nature, scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse. * Its format * Tools needed to use the data (for example specialised software) * Accessory information such as possible video recordings of the experiment or other material. * **Standards and metadata** Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created, if necessary. * **Data sharing** Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related). * **Archiving and preservation (including storage and backup) and access modality** Description of the procedures that will be put in place for long-term preservation of the data. 
Indication of how long the data should be preserved, what its approximate final volume is, what the associated costs are and how these are planned to be covered. # Types of data generated within the project The project is expected to generate the following types of data: 1. Data from the preliminary experiments 1. regarding the identification of the IR bands of the targeted bacteria (WP2-T2.2) 2. required to improve the S/N ratio of an optimized QCL-ATR-detector configuration (a need identified in WP5) 2. Data regarding the Quantum Cascade Lasers (QCL) testing and characterization (WP3) 3. Data regarding the photodetector testing and characterization (WP4) 4. Data from the testing and characterization of the molecular recognition elements (MREs) used for the smart surface (WP5-T5.4). This also includes work on possible mobile smart surfaces. 5. Simulation and testing data collected during the development of the ultrasound (US) pre-concentration module (WP5-T5.2) 6. Simulation data collected during the optimization of the fluid flow in the microfluidic cell (WP5-T5.3) 7. Design of the main processing unit and firmware (WP5-T5.6-T5.7) 8. Modulation scheme approach and signal conditioning concept data (WP5-T5.5) 9. Mid-project testing results (WP6-T6.3) 10. Lab validation results of the WaterSpy device (WP6-T6.4) 11. Device performance assessment data after the field tests (WP7-T7.3) 12. Impact assessment data and system usability analysis (WP7-T7.4) ## Data to be shared and not Some of the above data will be shared with the scientific community, following the principles of open data sharing. In particular: * Data from category **(a)** might be shared with the scientific community, following research publications by CNR, TUW and FAU. In case of accepted publications, selected datasets will be shared with the community for verification purposes. 
* Data from categories **(b)** and **(c)** will not be shared under any circumstances, since those refer to prototypes and possibly commercial products of ALPES and VIGO. * Data from category **(d)** might be shared with the scientific community, following a research publication by CNR and TUW. In case of accepted publications, selected datasets will be shared with the community for verification purposes. * Data from category **(e)** might be shared with the scientific community, following a research publication by TUW. In case of accepted publications, selected datasets will be shared with the community for verification purposes. * Data from category **(f)** are not expected to be publishable. Nevertheless, if a related publication is accepted, selected datasets will be shared with the community for verification purposes. * Data from category **(g)** will not be shared under any circumstances, since those refer to prototypes and possibly commercial products of CyRIC and NTUA. * Data from category **(h)** might be shared with the scientific community, following research publications by FAU. In case of accepted publications, selected datasets will be shared with the community for verification purposes. * Data from category **(i)** will not be shared, since those will be intermediate data, used to improve the system. * Data from categories **(j)** , **(k)** and **(l)** might be shared with the scientific community, following research publications by all partners. In case of accepted publications, selected datasets will be shared with the community for verification purposes. Particular emphasis will be placed on IPR protection, since potentially sensitive data might be present in the results of those tests. # Management plan for the different sharable categories of data In the sections that follow, details on the data to be shared with the community are presented. 
Focus is put on:

a) Dataset description
b) Standards and metadata, where applicable

## Data from the preliminary experiments, regarding fingerprint regions of bacteria

The data related to the preliminary experiments for the identification of the fingerprint regions of bacteria are mainly the output of WP2; in particular, they are generated as an output of task T2.2 and are mostly experimental measurements. The following types of data will be produced:

1. Protocol and results
2. Experimental data and figures, used also in the deliverables (D2.2 and D2.3)

_Information about tools and instruments:_ Documentation will be available either in Microsoft Word format or as PDF files. Measured data will be available in the Bruker OPUS format, as Excel tables, or as Origin files produced by the Origin software.

_Standards and metadata:_ To guarantee a high academic standard, results will be documented using the typical nomenclature of the particular field of research. Measured data are either self-explanatory or will be described in an additional document. Metadata will be made available for each document or data set.

_Accessibility:_ Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.

## Data from testing and characterization of the MREs

Data from the testing and characterization of the molecular recognition elements (MREs) used for the smart surface will be mainly generated as an output of WP5, in particular as an output of task T5.4. Such data will include the description of the experimental procedure, as well as measurements needed for the characterization of the selected MREs, such as ELISA, surface plasmon resonance (SPR) and steady-state fluorescence measurements. The following types of data will be produced:

1. Protocol and results from the experiments and characterization
2.
Scientific publications related to the _smart surface_ development and preparation using the selected MREs

_Information about tools and instruments:_ Documentation will be available either in Microsoft Word format or as PDF files. Measured data will be available as Excel tables or as Origin files produced by the Origin software.

_Standards and metadata:_ To guarantee a high academic standard, results will be documented using the typical nomenclature of the particular field of research. Measured data are either self-explanatory or will be described in an additional document. Metadata will be made available for each document or data set.

_Accessibility:_ Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.

## Data related to the development of the US pre-concentration module

Data from the testing and characterization of the ultrasound (US) pre-concentration module used for sample pre-concentration will be mainly generated as an output of WP5, in particular as an output of tasks T5.2 and T5.3. Such data will include the description of the experimental set-up and the experimental procedure. In addition, operational parameters of the ultrasound device as well as IR data (FTIR spectra or QCL readings) will be recorded. The following types of data will be produced:

* Scientific publications
* Documentation of experimental set-ups
* Documentation of experimental results
* Documentation of hardware-related topics
* Measured data
* Source code

_Information about tools and instruments:_ Publications and documentation will be available either in Microsoft Word format or as PDF files. For reading original FTIR data, the Bruker OPUS software is required.
Selected measured data will be available as MATLAB data files, as Excel tables or, depending on the size of the dataset, as plain text files. Large files may be compressed to .zip files. The source code that may be made available can be read by any text editor, but specialized software (mainly MATLAB) will be needed to execute the programs.

_Standards and metadata:_ To guarantee a high academic standard, results will be documented using the typical nomenclature of the particular field of research. Measured data are either self-explanatory or will be described in an additional document. Source code will be commented and, where useful, explanatory examples will be provided. Metadata will be made available for each document or data set.

_Accessibility:_ Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.

## Data related to the laser modulation scheme and signal conditioning

_Data set description:_ The data in this category will be mainly the output of WP5 T5.5 and will mostly be the result of theoretical investigations. Additionally, measurements that prove the developed concept will be part of the generated data. The following types of data will be produced:

* Scientific publications
* Documentation of theoretical investigations
* Documentation of simulation results
* Documentation of hardware-related topics
* Measured data
* Source code

_Information about tools and instruments:_ For reading the data, no special tools are necessary. Publications and documentation will be available either in Microsoft Word format or as PDF files. Measured data will be available as Excel tables or, depending on the size of the dataset, as plain text files. Large files may be compressed to .zip files.
The source code that may be made available can be read by any text editor, but specialized software (mainly MATLAB) will be needed to execute the programs.

_Standards and metadata:_ To guarantee a high academic standard, results will be documented using the typical nomenclature of the particular field of research. Measured data are either self-explanatory or will be described in an additional document. Source code will be commented and, where useful, explanatory examples will be provided. Metadata will be made available for each document or data set.

_Accessibility:_ Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.

## Data related to the lab validation of the WaterSpy device

The data in this category is collected during lab tests of the WaterSpy device and is related to the developments of T6.4. The task is planned to run from M20 to M30. Data that may be suitable for public sharing could be related to the overall device testing and its outputs, as well as to the possible testing of individual components' performance. The data will be made available after ensuring the protection of possible IP issues and following the acceptance of related scientific or technical publications. Data will be shared in the form of .zip packets and may include: spreadsheets with experimental results, photos of the setup, and documents describing the experiment and the samples (standards) used. At the moment, it is not possible to estimate the file size, but it is not expected to exceed 100 MB per experiment, mostly due to the possible presence of images. The possible use or re-use by the scientific community is governed by the terms and conditions set out in the Grant Agreement.
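The .zip packaging described above can be sketched as follows. This is purely illustrative: the file names, CSV columns and metadata fields are our assumptions, not project conventions.

```python
import io
import json
import zipfile

def package_experiment(results_csv: str, metadata: dict) -> bytes:
    """Bundle one experiment's spreadsheet data and a descriptive
    metadata file into a single in-memory .zip payload."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("results.csv", results_csv)                       # experimental results
        zf.writestr("metadata.json", json.dumps(metadata, indent=2))  # experiment description
    return buf.getvalue()

payload = package_experiment(
    "sample_id,reading\nS1,0.42\nS2,0.37\n",
    {"task": "T6.4", "device": "WaterSpy lab prototype", "standard_used": "see description"},
)
```

Photos and further documents would be added the same way with additional `writestr` (or `write`) calls.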
_Information about tools and instruments_ : At the moment, no specialised tools are foreseen to be needed for using the research data, apart from common spreadsheet and text file editors. If any specialised tool is required for a particular dataset, information about this tool will be published along with the dataset. If the tool is freely available, a link for downloading it will also be provided. Any scientific papers to be published that make use of the datasets will also be associated with the datasets and be accessible through the same page.

_Standards and metadata_ : We will also create metadata for easier data discovery, including through search engines. During system validation, reference samples of known concentration of the target analytes will be used (standards). Information about these standards, including bacteria concentration, ionic strength, salt content, pH etc., will also be published along with the dataset.

_Accessibility_ : Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.

## Device performance assessment data

Data from this category might be shared with the scientific community, following research publications by all partners. In case of accepted publications, selected datasets will be shared with the community for verification purposes. Particular emphasis will be placed on IPR protection, since potentially sensitive data may appear in the results of those tests. The data in this category is generated and collected during and after the field validation of the WaterSpy device and is related to the developments of T7.3. The task is planned to run from M30 to M36. Data that may be suitable for public sharing could be related to the overall device testing and its outputs, as well as to the possible testing of individual components' performance.
The data will be made available after ensuring the protection of possible IP issues and following the acceptance of related scientific or technical publications. Data will be shared in the form of .zip packets and may include: spreadsheets with experimental results, photos of the setup, documents describing the validations, information about the test site, and the comparison measurements with gold standard procedures. At the moment, it is not possible to estimate the file size, but it is not expected to exceed 100 MB per experiment, mostly due to the possible presence of images. The possible use or re-use by the scientific community is governed by the terms and conditions set out in the Grant Agreement.

_Information about tools and instruments_ : At the moment, no specialised tools are foreseen to be needed for using the research data, apart from common spreadsheet and text file editors. If any specialised tool is required for a particular dataset, information about this tool will be published along with the dataset. If the tool is freely available, a link for downloading it will also be provided. Any scientific papers to be published that make use of the datasets will also be associated with the datasets and be accessible through the same page.

_Standards and metadata_ : We will also create metadata for easier data discovery, including through search engines. During system field testing, comparison with gold standard methodologies is foreseen, in order to generate ground truth data. Information about the gold standard methodology used for comparison will also be published.

_Accessibility_ : Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.
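As an illustration of the kind of discovery metadata mentioned above, a record for one field-test dataset might look as follows. The field names (loosely inspired by Dublin Core) and all values are illustrative assumptions, not a project specification.

```python
import json

# Hypothetical discovery metadata for one field-test dataset.
record = {
    "title": "WaterSpy field test - device performance run",
    "creator": "WaterSpy consortium",
    "subject": ["water quality", "bacteria detection", "QCL spectroscopy"],
    "format": "application/zip",
    # Information about the gold standard comparison and the reference sample
    "reference_standard": {
        "method": "conventional lab analysis (illustrative)",
        "bacteria_concentration_cfu_per_ml": 100,
        "ionic_strength_mM": 10,
        "pH": 7.2,
    },
}
print(json.dumps(record, indent=2))
```

Serialising the record as JSON keeps it both machine-readable for search engines and human-readable alongside the dataset.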
## Data related to impact assessment and system usability analysis

The impact of the WaterSpy pilot campaigns in both locations will be evaluated after the demos. Information on the initial situation will first be collected, in order to be able to evaluate the effect of using the device. Assessment will be carried out based on the KPIs mentioned in the DoA and in Section 1.1 of this document. The impact assessment takes the RoI for the end-users into consideration. System usability will also be evaluated with the help of the end-users. The data in this category is generated and collected during and after the field validation of the WaterSpy device and is related to the developments of T7.4. The task is planned to run from M31 to M36. Data that may be suitable for public sharing could be related to the impact assessment of the device use or to the usability analysis. The data will be made available after ensuring the protection of possible IP issues and following the acceptance of related scientific or technical publications. Data will be shared in the form of .zip packets and may include: photos from the demos, documents describing the initial situation, possible financial considerations, information about the test site, and the analysis of the system usability questionnaires. At the moment, it is not possible to estimate the file size, but it is not expected to exceed 100 MB per experiment, mostly due to the possible presence of images. The possible use or re-use by the scientific community is governed by the terms and conditions set out in the Grant Agreement.

_Information about tools and instruments_ : No specialised tools are foreseen to be needed for using the research data, apart from common spreadsheet and text file editors. Any scientific papers to be published that make use of the datasets will also be associated with the datasets and be accessible through the same page.
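The DoA does not prescribe a specific questionnaire instrument; assuming a standard System Usability Scale (SUS) questionnaire were used (our assumption, purely as a sketch), the questionnaire analysis would reduce to the usual SUS scoring rule: odd items are positively worded, even items negatively worded, and the result is scaled to 0-100.

```python
def sus_score(responses):
    """System Usability Scale: 10 items, each answered 1 (strongly disagree)
    to 5 (strongly agree); odd items positively worded, even items negatively."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                   # scale raw 0-40 to 0-100

print(sus_score([3] * 10))  # all-neutral answers -> 50.0
```

Averaging `sus_score` over all end-user questionnaires would give a single comparable usability figure per pilot site.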
_Standards and metadata_ : We will also create metadata for easier data discovery, including through search engines. If data related to financial projections are published, information about the methods used for the projections will also be published.

_Accessibility_ : Once a dataset is made available on the project repository, it will be freely available to the community, in the way described in the appropriate section below. No embargo period is foreseen.

# Data sharing

All sharable data will be published and hosted, as per individual availability, on the project's public website, _www.waterspy.eu_. Partners generating the data are also encouraged to publish the sharable data on other online repositories (for example, Zenodo.org). The WaterSpy website has friendly, easy-to-use navigation. It will be modified in due time to accommodate additional sections (pages) where the publishable data will be stored. The consortium will make sure that the available data can easily be retrieved by any interested party.

The data will be made available on the website through adaptive webpages. The pages will cover the topics and descriptive project information to an appropriate level for each set of information or dataset. The data will be formatted as per the description of each section, provided previously in this document, and will be presented for access along with the necessary links to download the appropriate software tools, if necessary. The pages will be available to the public domain, enriched with the necessary metadata, and open to web crawlers for search engine listing, so they will be available to the public through standard web searches.

Although the pages themselves are publicly available, the downloadable data will be presented along with the possible restrictions noted in the sections above. This means that the following will apply on the website in order to gain access to the information:

1. Terms and Conditions will apply and will have to be accepted prior to any download
2. Registration will be compulsory (free of charge) to maintain a data access record
3. For a certain, limited number of datasets, a form will be available to request access to the data; such requests will be subject to approval by the consortium

Downloadable formats will be:

1. PNG, BMP and JPEG file formats
2. ZIP and other public-domain compressed archives
3. PDF-formatted documents
4. WMV, MP4 or AVI formats for possible videos

All available datasets will be downloadable in their entirety.

# Archiving, preservation and access modality

The data, in all the various formats, will be stored in standard storage. As mentioned in the previous section, no specialised storage or access mechanism is necessary, as all datasets will be downloadable in their entirety. All the information, including the descriptive metadata where available, will remain accessible throughout the lifetime of the website, which is expected to stay in the public domain for at least five (5) years after the completion of the project. Due to the possibly large size of individual downloadable files, the storage used will be based on cloud services, and its cost is estimated (based on current prices) according to the table below:

<table>
<tr> <th> Geographically redundant storage (for disaster/recovery purposes) </th> <th> Monthly price per GB </th> <th> Estimated storage </th> <th> Estimated price per month </th> <th> Estimated price over 5 years </th> </tr>
<tr> <td> First 1 TB per month </td> <td> €0.0358 </td> <td> 1 TB (1024 GB) </td> <td> €36.66 </td> <td> €2,199.60 </td> </tr>
<tr> <td> Next 49 TB per month (1-50 TB) </td> <td> €0.0352 </td> <td> 1 TB (1024 GB) </td> <td> €36.05 </td> <td> €2,162.69 </td> </tr>
<tr> <td> <strong>Total</strong> </td> <td> </td> <td> </td> <td> </td> <td> <strong>€4,362.29</strong> </td> </tr>
</table>

Please note that we estimate that no more than 2 TB of data will be made publicly available. The cost of storage will be covered by the consortium members.
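The cost estimate can be sanity-checked with a short calculation from the quoted per-GB prices; differences of a few cents against the table are rounding artefacts.

```python
# Recompute the 5-year cloud storage estimate from the quoted prices.
GB_PER_TB = 1024
MONTHS = 5 * 12  # five years

tier_prices = [0.0358, 0.0352]  # EUR per GB per month: first TB, next band
monthly = [p * GB_PER_TB for p in tier_prices]    # ~36.66 and ~36.04 EUR per month
total_5_years = sum(m * MONTHS for m in monthly)  # ~4,362 EUR over five years
print(round(monthly[0], 2), round(total_5_years, 2))
```

This reproduces the tabulated figures (€36.66 per month for the first tier, and a five-year total within a few cents of €4,362.29).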
Please note that the prices are based on the current Microsoft Azure pricing ( _http://azure.microsoft.com/enus/pricing/details/storage/_ ). Please note that some data may be stored in the facilities of the consortium members that own them, but will still be referenced via the website. Data may also be shared with the community through public repositories.

# Conclusions

The procedures described in this document have been designed to guarantee smooth project execution. They describe the way deliverable submissions will be handled, the way project communications will take place, and the procedures for project dissemination and the organisation of meetings. They will be used by all partners for the entire project duration.
**1 Executive Summary**

The results of the UNICORN project will be openly published to communicate and spread the knowledge to all interested communities and stakeholders. Published results generate wider interest in the improvements achieved by the project, in order to facilitate and strengthen exploitation opportunities. The goal of this deliverable is to list publishable results and research data and to investigate the appropriate methodologies and open repositories for data management and dissemination. The UNICORN partners aim to offer as much of the information generated by the project as possible through open access. Such information includes scientific publications issued by the UNICORN consortium, published white papers, generated open source code, anonymous interview results, or mock-up datasets used for gathering customer feedback.

In general, there are two types of project results which differ in the way they are published, namely publications and other research data. Our publication strategy follows the ideas of open access and open research data. Figure 1 depicts how research activities lead to different results and how these results are disseminated and exploited. Scientific publications and related research data are published openly as far as possible. On the other hand, not all data collected or generated can be published openly, as it may contain private information or interfere with legal aspects. This kind of data must be identified and protected accordingly. One of the targets of the project is to deliver a specification/standard which, however, will not be publicly available.

2. **Introduction**

1. **Purpose of The Document**

Each project in the EC's Horizon 2020 programme has to define what kind of results are generated or collected during the project's runtime and when and how they are published openly. This document describes which results will be published in UNICORN within the whole project duration.
For all results generated or collected during UNICORN, a description is provided that includes the purpose of the result, the standards and metadata used for storage, and the facility used for open sharing, based on the template recommended by the EC (European Commission, 2013). This document is updated on a regular basis. However, it does not describe how the results are exploited, which is part of D5.1, D5.4 and D5.5.

2. **Terminology**

**Open Access** : Open access means unrestricted access to research results. Often the term open access is used to denote free online access to peer-reviewed publications. Open access is expected to enable others to a) build on top of existing research results, b) avoid redundancy, c) participate in open innovation, and d) read about the results of a project or inform citizens. All major publishers in computer science - like ACM, IEEE, Elsevier, or Springer - participate in the idea of open access. Both green and gold open access levels are promoted. Green open access means that authors eventually publish their accepted, peer-reviewed articles themselves, e.g. by depositing them in their own institutional repositories. Gold open access means that a publisher is paid (e.g. by the authors) to provide immediate access on the publisher's website, without charging any further fees to the readers.

**Open Research Data** : Open research data is related to the long-term deposit of underlying or linked research data needed to validate the results presented in publications. Following the idea of open access, all open research data needs to be openly available, usually meaning online availability. In addition, standardized data formats and metadata have to be used to store and structure the data. Open research data is expected to enable others to: 1. understand and reconstruct scientific conclusions, and 2. build on top of existing research data.

**Metadata** : Metadata defines information about the features of other data.
Usually metadata is used to structure larger sets of data in a descriptive way. Typical metadata are names, locations, dates, storage data types, and relations to other data sets. Metadata is very important when it comes to indexing and searching larger data sets for a specific kind of information. Sometimes metadata can be retrieved automatically from a dataset, but often some manual classification is also needed. The well-known tags in MP3 recordings are a good example of why metadata is necessary to find a specific genre or composer among a larger number of songs.

3. **Structure of The Document**

The rest of the document is structured into two sections. Section 2 defines the process that will be applied to all results collected or generated during UNICORN. The process defines whether a result has to be published or not. In addition, we provide a summary of all publishing platforms to be used by the UNICORN consortium. Section 3 lists publications and other publication-related data that is already, or may be, generated or collected during UNICORN. For each result, a short description, the chosen way of open access, and a long-term storage solution are specified according to the EC's data management guidelines (European Commission, 2013).

3. **Publishing Infrastructure for Open Access**

The UNICORN publication infrastructure consists of a process and several web-based publication platforms that together provide long-term open access to all publishable results generated or collected by the project. The implementation of the project fully complies with national and EU law, and especially with **Directive 95/46** on the protection of personal data. More specifically, no personal or sensitive information of internet users (IP addresses, email addresses or other personal information) is collected or processed. For the whole duration of the project, the Data Protection Officer (DPO – Ms.
Meltini Christodoulaki (FORTH)) will carefully examine the legality of the activities and of the tools (including platforms) that will be produced, to ensure that they do not violate internet users' personal data. In the potential future case where the UNICORN consortium collects, records, stores or processes any personal information, it will be ensured that this is done on the basis of respecting citizens' rights, preventing their identification and preserving their anonymity. Both the process and the web-based platforms used are described in the following subsections. The section ends with some research items and data that the project partners foresee will be reflected in respective publication actions. The research data will be the main subject of analysis with respect to data management in the next section.

1. **Publishing Process**

UNICORN partners defined a simple, deterministic process that decides whether a result in UNICORN must be published or not. The term result is used for all kinds of artefacts generated during UNICORN, like white papers, scientific publications, and anonymous usage data. By following this process, each result is classified as either public or non-public. Public means that the result must be published under the open access policy. Non-public means that it must not be published. For each result generated or collected during the UNICORN runtime, the following questions must be answered to classify it:

1. _Does a result provide significant value to others, or is it necessary to understand a scientific conclusion?_ If this question is answered with yes, then the result is classified as public. If this question is answered with no, the result is classified as non-public. Such a result could be code that is very specific to the UNICORN platform (e.g., a database initialization), which is usually of no scientific interest to anyone, nor does it add any significant contribution.

2.
_Does a result include personal information that is not the author's name?_ If this question is answered with yes, the result is classified as non-public. Personal information beyond the name must be removed if the result is to be published. This also bears witness to the iterative nature of the publishing process, where results which are initially deemed non-publishable can become publishable once privacy-related information is removed from them.

3. _Does a result allow the identification of individuals even without the name?_ If this question is answered with yes, the result is classified as non-public. Sometimes data inference can be used to superimpose different user data and indirectly reveal a single user's identity. As such, in order to make a result publishable, the included information must be reduced to a level where single individuals cannot be identified. This can be achieved by using established anonymisation techniques to conceal a single user's identity, e.g., abstraction, dummy users, or non-intersecting features.

4. _Does a result include business or trade secrets of one or more partners of UNICORN?_ If this question is answered with yes, the result is classified as non-public, except if the opposite is explicitly stated by the involved partners. Business or trade secrets need to be removed in accordance with all partners' requirements before the result can be published.

5. _Does a result name technologies that are part of an ongoing, project-related patent application?_ If this question is answered with yes, then the result is classified as non-public. Of course, results can be published after the patent has been filed.

6. _Can a result be abused for a purpose that is undesired by society in general, or contradict societal norms and UNICORN's ethics?_ If this question is answered with yes, the result is classified as non-public.

7.
_Does a result break national security interests for any project partner?_ If this question is answered with yes, the result is classified as non-public.

**3.2 Publishing Platforms**

In UNICORN, we use several platforms to publish our results openly. The following list presents the platforms used during the project and describes their concepts for publishing, storage, and backup.

1. **The project Website**

The partners in the project consortium decided early to set up a project-related website. This website describes the mission and the general approach of UNICORN and its development status. A blog provides news on a regular basis. Later in the project, the developed UNICORN platform will be announced there. A dedicated download area is used to publish reports and white papers as well as scientific publications (in pre-camera-ready form, or through links to the publishers' websites in case these are not open access). All documents are published using the portable document format (PDF). All downloads are enriched with simple metadata, such as the title and the type of the document. The website is hosted by partner FORTH. All information on the project website can be accessed without creating an account. All webpage-related data is backed up once per month.

Web-Link: _http://www.UNICORN-project.eu/_

2. **Zenodo**

Zenodo is a research data archive / online repository which helps researchers to share research results in a wide variety of formats, for all fields of science. It was created through the EC's OpenAIRE+ project and is now hosted at CERN on one of Europe's most reliable hardware infrastructures. Data is backed up nightly and replicated to different locations. Zenodo not only supports the publication of scientific papers or white papers, but also the publication of any structured research data (e.g., using XML).
Zenodo provides a connector to GitHub that supports open collaboration on source code and versioning for all kinds of data. All uploaded results are structured using metadata, such as the contributors' names, keywords, date, location, kind of document, license, and others. Regarding the language of textual metadata items, English is preferred. All metadata is licensed under the CC0 license (Creative Commons 'No Rights Reserved'). The property rights or ownership of a result do not change by uploading it to Zenodo. All public results generated or collected during the project lifetime will be uploaded to Zenodo for long-term storage and open access.

Web-Link: _http://zenodo.org_

3. **GitHub**

GitHub is a well-established online repository which supports distributed source code development, management, and revision control. It is primarily used for source code. It enables world-wide collaboration between developers and also provides facilities to work on documentation and to track issues. GitHub offers paid and free service plans. Free service plans can have any number of public, open-access repositories with unlimited collaborators. Private, non-public repositories require a paid service plan. Many open-source projects use GitHub to share their results for free. The platform uses metadata like contributors' nicknames, keywords, time, and data file types to structure the projects and their results. The terms of service state that no intellectual property rights over provided material are claimed by GitHub Inc. For textual metadata items, English is preferred. The service is hosted by GitHub Inc. in the United States. GitHub uses a rented Rackspace hardware infrastructure where data is backed up continuously to different locations. All source-code components that are implemented during this project and decided to be public will be uploaded to an open access GitHub repository.
Web-Link: _https://github.com/_

**3.3 Project Artefacts**

This section enumerates specific datasets, software and research items (from which publications can be produced), together with whether each artefact is publishable and through which means. A more detailed analysis spanning the different categories of artefacts is provided in the next chapter.

# Table 1: Project Artefacts

<table>
<tr> <th> **Artefact Type** </th> <th> **Artefact** </th> <th> **Publication Means** </th> </tr>
<tr> <td> **Research Item** </td> <td> Cloud Governance Mechanisms </td> <td> Zenodo, open-access journals, project web site, research-oriented social media </td> </tr>
<tr> <td> **Research Item** </td> <td> Continuous Security and Data Privacy Enforcement Mechanisms </td> <td> Zenodo, open-access journals, project web site, research-oriented social media </td> </tr>
<tr> <td> **Research Item** </td> <td> Risk, Cost and Vulnerability Assessment </td> <td> Zenodo, open-access journals, project web site, research-oriented social media </td> </tr>
<tr> <td> **Research Item** </td> <td> Policy Validation & Identity Signing </td> <td> Zenodo, open-access journals, project web site, research-oriented social media </td> </tr>
<tr> <td> **Software** </td> <td> Privacy by Design Libraries </td> <td> Github, Zenodo </td> </tr>
<tr> <td> **Software** </td> <td> Orchestrator for Microservices </td> <td> Github, Zenodo </td> </tr>
<tr> <td> **Software** </td> <td> Specialized Microservices for Cloud Monitoring </td> <td> Github, Zenodo </td> </tr>
<tr> <td> **Software** </td> <td> Specialized Microservices for Elasticity Control </td> <td> Github, Zenodo </td> </tr>
<tr> <td> **Software** </td> <td> Eclipse Che Plugin </td> <td> Github, Zenodo </td> </tr>
<tr> <td> **Dataset** </td> <td> Anonymous usage statistics </td> <td> Zenodo </td> </tr>
<tr> <td> **Dataset** </td> <td> UseCase data </td> <td>
</td> <td> Non-publishable </td> </tr> <tr> <td> **Dataset** </td> <td> ... </td> <td> </td> <td> Zenodo </td> </tr> </table> **4 Project Results / Data Sets** In this section, we first introduce the template that has been proposed by the EC in order to form the description of the data that will be produced in the context of UNICORN. Every use case will fill in such a template and subsequently all the templates will be collected with the beginning of WP6, the demonstration applications work package of the project. Then, two sub-sections will shortly focus on two types of specific artefacts produced by UNICORN, i.e., publications and software and finally the third one will be dedicated to the analysis of all data produced in UNICORN according to the previously presented template. # Table 2: Initial Dataset Template <table> <tr> <th> **Initial Dataset Template** </th> </tr> <tr> <td> **Dataset reference name** </td> <td> Identifier for the data set to be produced. </td> </tr> <tr> <td> **Dataset description** </td> <td> Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a </td> </tr> <tr> <td> </td> <td> scientific publication. Information on the existences (or not) of similar data and the possibilities for integration and reuse. </td> </tr> <tr> <td> **Standards and metadata** </td> <td> Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created. </td> </tr> <tr> <td> **Data sharing** </td> <td> Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling reuse, and definition of whether access will be widely open or restricted to specific groups. 
Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> <td> Description of the procedure that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximated end volume is, what the associated costs are and how these are planned to be covered. </td> </tr>
<tr> <td> **Additional Dataset explanation** </td> </tr>
<tr> <td> **Discoverable** </td> <td> Are the data and associated software produced and/or used in the project discoverable (and readily located), identifiable by means of a standard identification mechanism (e.g. Digital Object Identifier)? </td> </tr>
<tr> <td> **Accessible** </td> <td> Are the data and associated software produced and/or used in UNICORN accessible, and in what modalities, scope, licenses (e.g. licencing framework for research and education, embargo periods, commercial exploitation, etc.)? </td> </tr>
<tr> <td> **Assessable and intelligible** </td> <td> Are the data and associated software produced and/or used in the project assessable for and intelligible to third parties in contexts such as scientific scrutiny and peer review (e.g. are the minimal datasets handled together with scientific papers for the purpose of peer review; is data provided in a way that judgements can be made about their reliability and the competence of those who created them)? </td> </tr>
<tr> <td> **Usage beyond the original purpose for which it was collected** </td> <td> Are the data and associated software produced and/or used in UNICORN useable by third parties even long after the collection of the data (e.g. 
is the data safely stored in certified repositories for long-term preservation and curation; is it stored together with the minimum software, metadata and documentation to make it useful; is the data useful for wider public needs and usable for the likely purposes of non-specialists)? </td> </tr>
<tr> <td> **Interoperable to specific quality standards** </td> <td> Are the data and associated software produced and/or used in the project interoperable, allowing data exchange between researchers, institutions, organisations, countries, etc. (e.g. adhering to standards for data annotation and data exchange, compliant with available software applications, and allowing re-combinations with different datasets from different origins)? </td> </tr>
</table> **4.1 Publications** The management of the dissemination and publication process will be performed in WP7 and will be reported in deliverables D7.1, D7.2, D7.3, D7.4 and D7.6. Publications will be related to the main research results that will be achieved within UNICORN along its research directions. All scientific publications will be indexed, thus also obtaining a DOI, and will be made available to: * Zenodo, * respective literature databases, such as Scopus and DBLP, * research databases like ResearchGate and Academia.edu. **4.2 Software** WP7 will guide the management and exploitation of the software produced within this project, and the respective reporting will be reflected in deliverable D7.5, led by STW. The distribution of software mainly depends on the intended business model of the respective owning parties. In UNICORN, we foresee two types of software distribution: 1. open-source software distributed under a public license, mainly related to research/academic institutions as well as organisations with open-source distribution policies; 2. licensed software distributed with an appropriate (e.g. commercial) license, mainly related to commercial organisations. 
Licensed software will be announced through the respective dissemination channels of the commercial organisation as well as on the UNICORN web site. Open-source software will be made available on the UNICORN web site and will also be published in repositories like Zenodo and GitHub. This will also enable the further development and sustainability of the software even beyond the lifetime of UNICORN. The publishing of the open-source software will depend on the respective community targeted by the partner organisation. Respective dissemination actions will be performed to reach such a community. The repository on which software is distributed will handle the creation of respective metadata, such as author and commit history. Dependencies on third-party tools, operating systems and runtime environments will also be included, along with respective versioning information. Usage-related material like documentation and tutorials will also be developed and published to ease the exploitation of the software. Depending on the publication medium (e.g., Zenodo vs GitHub), each partner in UNICORN will use its own state-of-the-art version management mechanisms to store its source code contributions and will guarantee proper back-up of the code on a regular basis. Software published on GitHub will obtain a URL via which it can be accessed and exploited. It can also be made discoverable via web-based search engines, like Google. If software is also published in Zenodo, a proper DOI will be assigned to it. **4.3 Research Data** **4.3.1 Template Example** # Table 3: Dataset template example <table>
<tr> <th> **Initial Dataset Template** </th> <th> </th> </tr>
<tr> <td> **Dataset reference name** </td> <td> Dataset XXX. </td> </tr>
<tr> <td> **Dataset description** </td> <td> The dataset covers the description of ... </td> </tr>
<tr> <td> **Standards and metadata** </td> <td> Dataset description was performed in standard language X. 
</td> </tr>
<tr> <td> **Data sharing** </td> <td> The dataset will be published in Zenodo to be made freely available to everyone. </td> </tr>
<tr> <td> **Archiving and preservation** </td> <td> This is mainly the responsibility of the open-access repository (e.g., Zenodo). </td> </tr>
<tr> <td> **Additional Explanation** </td> <td> </td> </tr>
<tr> <td> **Discoverable** </td> <td> The dataset will be assigned a DOI via which it can be discovered. </td> </tr>
<tr> <td> **Accessible** </td> <td> A respective public licence will be associated with this dataset to govern access to it. </td> </tr>
<tr> <td> **Assessable and Intelligible** </td> <td> N.A. </td> </tr>
<tr> <td> **Usage beyond original collection purpose** </td> <td> The dataset can be exploited during and even beyond the course of UNICORN. </td> </tr>
<tr> <td> **Interoperable to specific quality standards** </td> <td> Respective methodologies, best practices and standards will be followed. </td> </tr>
</table> **4.3.2 More Concrete Example - Anonymous User Statistics** # Table 4: Anonymous User Statistics example <table>
<tr> <th> **Initial Dataset Template** </th> <th> </th> </tr>
<tr> <td> **Dataset reference name** </td> <td> Anonymous User Statistics. </td> </tr>
<tr> <td> **Dataset description** </td> <td> The UNICORN platform will provide online services not only to developers but also to customers (end-users). The idea of UNICORN includes social collaboration and anonymous customer profiles that help developers recruit potential users for a project or customize software for a market. After the UNICORN platform has gone online, real users will interact with the system. The behaviour of these users will be studied and used to improve the platform. Any results that are published will be reduced to anonymous user statistics. The details of this data set will be decided during the implementation and updated accordingly in the following versions of the data management plan. 
</td> </tr>
<tr> <td> **Standards and metadata** </td> <td> The data is stored in a platform-independent format (e.g. XML). Any names or features that may be used for single-user identification (e.g. e-mail address, phone number, date of birth) will be removed and summarized in a more abstract way (e.g. 300 users in the age of 30-40). </td> </tr>
<tr> <td> **Data sharing** </td> <td> User-related data that may lead to single-user identification will not be published. Only abstract user statistics that do not allow for single-user identification will be made openly accessible, and only if they contribute to the results of a scientific publication. Any anonymous user data (e.g. statistics) will be published openly using either existing institutional repositories or Zenodo. Access is free for everyone and without restrictions. </td> </tr>
<tr> <td> **Archiving and preservation** </td> <td> Any user-related data is stored in project-related databases, which are hosted by UNICORN partners. Each partner restricts access to private information by implementing a security policy. Databases are backed up on a regular basis by each partner. All public user statistics will be added to Zenodo for long-term preservation during or at the end of UNICORN. </td> </tr>
<tr> <td> **Additional Explanation** </td> <td> </td> </tr>
<tr> <td> **Discoverable** </td> <td> The dataset will be assigned a DOI via which it can be discovered. </td> </tr>
<tr> <td> **Accessible** </td> <td> A respective public licence will be associated with this dataset to govern access to it. </td> </tr>
<tr> <td> **Assessable and Intelligible** </td> <td> N.A. </td> </tr>
<tr> <td> **Usage beyond original collection purpose** </td> <td> The dataset can be exploited during and even beyond the course of UNICORN. </td> </tr>
<tr> <td> **Interoperable to specific quality standards** </td> <td> Respective methodologies, best practices and standards will be followed. </td> </tr>
</table>
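The anonymisation-by-abstraction described in Table 4 (dropping identifying fields, then keeping only coarse summaries such as "300 users in the age of 30-40") can be sketched as follows. This is a minimal illustration: the record field names are assumptions, not the actual UNICORN schema.

```python
from collections import Counter

# Fields that may identify a single user (names are illustrative only).
DIRECT_IDENTIFIERS = {"name", "email", "phone", "date_of_birth"}

def anonymize(records, age_bucket=10):
    """Drop direct identifiers and coarsen ages into buckets."""
    cleaned = []
    for rec in records:
        out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
        if "age" in out:
            lo = (out["age"] // age_bucket) * age_bucket
            out["age"] = f"{lo}-{lo + age_bucket}"
        cleaned.append(out)
    return cleaned

def age_statistics(records):
    """Abstract statistics that cannot identify a single user."""
    return Counter(rec["age"] for rec in records if "age" in rec)

users = [{"email": "a@example.org", "age": 34},
         {"email": "b@example.org", "age": 37},
         {"email": "c@example.org", "age": 52}]
stats = age_statistics(anonymize(users))
# stats == Counter({'30-40': 2, '50-60': 1}); no e-mail address survives
```

Only the aggregated counts in `stats` would be published; the per-user records never leave the partner-hosted databases.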
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1214_SOLUS_731877.md
# FAIR DATA 3.1. Making data findable, including provisions for metadata: * **Outline the discoverability of data (metadata provision)** * **Outline the identifiability of data and refer to standard identification mechanisms. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?** * **Outline naming conventions used** * **Outline the approach towards search keywords** * **Outline the approach for clear versioning** * **Specify standards for metadata creation (if any). If there are no standards in your discipline, describe what metadata will be created and how** The two datasets will be identified using two unique identifiers (DOIs) by uploading them onto a public repository. Data discoverability will be facilitated by adding a data description with keywords related to potential users (e.g. developers of new analysis tools), as described above. For the Phantom dataset, different updated measurement sessions are possible, depending on updated versions of the prototype. Conversely, for the Clinical dataset a single measurement session is foreseen, since there is no provision to recall the same patient. Different versions of analysis are possible, depending on the update of the analysis tools. Therefore, the versioning will foresee a first number for the raw data acquisition (only for phantoms) and a second number for the analysis. Apart from clinical images (e.g. US images), for which the DICOM standard is usually adopted, there are no specific standards for optical data. In general, we will create metadata files in XML, embedding large binary data in XML with Base91 encoding. The software developed for the SOLUS project generates data from experiments. These data are the results of a sequence that acquires data on an instrument and then performs signal processing on them. An experiment takes place in a context that must be attached to the result to allow for proper data management. 
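As an illustration of such a metadata file, the sketch below builds a small XML record carrying keywords, the two-number versioning scheme (raw acquisition, then analysis) and an embedded binary payload. The element names are assumptions for illustration only, and base64 stands in for the Base91 encoding mentioned above, since no Base91 codec ships with the Python standard library.

```python
import base64
import xml.etree.ElementTree as ET

def build_metadata(keywords, acq_version, ana_version, payload):
    """Sketch of a metadata.xml for one measurement (element names assumed)."""
    root = ET.Element("measurement")
    ET.SubElement(root, "keywords").text = ", ".join(keywords)
    # First number: raw data acquisition (phantoms only); second: analysis run.
    ET.SubElement(root, "version").text = f"{acq_version}.{ana_version}"
    blob = ET.SubElement(root, "data", encoding="base64")  # plan: Base91
    blob.text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(root, encoding="unicode")

xml_doc = build_metadata(["diffuse optics", "phantom"], 2, 1, b"\x00\x01raw")
```

Parsing the document back recovers the binary payload with `base64.b64decode(ET.fromstring(xml_doc).find("data").text)`.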
All these data are categorized into eight classes: <table>
<tr> <th> **Class** </th> <th> **Description** </th> <th> **Example** </th> </tr>
<tr> <td> Project </td> <td> Name of the project or the sub-project for which the experiment is done </td> <td> SOLUS, SOLUS_characterization, SOLUS_clinical </td> </tr>
<tr> <td> User </td> <td> Name of the operator during the experiment </td> <td> Researcher during the characterization, radiologist during the clinical tests </td> </tr>
<tr> <td> Subject </td> <td> Entity on which the acquisitions are performed </td> <td> Multimodal phantoms during the characterization, patient during the clinical tests </td> </tr>
<tr> <td> Instrument </td> <td> Equipment used to acquire the data </td> <td> Prototype #1 </td> </tr>
<tr> <td> Device </td> <td> Subsystems of an instrument </td> <td> Bimodal probe #1, bimodal probe #2 </td> </tr>
<tr> <td> Sequence </td> <td> Set of instructions mixing acquisition and data processing </td> <td> Sequence for phantom characterization, sequence for clinical tests </td> </tr>
<tr> <td> Processing </td> <td> Algorithm for signal analysis developed during the project and available in the sequence </td> <td> Ultrasound image segmentation, optical parameter estimation </td> </tr>
<tr> <td> Results of experiments </td> <td> Set of acquisitions and processed signals </td> <td> Results for characterization of phantom #3, results for patient ID31 </td> </tr>
</table> They are all organized as records (i.e. containers for data), which are stored in the eight tables (i.e. containers for records) of a database. In order to avoid a dependency on third-party software, to make the data easy to read from an external program, and to ease data recovery as well as custom use of the database, a dedicated solution has been developed. The database is written on the hard drive as a hierarchy of folders and files. 
It has the following structure: <table>
<tr> <th> </th> <th> **Box A:** (top folder) installation folder **Box B:** (subfolder of A) root folder for the database named `SOLUS' (prefix DBROOT_) **Box C:** (subfolders of B) set of tables required for the software (prefix TABLE_) **Box D:** (subfolders of C) set of records for the selected table (prefix RECORD_) </th> </tr>
</table> Each table stores one of the eight types of data. The folder of a table has the prefix _TABLE__ . It contains the records and the file _metadata.xml_ that describes the configuration. The name of the folder of a record is _RECORD_key_ . The key is a unique identifier that is automatically generated at record creation (refer to java UUID for more information about the identifier). There are three types of records, depending on the data to save and the history policy: <table>
<tr> <th> </th> <th> **Single** </th> <th> **Version** </th> <th> **Bag of data** </th> </tr>
<tr> <td> Number of data items </td> <td> One </td> <td> One </td> <td> On purpose </td> </tr>
<tr> <td> History </td> <td> No </td> <td> Yes </td> <td> No </td> </tr>
<tr> <td> Tables </td> <td> User, project, device, processing </td> <td> Instrument, sequence, subject </td> <td> Result </td> </tr>
<tr> <td> Common files </td> <td colspan="3"> * **metadata.xml:** short description of the record * **format.xml:** definition of the object that is saved * **misc:** file(s) or folder(s) containing the data (examples below: definition.xml, sequence.mat, data.mat) </td> </tr>
<tr> <td> Thumbnail </td> <td> One </td> <td> One per version </td> <td> One </td> </tr>
<tr> <td> Specific folders </td> <td> None </td> <td> **VERSION_###** contains the data for version number ### </td> <td> **DATA_LINK_*** contains a pointer to another record of the database **DATA_x_out_y** contains the data of the output named y that is generated by the item named x **ITEM_#####** is a subfolder of DATA_*. 
It contains the data for iteration ##### </td> </tr> </table> _(Structure: diagram of the on-disk folder hierarchy not reproduced.)_ The open data will be organized closely following this internal software structure. However, the exact content of the open database, as well as the release of software tools to ease the reading of these data, is still subject to discussion. 3.2. Making data openly accessible: * **Specify which data will be made openly available. If some data is kept closed, provide the rationale for doing so** * **Specify how the data will be made available** * **Specify what methods or software tools are needed to access the data. Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?** * **Specify where the data and associated metadata, documentation and code are deposited** * **Specify how access will be provided in case there are any restrictions** Data will be made "as open as possible, as closed as necessary". In this respect, all data described above will be made open apart from: * algorithms for data analysis, which could be considered for IP protection; * personal data subject to privacy protection, according to GDPR Regulation (EU) 2016/679, and ethical provisions; * SW elastography images. In particular, clinical data retrieved at Ospedale San Raffaele (OSR) will be directly saved without any personal data. As defined in Deliverable D5.1 (“Definition of the clinical protocol”), the following data will be saved for scientific purposes: * age * menopausal status * pathology results * results from color Doppler * BI-RADS score Since the discussion on the possibility to share SW images as open access is still ongoing, more details will be given in the final version of the DMP, due by month 48. The software will automatically assign a numerical ID to each patient for data analysis, as described in section 3.1. 
All specifications required to access the data will be inserted in the data repository. The segmentation of US images, and in general the extraction of optical properties for suspect lesions/inhomogeneities, requires advanced analysis tools, generally pertaining to the methods of inverse problems in diffuse optics. If already published or not involved in IP protection, the algorithms will be described in detail to permit replication. Inclusion of software tools for data processing will be considered if it does not cause a significant overburden, distracting important resources from the fulfilment of the project aims. A three-phase process for data storage is foreseen. Initially, data will be collected by the SOLUS prototype and stored locally on the instruments, while other information will be gathered by clinicians and recorded on paper (as described in Deliverable D5.1). In the second phase, all collected data will be stored at the POLIMI data warehouse, apart from protected clinical information, which will be retained at OSR. This will permit construction of the database and initial tests on analysis. In the third phase, when data acquisition is complete, data will be uploaded to an open repository. At present, the choice is Zenodo, because it perfectly matches the requirements and attracts increasing interest in the international community. Still, the final decision will be made close to the actual deposition (not earlier than month 36) to take into account the updated status of public repositories. 3.3. Making data interoperable: * **Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.** * **Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? 
If not, will you provide mapping to more commonly used ontologies?** The realm of clinical optical data is at present not covered by standards or specific vocabularies. The limited size of the clinical study restricts its potential use mainly to researchers and operators within the field. The definition of metadata, and in particular the fields in the XML, will match the vocabularies most often used in scientific publications in diffuse optics. 3.4. Increase data re-use (through clarifying licenses): * **Specify how the data will be licenced to permit the widest reuse possible** * **Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed** * **Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why** * **Describe data quality assurance processes** * **Specify the length of time for which the data will remain re-usable** A 6-12 month embargo after acceptance of relevant publications will be considered. Data will be made available and reusable through open data repositories for periods of 10 years. # ALLOCATION OF RESOURCES **Explain the allocation of resources, addressing the following issues:** * **Estimate the costs for making your data FAIR. Describe how you intend to cover these costs** * **Clearly identify responsibilities for data management in your project** * **Describe costs and potential value of long term preservation** Since data deposit in a local data warehouse and an external repository will not start earlier than two years from now, cost estimates will be made in due time, since policies and costs are rapidly changing under great internal and external pressure on data preservation and sharing. 
In general terms, it is highly probable that no extra costs will be incurred for the storage of data, since the overall data volume can be handled by standard POLIMI data facilities and fits in the free allowances of the Zenodo repository. Concerning the POLIMI warehouse, in September 2018 the Institution offered each Department support for data storage related to European projects. This data storage will be directly managed by the central ICT staff of POLIMI, providing high-level services in terms of backup, robustness, protection and restricted access. A request has already been sent to get access to that space. The feasibility of that opportunity will be definitely updated in the final version of the DMP, due by month 48. Dr Andrea Farina is responsible for the coordination of the overall data management. # DATA SECURITY **Address data recovery as well as secure storage and transfer of sensitive data** The second phase of data storage will be performed internally at a data warehouse of POLIMI, and at OSR for protected clinical information. No access external to the consortium will be possible. The current data repository for the research group at POLIMI is located on secure hard drives in a redundant system (RAID 5) that is backed up every week by an incremental back-up script (rsbackup) to other external servers. The data servers are located in the basement of the Physics Department of Politecnico di Milano, in a restricted-access area. The data servers have password-controlled access and are part of a VLAN with no access from outside the POLIMI institution. The VLAN, to which not only the data servers but all the PCs used for this project are connected, is part of an institutional network protected by a firewall. We note that the POLIMI group has a proven track record in long-term data storage and access going back to the 1980s. 
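A weekly incremental backup of the kind described above (each snapshot storing only the differences, with unchanged files hard-linked against the previous snapshot) could be driven by an rsync invocation such as the one sketched below. The paths and host are hypothetical, and only the command line is built here; it could then be executed with `subprocess.run`.

```python
def weekly_backup_cmd(source, dest, previous_snapshot):
    """Build an incremental rsync backup command: files unchanged since
    the previous snapshot are hard-linked (--link-dest) instead of copied."""
    return [
        "rsync", "-a", "--delete",
        f"--link-dest={previous_snapshot}",
        source.rstrip("/") + "/",   # trailing slash: sync directory contents
        dest,
    ]

cmd = weekly_backup_cmd("/data/solus", "backuphost:/snapshots/week-42",
                        "/snapshots/week-41")
```

With this scheme each weekly snapshot looks like a full copy on the external server, while disk usage grows only by the changed files, which matches the incremental policy stated above.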
If POLIMI is granted access to the institutional repository, the stored data will inherit the high-level security of the overall institutional network. More details about the security level will be specified in the final version of the DMP (M48). In the final phase, the public repository will be chosen to grant the requirements of long-term secure storage. The most probable choice - Zenodo - already fulfils all requirements. Sensitive data - mostly personal data of the clinical study - will not be shared and will be stored only at OSR, to comply with the privacy policies foreseen in the clinical protocol. # ETHICAL ASPECTS **To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former** The clinical protocol (Deliverable D5.1 - Definition of the clinical protocol - produced at M3) and the ethical requirements in terms of protection of personal data (Deliverable D5.2 - Approval of clinical protocol by ethical committee - due at month 36) set specific requirements for the anonymization of data and the protection of patients' personal data. These requirements will be strictly followed and will prevent sharing of such information. All data stored at the POLIMI data warehouse and deployed to the public repository will be completely anonymized. Patient information and informed consent will follow the guidelines set forth in ISO 14155 and will also cover the sharing of data, excluding sensitive data. # OTHER **Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)** At present, the main local procedures for data management are related to the requirements of sensitive data protection described in the clinical protocol (Deliverable D5.1) and operated by OSR. No other prescriptive procedures have been identified so far. 
# CONCLUSIONS In summary, two independent datasets will be generated: i) Phantom data and ii) Clinical data. These two datasets will be organized into an XML-based database whose folders are saved on the hard drive of the clinical machine. Every measurement will be associated with an ID number for anonymization. A copy of this database will be placed in a local repository hosted at the Physics Department of Politecnico di Milano, with access restricted to consortium members. A request for space with restricted permissions on the new institutional repository has been sent; the outcome will be discussed in the last version of the DMP, due by month 48. The two datasets will eventually be uploaded for open access as two independent records on the Zenodo public repository, except for: * algorithms for data analysis, which could be considered for IP protection; * personal data subject to privacy protection, according to GDPR Regulation (EU) 2016/679, and ethical provisions; * SW elastography images. The policy on SW elastography images is still under discussion; updates will be given at month 48 with the final version of the document.
1215_InKreate_731885.md
# Executive summary The Data Management Plan (DMP) of the InKreate project is designed according to the “Guidelines on FAIR Data Management in Horizon 2020”: research data should be findable, accessible, interoperable and reusable (FAIR). The DMP is a living document. The current version covers the data that will be collected according to the DoA (Description of Action). The DMP will be updated if any data not foreseen in the DoA is collected during the project. The only partner who collects data during the project is the Institute of Biomechanics of Valencia (IBV). IBV collects human shapes (3-D scans) and personal data according to ethical principles and legislation applicable to scientific research, as described in Deliverable 9-1. The collected data will be curated using interchangeable formats to facilitate access by third parties: STL for 3-D data and CSV for personal data. All data is deposited in the Zenodo repository (https://www.zenodo.org) under a Creative Commons CC-BY licence. This way, third parties are enabled to access, mine, exploit, reproduce and disseminate (free of charge for any user) the data collected during the project. # Purpose of the data collected Part of the developments that will be carried out during INKREATE will be based on libraries of 3-D body shapes, and applications to generate models of such shapes from user data and 2-D pictures. Those development activities will require 3-D scans of human bodies. That collection will be done by IBV, in experimental activities with external volunteers. This involves recording, storing and managing personal data, including images of the participants' bodies. All pictures and scans will be taken in the Laboratory of Human Shapes of IBV. These activities will be done according to ethical principles and legislation applicable to scientific research, as described in Deliverable 9-1. Figure 1. 3-D body scanner. # Data curation 3D scans will be stored in STL files. 
STL is a standard file format supported by many software packages, and it is widely used for rapid prototyping, 3D printing and computer-aided manufacturing. STL files describe the surface geometry of a three-dimensional object. An STL file describes a raw unstructured triangulated surface by the unit normal and vertices (ordered by the right-hand rule) of the triangles, using a three-dimensional Cartesian coordinate system. In addition, the following personal data will be recorded for each participant: * Gender * Birth date * Country of residence * Weight * Stature and other anthropometric measures These data will be recorded in electronic format using a comma-separated values (CSV) file. CSV files store tabular data in plain text. CSV is a common data exchange format that is widely supported by consumer, business, and scientific applications (e.g. Microsoft Excel). The data will be deposited in the Zenodo repository ( _https://www.zenodo.org_ ) under a Creative Commons CC-BY licence. This way, third parties are enabled to access, mine, exploit, reproduce and disseminate (free of charge for any user) this research data. # Conclusion This report explains the Data Management Plan (DMP) of the InKreate project. The DMP is designed according to the “Guidelines on FAIR Data Management in Horizon 2020”: research data should be findable, accessible, interoperable and reusable (FAIR). _Deliverable 8.4 Data Management Plan INKREATE_ - Version 100, Date 05-06-2017
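The two interchange formats described above - an ASCII STL facet for the 3-D scans and a CSV row for the personal data - can be illustrated with the short sketch below. The CSV column names follow the list of recorded fields above, while the exact file layout used by IBV is an assumption for illustration.

```python
import csv
import io

def ascii_stl(name, triangles):
    """triangles: iterable of (normal, (v1, v2, v3)) tuples, one per facet."""
    lines = [f"solid {name}"]
    for (nx, ny, nz), verts in triangles:
        lines.append(f"  facet normal {nx} {ny} {nz}")
        lines.append("    outer loop")
        for x, y, z in verts:           # vertices ordered by right-hand rule
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

def subjects_csv(subjects):
    """One row per participant with the personal-data fields listed above."""
    buf = io.StringIO()
    fields = ["gender", "birth_date", "country", "weight_kg", "stature_cm"]
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(subjects)
    return buf.getvalue()

scan = ascii_stl("scan_0001", [((0, 0, 1), ((0, 0, 0), (1, 0, 0), (0, 1, 0)))])
table = subjects_csv([{"gender": "F", "birth_date": "1985-03-02",
                       "country": "ES", "weight_kg": 61, "stature_cm": 168}])
```

Both outputs are plain text, which is what makes these formats suitable for deposit on Zenodo and reuse by third-party tools.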