1429_COMED_779306.md
**EXECUTIVE SUMMARY**
In December 2013, the European Commission launched a flexible pilot for open
access to research data (ORD pilot) as part of the Horizon 2020 Research and
Innovation Programme. The aim of the ORD pilot is to disseminate the results
of publicly funded research more broadly and faster, for the benefit of
researchers, innovative industry and citizens.
As part of H2020, COMED is committed to opening its research data through the
ORD pilot, and the Data Management Plan (DMP) is one of the instruments for
achieving this objective.
COMED’s DMP is one of COMED’s deliverables and gives an overview of the
available research data, their accessibility, and the terms of use and
management of the data. The deliverable outlines how the research data
collected or generated will be handled during and after the COMED action,
describes which standards and methodology for data collection and generation
will be followed, and whether and how data will be shared. The DMP is intended
as a living document: it reflects the current state of the discussions, plans
and ambitions of the COMED partners, and will be updated as work progresses.
This document follows the template provided by the European Commission in the
Participant Portal.
This deliverable provides the first version of the DMP elaborated by the COMED
project and has been produced jointly by all members of the COMED consortium.
# 1\. INTRODUCTION
## 1.1. Why is a DMP needed?
In December 2013, the Commission launched a flexible pilot for open access
to research data (ORD pilot) as part of the Horizon 2020 Research and
Innovation Programme. The pilot aims to improve and maximise access to and
re-use of research data generated by Horizon 2020 projects, taking into
account the need to balance openness and protection of scientific information,
following the principle of _'as open as possible, as closed as necessary'_.
The rationale behind the choice of committing to open data through the ORD
pilot is to disseminate the results of publicly funded research more broadly
and faster, for the benefit of researchers, innovative industry and citizens.
Open Access accelerates the dissemination process and the transfer of research
results to the market, and it also avoids duplication of research efforts. The
Open Access policy is also beneficial to researchers: making research publicly
available increases the visibility of the work performed and fosters the
potential for collaboration with other institutions in new projects. It also
eases the reproducibility of results, in line with current debates in the
scientific community 1 .
Projects must aim to deposit the research data needed to validate the results
presented in the deposited scientific publications, known as "underlying
data". In order to supply this data effectively, projects need to consider at
an early stage how they are going to manage and share the data they create or
generate. The Data Management Plan (DMP) specifies the implementation of the
pilot, in particular with regard to the data generated and collected, the
standards in use, the workflow to make the data accessible for use, re-use and
verification by the community, and the strategy for the curation and
preservation of the data.
## 1.2. Implementation of the DMP in COMED
The partners of COMED participate in the Open Access Pilot for Research Data.
The DMP is included in the Description of Work (DoW) as a deliverable (D11.2).
This DMP is a living document, and the creation of an initial version is
scheduled for project month 6. It is drafted in compliance with the guidelines
on data management in the Horizon 2020 Online Manual 2 . This deliverable will
evolve during the lifetime of the project and faithfully represent the status
of the project's reflections on data management. Updates of the DMP are thus
planned and will be submitted to the EC as an integral part of the Project
Periodic Reports.
The lead for this task lies with UB, though all partners are involved in
complying with the DMP. The partners agree to deliver datasets and metadata
produced or collected in COMED according to the rules described in the DMP,
and to contribute to the document for the part relating to the work package
(WP) of which they are leader ( _section 4_ ). The project office, and in
particular the Scientific Officer, are also central players in the
implementation of the DMP and will track compliance with the agreed rules.
## 1.3. What kind of data are considered in the DMP?
In the latest version of the Guidelines to the Rules on Open Access to
Scientific Publications and Open Access to Research Data in Horizon 2020 2 ,
it is stated that **research data** _refers to information, in particular
facts or numbers, collected to be examined and considered as a basis for
reasoning, discussion, or calculation. In a research context, examples of data
include statistics, results of experiments, measurements, observations
resulting from fieldwork, survey results, interview recordings and images. The
focus is on research data that is available in digital form. Users can
normally access, mine, exploit, reproduce and disseminate openly accessible
research data free of charge._
The Open Research Data Pilot applies to two types of data:
1. the 'underlying data' (the data needed to validate the results presented in scientific publications), including the associated metadata (i.e. metadata describing the research data deposited), as soon as possible
2. any other data (for instance curated data not directly attributable to a publication, or raw data), including the associated metadata, as specified and within the deadlines laid down in the DMP – that is, according to the individual judgement by each project/grantee.
For the purposes of the DMP as a deliverable of the COMED project, we will
distinguish between collected research data and generated research data. The
former are existing data produced by various sources, which will be
systematically collected and stored together in data libraries/platforms. We
will produce metadata for this type of data, describing the availability of
the datasets included in the library/platform that will be created. Generated
data are data that will be created ex novo as part of the project. Different
data pose different challenges in achieving open data through the ORD pilot.
# 2\. COMED PROJECT
## 2.1. Project’s objectives
The overarching objective of the COMED project is to push the boundaries of
existing methods for cost and outcome analysis of healthcare technologies, and
to develop tools to foster the use of economic evaluation in policymaking.
Within this general agenda, the COMED project explores a broad range of
healthcare technologies that fall under one specific category: medical
devices.
The main objectives of COMED are:
1. to improve economic evaluation methods for medical devices in the context of the health technology assessment framework by increasing their methodological quality and integrating data from different data sources
2. to investigate health system performance through analysis of variation in costs and outcomes across different geographical areas
3. to strengthen the use of economic evaluation of medical devices in policy making
The integration of (existing) data from different data sources, as well as the
generation of ad hoc new data, are the key vehicles to achieve the objectives
of the COMED project. Such data will also be of use after the end of the
project, not only to the members of the consortium but also to other
stakeholders and researchers.
A complete list of the data that will be collected and created with the
corresponding timetable and leading partner is shown in Table 1 for each WP
and relevant task.
**Table 1.** COMED Data
| **WP** | **Task** | **Dataset name** | **Data collected/generated** | **Timelines** | **PI** |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | RWD Mapping | Collected | M1-M12 | UB |
| 1 | 2 | Expert solicitation Learning Curve | Generated | M6-M12 | EUR |
| 2 | 2 | Surrogate Outcome Mapping | Collected | M6-M30 | UEMS |
| 2 | 3 | Semi-structured interviews on Surrogate Outcomes | Generated | M18-M30 | UEMS |
| 3 | 1 | mHealth Mapping | Collected | M6-M12 | UB |
| 4 & 5 | | Data library | Collected | M6-M18 | HCHE |
| 6 | 2 | Expert survey Early Dialogue | Generated | M6-M18 | UBERN |
| 6 | 3 | Case Study Early Dialogue | Collected/Generated | M6-M24 | UBERN |
| 7 | 1 | Interviews | Generated | M24-M30 | UB |
| 8 | 1 | RWE on Transferability of MD HTA/EE | Collected | M6-M12 | SYREON |
| 8 | 2 | Focus group | Generated | M18-M24 | SYREON |
| 8 | 3 | Stakeholders mini-conference | Generated | M18-M33 | SYREON |
## 2.2. Project’s data
As noted, COMED will both collect existing data from partners and third
parties and create new data. Research data will be collected/generated and
metadata produced; the project will also produce reviews, manuscripts and
dissemination material. While the aim of this DMP is to explain and describe
Research Data and Metadata according to the H2020 framework, in the following
we briefly outline the different outputs that will originate from this
project.
* **Research data:** this category comprises, on the one hand, existing sources of data - including databases, surveys, patient chart reviews, randomized controlled trials, pragmatic clinical trials, observational data from cohort studies, registries, routine administrative databases, etc. - that will be mapped and structured as a data library and made available with metadata, according to the open access rules applying to each data source. On the other hand, generated research data will take various forms, such as surveys, structured and semi-structured interviews, focus groups, discrete choice experiments, and others. These data will be created to address specific objectives of the different work packages and will be produced with the respective metadata.
* **Metadata:** “data about data”; the information that describes the data being published, with sufficient context or instructions to be intelligible to other users.
Metadata will allow the proper organization of, search for and access to the
generated information and will make it possible to identify and locate the
data via a web browser or web-based catalogue. In the context of data
management, metadata will form a subset of the data documentation, for both
collected and generated data, explaining the purpose, origin, description,
time reference, creator, access conditions and terms of use of a data
collection.
* **Reviews:** reviews, systematic where possible, will be the starting point of most WPs. They will synthesize the key findings of the current literature on specific topics and investigate contributions from a broad range of scientific disciplines. In the reviews, existing works are synthesized and elaborated into a new (generated) piece of evidence. The present DMP does not consider literature reviews among the COMED datasets. All reviews published within the COMED project will be open access.
* **Manuscripts:** manuscripts will consist of all the reports and peer-reviewed articles generated during the project, all deliverables, publications and internal documents. Microsoft Word (DOCX) and PDF will be used for final versions, while intermediate versions may use alternative formats, such as ODT or TeX (LaTeX) files.
* **Dissemination material:** COMED will produce dissemination material in a variety of forms: website, project meetings, workshops, flyers, and public presentations at national and international conferences.
All partners will be actively involved in the production of each type of data.
# 3\. FAIR DATA
This DMP contains information on research data and metadata and is conceived
as a living document. At this stage of the research, as shown in Table 1, most
of the data collection and generation is still to be conducted, and many
questions concerning the data remain open for discussion, mostly concerning
the FAIR principles (Findable, Accessible, Interoperable, Re-usable). We will
add relevant information to the DMP over the course of the project.
Intermediate and final versions will be issued before the end of the project,
and additional editions will be produced if needed.
To compose this DMP, the work package leaders were asked to describe the
different datasets that will be collected/generated within their WP. However,
since many sections can only be filled in provisionally, and some general
guidelines apply to all datasets, in this section we report the common FAIR
principles and general rules that will be followed to collect, generate and
manage the data during the project.
For data generation, we will follow the EC guidelines for ethics
self-assessment 3 : we will inform individual research subjects about all
aspects of the research in which they are being asked to participate,
including the future use of the data they might provide, the complete details
and the possible risks they might face. We will inform participants that
participation is entirely voluntary and document their informed consent in
advance, unless national law provides for an exception (e.g. in the public
interest). Informed consent will be delivered in a form that participants can
fully understand; it will illustrate clearly the aims, methods and
implications of the research, the nature of the participation and any
benefits, risks or discomfort that might ensue. We will seek participants’
consent in written form whenever possible (e.g. by signing the informed
consent form and information sheets). For data collection, we will comply with
the General Data Protection Regulation (GDPR) (EU) 2016/679, which came into
force in May 2018.
In Section 4, a specific description of each dataset is presented separately
for the tasks of each work package, using the standard EC template for a DMP
and including only dataset-specific elements, while the FAIR principles
applying to all datasets are described below in this section.
## 3.1. Findability
All sources of data collected and generated will be complemented by metadata.
Each generated dataset will get a unique Digital Object Identifier (DOI).
For the naming of each dataset, files and folders in data repositories will be
versioned and structured using a naming convention consisting of project name,
work package, dataset name and version (e.g. COMED_WP1_DS_1.xlsx).
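As an illustration only (the DMP itself does not prescribe tooling), this
convention can be encoded and checked with a short script; the helper name and
the validation pattern below are our own assumptions, not part of the plan.

```python
# Illustrative sketch of the COMED naming convention
# <project>_<WP>_<dataset>_<version>.<ext>, e.g. COMED_WP1_DS_1.xlsx.
import re

def dataset_filename(wp, dataset, version, ext="xlsx"):
    """Compose a file name following the COMED naming convention."""
    return f"COMED_WP{wp}_{dataset}_{version}.{ext}"

# A pattern that repository tooling could use to check compliance.
NAME_PATTERN = re.compile(r"^COMED_WP\d+_[A-Za-z0-9-]+_\d+\.[a-z0-9]+$")

assert dataset_filename(1, "DS", 1) == "COMED_WP1_DS_1.xlsx"
assert NAME_PATTERN.match(dataset_filename(1, "DS", 1))
```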
Keywords will be added in line with the content of the datasets and with the
terminology used in the specific scientific fields, to make the datasets
findable by different researchers.
As COMED is a multidisciplinary project, we will use metadata standards for
General Research Data, such as Data Package; if a dataset pertains to a
specific discipline only, metadata standards for that discipline will be used.
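To make this concrete, a Data Package is described by a JSON descriptor
(datapackage.json) kept next to the data files. The sketch below shows what a
minimal descriptor for a COMED dataset might look like; all field values are
illustrative placeholders, not actual project metadata.

```python
# Minimal sketch of a (Frictionless) Data Package descriptor for a COMED
# dataset. All field values are illustrative placeholders.
import json

descriptor = {
    "name": "comed-wp1-rwd-mapping",        # hypothetical dataset name
    "title": "COMED WP1 RWD Mapping",
    "licenses": [{"name": "CC-BY-4.0"}],    # assumed open license
    "resources": [
        {
            "name": "rwd-mapping",
            "path": "COMED_WP1_DS_1.xlsx",  # file named per the convention
            "format": "xlsx",
        }
    ],
}

with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```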
## 3.2. Accessibility
As described above, our intention is to keep as many datasets open as
possible. This will be balanced against the principle of protection of
personal data, according to which everyone has the right to the protection of
personal data concerning him or her, to access the data collected concerning
him or her, and to have it rectified 4 . There might therefore be
circumstances under which open access to the data will not be possible. This
can occur if we cannot guarantee the privacy of the participants, if the
collected datasets are not open access, etc. In principle, if open access is
not possible, we will try to make the dataset available under a restricted
license and, as a last resort, if no other option is possible, we will keep
the dataset completely closed and justify why this is needed. Accessibility of
datasets will be decided in agreement with the members of the COMED
consortium.
All open datasets will be stored in a trusted repository. Suitable
repositories will be identified through registries such as the Registry of
Open Access Repositories (ROAR) and the Directory of Open Access Repositories
(OpenDOAR).
## 3.3. Interoperability
Interoperability means allowing data exchange and re-use between researchers,
institutions, organisations and countries. Hence, whenever possible, we will
adhere to standards for formats and issue data and metadata in formats
supported by available (open) software applications.
Data will be shared in the cloud only when this is allowed. When this occurs,
whether for internal use or involving external stakeholders, data will be
anonymized and personal information will be protected.
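The DMP does not prescribe a particular anonymization technique. As one hedged
illustration, direct identifiers could be dropped and a stable pseudonym
derived from a keyed hash before any record leaves the secure environment; the
column names and the keyed-hash approach below are assumptions.

```python
# Illustrative pseudonymization sketch; column names and the keyed-hash
# approach are assumptions, not a method prescribed by the DMP.
import hashlib
import hmac

SECRET_KEY = b"placeholder-key"  # would be stored separately from the data

def pseudonymize(record):
    """Replace the direct identifier with a keyed hash and drop name fields."""
    token = hmac.new(SECRET_KEY, record["patient_id"].encode(), hashlib.sha256)
    cleaned = {k: v for k, v in record.items() if k not in ("patient_id", "name")}
    cleaned["pseudonym"] = token.hexdigest()[:16]
    return cleaned

print(pseudonymize({"patient_id": "P-001", "name": "Jane Doe", "age_band": "60-69"}))
```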
## 3.4. Re-usability
The datasets will be licensed under an Open Access license whenever possible.
However, this will depend on the level of personal data protection and on the
Intellectual Property Rights (IPR) involved in the dataset.
Our intention is to make data re-usable for third parties as much as possible
and for the longest possible period. If a period of embargo is necessary
(e.g. if a dataset involves specific IPR, or to allow time to publish), we
will specify why and for how long. The length of time for which the datasets
will be stored will depend on their content. For example, if a dataset
concerns medical devices that we foresee will soon be replaced, it may not be
stored indefinitely.
# 4\. DATA MANAGEMENT PER WP
In this section, the datasets expected to be collected and/or generated as
part of the WPs of the COMED project are presented. The development and
management of each dataset will be inspired by and follow the general FAIR
principles and procedures described in the previous section, which will act as
boundaries and guidelines for the generation of new data and the collection of
existing sources. As the DMP is a living document, more details will be
provided in future versions. Here we commit to applying the FAIR principles to
datasets whenever possible.
## 4.1. WP1: Real-world evidence for economic evaluation of medical devices
### 4.1.1. Task 1
_Section 1: Data summary_
The **purpose** of the data collection in WP1 task 1 is to provide a
comprehensive assessment of possible sources of real-world evidence for
medical devices in EU countries.
Therefore we will collect and then use **existing data** .
The **possible sources of RWD** that will be explored are: Databases; Surveys;
Patient chart reviews; Pragmatic clinical trials; Observational data from
cohort studies; Registries.
Sources of RWD can also be collected at the sub-national or cross-national
level by scientific networks (e.g. scientific societies, hospital networks,
etc.).
The data will be **useful** for other project partners and, in the future, for
other research groups; and indirectly for policymakers, who will be able to
apply methods to initiate structured collection of RWD based on evidence
produced by the use of these data.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata_
All sources of data identified will be made available and metadata provided. A
template to synthesize the data included in each database is in preparation;
it will identify, for each dataset, the following details on its content:
Source of data; Type of database; Coverage (where); Level of analysis;
Coverage period; Sample size; Socio-demographic data; Clinical data; Type of
diagnosis classification; Type of procedure classification; Medical device
traceability; Costs; Other individual-level variables; Other hospital-level
variables; Other regional-level variables; Data accessibility.
Further metadata might be added at the end of the project in line with
metadata conventions.
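As a sketch only (the DMP does not prescribe tooling), the template could be
kept as a simple CSV file whose header row lists those fields; the snippet
below writes such a skeleton. The file name and the exact header wording are
assumptions based on the list above.

```python
# Sketch of the WP1 dataset-description template as a CSV skeleton.
# The file name and header wording are illustrative assumptions.
import csv

FIELDS = [
    "Source of data", "Type of database", "Coverage (where)",
    "Level of analysis", "Coverage period", "Sample size",
    "Socio-demographic data", "Clinical data",
    "Type of diagnosis classification", "Type of procedure classification",
    "Medical device traceability", "Costs",
    "Other individual-level variables", "Other hospital-level variables",
    "Other regional-level variables", "Data accessibility",
]

with open("COMED_WP1_RWD_template.csv", "w", newline="") as f:
    csv.writer(f).writerow(FIELDS)  # one row per mapped dataset to follow
```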
* _Making data openly **accessible**_
Data availability will depend on the regulations of the data source producers.
Where possible, we will act as facilitators of data accessibility.
The metadata produced, as well as the list of RWD collected, will be made
publicly available.
* _Making data **interoperable**_
Data will be deposited in a repository, and measures will be taken to make it
possible for third parties to access, exploit, reproduce and disseminate, free
of charge, (i) the data, including associated metadata, needed to validate the
results presented in scientific publications, as soon as possible; and (ii)
other data, including associated metadata, as specified. We will provide
information - via the repository - about the tools and instruments at our
disposal that are necessary for validating the results.
* _Increase data **re-use** (through clarifying licenses):_
Given the aim of this WP, making all collected data usable and re-usable
addresses one of its objectives, and the RWE collected will also be used by
other WPs of the COMED project.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
_Section 4: Data security_
Depending on the nature of the datasets selected, traditional methods of
storing data files on a network drive or a file server would be weak solutions
when it comes to complying with 21 CFR Part 11, GLP and GMP norms. LogiLab
SDMS provides a controlled environment for accessing any data that needs
controlled access, audit trails and version control, and will be adopted when
needed.
### 4.1.2. Task 2
_Section 1: Data summary_
The **purpose** of the data generation in task 2 of WP1 is to enable the
estimation of learning curves for medical devices by means of expert opinion,
to be used when no empirical data is (yet) available. This task contributes to
COMED’s objectives: it is often a challenge to incorporate the impact of
learning in a cost-effectiveness analysis of a medical device, so existing
methods for cost and outcome analysis can benefit from a systematic approach.
Based on findings from the literature, a questionnaire will be developed that
may be used for structured expert solicitation of information on learning
curves. We might test this questionnaire by asking physicians to fill it in
and reflect on their experience. In other words, data will be generated
through a **survey**; no existing data will be re-used.
The metadata, i.e. a summary of the answers to the questions generated during
the pilot testing of the questionnaire, as well as the resulting questionnaire
itself, will be **useful** within other tasks of COMED, but also in the future
for other research groups who evaluate the technology of interest or other
technologies in which the learning curve plays an important role. The expected
**size** of the data will be small, as we will interview a maximum of 20
physicians.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata:_
The results will be published in a scientific journal. The publication will
get a unique
Digital Object Identifier, and will include the metadata (possibly in an
online appendix).
Keywords in line with the content of the research and the terminology used in
the specific scientific field will be added to the manuscript.
* _Making data openly **accessible**_
It is our intention to provide open access to the metadata generated through
the survey.
* _Making data **interoperable**:_
We intend to adhere to standards for formats, wherever possible, to stimulate
interoperability.
* _Increase data **re-use** (through clarifying licenses):_
We will stimulate re-use of the data, as we intend to license the data under a
Creative Commons open access agreement, with limitations on commercial re-use
(i.e. re-use by commercial entities for profit).
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables. There are no costs associated
with long-term preservation, but the value of long-term preservation will
diminish over time, as the case studies become less relevant. Nevertheless,
the impact of learning in general is expected to remain an important topic in
cost-effectiveness analyses of medical devices.
_Section 4: Data security_
Data are stored in selectively accessible folders on a continuously backed-up
network drive of the Erasmus University Rotterdam. Please refer to the Erasmus
University Rotterdam data protection policy, to which we adhere.
## 4.2. WP2: Use of surrogate outcomes for medical devices: advanced
methodological issues
### 4.2.1. Task 2
_Section 1: Data summary_
The **purpose** of the data collection in WP2 task 2 is to illustrate the
range of surrogate validation processes and methods that could be used for the
economic evaluation of medical devices. We will seek access to anonymized
patient-level data from previous randomized and non-randomized clinical
studies of selected technologies to test different surrogate validation
techniques with respect to their potential to inform an HTA report whose
evidence relies mainly on surrogate outcomes. We will collect and re-use
**existing data**. The size of the data is currently not known but will
probably not exceed 5 MB.
The data will be **useful** to answer the question of how well a selected
surrogate endpoint predicts a patient-relevant outcome for the technology
under investigation and to illustrate methodological approaches to surrogacy
validation. They may therefore be useful for the scientific, regulatory,
clinical and industry communities.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata_
A list of the data collected and their sources will be made available through
a University of Exeter repository, and metadata will be provided.
* _Making data openly **accessible**_
Data accessibility will depend on the consent expressed by patients in the
original trials. We will ensure that datasets shared as part of the project
include no patient-identifiable information (such as names and addresses), and
that all data storage complies with the regulations governing research at the
University of Exeter Medical School. All data will be received and stored in a
secure database at the Clinical Trials Support Network, University of Exeter
Medical School, Exeter, United Kingdom.
* _Making data **interoperable**_
Individual trial datasets will be combined into one overall dataset with
standardised variables, working to ensure the standardisation of variables. We
will provide information - via the repository - about the variables, tools and
instruments needed to use the data.
* _Increase data **re-use** (through clarifying licenses)_
Patient-level data from individual studies will remain the property of the
collaborators/owners who provided them, and they will retain the right to
withdraw the data from the analysis at any time. The possibility of data
accessibility and re-use will depend on the consent expressed by patients in
the original trials.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables of WP2.
_Section 4: Data security_
All data will be received and stored in a secure database at the Clinical
Trials Support Network, University of Exeter Medical School, Exeter, United
Kingdom. Data management to ensure integrity, security and storage of this
data will be performed according to the UK Clinical Research Collaboration
registered Exeter Clinical Trials Unit data management plan.
### 4.2.2. Task 3
_Section 1: Data summary_
The **purpose** of the data generation in WP2 task 3 is to develop a
methodological framework and policy tool for the evaluation of medical devices
(and other health technologies) that depend on surrogate outcome evidence.
Data will be generated from **semi-structured interviews** conducted across
the EU with a purposive sample of participants belonging to different classes
of stakeholders (patients’ and carers’ organisations, healthcare professionals
and their organisations, HTA producers/assessment groups, medicines and
devices manufacturers, HTA agencies’ board and appraisal committee members,
and providers and commissioners of health services) and from **surveys**
taking the form of discrete choice experiments. Hence both qualitative and
quantitative data will be generated. The expected size of the data is
currently unknown and will depend on the sample size reached through
interviews and surveys. The data will be **useful** to shed light on
stakeholders’ views and opinions on the levers of and barriers to the
practical implementation of an evidence-based policy framework for the use of
surrogate outcome evidence in policy making.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata_
A summary of the data collected and their sources will be made available
through a University of Exeter repository, and metadata will be provided.
* _Making data openly **accessible**_
Data accessibility will depend on the consent expressed by interviewees and
respondents.
* _Making data **interoperable**_
Data will be collected and stored using commonly available software. We will
provide information - via the repository at the University of Exeter - about
the tools and instruments needed to use the data.
* _Increase data **re-use** (through clarifying licenses):_
The possibility of data accessibility and re-use will depend on the consent
expressed by respondents. Whenever possible, researchers will try to act as
facilitators to ensure this is made possible.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables of WP2.
_Section 4: Data security_
All data will be stored in a secure database at the University of Exeter
Medical School, Exeter, United Kingdom. Data management to ensure integrity,
security and storage of this data will be performed according to the UK
Clinical Research Collaboration registered Exeter Clinical Trials Unit data
management plan.
## 4.3. WP3: Outcome measurements and Patient Reported Outcome Measures
assessment for mHealth
### 4.3.1. Task 1
_Section 1: Data summary_
The **purpose** of the data collection in WP3 task 1 is to provide a
comprehensive overview of existing methods to measure outcomes and PROMs for
mHealth technologies.
For this specific task, we will thus collect and then use **existing data** .
The collection of such data will be instrumental in developing a theoretical
and methodological framework to help push the boundaries of outcome analysis
for such technologies, in accordance with the overarching goal of the COMED
project.
The **sources** that will be explored to collect evidence will be
predominantly peer-reviewed manuscripts and reviews. Further sources that will
be analysed are protocols of experimental studies accessible from online
databases.
The data will be **useful** for other project partners and for future research
groups working on this topic. The evidence produced will also be advantageous
for policymakers and practitioners, by providing them with state-of-the-art
insight on measures and applications for assessing patient-reported outcome
measures within mHealth settings.
Different sources of data will be collected for other tasks of this WP: the
related DMP will be detailed accordingly in future updates of this document.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata_
All sources of data identified will be made available and metadata provided.
Specific keywords will be added to make the datasets findable by different
researchers.
A template to synthesize the data included in each database will be identified
and will include, for each dataset, some of the following details on its
contents: Type of database; Coverage (where); Coverage period; Sample size;
Type of PROMs administered; Frequency of administration; Study design; Primary
endpoint; Secondary endpoints; Socio-demographic data; Other individual-level
variables; Data accessibility.
Further metadata might be added at the end of the project in line with
metadata conventions.
* _Making data openly **accessible**_
The metadata produced will be made publicly available and will include all
sources explored, as long as data availability is guaranteed for each of them.
Where possible, we will act as facilitators of data accessibility.
* _Making data **interoperable**_
Metadata will be stored in a trusted and widely accessible data repository.
All possible measures will be taken to make it possible for third parties to
access, exploit, reproduce and disseminate, free of charge, all data types
present in the aforementioned datasets.
We will facilitate the interoperability of the data collected by adhering to
existing standards.
* _Increase data **re-use** (through clarifying licenses):_
The collected data will be available for re-use by both third parties and
COMED partners. Making all collected data re-usable is part of the WP
objectives and will be instrumental in shaping the subsequent objectives of
the planned work.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables. No specific costs are foreseen
for making our data FAIR.
_Section 4: Data security_
Depending on the nature of the datasets selected, traditional methods of
storing data files on a network drive or a file server would be weak solutions
when it comes to complying with 21 CFR Part 11, GLP and GMP norms. Data will
be kept on secure servers and in secure environments.
## 4.4. WP4 & WP5: Variation in the use of medical devices
_Section 1: Data summary_
The task of WPs 4 and 5 is to develop and use a model explaining geographical
variation in the use of different medical technologies. The analysis will
focus on showing and explaining warranted and unwarranted geographic variation
within and between the participating countries.
The **purpose** of the data collection, specifically in WP5 task 2, is to
develop a data library suitable for investigating variation in the use of
medical devices. We will therefore develop a database providing information on
the prevalence of diseases and the use of the procedures that treat them.
Additional patient, provider or general explanatory variables will be used to
analyse differences in the usage of medical devices.
We will collect and then use **existing data** from the participating
countries.
**Possible sources** are administrative databases on diagnoses and procedures
in inpatient and outpatient care, either countrywide or related to a sickness
fund. Furthermore, databases on structural and socio-economic variables are of
interest.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata_
All sources of data identified will be made available and metadata will be
provided. Furthermore, a list of the variables used and the corresponding
dataset in each country will be published. Publicly available data, or data
without privacy restrictions, will be published.
* _Making data openly **accessible**_
Data availability will depend on the regulations of the data owners. Since
most of the data are not publicly available, but restricted due to data
protection, we aim to provide aggregated information (e.g. at geographical
levels such as the NUTS level, Nomenclature des unités territoriales
statistiques).
The metadata produced, as well as the list of variables collected, will be
made publicly available.
* _Making data **interoperable**_
The data collected usually come from non-public sources such as administrative
data from sickness funds or patient data collected for research purposes or
for the development of reimbursement systems. The publication and detailed
description of the variables used for the analysis, and of the corresponding
years within each dataset, will ensure the interoperability of the data.
* _Increase data **re-use** (through clarifying licenses):_
Due to the expected strict licences and privacy restrictions, re-use of the
dataset will only be possible to a limited extent, respecting the privacy
regulations of the data owners in each country.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
_Section 4: Data security_
Data will be stored, used and analysed according to the country specific
requirements for each dataset. All patient related data will be anonymized
before our analysis.
## 4.5. WP6: Early dialogue and early assessment of medical devices
The data of WP6 are collected and generated with the aim of developing
methodological and policy guidelines that improve the early dialogue between
manufacturers, regulators/notified bodies and HTA bodies (HTABs) in order to
overcome barriers to HTA. WP6 contributes to COMED’s objectives by
streamlining the HTA process through facilitating the alignment of evidentiary
requirements.
### 4.5.1. Task 2
_Section 1: Data summary_
The survey consists of questionnaires through which qualitative primary data
will be generated. The respondents are selected from EU HTA agencies, notified
bodies and manufacturers of medical devices. At this stage the data size
cannot be estimated; due to the qualitative character of the data, the dataset
will most likely not be large. The raw data will be of interest to researchers
in the field of early dialogue or barriers to HTA. The raw data will not be of
great value to the three parties unless presented in guidelines, a report or
an article.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata:_
* The processed data, in the form of articles, will be findable via database search engines (PubMed, Google Scholar) and will be connected to a publisher’s DOI.
* Raw data will not be made findable.
* _Making data openly **accessible**:_
* Individual-level data will not be published publicly. Data from the EMA parallel regulatory-HTA scientific advice is strictly confidential. Anonymized raw data might be shared with researchers requesting it.
* Processed data will be published as scientific articles in peer-reviewed journals.
* _Making data **interoperable**:_
Not applicable. The research will not produce data suitable for a database.
* _Increase data **re-use** (through clarifying licenses):_
Most likely, data will be re-used by partners or researchers reading our
article and requesting the raw data.
### 4.5.2. Task 3
_Section 1: Data summary_
The case studies will generate qualitative primary and secondary data,
including interviews, documents, registries and reports. Existing data will be
re-used and new data will be generated. The data will originate from various
sources: Clinicaltrials.gov, EPARs, HTAB dossier assessment reports, and
manufacturers, regulators and HTA agencies that participated in an early
dialogue. As for the survey, it is not possible to estimate the data size; due
to the qualitative character of the data, the dataset is not expected to be
large.
The raw data will be of interest to researchers in the field of early dialogue
or barriers to HTA. The raw data will not be of great value to the three
parties unless presented in guidelines, a report or an article.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata:_
* The processed data, in the form of articles, will be findable via database search engines (PubMed, Google Scholar) and will be connected to a publisher’s DOI.
* Raw data will not be made findable.
* _Making data openly **accessible**:_
* Individual-level data will not be published publicly. Data from the EMA parallel regulatory-HTA scientific advice is strictly confidential. Anonymized raw data might be shared with researchers requesting it.
* Processed data will be published as scientific articles in peer-reviewed journals.
* _Making data **interoperable**:_
Not applicable. The research will not produce data suitable for a database.
* _Increase data **re-use** (through clarifying licenses):_
Most likely, data will be re-used by partners or researchers reading our
article and requesting the raw data.
Sections 3 and 4 below apply to all tasks of WP6.
_Section 3: Allocation of resources_
No extra costs are foreseen: the University of Bern has an open data
repository, BORIS (Bern Open Repository and Information System), where
articles (if possible) and other data can be made available. The work to be
done in making the data FAIR will be covered by the regular working budget for
producing the deliverables.
_Section 4: Data security_
To ensure confidentiality and to guarantee internationally accepted
replicability standards, data will be analysed and stored with code numbers,
never with any information that could be used to identify participants. Data
will be kept on secure servers and password-protected computers. The data will
be stored after the termination of the current research for a period no
shorter than 6 years, and at no time will any identifying information about
the participants be stored along with the data.
All responses will be held in confidence. Only the researchers involved in
this study and those responsible for research oversight will have access to
the information. Anonymized records may be shared with other professionals or
authorities from the University of Bern, who may be evaluating or replicating
the study, provided the data owner grants them access to the data.
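As an illustration of this coding scheme (not an implementation prescribed by
the DMP), code numbers could be assigned at random and the linkage between
codes and identities written to a separate, access-restricted file, kept apart
from the analysis data; all file and variable names below are invented.

```python
# Sketch: assign random code numbers and keep the code-to-identity linkage
# in a separate, restricted file, away from the analysis data. Illustrative only.
import csv
import secrets

def assign_codes(participants):
    """Map each participant identity to an unpredictable six-digit code."""
    # For a real study, codes should additionally be checked for collisions.
    return {name: f"C{secrets.randbelow(10**6):06d}" for name in participants}

codes = assign_codes(["participant A", "participant B"])

# The linkage file lives on a restricted share, never alongside the data.
with open("linkage_RESTRICTED.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "identity"])
    for name, code in codes.items():
        writer.writerow([code, name])
```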
## 4.6. WP7: Coverage with evidence development for medical devices
### 4.6.1. Task 3
_Section 1: Data summary_
The overall **purpose** of WP7 is to develop a taxonomy of the coverage with
evidence development (CED) schemes currently applied to medical devices in
Europe, and subsequently to propose a policy guide for those wishing to design
and implement CED schemes in the future.
In order to ensure maximum impact, the policy guide will be validated by
discussing it with key policy-makers in the national bodies currently
conducting CED schemes (task 3).
Therefore, within task 3 we will generate new survey data through structured
interviews conducted by all the partners participating in the work package.
The data may be **useful** for other project partners (e.g. for WP8 on early
HTA) and, in the future, for other research groups; and indirectly for
policymakers, who will be able to initiate CED schemes for medical devices
based on evidence produced by the use of these data.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata_
We plan to generate a unique dataset containing the responses of the policy
makers participating in the interviews. These data will be made available and
metadata provided.
Further metadata might be added at the end of the project in line with
metadata conventions.
* _Making data openly **accessible**_
Data from the surveys will be anonymized to guarantee the privacy of the
participants, and then made publicly available. Depending on the type of
information eventually generated, we will make available either the raw
transcripts of the interviews or a summary of the individual responses to each
question in the survey.
The data will be stored in a trusted repository as indicated in the present
Data Management Plan (section 3.2).
* _Making data **interoperable**_
Data will be stored in a conventional file format. In particular, we will use
spreadsheets or text documents that are compatible with open-source software
applications.
* _Increase data **re-use** (through clarifying licenses):_
We expect to license the data under an Open Access license. The duration of
data availability will be defined later in the research project, allowing a
reasonable amount of time after which the data will no longer be considered up
to date or relevant for other potential users.
All other general rules to make the data compliant with the FAIR principles,
which are described in the present DMP (section 3), also apply to this WP.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
_Section 4: Data security_
Data will be kept on secure servers and on password-protected computers or
flash drives. All physical data will be stored in locked filing cabinets to
which only members of the research team have access. Identifying information
(names, contact details, affiliation) will be kept separately from the
transcripts, which will be anonymous, and no identifiable details will be
given in reports and publications, unless explicitly agreed by the
participants.
## 4.7. WP8: Transferability of medical device HTA/EE and of evidence on
uncertainty factors across EU Member States
### 4.7.1. Task 1
_Section 1: Data summary_
The **purpose** of the data collection in WP8 task 1 is to assess the
transferability of outcome evaluation using real-world evidence and learning
curves for medical devices across EU countries. The **transferability of data
will be evaluated** based on 1) the feasibility of collecting real-world
effectiveness and safety data in countries with limited resources for HTA,
also considering 2) the heterogeneity of health systems and 3) differences in
real-world effect size as a barrier to the transferability of clinical outcome
data across countries. The **possible sources** of data for evaluating
transferability are surveys, structured and semi-structured interviews,
databases, registries and patient chart reviews. **Existing data** will be
used for the evaluation. Task 1 is strongly linked to WP1; the methodology of
data collection therefore depends on the outcomes of WP1. The data will be
**useful** for other research groups, and for policymakers, who will be able
to apply methods to evaluate the transferability of evidence to jurisdictions
where local studies are not feasible.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata:_
* A summary of the collected and processed data will be findable via database search engines in the form of reports and articles.
* Raw data will not be made findable.
* _Making data openly **accessible**:_
* Individual-level data will not be published publicly.
* A summary of the report on the transferability of real-world evidence will be published as a scientific article in a peer-reviewed journal.
* _Making data **interoperable**:_
Not applicable. The research will not produce data suitable for a database.
* _Increase data **re-use** (through clarifying licenses):_
* The possibility of data accessibility and re-use will depend on the consent expressed by respondents. Whenever possible, researchers will try to act as facilitators to ensure this is made possible.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
_Section 4: Data security_
The information collected in WP8 will be kept confidential and stored in a
secure database at the Syreon Research Institute. All individual-level
information will be made anonymous. Only the researchers involved in this
study and those responsible for research oversight will have access to the
information collected.
### 4.7.2. Task 2
_Section 1: Data summary_
The **purpose** of the data collection in WP8 task 2 is to assess the
requirements for the acceptability and feasibility of performance-based
risk-sharing agreements (such as coverage with evidence development) based on
foreign data in Member States in which local studies are not feasible. **Focus
group discussions** will be designed and conducted with invited
representatives of reimbursement decision makers from the selected Member
States. A summary of the focus group report will be channelled into the
guideline development on data collection in WP8, and will be **discussed with
a group** of medical device reimbursement decision makers and manufacturers
from countries with sufficient economic and geographical diversity. The focus
group report will be **useful** for other research groups, and for
policymakers and payers in the field of reimbursement of medical devices.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata:_
* The focus group report will be findable via database search engines in the form of a report and an article.
* Raw data will not be made findable.
* _Making data openly **accessible**:_
* The dataset collected in the focus group (e.g. the transcript) will have restricted access.
* A summary of the focus group report will be published as a scientific article in a peer-reviewed journal. The data included in the publication will therefore automatically be open access, in order to make the data accessible for verification and re-use.
* _Making data **interoperable**:_
Not applicable. The research will not produce data suitable for a database.
* _Increase data **re-use** (through clarifying licenses):_
* The possibility of data accessibility and re-use will depend on the consent expressed by respondents. Whenever possible, researchers will try to act as facilitators to ensure this is made possible.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
_Section 4: Data security_
Participation in the focus group will be purely voluntary and all participants
will provide informed consent to take part in the study. In the analysis, data
that may reveal subjects’ identities will be anonymised; in addition,
pharmaceutical companies or healthcare funds will never be mentioned in an
identifiable manner. The information collected in WP8 will be kept
confidential and stored in a secure database at the Syreon Research Institute.
Only the researchers involved in this study and those responsible for research
oversight will have access to the information collected.
### 4.7.3. Task 3
_Section 1: Data summary_
The **purpose** of the data collection in WP8 task 3 is to test the relevance
of the policy tools developed in the different work packages of COMED (WP2,
WP3, WP6 and WP7) and their transferability to lower-income countries.
Representatives of policy makers (involved in reimbursement decisions) and
manufacturers from interested EU Member States will be invited to a
**satellite mini-conference**, with an emphasis on sufficient representation
of lower-income CEE and Southern EU countries. Research plans, results and
conclusions will be presented to and discussed with the invited stakeholders,
and the collected feedback will be summarized in a conference report, to be
channelled into the guideline development in WP9.
The conference report will be **useful** for other research groups, and for
policymakers and payers in the field of health technology assessment and
economic evaluation of medical devices.
_Section 2: FAIR Data_
* _Making data **findable**, including provisions for metadata:_
* The conference report will be findable via database search engines in the form of a report and an article.
* Raw data will not be made findable.
* _Making data openly **accessible**:_
* Individual-level data will not be published publicly.
* A summary of the conference report will be published as a scientific article in a peer-reviewed journal.
* _Making data **interoperable**:_
Not applicable. The research will not produce data suitable for a database.
* _Increase data **re-use** (through clarifying licenses):_
* The possibility of data accessibility and re-use will depend on the consent expressed by respondents. Whenever possible, researchers will try to act as facilitators to ensure this is made possible.
_Section 3: Allocation of resources_
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables.
_Section 4: Data security_
The information collected in WP8 will be kept confidential and stored in a
secure database at the Syreon Research Institute. All individual-level
information will be made anonymous. Only the researchers involved in this
study and those responsible for research oversight will have access to the
information collected.
Source: https://phaidra.univie.ac.at/o:1140797 (Horizon 2020)

---

1431_ComSos_779481.md
# INTRODUCTION
This Data Management Plan (DMP) describes the COMSOS data management methods.
The DMP includes a description of the methodology and standards to be followed
and of which data sets are exploitable or made accessible for verification and
re-use. The Data Management Plan will be a dynamic tool that is updated
regularly. By these actions, the project supports the European Commission’s
goal of shared data and open science that serves innovation and growth.
Additionally, this document pools the results generated in the project that
may lead to intellectual property (IP). The DMP will thus contain all forms of
knowledge generated by the project. Whenever significant changes arise in the
project, such as:
* New data sets
* Changes in consortium policies
* New, exploitable results
a new version of the DMP shall be uploaded taking into account any major
developments. In any case, the DMP shall be updated as part of the mid-term
(M21) and final (M42) project reviews of COMSOS.
# DATA SUMMARY
The objective of the DMP is to provide a structured repository for the data,
measurements, facts and know-how gathered during the project, for the benefit
of more systematic progress in science. Where the knowledge developed in the
EU-funded project is not governed by intellectual property for the purpose of
commercial exploitation and business development, it is important to valorize
the results of project activities by facilitating the take-up of key data and
information for further elaboration and progress by other projects and players
in Europe. The DMP includes information on:
* The handling of research data during and after the end of the project
* What data will be collected, processed and/or generated
* Which methodology and standards will be applied
* Whether data will be shared/made open access and
* How data will be curated and preserved (including after the end of the project).
## Data generated in the COMSOS project
Section 9 lists the data generated in the **COMSOS** project per partner,
including: data set identifier (ID); description (origin, nature and size);
potential use/users; standards and metadata; data sharing, including how the
data is shared, access, embargo periods, dissemination means and the required
software/tools for using the data; restrictions, including their motivation;
and the repository to be used.
# FAIR DATA
## Processing, standards and metadata
It is therefore necessary to define the data sets to be gathered within the
project lifetime, both through indexing and through a description of the
data’s origin, nature, scale and purpose. To facilitate the referencing and
re-use of data, appropriate metadata (data about the data) shall be provided.
This also implies a policy on the ways data can or will be shared. Finally,
plans for how the data will be stored long-term need to be expressed.
The DMP shall be elaborated on behalf of each COMSOS partner to begin with,
and may be redesigned to represent the data repository for COMSOS as a whole
if deemed necessary or more coherent. In detail, the following information
will be requested from each partner, in the form of two distinct tables for
the data generated and the results (exploitable outcomes) generated:
## Data collection, handling and processing
A description of the data that will be generated or collected, its origin (in
case it is collected), its nature (in case it is the result of original work
or elaboration) and whether it underpins a scientific publication. For
results, the nature/form of the outcome should be defined. A description of
the technical purpose of the data/results will be given. The target end user,
the existence (or not) of similar data/results and the possibilities for
integration and re-use may be indicated.
## Standards and methodologies, interoperability and reuse of data
Reference will be made to existing suitable standards, codes, regulations,
guidelines or best practices that the data comply with and/or are akin to. If
these do not exist, an outline of the methodology and of how metadata can/will
be created should be given. There should be a description of the procedures
that will be put in place for the long-term preservation of the data: how long
the data should be preserved, what the approximate end volume is, what the
associated costs are and how these are to be covered.
## Data sharing and ownership, IPR management
In accordance with the Consortium Agreement, results are owned by the Party
that generates them. Any restrictions regarding data sharing, ownership and
IPR are further detailed in Section 9.
## Dissemination and Exploitation
Deliverables defined as publications will be published in green or gold open access, peer-reviewed scientific journals where possible. Journal submissions will be reviewed by the PCM before submission, according to the procedure defined in the Consortium Agreement as follows:
**Consortium Agreement section 8.4.2.1**
_During the Project and for a period of 1 year after the end of the Project,
the dissemination of own Results by one or several Parties including but not
restricted to publications and presentations, shall be governed by the
procedure of Article 29.1 of the Grant Agreement subject to the following
provisions._
_Prior notice of any planned publication shall be given to the other Parties
at least 30 calendar days before the publication. Any objection to the planned
publication shall be made in accordance with the Grant Agreement in writing to
the Coordinator and to the Party or Parties proposing the dissemination within
14 calendar days after receipt of the notice. If no objection is made within
the time limit stated above, the publication is permitted._
Section 9 includes a detailed overview of how exploitable outcomes will be brought forward and developed. For data, it describes how these will be shared, including access procedures, embargo periods (if any) and outlines of the technical mechanisms for dissemination. It also identifies the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.).
# ALLOCATION OF RESOURCES
At the beginning of the research project the research consortium will decide
and agree on the tasks, roles, responsibilities and rights relating to data
collection, dataset management and data use.
# DATA SECURITY
The datasets for detailed analysis generated from the demonstration units will be archived at the premises of POLITO for data security reasons and, in addition, archived in a common and, if feasible, open data repository, depending on the decision of the project steering group. POLITO is responsible for curating, preserving, disseminating and deleting the datasets in its possession. The retention time for curated datasets is the same as for other project materials at POLITO: by default, twenty years.
# ETHICS AND PRIVACY
The project will follow the ethics appraisal procedure in H2020. The aim is to
ensure that the provisions on ethics regulation and rules are respected. The
research will comply with applicable international, EU and national
legislation. Ethical aspects will be considered by all consortium participants
and monitored by the Coordinator (WP1).
Specific requirements in accordance with the Grant Agreement:
**D6.1 : POPD - Requirement No. 1 [12]**
6.1. The applicant must confirm that the ethical standards and guidelines of Horizon 2020 will be rigorously applied, regardless of the country in which the research is carried out.
6.3. The applicant must provide details on the material which will be imported to/exported from the EU and provide the adequate authorisations.
4.3. Justification must be given in case of collection and/or processing of personal sensitive data.
4.4. Detailed information must be provided on the procedures that will be implemented for data collection, storage, protection, retention and destruction, and confirmation that they comply with national and EU legislation.
# OTHER ISSUES
# COMSOS DATA SETS IDENTIFIER - GENERAL
Call Identifier: H2020-JTI-FCH-2017-1
Type of action: RIA
Project number: 779481
Project start: 01.01.2018
Project end: 30.06.2021
Project focus:
The ComSos project aims at strengthening the European SOFC industry’s world-leading position for SOFC products in the range of 10-60 kW (450 kWe in total). Through this project, manufacturers prepare for developing capacity for serial manufacturing, sales and marketing of mid-sized FC CHP products. All manufacturers will validate new product segments in collaboration with the respective customers and confirm product performance, the business case and size, and test in real life the distribution channel, including maintenance and service. Depending on the specific segment, the systems will be suitable for volumes from a few tens to several thousand systems per year.
The key objective of the ComSos project is to validate and demonstrate fuel cell based combined heat and power solutions in the mid-sized power ranges of 10-12 kW, 20-25 kW and 50-60 kW (referred to as Mini FC-CHP). The outcome will give proof of the superior advantages of such systems, the underlying business models, and the key benefits for the customer. The technology and product concepts in the aforementioned power range have been developed in Europe under supporting European frameworks such as the FCH-JU.
# PARTNER-SPECIFIC DATA SETS
## VTT
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## Sunfire
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## Convion
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## Polito
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## EM
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## SP
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## HTc
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Results produced during the project for dissemination**
</td>
<td>
</td>
<td>
**Tools and channels for the exploitation of results created by the project**
</td> </tr>
<tr>
<td>
_Result identifier and nature_
_(dataset, prototype, app, design, etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage, conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
#### 1\. Executive Summary
BigDataStack aims to deliver a complete stack including an infrastructure
management solution that drives decisions according to live and historical
data, thus being fully scalable, runtime adaptable and highly performant. The
overall objective is for BigDataStack to address the emerging needs of big
data operations and data-intensive applications. The solution will base all
infrastructure management decisions on data aspects (for example the
estimation and provision of resources for each data service based on the
corresponding data loads), monitoring data from deployments and logic derived
from data operations that govern and affect storage, compute and network
resources. On top of the infrastructure management solution, “Data as a
Service” will be offered to data providers, decision-makers, private and
public organisations. Approaches for data quality assessment, data skipping
and efficient storage, combined with seamless data analytics will be realised
holistically across multiple data stores and locations.
To provide the required information towards enhanced infrastructure management, BigDataStack will provide a range of services, such as the application dimensioning workbench, which facilitates data-focused application analysis and dimensioning in terms of predicting the required data services, their interdependencies with the application microservices and the necessary underlying resources. This will allow the identification of the applications’ data-related properties and their data needs, thereby enabling BigDataStack to provision deployments with specific performance and quality guarantees.
Moreover, a data toolkit will enable data scientists to ingest their data
analytics functions and to specify their preferences and constraints, which
will be exploited by the infrastructure management system for resources and
data management. Finally, a process modelling framework will be delivered, to
enable functionality-based modelling of processes, which will be mapped in an
automated way to concrete technical-level data analytics tasks.
The key outcomes of BigDataStack are reflected in a set of main building
blocks in the corresponding overall architecture of the stack. This
deliverable is a refinement of the key functionalities of the overall
architecture, the interactions between the main building blocks and their
components, as they were described in the previous version of the architecture
(Deliverable D2.4 - Conceptual model and Reference architecture). Compared with the previous version of the architecture, the key changes concern the interplay
between the application and data dimensioning and the components that manage
the deployment lifecycle (i.e. deployment patterns generation and ranking and
deployment management), the dynamic orchestrator and the overall quality and
performance assessment during runtime. Additionally, there are changes in the
specifications of several components (reflecting their latest implementation
status) and as such their associated sections have received updates in this
document as well (e.g. seamless analytics framework). It should be noted that
additional design details and evaluation results for all components of the
architecture will be delivered in the corresponding follow-up (WP-specific)
deliverables addressing the user interaction block, the data as a service
block and the infrastructure management block. It should be noted that v2.0 of
this deliverable has been released to include relevant GDPR-related
information (updates in Appendix 1, Appendix 2 and Appendix 3).
#### 2\. Introduction
The new data-driven industrial revolution highlights the need for big data
technologies, to unlock the potential in various application domains (e.g.
transportation, healthcare, logistics, etc). In this context, big data
analytics frameworks exploit several underlying infrastructure and cluster
management systems. However, these systems have not been designed and
implemented in a “big data context”. Instead, they emphasise and address the
computational needs and aspects of applications and services to be deployed.
BigDataStack aims at addressing these challenges (depicted in Figure 1)
through concrete offerings, that range from a scalable, runtime-adaptable
infrastructure management system (that drives decisions according to data
aspects), to techniques for dimensioning big data applications, modelling and
analysing of processes, as well as provisioning data-as-a-service by
exploiting a seamless analytics framework.
Figure 1 - Technical challenges
##### 2.1. Terminology
The following table summarises a set of key terms used in BigDataStack (not acronyms, but terms as actually used), given the large number of concepts and technologies addressed by the envisioned stack.
<table>
<tr>
<th>
Term
</th>
<th>
Description
</th> </tr>
<tr>
<td>
Application services
</td>
<td>
Components/micro-services of a user’s application
</td> </tr>
<tr>
<td>
Data services
</td>
<td>
“Generic” services such as cleaning, aggregation, etc.
</td> </tr>
<tr>
<td>
Dimensioning
</td>
<td>
Analysis of a user’s application services to identify the data and resources
needs/requirements
</td> </tr>
<tr>
<td>
Toolkit
</td>
<td>
Mechanism enabling ingest of data analytics tasks & setting of requirements
(from an end-user point of view)
</td> </tr>
<tr>
<td>
Graph
</td>
<td>
An overall graph including the application services and the data services
</td> </tr>
<tr>
<td>
Process modelling
</td>
<td>
“Workflow” modelling regarding business processes
</td> </tr>
<tr>
<td>
Process mining
</td>
<td>
Analytics tasks per process of the “workflow”
</td> </tr>
<tr>
<td>
Process mapping
</td>
<td>
Mapping of business processes to analytics tasks to be executed
</td> </tr>
<tr>
<td>
Interdependencies between application / data services
</td>
<td>
Data flows between application components and data services
</td> </tr> </table>
Table 1 - Terminology
##### 2.2. Document structure
The document is structured as follows:
* Section 3 provides an overview of the capabilities offered by the BigDataStack environment, including the key offerings and the main stakeholders addressed by each offering.
* Section 4 introduces the identified main phases, to showcase the interactions between different key blocks and offerings of the stack.
* Section 5 presents the overall project architecture.
* Section 6 provides descriptions of the main architecture components.
* Finally, in Section 7, a detailed sequence of events depicting the information flows is provided. It should be noted that these sequence diagrams capture the interactions on the overall architecture level and are not supposed to provide details of the interactions on lower levels given that these are provided by the corresponding design and specification reports of the work package deliverables and will be refined in later reports accordingly.
#### 3\. BigDataStack Capabilities
This section provides an overview of the capabilities that will be offered by
BigDataStack, in terms of offerings towards an extensive set of stakeholders.
The goal is to present a set of “desired” capabilities as the key goals of
BigDataStack. The components providing and realising these capabilities are
thereafter described in the overall architecture.
##### 3.1. Key offerings
BigDataStack offerings are depicted through a full “stack” that aims not only to facilitate the needs of data operations and applications (all of which tend to be data-intensive), but also to serve these needs in an optimized way.
As depicted in Figure 2, BigDataStack will provide a complete infrastructure
management system, which will base the management and deployment decisions on
data from current and past application and infrastructure deployments. A
representative example would be that of a service-defined deployment decision
by a human expert (the current approach), who chooses to deploy VMs on the same physical host to reduce data transfer latencies over the network (e.g.
for real-time stream processing). The BigDataStack approach, in contrast, will base the decision making on information from current
and past deployments (e.g. generation rates, transfer bottlenecks, etc.),
which may result in a superior deployment configuration. To this end, the
BigDataStack infrastructure management system would propose a data-driven
deployment decision resulting in containers/VMs placed within geographically
distributed physical hosts. This simple case shows that the trade-off between
service and data-based decisions on the management layer should be re-examined
nowadays, due to the increasing volumes and complexity of data. The envisioned
“stack” is depicted in Figure 2, which captures the key offerings of
BigDataStack.
The first core offering of BigDataStack is efficient and optimised
infrastructure management, including all aspects of management for the
computing, storage and networking resources, as described before.
The second core offering of BigDataStack exploits the underlying data-driven
infrastructure management system, to provide Data as a Service in a
performant, efficient and scalable way. Data as a Service incorporates a set
of technologies addressing the complete data path: data quality assessment,
aggregation, and data processing (including seamless analytics, real-time
Complex Event Processing - CEP, and process mining). Distributed storage is
realised through a layer, enabling data to be fragmented/stored according to
different access patterns in different underlying data stores. A big data
layout and data skipping approach is used to minimize the data that should be
read from the underlying object store to perform the corresponding analytics.
The seamless data analytics framework analyses data in a holistic fashion
across multiple data stores and locations and operates on data irrespective of
where and when it arrives at the framework. A cross-stream processing engine
is also included in the architecture to enable distributed processing of data
streams. The engine considers the latencies across data centres, the locality
of data sources and data sinks, and produces a partitioned topology that will
maximise the performance.
The third core offering of BigDataStack refers to Data Visualization, going
beyond the presentation of data and analytics outcomes to adaptable
visualisations in an automated way. Visualizations cover a wide range of
aspects (interlinked if required) besides data analytics, such as computing,
storage and networking infrastructure data, data sources information, and data
operations outcomes (e.g. data quality assessment outcomes, application
analytics outcomes, etc.). Moreover, the BigDataStack visualisations will be
incremental, thus providing data analytics results as they are produced.
The fourth core offering of BigDataStack, the Data Toolkit, aims at openness
and extensibility. The toolkit allows the ingestion of data analytics
functions and the definition of analytics, providing at the same time “hints”
towards the infrastructure/cluster management system for the optimised
management of these analytics tasks. Furthermore, the toolkit allows data
scientists to specify requirements and preferences as service level objectives
(e.g. regarding the response time of analytics tasks), which are considered by
infrastructure management both during deployment time and during runtime (i.e.
triggering adaptations in an automated way).
The fifth core offering of BigDataStack, Process Modelling, provides a framework allowing for flexible
modelling of process analytics to enable their execution. Process chains (as
workflows) can be specified through the framework, along with overall workflow
objectives (e.g. accuracy of predictions, overall time for the whole workflow,
etc) that are considered by mechanisms mapping the aforementioned processes to
data analytics that can be executed directly on the BigDataStack
infrastructure. Moreover, process mining tasks realize a feedback loop towards
overall process optimisation and adaptation.
Finally, the sixth offering of BigDataStack, the Dimensioning Workbench, aims
at enabling the dimensioning of applications in terms of predicting the
required data services, their interdependencies with the application micro-
services and the necessary underlying resources.
##### 3.2. Stakeholders addressed
BigDataStack provides a set of endpoints to address the needs of different
stakeholders as described below:
1. Data Owners: BigDataStack offers a unified Gateway to obtain both streaming and stored data from data owners and record them in its underlying storage infrastructure that supports SQL and NoSQL data stores.
2. Data Scientists: BigDataStack offers the Data Toolkit to enable data scientists both to easily ingest their analytics tasks and to specify their preferences and constraints to be exploited during the dimensioning phase regarding the data services that will be used (for example response time of a specific analytics task).
3. Business Analysts: BigDataStack offers the Process Modelling Framework allowing business users to define their functionality-based business processes and optimise them based on the outcomes of process analytics that will be triggered by BigDataStack. Mapping to specific process analytics tasks will be performed in an automated way.
4. Application Engineers and Developers: BigDataStack offers the Application Dimensioning Workbench to enable application owners and engineers to experiment with their application and obtain dimensioning outcomes regarding the required resources for specific data needs and data-related properties.
These actors interact with the corresponding offerings and provide information
that is exploited thereafter by the infrastructure/cluster management system
of BigDataStack. It should be noted that on top of these offerings, the
Visualization Environment is also an interaction point with end users,
providing the outcomes of analytics as well as the monitoring results of all
infrastructure and data-related operations.
#### 4\. Main phases
The envisioned operation of BigDataStack is reflected in four main phases as
depicted in Figure 3 (and further detailed in the following sub-sections):
Entry, Dimensioning, Deployment and Operation.
During the entry phase, data owners ingest their data through a unified
gateway. Analysts design business processes by utilising the functionalities
of the Process Modelling framework in order to describe the overall business
workflows, while data scientists can specify their preferences and pose their
constraints through the Data Toolkit.
During the dimensioning phase, the individual processes / steps of the overall
process model (i.e. workflow) are mapped to analytics tasks, and the graph is
concretized (including specific analytics tasks and application services to be
deployed). The whole workflow is modelled as a playbook descriptor and is
passed to the Dimensioning Workbench. In turn, the Dimensioning Workbench
provides insights regarding the required infrastructure resources, for the
data services and application components, through an envisioned elasticity
model that includes estimates for different Quality of Service (QoS)
requirements and Key Performance Indicators (KPIs).
The goal of the deployment phase is to deliver the optimum deployment patterns
for the data and application services, by considering the resources and the
interdependencies between application components and data services (based on
the dimensioning phase outcomes).
Finally, the operation phase facilitates the provision of data services
including technologies for resource management, monitoring and evaluation
towards runtime adaptations.
##### 4.1. Entry phase
During the entry phase, data is introduced into the system, the Business
Analysts design and evaluate their business processes, and the Data Scientists
specify their preferences and constraints through the Data Toolkit. Thus, the
Entry Phase consists of three discrete steps:
* Data owners ingest their data into the BigDataStack-supported data stores through a unified gateway. They can directly choose whether to store (non-)relational data or use BigDataStack’s object storage offering. The seamless analytics framework brings together the LeanXcale database and the Object Store into a new entity, permitting the definition of rules for automatic balancing of datasets between these two basic data storage components (e.g. data older than 3 months should be moved to the object store; a sketch of such a rule follows this list), as well as the description and use of a dataset that may be spread seamlessly over the two data storage components. Streaming data can also be processed, leveraging BigDataStack’s CEP implementation.
* Given the stored data, Business Analysts can design processes utilising the intuitive graphical user interface provided by the Process Modelling framework, and the available list of “generic” processes (e.g. customer segmentation process). Overall, they compile a business workflow, ready to be mapped to concrete executable tasks. These mappings are performed by a mechanism incorporated in the Process Modelling framework, the Process Mapping component.
* Based on the outcomes of process mapping, the graph of services (representing the corresponding business workflow) is made available to the Data Scientists through the Toolkit. The scientists can specify preferences for specific tasks, for example, what the response time of a recommendation algorithm should be or ingest a new executable in case a task has not been successfully mapped by the Process Mapping mechanism.
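To make the balancing rule mentioned in the first bullet concrete, the following is a minimal sketch. The helper objects and the row layout are hypothetical stand-ins; the actual LeanXcale and Object Store interfaces differ.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # the "data older than 3 months" rule from the text

def tier_old_rows(rows, database, object_store):
    """Move rows past the retention window from the database to the object store.

    `rows`, `database` and `object_store` are hypothetical stand-ins for the
    LeanXcale tables and the Object Store client; the real interfaces differ.
    """
    cutoff = datetime.utcnow() - RETENTION
    for row in rows:
        if row["ingested_at"] < cutoff:
            object_store.put(row["key"], row)  # archive to the object store
            database.delete(row["key"])        # free space for fresh tuples
```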
The output of the Entry Phase is a Kubernetes-like configuration template file
describing the graph/workflow (which includes all relevant information for the
application graph with concrete “executable” services). We refer to this as a
BigDataStack Playbook. This is passed to the dimensioning phase in order to
identify the resource needs for the identified services.
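The playbook schema itself is project-internal; as a purely hypothetical sketch of what such a Kubernetes-like descriptor could contain (all field names are assumptions, expressed here as a Python dictionary):

```python
# Hypothetical skeleton of a BigDataStack Playbook; the field names are
# illustrative assumptions, not the project's actual schema.
playbook = {
    "application": "customer-analytics",
    "services": [
        {
            "name": "recommender",
            "image": "registry.example.org/recommender:1.0",
            "slo": {"response_time_ms": 1000},   # preference set via the Data Toolkit
        },
        {
            "name": "segmentation",
            "image": "registry.example.org/segmentation:2.3",
            "depends_on": ["recommender"],       # edge in the application graph
        },
    ],
    "data_services": [
        {"name": "object-store", "kind": "object-storage"},
        {"name": "leanxcale", "kind": "relational"},
    ],
}
```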
##### 4.2. Dimensioning phase
The dimensioning phase of BigDataStack aims to optimize the provision of data
services and data-intensive applications, by understanding not only their
data-related requirements (e.g. related data sources, storage needs, etc.) but
also the data services requirements across the data path (e.g. the resources
needed for effective data storage, analytics, etc.), and the interdependencies
when moving from an atomic / single service to an application graph. In this
context, dimensioning includes a two-step approach that is realised through
the BigDataStack Application Dimensioning Workbench:
* In the first step, the input from the Data Toolkit is used to define the composite application (consisting of a set of micro-services) needs with relation to the required data services. The example illustrated in Figure 4 shows that 3 out of the 5 application components require specific data services for aggregation and analytics.
* The second step is to dimension these identified/required data services, as well as all the application components, regarding their infrastructure resource needs. That is achieved by exploiting load injectors generating different loads, to benchmark the services and analyse their resources and data requirements (e.g. volume, generation rate, legal constraints, etc.).
The output of the dimensioning phase is an elasticity model, i.e. a mathematical function that maps the input parameters (such as workload and Quality of Service - QoS) to the needed resource parameters (such as bandwidth, latency, etc.).
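As a toy illustration of such a function (the functional form and all coefficients below are invented; the real model is derived from load-injection benchmarks per service):

```python
def elasticity_model(requests_per_sec: float, max_latency_ms: float) -> dict:
    """Toy elasticity model: map workload and a QoS target to resource estimates.

    Invented coefficients for illustration only; BigDataStack learns the real
    mapping from benchmarking data during the dimensioning phase.
    """
    replicas = max(1, round(requests_per_sec / 200))  # assume ~200 req/s per replica
    if max_latency_ms < 100:                          # tight latency target:
        replicas += 1                                 # add headroom
    return {
        "replicas": replicas,
        "cpu_millicores": 500 * replicas,
        "memory_mib": 512 * replicas,
        "bandwidth_mbps": 0.05 * requests_per_sec,
    }

# e.g. elasticity_model(450, 80) -> {'replicas': 3, 'cpu_millicores': 1500, ...}
```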
##### 4.3. Deployment phase
The deployment phase of BigDataStack aims at determining the optimum
deployment configuration and deployment resources for the application and data
services in terms of cluster resources. The need for such configuration
emerges from the fact that to deploy the application and data services in a
way such that it will meet the user’s needs, BigDataStack needs to account for
the application and data services complexity/efficiency, the workload (e.g.
requests per second) and the user-defined quality of service
requirements/preferences (e.g. <100ms response time).
To this end, the deployment phase of BigDataStack includes a four-step
process:
* In the first step of the deployment phase, the application and data services composition (as represented by a BigDataStack playbook) is analysed, and the independent substructures comprised of application and data services (referred to as “pods”) are identified.
* Second, a set of resource templates are used to convert each pod into a series of candidate deployment patterns (CDPs), where each CDP is comprised of a pod and resource template.
* Third, for each CDP, performance estimations are obtained from the Dimensioning phase (based on prior application benchmarking and analysis) given expected data and application workload or workloads.
* Finally, each CDP is scored with respect to the user’s quality of service requirements and/or preferences to determine its suitability (a hedged scoring sketch follows this list). The best configuration for each pod is then selected, either for immediate deployment or to be shown to the user for prior approval.
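A hedged sketch of the scoring step, under the assumption that infeasible CDPs are rejected outright and feasible ones are ranked by predicted cost (the actual scoring function used by BigDataStack is not specified here):

```python
def score_cdp(cdp, slo, predict):
    """Score one candidate deployment pattern (CDP) against the user's SLOs.

    `predict(cdp)` stands in for the Dimensioning phase's performance estimate,
    e.g. {"latency_ms": 80.0, "cost": 3.2}; the scoring rule is illustrative.
    """
    estimate = predict(cdp)
    if estimate["latency_ms"] > slo["latency_ms"]:
        return float("-inf")       # hard SLO violation: reject this CDP
    return -estimate["cost"]       # among feasible CDPs, prefer the cheapest

def select_pattern(cdps, slo, predict):
    """Pick the best-scoring CDP for deployment (or for user approval)."""
    return max(cdps, key=lambda cdp: score_cdp(cdp, slo, predict))
```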
##### 4.4. Operations phase
The operation phase of BigDataStack is realised through different components
of the
BigDataStack infrastructure management system and aims at the management of
the complete physical infrastructure resources, in an optimised way for data-
intensive applications.
The operation phase includes a seven-step process as depicted in Figure 6:
* Based on the deployment phase outcomes regarding the optimised deployment pattern, computing resources are reserved and allocated.
* According to the allocated computing resources, storage resources are also reserved and allocated. It should be noted that storage resources are distributed.
* Data-driven networking functions are compiled and deployed to facilitate the diverse networking needs between different computing and storage resources.
* The application components and data services are deployed and orchestrated based on “combined” data and application-aware deployment patterns. An envisioned orchestrator mechanism compiles the corresponding orchestration rules according to the deployment patterns and the reserved computing, storage and network resources.
* Data analytics tasks will be distributed across the different data stores to perform the corresponding analytics, while analytics on top of these stores is performed through the seamless analytics framework.
* Monitoring data is collected and evaluated for the resources (computing, storage and network), application components and data services and functions (e.g. query execution status).
* Runtime adaptations take place for all elements of the environment, to address possible QoS violations. These include resource re-allocation, storage and analytics redistribution, re-compilation of network functions and deployment patterns.
#### 5\. Architecture
The following figure presents the overall conceptual architecture of
BigDataStack, including the main information flows and interactions between
the key components.
First, raw data are ingested through the Gateway & Unified API component to
the Storage engine of BigDataStack, which enables storage and data migration
across different resources. The engine offers solutions both for relational
and non-relational data, an Object Store to manage data as objects, and a CEP
engine to deal with streaming data processing. The raw data are then processed
by the Data Quality Assessment component, which enhances the data schema in
terms of accuracy and veracity and provides an estimation for the
corresponding datasets in terms of their quality. Data stored in Object Store
are also enhanced with relevant metadata, to track information about objects
and their dataset columns. Those metadata can be used to show that an object
is not relevant to a query, and therefore does not need to be accessed from
storage or sent through the network. The defined metadata are also indexed, so
that during query execution objects that are irrelevant to the query can be
quickly filtered out from the list of objects to be retrieved for the query
processing. This functionality is achieved through the Data skipping component
of BigDataStack. Moreover, slices of historical data are periodically
transferred from the LeanXcale database to the Object Store, to free up space
for fresh tuples.
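The data-skipping idea can be illustrated with per-object min/max statistics over a dataset column: objects whose value range cannot match the query predicate are filtered out without ever being read from storage. A simplified sketch (the real component indexes richer metadata):

```python
def build_index(objects):
    """Record per-object (min, max) statistics for one column (simplified)."""
    return {name: (min(values), max(values)) for name, values in objects.items()}

def relevant_objects(index, lo, hi):
    """Data skipping: keep only objects whose [min, max] range can overlap the
    query predicate lo <= value <= hi; the rest are never fetched."""
    return [name for name, (mn, mx) in index.items() if mx >= lo and mn <= hi]

# Example: three objects holding a "temperature" column.
index = build_index({
    "obj-1": [2, 7, 9],     # range [2, 9]   -> skipped for the query [20, 30]
    "obj-2": [21, 25, 28],  # range [21, 28] -> retrieved
    "obj-3": [40, 55],      # range [40, 55] -> skipped
})
print(relevant_objects(index, 20, 30))  # ['obj-2']
```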
Given the stored data, decision-makers can model their business workflows
through the Process Modelling framework that incorporates two main components:
the first component is Process modelling, which provides an interface for
business process modelling and the specification of end-to-end optimisation goals for the overall process (e.g. accuracy, overall completion time, etc.).
The second component refers to Process Mapping. Based on the analytics tasks
available in the Catalogue of Predictive and Process Analytics and the
specified overall goals, the mapping component identifies analytics algorithms
that can realise the corresponding business processes. The outcome of the
component is a model in a structural representation e.g. a JSON file that
includes the overall workflow, and the mapped business processes to specific
analytics tasks.
Next, through the Data Toolkit, data scientists design, develop and
ingest analytic processes/tasks to the Catalogue of Predictive and Process
Analytics. This is achieved by combining a set of available or under
development analytic functions into a high-level definition of the user’s
application. For instance, they define executables/scripts to run, as well as
the execution endpoints per workflow step. Data scientists can also declare
input/output data parameters, analysis configuration hyper-parameters (e.g.
the k in a k-means algorithm), execution substrate requirements (e.g. CPU,
memory limits etc.) as service level objectives (SLOs), as well as potential
software packages / dependencies (e.g. Apache Spark, Flink etc.). The output
of the Data Toolkit component enriches the output of the previous step (i.e.
Process Modelling) and defines a BigDataStack Playbook.
The generated playbook is utilized by the Application and Data Services
Deployment Patterns Generator. The component creates different arrangements
(i.e. patterns / configurations) of deployment resources for each application
and data service Pod. These candidate deployment patterns (CDPs) are passed to
the Application Dimensioning Workbench, along with an end-to-end optimization
objective and the information on the available resources, in order to estimate
resource usage and QoS performance prior to actual deployment. The primary
output of the Application Dimensioning Workbench is an elasticity model, which
defines the mapping of the input QoS parameters to the concrete resources needed (such as the number of VMs, bandwidth, latency, etc.). These decisions depend on data-defined models. Thus, based on the obtained dimensioning
outcomes, deployment patterns are ranked by the Deployment Patterns Ranker and
the optimum pattern is selected for deployment, making the concluding
arrangement of services data-centric. The Deployment Manager administers the
setup of the application and data services on the allocated resources.
During runtime, the Triple Monitoring engine collects data regarding
resources, application components (e.g. application metrics, data flows across
application components, etc.) and data operations (e.g. analytics / query
progress, storage distribution, etc.). The collected data are evaluated
through the QoS Evaluation component to identify events / facts that affect
the overall quality of service (in comparison with the SLOs set in the
toolkit). The evaluation outcomes are utilised by the Runtime adaptation
engine, which includes a set of components (i.e. cluster resources re-
allocation, storage and analytics re-distribution, network functions re-
compilation, application and data services re-deployment, and dynamic
orchestration patterns), to trigger the corresponding runtime adaptations
needed for all infrastructure elements to maintain QoS.
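A minimal sketch of the evaluation step that feeds the Runtime adaptation engine, assuming lower-is-better SLO thresholds and invented metric names:

```python
def evaluate_qos(metrics: dict, slos: dict) -> list:
    """Compare monitored metrics against SLO thresholds and return violations.

    Metric names and the lower-is-better convention are assumptions made for
    illustration; the real QoS Evaluation component is richer than this.
    """
    violations = []
    for name, threshold in slos.items():
        observed = metrics.get(name)
        if observed is not None and observed > threshold:
            violations.append(
                {"slo": name, "observed": observed, "threshold": threshold}
            )
    return violations

# evaluate_qos({"response_time_ms": 130}, {"response_time_ms": 100})
# -> [{'slo': 'response_time_ms', 'observed': 130, 'threshold': 100}]
```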
Moreover, the architecture includes the Global decision tracker, which aims at
storing all the decisions taken by the various components. The overall
BigDataStack system takes advantage of this recorded historical information to
perform future optimisations. The key rationale for the introduction of this
component is the fact that decisions have a cascading effect in the proposed
architecture. For example, a dimensioning decision affects the deployment
patterns compilation, the distribution of storage and analytics, etc. The
information about whether these decisions are altered during runtime will be
exploited for optimised future choices across all components through the
decision tracker. Moreover, the tracker holds additional information such as
application logging data, Candidate Deployment Patterns, QoS failures, etc.
Thus, as a global state tracker, it provides the ground for cross-component
optimisation, as well as tracking the state and history of BigDataStack
applications.
Finally, the architecture includes the Adaptive Visualisation environment,
which provides a complete view of all information, including raw monitoring
data (for resource, application and data operations) and evaluated data (in
terms of SLOs, thresholds and the evaluation of monitoring in relation to
these thresholds). Moreover, the visualization environment acts as a unique
point for BigDataStack for different stakeholders, actors, thus, incorporating
the process modelling environment, the data toolkit and the dimensioning
workbench. These accompany the views for infrastructure operators (e.g.
regarding deployment patterns).
#### 6\. Main architectural components
Based on the overall architecture presented in the previous chapter, this
chapter provides additional information regarding the individual components of
the BigDataStack architecture.
##### 6.1. Resources Management
The Resource Management sub-system provides an enterprise-grade platform which
manages Container-based and Virtual Machine-based applications consistently on
cloud and on-premise infrastructures. This sub-system makes the physical
resources (e.g. CPUs, NICs and Storage devices) transparent to the
applications. The application’s requirements will be computed based on the
input from the Realisation Engine and by constant monitoring using the
Triple Monitor. The applications’ required resources are automatically
allocated from the available existing infrastructures and will be dismissed
upon execution completion. Thus, the Resource Management sub-system serves as
an abstraction layer over today’s infrastructures, physical hardware, virtual
hardware, private and public clouds. This abstraction allows the development of
compute, networking and storage management algorithms which can work on a
unified system, rather than dealing with the complexity of a distributed
system.
BigDataStack will build on top of the open source OpenShift Kubernetes
Distribution (OKD) project [1] for its Resource Management sub-system. The OKD
project is an upstream project used in Red Hat’s various OpenShift products.
It is based and build around Kubernetes and operators and is enhanced with
features requested by commercial customers and Enterprise level requirements.
According to Duncan et al. [2] ODK is “an application platform that uses
containers to build, deploy, serve, and orchestrate the applications running
inside it”. OKD simplifies the whole process [3] of the deployment of a “fine-
grained management over common user applications” and management of the
containerized software (the lifecycle of the applications). Since its initial
release in 2011, it has been adopted by multiple organizations and has grown
to represent a large percentage of the market. According to IDC [4], OKD aims
at accelerating the application delivery with “agile and DevOps
methodologies”; moving the application architectures toward micro-services;
and adopting a consistent application platform for hybrid cloud deployments.
As a base technology, OKD uses Docker and/or CRI-O for containerization and
Kubernetes [5] for their orchestration, including packaging, instantiation and
running the containerized applications. It also implements “geard” or “gear
daemon” [6], a command-line client for the management of containers and its
linkage to systems across multiple hosts, used for the installation and
management of application components [7]. On top of the above described
technologies, OKD adds [8]:
* Source code management, builds, and deployments for developers
* Managing and promoting images at scale as they flow through your system
* Application management at scale
* Team and user tracking for organizing a large developer organization
* Networking infrastructure that supports the cluster
OKD integrates into DevOps and user operations following a hierarchical
structure, as shown in Figure 8. A master node centralizes the
API/authentication, data storage, scheduling, and management/replication
operations, while applications are run on Pods (following the Kubernetes
philosophy).
Following this layered architecture, users access the API, web-services and
command line directly from the master node, while the applications and data
services are accessed through the routing layer where the services are
located, that is, in the physical machine where the pod was deployed. Finally, the
integrated container registry includes the set of container images which can
be deployed in the system.
Another important point for the project is the protection of security and
privacy of the user. On top of the security provided by Kubernetes, OKD also
offers granular control on the security of the cluster. As shown in [4], users
can choose a whitelist of cipher suites to meet security policies; and share
PID between containers to control the cooperation of containers.
By building on top of OKD, we ensure that BigDataStack components are easily
portable to different cloud offerings, such as Amazon, Google Compute Engine,
Azure, or any On-Premise deployment based on OpenStack.
To ensure more transparent and simple resource management, we are working on several fronts that will be present in our architecture:
* Kuryr: Network speed-up by better integrating OKD on top of OpenStack cloud deployments. We are working on the Kuryr OpenStack upstream project to integrate OpenShift SDN networking into OpenStack SDN networking, simplifying operations as well as achieving a remarkable performance boost (up to 9x better). By using Kuryr at the OKD level we connect the containers directly into the OpenStack networks, instead of having two different SDNs and the performance problem of double encapsulation.
* Kernel Driver: A new (NVMe) kernel driver that speeds up access to NVMe devices from VMs without guest image modification, achieving up to 95% of native performance, compared to the standard 30% with existing VirtIO drivers.
* Network Policies: Network management through a declarative API. As part of the Kuryr upstream work, we have also extended its functionality to support Kubernetes Network Policies, which allow users to define access control to the different components of their applications in a fine-grained manner. These policies are defined in a declarative way, i.e. by stating the desired status rather than the steps to accomplish it. Kuryr then makes sure that the isolation level desired at the OKD (containers) level is translated and enforced through OpenStack Security Group rules.
* Operators: Development of operators for easy lifecycle management of infrastructure and applications. In addition to the performance improvements, we are also pursuing the use of the operator design pattern. This entails the use and development of certain operators (containers) which have their business logic integrated and react to the current status of the system/applications until it matches the desired status. This helps to install applications in an easy, reproducible manner, as well as to deal with day-two operations such as scaling or upgrades. In this regard we are working on a Kuryr SDN operator that allows easy installation and scaling of an OKD cluster on top of OpenStack environments. This network operator takes care of creating everything needed on the OpenStack side, as well as installing anything required by Kuryr, both at initial deployment time and upon OKD cluster scaling actions. Other examples of operators in use are the Spark Operator and the Cluster Monitoring Operator.
* Infrastructure API: A unified API for infrastructure resources to make infrastructure management easy and abstracted from the real infrastructure. To achieve this, the upstream community created the Kubernetes Cluster API project. We have been working on the support for the OpenStack abstraction together with its operator/actuator: Cluster API Provider OpenStack. This allows us to automate creation/scaling actions regarding OKD nodes when running on top of OpenStack too. Thus, we can easily extend an OKD cluster as needed, just by modifying an object in Kubernetes/OKD. Similarly, this gives us further advantages regarding resource management: e.g. if any of the VMs where our OKD is running dies (or the physical server hosting it dies), the developed operator/actuator will automatically recreate the needed VMs on a different compute node, automatically recovering the system until it matches the desired status.
Note that while the first two points are related to infrastructure performance, the latter three are key points for managing infrastructure as code, as well as for enabling easy configuration/adaptation by upper layers, such as the Data-Driven Network Management or the Deployment Orchestration components.
##### 6.2. Data-Driven Network Management
The Data-Driven Network Management component will efficiently handle network
management and routing, introspecting computing and storage resources and collectively building intelligence through analytics capabilities. The
motivation is to optimise computing and storage mechanisms to improve network
performance. This component can obtain data from different BigDataStack layers
(i.e. from storage layer to applications layer) and will be used to extract
knowledge out of the large volumes of data to facilitate intelligent decision
making and what-if analysis. For example, with big data analysis, the data-
driven network management will know which storage or computing resource has
high popularity. Based on the analysis result, the component will be able to
produce insights on how to redistribute storage and/or computing resources to
reduce network latency, improve throughput and satisfy access load and thus
response time.
Monitoring mechanisms over the storage layer will provide information to
adjust the network parameters (e.g. by enforcing policies to achieve a
significant reduction in data retrieval and response time). Also, monitoring
mechanisms over the computing layer will enable the development of
functionalities and trigger policies that will satisfy users’ requirements
regarding runtime and performance.
To serve data-driven network management, we will analyse the data coming from
storage and computing resources within a workflow which is depicted in Figure
10. The workflow is composed of three components, namely: ingest, which consumes network data; process, which computes network metrics; and analyse, which produces network insights. The lifecycle of the analysis task includes a
set of algorithms which enable computational analytics over the data, conduct
a set of control mechanisms and infer knowledge related to resources
optimisation. Taking advantage of data-driven network management, big data
applications will be able to access the global network view and
programmatically implement strategies to leverage the full potential of the
physical storage and computing resources.
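A schematic rendering of this ingest/process/analyse workflow; the metric (mean latency per link) and the threshold are invented for illustration:

```python
def ingest(raw_events):
    """Consume network data: keep only events carrying a latency measurement."""
    return (e for e in raw_events if "latency_ms" in e)

def process(events):
    """Compute a network metric: mean latency per link (illustrative choice)."""
    totals = {}
    for e in events:
        count, total = totals.get(e["link"], (0, 0.0))
        totals[e["link"]] = (count + 1, total + e["latency_ms"])
    return {link: total / count for link, (count, total) in totals.items()}

def analyse(metrics, threshold_ms=50.0):
    """Produce insights: flag links whose mean latency exceeds the threshold,
    as candidates for redistributing storage/computing resources."""
    return [link for link, mean in metrics.items() if mean > threshold_ms]

events = [{"link": "dc1-dc2", "latency_ms": 72.0}, {"link": "dc1-dc3", "latency_ms": 12.0}]
print(analyse(process(ingest(events))))  # ['dc1-dc2']
```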
##### 6.3. Dynamic Orchestrator
The Dynamic Orchestrator (DO) assures that scheduled applications conform to
their Service Level Objectives (SLOs). Such SLOs reflect Quality of Service
(QoS) parameters and might be related to throughput, latency, cost or accuracy
targets of the application. For example, to generate recommendations for
online customers of an e-commerce website, the recommender has to analyse the
customer profile and provide the recommendation in a limited amount of time
(e.g., 1 sec.), otherwise, the page load will be too slow and customers might
leave the website. If the number of online customers increases, then the
recommender will need to improve its recommendations throughput in order to
keep up serving the recommendations in less than 1 second. The DO will then
modify the deployment in order to improve throughput, so that the recommender
does not violate the corresponding SLO. The DO assures conformance to SLOs by
applying various dynamic optimisation techniques throughout the runtime of an
application, at multiple layers and across various components of the
data-driven infrastructure management system. As such, the DO knows which
adaptation actions can be carried out for an application and when these actions
should be carried out, i.e. which actions will affect each SLO.
Figure 11 depicts the high-level interactions of the dynamic orchestrator with
other components. Newly scheduled applications are deployed through the
Application and Data Service Ranking component (ADS-Ranking). 1 The ADS-
Ranking scores possible deployment patterns/configurations (CDPs) and selects
the one which it predicts to best satisfy the SLOs. After an application is
deployed, the DO monitors its performance through the triple monitoring
engine. In case there are SLO violations, the QoS component sends a message
with the violation to the DO, which has two choices: (i) initiate a
re-deployment of the application through ADS (this choice is made when SLOs
can only be reached with major deployment changes, e.g., selecting another ADS
ranking option); (ii) perform more fine-grained adaptations at different
components of the system (e.g., the DO might perform “small” changes in the
deployment configuration, such as the number of replicas).
Note that each of the other components also has its own internal control loop
and internal logic for performing (highly responsive) actions, independently
of the orchestrator or any of the other components. The primary challenge of
the dynamic orchestrator is to reach a (close-to) optimal adaptation decision
quickly, i.e., with a small overhead. This is a difficult goal, because
application tasks will be distributed and adaptation can be achieved at
different components (application, platform, network). The relationship
between an adaptation technique and how it affects an SLO is not clear in
advance, and two adaptation techniques at different components might both lead
to conformance with an SLO. Likewise, two adaptations at two components might
also conflict with each other. As such, the main challenges of the dynamic
orchestrator are:
* Conflicting adaptations in different components
* Overhead for adaptation decisions
* Optimal adaptation
The orchestration logic itself is not implemented using hardcoded rules;
instead, it uses Reinforcement Learning (RL). RL allows the DO to dynamically
change its adaptation logic over time based on the outcome (feedback) of
previous decisions. In RL, this means that the orchestration problem is broken
down into the following elements (a toy sketch follows the list):
* States: These are system and application metrics (e.g. CPU usage and throughput) and the current and past SLO fulfilment.
* Actions: These are changes in deployment (e.g. adding/removing a replica).
* Reward: The reward value is positive and proportional to resource utilization (to avoid underutilization) if SLOs are met, and negative otherwise.
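The following toy sketch illustrates this formulation with an epsilon-greedy agent over a tabular value estimate; it is a minimal illustration of the state/action/reward breakdown above, not the project's actual RL implementation:

```python
# Toy sketch of the RL formulation: states come from metrics and SLO status,
# actions are deployment changes, the reward follows the rule described above.
import random

ACTIONS = ["noop", "add_replica", "remove_replica"]  # example deployment changes

def reward(slos_met: bool, utilisation: float) -> float:
    # Positive and proportional to resource utilisation when SLOs are met
    # (discouraging underutilisation); negative on any violation.
    return utilisation if slos_met else -1.0

class Agent:
    def __init__(self, epsilon: float = 0.1, alpha: float = 0.5):
        self.q = {}  # (state, action) -> estimated value
        self.epsilon, self.alpha = epsilon, alpha

    def act(self, state):
        if random.random() < self.epsilon:  # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, r: float):
        old = self.q.get((state, action), 0.0)  # incremental value update
        self.q[(state, action)] = old + self.alpha * (r - old)
```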
Figure 12 depicts a more detailed view of the dynamic orchestrator. Each
application has its own BigDataStack application, RL Agent and RL Environment,
while the Manager is shared across all applications. The Manager is in charge
of the communication with the other components: it receives the Playbook,
receives the metrics and passes them to the corresponding BigDataStack
application, and receives the action to be taken from the RL Agent, sending it
to the ADS-Ranking or to the platform for performing dynamic adaptations.
Moreover, Figure 13 depicts the different classes of the DO. Their inner
workings, step by step, are as follows (a minimal sketch of the Manager's
consumption loop is given after the list):
1. The Manager handles the communication with all the other components, using RabbitMQ and creates one instance of BigDataStackApplication for each application to be monitored.
2. The BigDataStackApplication creates the RLEnvironment, with its actions and state spaces, and the RLAgent that will be in charge of learning and deciding the best adaptation actions to take when an SLO is violated.
3. Each time a new message comes in, the Manager sends the information to the corresponding BigDataStackApplication, which updates the RLEnvironment state.
4. If a message with an SLO violation comes in, the Manager triggers the RLAgent to decide which action should be taken according to the current RLEnvironment state.
5. Then, the Manager sends a message to the ADS-Ranking requesting the identification of a new deployment configuration or to ADS-Deploy to directly change the deployment.
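A minimal sketch of the Manager's consumption loop, using the pika RabbitMQ client, could look as follows; the queue name and the `applications`/`publish_action` helpers are hypothetical stand-ins for the classes described above:

```python
# Sketch of the Manager consuming QoS-violation messages from RabbitMQ and
# delegating the adaptation decision to the per-application RL agent.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="qos.violations", durable=True)

applications = {}  # name -> BigDataStackApplication instance (hypothetical)

def publish_action(action):
    # Hypothetical: forward the chosen action to ADS-Ranking or ADS-Deploy.
    channel.basic_publish(exchange="", routing_key="do.actions",
                          body=json.dumps(action))

def on_violation(ch, method, properties, body):
    msg = json.loads(body)
    app = applications[msg["Application"]]            # per-application state
    action = app.agent.decide(app.environment.state)  # RLAgent picks adaptation
    publish_action(action)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="qos.violations", on_message_callback=on_violation)
channel.start_consuming()
```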
##### 6.4. Triple Monitoring and QoS Evaluation
The Triple Monitoring and QoS Evaluation are two closely related components
with clearly separated responsibilities:
* The objective of the Triple Monitoring is to collect, store and serve metrics at three levels of the platform: application, data services and infrastructure (cluster) resources.
* The goal of the QoS Evaluation is to continuously evaluate those metrics against constraints (thresholds) or objectives imposed by certain BigDataStack platform users.
###### 6.4.1. Triple Monitoring
The monitoring engine manages and correlates/aggregates monitoring data from
different levels to provide a better analysis of the environment, the
application and data; allowing the orchestrator to take informed decisions in
the adaptation engine. The engine collects data from three different sources:
* Infrastructure resources of the compute clusters, such as resource utilisation (CPU, RAM, services and nodes), availability of the hosts, and data source generation rates and windows. This information enables decision making at a low level. These metrics are directly provided by the infrastructure owner or through specific probes, which track the quality of the available infrastructures. In the context of BigDataStack, the infrastructure metrics are collected by Kubernetes. Those metrics are ingested into the Triple Monitoring Engine by federating Prometheus instances.
* Application components, such as application metrics, data flows across application components, availability of the applications, etc. This information relates directly to the data-driven services deployed in the infrastructure. These metrics are associated with each application, and they should be provided by those applications. For applications related to the BigDataStack infrastructure, the most suitable method is to embed a Prometheus exporter in each of those applications. Use-case applications will send metrics via an HTTP method for flexibility reasons.
* Data functions/operations such as data analytics, query progress tracking, storage distribution, etc. This is a mix of data and storage infrastructure information providing additional information for the “data-oriented” infrastructure resources.
The component will cover both raw metrics (direct measurements provided by the
sensors deployed in the infrastructure or by external measurement systems,
such as the status of the infrastructure) and aggregated metrics (formulas
that exploit metrics already collected and produce the respective aggregated
measurements, which can be more easily used for QoS tracking). The collection
of metrics will be based on two approaches: direct probes in the system to be
monitored, and direct collection of the data by the monitoring engine.
* The probe approach will cover the information systems, where the platform will be able to deploy and collect direct information. In this case, the orchestration engine must manage the deployment of the necessary probes. This approach can cover other cases, where the probe is included directly in the application, and the orchestration only needs to deploy the associated application, which can provide the metric information to the monitoring engine.
* The direct collection will cover the scenarios where the platform cannot deploy any probe, but the infrastructures or the applications expose some information regarding these metrics. In this case, the monitoring engine will be responsible for collecting the metrics data that are exposed by a third party via a REST API (exporter).
After collecting and processing the data, the monitoring engine will be
responsible for notifying other components when an event happens based on the
metrics that it is tracking and specific attributes such as computing,
network, storage or application level. Moreover, it will expose an interface
to manage and query the content. This functionality is implemented in the QoS
Evaluator (SLA Manager). Figure 14 depicts the Triple Monitoring Engine and
its components.
The Triple Monitoring Engine will be based on the Prometheus monitoring
solution (see [9] for more details) and is composed of the following
components:
* Monitoring Interface: This is responsible for exposing the interface that allows other components to communicate. The interface manages two ways of interacting with other components: i) a REST API (outAPI, Figure 14) that enables other components to obtain specific information, for example when a component wants to know more details about a violation in order to take the correct decision, or needs to configure new metrics to be collected directly by the monitoring engine; and ii) a publish/subscribe notification interface, implemented with RabbitMQ, which allows any component to consume information in real time.
* Monitoring Manager: This component handles subscriptions by storing the queue, the list of metrics and the metadata related to each subscription. The manager consumes all metrics collected by Prometheus and, based on the subscription list, redirects them to the subscribed components via the declared queues.
* Monitoring Databases: ElasticSearch is currently used as the metrics database. MongoDB is also used to store all metrics requested via the outAPI, in order to keep track of metrics utilization.
* PrometheusBeat: Since Prometheus has a small retention period, BigDataStack optimization loops in various components (e.g. deployment patterns generation) raised the need for a solution that would allow accessing and holding the collected metrics. To this end, this component receives the metrics collected by Prometheus, and ingests them to a pipeline (Logstash) for being stored.
* Optimizer: Since the Triple Monitoring Engine of BigDataStack collects monitoring data from different sources and all those data are utilized at specific time periods by different BigDataStack architecture components, storage optimization is required. Based on the information stored in the MongoDB (metrics utilization) this component decides about the time period for which the monitoring data should be kept.
* Push gateway: The push gateway is a Prometheus exporter. It is used in BigDataStack especially for collecting monitoring data obtained after each Spark driver execution (see the probe sketch after this list).
* Collector Layer: This component is responsible for obtaining the data to be moved to the Monitoring manager. There are two ways to collect the data, either through a probe or through direct collection:
* Probe API exposes an interface to allow different kinds of probes to send the monitoring data to the monitoring engine.
* Direct collection is realized through a component that collects the monitoring data directly, by invoking other systems or components. For example, it receives the data directly from the Resource management engine or invokes third-party libraries to obtain the state of the application and data services.
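As an illustration of the probe approach combined with the push gateway, the following sketch uses the prometheus_client library to push a metric from a short-lived job (e.g. a Spark driver); the gateway address and metric name are assumptions:

```python
# Probe-style sketch: a short-lived job pushes its final metrics to the
# Prometheus Push gateway, from which Prometheus scrapes them.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
duration = Gauge("spark_driver_last_run_seconds",
                 "Duration of the last Spark driver execution",
                 registry=registry)
duration.set(42.3)  # value measured by the probe

# 'pushgateway:9091' is an assumed in-cluster service address.
push_to_gateway("pushgateway:9091", job="spark_driver", registry=registry)
```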
Integration with resource management engines
The Triple Monitoring Engine provides APIs for receiving metrics from
different sources (infrastructure, application and data services) and exposes
them for consumption. Although different APIs will be available due to the
great diversity of monitoring data sources, the recommended API is the
“Prometheus exporters” model. Some of the technologies being considered for
BigDataStack are already integrated with Prometheus, as shown in Table 2.
<table>
<tr>
<th>
Technology component
</th>
<th>
Monitoring aspect
</th>
<th>
Prometheus
exporter availability
</th>
<th>
Method
</th> </tr>
<tr>
<td>
Kubernetes
</td>
<td>
Computing infrastructure
</td>
<td>
Yes
</td>
<td>
Federation
</td> </tr>
<tr>
<td>
OpenStack
</td>
<td>
Computing infrastructure
</td>
<td>
Yes
</td>
<td>
Exporter
</td> </tr>
<tr>
<td>
Spark/Spark SQL
</td>
<td>
Data
functions/operations
</td>
<td>
Yes
</td>
<td>
Exporter
(SparkMeasure)
</td> </tr>
<tr>
<td>
IBM COS (Cloud Object Store)
</td>
<td>
Data infrastructure
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
LeanXcale database
</td>
<td>
Data infrastructure
</td>
<td>
For some metrics
</td>
<td>
Federation
</td> </tr>
<tr>
<td>
CEP
</td>
<td>
Data Infrastructure
</td>
<td>
Yes
</td>
<td>
Federation
</td> </tr> </table>
Table 2 - Prometheus integration
Federation of Prometheus instances
Federation is used to pull monitoring data from another Prometheus instance.
This model is introduced in the BigDataStack Triple Monitoring Engine for two
main reasons. Firstly, the platform uses Kubernetes as its container
orchestrator, which embeds by default a Prometheus instance (prometheus-k8s).
This instance collects monitoring data related to the cluster, its nodes and
the services running on it. For security reasons it is not appropriate to use
prometheus-k8s for collecting application- and data-related monitoring data.
Secondly, the LeanXcale database and the CEP are independent systems and have
their own Prometheus instances. For reasons of reusability and efficiency
(collecting only the monitoring data directly used by BigDataStack
components), the proposed federation model is the most suitable way to meet
this requirement.
In federation mode, the master instance should be configured appropriately by
specifying the interval at which metrics are collected, the source job and, if
needed, the specific metrics to collect.
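The following sketch renders such a federation scrape job for the master instance as the fragment that would go into its prometheus.yml; the job name, 'match[]' selector and target address are illustrative assumptions:

```python
# Sketch of a federation scrape job: the master pulls selected series from a
# federated instance (here, prometheus-k8s) via its /federate endpoint.
import yaml

federation_job = {
    "job_name": "federate-k8s",
    "honor_labels": True,
    "metrics_path": "/federate",
    "scrape_interval": "30s",                             # pull interval
    "params": {"match[]": ['{job="kubernetes-nodes"}']},  # series to pull
    "static_configs": [{"targets": ["prometheus-k8s:9090"]}],
}
print(yaml.safe_dump({"scrape_configs": [federation_job]}, sort_keys=False))
```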
###### 6.4.2. QoS Evaluation
The Quality of Service (QoS) Evaluation component is directly connected with
the Triple Monitoring Engine to evaluate the quality of the application and
data services deployed on the platform. To do so, it compares service metrics
(key performance indicators) against the objectives set by the owner of the
service, and thus imposed on the BigDataStack platform when the service was
deployed. The QoS Evaluation component is also responsible for notifying when
the quality objectives are not met by the running service. Therefore, the
component is not responsible for obtaining the metrics (this is delegated to
the monitoring engine) but for applying evaluation rules to those metrics and
notifying when quality failures occur.
The main entities within the QoS Evaluation are the following:
* Agreement: it is a description of the QoS evaluation task to be carried out by the QoS Evaluation. It describes the creation and expiration time of the task, the provider and consumer of the application or service whose quality needs to be guaranteed, and the list of QoS constraints or guarantees to be evaluated.
* SLO (Service Level Objective) or QoS guarantee: it is a set of thresholds for the value of a given metric, representing increasing levels of criticality. The last threshold is always the final limit or objective to be met. The other thresholds are used as checkpoints to better understand and control the dynamics of the indicator. The SLO belongs to the agreement.
* Violation: it is generated when the value of the QoS metric crosses any of the SLO thresholds. The QoS Evaluation component notifies each violation to the other components of the platform subscribed to the event; perhaps the most important of the subscribers is the Dynamic Orchestrator, which is responsible for service deployment adaptation decisions.
The QoS Evaluation is made of the following components:
* Interface component (REST API): through this interface the consumers of the QoS evaluation service can start/stop the evaluation of certain application metrics.
* QoS database: it is responsible for storing all agreements, violations and service level objectives. These will be stored in the Global Decision Tracker.
* Evaluator: it is responsible for performing QoS evaluation. A periodic thread is started to check the expiration date of agreements. For each enabled agreement, it starts a task to check agreement evaluation by getting needed metrics from the adapter. The task is also started when metrics are received from the Notifier.
* Adapter: it is responsible for calling the monitoring system to obtain the metrics data. It will be different for each monitoring system, so it will be accountable for building the specific request to the Triple Monitoring System to gather and transform metrics to have them ready to compare with SLOs by the Evaluator.
* Notifier: It is responsible for notifying third parties that want to be alerted if something happens in the defined agreements, so that corrective actions can be taken.
In the BigDataStack platform, application and data services QoS constraints
(objectives) are specified by the Data Scientist through the Data Toolkit (see
Section 6.13), together with the rest of the information describing the
application to be deployed. This is compiled into the so-called application
playbook, which serves as the specification for the BigDataStack platform to
deploy and operate the application. The following table shows an example of
QoS constraints imposed on the response time of an online service called
“recommendation-provider”. Notice that the Data Scientist can specify not only
a required response time but also a recommended response time 2 :
<table>
<tr>
<th>
- name: recommendation-provider
  metadata:
    qosRequirements:
      - name: "response_time"
        type: "maximum"
        typeLimit: null
        value: 900
        higherIsBetter: false
        unit: "miliseconds"
    qosPreferences:
      - name: "response_time"
        type: "maximum"
        typeLimit: null
        value: 300
        higherIsBetter: false
        unit: "miliseconds"
</th> </tr> </table>
When a service deployment is requested, the Dynamic Orchestrator (i.e. the
component in charge of making deployment adaptation decisions to satisfy QoS
constraints) breaks down the QoS objective into thresholds of increasing
levels of criticality. Depending on the nature of the QoS metric (indicator)
to control, and on both the recommended and required values, the Dynamic
Orchestrator may produce an arbitrary number of thresholds between the first
threshold (related to the recommended value) and the last one (related to the
required value), as the sketch below illustrates.
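A minimal sketch of one such breakdown, which reproduces the 300/500/700/900 thresholds of the example when two intermediate checkpoints are requested, is the following (evenly spaced thresholds are an assumption; the actual number and spacing are up to the Dynamic Orchestrator):

```python
def qos_thresholds(recommended: float, required: float, intermediate: int = 2):
    """Evenly spaced thresholds from the recommended to the required value."""
    step = (required - recommended) / (intermediate + 1)
    return [recommended + i * step for i in range(intermediate + 2)]

qos_thresholds(300, 900)  # -> [300.0, 500.0, 700.0, 900.0]
```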
With every deployment, the Dynamic Orchestrator will request the QoS
Evaluation component to create/start a task that continuously compares the
service performance metric against those thresholds. This request is made
asynchronously through a message queue, implemented as a topic within the
RabbitMQ service (which acts as the message broker between BigDataStack
components). In the previous example, the Dynamic Orchestrator may send the
following message to the QoS Evaluation 3 :
<table>
<tr>
<th>
"qosIntervals": {
"reponse_time": [
">300",
">500",
">700",
">900" ]
}
</th> </tr> </table>
The QoS Evaluation component incorporates the thresholds or intervals to be
monitored (requested by the Dynamic Orchestrator) as a guarantee object in the
agreement for the actual service deployment. In that way, all QoS constraints
to be evaluated and guaranteed for the same service deployment are maintained
together. In the previous example, the agreement and guarantee created from
the Dynamic Orchestrator request may be like the following:
{
  "id": "TEST-ATOSWL-NormServ-19022019-1",
  "name": "TEST-ATOSWL-NormServ-19022019-1_agreement",
  "details": {
    "id": "TEST-ATOSWL-NormServ-19022019-1",
    "type": "agreement",
    "name": "TEST-ATOSWL-NormServ-19022019-1_agreement",
    "provider": { "id": "a-provider-01", "name": "ATOS Wordline" },
    "client": { "id": "a-client-01", "name": "Eroski" },
    "creation": "2019-05-30T07:59:27Z",
    "expiration": "2020-01-17T17:09:45Z",
    "guarantees": [
      {
        "name": "response_time",
        "constraint": "[response_time>50]",
        "importance": [
          { "Name": "0", "Type": "warning",   "Constraint": ">300" },
          { "Name": "1", "Type": "warning 2", "Constraint": ">500" },
          { "Name": "2", "Type": "warning 3", "Constraint": ">700" },
          { "Name": "3", "Type": "error",     "Constraint": ">900" }
        ]
      }
    ]
  }
}
The QoS Evaluation will continuously assess the value of all guaranteed QoS
attributes (metrics or indicators) and detect violations, that is, when the
value crosses any of the thresholds that have been specified. QoS violations
are notified to any interested component of the BigDataStack platform through
a publish/subscribe mechanism implemented as a topic within the RabbitMQ
service (which acts as the message broker between BigDataStack components).
Following the previous example, the following violation notifications may be
published 4 :
<table>
<tr>
<th>
{
  "Application": "TEST-ATOSWL-NormServ",
  "Message": "QoS_Violation",
  "Fields": {
    "IdAggrement": "TEST-ATOSWL-NormServ-19022019-1",
    "Guarantee": "response_time",
    "Value": "351",
    "ViolationType": {
      "Type": "warning",
      "Interval": "0"
    },
    "ViolationTime": {
      "ViolationDetected": "2019-06-30T07:59:27Z",
      "AppExpiration": "2020-01-17T17:09:45Z"
    }
  }
}
</th> </tr>
<tr>
<td>
{
  "Application": "TEST-ATOSWL-NormServ",
  "Message": "QoS_Violation",
  "Fields": {
    "IdAggrement": "TEST-ATOSWL-NormServ-19022019-1",
    "Guarantee": "response_time",
    "Value": "920",
    "ViolationType": {
      "Type": "error",
      "Interval": "3"
    },
    "ViolationTime": {
      "ViolationDetected": "2019-06-30T09:34:21Z",
      "AppExpiration": "2020-01-17T17:09:45Z"
    }
  }
}
</td> </tr> </table>
Perhaps the most important of the subscribers is the Dynamic Orchestrator
itself, which will respond to different violation alerts depending on the
criticality of the threshold trespassed.
The QoS Evaluation displays the warning (lowest criticality) and error
(highest criticality) thresholds on the interface of the Triple Monitoring
Engine, superimposed on the metric evolution graphs to which they apply. The
following figure shows an example of the Response Time (left) and Throughput
(right) metric graphs on the Triple Monitoring Engine, with the warning and
error thresholds drawn as orange and red lines, respectively.
##### 6.5. Applications & Data Services Ranking / Deployment
Application and Data Services Ranking/Deployment is a top-level component of
the BigDataStack platform, as defined in the central architecture diagram (see
Section 5). It belongs within the realisation engine of the platform and is
concerned with how best to deploy the user’s application to the cloud, based
on information about the application and cluster characteristics. From a
practical perspective, its role is to identify which - of a range of potential
deployment options - is the best for the current user, given their stated
(hard) requirements and other desirable characteristics (e.g. low cost or high
throughput), as well as operationalize the deployment of the user’s
application based on the selected option.
In practice, the Application and Data Services Ranking/Deployment is divided
into three main sub-components, namely: the main component ADS-Ranking; and
two support components ADS-Deploy and ADS-GDT, which we describe in more
detail below:
* Application and Data Services Ranking (ADS-Ranking): This is dedicated to the selection of the best deployment option. Note that this component is sometimes referred to as the ‘deployment recommender service’, as from the perspective of a BigDataStack Application Engineer, it produces a recommended deployment for them on-demand.
* Application and Data Services Deployment (ADS-Deploy): This is concerned with the physical scheduling/deployment of the application for the selected deployment option via Openshift.
* Application and Data Services Global Decision Tracker (ADS-GDT): This stores information about the state of different applications and the decisions made about them.
Application and Data Services Ranking (ADS-Ranking)
ADS-Ranking is tightly coupled to the Application & Data Services Dimensioning
(ADS-Dimensioning) component of BigDataStack that sits above it. The main
output of ADS-Dimensioning is a series of candidate deployment patterns (ways
that the user’s application might be deployed), including resource usage and
quality of service predictions. It is these deployment patterns that ADS-
Ranking takes as input (see REQ-ADSR-01 [10]) and from which it subsequently
selects one or more ‘good’ options for the Application Engineer. Each
candidate deployment pattern represents a possible configuration for one
‘Pod’ in the user’s application (a logical grouping of containers, forming a
micro-service) [11]. User applications may contain multiple pods.
Communication to and from ADS-Ranking is handled via the Publisher-Subscriber
design pattern. In this case, ‘messages’ are sent between components, which
trigger processing on the receiving component. More precisely, ADS-Ranking
subscribes to the ADS-Dimensioning component to receive packages of pod-level
candidate deployment patterns (CDPs), one package per-pod in the application
to deploy. On receipt, this triggers the ranking of the provided deployment
patterns, as well as the filtering out of patterns that either do not meet the
user’s requirements, or that are otherwise predicted to provide unacceptable
performance. After ranking/filtering is complete, ADS-Ranking will select a
single deployment pattern per-pod to send to the BigDataStack Adaptive
Visualisation Environment. Within this environment, the user can either choose
to deploy their application using the recommended patterns directly, customise
the patterns and then deploy, or otherwise cancel the deployment process. Upon
choosing to deploy with a set of patterns, those patterns are sent to ADS-
Deploy for physical scheduling on the available hardware.
Figure 17 illustrates the data flow between the components around ADS-Ranking.
As we can see, ADS-Dimensioning first gets information about the user’s
application and preferences from a BigDataStack Playbook and uses it to
produce packages of candidate deployment patterns (CDPs). Each CDP represents
a deployment configuration that we could use to deploy the user’s application
pod (where some CDPs will produce more efficient or effective deployments than
others). These pattern packages are sent as messages to ADS-Ranking, which
ranks and filters those patterns, finally selecting one per-pod, which is
predicted to efficiently and effectively satisfy the user’s requirements.
These top patterns are aggregated, then placed in a message envelope and sent
back to the BigDataStack Adaptive Visualisation Environment, where the
application engineer can accept those patterns and use them directly for
deployment, or otherwise customise them first. Once the application engineer
is happy with the deployment, they can then send the final patterns via the
visualisation environment to ADS-Deploy, which will schedule deployment on
OpenShift.
Internally, ADS-Ranking supports two central operations: 1) the first-time
ranking/filtering of CDPs; and 2) re-ranking of CDPs in scenarios where the
previous deployment is deemed unsuitable. The first operation (CDP ranking and
filtering) is comprised of three main processes. These three processes are:
* Pod Feature Builder: This takes as input a set of CDPs, and for each CDP in that package, it builds a single vector representation of that CDP, which combines all the information provided by dimensioning. It can also filter out CDPs that do not meet minimal Quality of Service (QoS) requirements, saving computation time later in the process. The output of this component is the (filtered) list of CDPs along with their new vector representations. This process targets REQ-ADSR-02 [10].
* Pod Scoring: This process takes the CDPs and vector representations as input and ranks those CDPs based on their predicted suitability, with respect to the user’s desired quality of service. To achieve this, it uses either a rule-based model or a supervised model [12] trained on previous CDP deployments and their observed fitness. The output of this process is a ranking of scored CDPs. This process targets REQ-ADSR-03 and 04 [10].
* Pod Selection: This process takes as input the ranking of CDPs and selects one of these CDPs. This may be a simple process that takes the top CDP and filters out the rest. However, it may include more advanced techniques to better fit with user needs, such as making sure the selected CDP will provide sufficient extra processing capacity, in the case of applications that process data streams with fluctuating data rates. The output of this process is a single CDP (per-pod), which is the recommended deployment that is shown to the user. This process targets REQ-ADSR-05 [10]. A minimal sketch of this three-stage pipeline is given below.
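As announced above, the following is a simplified sketch of the three-stage pipeline; the feature names, the linear scoring rule and the weights are illustrative stand-ins for the real feature builder and the rule-based/supervised scoring models:

```python
# Simplified sketch of Pod Feature Builder -> Pod Scoring -> Pod Selection.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CDP:
    pod: str
    features: Dict[str, float]  # e.g. predicted cost, latency, throughput

def build_and_filter(cdps: List[CDP], max_latency_ms: float) -> List[CDP]:
    """Pod Feature Builder: keep only CDPs meeting minimal QoS requirements."""
    return [c for c in cdps
            if c.features.get("latency_ms", float("inf")) <= max_latency_ms]

def pod_score(cdp: CDP, weights: Dict[str, float]) -> float:
    """Pod Scoring: linear stand-in for the rule-based or supervised model."""
    return sum(w * cdp.features.get(f, 0.0) for f, w in weights.items())

def select_per_pod(cdps: List[CDP], weights: Dict[str, float],
                   max_latency_ms: float) -> Dict[str, CDP]:
    """Pod Selection: one recommended CDP per pod (here, simply the top-scored)."""
    best: Dict[str, CDP] = {}
    for cdp in build_and_filter(cdps, max_latency_ms):
        if cdp.pod not in best or pod_score(cdp, weights) > pod_score(best[cdp.pod], weights):
            best[cdp.pod] = cdp
    return best
```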
If the user’s application is comprised of multiple pods, then the recommended
CDPs for each pod are collected and aggregated together to form a
recommendation for the entire application. The aforementioned processes are
implemented using Apache Flink [13] to facilitate low-latency real-time
processing. The overall flow for first-time ranking/filtering of CDPs is shown
in Figure 18. In this simplified example, three CDPs are used as input for a
single application (A1), which is comprised of two pods (P1 and P2). Pod 1 has
two CDPs (A1-P1-1 and A1-P1-2), while Pod 2 has one CDP (A1-P2-1). As we can
see from Figure 18, these CDPs are first grouped by pod, to create parallel
processing streams for each. For each CDP, these are then subject to feature
extraction, to create the representation vectors. In this case, features from
the overall pod (e.g. total cost) and features from each container (e.g.
container latency) are extracted here. These CDPs and feature vectors are sent
to pod scoring, to produce a numerical estimate of the overall suitability of
each CDP. The best CDPs per pod (A1-P1-2 and A1-P2-1 here) are then grouped by
application (A1) and then output (to the visualisation environment for viewing
by the application engineer).
The second function (CDP re-ranking) is similar to the primary function, with
the exception that it takes in a CDP that has been deemed to have failed the
user in terms of quality of service, along with context about that CDP (e.g.
why it failed), and it introduces an additional ‘Failure Encoding’ process:
Failure Encoding: This process examines the context of a failed CDP and
encodes that failure into the CDP structure as features, such that they can be
used by the Pod Feature Builder when generating the CDP vectors. In this way,
CDPs that will not suffer from the same issues as the failed CDP can be
promoted (upweighted) during ranking. This process targets REQ-ADSR-07 [10].
Figure 19 illustrates the main processes and data flow within ADS-Ranking. In
this case, re-ranking is triggered by sending a set of CDPs representing a
quality of service (QoS) failing user application deployment to ADS-Ranking.
For this example, the application has two pods and hence two CDPs (A1-P1-2 and
A1-P2-1), where a QoS failure has been detected for A1-P1-2 (marked in the
figure). The first step that ADS-Ranking takes is to collect all the
alternative CDPs that were not selected from the user’s application. These
were stored in ADS-GDT (Global Decision Tracker), which will be described
later. Once these CDPs have been collected, any CDPs for pods that were not
subject to QoS failures are discarded, as these do not need to be considered
for re-deployment (A1-P2-1). The remaining CDPs are then subject to failure
encoding, which converts the failure information into a feature vector that
can be used during ranking (<x>). The CDPs are then sent to the Pod Feature
Builder in a similar manner to first-time ranking, where the normal process is
followed, with the exception that the additional features obtained from the
failure encoding are used to enhance ranking effectiveness.
Application and Data Services Deployment (ADS-Deploy)
This process is triggered by the BigDataStack Adaptive Visualisation
Environment and takes as an input the selected CDP(s). The aim of this
component is two-fold. First, to use the given CDP(s) to launch the user’s
application pods on the cloud infrastructure. Second, to notify relevant
BigDataStack components of the deployment status, such that follow-on
processes (such as monitoring) can commence. To achieve this, the ADS-Deploy
component interacts with a container orchestration service (e.g. OpenShift),
translating the CDP into a sequence of deployment instructions.
This task is divided into the following steps:
1. Receive and check CDP. The component checks that the CDP triggering the deployment process is structurally correct.
2. Translate CDP. The CDP is translated to an ontology that the orchestrator will understand.
3. Interpretation and deployment. The orchestrator interprets the received file and starts the containers and associated rules.
4. Communication with the user. The result of the process (either success or fail) is communicated to the rest of the architecture (and ultimately, to the user) as an event by means of a publisher-subscriber model. The main subscribers to this event will be the Dynamic Orchestrator, ADS-GDT components, along with the BigDataStack Adaptive Visualisation Environment.
Application and Data Services Global Decision Tracker (ADS-GDT)
The role of the Global Decision Tracker is (as its name suggests) to keep
track of any state or decisions made about a user’s application related to its
deployment or run-time performance. In effect, it is a data store that holds
both the current configuration (BigDataStack Playbook and associated CDPs) for
each deployed user application, along with relevant events generated by other
components (e.g. ADS-Deploy reporting a successful deployment or the dynamic
orchestrator reporting a quality of service failure).
Like the other ADS-* components, ADS-GDT uses the publisher-subscriber pattern
to enable asynchronous one-to-many communication flows in a standardised and
reliable manner. In this case, it subscribes to all the message queues that
are relevant to deployment or application run-time activities and saves them
within a local database. It also hosts a RESTful API service that provides
bespoke access to the collected data for both BigDataStack services (e.g. ADS-
Ranking during re-ranking) but also to the BigDataStack Adaptive Visualisation
Environment, where application state information is needed for visualisation.
##### 6.6. Data Quality Assessment
The data quality assessment mechanism aims at evaluating the quality of the
data prior to any analysis on them to ensure that analytics outcomes are based
on datasets of specific quality. To this end, BigDataStack architecture
includes a component to assess the data quality. The component incorporates a
set of algorithms to enable domain-agnostic error detection, in a given
dataset. The domain-agnostic approach followed aims at facilitating the goals
of data quality assessment without prior knowledge of the application domain /
context, thus making it “generalised” and applicable to different application
domains and as a result to different datasets. While current solutions in data
cleaning are quite efficient when considering domain knowledge (for example in
eHealth regarding the correlation between different measurements of different
health parameters), they provide limited results regarding data volatility, if
such knowledge is not utilised. BigDataStack will provide a data quality
assessment service that exploits Artificial Neural Networks (ANN) and Deep
Learning (DL) techniques, to extract latent features that correlate pairs of
attributes of a given dataset and identify possible defects in it.
The key issues that need to be handled by the Data Quality Assessment service
are:
* Work in a context-aware but domain-agnostic fashion. The process should be adaptable to any dataset, learn the relationships between the data points and discover possible inconsistencies.
* Model the relationships between data points and reuse the learned patterns. The system should store the models learned by the machine learning algorithms, and reuse them through an optimisation component, which checks if the raw data have similar patterns, dataset structure or sources. In that case, already existing models should be activated, to complete the process in an efficient manner.
The way to learn and predict the relationships between data points, to
discover possible deviations, is to exploit the recent breakthroughs in Deep
Learning, and the idea of an embedding space. Figure 20 depicts a serial
architecture, which tries to predict if two entities are related to each
other.
Given the learned distributed encodings of two entities x and y (in our case,
any two data points), we can discover whether these two candidate entities or
data points are related. Thus, considering the DANAOS use case, if the temperature sensor
emits a value that is illogical given other rpm sensor readings, the
relationship between these two data points would be associated with a low
score (or probability). This could provide significant improvements in the
results of an analytical task that the data scientist wants to execute, and is
part of a general business process.
To optimize the data quality assessment process, we introduce a subcomponent
that retrieves previously learned models, when a similar dataset structure
arrives in the system, or the same data source sends new data.
Data quality assessment component inputs:
* The raw data ingested by the data owner through the Gateway & Unified API
* The data model provided by the optimizer, if it exists
* User preferences and specifications, ingested through the Data Toolkit
Data quality assessment component outputs:
* Assessed data, establishing data veracity
  * A probability score for each tuple in the database column
* Trained, reusable ML models, stored in a repository for later use
The main structure of the Data Quality Assessment component is depicted in
Figure 21.
Based on this figure the flow is as follows:
* The Data Pre-processing unit takes raw data and converts them into a form that the machine learning algorithms can work with
* The main pillar of the service is the data cleaning component, which takes the preprocessed data as input, trains a new model and stores it in the model repository
* During the assessment phase, a scheduler pulls newly ingested data to be assessed
* The data quality assessment module retrieves the learned model from the repository and makes the necessary predictions
* The assessed data are updated into the distributed storage
##### 6.7. Real-time CEP
Streaming engines are used for the real-time analysis of data collected from
heterogeneous data sources at very high rates. Given the amount of data to
be processed in real time (from thousands to millions of events per second),
scalability is a fundamental feature for data streaming technologies. In the
last decade, several data streaming systems have been released. StreamCloud
[14] was the first system to address the scalability problem, allowing
parallel distributed processing of massive amounts of collected data. Apache
Storm [15] and later Apache Flink [13] followed the same path, providing
commercial solutions able to distribute and parallelise the data processing
over several machines to increase the system throughput in terms of the number
of events processed per second. Apache Spark [16] later added streaming
capabilities to its product. Spark’s approach is not purely streaming: it
divides the data stream into a set of micro-batches and repeats the processing
of these batches in a loop.
The complex event processing for the BigDataStack platform will be a scalable
complex event processing (CEP) engine able to run in federated environments
with heterogeneous devices with different capabilities and aggregate and
correlate real-time events with structured and non-structured information
stored in the BigDataStack data stores. The CEP will take into account the
features of the hardware, the amount of data being produced and the bandwidth
in order to deploy queries. The CEP will also redeploy and migrate queries if
there are changes in the configuration, increases/decreases of data, changes
in the number of queries running, or failures.
Data enters the CEP engine as a continuous stream of events, and is processed
by continuous queries. Continuous queries are modeled as an acyclic graph
where nodes are streaming operators and edges are data streams connecting
them. Streaming operators are computational units that perform operations over
events from input streams and outputs resulting events over its outgoing
streams. Streaming operators are similar to relational algebra operators, and
they are classified into four categories according to their nature, namely:
stateless, stateful, user-defined and data store.
* Stateless operators are used to filter and transform individual events. Output events, if any, only depend on the data contained in the current event.
* Stateful operators produce results based on state kept in a memory structure named a sliding window. Sliding windows store tuples according to spatial or temporal conditions. The CEP provides aggregates and joins based on time windows (e.g., the events received during the last 20 seconds) and size windows (e.g., the last 20 events). A toy sliding-window operator is sketched after this list.
* User-defined operators implement other user-defined functions on streams of data.
* Data store operators are used to integrate the CEP with the BigDataStack data stores. These operators allow to perform correlation among real time streaming data and data at rest.
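As an illustration of the stateful category, the following toy operator maintains an average over a sliding time window; all names are illustrative, and the sketch deliberately ignores distribution and parallelism:

```python
# Toy stateful streaming operator: average over a sliding time window.
from collections import deque
from typing import Optional
import time

class TimeWindowAverage:
    def __init__(self, window_seconds: float = 20.0):
        self.window = window_seconds
        self.events: deque = deque()  # (timestamp, value) pairs inside the window

    def on_event(self, value: float, ts: Optional[float] = None) -> float:
        ts = time.time() if ts is None else ts
        self.events.append((ts, value))
        while self.events and self.events[0][0] < ts - self.window:
            self.events.popleft()  # evict tuples that left the time window
        return sum(v for _, v in self.events) / len(self.events)
```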
The main components of BigDataStack CEP are:
* Orchestrator: It oversees the CEP. It registers and deploys the continuous queries in the engine. It monitors the performance metrics and decides reconfiguration actions.
* Instance Manager (IM): It is the component that runs a continuous query or a piece of it. Instance Managers are single-threaded and each runs on one core.
* Reliable Registry: It stores information related to query deployments and components status. It is implemented by Zookeeper.
* Metric Server: It handles all performance metrics of the CEP. The collected metrics are the load, throughput and latency of queries, subqueries and operators, and the CPU, memory and IO usage of IMs. These metrics are handled by a Prometheus time-series database.
* Driver: The interface between the CEP and other applications. Applications use the CEP driver to register/unregister or deploy/undeploy a continuous query, subscribe with the output streams of the queries to consume results and mainly to send events to the engine.
Figure 22 shows the different components of the CEP and their deployment in
several nodes. Each node can run several Instance Managers (one per core). The
registry and metric server are deployed in different nodes although they can
be collocated in the same node. The client and receiver applications are the
ones producing and consuming the CEP data (shown as dashed black lines). The
rest of the communication is internal to the CEP. The Orchestrator
communicates with the IMs to deploy queries (configuration messages) and
registers this information in Zookeeper (Zookeeper communication). All
components send performance metrics to the metric server (yellow dashed
lines).
##### 6.8. Process Mapping and Analytics
The Process mapping and analytics component of the BigDataStack architecture
consists of two separate sub-components: Process Mapping and Process
Analytics.
* The objective of the Process Mapping sub-component is to predict the best algorithm from a set of algorithms available in the Predictive and Process Analytics Catalogue, given a specific dataset D and a specific analysis task T.
* The goal of the Process Analytics sub-component is to discover Processes from event logs and apply Process Analytics techniques to the discovered process models in order to optimize overall processes (i.e., workflows).
###### 6.8.1. Process Mapping
The inputs of the Process Mapping sub-component consist of:
* The analysis task T (e.g., Regression, Classification, Clustering, Association Rule Learning, Reinforcement Learning, etc.) that the user wishes to perform
* Additional information that is dependent on the analysis task T (e.g., the response – predictor variables in the case of Supervised Learning, the desired number of clusters in the case of Clustering, etc.).
* A dataset D that is subject to the analysis task T
Table 3 provides an overview of the main symbols used in the presentation of
the Process Mapping sub-component.
<table>
<tr>
<th>
Symbol
</th>
<th>
Description
</th> </tr>
<tr>
<td>
T
</td>
<td>
An analysis task (e.g., clustering, classification…)
</td> </tr>
<tr>
<td>
D
</td>
<td>
A dataset
</td> </tr>
<tr>
<td>
T(D)
</td>
<td>
The analysis task T applied on dataset D
</td> </tr>
<tr>
<td>
A(T)
</td>
<td>
An algorithm that solves the analysis task T (e.g., A(T)=K-means for
T=Clustering)
</td> </tr>
<tr>
<td>
A(T,D)
</td>
<td>
An algorithm applied on D to solve the task T
</td> </tr>
<tr>
<td>
M(D)
</td>
<td>
A model describing a dataset D
</td> </tr>
</table>
Table 3 - Main symbols used in Process Mapping
The output of the Process Mapping sub-component is an algorithm A(T) that is
automatically selected as the best for executing the data analysis task T at
hand. The best algorithm can be based on various quantitative criteria,
including result quality or execution time, and combinations thereof.
_High-level Architecture_
Figure 23 provides an overview of the different modules and their
interactions. The Process Mapping sub-component comprises the following four
main modules:
* _Data Descriptive Model_ : This module takes as input a dataset in a given input form and automatically performs various types of data analysis tests and computations of different statistical properties, in order to derive a model M(D) that describes the dataset D. Based on the relevant research literature, examples of information typically captured by the model M(D) include: dimensionality and intrinsic (fractal) dimensionality, the set of attributes, types of attributes, statistical distribution per numerical attribute (mean, median, standard deviation, quantiles), cardinality of categorical attributes, statistics indicating sparsity, correlation between dimensions, outliers, etc. The exact representation of the model M(D) is presented more concretely below; it can be considered a feature vector, and in the following the terms model and feature vector are used interchangeably. Subsequently, the produced feature vector M(D) is used to identify previously analysed datasets that have similarities with the given dataset. This is achieved by defining a similarity function sim(M(D1), M(D2)) that operates at the level of the feature vectors M(D1) and M(D2); a minimal sketch of such a function is given after this list.
* _Analytics Engine_ : The main role of this module is to provide an execution environment for analysis algorithms. Given a specific dataset D and a task T, the Analytics Engine can execute the available algorithms A(T) on the specific dataset, and obtain its result A(D,T). The available algorithms are retrieved from the Predictive and Process Analytics Catalogue for algorithms available in BigDataStack. In this way, evaluated results of analysis algorithms executed on datasets are kept along with the model description of the dataset. Separately, we implement in the analytics engine the functionality of computing similarities between models of datasets, thereby enabling the retrieval of the most similar datasets to the dataset at hand.
* _Analytics Repository_ : The purpose of this repository is to store a history (log) of previous evaluated results of data analysis tasks on various datasets. Each record in this repository corresponds to one previous execution of a specific algorithm on a given dataset. It contains the model of dataset that has been analysed in the past, along with the algorithm executed, and its associated parameters. In addition, the record keeps one or more quality indicators, which are numerical quantities (evaluation metrics) that evaluate the performance of the specific algorithm when applied to the specific dataset.
* _Evaluator_ : Its primary role is to evaluate the results of an algorithm that has been executed, and provide some numerical evaluations indicating how well the algorithm performed. For example, for clustering algorithms, several implementations of clustering validity measures can be used to evaluate the goodness of derived clusters. For classification algorithms, the accuracy of the algorithm can be computed. For regression algorithms, R-Squared, p-values, adjusted R-Squared and other metrics will be computed to evaluate the quality of the result. Apart from these quality metrics, performance-related metrics are also recorded, with execution time being the most representative such metric.
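One plausible instantiation of the similarity function mentioned above is cosine similarity over the extracted feature vectors; the deliverable leaves the exact function open, so the following is only a sketch:

```python
# sim(M(D1), M(D2)) as cosine similarity of two dataset feature vectors.
import math
from typing import Sequence

def sim(m1: Sequence[float], m2: Sequence[float]) -> float:
    dot = sum(a * b for a, b in zip(m1, m2))
    norm = math.sqrt(sum(a * a for a in m1)) * math.sqrt(sum(b * b for b in m2))
    return dot / norm if norm else 0.0  # 1.0 means identical dataset profiles
```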
Once the Process Mapping sub-component has received the required inputs, the
data is ingested into the Data Descriptive Model, where the characteristics
and morphology aspects of the dataset D are analysed in order to produce the
model M(D). Then, the model together with the user requirements is forwarded
to the Analytics Engine. At this point, a query is made from the Analytics
Engine to the Analytics Repository, a store of previously executed analysis
models and the final algorithms that were executed in each case. We
distinguish two cases:
* No similar models can be found: In this case, the available algorithms from the Predictive and Process Analytics Catalogue that match the user requirements are executed, and the results are returned and evaluated in the Evaluator (where quality metrics are computed for each run depending on its performance). The results are stored in the Analytics Repository.
* A similar model can be found: In this case, the corresponding algorithm (that performed well in the past on a similar dataset) is executed on the dataset at hand, and the results are again analysed in the Evaluator. The results are again stored in the Analytics Repository. In case the result is not satisfactory, the process can be repeated for the second most similar model, etc.
_Example of Operation_
The operation of Process Mapping entails two discrete phases: (a) the learning
phase, and (b) the in-action phase.
In the learning phase, the system executes algorithms on datasets and records
the evaluations of the results in the analytics repository. Essentially, the
system learns from executions of algorithms on different datasets.
The learning phase starts without any evaluated results in the analytics
repository. As shown in Figure 24, when the first dataset D is given as input,
the Descriptive Model Generator produces the model M(D). In parallel, the
available algorithms A1, A2, …, An are executed on D and their results are
given to the Evaluator, which computes the available metrics M1 and M2.
Examples of metrics could be accuracy and execution time. Then, this
information is stored in the analytics repository: the model M(D), the
algorithm Ai, and the values of the metrics M1 and M2. Notice that the actual
dataset is not stored; it is shown in the figure just for illustration
purposes.
Figure 25 shows the processing of a second dataset D’, still in the learning
phase. The same procedure as described above is repeated, and the results are
added to the Analytics Repository.
The in-action phase corresponds to the typical operation of Process Mapping in
the context of BigDataStack, namely to perform the actual mapping from an
abstract task T (which is present as a step of a process designed in the
process modelling framework) to a concrete algorithm A(T) that can be executed
on the dataset D at hand, i.e., A(T,D). The following example aims at
clarifying the detailed operation.
Figure 26 shows a new dataset which is going to be processed based on the
specification received from the process modelling framework. Next, Process
Mapping automatically suggests the best algorithm (A*) from the pool of
available algorithms A1, A2, …, An.
As depicted in the figure above, the Descriptive Model Generator produces the
model for the new dataset, and this model is then compared against all
available models in the analytics repository in order to identify the most
similar dataset. In this example, M(D) is the most similar model. Then, the
best performing algorithm is selected from the results kept for M(D). The
values of the available metrics (M1 and M2) are used to identify the best
algorithm based on an optimization goal, which could rely on one metric or on
a combination of metrics, according to the needs of the application. In the
example, the output of Process Mapping is depicted as algorithm A1.
Technical Aspects of Prototype Implementation
At the time of writing, which corresponds to the first half of the project, we
have a prototype implementation of Process Mapping in place. To remain
focused, the prototype targets a specific class of analysis algorithms, namely
clustering algorithms. In the second half of the project, this functionality
is going to be extended. Below, we provide the technical details and
individual techniques used by Process Mapping.
First, the Descriptive Model Generator follows two alternative approaches for
model generation (i.e., feature extraction) from the underlying dataset, based
on state-of-the-art methods for automatic clustering algorithm selection. The
first approach, called attribute-based, generates the following features from
the dataset: the logarithm of the number of objects, the logarithm of the
number of attributes, the percentage of discrete attributes, the percentage of
outliers, the mean entropy of discrete attributes, the mean concentration
between discrete attributes, the mean absolute correlation between continuous
attributes, the mean skewness of continuous attributes, and the mean kurtosis
of continuous attributes. The second approach, called distance-based, computes
the vector of pairwise distances d of all pairs of objects in the dataset.
Then, it generates nineteen (19) features from d. The first five (5) features
are the mean, variance, standard deviation, skewness and kurtosis of d. The
next ten (10) features are the ten percentiles of the distance values in d.
The last four (4) features are based on the normalized Z-score, namely they
correspond to the percentage of normalized Z-score values in the ranges [0,1),
[1,2), [2,3) and [3,∞). Determining the best approach between attribute-based
and distance-based is a subject of experimental evaluation in the context of
BigDataStack. A recent paper reports that the distance-based approach is
better for clustering tasks. The distance-based extraction is sketched below.
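The sketch below implements the distance-based extraction, under the assumption that the ten percentiles are the deciles and that each Z-score bin holds the fraction of (absolute) normalized values falling in that range:

```python
# The 19 distance-based meta-features: 5 moments, 10 percentiles, 4 Z-score bins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import kurtosis, skew

def distance_based_features(X: np.ndarray) -> np.ndarray:
    d = pdist(X)                                        # all pairwise distances
    moments = [d.mean(), d.var(), d.std(), skew(d), kurtosis(d)]
    percentiles = np.percentile(d, np.arange(10, 101, 10)).tolist()
    z = np.abs((d - d.mean()) / d.std())                # normalized Z-scores
    bins = [((z >= lo) & (z < hi)).mean()               # fraction per range
            for lo, hi in [(0, 1), (1, 2), (2, 3), (3, np.inf)]]
    return np.array(moments + percentiles + bins)       # 5 + 10 + 4 = 19 features
```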
Second, the Analytics Engine is implemented as a wrapper around WEKA, a
library for machine learning tasks. In the current implementation, three
clustering algorithms are used (K-means, FarthestFirst, and EM) for the
proof-of-concept prototype. In the second half of the project, we are going to
replace WEKA with Spark’s MLlib. We are also going to extend the functionality
to machine learning and analysis tasks other than clustering.
Last but not least, the Evaluator uses metrics both for the quality of the
data analysis and for performance. The result quality for clustering is
evaluated using the Silhouette coefficient, a metric for clustering quality
assessment that is based on intra-cluster and inter-cluster distances. In
terms of performance, the Evaluator records the execution time needed by the
algorithm to produce the results. The application that runs in BigDataStack
can select whether algorithm selection will be based on optimizing result
quality, performance, or an arbitrary (application-defined) combination of the
two.
###### 6.8.2. Process Analytics
The Process Analytics sub-component comprises the following four main modules:
* Discovery: The main objective of this component is to create a process model from a given event log.
* Conformance Checking/Enhancement: This component’s role is dual. Firstly, in the Conformance Checking stage, a process model is evaluated against an event log for missing steps, unnecessary steps, and more (process model replay). Secondly, in the Enhancement stage, user input is considered (e.g. the cost-effectiveness or time-effectiveness of a process) to create a corresponding model of the process. In this stage, dependency graphs will also be created, together with metrics such as direct succession and dependency measures, to be utilized by the Prediction component.
* Log Repository: A repository recording any changes to a model during the Conformance Checking/Enhancement stage.
* Prediction: Dependency graphs and weighted graphs of process models, created in the Enhancement phase, will be used together with an active event log to predict the behaviour of an active process.
* Model Repository: A storage unit of all process models, user-defined or created in the Discovery stage.
The input variables of this mechanism are:
* Event logs.
* Process models (not obligatory).
The output of the mechanism is as follows:
* Discovered process models.
* Enhanced process models.
* Diagnostics on process models.
* Predictions - Recommendations on events occurring in process models.
The main structure of the predictive component is depicted in Figure 27:
###### 6.9. Seamless Analytics Framework
A single logical dataset can be stored physically in many different data
stores and locations. For example, an IoT data pipeline may involve an
ingestion phase from devices via a message bus to a database and after several
months the data may be moved to object storage to achieve higher capacity and
lower cost. Moreover, within each lifecycle phase, we may find multiple stores
or locations for reasons such as compliance, disaster recovery, capacity or
bandwidth limitations etc. Our goal is to enable seamless analytics over all
data in a single logical dataset, no matter what the physical storage
organization details are.
In the context of BigDataStack, we could imagine a scenario where data would
stream from IoT devices such as DANAOS ship devices, via a CEP message bus, to
a LeanXcale database and eventually, under certain conditions, be migrated to
the IBM COS Object Store. This flow makes sense since LeanXcale provides
transactional support and low latency but has capacity limits. Therefore, once
the data is no longer fresh it could be moved to object storage to vacate
space for newer incoming data. This approach is desirable when managing Big
Data.
The seamless analytics framework aims to provide tools to analyse a logical
dataset which may be stored in one or more underlying physical data stores,
without requiring deep knowledge of the intricacies of each of the specific
data stores, nor even awareness of where the data is exactly stored. Moreover,
the framework provides the tools to automatically migrate data from the
relational datastore to the object store without the intervention of a
database administrator, with no downtime or expensive ETLs, while ensuring
data consistency during the migration process.
A given dataset may be stored within multiple data stores and the seamless
analytics framework will permit analytics over it in a unified manner. LXS
Query Engine is extended in order to support queries over a logical database
that might be split across different and heterogeneous datastores. This
extended query engine will serve as the federator of the different datastores
and will a) push down incoming queries to each datastore and b) retrieve the
intermediate results and merge them in order to return the unified answer to
the caller. Therefore, the data user will have the impression of executing a
query against a single datastore which hosts the logical dataset, without
having to know how the dataset is fragmented and split within the different
stores. Finally, the federator will provide a standard mechanism for
retrieving data: JDBC, thus allowing for a variety of analytical frameworks
such as Apache Spark to make use of the Seamless Analytical Framework to
perform such tasks.
The data lifecycle is highlighted in the following figure:
Data is continuously produced in various IoT devices and forwarded to the CEP
engine for an initial real-time analysis. This analysis might identify
potential alerts or challenges which are triggered by submitting specific
rules which use data coming from a combination of sources and are relevant
under a specific time window. CEP later ingests data to the LeanXcale
relational datastore, which is the first storage point due to its
transactional semantics that ensure data consistency. After a period, the data
can be considered historical and of no further use to the operational
application. However, it is still invaluable, as it can participate in
analytical queries that reveal trends or customer behaviours. As a result,
the data is transferred to the Object Store, which is the best candidate for
this type of query. Consequently, data is continuously migrating between
stores, and the seamless interface provides the user with a holistic view,
without the need to keep track of what was migrated and when.
###### 6.10. Application Dimensioning Workbench
The goal of the dimensioning phase is to provide insights regarding the
required infrastructure resources primarily for the data services components,
linking the used resources with load and expected QoS levels. To this end, it
needs to link between the application/service-related information (such as
KPIs and workload, parameters of the data service etc.) and the used resources
to be able to provide recommendations towards the deployment mechanisms,
through e.g. prediction and correlation models. Benchmarking against these
services is necessary in order to assemble the underlying dataset that is
needed in a variety of business scenarios, such as sizing the required
infrastructure for private deployments of the data services or consulting
deployment mechanisms in a shared multitenant environment where multiple
instances of a data service offering may reside.
The main issues that need to be handled by the Dimensioning Workbench are:
* The target trade-off that needs to be achieved between a generic functionality and an adapted operation. For example, benchmarking for each individual application request would lead to very high and intolerable delays during the deployment process. Thus, one would need to abstract from the specifics of an application instance through the usage of suitable workload features, benchmark in advance for a variety of these workload features and thus only need to query for the most suitable results during the deployment stage.
* The achieved abstraction and automation for easily launching highly scalable and multi-parameter benchmarks against the data services, with minimal user interaction and need for involvement. This would require the rationale of a benchmarking framework inside ADW that will be able to capture the needed variations between the configuration parameters (workload, resource etc), adapt to the needed client types per data service as well as the target execution environment of the tests (e.g. different execution platforms such as OpenShift, Docker Swarm, external public Cloud offerings such as AWS etc).
* The workflow/graph-based nature of the application, which implies that application (and data service) structure should be known and taken under consideration by the analysis. To this end, needed annotations are required so that the generic structure which is provided as input to the Workbench through the Data Toolkit contains all the necessary information such as expected QoS levels (potentially for different metrics), links between the service components etc. On top of this structure, the workbench can quantify the expected QoS per component and then propagate through the declared dependencies.
* While application structure is provided to the workbench, this will often not imply a particular deployment configuration for the application (e.g. what node types will be suitable for the user’s application). Multiple trade-offs in this domain could also be given to the users, enabling them to make a more informed final decision based on cost or other parameters. For this reason, the dimensioning workbench needs to receive this input of available deployment patterns from the Pattern Generation in order to populate them with the expected QoS, information that is taken under consideration in the process for final ranking and selection.
* Adaptation of benchmarking tests in a dockerized manner in order to be launched through the framework in a coordinated and functional manner, based on each test’s requirements and needed sequences.
Dependencies of the dimensioning component especially in the form of
anticipated exchange of information (in type and form) are presented in the
following bullets. Inputs include:
* Structure of the application along with the used data services is considered an input, as concretized by the Data Toolkit component (in the form of a playbook file, the BigDataStack Playbook) and passed on to the Dimensioning component, following its enrichment with various used resource types from the Pattern Generator, and including expected workload levels inserted by the user in the Data toolkit phase. This is the structure upon which the Dimensioning workbench needs to append information regarding expected QoS per component.
* Types of infrastructure resources available in terms of size, type, etc (referred to as resource templates). This information is necessary at the Pattern Generator side in order to create candidate deployments.
* Different types of Data Services will be provided by BigDataStack to the end users. Each of these services may have different characteristics and functionalities, affected in a different manner and quantity by the application input (such as the data schema used). Consideration of these features should be included in the benchmarking workload modelling of the specific service (e.g. number of columns in the schema tables, types of operations, frequency of them etc.), as well as inputs that may be received by the application developer/data scientist, such as needed quality parameters of the service (such as latency, throughput needed etc.) or other preferences declared through the Data Toolkit.
* Application related current workload and QoS values should be available to enable the final creation of the performance dataset, upon which any queries or modelling will be performed. This implies a collaboration and adaptation with the used benchmark tests and/or infrastructure monitoring components such as the Triple Monitoring Engine, in case the used benchmarks do not report on the needed metrics.
* Language and specification used by the Deployment component, or any other provisioned execution environment, given that ADW needs to submit such descriptors for launching the benchmarking tests.
* Exposure of the necessary information, such as endpoints, configuration, results etc to the Visualization components of the project, in order to be embedded and controlled from that side as well. Thus relevant APIs and JSON schemas need to be agreed and implemented based on this feature.
Necessary outputs:
* The most prominent output of the Dimensioning phase is the concretized (in terms of expected QoS) playbook for a candidate deployment structure for the used data services in the format needed by the ADS-Ranking component that utilizes the dimensioning outcomes. This implies that the format used by Dimensioning to describe these aspects should be understood by the respective components and thus was agreed in collaboration, defined currently as a Kubernetes configuration template type of file structure called a BigDataStack Playbook. More concretely, this is operationalized as a series of candidate deployment patterns (CDPs), which describe the different ways that the user’s application might be deployed along with the expected QoS levels per defined metric. CDPs are provided in the respective file format, such that they can be easily used to perform subsequent application deployment. The Dimensioning phase will augment each CDP with estimated performance metrics and/or quality of service metrics, providing a series of indicators that can be used to judge the potential suitability of each CDP. These estimates are used later to select the CDP that will best satisfy the user’s deployment requirements/preferences.
* Intermediate results include the benchmarking results that are obtained through the benchmarking framework of ADW. These need to be exposed either to internal ADW components for subsequent stages (e.g. modelling or population of the playbook) or external such as Visualization panels towards the users for informative purposes.
The main structure of the Dimensioning is depicted in Figure 29. The component
list is as follows:
* Pattern Generation: The role of pattern generation is to define the different ways that a user’s application might be deployed. In particular, given the broad structure of a user’s application provided by the Data Toolkit, there are typically many ways that this application might be deployed, e.g. using different node types or utilizing different replication levels. We refer to these different ways that a user’s application might be deployed as ‘candidate deployment patterns’ (CDPs). CDPs are generated automatically through analysis of the user’s application structure provided in the form of a ‘BigDataStack Playbook’ file from the Data Toolkit, as well as the available cloud infrastructure. Some CDPs will be more suitable than others once we consider the user’s requirements and preferences, such as desired throughput or maximum cost. Hence, different CDPs will encode various performance/cost trade-offs. These CDPs define the configurations that are used as filters for retrieving the most relevant benchmarking results during the Dimensioning phase, producing predicted performance and quality of service estimations for each. Even though Pattern Generation is part of Dimensioning, it is portrayed as an external component given that for each CDP the core Dimensioning block will be invoked.
* ADW Core: The ADW Core is the overall component that is responsible for the main functionalities of Dimensioning. It is split into two main parts, the ADW Core Benchmarking, which is responsible for implementing and storing benchmarking runs with various setups, and the ADW Core Runtime that is used during the assisted deployment phase of BigDataStack in order to populate the produced CDPs with the predicted QoS levels. Following, a highlight of the various functionalities of each element is described, split into more fine-grained parts.
* Bench UI: The Bench UI is used by the Data Service owner in order to define the parameters of the benchmarking process, which is performed “offline”, thus not in direct relationship to a given application deployment during runtime. It is necessary for this user to investigate the performance considerations of their service and proceed with this stage, during the incorporation of their data service in the BigDataStack ecosystem, in order to have gathered the necessary data a priori and not need to benchmark during the actual application deployment. The latter would create serious timing considerations and limitations that would not be tolerated by the end users. Through the Bench UI, multiple parameters can be defined, leading to a type of parameter sweep execution of a test, in order to automate and enable an easier result gathering process. The UI includes a visual element for selection of the parameters, as well as a relevant REST endpoint in which the user can submit a JSON description of the test (thus enabling further automation through multiple REST submissions). It can also be used to monitor the progress of the test. Result viewing and relevant queries can also be performed via the central visualization component of BigDataStack, while a workload definition tab is expected to be supported also in Y3 of the project.
* Test Control: Test control is used in order to prepare, synchronize and configure test execution. A number of steps are needed for this process based on the user’s selected options, such as running tests in a serial or parallel manner, preparing shared volumes and networks and so on.
* Deployment Description Adapter: In order to enable launching of the defined tests in an execution platform (such as Openshift, Docker Swarm, external Clouds etc), relevant deployment descriptors should be created. For example, for Openshift a relevant playbook file needs to be created and populated with the parameters selected for the benchmark tests, such as input arguments, selected resources etc and then forwarded to ADS Deploy. A playbook template structure is created beforehand for each bench test type based on the execution needs of each test (e.g. number and type of containers, needed shared volumes and networks etc), necessary included data service etc, that is then populated with the specific instantiation’s details. Different execution platforms can be supported through the inclusion of relevant plugins that implement the according formats of that platform or the relevant API calls to setup the environment (a Docker Swarm version is already supported at this time). Through this setup the system under stress (data service) is automatically deployed, as well as the necessary number of bench test clients in order to cover the desired load levels.
* Image repository: While this refers to the main image repository across the project, its inclusion here is used to indicate the necessary inclusion of the bench tests images, appropriately adapted based on the benchmarking framework’s needs, in terms of execution, configuration and result storing.
* Results/Model repository: This component is intended to hold the benchmarking results obtained through the test execution process as well as hold the created regression models used during the Result Retrieval queries in the Runtime phase (Y3).
* Structure Translator: This component acts as an abstraction layer and is responsible for obtaining the output of the Data Toolkit containing the application structure in the format this is expressed (e.g. playbook service structure) and extracting the parameters that are needed in order to instantiate the query towards the result retrieval phase. Furthermore, in cases of multi-level applications, it is responsible for propagating the process across the service graph.
* Result Retrieval: This component is responsible for obtaining the specified deployment options from the CDPs and the anticipated workload, and producing the predicted QoS levels of the service. This may happen either through direct querying of the stored benchmark results (Y2) or through the creation and training of predictive regression models (Y3) that will also be able to interpolate for cases that have not been investigated, based on the training of the regressor and the dependency of the outputs (QoS) on the predictor inputs (workload and h/w-s/w configuration used).
* Output Adaptor: This component acts as an abstraction layer and is responsible for generating the output format needed for the communication with ADS Ranking (in this particular case, enriching the input playbook file with the extra QoS metrics).
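For illustration only, the snippet below sketches what a single CDP might look
like after the Dimensioning phase has appended its QoS estimates. All field
names and values are hypothetical and do not reflect the actual BigDataStack
Playbook schema.

```python
# Hypothetical sketch of a candidate deployment pattern (CDP) after
# Dimensioning; field names are illustrative, not the real schema.
import json

cdp = {
    "pattern_id": "cdp-003",
    "services": [{
        "name": "leanxcale-datastore",
        "node_type": "m4.xlarge",   # resource template chosen by Pattern Generation
        "replicas": 3,
    }],
    "estimated_qos": {              # appended by ADW Result Retrieval
        "throughput_ops_s": 12500,
        "latency_ms_p95": 42,
        "cost_per_hour_eur": 1.75,
    },
}
print(json.dumps(cdp, indent=2))
```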
###### 6.11. Big Data Layout and Data Skipping
Here we focus on how to best run analytics on Big Data in the cloud. Today’s
best practices to deploy and manage cloud compute and storage services
independently leave us with a problem: potentially huge datasets need to be
shipped from the storage service to the micro-service that analyses the data.
If this data needs to be sent across the WAN, then this is even
more critical. Therefore, it becomes of ultimate importance to minimize the
amount of data sent across the network, since this is the key factor affecting
cost and performance in this context.
We refer the reader to the BigData Layout section (8.10) of the D2.1
BigDataStack deliverable, which surveys the three main approaches to minimize
data read from Object Storage and sent across the network. We augmented these
approaches with a technique called Data Skipping, which allows the platform to
avoid reading unnecessary objects from Object Storage as well as avoiding
sending them across the network (also described in D2.1). As explained there,
in order to get good data skipping it is necessary to pay attention to the
Data Layout.
In BigDataStack data skipping provides the following added-value
functionalities:
1. Handle a wider variety of datasets, go beyond geospatial data
2. Allow developers to define their own data skipping metadata types using a flexible API.
3. Natively support arbitrary data types and data skipping for queries with UDFs (User Defined Functions)
4. Handle continuous streaming data that is appended to an existing logical dataset.
5. Continuously assess the properties of the streaming data to possibly adapt the partitioning scheme as needed
6. Handle general query workloads. This is significant because often different queries have different, even conflicting, requirements for data layout.
7. Handle query workloads which change over time.
8. Build a benefit/cost model to evaluate whether parts of the dataset should be partitioned anew (thus rewritten) to adapt to significant workload changes.
Previous research focused on HDFS, whereas we plan to focus on Object
Storage, which is of critical importance in an industrial context. Object
Storage adds constraints of its own: once an object has been put in the Object
Store, it cannot be modified; even appending to an existing object is not
possible, nor can it be renamed. This means that it is important to
get the layout right as soon as possible and avoid unnecessary changes.
Moreover, it is important for objects to have roughly equal sizes (see our
recent blog on best practices [17]), and we are researching the optimal object
size and how it depends on other factors such as data format. Moreover, the
cost model for reorganizing the data layout is likely to be different for
Object Storage than for other storage systems such as HDFS.
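The core idea of data skipping can be illustrated with a small sketch:
per-object min/max metadata over a query-relevant column lets the platform
decide which immutable objects must be read at all. The object names and the
metadata layout below are illustrative.

```python
# Minimal sketch of min/max-based data skipping over immutable objects:
# per-object metadata lets the query layer avoid reading irrelevant objects.
object_index = [
    {"key": "ship-data/part-0001.parquet", "min_ts": 1546300800, "max_ts": 1548979200},
    {"key": "ship-data/part-0002.parquet", "min_ts": 1548979200, "max_ts": 1551398400},
    {"key": "ship-data/part-0003.parquet", "min_ts": 1551398400, "max_ts": 1554076800},
]

def objects_to_read(index, query_min_ts, query_max_ts):
    """Return only the objects whose [min, max] range overlaps the query range."""
    return [o["key"] for o in index
            if o["max_ts"] >= query_min_ts and o["min_ts"] <= query_max_ts]

# Only part-0002 is fetched from Object Storage for this time window.
print(objects_to_read(object_index, 1549000000, 1550000000))
```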
###### 6.12. Process modelling framework
Process modelling provides an interface to business users to model their
business processes and workflows as well as to obtain recommendations for
their optimization following the execution of process mining tasks on the
BigDataStack analytics framework. The outcome of the component is a model in a
structural representation – a JSON formatted file. The latter is actually a
descriptor of the overall graph reflecting the application and data services
mapped to specific executables that will be deployed to the BigDataStack
infrastructure. To this end, the descriptor is passed to the Data Toolkit
component and then to the Application Dimensioning Workbench to identify their
resource requirements prior to execution.
The main issues that need to be handled by the Process modeling framework are:
* Declarative process modelling approach: Processes may be distinguished into Routine (Strict) and Agile. Routine processes are modelled with the imperative method that corresponds to imperative or procedural programming, where every possible path must be foreseen at design time and encoded explicitly. If a path is missing, then it is considered not allowed. Classic approaches like BPEL or BPMN follow the imperative style and are therefore limited to the automation type of processes. The metaphor employed is the flow chart. Agile processes are modelled with the declarative method, according to which declarative models concentrate on describing what must be done; the exact step-by-step execution order is not directly prescribed. Only the undesired paths and constellations are excluded, so that all remaining paths are potentially allowed and do not have to be foreseen individually. The metaphor employed is rules/constraints. Agility at the process level entails “the ability to redesign and reconfigure individual business process components, combining individual tasks and capabilities in response to the environment” [18].
Declarative process modeling or a mixed approach seems to fit well in our
environment providing the necessary flexibility in process modelling, mapping
and optimization.
* Structure to output to the Data Toolkit and subsequently to the application dimensioning framework, workflow/reference to executables/execution logic: The output of the process modelling framework should be a structure to feed the Data Toolkit and later on the dimensioning framework. The structure should provide for reproducing the process graph, the mapping of tasks to executables and the logic in terms of rules/constraints that govern the execution flow and the execution of the process tasks. Process Modelling outputs the structure of the developed process model to the Data Toolkit component.
The main structure of the Process modelling framework is described below. The
component list is as follows:
* Modeling toolkit: This component provides the interface for business analysts to design their processes in a non-expert way, the interface for developers to provide in an easy way predefined tasks and relationship types as selectable and configurable tools for business analysts and the core engine to communicate with all the involved components towards design, concretization, evaluation, simulation, output and optimization of a business process.
* Rules engine: The engine provides all the logic for defining rules and constraints, evaluating and executing them. The aim is the business analyst to be provided with a predefined set of rules offered as a choice through the tasks and relations toolbox.
* ProcessModel2Structure Translator: This component generates the structure from the developed model that will feed the Data Toolkit and subsequently the dimensioning framework. This structure must be able to instantiate and run as an application. It will include the workflow, the logic in terms of relationships and rules regarding the execution of process tasks, reference and configuration of the involved analytics tasks (contained in the catalogue) and reference to other application tasks and services (which are not contained in any catalogue) (i.e. a task that generates a report from collected values, a task that finds the maximum value of a set of values, or a task that when triggered communicates using an API and turns off a machine, if we consider a process that controls the operation of machines).
Process Modelling Framework Capabilities
The Process Modeler component is the first link in the chain. The Business
Analysts have the ability to design their processes in a straightforward
graphical way by using a visual editor. The user can create a graph containing
nodes from a list provided and assign options to each node. In detail these
nodes and their respective options are:
* Data Load
  o Distributed Store
  o Object Store
* Clean Data
  o Yes
  o No
* Transform Data
  o Normalizer
  o Standard Scaler
  o Imputer
* Classification
  o Binomial Logistic Regression
  o Multinomial Logistic Regression
  o Random Forest Regression
* Regression
  o Linear Regression
  o Generalized Linear Regression
  o Random Forest Regression
* Clustering
  o K Means
  o LDA
  o GMM
* Frequent Pattern Mining
  o FP Growth
* Model Evaluation
  o Binary Classification
  o Multiclass Classification
  o Regression Model Evaluation
  o Multilabel Classification
  o Ranking Systems
* Data Filter
  o Yes
  o No
* Feedback Collector (External Service)
* Recommendations Calculation (External Service)
* Collaborative Filtering
  o ALS
Additionally, the business analyst can define the overall objective of the
graph, which can be one of the following:

* Analytics Algorithm Accuracy
* Analytics Algorithm Time Performance
* Save Computing Resources
* Overall Time Efficiency
* Overall Cost Efficiency
* Decrease Average Throughput
* Decrease Average Latency
Finally, the Process Modeller Component provides the capability to import,
export, save and edit the generated graphs.
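For illustration, the snippet below sketches a plausible JSON structure for an
exported process model graph. The node types and options follow the lists
above, but the exact schema emitted by the Process Modeller is an assumption.

```python
# Illustrative sketch of an exported process model graph; the schema is a
# hypothetical stand-in for the Process Modeller's actual JSON output.
import json

process_model = {
    "objective": "Analytics Algorithm Accuracy",
    "nodes": [
        {"id": "n1", "type": "Data Load", "option": "Object Store"},
        {"id": "n2", "type": "Clean Data", "option": "Yes"},
        {"id": "n3", "type": "Clustering", "option": "K Means"},
        {"id": "n4", "type": "Model Evaluation", "option": "Multiclass Classification"},
    ],
    "edges": [["n1", "n2"], ["n2", "n3"], ["n3", "n4"]],
}
print(json.dumps(process_model, indent=2))
```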
###### 6.13. Data Toolkit
The main objective of the data toolkit is to design and support data analysis
workflows. An analysis workflow consists of a set of data mining and analysis
processes, interconnected among each other in terms of input/output data
streams or batch objects. The objective is to support data analysts and/or
data scientists to concretize the business process workflows created through
the process modelling framework. This can be done by considering the outputs
of the process mapping component or by choosing among a set of available or
under-development analytic functions, while parametrizing them with respect to the
service-level objectives defined in the corresponding process. A strict
requirement regards the capacity to support various technologies/programming
languages for the development of analytic processes, given the existence and
dominance of a number of them (e.g. R, Python, Java, etc.).
Towards this direction, the data toolkit is going to be modelled in a way that
will enable data scientists to declare and parametrize the data
mining/analytics algorithms, as well as the required runtime adaptations
(CPUs, RAM, etc.) and data curation operations associated with the high-level
workflow steps of the business process model.
At its core, the data toolkit will incorporate an environment which supports
the design of graph-based workflows, and the ability to annotate/enrich each
workflow step with algorithm- or process-specific parameters and metadata,
while respecting a predefined set of rules to which workflows must conform
in order to guarantee their validity.
There is a wide range of versatile flow-based programming tools that fit well
the requirements for constituting the basis of the data toolkit, such as
Node-RED [19]. Alternatively, a custom workflow-design environment tailored for
the specific needs of the data toolkit could be developed, supported by
libraries such as D3.js [20] and NoFlo [21], which would allow for fine-grained
control over all the elements associated with the data analytics workflow.
Figure 31 depicts the core configuration user interface per functional
component and/or service in the BigDataStack context. Therefore, the Data
Scientist can parameterise her components providing details on the elasticity
profile, the Docker images, the minimum execution requirements, the required
environmental variables, the exposed interfaces and required interfaces (if
any), existing attributes (i.e. lambda functions, etc.) and the corresponding
health checks regarding the services.
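As a hypothetical illustration of such a per-component configuration, the
snippet below collects the fields mentioned above into a single descriptor.
All names and values are placeholders rather than the actual Data Toolkit
schema.

```python
# Hypothetical parameterisation of one workflow step in the Data Toolkit,
# mirroring the configuration fields described above.
step_config = {
    "name": "pattern-recognition",
    "docker_image": "registry.example.org/analytics/pattern-recognition:1.2",
    "min_requirements": {"cpus": 2, "ram_gb": 8},        # minimum execution needs
    "elasticity_profile": {"min_replicas": 1, "max_replicas": 5},
    "environment": {"SPARK_MASTER": "spark://spark:7077"},
    "exposed_interfaces": [{"port": 8080, "protocol": "http"}],
    "health_check": {"path": "/healthz", "interval_s": 30},
}
```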
###### 6.14. Adaptable Visualizations
The adaptable visualization layer has multiple purposes: (i) to support the
visualization of data analytics for the applications deployed in BigDataStack,
(ii) to provide a visual application performance monitoring dashboard of the
data operations and the applications during benchmarking, dimensioning and
operation, and (iii) to integrate and facilitate various components such as
the Process Modeller, Data Toolkit, Benchmarking, Dimensioning Workbench,
Triple Monitoring Engine, Data Quality Assessment and Predictive Maintenance.
Importantly, the dashboard will be able to monitor the application deployed
over the infrastructure. For the visualization of data analytics, it will
provide a reporting tool that will enable building visual analytical reports.
The reports will be produced from analytical queries and will include summary
tables as well as graphical charts.
The main issues that need to be handled by the adaptable visualizations
framework are:
* User authentication
* KPIs definition and integration: Definition of a KPI must be possible through the framework if not supported elsewhere in the architecture
* Triggering of events and production of visual notifications. Event handling and triggering of alarms or responses to the event must be supported.
* Different views of the UI platform depending on the user role. 4 roles are defined:
  o Administrator (full UI View)
  o Business Analyst (Process Modeller View)
  o Data Analyst (Data Toolkit View)
  o Application Owner/Engineer (BenchMarking, Dimensioning Workbench, Analytics View)
* Integration of Process Modeller, Data Toolkit and Benchmarking Components.
* Deployment of playbooks towards the Dimensioning Workbench Component, visualization of the configurations recommended and deployment of the selected application.
* Management of the Deployed Applications and handling of the Deployment Adaptation Decisions. Decisions are consumed from the Global Decision Tracker.
* Ability to redeploy applications when QoS Warnings are received and Deployment Alterations are considered.
* Visualisation of the Predictive maintenance for both cases of full datasets and exclusively quality assessed data.
* Visualisation of the Data Quality Assessments in summary customizable tables.
The foreseen I/O and the structure of the visualization framework in terms of
definition of the subcomponents and their interactions are listed in the
following bullets.
Necessary inputs:
* Analytic outcomes as input from the seamless data analytics framework
* Real-time monitoring data as input from the triple monitoring engine. Data will refer to Application components monitoring, Data & Services monitoring and Cluster resources monitoring
* CEP outcomes as input from the real-time CEP of the Storage engine
* Input from exposed data sources to facilitate KPIs definitions and event triggering rules.
Necessary Outputs:
* Output of visual reports
The main structure of the Adaptable visualizations framework is depicted in
Figure 32. The component list is as follows:
* Visualization toolkit: this component connects all the components (Process Modeller, Data Toolkit, BenchMarking, Dimensioning Workbench) and makes available a tool set of offered capabilities (e.g. types of graphs, reports, tables)
* Rights management module (Admin Panel): this component handles the permissions to modify views to components, editors and event triggers
* Data connector: this component makes it possible to retrieve data schemas and data from the exposed data sources to assist in defining KPIs and setting event triggers. Furthermore, it could provide access to historical data or reports in the same way
* Events processing: this component makes it possible to define event triggers that will produce visual notifications, warnings or the generation of specific reports
#### 7\. Key interactions
##### 7.1. User Interaction Layer
User Interaction within the BigDataStack ecosystem plays an important role in
the entire lifecycle of a big data application / operation. There exist the
following user roles: Business Analysts, Data Analysts and/or Data Scientists.
First, the Business Analyst uses the Process Modelling Framework to define the
business processes and associated objectives and accordingly design a BPMN-
like workflow for the actualization of the business-oriented objectives and
the required analytic tasks to accomplish. The analyst is able to design,
model and characterize each step in the workflow according to a list of
predefined rules encapsulated by a rules engine component of the modelling
framework. The output of this process is a graph-like structure (i.e. in JSON
format) with a high-level description of the workflow from the business
analyst’s perspective along with the related end-to-end business objectives.
The sequence diagram of Process Modelling is depicted in Figure 33.
Figure 34 depicts a high-level application graph designed by the Business
Analyst by indicatively incorporating within the data workflow four (4)
processing steps with editable fields by means of drop-down lists, namely data
load, data clean, perform analytic task and evaluate result.
Next, the Process Mapping component provides an association of the process
steps modeled by the Business Analyst with specific analytic tasks, following
a set of criteria related to each process task, while considering any
constraints defined in the business objectives. These criteria may contain the
characterization of required data, time, resources and/or performance
parameters that need to be concretized to perform the analytic tasks. The output of
this step is a workflow graph (i.e. in JSON format) enriched with the mappings
of the business process steps grounded to algorithms, runtime and performance
parameters.
Then, the Data Analyst and/or the Data Scientist uses the Data Toolkit, to
perform a series of tasks related to the concretization of the analytics
process workflow graph produced in the process mapping step, as depicted in
Figure 35, such as:
* Concretizing the business objectives in terms of selecting lower bounds for hardware, runtime adaptations, performance for which the selected algorithms perform sufficiently well.
* Defining the data source bindings from where the datasets related to the task will be ingested.
* Defining any data curation tasks (i.e. data cleaning, feature extraction, data enrichment, data sampling, data aggregation, Extract-Transform-Load (ETL) operations) necessary for the algorithms and the related steps.
* Configuring and parametrizing the data analytics tasks returned (i.e. selected) by the Processes Mapping component, and additionally providing the functionality to design and tune new algorithms and analysis tasks, which are then stored to the Catalogue of Predictive and Process Analytics and can be re-used in the future.
* Selecting and defining performance metrics for the algorithms, along with the acceptable ranges with respect to the business objectives and service-level objectives, used to evaluate the algorithm/model and resources configurations.
At the end, a Playbook (i.e. in YAML format) representing the grounded
workflow for each business process will be generated, in the format that
further feeds the Dimensioning workbench in order to provide the corresponding
resource estimates for each node of the graph.
The following figure (Figure 36) presents the sequence diagram, which depicts
the main information flows for the User Interaction Layer of the BigDataStack
architecture.
Example Use Case: Predictive Maintenance
Regarding the entry phase described above, an example is presented in the
following sections to link the functionalities of different components to an
actual use case.
Business Analyst’s View
The following figure (Figure 37) shows the perspective of a business analyst
in terms of Process Modelling, which treats Real-time ship monitoring (RTSM)
as a whole. This is expected to be the view (not in terms of user interface
but in terms of processes and abstraction of information) of the Process
Modelling Framework. Moreover, through the framework, the business analyst
will be able to specify constraints (as noted with red fonts in the figure).
Overall, the figure presents the separate processes, actions and data required to perform RTSM. As
shown, the first step is the vessel and weather data acquisition. That
includes a dataset with granularity down to a minute and a 2-year timespan for
vessel data, along with weather data as provided by the National Oceanic and
Atmospheric Administration (NOAA), i.e., granularity of weather reports up to
3 hours for every 30 minutes of a degree. Past this, given that there are
plenty of attributes within both datasets, there has to be some attribute
selection rule. For example, only approximately 190 attributes are required
from both datasets, because these are the most reliable and important. Following this,
the data are imported into two different components. The first is the
monitoring tool, which simulates and enhances the on-board tools of the Alarm
Monitoring System (AMS). If an anomaly occurs, a rule-based alert
has to be produced close to or in real time. The second component is the
Predictive Maintenance Alert. This informs the end user that the current data
under examination pinpoint a malfunction that has occurred in the past. Again,
this should work close to or, even better, in real time. Subsequently, once
an upcoming malfunction has been identified, spare part ordering
follows. The ordered spare part has to be delivered at least 1 day before the
estimated time of arrival, while ordering of spare parts should be performed
only by suppliers that are to be trusted. Quality of service should not be
neglected while cost criteria are also taken into account. Finally, given the
delivery port of the spare part, re-routing of the vessel takes place, where
the estimated time of arrival to the closest port is less than 12 hours.
Data Analyst’s View
Following the outcome of the process modelling (previous view), Figure 38
depicts the view for the data analyst, that is the view in the Data Toolkit.
As shown in the figure, the view is different with components that have been
mapped automatically from the Process Mapping mechanism of BigDataStack (e.g.
“CEP monitoring” to enable the “Rule-based alert” process).
Overall the data analyst’s view is a set of system components, in-house or
out-sourced processes and/or systems, actions and data required to perform
RTSM. The Vessel data acquisition process is fed from an in-house database
(DB) that contains vessel data (power consumption related and main engine
data) along with Telegrams and past maintenance events. Given a total of 10
vessels, this requires up to 40 GB of hard disk storage. Weather data are
imported from NOAA via FTP, by a weather service that loads hindcasts in GRIB
format for the whole earth with a 3-hour granularity for every 30 minutes of a
degree. GRIB files are parsed and stored in a database that requires up to 2.1
TB of storage. Any trajectory of a vessel can thus be joined with weather
data via a REST API that the weather service provides. Past this, given that
there are plenty of attributes within both datasets, i.e., weather and vessel
data, there has to be some attribute selection rule. For example, only
approximately 190 attributes are required from both datasets, because these
are the most reliable and important, such as the consumed power (kW), the
rotations per minute of the main shaft (RPM), etc. In order to avoid feeding the algorithmic
components of this architecture with false or null data values, a filtering
component is in charge of removing null values, preferably replacing them with
average values, smoothing out the effect of data loss. Next, given a set of defined
rules, such as “if the power consumption exceeds a limit and the fuel-oil
inlet pressure drops below a threshold” the CEP component is in charge of
producing an alert, close to or in real time. In parallel, a pattern recognition
algorithm tries to identify patterns in the data that look like a past case
where a malfunction occurred in the main engine. If this happens, an alert is
produced, and given the upcoming malfunction that has been identified, a spare-
part suggestion is made. Through the Danaos-ONE platform, where orders of spare
parts are placed via a REST API, the order of the suggested spare part is
placed and is accessible to the preferred suppliers. Once the
order is made to a supplier, a suggested place and time are provided and,
based on this, re-routing of the vessel takes place via an external REST
service provided at a specific IP address and port.
##### 7.2. Realization & Deployment
Application and Data Service Ranking
Within the Realization module, there is a series of operationalizable tasks
associated with Application and Data Service Ranking (ADS-Ranking). The goal of
these tasks is to enable the selection of a candidate deployment pattern (CDP)
which represents a complete configuration of the application (which is needed
for application deployment on the cloud). There are two main tasks of interest
when realizing an application’s deployment:
* First-Time Ranking of Candidate Deployment Patterns: This task aims to select the most suitable candidate deployment pattern from a set that has previously been generated when the user first requests deployment of their application.
* Application Deployment: This task involves the practical deployment of the user application on the cloud through interaction with Openshift.
Below we discuss each of these two tasks in more detail and provide an
interaction sequence diagram for each. For legibility of the interaction
diagrams, we use short names for each component. A mapping between components
and their short names is shown in the following table.
| Full name | Sub-component | Short name (interaction diagrams) |
| --- | --- | --- |
| Application and Data Services Dimensioning | N/A | Dimensioning |
| Application and Data Services Ranking | Pod Feature Builder | ADS-R Feature Builder |
| Application and Data Services Ranking | Pod Scoring | ADS-R Scoring |
| Application and Data Services Ranking | Model | ADS-R Model |
| Application and Data Services Ranking | Pattern Selector | ADS-R Pattern Selector |
| Application and Data Services Deploy | N/A | ADS-Deploy |
| Dynamic Orchestrator | N/A | Orchestrator |
| Application and Data Services Global Decision Tracker | N/A | ADS-GDT |
| BigDataStack Adaptive Visualisation Environment | N/A | BigDataStack UI |
Table 4 - Short-name Component Mapping Table
First-Time Ranking of Candidate Deployment Patterns
The first task is concerned with the ranking of candidate deployment patterns
when the user first requests their application to be deployed. Candidate
deployment patterns are generated by the Dimensioning component of
BigDataStack. The output of this task is a selected deployment pattern, which
can be passed to Application and Data Services Deployment for physical
deployment.
This task is triggered by the Dimensioning component once it has finished
generating the different candidate deployment patterns (CDPs) and producing
the quality of service estimations for each. The Dimensioning component sends
a package of CDPs to the Application and Data Services Ranking (ADS-Ranking)
component, or more specifically the Feature Builder sub-component of it. This
component analyses and aggregates the different quality of service estimations
into a form that can be used for ranking (referred to as features). Once this
transformation is complete, the CDPs and aggregated features are sent to the
Scoring sub-component, which uses a ranking model to score and hence rank each
CDP based on its suitability with respect to the user’s requirements. Once the
CDPs have been ranked, that ranking is sent to the Pattern Selection sub-
component, which selects the most suitable one. This selected CDP is then sent
to the BigDataStack Adaptive Visualisation Environment component for the user
to decide whether to deploy with this configuration. At the same time, a
notification is sent to the Dynamic Orchestrator to specify that deployment is
underway for the user’s application. Moreover, the selected CDP, other CDPs
not selected and ranking information/features are sent to the Global Decision
Tracker (ADS-GDT) for persistence.
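The following sketch illustrates the ranking step in miniature: each CDP’s
aggregated features are combined into a single score reflecting the user’s
requirements. The linear weighting shown is a simplifying assumption, not the
actual ADS-Ranking model.

```python
# Illustrative sketch of CDP ranking; the weighted linear score is a
# hypothetical stand-in for the actual ranking model.
def score_cdp(features, weights):
    """Weighted linear score over normalized QoS features (higher is better)."""
    return sum(weights[k] * v for k, v in features.items() if k in weights)

cdps = {
    "cdp-001": {"throughput_norm": 0.9, "latency_norm": 0.4, "cost_norm": 0.3},
    "cdp-002": {"throughput_norm": 0.6, "latency_norm": 0.8, "cost_norm": 0.7},
}
user_weights = {"throughput_norm": 0.5, "latency_norm": 0.3, "cost_norm": 0.2}

ranking = sorted(cdps, key=lambda c: score_cdp(cdps[c], user_weights), reverse=True)
selected = ranking[0]  # passed on to ADS-Deploy and the Global Decision Tracker
```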
Application Deployment
The ADS-Deploy component interacts with Openshift through Kubernetes‘ OpenAPI
v1 [1]. Once the candidate deployment pattern has been obtained, it is sent to
the deployment component. This is parsed by the ADS-Deploy component, which
extracts information on the three main objects of importance to the deployment
process (Pods, Services and Routes). ADS-Deploy maps these into a series of
independent Openshift-managed objects representing each, enabling incremental
deployment and more fine-grained control. However, all those objects are
grouped into a single logical application, in order to maintain the internal
coherence and keep relations between the objects. These objects are:
* Pods: A Pod represents an atomic object in Openshift, and includes one or more containers. Each pod can be replicated according to the configuration values or due to Quality-of-Service requirements. Pods have been represented as DeploymentConfig objects in BigDataStack. [11]
* Services: A Service provides access to a pod from the outside and is in charge of vital actions such as load balancing. Services can also be replicated, so that they are scaled in/out independently or together with the pods. ADS-Deploy creates a configuration file for each service and sends it to Openshift.
* Routes: A route gives a service a hostname that is reachable from outside the cluster. Routes are not replicable, but they are closely related to the services. In BigDataStack, a configuration file is created for each route, containing information on the service and application to which the route relates.
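As a hedged sketch of this mapping, the function below derives minimal
manifests for the three object kinds from one pod entry of a selected CDP. The
field values and labels are illustrative, not the actual ADS-Deploy output.

```python
# Hedged sketch: deriving the three Openshift-managed object manifests from
# one pod entry of a selected CDP; values and labels are illustrative.
def build_objects(app_name: str, pod: dict):
    deployment_config = {  # Pods are represented as DeploymentConfig objects
        "apiVersion": "apps.openshift.io/v1",
        "kind": "DeploymentConfig",
        "metadata": {"name": pod["name"], "labels": {"app": app_name}},
        "spec": {
            "replicas": pod.get("replicas", 1),
            "template": {"spec": {"containers": [
                {"name": pod["name"], "image": pod["image"]}]}},
        },
    }
    service = {  # exposes the pod and performs load balancing
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": pod["name"], "labels": {"app": app_name}},
        "spec": {"selector": {"app": app_name},
                 "ports": [{"port": pod.get("port", 8080)}]},
    }
    route = {  # gives the service a hostname reachable from outside the cluster
        "apiVersion": "route.openshift.io/v1",
        "kind": "Route",
        "metadata": {"name": pod["name"], "labels": {"app": app_name}},
        "spec": {"to": {"kind": "Service", "name": pod["name"]}},
    }
    return deployment_config, service, route
```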
##### 7.3. Data as a Service & Storage
The Data as a Service and the Storage offerings of BigDataStack cover
different cases. As base data stores, the LeanXcale data store and the Cloud
Object Storage (COS) are considered as depicted in the following figure
(Figure 31).
From the above, it can be seen that the two components able to persistently
store data are LeanXcale’s relational data store and IBM’s Cloud Object Store.
The former is a fully transactional database which will serve operational
workloads while at the same time being able to execute analytical operations
at runtime; it provides a JDBC implementation and can thus execute
SQL-compliant queries. The latter is a cloud Object Store capable of storing
numerous terabytes of data but lacking both transactional and SQL
capabilities. Fresh data will first be inserted in the LeanXcale database
(LXS) in order to benefit from its transactional capabilities. Once data is no
longer considered as fresh, (e.g. several months have passed), data will be
moved to the Cloud Object Store (COS) while analytical processing over COS is
provided by Apache Spark.
On top of the datastores the Seamless Storage Interface (SSI) provides an
entry point for seamlessly executing queries over a logical dataset that can
be distributed over different datastores which themselves may provide
different interfaces. The SSI provides a common JDBC interface and is capable
of executing standard SQL statements. The SQL queries will be pushed down to
both stores, and retrieved intermediate results will be merged and returned.
Offering a JDBC interface, SSI can be exploited by data scientists through the
usage of well-known analytical tools such as SparkSQL. As a result, the end-
user can write SparkSQL queries and have the SSI locate the various parts of
the dataset and retrieve the results. Direct execution of the queries to a
specific data store is also permitted. As a result, we have the following five
scenarios:
* Direct access to the LeanXcale database
* Direct access to the Cloud Object Store (COS)
* Request data using a simple SparkSQL query
* Insert data to BigDataStack
* Insert streaming data to BigDataStack
Direct access to the LeanXcale (LXS) database
User executes an SQL query, requesting data directly from LXS using a standard
JDBC interface, and the latter returns the resultSet as the response.
Direct access to the Cloud Object Store (COS)
User executes a query from Apache Spark, requesting data directly from COS
using the Stocator open-source connector, which permits the connection of
Object Stores to Spark, and the COS returns the result as the response.
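A minimal PySpark sketch of this scenario is shown below. The Stocator
configuration keys follow its public documentation, while the bucket, service
name, endpoint and credentials are placeholders.

```python
# Sketch of direct COS access from Spark via the Stocator connector;
# bucket, service name and credentials are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cos-direct-access").getOrCreate()
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.stocator.scheme.list", "cos")
hconf.set("fs.cos.impl", "com.ibm.stocator.fs.ObjectStoreFileSystem")
hconf.set("fs.cos.myservice.endpoint", "https://s3.example-region.objectstorage.example.com")
hconf.set("fs.cos.myservice.access.key", "<ACCESS_KEY>")
hconf.set("fs.cos.myservice.secret.key", "<SECRET_KEY>")

# Read historical ship data straight from the object store.
df = spark.read.parquet("cos://ship-data.myservice/2018/")
df.show()
```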
Request data using a simple SparkSQL query
User sends a request for executing an analytical task by writing a SparkSQL
query. The SSI, which is an extension of the LXS Query Engine, provides JDBC
functionality and, as a result, is already integrated with SparkSQL. Due to
this, SparkSQL will pushdown all operations to be executed by the SSI itself.
The SSI is aware of the location of the data over the distributed dataset that
is split into the two different datastores and is integrated with both of
them. As a result, it translates the query to each data store’s internal
language and requests the data from both of them. It finally aggregates the
results and returns the data back to SparkSQL, which returns the results to
the user. It is important to note that the SSI supports various query
operations, such as table scans, selections, projections, ordered results,
and data aggregations (min, max, count, sum, avg), with or without grouping
by specific fields. From the above figure it can also be noticed that steps
4A and 4B might run in parallel, according to the type of the query operators.
The architecture of the seamless analytical framework and the main
interactions between its components are shown in Figure 45:
The Data Manager component, as shown in Figure 45, keeps track of the data
ingested in the framework. For each dataset the data user can configure the
period of time after which data can be considered as historical and can safely
be moved to a data warehouse such as the Object Store. When a data movement
action is triggered, it first informs the relational database that a data
slice should be moved to the COS. LXS is getting prepared to drop that slice
(internally it marks it as read-only and splits it to a data region that can
be easily dropped later on). The Data Manager then informs the Data Mover to
move the slice. The latter requests the data slice by executing one or many
standard JDBC statements to LXS and then uploads the data slice as one or many
objects into the objects store. When the whole slice is eventually persisted
into the Object Store, it informs the Data Manager which forwards this
acknowledgment to the Data Federator. The Data Federator internally keeps
track of a timestamp which records the latest successful data movement. When a
query is submitted for data retrieval, it creates the query tree and pushes
down a selection based on this timestamp on each operation for a table scan.
Then it rebuilds the query by interpreting it according to the target
datastore and retrieves the results. Finally, in accordance with the query
operation, it merges the results and builds the result set. When the Data
Manager acknowledges a data movement and informs the Data Federator, the Data
Federator will move accordingly the internal timestamp (the splitting point).
At this point, the data corresponding to the moved data slice co-exists in
both stores. However, thanks to the timestamp, the Data Federator hides the
replicated data: first at the Object Store and, after the timestamp has been
updated, at the relational store. When it receives the acknowledgement, it updates this
timestamp (split point) so that the next transactions can scan the tables
accordingly. Pending transactions however will continue to scan the tables
based on the value that they received when the transaction first started. The
transactional semantics of LXS ensure the data consistency when the split
point is updated. When this happens, the Data Federator can order the LXS to
safely drop the data slice that has now been moved to the object store.
However, it will wait until all pending transactions have finished, and
thus, no scan operation is performed on the data slice that is about to be
dropped. By doing so, the Data Federator ensures data consistency and the
validation of the results during the process of data movement: Data will exist
either on LXS or the COS, or both, but it will always be scanned only once.
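A simplified sketch of the split-point logic is given below: a table scan is
rewritten into two sub-queries fenced by the timestamp of the latest successful
data movement, so that each row is scanned exactly once. Table and column names
are illustrative.

```python
# Simplified sketch of the Data Federator's split-point rewriting: one table
# scan becomes two per-store sub-queries fenced by the split timestamp.
def rewrite_scan(table: str, split_ts: int) -> dict:
    """Return per-store sub-queries; each row is scanned exactly once."""
    cos_query = f"SELECT * FROM {table} WHERE ts < {split_ts}"   # historical slice
    lxs_query = f"SELECT * FROM {table} WHERE ts >= {split_ts}"  # fresh data
    return {"object_store": cos_query, "leanxcale": lxs_query}

queries = rewrite_scan("vessel_readings", split_ts=1551398400)
# Intermediate results from both stores are then merged per the query operator.
```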
Insert data to BigDataStack
An integrated application produces data to be stored in the BigDataStack
platform. The data are being sent to the Gateway: the entry point of the
platform. Its responsibility is to transform data coming from external sources
in various formats, to the platform’s internal schema. Then, it forwards the
data to the operational data store, which stores it permanently. The latter
periodically moves data that was inserted more than a fixed period of time
ago to the COS.
Insert streaming data to BigDataStack
In this specific use case, a ship from the DANAOS fleet streams data coming
from one of its sensors. Data is being first sent to a local installation of
the CEP which correlates them and identifies possible threats, producing
alerts. Then, data is sent to the platform’s Gateway, which is responsible for
transforming the data to the platform’s internal format. A CEP cluster inside
the platform receives data from the Gateway. It further analyses data to
detect possible rules infringement. Data coming from all the fleet vessels is
merged. This second CEP cluster processing involves querying LXS to retrieve
data at rest that has already been stored in the data store. Finally, it
stores the incoming data to the relational datastore which eventually will
move the data to the Object Store.
##### 7.4. Monitoring & Runtime Adaptations
When considering the process of monitoring and adapting user applications on
the cloud, it is useful to divide the discussion into three parts: 1) the
interactions required to perform the actual monitoring of a running
application; 2) how this monitoring process can be used to track quality of
service; and 3) the interactions needed to adapt the user’s application to
some new configuration when a quality of service deficiency is identified or
predicted. We summarize each below.
###### 7.4.1. Triple Monitoring Engine
The triple monitoring system provides APIs for receiving metrics from
different sources and exposes them for consumption. Metrics are obtained
mainly through exporters and federation. In cases where deploying an exporter
is impossible for some reason, the monitoring engine implements a system that
can receive metrics via HTTP GET and POST methods and expose them to
Prometheus. This component of the triple monitoring engine is expected to
behave as both a REST API and a Prometheus exporter. The following diagram
describes its functionality.
An application provider sends its metrics in JSON format via HTTP GET or POST;
the API parses the JSON structure, sanitizes the metrics to convert them to
Prometheus’s format, and saves them in a temporary list. A response is then
returned to the application provider. The Prometheus engine scrapes the REST
API via HTTP GET to obtain the available metrics. This scraping operation is
performed iteratively at intervals based on the amount of time specified in
the Prometheus configuration.
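A minimal sketch of such a component is given below, assuming Flask for the REST API; the /push and /metrics endpoint names and the JSON payload shape are assumptions for illustration, not the engine’s actual interface.

```python
from flask import Flask, request, Response
import re

app = Flask(__name__)
_metrics = {}  # temporary in-memory list of sanitized metrics

def sanitize(name: str) -> str:
    """Map an arbitrary metric name onto Prometheus's [a-zA-Z_:][a-zA-Z0-9_:]* charset."""
    return re.sub(r"[^a-zA-Z0-9_:]", "_", name)

@app.route("/push", methods=["GET", "POST"])
def push():
    # Application providers send e.g. {"cpu usage": 0.42} in the request body.
    payload = request.get_json(force=True, silent=True) or {}
    for name, value in payload.items():
        _metrics[sanitize(name)] = float(value)
    return {"status": "ok"}

@app.route("/metrics")  # scraped by Prometheus at the configured interval
def metrics():
    body = "".join(f"{k} {v}\n" for k, v in _metrics.items())
    return Response(body, mimetype="text/plain")
```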
The triple monitoring engine implements two different exposition methods. The
first is a REST API: application consumers ask for a metric, and the REST API
translates this request into an Elasticsearch query and returns the result.
The following sequence describes this process.
The second output interface implemented in the triple monitoring system is the
publish/subscribe mechanism. An application that needs streaming data can
subscribe through this component and receive metrics in real time. Four
different types of requests are available.
* The first request type is the “subscription”: after creating its queue, the consumer sends the pub/sub system a subscription request containing the name of its queue, its own (application) name and a list of metrics (a minimal sketch of such a request follows this list). The consumer sends its request on the “manager” queue so that it is consumed by the manager of the triple monitoring system. The manager receives the subscription request, creates a subscription object and adds it to the subscription list; a confirmation message is then returned to the consumer. Each time the manager receives a metric from its queue, it reads the subscription list and redirects the metric to the declared queues.
* The second request type is “add_metrics”: the consumer sends a message containing its name, its queue name and a metric to add to its subscription list; the manager verifies the request, updates the subscription and returns a message.
* The third request type is “my_subscription”: the consumer sends its name and queue name, and the manager returns the corresponding subscription list.
* The last request type is the “heart_beat”: since the manager has no way to detect a consumer’s disconnection, the consumer must confirm its presence at a specific interval of time. The heart_beat interval is declared in the subscription request.
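The subscription request referenced in the first bullet could look like the following minimal sketch, assuming RabbitMQ with the pika client; the field names in the message body are assumptions derived from the description above.

```python
import json
import pika

# Connect to the broker and create the consumer's own queue first.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="consumer-A")

# Subscription request sent on the "manager" queue (field names are assumptions).
request = {
    "type": "subscription",
    "application": "consumer-A",
    "queue": "consumer-A",
    "metrics": ["cpu_usage", "response_time"],
    "heart_beat": 30,  # seconds between presence confirmations
}
ch.basic_publish(exchange="", routing_key="manager", body=json.dumps(request))
conn.close()
```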
###### 7.4.2. Quality of Service (QoS) Evaluation
QoS properties (parameters) to be evaluated by the QoS Evaluation component
should correspond to the kind of quality-of-service (QoS) requirements coming
from the Application Dimensioning Workbench and defined within the
BigDataStack Playbook.
* An example of a QoS requirement is “throughput”.
* There should be a trivial mapping between the Playbooks’ KPIs and the “guarantees” of the “agreements”.
The QoS Evaluation component will be responsible for translating the
Playbooks’ QoS requirements into SLOs (Service Level Objectives).
The QoS Evaluation component will periodically query the Triple Monitoring
Engine (based on Kubernetes) to recover the metrics related to the monitored
QoS parameters.
Once a violation of a given SLO is detected, a notification is sent to the
Dynamic Orchestrator to trigger the data-driven orchestration of application
components and data services. The standard sequence of interactions will be
the following:
* The Evaluator calls the Adapter to recover a certain set of QoS metrics from Prometheus.
* The Evaluator calls the Notifier when an SLO violation is detected.
* The Notifier calls the Dynamic Orchestrator, passing a message describing the violation through the publish/subscribe mechanism, implemented as a topic within the RabbitMQ service (which acts as the message broker between BigDataStack components). A hedged sketch of such a message is given below.
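The following minimal sketch illustrates what publishing such a violation message could look like, again assuming RabbitMQ via pika; the exchange name, routing key and message fields are assumptions for illustration, not BigDataStack’s actual wire format.

```python
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="qos", exchange_type="topic")

# Violation message from the Notifier to the Dynamic Orchestrator
# (field names are illustrative, not taken from the BigDataStack specification).
violation = {
    "application": "demo-app",
    "slo": "response_time",
    "objective": "<= 1.0s",
    "observed": "1.37s",
    "timestamp": "2019-07-02T14:58:07Z",
}
ch.basic_publish(exchange="qos", routing_key="slo.violation",
                 body=json.dumps(violation))
conn.close()
```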
The Dynamic Orchestrator communicates with the ADS-Ranking component to
trigger the dynamic adaptation (re-configuration) of the application or data
service deployment patterns.
Adapting at Runtime
If a user’s application is identified or predicted to have some deficiency
with respect to the quality-of-service targets, then that application’s
configuration needs to be altered to correct for this. For instance, this
might involve moving data closer to the machines performing the computation to
reduce IO latency, or, in more extreme cases, it might require the complete
redeployment of the user’s application on new, more suitable hardware.
BigDataStack supports a range of adaptations that might be performed, such as
Pattern Re-Deployment, where the goal is to select an alternative candidate
deployment pattern (hardware configuration) after the user’s application has
been deployed. This is used in cases where the original deployment pattern was
deemed unsuitable and this could not be rectified without changing the
deployment infrastructure. In this case, a new candidate deployment pattern
will be chosen, and the application services will be transitioned to this new
configuration. This may result in application down-time as services are moved.
The components involved in this adaptation are the Dynamic Orchestrator (DO)
and the Triple Monitoring. When a new application is deployed, the Playbook is
sent to the DO on the queue OrchestratorPlaybook. The DO reads the playbook
and enriches it, adding more information about the SLOs: it splits the values
of the metrics related to SLOs into different intervals that the QoS component
will monitor, e.g. response time can be divided into the intervals 0.5-1s,
1-1.5s, etc. In addition, the DO subscribes to the Triple Monitoring Engine
and creates a new queue, through which it will consume the metrics from the
application.
The Enriched Playbook is sent to the QoS Evaluator on the queue
EnrichedPlaybook. The QoS Evaluator registers this and starts monitoring the
application to detect when an SLO is violated; in that case, a message is sent
to the DO on the queue OrchestratorQOSFeed. The DO reads this message and,
based on the current state (as defined by the metrics consumed from the Triple
Monitoring Engine, the QoS information and its experience), decides what the
most likely action to resolve the violation is, and subsequently sends it to
the ADS-Ranker on the queue Lv3-ADSRanking-RR to start the adaptation.
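As an illustration of the enrichment step, the following sketch splits an SLO metric’s value range into fixed-width monitoring intervals; the step size and boundaries are assumptions, not the DO’s actual policy.

```python
def split_slo_intervals(lower: float, upper: float, step: float):
    """Split an SLO metric range into monitoring intervals, e.g. 0.5-1s, 1-1.5s, ..."""
    intervals, lo = [], lower
    while lo < upper:
        hi = min(lo + step, upper)
        intervals.append((lo, hi))
        lo = hi
    return intervals

print(split_slo_intervals(0.5, 3.0, 0.5))
# [(0.5, 1.0), (1.0, 1.5), (1.5, 2.0), (2.0, 2.5), (2.5, 3.0)]
```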
In the remainder of this section we provide more detail on how Pattern Re-
Deployment is operationalized within BigDataStack.
Pattern Re-Deployment
The aim of the pattern re-deployment task is to facilitate the selection of a
new candidate deployment pattern (CDP) if a previously selected CDP is no
longer considered viable. This might occur if a deployed application fails to
meet minimum service requirements and this cannot be resolved through data
service manipulation. In this case, we need to take into account why the
current pattern is failing and based on that information, re-rank the CDPs for
the user application and select a new alternative that will provide better
performance. This new CDP can then be used to transition the user’s
application to the new configuration by the Application and Data Services
Deployment component.
This task is triggered by the Dynamic Orchestrator when the orchestrator
detects that an application deployment is failing. It sends a notification to
the Application and Data Services Ranking component. More precisely, this
notification is processed by the Failure Encoder subcomponent. This component
first contacts the Global Decision Tracker to retrieve the other CDPs that
were not selected for the failing user’s application (as it is from these that
a new pattern will be selected). These patterns are then sent into the same
process pipeline as for first-time ranking (see Section 6.5), with the
exception that the previously selected deployment is excluded (we know that it
is insufficient) and the Pattern Selector subcomponent will also consider the
reason that the previously selected CDP failed.
When the ADS-Selector chooses the new CDP, this information is sent to the
ADS-Deploy, together with the instruction to redeploy. Then, the deployment
component translates the CDP, and communicates it to the container
orchestrator using the same process as defined in Section 6.5. The
orchestrator will then start a re-dimensioning process. If the process is
successful, the user’s process continues normally. However, if the re-
dimensioning was unsuccessful, the container orchestrator needs to destroy the
current deployment, stopping the processes and starting a new deployment from
scratch. This situation has the drawback that users have their processes
interrupted and/or restarted, which ultimately impairs the availability of the
application and data services (downtimes).
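A compact sketch of this failure-driven re-selection is given below; the data shapes and the scoring hook are assumptions that stand in for the Global Decision Tracker output and the Pattern Selector’s internal logic.

```python
def select_new_cdp(candidates, failed_cdp, failure_reason, score):
    """Re-rank candidate deployment patterns (CDPs) after a failure.

    `candidates` stands in for the Global Decision Tracker output; the failed
    pattern is excluded, and scoring may penalize the known failure reason.
    """
    remaining = [c for c in candidates if c["id"] != failed_cdp["id"]]
    ranked = sorted(remaining, key=lambda c: score(c, failure_reason), reverse=True)
    if not ranked:
        raise RuntimeError("no viable candidate deployment pattern left")
    return ranked[0]  # handed to ADS-Deploy for the actual transition
```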
#### 8\. Conclusions
This document refines the initial version of the BigDataStack architecture
presented in deliverable D2.4 - Conceptual model and Reference architecture.
It captures the updated version of the overall conceptual architecture in
terms of information flows and capabilities provided by each one of the main
building blocks. Additional refinements for each component, as well as the
changes in the main interactions between components, are also detailed in the
corresponding sections.
This report serves as a design documentation for the individual components of
the architecture (which are further specified and detailed in the
corresponding WP-level scientific reports) and presents the outcomes (in terms
of design) of the initial integrated prototypes and the obtained
experimentation and validation results.
# Introduction
The purpose of this deliverable D1.4 Data Management Plan_Interim version is
to provide an update on the data management lifecycle for the research data
that have so far been or are still foreseen to be collected, generated or
processed by the SecureIoT project, an early view of which was presented in
[1]. As part of making research data FAIR (findable, accessible, interoperable
and reusable), this version of the DMP includes relevant information that will
enable interested researchers and third parties to discover and reuse data
from the SecureIoT project in an easy and efficient way.
Towards that end, Section 2 provides a summary of the research data
collected/generated in the project. Details follow in a later Section, which
includes all datasets, even those that for certain reasons cannot be made
publicly available, as well as datasets that, while not collected or generated
in the project, have been imported and used in it (e.g. for training of
algorithms prior to the availability of relevant data coming from the
project’s trialing activities).
Section 3 is about FAIR data, elaborating on the approaches followed and to be
followed by the project to ensure visibility and reusability of the project’s
generated/collected data. Section 4 discusses the allocation of resources and
means of long term preservation of data, while Section 5 presents how research
data are handled in the context of the project to prevent unauthorized access
to them. It is worth noting that, as described in [2], while the project will
not be collecting any kind of personal data from the trialing activities
themselves, such data might be collected as part of T6.5 “Stakeholders’
Feedback and Evaluation” activities in the DoA, when collecting stakeholders’
feedback and opinions about project-generated results. Also, project data
collected during the trialing activities themselves, while not personal in
nature, still need to be securely stored, both to prevent tampering that would
jeopardize their quality and because they are commercially sensitive.
Section 6 presents updates on ethical aspects, while Section 7 presents a
detailed list of datasets, where for every dataset we include the “as of now”
view regarding the points mentioned in the previous Sections. Finally,
Section 8 concludes this document.
# Data Summary
SecureIoT has been collecting, generating and using data in the context of use
case validation in the three following broad domains:
* Multi-Vendor Industrie 4.0 Usage Scenarios
* Socially Assistive Robots and IoT Applications Usage Scenarios
* Connected Car and Autonomous Driving Usage Scenarios
These data are intended to validate the capability and performance of
SecureIoT components, with functionalities ranging from the collection of
security data (WP3), to the analysis of security data to identify emerging
threats (WP4), and eventually to the assessment of risks and levels of
compliance and the securing of software components (WP5), which is the main
objective of the project. For more details regarding the specific scenarios in
the context of which these data have been (or will still be) collected, the
interested reader can refer to [3].
Such data, in addition to being useful for SecureIoT project testing purposes,
can further promote and foster research and development activities in the
broader IoT security research community, in similar or even different contexts
depending on the specific deployment scenario.
In Section 7 we present further details for all datasets under the umbrella of
the respective use cases in terms of:
* Name
* Description
* (Expected) Dataset size
* Structure
* Data Utility (i.e. to whom each dataset might be useful)
Also, as described in [4], data for capturing stakeholders’ feedback are
envisioned. It is worth noting that the questionnaires to collect
stakeholders’ data are currently under review; therefore, the expected data
coming from stakeholders’ feedback are not currently reported in Section 7 of
this D1.4 deliverable (a provisional view of them can be found in Section
4.2.7 of [4]). Such data will nevertheless be collected in the context of all
3 use cases as part of the evaluation process.
# FAIR Data
## Making data findable, including provisions for metadata
Datasets collected/generated in the project, when they are to be deposited in
open data repositories, will adopt a file naming scheme that makes it easy to:
* link them with the SecureIoT project
* identify the type and structure of the data included
* identify the version of the dataset
With these in mind, the datasets produced by the project will use the
following file naming scheme:
**SecureIoT_UseCase_NameofDataset_DataStructure_FROM:date_TO:date_Location_version.Extension**
Just as an example for the sake of presentation: for the 1st version of data
coming from a vehicle without any compromise in the Connected Car use case,
representative of the date 15/01/2019 and of the location Cambridge, stored as
JSON entries in a zip file, the filename would be:
**SecureIoT_ConnectedCar_NormalCarData_JSON_20190115_Cambridge_v1.zip**
It is worth noting that if some of the fields in the file naming convention
are not needed for some datasets, e.g. because location is not of interest,
they can be omitted altogether.
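As an illustrative sketch (not a project-provided tool), a helper following this scheme and its optional fields could look as follows:

```python
def dataset_filename(use_case, name, structure, date, location, version, ext,
                     date_to=None):
    """Build a SecureIoT dataset filename following the naming scheme above.

    Optional fields (e.g. the location or the TO date) may be omitted, as noted.
    """
    parts = ["SecureIoT", use_case, name, structure]
    parts.append(date if date_to is None else f"FROM:{date}_TO:{date_to}")
    if location:
        parts.append(location)
    parts.append(version)
    return "_".join(parts) + "." + ext

print(dataset_filename("ConnectedCar", "NormalCarData", "JSON",
                       "20190115", "Cambridge", "v1", "zip"))
# SecureIoT_ConnectedCar_NormalCarData_JSON_20190115_Cambridge_v1.zip
```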
This naming scheme, together with a representative set of keywords (one of
which will be SecureIoT, with all the rest accurately reflecting the content
of the datasets) and other associated metadata based on the description of the
dataset, will allow the dataset to be found easily.
It is worth noting that for depositing datasets that can be made publicly
available (either immediately after they have been collected or after an
embargo period or based on certain restrictions) the Zenodo (
_www.zenodo.org_ ) repository will be used. Zenodo is a free of charge, open
data repository which can handle any file format up to 50GB. Zenodo allows the
uploader to define and store metadata following Zenodo’s metadata standards
and also generates and registers Digital Object Identifiers (DOIs) through
Datacite ( _https://datacite.org/_ ) which is the leading global non-profit
organization for providing DOIs for research data, making DOIs from SecureIoT
accessible in the long term.
## Making data openly accessible
Datasets collected/generated in the context of the SecureIoT project will by
default be made publicly available, unless terms and conditions apply that
prohibit this (e.g. IPR, commercial sensitivity etc.). Data from
questionnaires to stakeholders, though, will not be shared through Zenodo in
any form, but will remain solely and strictly for use within the project. For
data that do not need to remain completely closed, as mentioned above, the
Zenodo repository will be used for depositing them. For datasets to which
access restrictions apply, Zenodo eases the process of requesting and being
granted access permission by allowing uploaders of data to present the terms
and conditions for access and to be notified when a request for access is
issued.
In Section 7, the consortium’s stance with respect to the openness of
datasets, as of the time of writing this document, is presented; this may be
revisited in due course and, if so, this will be reflected in the final
version of the Data Management Plan deliverable [5]. Section 7 also presents
the tools that can be used to open/read the respective datasets.
## Making data interoperable
Data will be made interoperable by following common vocabularies and
ontologies and/or by providing a clear description of the data structures if
they follow less common formats. Through a clear description of the data
structure, other researchers, even those not using the same data structure,
will be able to transform the data accordingly for use by their own custom
software tools.
## Increase data re-use
In order to permit the reuse of data, the datasets will be accompanied by a
relevant license. SecureIoT considers the family of Creative Commons Licenses
(CCL) ( _https://creativecommons.org/licenses/_ ) as a very straightforward
way to allow the re-use of data as they ensure that the source and authority
of the data are recognized and commercial interests -if applicable- can also
be protected.
The specific version of the CCL license (or any other license, if different
for some reason) used is dataset-dependent and is presented in Section 7
together with the dataset owners.
# Allocation of Resources
In the context of the project, it is the role of task T1.3 “Use Cases
Coordination”, with its allocated resources, to ensure that datasets, as they
are produced, are checked for quality and, if their nature allows, are shared
with the broader public.
Using Zenodo (no fees) ensures that long term preservation of data can be
achieved with negligible associated costs.
Every dataset owner will be responsible for handling the data management of
respective datasets; from their collection/generation to their eventual upload
in Zenodo, when there are no IPR or other reasons which would prohibit them
from being deposited.
# Data Security
All research data gathered during the course of the project will be securely
handled to protect them from loss and unauthorized access. Data need to be
securely stored due to their potentially personal nature (data from
questionnaires to stakeholders), but also to prevent tampering that would
jeopardize their quality, and because some of them are commercially sensitive
(data coming from the trialing activities themselves).
The project is applying the following measures, prescribed by the General Data
Protection Regulation (2016/679) to ensure adequate protection during the
project execution for research data with partners involved in the processing
of data, in charge of applying them:
* Data storage in safe locations, with access limited to authorized persons and partners of the project
* Safe data transfer through secure, encryption-protected connections
* Remote access through secure, encryption-protected connections, granting authorization only to persons and activities relevant to the project and within the time frame of the project
* Close monitoring of access to SecureIoT platform instantiations used for use case testing activities
* For personal data (if this turns out to be the case), pseudonymization or complete anonymization will be applied to remove the link between the stored data and the real person’s identity. For reporting purposes (e.g. in [6] and [7]), only anonymized and aggregated data will be reported, to ensure that data subjects cannot be identified.
Regarding data deposited in Zenodo, data security relies on the widely tested
Zenodo platform.
# Ethical Aspects
Ethical aspects related to activities of the SecureIoT project are managed
within WP9 “Ethics requirements”. As described in [2], personal data will not
be collected as part of the trialing activities themselves but might be
collected through questionnaires gathering stakeholders’ feedback. For this
activity, informed consent forms describing why (if this is the case) personal
data are needed, and how and for how long they will be stored, will be
included in the questionnaires. The template of this consent form is currently
under review and will be annexed to the final version of the Data Management
deliverable at M36.
The project is also collecting personal data through its web portal and will
also be collecting personal data through its market platform (WP7). While
these are not research data, for the sake of completeness we include in this
deliverable, as Annexes, the privacy policy of the SecureIoT web portal and
market platform.
# List of Datasets
In this Section, we present details for datasets in the project; all
representing version 1. As data collection and use is an ongoing process,
details in the list are subject to change. If need be, follow-up versions will
be created to capture any changes during the further course of the project.
When depositing to Zenodo datasets that are not closed, further details
regarding the structure of datasets (e.g. fields and measurement units) will
be provided to assist interested third parties.
## Multi-Vendor Industrie 4.0 Usage Scenarios data
### Datasets collected/generated in the project
<table>
<tr>
<th>
Number
</th>
<th>
#1
</th> </tr>
<tr>
<td>
Name
</td>
<td>
2019/06/10 - Injection Molding - low rate - normal
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Normal injection molding data with low datarate (1/0.5s)
This file contains datapoints from simulated injection molding cycles within
the industry 4.0 use case. A cycle, i.e. producing one piece of injection
molded product, takes 60 seconds. During this time, molten plastic is injected
into the mold, resulting in a strong increase of pressure and temperature in
the mold and the mold area. The part then cools off until it is cool enough
for the mold to be released.
The dataset contains 13 parameters, which are explained below.
</td> </tr>
<tr>
<td>
</td>
<td>
Date
</td>
<td>
The date of the simulation
</td>
<td>
</td> </tr>
<tr>
<td>
Time
</td>
<td>
The timestamp for the respective value
</td> </tr>
<tr>
<td>
Heater
</td>
<td>
Indicates, if the heater is on or off
</td> </tr>
<tr>
<td>
T_hopper
</td>
<td>
The temperature of the plastic hopper (°C)
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
T_barrel
</th>
<th>
The temperature of the plastic barrel (°C)
</th>
<th>
</th> </tr>
<tr>
<th>
T_mold
</th>
<th>
The temperature on the outside of the mold (°C)
</th> </tr>
<tr>
<th>
T_machine
</th>
<th>
The temperature on the outside of the machine (°C)
</th> </tr>
<tr>
<th>
P_barrel
</th>
<th>
The pressure of the barrel (bar)
</th> </tr>
<tr>
<th>
P_mold
</th>
<th>
The in-mold pressure (bar)
</th> </tr>
<tr>
<th>
M_piston
</th>
<th>
Indicates piston movement
</th> </tr>
<tr>
<th>
Valve_Filler
</th>
<th>
Valve position of the filler.
</th> </tr>
<tr>
<th>
Valve_Mold_inlet
</th>
<th>
Valve position at the mold inlet.
</th> </tr>
<tr>
<th>
Valve_Mold_outlet
</th>
<th>
Valve position at the mold outlet.
</th> </tr>
<tr>
<th>
Note that the parameters Heater, M_piston, Valve_Filler,
Valve_Mold_inlet/outlet may not contain correct values at the moment.
</th> </tr>
<tr>
<td>
Size
</td>
<td>
17.8 MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Variable name, Type ( **N** umeric or **A** SCII), Decimals (number of decimal
places in the case of a numeric variable), Writable (1 if it can be written by
the machine, 0 if not)
Date,N,0,0,
Time,A,0,0,
Heater,N,0,0,
T_hopper,N,8,0,
T_barrel,N,8,0,
T_mold,N,9,0,
T_machine,N,8,0,
P_barrel,N,9,0,
P_mold,N,8,0,
M_piston,N,0,0,
Valve_Filler,N,0,0,
Valve_Mold_inlet,N,0,0,
Valve_Mold_outlet,N,0,0
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Closed, until further notice (may contain commercially sensitive information)
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Hendrik Eikerling ( [email protected]_ )
</td> </tr> </table>
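As a hedged illustration, the variable-definition rows listed in the Structure field above could be parsed as follows (the excerpt drops the trailing commas of the listing; names and the returned layout are assumptions):

```python
import csv
import io

HEADER = """Date,N,0,0
Time,A,0,0
T_hopper,N,8,0
P_mold,N,8,0"""  # excerpt of the variable definitions listed above

def parse_variable_defs(text):
    """Parse 'name, type (N/A), decimals, writable' definition rows."""
    defs = {}
    for name, vtype, decimals, writable in csv.reader(io.StringIO(text)):
        defs[name] = {
            "numeric": vtype == "N",
            "decimals": int(decimals),
            "writable": writable == "1",
        }
    return defs

print(parse_variable_defs(HEADER)["T_hopper"])
# {'numeric': True, 'decimals': 8, 'writable': False}
```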
<table>
<tr>
<th>
Number
</th>
<th>
#2
</th> </tr>
<tr>
<td>
Name
</td>
<td>
2019/06/10 - Injection Molding - high rate - normal
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Normal injection molding data with high datarate (1/0.05s) See #1
</td> </tr>
<tr>
<td>
Size
</td>
<td>
176 MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
See #1
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Closed, until further notice (may contain commercially sensitive information)
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Hendrik Eikerling ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#3
</th> </tr>
<tr>
<td>
Name
</td>
<td>
2019/06/10 - Injection Molding - low rate - anomalous
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Anomalous injection molding data with low datarate (1/0.5s).
Not all cycles are anomalous - the chance of an anomalous cycle is 50%.
See #1
</td> </tr>
<tr>
<td>
Size
</td>
<td>
19.3 MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
See #1
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Closed, until further notice (may contain commercially sensitive information)
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Hendrik Eikerling ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#4
</th> </tr>
<tr>
<td>
Name
</td>
<td>
2019/06/10 - Injection Molding - high rate - anomalous
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Anomalous injection molding data with high datarate (1/0.05s)
Not all cycles are anomalous - the chance of an anomalous cycle is 50%.
See #1
</td> </tr>
<tr>
<td>
Size
</td>
<td>
192 MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
See #1
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Closed, until further notice (may contain commercially sensitive information)
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Hendrik Eikerling ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#5
</th> </tr>
<tr>
<td>
Name
</td>
<td>
2019/06/10 - Injection Molding - high rate - anomalous timestamps
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Timestamps of anomalous cycles – high datarate (1/0.05s).
Correspond to anomalous cycles of dataset #4.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
45.4 MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
See #1
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Closed, until further notice (may contain commercially sensitive information)
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Hendrik Eikerling ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#6
</th> </tr>
<tr>
<td>
Name
</td>
<td>
2019/06/10 - Injection Molding - low rate - anomalous timestamps
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Timestamps of anomalous cycles – low datarate (1/0.5s).
Correspond to anomalous cycles of dataset #3.
</td> </tr>
<tr>
<td>
Use case involved
</td>
<td>
Industrie 4.0 – Injection Molding
</td> </tr>
<tr>
<td>
Size
</td>
<td>
4.7 MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
See #1
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Closed, until further notice (may contain commercially sensitive information)
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Hendrik Eikerling ( [email protected]_ )
</td> </tr> </table>
### Datasets imported
So far, no open data sets have been identified as sources of information
relevant to the use case; if this changes in later stages of the project, this
section will be updated.
## Socially Assistive Robots and IoT Applications Usage Scenarios
### Datasets collected/generated in the project
<table>
<tr>
<th>
Number
</th>
<th>
#1
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Environmental sensing
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Environmental sensing data coming from environmental sensors
</td> </tr>
<tr>
<td>
Size
</td>
<td>
414 kB sensing data/day
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
{
"_id" : "2014-09-01T00:00:00.000Z",
"_rev" : "1-b34306f2f0344672d653f5b5c7df711c",
"movement" : false,
"illuminance" : 2.0464407112347436, "temperature" : 19.40782989444393,
"humidity" : 52.07199253060395,
"NG" : 1,
"CO" : 2,
"LPG" : 1,
"door_open" : true,
"timestamp" : "2014-09-01T00:00:00.000Z"
}
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Sofoklis Kyriazakos ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#2
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Wearable sensing
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Wearable sensing data coming from wearable devices
</td> </tr>
<tr>
<td>
Size
</td>
<td>
3.03 MB wearable sensing data/day
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
{
"_id": "2014-09-01T12:29:10.000Z",
"_rev": "1-156f011ef2ecd6643a089eb61bc0b24e",
"activity": {
"IMA": 0.1112141600593139,
"ISA": 0.1112141600593139,
"steps": 5473,
"physicalActivity": "WALKING"
},
"fall": false,
"timestamp": "2014-09-01T12:29:10.000Z"
}
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Sofoklis Kyriazakos ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#3
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Visual sensing
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Visual sensing data coming from visual sensors
</td> </tr> </table>
<table>
<tr>
<th>
Size
</th>
<th>
23 kB visual sensing data/day
</th> </tr>
<tr>
<td>
Structure
</td>
<td>
{
"_id": "2014-09-01T12:29:10.000Z",
"_rev": "1-156f011ef2ecd6643a089eb61bc0b24e",
"people": [
{
"trackID": 0,
"x": 400,
"y": 340,
"width": 40,
"height": 40,
"positionConf": 0.9,
"gender": "MALE", "genderConf": 0.9,
"age": 70,
"ageConf": 0.8,
"emotion": "NEUTRAL",
"emotionConf": 0.7
},
{
"trackID": 2,
"x": 230,
"y": 310,
"width": 43,
"height": 42,
"positionConf": 0.9,
"gender": "MALE", "genderConf": 0.9,
"age": 68,
"ageConf": 0.7,
"emotion": "NEUTRAL",
"emotionConf": 0.7
}
],
"timestamp": "2014-09-01T12:29:10.000Z"
}
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Sofoklis Kyriazakos ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#4
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Resting furniture sensing
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Resting furniture data coming from furniture sensors
</td> </tr>
<tr>
<td>
Size
</td>
<td>
1.17 MB resting furniture data/day
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
{
"_id": "2014-09-01T00:00:10.000Z",
"_rev": "1-80ede81818d8a45211212921ae6749a7",
"pressure": true,
"IMA": 0.08626862285226562,
"timestamp": "2014-09-01T00:00:10.000Z"
}
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Sofoklis Kyriazakos ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#5
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Vitals sensing
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Vitals sensing data coming from vitals sensors
</td> </tr>
<tr>
<td>
Size
</td>
<td>
0.33 kB vitals sensing data/day
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
{
"_id": "2015-04-05T10:03:00.000Z",
"_rev": "1-88c1d3e9d1ce463320a70a9c740b5b57",
"SPO2": 99,
"HR": 75,
"HRV": 43,
"systolicBP": 139,
"diastolicBP": 87,
"meanABP" : 92,
"noninvBPPR" : 67
"timestamp": "2015-04-05T10:03:00.000Z"
}
Note: not all devices populate all metadata. Each of the devices may write a
subset of these elements in its JSON file.
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Sofoklis Kyriazakos ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#6
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Patterns of ‘Challenge’ gesture and corresponding motor _positions_
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The robot’s gesture controller parses a recorded gesture file and generates
the proper motor commands. Regardless of the application context, the
generated motor _positions_ commands were recorded for both normal and
abnormal cases.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
2MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Every dataset contains
* motors position CSV which includes timestamp and motor joint
positions
* another CSV file which indicates at which time stamp the normal and abnormal cases are generated
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
_License_
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Pouyan Ziafati ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#7
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Patterns of ‘Show_right” gesture and corresponding motor _positions_
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The robot’s gesture controller parses a recorded gesture file and generates
the proper motor commands. Regardless of the application context, the
generated motor _positions_ commands were recorded for both normal and
abnormal cases.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
1MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Every dataset contains
* motors position CSV which includes timestamp and motor joint
positions
* another CSV file which indicates at which time stamp the normal and abnormal cases are generated
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Pouyan Ziafati ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#8
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Patterns of ‘Challenge’ gesture and corresponding motor _velocities_ _(FAST)_
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The robot’s gesture controller parses a recorded gesture file and generates
the proper motor commands. Regardless of the application context, the
generated motor _velocities_ commands were recorded for both normal and
abnormal cases.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
1MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Every dataset contains
* motors velocities CSV which includes timestamp and motor joint
velocities
* another CSV file which indicates at which time stamp the normal and abnormal cases are generated
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Pouyan Ziafati ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#9
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Patterns of ‘Challenge’ gesture and corresponding motor _velocities_ _(SLOW)_
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The robot’s gesture controller parses a recorded gesture file and generates
the proper motor commands. Regardless of the application context, the
generated motor _velocities_ commands were recorded for both normal and
abnormal cases.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
1MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Every dataset contains
* motors velocities CSV which includes timestamp and motor joint
velocities
* another CSV file which indicates at which time stamp the normal and abnormal cases are generated
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Pouyan Ziafati ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#10
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Patterns of ‘Show_right” gesture and corresponding motor _velocities_ _(FAST)_
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The robot’s gesture controller parses a recorded gesture file and generates
the proper motor commands. Regardless of the application context, the
generated motor _velocities_ commands were recorded for both normal and
abnormal cases.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
1MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Every dataset contains
</td> </tr>
<tr>
<td>
</td>
<td>
* motors velocities CSV which includes timestamp and motor joint
velocities
* another CSV file which indicates at which time stamp the normal and abnormal cases are generated
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Pouyan Ziafati ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#11
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Patterns of motor data during a specific application context
</td> </tr>
<tr>
<td>
Description
</td>
<td>
For the normal case, the QT MemGame demo is played by a user and the motor
positions are logged during the different runs of the game, together with the
start/end time of each run.
For the abnormal case, the QT MemGame demo is played by a user but the
behavior of the game is disturbed by some irrelevant gestures and by moving
motors to positions which should not occur within this application context.
The motor positions, the start/end time of each run of the game and the attack
(abnormal case) times are logged.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
1MB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
Every dataset contains
* motors positions CSV which includes timestamp and motor joint positions
* another CSV file which indicates at which time stamp the normal and abnormal cases are generated
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
Researchers
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and Microsoft Excel or another tool to open CSV files
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
Pouyan Ziafati ( [email protected]_ )
</td> </tr> </table>
### Datasets imported
So far, no open data sets have been identified as sources of information
relevant to the use case; if this changes in later stages of the project, this
section will be updated.
## Connected Car and Autonomous Driving Usage Scenarios
### Datasets collected/generated in the project
<table>
<tr>
<th>
Number
</th>
<th>
#1
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Bilbao. Normal datasets.
Generated: 2019/07/02
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Bilbao.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
There is no (intended) weird behaviour or attack included.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Three trips are included:
* Bilbao0
* DriverRecord JSON file: 434KB o CANBeat: 92KB
* Bilbao1
* DriverRecord JSON file: 445KB o CANBeat: 94KB
</td> </tr>
<tr>
<td>
</td>
<td>
• Bilbao2
o DriverRecord JSON file: 340KB o CANBeat: 72KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data
{"bra": "0.0", "dist": "0.00", "element": "1", "fue": "0.00", "gear":
"2", "ignition": "0", "lat": "52.23741", "lon": "0.15823", "rpm": "1000",
"speed": "0.00", "str_ang": "-1.5", "throttle": "0.0", "timestamp":
"2019-01-15 14:58:07.469314", "v2xLat":
"52.23741", "v2xLon": "0.15823"}
* CAN data (CANBeat)
{"cbus_load": "9", "invl_crcs": "0", "invl_seqs": "0", "timestamp":
"2019-05-21 15:03:17.248674", "unex_dlcs": "0", "unex_msgs": "0"}
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
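Assuming each CANBeat record is stored as one JSON object per line, a minimal sketch for flagging suspicious records could look as follows; both the file-layout assumption and the alerting rule are illustrative only:

```python
import json

def canbeat_alerts(path):
    """Scan a CANBeat log (assumed one JSON object per line) for anomaly counters."""
    alerts = []
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            # Any non-zero invalid-CRC/sequence or unexpected-DLC/message counter
            # is treated here as suspicious (a simplification for illustration).
            if any(int(rec[k]) > 0 for k in
                   ("invl_crcs", "invl_seqs", "unex_dlcs", "unex_msgs")):
                alerts.append(rec["timestamp"])
    return alerts
```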
<table>
<tr>
<th>
Number
</th>
<th>
#2
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Cologne. Normal datasets with a manipulated (CAN
& Vehicle Data attacks) version of Cologne0 for comparison.
Generated: 2019/07/02
</td> </tr> </table>
<table>
<tr>
<th>
Description
</th>
<th>
This dataset represents vehicle information collected while driving around
Cologne.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
‘ _Cologne0_ ’and ‘ _Manipulated Cologne0_ ’ are based on the same “trip”, of
which the manipulated version has some unusual behaviour both in the
application level data (DriverRecord) and in the CAN activity (CANBeat).
Other than ‘ _Manipulated Cologne0_ ’, there is no (intended) weird behaviour
or attack included in these sets.
</th> </tr>
<tr>
<td>
Size
</td>
<td>
Three trips are included:
* Cologne0 o DriverRecord JSON file: 250KB o CANBeat JSON file: 53KB
* Manipulated version of Cologne0 o DriverRecord JSON file: 251KB o CANBeat JSON file: 54KB
* Cologne1 o DriverRecord JSON file: 350KB o CANBeat JSON file: 75KB
* Cologne2 o DriverRecord JSON file: 249KB o CANBeat JSON file: 53KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#3
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Munich. Normal datasets with a manipulated (CAN &
Vehicle Data attacks) version of Munich0 for comparison.
Generated: 2019/07/02
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Munich.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
‘ _Munich0_ ’and ‘ _Manipulated Munich0_ ’ are based on the same “trip”, of
which the manipulated version has some unusual behaviour both in the
application level data (DriverRecord) and in the CAN activity (CANBeat).
Other than ‘ _Manipulated Munich0_ ’, there is no (intended) weird behaviour
or attack included in these sets.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Two trips are included:
* Munich0 o DriverRecord JSON file: 254KB o CANBeat JSON file: 54KB
* Manipulated version of Munich0 o DriverRecord JSON file: 255KB
o CANBeat JSON file: 55KB
* Munich1 o DriverRecord JSON file: 299KB
</td> </tr>
<tr>
<td>
</td>
<td>
o CANBeat JSON file: 63KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#4
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Paris. Normal datasets with a manipulated (CAN &
Vehicle Data attacks) version of Paris0 for comparison.
Generated: 2019/07/02
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Paris.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
‘ _Paris0_ ’and ‘ _Manipulated Paris0_ ’ are based on the same “trip”, of
which the manipulated version has some unusual behaviour both in the
application level data (DriverRecord) and in the CAN activity (CANBeat).
Other than ‘ _Manipulated Paris0_ ’, there is no (intended) weird behaviour or
attack included in these sets.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Five trips are included:
* Paris0 o DriverRecord JSON file: 350KB o CANBeat JSON file: 75KB
* Manipulated version of Paris0 o DriverRecord JSON file: 350KB o CANBeat JSON file: 75KB
* Paris1 o DriverRecord JSON file: 275KB o CANBeat JSON file: 59KB
* Paris2 o DriverRecord JSON file: 306KB o CANBeat JSON file: 65KB
* Paris3 o DriverRecord JSON file: 337KB o CANBeat JSON file: 72KB
* Paris4 o DriverRecord JSON file: 222KB o CANBeat JSON file: 48KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#5
</th> </tr> </table>
<table>
<tr>
<th>
Name
</th>
<th>
Mature development datasets, Athens. Normal datasets with a manipulated (CAN &
Vehicle Data attacks) version of Athens0 for comparison.
Generated: 2019/07/02
</th> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Athens.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
‘ _Athens0_ ’and ‘ _Manipulated Athens0_ ’ are based on the same “trip”, of
which the manipulated version has some unusual behaviour both in the
application level data (DriverRecord) and in the CAN activity (CANBeat)
Other than ‘ _Manipulated Athens0_ ’, there is no (intended) weird behaviour
or attack included in these sets.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Three trips are included:
* Athens0 o DriverRecord JSON file: 278KB o CANBeat JSON file: 59KB
* Manipulated version of Athens0 o DriverRecord JSON file: 279KB o CANBeat JSON file: 60KB
* Athens1 o DriverRecord JSON file: 276KB o CANBeat JSON file: 58KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#6
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Brussels. Normal datasets
Generated: 2019/07/02 & 2019/07/03
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Brussels.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
There is no (intended) weird behaviour or attack included in these sets.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Three trips are included:
* Brussels0 o DriverRecord JSON file: 362KB o CANBeat JSON file: 77KB
* Brussels1 o DriverRecord JSON file: 266KB o CANBeat JSON file: 57KB
* Brussels2 o DriverRecord JSON file: 184KB o CANBeat JSON file: 39KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
• Cybersecurity experts aiming to understand the normal and abnormal
performance of a connected vehicle.
</td> </tr>
<tr>
<td>
</td>
<td>
• Providers interested in the development of services for connected and
autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#7
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Waterford. Normal datasets with a manipulated
(CAN & Vehicle Data attacks) version of Waterford0 for comparison.
Generated: 2019/07/02 & 2019/07/03
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Waterford.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
‘ _Waterford0_ ’and ‘ _Manipulated Waterford0_ ’ are based on the same “trip”,
of which the manipulated version has some unusual behaviour both in the
application level data (DriverRecord) and in the CAN activity (CANBeat).
Other than ‘ _Manipulated Waterford0_ ’, there is no (intended) weird
behaviour or attack included in these sets.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Three trips are included:
* Waterford0 o DriverRecord JSON file: 484KB o CANBeat JSON file: 102KB
* Manipulated version of Waterford0
</td> </tr>
<tr>
<td>
</td>
<td>
o DriverRecord JSON file: 484KB o CANBeat JSON file: 103KB
* Waterford1 o DriverRecord JSON file: 250KB o CANBeat JSON file: 53KB
* Waterford2 o DriverRecord JSON file: 431KB o CANBeat JSON file: 91KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
<table>
<tr>
<th>
Number
</th>
<th>
#8
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Mature development datasets, Cambridge. Normal datasets with a manipulated
(CAN & Vehicle Data attacks) version of Cambridge0 for comparison.
Generated: 2019/07/02
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset represents vehicle information collected while driving around
Cambridge.
The dataset contains multiple vehicle signals as collected by the IDAPT
onboard vehicle-unit from the vehicle CAN networks, IMU and V2X. In addition,
information gathered from the vehicle CAN bus by SecureIoT CANBeat is also
included.
</td> </tr>
<tr>
<td>
</td>
<td>
‘ _Cambridge0_ ’and ‘ _Manipulated Cambridge 0_ ’ are based on the same
“trip”, of which the manipulated version has some unusual behaviour both in
the application level data (DriverRecord) and in the CAN activity (CANBeat).
Other than ‘ _Manipulated Cambridge0_ ’, there is no (intended) weird
behaviour or attack included in these sets.
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Three trips are included:
* Cambridge0: DriverRecord JSON file: 499KB; CANBeat JSON file: 106KB
* Manipulated version of Cambridge0: DriverRecord JSON file: 500KB; CANBeat JSON file: 107KB
* Cambridge1: DriverRecord JSON file: 292KB; CANBeat JSON file: 63KB
* Cambridge2: DriverRecord JSON file: 581KB; CANBeat JSON file: 124KB
</td> </tr>
<tr>
<td>
Structure
</td>
<td>
* Vehicle data: see previous datasets
* CAN data (CANBeat) : see previous datasets
</td> </tr>
<tr>
<td>
Utility
</td>
<td>
* Cybersecurity experts aiming to understand the normal and abnormal performance of a connected vehicle.
* Providers interested in the development of services for connected and autonomous vehicles.
</td> </tr>
<tr>
<td>
Openness
</td>
<td>
Open
</td> </tr>
<tr>
<td>
Tool needed
</td>
<td>
Unzipper and text editor
</td> </tr>
<tr>
<td>
License
</td>
<td>
Creative Commons Attribution 4.0 International Public License
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
David Evans ( [email protected]_ )
</td> </tr> </table>
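Both the DriverRecord and the CANBeat files in these datasets are plain JSON inside a zip archive, so beyond an unzipper and a text editor they can also be inspected programmatically. A minimal sketch in Python, where the archive and member file names are assumptions for illustration:

```python
import json
import zipfile

# Hypothetical archive and member names; the published datasets may use different ones.
ARCHIVE = "waterford_trips.zip"
MEMBERS = ["Waterford0_DriverRecord.json", "Waterford0_CANBeat.json"]

with zipfile.ZipFile(ARCHIVE) as zf:
    for name in MEMBERS:
        with zf.open(name) as fh:
            data = json.load(fh)
        # Works whether the top level is a JSON array or object.
        print(f"{name}: {len(data)} top-level entries")
```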
### Datasets imported
So far, no open data sets have been identified as sources of information
relevant to the use case; if this changes in later stages of the project, this
section will be updated.
# Conclusions
This deliverable presented the current view of the SecureIoT project in terms
of datasets that have been or will be collected/generated and used in the
context of the project.
The final version of the DMP at M36 will present a final and definitive list
of datasets that will have been collected/generated by the end of the
project; unless a dataset needs to remain closed for clearly explained
reasons, all the others will have been uploaded to the Zenodo repository,
accompanied by suitable metadata, keywords, licenses and descriptions that
will ease their reuse by interested third parties.
# Introduction
## Overall Objective
As part of its exploitation and sustainability strategy, SecureIoT will be
releasing part of its platform as open source software, which will be made
available through the project’s ecosystem portal developed in WP7 of the
project. Along with software, the project plans to release datasets as well,
as a means of enabling third parties (i.e., members of the SecureIoT platform
community) to test, validate and possibly extend SecureIoT developments. This
intention is fully in line with SecureIoT’s strategy for data management, as
reflected in the project’s DoA (Description of the Action) document. In this
context, this part of the deliverable is devoted to the presentation of the
project’s Data Management Plan (DMP).
In principle, the release of data in the scope of SecureIoT is aimed at the
following objectives:
* **Validation of SecureIoT components** : SecureIoT needs to provide partners and thirdparties with an easy way for using and validating its developments. In most cases, this requires the availability of some data that can be used to validate the operation of SecureIoT components.
* **Demonstration of SecureIoT components** : In addition to boosting the validation of SecureIoT components, datasets are also needed for running demonstrations of the various prototypes. Demonstrations are an essential element of ecosystem building, as third parties are usually eager for one-click demonstrations that can easily help them understand the operation of certain software components.
* **Training and Education** : Open datasets can be an invaluable resource for developing training and education modules, such as the ones developed in the scope of the SecureIoT training services.
* **Follow the GDPR guidelines** : In May 2018, the new European Regulation on Privacy, the General Data Protection Regulation (GDPR), came into effect. In this DMP we describe the measures taken to protect the privacy of all data provided in light of the GDPR.
In order to realize these objectives, SecureIoT is considering the release of
certain datasets as open data. This DMP identifies candidate datasets, along
with the preconditions for making them openly accessible as part of offerings
to the project’s ecosystem.
## DMP Evolution
The DMP presented in this deliverable is characterized as preliminary, given
that the project is still in the process of finalizing the specifications of
validating scenarios and use cases, while actual data capturing has not
commenced yet. SecureIoT will release updates to the present DMP, in-line with
the evolution of the specification and implementation of validating use cases,
including their deployment in the test environments.
As already outlined, this preliminary version of the DMP has a dual objective:
First to identify available datasets that are likely to be opened and shared
as part of the SecureIoT ecosystem. Second, to identify the conditions that
should be met in order for these datasets to be opened. The identification of
such conditions is particularly important, given that making data public is
against the corporate policies of the manufacturers in the consortium. In
certain cases, this important barrier can be lowered following appropriate
processing of the data (e.g., anonymization), as well as upon receipt of the
appropriate approvals.
## GDPR
Since 25 May 2018, the GDPR has been in force and binding, and this also
applies to the SecureIoT project. Therefore, partners are following these new
rules and principles. In this section, we describe how the founding
principles of the GDPR will be followed in the SecureIoT project. More
specifically, the following points are taken into account:
* **Lawfulness, fairness and transparency** : Personal data shall be processed lawfully, fairly and in a transparent manner in relation to the data subject.
* **Purpose limitation** : Personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.
* **Data minimization** : Personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed.
* **Accuracy** : Personal data shall be accurate and, where necessary, kept up to date. All data collected will be checked for consistency.
* **Storage limitation** : Personal data shall be kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.
* **Integrity and confidentiality** : Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.
* **Accountability** : The controller shall be responsible for, and be able to demonstrate compliance with the GDPR.
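To make principles such as data minimization and integrity concrete, a common supporting technique is pseudonymization of direct identifiers before a dataset is shared. The following is a minimal illustrative sketch, not the project's actual mechanism; the field name and salt handling are assumptions:

```python
import hashlib

def pseudonymize(record: dict, secret_salt: bytes) -> dict:
    """Replace a direct identifier with a salted hash (illustrative only)."""
    out = dict(record)
    user_id = out.pop("user_id").encode()  # "user_id" is a hypothetical field name
    out["user_pseudonym"] = hashlib.sha256(secret_salt + user_id).hexdigest()[:16]
    return out

# The salt must remain with the data controller; otherwise the hashes could be
# reversed by brute force over known identifiers.
print(pseudonymize({"user_id": "bob", "heart_rate": 72}, b"controller-secret-salt"))
```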
## Datasets Description Template
In the following paragraphs we provide an overview of the datasets that
SecureIoT will be considering for release as part of its ecosystem. Note that
the inclusion of a dataset in the list implies that it is considered for
offering in the project’s portal, subject to the clearance of some
preconditions. Dataset information is divided into five different categories,
and in each category the information is described in tabular form. The
attributes of the information provided are:
### General information
* **Ref. No** : Sequence Number
* **Title** : The title of the Dataset
* **Version** : The dataset info version
* **Description** : Briefly describe what data would represent
* **Type of data** : Data already existing OR data to be released
* **Dataset availability** : Date of the dataset availability
* **Future revisions anticipated** : Define if future revisions are anticipated
* **Owner** : Denotes the provider of the datasets.
* **Contact Person** : Person in charge of the release of the dataset and its inclusion in the SecureIoT portal.
* **Related Use Cases** : The set of SecureIoT use cases to which the dataset relates. The description of the use cases is performed with reference to deliverable D2.2.
* **Utility / Potential Use** : An illustration of why the particular dataset could be useful to the SecureIoT community, e.g.:
o Research and experimentation
o Service Development / Integration
o Training & Education
### Environment / Context
* **Directly observable device types** : i.e., Sensor, robot, vehicle board, monitor device, edge node, gateway
* **Directly observable software** : i.e., IoT application, gateway software, cloud service app…
* **Indirectly observable device** : i.e., Sensor, robot, vehicle board, monitor device, edge node, gateway (devices which are not directly monitored, be exhaustive to the extent possible)
* **Indirectly observable software** : List the software which is observed indirectly
* **Architecture/Topology description and communication protocols** : Figure showing where the monitoring probes are located (some uncertainty may remain)
### Data access
Here there are three cases:
1. Data is already retrieved and stored as data files
2. Monitoring data can be retrieved through an interface
3. Data is present in sw/hw but no means exist yet to access them remotely; a probe needs to be developed
The first two may coincide. Data access has the following attributes:
* **Dataset provided as data file(s)** : Define if the dataset is provided as data file(s)
* **Remote accessibility** : Define if the data are remotely accessible and how.
* **If data is not yet accessible, how can they be retrieved?** : Define the method by which the data can be accessed in the future.
### Data description
* **Data format** : i.e., NetFlow, pcap, syslog, json (when an interface is used, the format of embedded data is needed to be described)
* **Encryption** : explain if and how the data are encrypted.
* **Data format description** : describe the syntax and semantics of data (very important for non-standard formats, e.g. describe the columns of a csv file, or the structure and semantics of what contains a JSON file)
* **For unusual format, tool to read it** : specify the required tool/library to read the data if their data type is not standard.
* **Dataset generation** : Specify whether the data was monitored in a system with real users; if not, how was the data generated?
* **Attack** : Specify whether the dataset contains attacks; if yes, whether the attacks are annotated, and at what granularity.
* **Dataset statistics** : i.e., Duration, size(s) in appropriate format (MB, pkts), number of packets breakdown per IP address, protocols… (be exhaustive as possible)
* **Sample of data** : Provide a sample of data in this attribute or a link to them.
### Data restrictions
* **Is the data open publicly?** : Specify if the data are public
* **If no, is there a plan to make data open?** : If the data are not public, specify whether there is a plan to make them public.
* **If no, will the data be accessible to the consortium, or to specific partner(s)?** : Specify if the data can be accessible to the consortium, or to specific partner(s) in case they cannot be public.
* **If yes, for how long?** : Specify the time period the data can be accessible to the consortium, or to specific partner(s) in case they cannot be public.
* **Can the data be used for public dissemination** : Specify if the data can be used for public dissemination (without revealing the full content of the data, aggregated view)
* **Who owns the data?** : Identify the data owner
* **Legal issues** : Specify the confidentiality level of the dataset and the license under which the dataset could be opened and offered publicly.
## SecureIoT Datasets
<table>
<tr>
<th>
Ref. No
</th>
<th>
_0001_
</th> </tr>
<tr>
<td>
Title
</td>
<td>
Manufacturing data resulting from sensors, machines, IACS
</td> </tr>
<tr>
<td>
Version
</td>
<td>
1.0
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains or will contain different kinds of data generated
within manufacturing. These will be sensor data, sometimes aggregated for a
complete machine.
Moreover, data generated by IACS shall be considered in the use case.
The data may include application information, context information, status
information, traffic data and much more.
Details will be specified within the use case.
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
Application, context, performance, status, usage, alerts, etc.
</td> </tr>
<tr>
<td>
Dataset
availability
</td>
<td>
TBD
</td> </tr> </table>
This is the first version of the DMP deliverable and some of the dataset
information has not been determined yet. The missing fields will be completed
in the coming versions of the deliverable. The information provided below has
been collected in collaboration with WP3 and WP4.
### Multi-Vendor Industrie 4.0 Usage Scenarios Data
1.5.1.1 General information
<table>
<tr>
<th>
Directly observable device types
</th>
<th>
* IoT Gateways, e.g. FUJITSU Intelliedge
* IACS systems
</th> </tr>
<tr>
<td>
Directly observable software
</td>
<td>
* TBD in the use case. We consider several systems to be relevant:
* P@SSPORT factory virtualization
* SIEMENS Minsphere
* FUJITSU IoT-Platform
* FUJITSU Colmina Intelligent Dashboard
</td> </tr>
<tr>
<td>
Indirectly observable device
</td>
<td>
Manufacturing devices connected to the IACS or gateways. These may be
sensors, etc. The objective within the use case will be to use virtualized
control systems and sensors.
</td> </tr> </table>
<table>
<tr>
<th>
Future revisions anticipated
</th>
<th>
Yes
</th> </tr>
<tr>
<td>
Owner
</td>
<td>
Weidmüller, Phoenix, it's OWL
</td> </tr>
<tr>
<td>
Contact
Person
</td>
<td>
David Schubert ( [email protected]_ )
</td> </tr>
<tr>
<td>
Related Use
Cases
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Utility /
Potential
Use
</td>
<td>
TBD
</td> </tr> </table>
1.5.1.2 Environment / Context
<table>
<tr>
<td>
Indirectly observable software
</td>
<td>
</td> </tr>
<tr>
<td>
Architecture/Topolog y description and
communication protocols
</td>
<td>
_Figure showing where the monitoring probes are located (some uncertainty may remain)_
Probes may be placed on each level and within the vertical communications.
Moreover, probes should be placed at the IoT-Platform level
</td> </tr> </table>
<table>
<tr>
<th>
Dataset provided as data file(s)
</th>
<th>
_Yes/No_
TBD
</th>
<th>
</th> </tr>
<tr>
<td>
Remote
accessibility
</td>
<td>
Yes/No
</td>
<td>
Usually: No
</td> </tr>
<tr>
<td>
Protocol
</td>
<td>
TBD
</td> </tr> </table>
1.5.1.3 Data access
<table>
<tr>
<th>
</th>
<th>
Message format
</th>
<th>
TBD
</th> </tr>
<tr>
<th>
Pull/Push
</th>
<th>
TBD
</th> </tr>
<tr>
<th>
Provided interface
</th>
<th>
TBD
</th> </tr>
<tr>
<td>
If data is not yet accessible, how can they be retrieved?
</td>
<td>
Describe the architecture and where the probe can be deployed
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
</td>
<td>
Probe development requirements
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Usable software API on device
</td>
<td>
TBD
</td> </tr> </table>
<table>
<tr>
<th>
Data format
</th>
<th>
TBD in the use case
</th> </tr>
<tr>
<td>
Encryption
</td>
<td>
_Is the data encrypted? (explain)_
Yes, communication between all the components will rely on secure
communication protocols, i.e., HTTPS.
</td> </tr>
<tr>
<td>
Data format description
</td>
<td>
TBD in the use case
</td> </tr>
<tr>
<td>
For unusual format, tool to read it
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Dataset generation
</td>
<td>
Was the data monitored in a system with real users?
</td>
<td>
We consider virtualized systems with scenarios and no real users
</td> </tr> </table>
1.5.1.4 Data description
<table>
<tr>
<th>
If no, how the data
has been generated?
</th>
<th>
_Actions triggered /performed/simulated, how many of them, methodology_
</th> </tr>
<tr>
<td>
Attack
</td>
<td>
Does the dataset contain attacks?
</td>
<td>
In a first step we plan to provide normal operations data
Later on the virtualized plant(s) shall be exposed to attacks and the data
shall include attacks
</td> </tr>
<tr>
<td>
If yes, are the attacks labeled?
</td>
<td>
No
</td> </tr>
<tr>
<td>
If yes, what is the granularity of the labels?
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset statistics
</td>
<td>
TBD
</td>
<td>
</td> </tr>
<tr>
<td>
Sample of data
</td>
<td>
TBD
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Is the data open publicly?
</th>
<th>
No
</th> </tr>
<tr>
<td>
If no, is there a plan to make data open?
</td>
<td>
No
</td> </tr>
<tr>
<td>
If no, will the data be accessible to the consortium, or to specific
partner(s)?
</td>
<td>
Yes, whole consortium
</td> </tr>
<tr>
<td>
If yes, for how long?
</td>
<td>
End of project
</td> </tr>
<tr>
<td>
Can the data be used for public dissemination (without revealing the full
content of the data, aggregated view)
</td>
<td>
TBD
</td> </tr> </table>
1.5.1.5 Data restrictions
<table>
<tr>
<th>
Who owns the data?
</th>
<th>
The respective partners of use case scenario T6.2
</th> </tr>
<tr>
<td>
Legal issues
</td>
<td>
There may be several issues regarding personal data of either customers or
employees. Within the Industrie 4.0 use case we shall consider anonymization
of data for SecureIoT data collection.
Moreover, we will face M2M communications, and thus telecommunication data
will be a key part of the data collection.
Finally, the data may contain business secrets, e.g. process parameters.
</td> </tr> </table>
<table>
<tr>
<th>
Ref. No
</th>
<th>
_0002_
</th> </tr>
<tr>
<td>
Title
</td>
<td>
QTrobot
</td> </tr>
<tr>
<td>
Version
</td>
<td>
1.0
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset consists of traffic
* In QTrobot
* Between robot and its tablet GUI
* Between robot and CC2U of iSprint
* Between robot and internet
* Between tablet and its cloud backup server
</td> </tr> </table>
### IoT-Enabled Socially Assistive Robots Usage Scenarios Data
1.5.2.1 QTrobot
1.5.2.1.1 General information
<table>
<tr>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Type of data
</td>
<td>
* Raw sensory data (video stream, sound stream, robot’s joint angles)
* Perception data (recognized images, objects, faces, human gesture, speech, direction of voice)
* Application and actuation data (video, sound and gesture outputs of QT, application events, recognized activity, proposed activities )
* Robot and tablet config and performance (CPU, RAM, HDD and network bandwidth access and usage, network connection, running processes)
* User data (user profile, application history, user performance and progress data, user-built applications)
* Network traffic (packages)
</td> </tr>
<tr>
<td>
Dataset
availability
</td>
<td>
Mechanisms and Interfaces to capture and communicate the data to the
destination device are to be developed.
</td> </tr>
<tr>
<td>
Future revisions anticipated
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
LuxAI
</td> </tr>
<tr>
<td>
Contact
Person
</td>
<td>
Pouyan Ziafati ([email protected])
</td> </tr>
<tr>
<td>
Related Use
Cases
</td>
<td>
Social Assistive Robots
</td> </tr>
<tr>
<td>
Utility /
Potential
Use
</td>
<td>
Research and experimentation
Training & Education
</td> </tr> </table>
<table>
<tr>
<th>
Directly observable device types
</th>
<th>
_Sensor, robot, vehicle board, monitor device, edge node, gateway_
* Robot Gateway
* Tablet Gateway
</th> </tr>
<tr>
<td>
Directly observable software
</td>
<td>
Robot Operating System (ROS)
</td> </tr>
<tr>
<td>
Indirectly observable device
</td>
<td>
* 3D camera
* Microphone array
* Motor sensors
* Computer inside the robot
* Android tablet
* Router inside the robot
* Wi-fi inside the robot
</td> </tr>
<tr>
<td>
Indirectly observable software
</td>
<td>
Camera interface, microphone interface, motor interface, image recognition,
face recognition, object and gesture recognition, sound play, video play,
robot plan executor, gesture record and play
</td> </tr>
<tr>
<td>
Architecture/Topology description and communication protocols
</td>
<td>
Robot --- ROS (JSON API through Websocket server can be developed)
</td> </tr> </table>
1.5.2.1.2 Environment / Context
<table>
<tr>
<th>
Dataset provided as data file(s)
</th>
<th>
Yes
</th>
<th>
</th> </tr>
<tr>
<td>
Remote
accessibility
</td>
<td>
Yes/No
</td>
<td>
Yes (but means have to be developed)
</td> </tr>
<tr>
<td>
Protocol
</td>
<td>
ROS (or its Websocket server interface)
</td> </tr>
<tr>
<td>
Message format
</td>
<td>
_ROS messages (or JSON equivalent of ROS messages)_
</td> </tr>
<tr>
<td>
Pull/Push
</td>
<td>
_Pull, push_
</td> </tr>
<tr>
<td>
</td>
<td>
Provided interface
</td>
<td>
_ROS service/pub-sub interface + message description (or Websocket URI to be
developed)_
</td> </tr>
<tr>
<td>
If data is not yet accessible, how can they be retrieved?
</td>
<td>
Describe the architecture and where the probe can be deployed
</td>
<td>
_We use ROS to communicate between different pieces of software in the robot,
and to communicate between the robot and the tablet. ROS can be provided with
a websocket JSON-based interface which we can use to develop a probe to
access the robot. The other way around, however, would be to extend the
SecureIoT data capturing interface to support direct communication with
ROS._
</td> </tr>
<tr>
<td>
Probe development requirements
</td>
<td>
_See previous answer._
</td> </tr>
<tr>
<td>
Usable software API on device
</td>
<td>
_See previous answer._
</td> </tr> </table>
1.5.2.1.3 Data access
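The access path described above exposes ROS through a websocket JSON interface. As a rough illustration of what such a probe could look like, assuming a rosbridge-style server with placeholder host address and topic name:

```python
import asyncio
import json
import websockets  # third-party package: pip install websockets

async def capture(uri: str = "ws://qtrobot.local:9090", topic: str = "/joint_states"):
    # rosbridge-style servers exchange JSON envelopes carrying an "op" field.
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"op": "subscribe", "topic": topic}))
        for _ in range(10):  # sample a handful of messages
            envelope = json.loads(await ws.recv())
            print(envelope.get("topic"), envelope.get("msg"))

asyncio.run(capture())
```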
<table>
<tr>
<th>
Data format
</th>
<th>
_NetFlow, pcap, syslog, json (when an interface is used, the format of
embedded data is needed to be described)_
Network traffic (could be pcap for instance)
ROS Messages (proprietary format, or JSON equivalent)
</th> </tr>
<tr>
<td>
Encryption
</td>
<td>
Most of the data is not encrypted.
</td> </tr>
<tr>
<td>
Data format description
</td>
<td>
Full pcap file including payload
Each type of data has its own format.
</td> </tr>
<tr>
<td>
For unusual format, tool to read it
</td>
<td>
_ROS messages are simple data structures similar to C structs._
_http://wiki.ros.org/Messages_
</td> </tr>
<tr>
<td>
Dataset generation
</td>
<td>
Was the data monitored in a system with real users?
</td>
<td>
_May be possible_
</td> </tr>
<tr>
<td>
If no, how the data
has been generated?
</td>
<td>
_Data has not been generated_
</td> </tr>
<tr>
<td>
Attack
</td>
<td>
Does the dataset contain attacks?
</td>
<td>
No
</td> </tr>
<tr>
<td>
If yes, are the attacks labeled?
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
</td>
<td>
If yes, what is the granularity of the labels?
</td>
<td>
_Per packet, per flow, timeline of anomalies_
</td> </tr>
<tr>
<td>
Dataset statistics
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Sample of data
</td>
<td>
TBD
</td> </tr> </table>
1.5.2.1.4 Data description
1.5.2.1.5 Data restrictions
<table>
<tr>
<th>
Is the data open publicly?
</th>
<th>
_No_
</th> </tr>
<tr>
<td>
If no, is there a plan to make data open?
</td>
<td>
_Some parts can be made open_
</td> </tr>
<tr>
<td>
If no, will the data be accessible to the consortium, or to specific
partner(s)?
</td>
<td>
_For the most part yes; anonymization may be needed._
</td> </tr>
<tr>
<td>
If yes, for how long?
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Can the data be used for public dissemination (without revealing the full
content of the data, aggregated view)
</td>
<td>
_TBD_
</td> </tr>
<tr>
<td>
Who owns the data?
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Legal issues
</td>
<td>
_Flags:_
* _data may be “personal data”_
* _we plan to combine/merge the data with this other data source:_ ___________________
* _data may be “telecommunication metadata”_
* _data may be “telecommunication content”_
* _data is encrypted_
* _data may contain business secrets_
</td> </tr> </table>
1.5.2.2 CC2U
<table>
<tr>
<th>
Ref. No
</th>
<th>
_0003_
</th> </tr>
<tr>
<td>
Title
</td>
<td>
CC2U
</td> </tr>
<tr>
<td>
Version
</td>
<td>
1.0
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The Cloud Gateway of CC2U has different sets of software interfaces used for
the exchange of commands and data, primarily from and to the Remote Proxy,
which can be grouped into the following categories:
* Control and configuration interfaces (I1)
* Sensing data interfaces (I2)
* Notifications interface (I3)
* Actuator interfaces (I4)
</td> </tr>
<tr>
<td>
Extended
Description
</td>
<td>
Control and configuration Cloud Gateway interfaces (I1) are used for receiving
registration and point of contact information (version, status, etc.) from
remote proxy running in local home environments, synchronization of local and
cloud data, obtaining device configuration data from cloud and remote
configuration of local platform.
Sensing data interfaces (I2) are used for receiving all sensing data from home
environments and storing them using Data Manager. This includes user activity
data, environmental sensing data (temperature, humidity, luminance, gas
levels, movement, and presence), furniture sensing data, appliance sensing
data, speaker sensing data, visual sensing and vitals data.
The notification interface (I3) is used for exchanging notification messages
between local reasoners in local environments and the Notification Manager.
The actuator interface (I4) is used for control and sensing actuator commands
from cloud components towards the local home environment.
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
Application, context, performance, status, usage, alerts, etc.
</td> </tr> </table>
1.5.2.2.1 General information
<table>
<tr>
<th>
Directly observable device types
</th>
<th>
</th>
<th>
Wearable devices (e.g. Fitbit tracker)
Medical devices (e.g. NONIN SpO2, OMRON blood pressure)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Environmental sensors (temperature, humidity)
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
A/V sensing (e.g. cameras, KINECT)
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
QT Robot
</td> </tr>
<tr>
<td>
Directly observable software
</td>
<td>
</td>
<td>
CloudCare2U – Cloud Gateway
</td> </tr>
<tr>
<td>
Indirectly observable device
</td>
<td>
TBD
</td>
<td>
</td> </tr>
<tr>
<td>
Indirectly observable software
</td>
<td>
TBD
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Dataset
availability
</th>
<th>
TBD
</th> </tr>
<tr>
<td>
Future revisions anticipated
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
iSPRINT
</td> </tr>
<tr>
<td>
Contact
Person
</td>
<td>
Sofoklis Kyriazakos
</td> </tr>
<tr>
<td>
Related Use
Cases
</td>
<td>
Social Assistive Robots
</td> </tr>
<tr>
<td>
Utility /
Potential
Use
</td>
<td>
TBD
</td> </tr> </table>
1.5.2.2.2 Environment / Context
<table>
<tr>
<th>
Dataset provided as data file(s)
</th>
<th>
TBD
</th>
<th>
</th> </tr>
<tr>
<td>
Remote
accessibility
</td>
<td>
Yes/No
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Protocol
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Message format
</td>
<td>
_JSON_
</td> </tr>
<tr>
<td>
Pull/Push
</td>
<td>
_Pull, push_
</td> </tr>
<tr>
<td>
Provided interface
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
If data is not yet accessible, how
</td>
<td>
Describe the architecture and where
</td>
<td>
TBD
</td> </tr> </table>
Architecture/Topology description and communication protocols: _Figure
showing where the monitoring probes are located (some uncertainty may
remain)_
1.5.2.2.3 Data access
<table>
<tr>
<th>
Data format
</th>
<th>
TBD
</th> </tr>
<tr>
<td>
Encryption
</td>
<td>
Yes, communication between all the components will rely on secure
communication protocols, i.e., HTTPS.
</td> </tr>
<tr>
<td>
Data format description
</td>
<td>
TBD in the use case
</td> </tr>
<tr>
<td>
For unusual format, tool to read it
</td>
<td>
_-_
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset generation
</td>
<td>
Was the data monitored in a system with real users?
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
</td>
<td>
If no, how the data
has been generated?
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Attack
</td>
<td>
Does the dataset contain attacks?
</td>
<td>
No
</td> </tr>
<tr>
<td>
If yes, are the attacks labeled?
</td>
<td>
_Yes/No_
</td> </tr> </table>
<table>
<tr>
<th>
can they be retrieved?
</th>
<th>
the probe can be deployed
</th>
<th>
</th> </tr>
<tr>
<th>
Probe development requirements
</th>
<th>
TBD
</th> </tr>
<tr>
<th>
Usable software API on device
</th>
<th>
TBD
</th> </tr> </table>
1.5.2.2.4 Data description
<table>
<tr>
<th>
</th>
<th>
\-
</th> </tr>
<tr>
<th>
If yes, what is the granularity of the labels?
</th>
<th>
_-_
</th> </tr>
<tr>
<td>
Dataset statistics
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Sample of data
</td>
<td>
_**Url** _
_ <HOST_URI>/Notif?user={user_name}&&date= _
_{date} &time={time}&type={type}&title={title}&content= _
_{content} &prior={priority}&source={source} _
_**Parameters** _
_user – user name (e.g. “Bob”), date – Date of the trigger (e.g._
_"08-05-2014"), time – Time of the trigger (e.g. "12.30.21"), type – The
design type of the notification, in regards to its representation to the UI
(e.g. "two-buttons"), title- The title of the notification shown on the UI
(e.g. " Congratulations!"), content – The information content of the
notification shown on the UI (a JSON formatted information), prior – the
priority value for the given type of notification. This facilitates the
possibility to sort the notifications based on priority (left for future use),
source – the url address of the component that sends the notification._
</td> </tr> </table>
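Based on the sample above, the notification interface could be exercised by issuing an HTTP GET with the documented query parameters. A sketch, in which the host, the content payload and the source component are placeholders:

```python
import requests  # third-party package: pip install requests

HOST_URI = "https://cc2u.example.org"  # placeholder; the real host is deployment-specific

params = {
    "user": "Bob",
    "date": "08-05-2014",
    "time": "12.30.21",
    "type": "two-buttons",
    "title": "Congratulations!",
    "content": '{"text": "Daily goal reached"}',  # made-up JSON-formatted content
    "prior": 1,
    "source": f"{HOST_URI}/reasoner",  # hypothetical sending component
}
# requests URL-encodes the query string, matching the <HOST_URI>/Notif pattern above.
response = requests.get(f"{HOST_URI}/Notif", params=params)
print(response.status_code)
```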
<table>
<tr>
<th>
Is the data open publicly?
</th>
<th>
No
</th> </tr>
<tr>
<td>
If no, is there a plan to make data open?
</td>
<td>
No
</td> </tr>
<tr>
<td>
If no, will the data be accessible to the consortium, or to specific
partner(s)?
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
If yes, for how long?
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Can the data be used for public dissemination (without revealing the full
content of the data, aggregated view)
</td>
<td>
Yes, through anonymization
</td> </tr> </table>
1.5.2.2.5 Data restrictions
<table>
<tr>
<th>
Ref. No
</th>
<th>
_0004_
</th> </tr>
<tr>
<td>
Title
</td>
<td>
Connected car and autonomous driving data
</td> </tr>
<tr>
<td>
Version
</td>
<td>
1.0
</td> </tr>
<tr>
<td>
Description
</td>
<td>
This dataset contains or will contain different kinds of data related to
connected car and autonomous driving, e.g., application information, context
information, traffic data…
</td> </tr>
<tr>
<td>
Type of data
</td>
<td>
Application, context, performance, usage, alert.
</td> </tr>
<tr>
<td>
Dataset
availability
</td>
<td>
Application data is available; however, it is not yet formally written to a
log (e.g. JSON), so formally capturing this log and putting it into a dataset
file is still to be developed.
</td> </tr>
<tr>
<td>
Future revisions anticipated
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
Owner
</td>
<td>
IDIADA
</td> </tr>
<tr>
<td>
Contact
Person
</td>
<td>
David Evans
</td> </tr>
<tr>
<td>
Related Use
Cases
</td>
<td>
Connected and Autonomous Vehicles
</td> </tr>
<tr>
<td>
Utility /
Potential
Use
</td>
<td>
Research and experimentation
</td> </tr> </table>
<table>
<tr>
<th>
Who owns the data?
</th>
<th>
iSPRINT/LuxAI
</th> </tr>
<tr>
<td>
Legal issues
</td>
<td>
TBD
</td> </tr> </table>
### Connected Car and Autonomous Driving Usage Scenarios Data
1.5.3.1 General information
<table>
<tr>
<th>
Directly observable device types
</th>
<th>
IDIADA IDAPT platform
</th> </tr>
<tr>
<td>
Directly observable software
</td>
<td>
IoT FIWARE related components:
o IoT Agent (JSON)
o Orion Context Broker
</td> </tr>
<tr>
<td>
Indirectly observable device
</td>
<td>
Vehicle components connected to the IDAPT platform. For instance:
o Vehicle Speed
o Braking information
o Steering Wheel Angle
o GPS Heading
o GPS Speed
o Yaw_Rate
o …
</td> </tr>
<tr>
<td>
Indirectly observable software
</td>
<td>
</td> </tr>
<tr>
<td>
Architecture/Topology description and communication protocols
</td>
<td>
Vehicle components --- CAN bus --- IDAPT platform
IDAPT platform --- MQTT / TCP + HTTPS + REST (monitoring probe)--- FIWARE IoT
Agent
FIWARE IoT Agent --- TCP + HTTPS + REST (monitoring probe) --- FIWARE Context
Broker ---
</td> </tr> </table>
1.5.3.2 Environment / Context
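In the architecture above, context data ends up in the FIWARE Orion Context Broker, which exposes the NGSI v2 REST API. A minimal query sketch; the broker endpoint, the entity type and the attribute name are assumptions for illustration:

```python
import requests  # third-party package: pip install requests

ORION = "http://orion.example.org:1026"  # placeholder Context Broker endpoint

# NGSI v2: list entities of a given type ("Vehicle" and "speed" are assumed names).
response = requests.get(f"{ORION}/v2/entities", params={"type": "Vehicle"})
for entity in response.json():
    print(entity["id"], entity.get("speed", {}).get("value"))
```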
<table>
<tr>
<th>
Dataset provided as data file(s)
</th>
<th>
Yes
</th>
<th>
</th> </tr>
<tr>
<td>
Remote
accessibility
</td>
<td>
Yes/No
</td>
<td>
No
</td> </tr>
<tr>
<td>
Protocol
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
Message format
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
Pull/Push
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
</td>
<td>
Provided interface
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
If data is not yet accessible, how can they be retrieved?
</td>
<td>
Describe the architecture and where the probe can be deployed
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Probe development requirements
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Usable software API on device
</td>
<td>
TBD
</td> </tr> </table>
1.5.3.3 Data access
1.5.3.4 Data description
<table>
<tr>
<th>
Data format
</th>
<th>
Application data / Context data:
* _ROS is available at the moment, but this is more so for development purposes; JSON is to be implemented. However, we are quite flexible in relation to how the data is captured to a file._
* At Cloud level: FIWARE NGSI model ( _http://fiware.github.io/context.Orion/api/v2/latest/_ )
Traffic data:
* Pcap files
* Syslogs
</th> </tr>
<tr>
<td>
Encryption
</td>
<td>
Yes, communication between all the components will rely on secure
communication protocols, i.e., HTTPS.
</td> </tr>
<tr>
<td>
Data format description
</td>
<td>
_Syntax and semantics of data (very important for non-standard formats, e.g.
describe the columns of a csv file, or the structure and semantics of what
contains a JSON file)_
* Application data / Context data
* _ROS is available at the moment, but this is more so for development purposes; JSON is to be implemented. However, we are quite flexible in relation to how the data is captured to a file._
* At Cloud level: FIWARE NGSI model
( _http://fiware.github.io/context.Orion/api/v2/latest/_ )
* Traffic data
* Pcap files
* Syslogs
</td> </tr>
<tr>
<td>
For unusual format, tool to read it
</td>
<td>
TBD
</td> </tr> </table>
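For the traffic data, pcap files can be examined with standard tooling. A small sketch using scapy, assuming a hypothetical capture file name:

```python
from scapy.all import rdpcap  # third-party package: pip install scapy

packets = rdpcap("trip_capture.pcap")  # hypothetical capture file
print(f"{len(packets)} packets in capture")
print(packets[0].summary())  # one-line description of the first packet
```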
<table>
<tr>
<th>
Dataset generation
</th>
<th>
Was the data monitored in a system with real users?
</th>
<th>
We are analysing several options:
1. _To use synthetic data generated by a simulator tool
(_http://www.dlr.de/ts/en/desktopdefault.aspx/tabid-9883/16931_read-41000/_ )_
2. _To use real data: logs from vehicles and / or near-realtime streaming of
data._
</th> </tr>
<tr>
<td>
</td>
<td>
If no, how the data has been generated?
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Attack
</td>
<td>
Does the dataset contain attacks?
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
If yes, are the attacks labeled?
</td>
<td>
_-_
</td>
<td>
</td> </tr>
<tr>
<td>
If yes, what is
the
granularity of the labels?
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
Dataset statistics
</td>
<td>
TBD
</td>
<td>
</td> </tr>
<tr>
<td>
Sample of data
</td>
<td>
TBD
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Is the data open publicly?
</th>
<th>
No
</th> </tr> </table>
1.5.3.5 Data restrictions
<table>
<tr>
<th>
If no, is there a plan to make data open?
</th>
<th>
No
</th> </tr>
<tr>
<td>
If no, will the data be accessible to the consortium, or to specific
partner(s)?
</td>
<td>
Yes, whole consortium
</td> </tr>
<tr>
<td>
If yes, for how long?
</td>
<td>
End of project
</td> </tr>
<tr>
<td>
Can the data be used for public dissemination (without revealing the full
content of the data, aggregated view)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Who owns the data?
</td>
<td>
IDIADA / ATOS
</td> </tr>
<tr>
<td>
Legal issues
</td>
<td>
We have identified that there may be some legal issues regarding the
collected data, for instance, the GPS position of the vehicle (indirect
identification) or a vehicle identification number… The data could therefore
make it possible to single out a car driver.
</td> </tr> </table>
# Data Access and Sharing
Due to the nature of the data involved, some of the results that will be
generated by each project phase will be restricted to authorized users, while
other results will be publicly available. As is our commitment, data access
and sharing activities will be rigorously implemented in compliance with the
privacy and data collection rules and regulations, as they are applied
nationally and in the EU, as well as with the H2020 rules. In case end-user
testing is performed, SecureIoT users will be required to pre-register and
give consent before using the system. Then they will need to authenticate
themselves against a user database. If successful, the users will have roles
associated with them. These roles will determine the level of access that a
user will be given and what they will be permitted to do.
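Such role-based access could be realized with a simple role-to-permission mapping consulted after authentication. A minimal sketch; the role and permission names are illustrative, not the project's actual scheme:

```python
# Illustrative role-to-permission mapping; names are assumptions.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "public": {"read_published"},
    "consortium": {"read_published", "read_raw"},
    "admin": {"read_published", "read_raw", "manage_users"},
}

def is_allowed(user_roles: set[str], action: str) -> bool:
    # A user may hold several roles; any one of them can grant the action.
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(is_allowed({"consortium"}, "read_raw"))  # True
print(is_allowed({"public"}, "read_raw"))      # False
```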
As the raw data included in the data sources will be gathered from the closed
and controlled SecureIoT environment, collected measurements will be seen as
highly commercially-sensitive. Therefore, access to raw data can only take
place through the partners involved in the project. For the data analytic
models to function correctly, the data will have to be included into the
SecureIoT databases. The results of the IoT data collection and analysis will
be secured and all privacy concerns will be catered for during the design phase.
In the cases of trend analytics, anonymization methods will be applied as part
of the built-in cloud platform features.
Publications will be released and disseminated through the project
dissemination and exploitation channels to make external research and market
actors aware of the project as well as appropriate access to the data.
Within the project, our conference papers and journal publications will be
Green Open Access and stored in an appropriate repository, such as OpenAIRE
(European Commission, 2015), the Registry of Research Data Repositories
(German Research Foundation, 2015) or Zenodo (CERN Data Centre, 2015).
# Data Management Plan Checklist
At the end of the project, we will go through the following checklist to
ensure that we are meeting the criteria for having successfully implemented
an Open Access Data Management Plan. The required KPIs will be updated in
subsequent versions of this document. By adhering to the items below, we are
confident that the project will provide open access to the appropriate data
and software and thereby enable researchers to utilize the findings of this
project to further expand their knowledge, as well as provide the IoT
industry with the necessary tools to advance their business and processes.
1. Discoverable:
1. Are the relevant data that are to be made available, our project publications or any Open software that has been produced or used in the project, easily discoverable and readily located?
2. Have we identified these by means of a standard identification mechanism?
2. Accessible:
1. Are the data and associated software in the project accessible, where appropriate, and what are the modes of access, scope for usage of this data and what are the licensing frameworks, if any, associated with this access (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.)?
3. Useable beyond the original purpose for which it was collected:
1. Are the data and associated software, which are made available, useable by third parties even after the collection of the data?
2. Are the data safely stored in certified repositories for long term preservation and curation?
3. Are the data stored along with the minimum software, metadata and documentation to make them useful?
4. Interoperable to specific quality standards:
1. Are the data and associated software interoperable, allowing data exchange between researchers, institutions, organizations, countries, etc. (e.g. adhering to standards for data annotation, data exchange, compliant with available software applications, and allowing re-combinations with different datasets from different origins)?
# Conclusions
This deliverable has provided an initial framework for building the data
collection and sharing plan during the course of the SecureIoT project and
after the project has finished. This plan will be updated as the project
progresses, addressing issues such as dataset repository management and
hosting of datasets after the end of the project, also considering public
repositories. This deliverable is regarded as a living document which will be
updated incrementally as the project progresses. This version sets the
overall framework that will form the basis for two additional iterations at
M18 and M36, towards the overall delivery of a comprehensive document at the
end of the project.
In this version of the deliverable, we outlined the descriptions of the use
case related datasets, which are still in development, and addressed data
access aspects.
The upcoming revisions of this deliverable will focus, among other things, on
a fuller presentation of the datasets, a description of the SecureIoT data
models, and updates of the data access and sharing and data interoperability
priorities.
# Executive Summary
The WAI-Tools project work is defined by the following work packages:
* **WP1:** Development of High-Quality Authoritative Conformance Test Rules
* **WP2:** Deployment and Demonstration of Accurate Decision Support Tools
* **WP3:** Integration of Open Tools for Large-Scale Compliance Assessments
* **WP4:** Engaging, Involving, and Disseminating Results to Key Stakeholders
* **WP5:** Project Management, Administration, and Technical Coordination
Much of the project work builds on the test rules developed through Work
Package 1, which is the primary contribution of the project. Work Package 3
develops proof-of-concept tools and resources to support demonstrators of
real-life monitoring observatories in Portugal and Norway. These will be
developed through Work Package 2. The operation of these observatories is not
part of the project.
Given the nature of the project work, it does not collect or generate data in
the formal sense. This includes data about people, organisations, finances,
literature, or other entities, including clinical and experimental data. Most
importantly, the project does not collect or generate data that is considered
sensitive to privacy, security, or other ethical aspects. Yet the project does
generate resources, which can be considered data in a less formal sense.
Specifically, the testing rules and associated test cases can be considered as
a data set that is generated through the development activities of the
project.
This report describes the data management considerations and procedures
established in the project plans. This includes the development, publication,
and long-term sustainability of the project data.
These considerations and procedures are built directly into the project work
and financial plans, and ensure fully open data built on open standards from
the on-set of the project. That is, all data is open by default and published
using open licenses to ensure maximum implementation and reuse. This is
achieved by building on existing W3C development procedures and licensing,
which ensure open and royalty-free results. The W3C process and licensing is
widely recognised among the target audience.
# 2 Data Summary
The WAI-Tools Project develops several resources, most of which are typically
not considered data:
* Documentation of accessibility testing procedures
* Test cases for testing procedure implementations
* Specification for a format for results from testing
* Software implementation of the above resources
* Educational materials and other documentations
* Scientific publications including reviewed articles
All these outcomes of the project will be provided publicly using open
licensing. This may include:
* W3C Document License: _https://www.w3.org/Consortium/Legal/2015/doc-license_
* W3C Software and Document Notice and License: _https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document_
* W3C Community Contributor License Agreement: _https://www.w3.org/community/about/agreements/cla/_
* Other comparable open license ensuring royalty-free use
Scientific publications will be available under Open Access, typically Gold
Open Access unless there is specific justification to do otherwise. This would
be agreed upon with the Project Officer in advance.
For the purpose of this report, we do not consider the development of
specifications, software code, educational and outreach materials, and
scientific publications as data. We can consider the testing rules and the
associated test cases to be some form of ‘data’ that will be generated by the
project.
We expect to develop a total of 70 such testing rules over the 3-years
duration of the project. Each of the rules will, on average, have about 8 test
cases (at least 2 for ‘pass’ condition, 2 for ‘fail’ condition, and 2 for ‘not
applicable’ condition). This results in a data set of about 630 items (70 plus
560 items).
The testing rules and associated test cases are developed according to the W3C
ACT Rules Format 1.0 specification, which defines the format of these data
items:
* _https://www.w3.org/TR/act-rules-format/_
The testing rules and associated test cases are developed by project staff
through a W3C Community Group called Auto-WCAG. In the earlier stages of the
project some of the testing rules and associated test cases may reuse existing
materials. This may be testing rules that project partners or others have
provided publicly for reuse, or brought into W3C through the Contributor
License Agreement (CLA):
* _https://auto-wcag.github.io/auto-wcag/_
The target audience of these testing rules and associated test cases are
developers of accessibility testing tools and methodologies. Many of the key
vendors in this space are either participating in this effort directly, or
monitoring the development closely. The WAI-Tools Project includes activities
for further coordination, outreach, dissemination, and exploitation of these
primary project results.
In the process of developing the testing rules, selected samples of publicly
available websites will be used to validate the accuracy of the testing rules.
Results from these validations will not be shared or stored beyond the
involved project partners, and for the sole purpose of validation of the
rules.
# 3 FAIR Data
The following describes the findability, accessibility, interoperability, and
reusability of project data.
# 3.1 Findable
The testing rules and associated test cases are intended to become W3C
resources, integrated into the wealth of resources provided by the W3C Web
Accessibility Initiative (WAI):
* _https://www.w3.org/WAI/_
The exact final location of this data set on the W3C/WAI website is yet to be
determined. However, we expect that this data set will be incorporated into or
linked from key WAI resources, such as:
* Understanding WCAG 2.1: _https://www.w3.org/WAI/WCAG21/Understanding/_
* Techniques for WCAG 2.1: _https://www.w3.org/WAI/WCAG21/Techniques/_
* How to Meet WCAG 2.1: _https://www.w3.org/WAI/WCAG21/quickref/_
These resources are the primary and authoritative references for the target
audience of this data set, which will ensure maximum findability. Further
cross-linking may be considered later, if needed.
# 3.2 Accessible
The testing rules and associated test cases will be maintained on a W3C GitHub
repository:
• _https://w3c.github.io/wcag-act-rules/_
This repository will manage the testing rules and test cases developed
through the W3C Auto-WCAG Community Group that are considered by W3C to be
sufficiently mature and authoritative. That is, the repository will guarantee
a certain threshold of quality, to ensure reliability for users of the data.
It will also provide transparency, change control, and open accessibility of
the data. The data itself follows the publicly documented ACT Rules Format
1.0 specification, to ensure openness of the data.
# 3.3 Interoperable
Interoperability is a key criterion for the completion of W3C standards. By
following the W3C ACT Rules Format 1.0 specification for the development of
the testing rules and associated test cases, we ensure interoperability of
the data. The WAI-Tools Project includes specific deliverables to demonstrate
this by documenting open source tools built by independent organisations that
employ this data.
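One concrete form this interoperability takes is a shared result format: implementations of ACT-style rules commonly report outcomes as EARL assertions. A hedged sketch of such a result serialized as JSON-LD; the context URL and the checker and rule identifiers are placeholders:

```python
import json

assertion = {
    "@context": "https://www.w3.org/ns/earl",  # illustrative EARL context reference
    "@type": "Assertion",
    "assertedBy": "https://example.org/my-accessibility-checker",
    "subject": "https://example.org/page-under-test.html",
    "test": "https://example.org/rules/some-rule-id",  # placeholder rule identifier
    "result": {"@type": "TestResult", "outcome": "earl:passed"},
}
print(json.dumps(assertion, indent=2))
```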
# 3.4 Reuse
The testing rules and associated test cases developed by the project are
openly reusable by default. They are developed under the W3C Contributor
License Agreement (CLA), which allows royalty-free reuse by any entity,
commercially or non-commercially, including for the development of derivative
work. The finalised test rules and associated test cases will, in addition, be
published under the W3C Document License. This is more restrictive regarding
derivative work, to ensure interoperability. That is, developers who want to
refer to an authoritative data set endorsed by W3C can do so by referring to
the W3C repository using the W3C license. Developers who want to extend,
modify, or otherwise reuse the same data set can do so by referring to the
Auto-WCAG repository using the CLA license.
# 4 Resource Allocation
The costs for ensuring open data are built directly into the project work and
financial plans. Specifically, the work plan ensures development through the
open W3C community process. There are hardly any additional costs for this
work mode, yet the results are ensured to be open. The financial plan
includes a budget for Open Access should it be needed. However, the project
does not generate research data but rather scientific publications, which
will be equally provided through Open Access licensing.
# 5 Data Security
The testing rules and associated test cases developed by the project do not
include sensitive data. In turn, there is no need for particular security
measures in the project. The data is stored using one of the most widely known
developer platforms, GitHub. This has been recently acquired by Microsoft,
which may support the long-term availability of the platform. In addition, W3C
GitHub repositories are regularly backed-up in W3C space, to ensure long-term
preservation of the data. The data itself will continue to be maintained and
curated by responsible W3C groups beyond the project duration.
# 6 Ethical Aspects
There are no ethical aspects applicable to the project development of testing
rules and test cases.
# 7 Other Issues
There are no other issues applicable to the project development of testing
rules and test cases.
# Data Summary
The purpose of data collection and generation within the project is to
facilitate the development and evaluation of methods for multimodal analysis
of audiovisual content. A very large part of the data that is gathered and
used by the project is either already publicly available research data or
proprietary, strictly licensed audiovisual data from industrial partners. The
main data produced by the project is in the form of computer program code and
algorithms, trained machine learning models, metadata for media produced by
the ML systems, and processed AV content. In addition, interview, observation
and test data will be collected in user experiments.
The research and evaluation data the project will use is in three main
formats:
1. audiovisual digital data
2. general metadata, subtitles and captioning aligned to audiovisual content
3. specific metadata describing the content of the audiovisual material
The project will generate data of the following types:
1. annotated datasets of audiovisual data
2. program code and algorithms
3. trained models using neural networks and supervised machine learning algorithms
4. interview, observation and test data
In addition, there are intermediate data types used within the project that
are not necessarily preserved:
1. AV content processed to a format more suitable for further analysis (resampling, transcoding, etc.)
2. Intermediate data types for metadata and AV aligned data (subtitles, content descriptions, etc.)
3. Datasets resulting from program code development.
4. User experience data relevant only for intermediate purposes.
MeMAD uses mostly previously created audiovisual content for research
purposes; for testing, raw footage by project partners and external partners
can be made available. Several freely available research licensed datasets are
used by the different work packages for their own specific needs. Industrial
partners within the project will provide datasets consisting of their own
media for the project. External partners are invited to provide datasets for
use in training and testing the systems and methods developed by the MeMAD
consortium. A detailed description of the research datasets is provided in
Deliverable D1.2.
The research and evaluation data is obtained from two major sources:
1. state of the art research data corpora that have been collected
2. publication-quality and published media from industrial and external partners
State-of-the-art research corpora are obtained by each partner and work
package individually according to their own research needs. A list of the
used datasets is kept centrally. Publication-quality datasets are obtained
from industrial partners. Additional datasets are made available by some
external partners. The deliverable 1.2 Collection of Annotated Video Data
reports these datasets in detail. Within the project, a summary of the
datasets is kept centrally, and the partners/work packages (WPs) are invited
to mark the datasets they are using.
The size of the research and evaluation data sets is large. Based on the data
sets defined during the first quarter of the project, current estimates are
that the largest research-oriented datasets are tens of terabytes in size.
The data set would be of immense use to any other actors working with
automatic analyses of audiovisual data, including general AI research, media
studies, translation studies, etc., as well as industrial actors developing
methods of media content management.
# FAIR data
Data management in MeMAD is guided by the set of guiding principles labelled
FAIR. The purpose of these principles is to make data Findable, Accessible,
Interoperable and Reusable.
In order to be findable according to these principles, the research data has
to be described using a rich set of metadata. This metadata must then be
assigned a unique, specified identifier, which will be registered in an
indexed or searchable resource.
According to the accessibility principle, this set of metadata has to be
accessible using a standardized communications protocol that is free and
universally implementable, and that allows for authentication and
authorization procedures when needed. The principle also dictates that this
metadata has to remain accessible through these means even when the dataset
itself is not or is no longer available.
The interoperability principle dictates that the metadata must use a formal
and accessible language for knowledge representation that is at the same time
also shared and broadly applicable. The vocabularies used in describing the
data should also follow FAIR principles, and include qualified references to
other metadata.
In order to further the re-usability of the data, the FAIR principles dictate
that the metadata should be composed of a plurality of accurate and relevant
attributes that are associated with their provenance and follow domain-
specific standards. The metadata must be accompanied with a clear and
accessible license for the use of the data.
It is understood that data management as currently practiced in MeMAD does
not fully conform to the FAIR principles. This document describes the
currently adopted practices within the project, in order to facilitate the
integration of practices during the successive iterations of data management
practices. The aim of MeMAD is to create an integrated set of data management
practices during the project, and the FAIR principles will be used to guide
the process of data management practice development.
## Making data findable, including provisions for metadata
In the first phases of the project, no overarching naming scheme is used. For the legacy datasets produced by the project as deliverable D1.7 and aimed at wider dissemination after the end of the project, a naming scheme will be devised.
The specific naming schemes to be used in the preparation of the legacy datasets will be decided after M24 of the project, when preparations for the collection of legacy data are scheduled to begin.
Currently each WP and partner uses its own naming schemes:
### AALTO
Aalto University provides data of different types, including automatically
generated annotations for audiovisual data, and trained machine learning
models. In the initial phases of the project the data will be named according
to the internal conventions of the research groups. During the project we aim
to adopt the best practices decided within the consortium.
### UH
Metadata of the user data ( interview, observation and test data) produced
during the project will be made findable through FIN-CLARIN, a Finnish data
repository which part of the international CLARIN consortium. Describing and
naming the data will occur in compliance with the FIN-CLARIN guidelines.
### EURECOM
EURECOM also provides data of several types, including: ontologies and vocabularies that normalize the meaning of terms useful for describing audiovisual content; annotations resulting from automatic transformation of legacy metadata or from information extraction processes run on the various modalities of the audiovisual content; and trained machine learning models.
Most of the annotation data will be represented in RDF, a graph-based model standardized by the W3C, and will follow the linked data principles. This means that each node and edge in the graph is identified by a dereferenceable URI. We plan to adopt the base URI < _http://data.memad.eu/_ > when defining the scheme for naming those objects.
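As an illustration of this linked data approach, the following minimal sketch (Python with the rdflib library; not project code) mints a node under the planned base URI and attaches a couple of Dublin Core properties. The path segment, the EBU Core class and all property values are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Base URI planned by the project for naming annotation objects.
BASE = "http://data.memad.eu/"
# EBU Core namespace; typing the node as ebucore:MediaResource is an assumption.
EBUCORE = Namespace("http://www.ebu.ch/metadata/ontologies/ebucore/ebucore#")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ebucore", EBUCORE)

# A hypothetical programme node; every node gets a dereferenceable URI.
programme = URIRef(BASE + "programme/0001")
g.add((programme, RDF.type, EBUCORE.MediaResource))
g.add((programme, DCTERMS.title, Literal("Evening news", lang="en")))
g.add((programme, DCTERMS.creator, Literal("Yle")))

print(g.serialize(format="turtle"))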
### SURREY
The University of Surrey advises researchers on how to make their data
findable. This includes, for example, advice on
* Creating data statements ensuring that data is clearly labelled and described with regard to the terms on which the data may be accessed, any access or licensing conditions/constraints, and legal or ethical reasons why data cannot be made available;
* Applying a licence to research data;
* Depositing research data into publicly accessible data repositories to enable researchers to make their data findable;
* Documenting data, including embedded metadata, to ensure other researchers can access, understand and reuse the data.
The Surrey naming convention for the audiovisual data used in the MeMAD project will be along the following lines:
[Surrey]_[MeMAD]_[Datasource]_[Genre]_[Dataset]_[version]
An example being: Surrey_MeMAD_BBC_Drama_EastEnders_v.1
The University also creates an official, publicly discoverable metadata record of where the data is held, such as in an external repository. Information regarding suitable places of deposit is kept up to date on an intranet site of the University of Surrey.
### YLE
Yle-provided datasets are named as follows:
[Yle]_[project]_[DatasetID]_[DatasetName]_[DatasetVersion]
* Yle = "Yle"; the name of the company
* project = "MeMAD"; the name of the research project / context
* DatasetID = running number identifying the dataset (three digits, starting from 001)
* DatasetName = human-readable name to help identification; no spaces, special characters or underscores
* DatasetVersion = number describing the changes in the content
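As an illustration, the following sketch (not an official Yle tool) validates and splits names that follow this convention; since the exact version syntax is not specified above, a plain integer prefixed with "v" is assumed.

```python
import re

# Sketch of a validator/parser for the Yle convention above; the exact
# version syntax is not specified, so "v" + integer is assumed here.
YLE_NAME = re.compile(
    r"Yle_MeMAD_"                        # fixed company and project parts
    r"(?P<dataset_id>\d{3})_"            # three-digit running number, from 001
    r"(?P<dataset_name>[A-Za-z0-9-]+)_"  # no spaces, underscores or special characters
    r"v(?P<version>\d+)"                 # assumed version format
)

def parse_yle_dataset_name(name: str) -> dict:
    """Return the fields of a Yle dataset name, or raise ValueError."""
    match = YLE_NAME.fullmatch(name)
    if match is None:
        raise ValueError(f"not a valid Yle dataset name: {name!r}")
    return match.groupdict()

print(parse_yle_dataset_name("Yle_MeMAD_001_NewsArchive_v1"))
# {'dataset_id': '001', 'dataset_name': 'NewsArchive', 'version': '1'}
```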
Each dataset produced by Yle will include a set of metadata describing the
dataset in RDF/XML format using DCMI Terms.
Yle datasets contain AV media and metadata describing it. The metadata is provided as XML files, and mapping information between media and metadata is included in the metadata.
### Limecraft
Limecraft does not generate or bring into the project original audiovisual media files or previously available metadata; those are delivered by other partners in the consortium, who act as users of the Limecraft platform. In that case, we reuse the names originally employed to identify the original media. Upon exporting this media from the platform, the same naming is reused.
Any metadata generated by the project’s deliverables or by internal components of the Limecraft platform will be stored in an optimized database format not suitable for direct external use. However, all of this metadata will be accessible through the platform’s API using the original media’s names to identify it. When using scripting tools to perform these exports, Limecraft will ensure that the naming conventions used for the original media also form part of the names of the derived metadata, e.g.,
“<original_clipname>_transcript_<language>.json” or
“<original_clipname>_ner.xml”.
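The following sketch illustrates how such derived-metadata names can be built from the original clip name; the "kind" values and extensions shown are illustrative only, not an exhaustive list.

```python
from typing import Optional

# Sketch of building derived-metadata file names that embed the original
# media name; the "kind" values and extensions are illustrative only.
def derived_metadata_name(original_clipname: str, kind: str,
                          language: Optional[str] = None,
                          ext: str = "json") -> str:
    parts = [original_clipname, kind]
    if language is not None:
        parts.append(language)
    return "_".join(parts) + "." + ext

print(derived_metadata_name("clip0042", "transcript", "en"))  # clip0042_transcript_en.json
print(derived_metadata_name("clip0042", "ner", ext="xml"))    # clip0042_ner.xml
```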
The metadata generated and stored by Limecraft systems will be made available
according to the exchange formats defined by WP6 (cf., D6.1, D6.4 and D6.7).
### Lingsoft
Lingsoft will follow the format and naming conventions of the media production industry, along with the best practices decided within the consortium.
### INA
INA provides media and metadata following its internal conventions.
Media files encode one hour of TV or radio stream and follow the naming scheme <channel-id>-<yyyymmdd>-<hhmmhhmm>, where yyyymmdd gives the broadcast day and hhmmhhmm gives the start and end hours. For instance, FCR_20140519_15001600.mp3 encodes the radio stream of the “France Culture” radio station on 19 May 2014, from 15:00 to 16:00.
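For illustration, a small parser for this naming scheme might look as follows; it accepts both the hyphen-separated pattern and the underscore-separated form seen in the example above.

```python
import re
from datetime import datetime

# Sketch of a parser for INA media file names; both the hyphen-separated
# pattern and the underscore-separated form of the example are accepted.
INA_NAME = re.compile(
    r"(?P<channel>[A-Za-z0-9]+)[-_](?P<date>\d{8})[-_]"
    r"(?P<start>\d{4})(?P<end>\d{4})"
)

def parse_ina_media_name(filename: str) -> dict:
    """Split an INA media file name into channel, broadcast day and hours."""
    m = INA_NAME.match(filename)
    if m is None:
        raise ValueError(f"unrecognised INA media name: {filename!r}")
    return {
        "channel": m.group("channel"),
        "date": datetime.strptime(m.group("date"), "%Y%m%d").date(),
        "start": m.group("start")[:2] + ":" + m.group("start")[2:],
        "end": m.group("end")[:2] + ":" + m.group("end")[2:],
    }

print(parse_ina_media_name("FCR_20140519_15001600.mp3"))
# {'channel': 'FCR', 'date': datetime.date(2014, 5, 19), 'start': '15:00', 'end': '16:00'}
```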
Metadata are provided as CSV files with various fields, such as: identifier of the program, channel, start and end times, program title, summary, descriptors, credits and themes. Mapping information between media and metadata is provided separately as CSV files.
#### ***
The practices relating to research metadata creation will be discussed in the further stages of the project. No obvious choice of a common metadata standard exists, and in the first stages of the project the data management practices are largely tied to individual Work Packages and their work. In the later stages, the requirements of the common prototypes will provide the framework within which the data must be managed, and the deliverables D6.1, D6.4 and D6.7 (Specifications Data Interchange Format, v. 1 to 3) will provide the guidelines on how to document the data.
## Making data openly accessible
The data used and produced within the project can be divided into five groups according to differences in licensing and re-usability:
1. Research-oriented data obtained from public repositories
2. Research and evaluation data obtained from industrial partners
3. Annotated media data produced from groups 1 and 2 during the project
4. Algorithms and program code produced by academic research groups
5. Proprietary technologies developed by industrial partners.
Of these data types, the data in groups 3 “Annotated media” and 4 “Algorithms
and program code produced by academic research groups” is the easiest to open
for public reuse and will be made available as widely as possible.
Data in group 1 “Research-oriented data obtained from public repositories”
often comes with a licence that does not allow re-distribution even though use
for research purposes
is free; this data is already available for research purposes, and therefore,
a redistribution within this project is not even desirable.
Data in group 2 “Research and evaluation data obtained from industrial partners” is typically published media data which has strict licences concerning re-use and distribution, for example, TV shows produced by broadcasting companies. This group also includes the user data collected during prototype testing. An open access publication of this kind of media is at best prohibitively expensive and at worst legally impossible, and in the context of MeMAD the aim is not to make this data set publicly available to parties outside the project. Possibilities to re-license these datasets on terms equal to the ones used by MeMAD are pursued by default.
Data in group 5 “proprietary technologies developed by industrial partners” concerns tools and methods that the industrial partners contribute to the research project in order to facilitate and evaluate certain phases of the research. They reflect a considerable economic investment on the part of the industrial partners and are aimed at developing further technologies and solutions with commercial purposes, and are thus not suitable for open distribution.
Project partners currently have different settings regarding the research data, and these are described below.
### AALTO and EURECOM
Both Aalto University and EURECOM have an Open Access Policy and strive to publish everything in as open a way as possible. The same principle also applies to data and source code produced by these parties.
### UH
The University of Helsinki / WP4 will deliver reports on multimodal, multilingual, and discourse-aware machine translation. Any computational models or software developed in this process will be made available through freely accessible platforms like GitHub, keeping everything as open as possible, as often as possible.
### SURREY
The University of Surrey has a general Open Access Policy and aims to publish
all research outputs as openly as possible. For example, Surrey publications
and conference presentations will be deposited in the University’s repository
and other suitable
repositories. However, any audiovisual datasets used in MeMAD are unlikely to
be open access due to licensing restrictions, although the annotations minus
video data can be open access.
### YLE
Yle provides the project with a selection of AV material and related metadata from its broadcasting media archives. The AV material, altogether ca. 500 hours, consists of in-house produced TV programs. However, the rights of Yle are limited to typical business use such as broadcasting, and specifically do not include open distribution. A license agreement with the national copyright holders’ organization is being developed, which will allow Yle archive material to be used freely within the MeMAD project and distributed to researchers for project purposes. Based on this license, open access distribution of this media dataset is not possible, but the license agreement takes into account the need to make the project data FAIR.
The selection of programming metadata consists of a single month’s TV programme metadata. This includes the times of transmission, content descriptions, classifications and the producing personnel. This data is not limited by copyright, but as the data has originated from in-house production processes for a specific use, its opening may be limited by issues related to e.g. personal or journalistic data. The Yle metadata set will be included in the project legacy open access, if no limitations on doing so are identified during the project.
### Limecraft
Concerning data of group 3, Limecraft will share the output from automatic
analysis generated during the MeMAD project if sharing this is not prohibited
due to business needs or the copyright restrictions of the original media they
are based on.
Concerning data of group 5, most of the technologies Limecraft develops as part of MeMAD will not be made openly available by default. On the other hand, Limecraft will evaluate the open distribution of components developed during the project if they form an extension to a sizable existing open source component, or in cases where the open distribution of a component makes sense economically, e.g., to reinforce the commercial ecosystem that Limecraft intends to build around MeMAD technologies.
### Lingsoft
Lingsoft will share the output from automatic analysis generated during the
MeMAD project if sharing this is not prohibited due to business needs or the
copyright restrictions of the original media they are based on.
### INA
Since 1995, INA has been the legal depository of French television and radio.
Legal deposit is an exception to copyright and INA has no intellectual
property rights over the content deposited. The cataloging data (title,
broadcast date, producer, header, etc.) are accessible for free, in accordance
with the rules in force, by a search engine located on the site
_http://inatheque.ina.fr_ . INA also markets a collection mainly made of
content produced by public television and radio stations, for which INA holds
the production rights. INA thus offers broadcasters and producers excerpts and
full programs, and pays back a contribution to the rights holders.
To promote research, INA provides, for strictly research purposes (academic or industrial), various collections available on accreditation through the INA Dataset web site ( _http://dataset.ina.fr_ ). INA proposes to MeMAD’s partners, on the conditions of use described on the INA Dataset web site, a specific corpus of television and radio programs related to the European elections in 2014.
INA also offers an open data collection of metadata on the thematic classification of the reports broadcast on the evening news of six channels (TF1, France 2, France 3, Canal +, Arte, M6) for the period January 2005 – June 2015, available at _https://www.data.gouv.fr/fr/organizations/institut-national-de-laudiovisuel/_ .
##### ***
During the early phases of the project, each project WP is responsible for its own data collection and storage. Partners providing research datasets will distribute the data using their own services. A central repository for all created research data is planned for the legacy dataset. It has not yet been decided whether this repository will be based on one of the research partners’ own repository services, or whether some kind of public repository service is to be used. The final depository for research data remains to be discussed in the later stages of the project, mainly in project task T1.3 during M30–36.
The program data (“code”) will be stored as a git repository, and can thus be accessed both via the web interface to the repository and with git directly. Documentation for the Git system is freely available on the internet, and the use of the program is discussed on several open forums worldwide. Program code used to analyse and process the datasets, which is based on algorithms and techniques discussed and presented in scientific publications, is open source by default, and the released data sets will contain information on the relevant program code for their use. However, in the case of products intended for commercialization by the industrial partners, release of the program code is not possible by default.
Research and evaluation data is distributed via suitable tools. As most of the
previously prepared research datasets are available either as open access or
via specific agreements, the partners using them acquire the data directly
from the providers. Regarding research data from MeMAD industrial partners
(Yle, INA), the partners have their own systems for distributing large
datasets. INA data is available on the INA FTP server, and the Yle data will be distributed via a high-speed file transfer service suitable for distributing large datasets.
The prototype applications developed during the project’s first year will have specific needs for data transfer and distribution; these will be addressed and discussed during the first installation phase in M6–M12.
Technical solutions for distributing the legacy datasets will depend on the
repository chosen for the legacy dataset deposition, and will not be the main
concern of this project; these matters will be discussed during the relevant
project task in M30-M36.
Project legacy datasets that contain neither licenced nor sensitive information will by default be open for access to all interested parties, and therefore no restrictions will be imposed on their use. This does not apply to the proprietary media data provided by industrial partners. Whether it will be possible to have these as part of any kind of accessible legacy dataset is still an issue that needs to be discussed within the partners’ own organizations as well as with the relevant copyright representatives. Where some kind of restricted distribution is deemed possible, access will most likely be granted only on separate request to the parties holding the rights to the data, and will include the requirement of agreeing to terms of use for the data.
No need for a specific data access committee within the project is envisaged. The research data provided, while under a restrictive research licence, contains no sensitive information on persons or institutions.
Specific licensing issues will be addressed in combination with the legacy dataset creation in Task T1.3.
## Making data interoperable
One of the main goals of the project is to create a set of interoperable research and evaluation data. The main goal of the first six months of the project is the creation of common interfaces and services allowing the interoperation of data and tools among the research teams and data providers working in different countries. In practice this is rather straightforward, as the data is available in well-known and accessible formats.
In general, known best practices will be followed. As much as possible of the produced and used data is to be stored in formats that are well known and preferably open; structured text formats are preferred when suitable.
Standards definition is an important part of the first months of the project. As the first project deliverable (D6.1), a set of standards describing the formats for exchanging data is presented. This work will be continued in further versions of the specification (D6.4, D6.7). These deliverables concern mostly the interoperability of the prototypes and frameworks within the project, which are proprietary technologies developed by the project partners.
Research data used within the project is easily usable, as AV material is delivered in well-known video formats, like MP4 and WMV, and metadata is distributed in structured text formats, like XML or JSON, which do not require proprietary technologies.
The legacy datasets will be described using a relevant metadata scheme, like DCMI. In case it is necessary to create project-specific vocabularies/ontologies, mappings to commonly used ontologies can be provided.
## Increase data re-use (through clarifying licences)
Data collected specifically for the project by its industrial partners as proprietary datasets is strictly licenced, and in many cases the partners do not hold all the rights for the data or media. Therefore, it is highly difficult to license these datasets for open-ended further use, especially under any kind of open access license. Copyright societies granting licenses typically wish to limit the duration and scope of licenses in unambiguous terms, which does not favour the open-ended licenses that would be optimal for data re-use. The current approach is to acquire licenses that are as open as possible, and to include in the agreement negotiations the idea of, and mechanisms for, other parties licensing the same dataset for similar purposes in the future.
In cases where access to parts of proprietary datasets can be extended as part of the project legacy dataset, their further use will most likely be limited to research purposes because of the business interests and IPR attached to this data and media.
Interviews and user experience studies conducted in connection with the MeMAD prototypes may contain aspects which describe internal processes at the industrial partners. Opening this data for wider dissemination may result in disclosing information of commercial interest, which may preclude these datasets from open distribution.
Data produced by the project itself can and will be open for re-use in
accordance with the commercialization interests of the project industrial
partners; this will take place either during or after the end of the project.
Specific arrangements for peer review processes can and will be arranged when
necessary.
# Allocation of resources
As research data will be made FAIR partially as part of other project work,
exact total costs are hard to calculate. Many of the datasets used already
carry rich metadata, are already searchable and indexed, are accessible and
presented in broadly applicable means and forms, are associated with their
provenance and meet domain-relevant community standards.
Explicit costs for increasing FAIRness of the data are related at least to
acquiring licenses for proprietary datasets in the form of license fees, but
also in these cases part of the costs come from work associated with drafting
license agreements and promoting FAIR principles among data and media rights
holders and their representatives.
Direct license fee costs will be covered from the Work Package 1 budget. Work hours dedicated to license negotiations and data preparation are covered from each partner’s personnel budget respectively, as each has allocated work months to Work Package 1. It is yet to be decided how costs will be covered in cases where they benefit only parts of the consortium.
Each consortium partner has appointed a data contact person, and the overall
responsibilities concerning data management are organized through work done in
Work Package 1 dedicated to data topics.
Regarding the potential costs related to the long-term preservation of
research data, these will be discussed in relation to the legacy dataset
formation during the last year of the project.
# Data security
In the first stages of the project, each WP or partner storing data has its own secure methods for storing data. Data is transferred using secure cloud solutions, secure transfers over the internet, or, in the case of large datasets, specific secure download services or even physical transportation of the data on external media.
### AALTO
All data collected and processed by Aalto University will be stored on internal network storage managed by Aalto University, or by CSC, a non-profit organization co-owned by the Finnish state and the Finnish universities. All data transfers are done using encrypted secure connections, and access to the files is restricted to project personnel.
### UH
The University of Helsinki stores the sensitive user data it collects during the project on internal/local network storage owned and managed by the University. This storage is secured and protected, and access to it is restricted. If required, data sharing will take place using a sharing and downloading service specifically designed for transferring protected and non-public datasets securely over the internet.
### EURECOM
All data collected and processed by EURECOM is stored on internal network storage managed by the EURECOM IT department. All data transfers are done using encrypted secure connections, and access to the files is restricted to project personnel. EURECOM’s servers are locked in a dedicated room with restricted badge access, and the EURECOM building itself is secured around the clock within the campus.
### SURREY
The University of Surrey provides mechanisms and services for storage, backup,
registration and retention of research data during a research project and
after its completion as part of the University’s research data management
policy. Data collected
from users are anonymised and named under specific codes, which are also used for any annotations and files storing the data coding for analysis. These are stored separately from other datasets. All non-electronic data are kept in locked cabinets or drawers when not in use. Electronic data are stored on an internal network that is managed at Faculty level. Network access is secured through password systems, file system access control, system monitoring and auditing, firewalls, intrusion detection, centrally managed anti-virus and anti-spyware software, regular software patching, and a dedicated IT support team overseeing all IT issues, including data security and network security. All full-time and associate university staff are advised of data protection policies when they start working at the university. Research staff will normally have undergone research training (e.g. at PhD stage), which includes familiarisation with the UK research council code of conduct and the major principles of data protection.
### YLE
Yle data is stored on an internal network share, the same service used for other company data, managed by the Yle IT department. This storage is secured and protected, and access to it is restricted. Data delivery will take place using a sharing and downloading service specifically designed for transferring large datasets securely over the internet. The data is delivered via personal download links, which can be requested from Yle when needed.
### Limecraft
Limecraft stores the project data either on storage in its internal network, or as part of the Limecraft Flow online platform infrastructure. For both environments, Limecraft follows the ISO/IEC 27001 guidelines for best practices in securing data. Limecraft is also a participant in the UK Digital Production Partnership and its “Committed to Security Programme” 1 .
* Data stored in the internal Limecraft network is not accessible from the internet, except through secured and encrypted VPN connections. Access to this network is strictly controlled to only employees and storage systems require user authentication for access to data.
* Data stored as part of the Limecraft Flow infrastructure is hosted in data centers within the EU, and all conform to the ISO/IEC 27001 standard for data security. In addition to infrastructure security provided by Limecraft’s data center partners
(physical access controls, network access limitations), Limecraft’s
application platform also enforces internal firewalling and is only accessible
for administration using dedicated per-environment SSH keys.
Any exchange of data is subject to user authentication and subsequent authorization (either from Limecraft employees, which requires special access rights, or from clients whose access is strictly confined to the data from their own organisations). Additionally, any exchanges occur exclusively over encrypted data connections.
### Lingsoft
Lingsoft stores the data it collects during the project on internal network storage owned and managed by Lingsoft or by third-party data management providers within the European Union. All storage is secured and protected, and access to it is restricted. If required, data storage and management can also be restricted to servers owned and managed by Lingsoft. If required, data sharing will take place using a sharing and downloading service specifically designed for transferring protected and non-public datasets securely over the internet.
### INA
The INA corpus is made available to the MeMAD project partners via a secure FTP server hosted at INA (specific port, implicit encryption over SSL). Each partner has been provided with a specific login.
#### ***
The long-term preservation of the data that is opened for further use is still an open issue. Project deliverable D1.7 is the legacy dataset resulting from the project, and our current aim is to store it in a repository that will be responsible for the long-term storage of the data. Deliverable D1.7 is due in month 36 of the project, and plans regarding it will be specified in the next versions of the DMP. Media datasets provided by Yle and INA are parts of their archive collections, and will be preserved and curated through their core business of media archiving.
# Ethical aspects
Part of the research data may contain personal information, and it will be handled following guidelines and regulations such as the GDPR. A Data Contact will be nominated, and a contact point will be set up to answer queries and requests on personal data related issues.
Metadata provided by industry partners may have issues related to the
journalistic nature of the original datasets. Some of these datasets, such as
the metadata provided by Yle, have been designed and intended for in-house
production use of a broadcaster, and opening this data to outside users may
result in needs to protect sensitive or confidential information stored within
the data. These issues are resolved by removing and/or overwriting sensitive
and confidential information in the research data set before delivering it to
the project.
The user data (interview, observation and test data) collected during the
project from experiments and authentic workplace interactions between human
beings are sensitive data and will be protected and handled with proper care
and measures (see MeMAD DoA, Chapter 5).
# Other issues
As all MeMAD partners are established institutions, often with several decades
of practices in data management, there are procedures in place, which play an
important role in the data management practices, especially in the first
stages of the project. These partner-specific issues have been described above
in relevant sections of this document.
# Introduction
This document describes the current status of the MeMAD project’s data
management, and will provide the basis of further work on developing common
data management practices during and after the project. Development of this
plan is an iterative process and will be continued throughout the project. The
last version of the Data Management Plan is due in M36 of the project, as
deliverable 1.6.
The main changes compared to the previous version of the Data Management Plan
are the following:
* Chapter 2: FAIR Data
* Individual project partner descriptions have been replaced with consolidated project-based descriptions.
* Practices defined during the first half of the project have been described, references to metadata and data repositories have been added.
* Chapter 4: Data security
* Focus has been moved from individual partners to the project platforms.
Chapters 1, 3 and 5 contain no major changes.
These changes aim to incorporate the better understanding about the project’s
needs for the data management and to address the feedback from the project
reviewers given in February 2019.
# Data Summary
The purpose of data collection and generation within the project is to
facilitate the development and evaluation of methods for multimodal analysis
of audiovisual content. A very large part of the data that is gathered and
used by the project is either already publicly available research data or
strictly licensed audiovisual data from industry partners. The main data
produced by the project is in the form of computer program code and
algorithms, trained machine learning models, metadata for media produced by
the ML systems, and processed AV content. In addition, interview, observation
and test data will be collected in user studies and experiments.
The research and evaluation data the project will use is in three main
formats:
1. audiovisual digital data
2. general metadata, subtitles and captioning aligned to audiovisual content
3. specific metadata describing the content of the audiovisual material
The project will generate data of the following type:
1. annotated datasets of audiovisual data
2. program code and algorithms
3. trained models using neural networks and machine learning algorithms
4. survey, interview, observation and test data
In addition, there are intermediate data types used within the project that
are not necessarily preserved:
1. AV content processed to a format more suitable for further analysis (resampling, transcoding, etc.)
2. intermediate data types for metadata and AV aligned data (subtitles, content descriptions, etc.)
3. datasets resulting from program code development.
4. user experience data relevant only for intermediate purposes.
MeMAD uses mostly previously created audiovisual content for research
purposes. For testing and development purposes, project partners and external
partners provide additional audiovisual content. Several freely available
research licensed datasets are used by the different work packages for their
own specific needs. Industry partners within the project will provide datasets
consisting of their own media for the project. External partners are invited
to provide datasets for use in training and testing the systems and methods
developed by the MeMAD consortium. A detailed description of the research
datasets is provided in the deliverable D1.2.
The research and evaluation data is obtained from two major sources:
1. state of the art research data corpora that have been collected
2. published or archived media from industry partners and external partners
State of the art research corpora are obtained by each partner and work
package individually, according to their own research needs. Additional
datasets are provided by project partners, mainly Yle and INA. The deliverable
D1.2 Collection of Annotated Video Data describes these datasets in detail.
Within the project, a summary of the datasets is kept centrally, and the
partners/work packages (WP) are invited to mark the datasets they are using.
The total size of the research and evaluation data sets is large. Current
estimates, based on the research and evaluation data sets defined during the
first half of the project, are that the largest research oriented datasets are
tens of terabytes in size.
These data sets would be of immense use for any other parties working with
automatic analyses of audiovisual data, including general AI research, media
studies, translation studies etc, as well as commercial parties developing
methods for media content management.
# FAIR Data
Data management in MeMAD is guided by the set of principles labelled FAIR 1
. The purpose of these principles is to make data Findable, Accessible,
Interoperable and Reusable.
It is understood that data management as practiced in the early stages of
MeMAD does not fully conform to the FAIR principles. This document describes
the current practices within the project and facilitates the integration of
practices during the project. The aim of MeMAD is to create an integrated set
of data management practices during the project, and the FAIR principles will
be used to guide the process of data management practice development.
## Making Data Findable, Including Provisions for Metadata
Requirements of the common prototypes will provide the framework within which
the project data must be managed, and the deliverables D6.1, D6.4 and D6.7
(Specifications Data Interchange Format, v. 1 to 3) provide guidelines on how
to document the data. Correspondingly, project collaboration and data exchange provide the guidelines for internal project data management, described in more detail in deliverable D1.3 Data Collection and Distribution Platform.
Currently, project data stored on the project file server follows a systematic folder structure where the folder naming states whether the data is primary data or annotations, which collection or software component it originates from, version numbers, run timestamps etc. Each folder contains a machine- and human-readable file that follows the LDAP Data Interchange Format LDIF 2 and contains only elements following the Dublin Core DCMI Metadata Terms 3 . A minimum set of metadata elements and folder naming conventions for the project are defined in detail in deliverable D1.3. This aims to describe the project data semantically, interlink project data and annotations across work packages, and provide sufficient additional search handles for the project participants. This will also make the project result dataset in deliverable D1.7 easier to select and collect, as dependencies between project sub-datasets, annotations and software versions have been recorded along with the data.
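As a hedged illustration of this convention, the sketch below writes such an LDIF-style file of Dublin Core elements into a dataset folder; the attribute names, file name and values are assumptions, as the authoritative element set and folder conventions are defined in deliverable D1.3.

```python
from datetime import date
from pathlib import Path

# Sketch of writing an LDIF-style "attribute: value" metadata file of
# Dublin Core elements into a dataset folder. The attribute names, file
# name and values are assumptions; the authoritative set is in D1.3.
def write_folder_metadata(folder: Path, title: str, creator: str, source: str) -> None:
    lines = [
        f"title: {title}",      # DCMI 'title'
        f"creator: {creator}",  # DCMI 'creator', e.g. the producing WP or component
        f"source: {source}",    # DCMI 'source', e.g. the originating collection
        f"date: {date.today().isoformat()}",  # DCMI 'date'
    ]
    (folder / "metadata.ldif").write_text("\n".join(lines) + "\n", encoding="utf-8")

write_folder_metadata(Path("."), "ASR annotations, run 2", "WP2", "Yle 500h collection")
```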
One of the research data products of the project is a so-called knowledge graph representing in RDF the legacy metadata associated with the audiovisual programs as well as some of the automatic analysis results. This knowledge graph follows the Linked Data principles, which means that every object is identified by a dereferenceable URI. The project has established a policy to mint those URIs following existing best practices from the (Semantic Web) community. First, the MeMAD ontology has the namespace URI < _http://data.memad.eu/ontology#_ > with the recommended prefix “memad”. Second, the general pattern for identifying metadata objects is _http://data.memad.eu/[source|channel]/[collection|timeslot|series]/[UUID]_ where:
* source | channel (in lower case)
○ channel codes for INA: ['fcr', 'fif', 'fit', 'f24', 'fr2', 'fr5']
○ channel codes for Yle: ['tvfinland', 'yle24', 'yleareena', 'yletv1', 'yletv2', 'yleteema', 'ylefem', 'yleteemafem']
○ 'surrey' for the material used by the University of Surrey
* collection | timeslot | series (in lower case, in ASCII and slugified)
○ we replace white space, colons, commas, slashes, quotes, brackets ('(' , ')', '[', ']'), exclamation marks, question marks and hash signs with a hyphen '-'
○ we collapse consecutive hyphens into one at most, and we neither start nor end with a hyphen
* UUID = MeMAD custom hashing function using a seed, where:
○ the seed for INA is the "record ID" (of a program or a subject)
○ the seed for Yle is "guid" or "contentID"
Finally, media objects are identified using the pattern
_http://data.memad.eu/media/[UUID]_
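The slugification and minting rules above translate almost directly into code. The sketch below follows the listed rules; the hashing function is a stand-in, since the project's custom UUID function is not specified here, and the example values are hypothetical.

```python
import hashlib
import re
import unicodedata

def slugify(label: str) -> str:
    """Apply the slugification rules listed above."""
    # lower case, plain ASCII (accents decomposed and dropped)
    ascii_label = (unicodedata.normalize("NFKD", label.lower())
                   .encode("ascii", "ignore").decode("ascii"))
    # replace the listed punctuation characters with a hyphen
    slug = re.sub(r"[ :,/'()\[\]!?#]", "-", ascii_label)
    slug = re.sub(r"-{2,}", "-", slug)  # at most one consecutive hyphen
    return slug.strip("-")              # never start or end with a hyphen

def mint_uuid(seed: str) -> str:
    """Stand-in for the project's custom hashing function, which is not
    specified here; any deterministic hash of the seed illustrates the idea."""
    return hashlib.sha1(seed.encode("utf-8")).hexdigest()[:16]

# Hypothetical INA programme identified by its record ID:
print(f"http://data.memad.eu/fcr/{slugify('Le Journal de 20h!')}/{mint_uuid('R123456')}")
# http://data.memad.eu/fcr/le-journal-de-20h/<16 hex characters>
```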
For the result datasets produced by the project as deliverable D1.7 and aimed at wider dissemination after the end of the project, a naming scheme for individual files will be devised to improve their findability.
The specific naming schemes to be used in the preparation of the resulting datasets will be decided after M24 of the project, when preparations for the collection of resulting data are scheduled to begin.
Currently each WP uses its own naming schemes according to the internal conventions of the research groups, typically following a systematic structure that states e.g. data origin, version numbers etc. The aim is to make individual files findable and identifiable even when no additional metadata is provided.
The next section describes how the project data is meant to be distributed. Parts of the project data will be stored in open repositories, and for the license-restricted datasets, metadata entries will be created in relevant data catalogues, currently CLARIN 4 and META-SHARE 5 , which improves their findability. Once the repositories to be used have been chosen, the project will adjust its metadata guidelines to ensure compatibility with the target repositories.
## Making Data Openly Accessible
The data used and produced within the project can be divided into five groups
according to differences in licensing and reusability:
1. research-oriented data obtained from public repositories
2. research and evaluation data obtained from project industry partners
3. annotated media data produced from groups 1 and 2 during the project
4. algorithms and program code produced by academic research groups
5. proprietary technologies developed by project industry partners.
Of these data types, the data in groups 3 “annotated media” and 4 “algorithms
and program code produced by academic research groups” is the easiest to open
for public re-use and will be made available as widely as possible.
Data in group 1 “research-oriented data obtained from public repositories”
often comes with a licence that does not allow re-distribution even though use
for research purposes is free; this data is already available for research
purposes, and therefore, a re-distribution within this project is not even
desirable.
Data in group 2 “research and evaluation data obtained from project industry partners” is typically published media data which has strict licences concerning re-use and distribution, for example, TV shows produced by broadcasting companies. This group also includes the user data collected during prototype testing. An open access publication of this kind of media is at best prohibitively expensive and at worst legally impossible. In the context of MeMAD, the aim is not to make this data set publicly available to parties outside the project. Possibilities to re-license these datasets on terms equal to the ones used by MeMAD are pursued by default.
Data in group 5 “proprietary technologies developed by project industry
partners” concerns tools and methods that the industry partners contribute to
the research project in order to facilitate and evaluate certain phases of the
research. They reflect a considerable economic investment on the part of the
industry partners, and are aimed at developing further technologies and
solutions with commercial purposes, thus not suitable for open distribution.
### Research Partners and Their Data and Source Code
The MeMAD project strives to publish all its research in as open a way as
possible. This principle applies to data and source code produced by the
research partners within the project.
### Industry and Commercial Partners and Their Data
Commercial partners in the MeMAD project will share the output from automatic
analyses generated during the MeMAD project if sharing them is not prohibited
due to business needs or the copyright restrictions of the original media they
are based on.
Concerning the data of group 5, most of the technologies Limecraft, Lingsoft
and LLS develop as part of MeMAD will not be made openly available by default.
On the other hand, Limecraft, Lingsoft and LLS will evaluate the open
distribution of components developed during the project if those are parts
that form an extension to a sizable, existing open source component, or in
cases where the open distribution of a component makes sense economically,
e.g., to enforce the commercial ecosystem that Limecraft, Lingsoft and LLS
intend to build around MeMAD technologies.
### Yle Dataset
Yle provides the project with a selection of AV material and related metadata
from its broadcasting media archives. The AV material, altogether ca. 500
hours, consists of in-house produced TV programs. The rights of Yle are
limited to typical business use such as broadcasting, and specifically do not
include open distribution. A license agreement with the national copyright
societies has been established, which allows Yle archive material to be used
freely within the MeMAD project and also the distribution of the material to
researchers for project purposes. Based on this licence, open access
distribution of this media dataset is not possible, but the licence agreement
takes into account the need to make the project data FAIR.
The selection of program metadata includes the times of transmission, content
descriptions, classifications and the producing personnel for the TV programs.
This data is not limited by copyright, but as the data has originated from in-
house production processes for a specific use, its opening may be limited by
issues related to e.g. personal or journalistic data. The Yle metadata set
will be included in the project legacy open access, if no limitations to do
this are identified during the project.
### INA Dataset
Since 1995, INA has been the legal depository of French television and radio.
Legal deposit is an exception to copyright and INA has no intellectual
property rights over the content deposited. The cataloging data (title,
broadcast date, producer, header, etc.) are accessible for free, in accordance
with the rules in force, by a search engine located on the site
_http://inatheque.ina.fr_ . INA also markets a collection mainly made of
content produced by public television and radio stations, for which INA holds
the production rights. INA thus offers broadcasters and producers excerpts and
full programs, and pays back a contribution to the rights holders.
To promote research, INA provides, for strictly research purposes (academic or commercial), various collections available on accreditation through the INA Dataset web site ( _http://dataset.ina.fr_ ). INA proposes to MeMAD’s partners, under the conditions of use described on the INA Dataset web site, a specific corpus of television and radio programs related to the European elections in 2014.
INA also offers an open data collection of metadata on the thematic classification of the reports broadcast on the evening news of six channels (TF1, France 2, France 3, Canal +, Arte, M6) for the period January 2005 – June 2015, available at _https://www.data.gouv.fr/fr/organizations/institut-national-de-laudiovisuel/_ .
While the primary data from the AV sets will not be openly accessible, the
project will create metadata entries of these datasets into CLARIN and META-
SHARE, accompanied with contact information needed for licensing and accessing
these datasets.
During the project, created research data is first stored on the project’s internal file-sharing platform, and a selection of this data will be included in the project resulting dataset as deliverable D1.7. The final depository for research data remains to be discussed in the later stages of the project, and the final decisions will be made in task T1.3 during M31-36 of the project. This repository will be taken into active use by the project as soon as it is available.
The program data (“code”) will be stored as a Git repository 6 , and can thus be accessed both via the web interface to the repository and with Git directly. Documentation for the Git system is freely available on the internet, and the use of the program is discussed on several open forums worldwide. Program code used to analyse and process the datasets, which is based on algorithms and techniques discussed and presented in scientific publications, is open source by default, and the released data sets will contain information on the relevant program code for their use. However, in the case of products intended for commercialization by the project industry partners, release of the program code is not possible by default.
Research and evaluation data is distributed via suitable tools. As most of the
previously prepared research datasets are available either as open access or
via specific agreements, the partners using them acquire the data directly
from the providers. Regarding research data from MeMAD project industry
partners (Yle, INA), the partners have their own systems for distributing
large datasets. INA data is available on the INA ftp server, and the Yle data
will be distributed via a high speed file transfer service suitable for
distributing large datasets.
Technical solutions for distributing the project result datasets will depend
on the repository chosen for the legacy dataset deposition, and will not be
the main concern of this project; these matters will be discussed during the
relevant project task in M31-36.
Such project result datasets that contain neither licenced nor sensitive
information will by default be open for access to all interested parties, and
therefore no restrictions will be imposed on their use. This does not apply to
the proprietary media or data provided by project industry partners. Whether
it will be possible to have these as a part of any kind of accessible result
dataset is still an issue that needs to be discussed within the partners’ own
organizations as well as with the relevant copyright representatives. In
circumstances where some kind of restricted distribution is deemed possible,
the access will most likely be granted only by separate request to the parties
holding the rights to the data, and will include the requirement of agreeing
to the terms of use for the data.
No need for a specific data access committee within the project is envisaged.
The research data provided, while under a restrictive research licence,
contains neither sensitive information on persons, nor institutions. The user
data collected during the project is sensitive by nature, and person-related
details will not be quoted or published. The data are used only for research
purposes, and recordings in which persons may be identified will not be shown
in public.
Specific licensing issues will be addressed in combination with the project
result dataset creation in task T1.3.
## Making Data Interoperable
One of the main goals of the project is to create a set of interoperable
research and evaluation data. The following have been selected as the
interoperable data formats:
| Data Type | Data Format | Explanation |
| --- | --- | --- |
| video | video/mp4 | video data |
| subtitles | Advanced SubStation Alpha | subtitles/captions for videos |
| ontology | text/turtle | ontology encoded in OWL/RDF |
| knowledge graph | text/turtle | RDF triples and named graphs, following a number of well-known ontologies such as EBU Core, NIF, Web Annotations, etc. |
| raw media analysis results | csv or json | media content annotations |
| structured data | application/xml | multiple uses |

_Table 1. Interoperable data formats._
In general, known best practices will be followed. As much as possible of the
produced and used data is to be stored in formats that are well known and
preferably open; structured text formats are preferred when suitable.
A set of standards describing the formats for exchanging data is presented as
part of the project prototype work and it is reported in more detail in
project deliverables (D6.1, D6.4, D6.7). These deliverables concern mostly the
interoperability of the prototypes and frameworks within the project, which
are proprietary technologies developed by the project partners.
The project result datasets will be described using well-known ontologies including EBU Core, DCMI, and Web Annotations. In case it is necessary to create project-specific vocabularies/ontologies, mappings to commonly used ontologies will be provided.
## Increase Data Re-use (Through Clarifying Licences)
Data collected specifically for the project by its industry partners as
proprietary datasets is strictly licenced, and in many cases the partners do
not hold all the rights for the data or media. Therefore, it is highly
difficult to license these datasets for open ended further use, especially
under any kind of an open access licence. Copyright societies granting
licences typically wish to limit the duration and scope of licences in
unambiguous terms, which does not favour open ended licences that would be
optimal for data re-use. The current approach is to acquire licences which are
as open as possible, and include in the agreement negotiations mechanisms for
other parties to licence the same dataset for similar purposes in the future.
In cases where it is possible to extend access to a part of licensed datasets
as an element of the project resulting dataset, its further use will most
likely be limited to research purposes, owing to business interests and IPR
impacting both data and media.
Licencing challenges affect mainly the primary data - video, audio and most ancillary data such as subtitles - but parts of this data, e.g. neutral metadata elements, could and should be shared.
Secondary data, such as annotations created during the project, should be more straightforward to share for re-use. The project aims to share these, but it is yet to be decided whether they will be shared separately or as a part of deliverable D1.7, which is the collection of data resulting from the project. Here, too, some layers of licensing may be needed, as some types of annotations are closer to the original copyrighted data (e.g. ASR results) than others (e.g. extracted keywords).
Interviews and user experience studies conducted in connection with the MeMAD
prototypes may contain aspects which describe internal processes at the
industry partners’ organisations or sensitive personal information about the
interviewees. Disclosing information that has commercial interest or sensitive
personal information may preclude these datasets from open distribution.
Data produced by the project itself can and will be open for re-use in
accordance with the commercialization interests of the project industry
partners; this will take place either during or after the end of the project.
Specific arrangements for peer review processes can and will be arranged when
necessary.
# Allocation of Resources
As research data will be made FAIR partially as part of other project work,
exact total costs are hard to calculate. Many of the datasets used already
carry rich metadata, are already searchable and indexed, are accessible and
presented in broadly applicable means and forms, are associated with their
provenance and meet domain-relevant community standards.
Explicit costs for increasing the FAIRness of the data are related, as a
minimum, to acquiring licenses for proprietary datasets in the form of licence
fees, but also in these cases part of the costs come from work associated with
drafting licence agreements and promoting FAIR principles among data and media
rights holders and their representatives.
Direct licence fee costs will be covered from Work Package 1 budget. Work
hours dedicated to licence negotiations and data preparation are covered from
each partner’s personnel budget respectively, as they have allocated work
months to Work Package 1 each.
Each consortium partner has appointed a data contact person, and the overall
responsibilities concerning data management are organized through work done in
Work Package 1, dedicated to data topics.
Regarding the potential costs related to the long-term preservation of
research data, these will be discussed in relation to the project resulting
dataset formation during the last year of the project (deliverable D1.7).
# Data Security
Each of the project partners has its own policies and means to keep the data safe on its side, with secure methods of storing and transferring the data and access control over shared data.
The project’s internal data platform is provided by INA and follows INA’s security policies. This is described in more detail in project deliverable D1.3.
The project prototype uses Limecraft Flow 7 as its platform. Limecraft follows the ISO/IEC 27001 guidelines for best practices in securing data. Limecraft is also a participant in the UK Digital Production Partnership and its “Committed to Security Programme” 8 .
* Data stored as part of the Limecraft Flow infrastructure is hosted in data centers within the EU, and all conform to the ISO/IEC 27001 standard for data security. In addition to infrastructure security provided by Limecraft’s data center partners (physical access controls, network access limitations), Limecraft’s application platform also enforces internal firewalling and is only accessible for administration using dedicated per-environment SSH keys.
* Any exchange of data is subject to user authentication and subsequent authorization (either from Limecraft employees, which requires special access rights, or from clients whose access is strictly confined to the data from their own organisations). Additionally, any exchanges occur exclusively over encrypted data connections.
The long term preservation of the data that is opened for further use is still
an open issue. Project deliverable D1.7 is the dataset resulting from the
project, and our current aim is to store this in a repository that will be
responsible for the long term storage of the data. Deliverable D1.7 is due in
month 36 of the project, and plans regarding it will be specified during the
second half of the project. Media datasets provided by Yle and INA are parts
of their archive collections, and will be preserved and curated through their
core business of media archiving.
# Ethical Aspects
The Project follows the guidelines for responsible conduct of research 9 .
Part of the research data may contain personal information, and it will be handled following guidelines and regulations such as the GDPR. A Data Contact will be nominated, and a contact point will be set up to answer queries and requests on personal data related issues.
Metadata provided by industry partners may raise issues related to the journalistic nature of the original datasets. Some of these datasets, such as the metadata provided by Yle, were designed and intended for the in-house production use of a broadcaster, and opening this data to outside users may require protecting sensitive or confidential information stored within it. These issues are resolved by removing and/or overwriting sensitive and confidential information in the research dataset before delivering it to the project.
The user data (interview, observation and test data) collected during the project from experiments, user studies and authentic workplace interactions between human beings is sensitive and will be protected and handled with proper care and measures (see MeMAD DoA, Chapter 5).
1439_SHARE4RARE_780262.md
# INTRODUCTION
## Share4Rare motivation
The project's overarching goal is to break the vicious circle of rarity, scarce investment and limited research on rare diseases, and to find schemes and initiatives through which social innovation can harness the value of collective intelligence. A cross-cutting model of collaboration in the "digital arena" is needed to erase the geographical and language boundaries that exist among the different European countries and to increase awareness of rare diseases. This approach will connect all the dots, needs and stakeholders in the virtual world and offer a unique environment to improve the quality of life of the population suffering from rare conditions. In Europe they represent a significant number: around 30 million people. In addition, caregivers and other relatives, clinicians and other professionals can be beneficiaries of this collective awareness platform for social innovation (CAPS), involving them in a collaborative model based on the principles of crowdsourced health research studies (Swan, 2012).
Share4Rare (S4R) will be a bottom-up awareness platform, with the aim of
improving three important pillars: **Education** , **Sharing** and
**Research** . It will build on existing knowledge and initiatives ensuring a
space for debate and co-creation, a space for further research based on
clinical data donation and priorities set collectively.
## Purpose of the Data Management Plan
In the Share4Rare platform, data is the new gold for bringing to the world the real value of the collective intelligence that emerges when patients, families and clinicians share knowledge. Together they can create medical, social and emotional information that will enable further initiatives (projects) to promote a better quality of life for patients and families in the field of pediatric rare diseases.
The aim of this Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
Consortium with regard to the different types of data collected with the
Share4Rare platform.
The DMP specifically regarding clinical data covers the complete life cycle of
research. It describes the types of research data that will be generated or
collected during the project, the standards that will be used, how the
research data will be preserved and what parts of the datasets will be shared
for verification or reuse.
Figure 1: Research data life cycle (adapted from UK data archive 1 )
The DMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is an agreed version among the partners of the DMP, delivered in
Month 9 of the project. It includes an overview of the datasets to be produced
by the project, and the specific conditions that are attached to them.
The development of the platform that will give users access to a private environment is expected to be complete by the end of the project's first year (2018). Until then, the only data we will store concerns the people subscribed to the project newsletter (first name, surname and email).
If a new version of this DMP is required before launching the tools that will facilitate sharing experiences (virtual communities) and donating/collecting clinical data, the coordinator of the project will lead the process to review this document. In this case, the revision will be aligned with the last stage of the technical development of the platform.
## Research data types
For this first release of the DMP, the data types that will be produced during the project are based on the Description of the Action (DoA) and on the results obtained in the first months of the project.
Accordingly, a list of the types of research data that S4R will produce has been compiled. These research data types include data structures, sampling and processing requirements, as well as relevant standards. This list may be adapted with the addition or removal of datasets in the next versions of the DMP to take project developments into consideration, if required. A detailed description of each dataset is given in the following sections of this document.
1. Types
1. **Sign-up data**. Every user needs to be registered to access the private areas of the platform. According to the different roles (patient, caregiver or legal guardian, clinician and researcher), a specific dataset will be collected and a specific authentication system will ensure access with the right role (CDA, consent document and diagnosis proof). Detailed information about the different profiles, authentication, content process, etc. has been included in the complementary deliverable “Protection of Personal Data Report”.
A separate dataset will be kept for members of the general public who subscribe to the project newsletter and have consented to this purpose during the subscription process.
2. **Observational data** . The primary source of observational data will be online questionnaires. Questionnaires will include data capture from the following areas: clinical aspects of the disease, disease development, genetic information, lifestyle, and quality of life. Observational data can be acquired at different periods and will be longitudinal.
3. **Derived data** (depending on the evolution of the platform). Data sources for derived data can be data provided in educational resources, accessed by text mining techniques.
4. **Modelling data** . Statistical modelling may provide statistical descriptions, prediction models, prognostic estimates, among other scores built from derived, observational and external databases data.
2. Formats
1. For software communications systems related to data visualization and report generation, questionnaire data and metadata will be requested and transferred in JavaScript Object Notation format (.json).
2. Plots and patient reports will be generated in Hypertext Markup Language (.html) and/or Portable Document Format (.pdf).
3. For statistical purposes, table-like formats such as comma separated value (.csv) and MS Excel compatible files (.xls, .xlsx) might also be used for data transfer.
4. The project will store information in other specialized formats including .R, .Rdata, .RProj (R); .py, .ipynb (Python); and plain text .txt files.
3. Re-used data
S4R will use external data to enrich the internal data generated by the project. Initially the project will also re-use data from related data platforms such as RD Connect 2 or OMIM 3 , among others. This will improve the user experience, since these databases feed autocompleted fields and spare users from entering complex data manually; it also ensures data validation, reducing the margin of error.
4. Origin of the data
1. The data will be provided by patients (through parents or legal guardians) with paediatric rare diseases, with no geographic constraint. Depending on the evolution of the platform, clinicians and patient advocates might also provide useful data.
2. In the case of re-used data from other external platforms, it will be retrieved from the specific data sharing platforms offered by each source, such as custom APIs or REST APIs.
Sensitive data linked to exploitable results will not be put into the open
domain; the protection of sensitive data is a H2020 obligation. Other types of
research data will be deposited in open access repositories as described in
Section 2.
In the case of data which is linked to a scientific publication, the
provisions described in Section 6 will be followed regarding the authorship.
Underlying data will consist of selected parts of the general datasets
generated, aggregated and anonymized, with the aim of analyzing and answering
research questions.
Other datasets might be related to any public report or be useful for the research community. These datasets might be either selected parts of the general datasets generated in the project or full datasets (i.e. up to 2 years of key operating data); they will be published as soon as possible, after being reviewed and agreed by the corresponding research team.
## Responsibilities
Each S4R partner has to respect the policies set out in this DMP in accordance with the Spanish Personal Data Law and the GDPR. Datasets have to be created, managed and stored appropriately and in line with applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data shared through the S4R website is easily available to the patient or legal guardian (as owners of the data). The Coordinator is also responsible for guaranteeing that backups are performed and that proprietary data is secured (FSJD).
Authentication of the user profile will be ensured by a signed consent document in the case of adult patients or caregivers. The electronic consent will be followed by signing a paper copy of the document and sending it to the Coordinator of the project (FSJD), who will keep it safely secured in a locked place in accordance with the legal rules. See Section 5 of this document for further information about the consent process and document.
Similarly, clinicians will have to sign a mandatory non-disclosure agreement (NDA) in order to access the questionnaires in the platform. The NDA must be accepted after receiving an invitation from an adult patient or caregiver to fill in a questionnaire. Patients and caregivers decide whether to invite a clinician in order to obtain additional valuable data about the patient, and the clinician may accept or decline the invitation. The purpose of the NDA signature is to ensure that clinicians have no conflict of interest regarding the research project underlying the Share4Rare platform.
FSJD, as coordinator of the project, will be responsible for leading and setting up a research team for the two pilot groups of conditions that will be studied during the project. All members will sign an agreement with the project coordinator (FSJD) setting out the rules they need to follow regarding the GDPR and the management of the research project in which they will be involved.
FSJD will take part in all research projects that aim to exploit the collected data, acting as the legally responsible party.
UPC, as Work Package (WP) 7 leader, will ensure dataset integrity and compatibility for its use during the project lifetime by the partners responsible for the platform development (FSJD, Omada and UPC).
Validation and registration of datasets and metadata is the responsibility of the partner that generates the data in the WP. Metadata constitutes an underlying definition or description of the datasets, and facilitates finding and working with particular instances of data (UPC).
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data (FSJD).
Quality control of the data is the responsibility of the relevant WP leader,
supported by the Project Coordinator (UPC and FSJD).
Last but not least, all partners must consult the Project Coordinator (FSJD) before publishing, in the open domain, data that can be associated with an exploitable result, and must remain aligned with the purposes that the owners of the data (adult patients or legal representatives) have allowed through the signature of the Consent Document.
# DATASETS
## Dataset reference and name
All datasets generated by S4R should include a Uniform Resource Identifier (URI) that uniquely identifies the dataset. Every individual patient dataset will be identified by a numeric code (ID) in order to ensure the anonymization of personal data and to facilitate cross-relations between the different datasets.
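As an illustration only (this DMP prescribes the use of ID codes but not how they are generated), one common way to derive stable pseudonymous codes that still allow cross-relating datasets is a keyed hash; every name and parameter below is hypothetical:

```python
import hmac
import hashlib

# Hypothetical sketch: derive a stable, non-reversible patient ID code from
# an internal record key. The secret key would be held only by the data
# controller (FSJD in this DMP), so the mapping cannot be reproduced by
# third parties holding the published datasets.
SECRET_KEY = b"replace-with-key-held-by-data-controller"

def pseudonymous_id(internal_record_key: str) -> str:
    """Return a stable ID code for one patient, usable across datasets."""
    digest = hmac.new(SECRET_KEY, internal_record_key.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "S4R-" + digest[:12].upper()

print(pseudonymous_id("patient-record-000123"))
```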
## Dataset description
After URI information, the dataset description should include the following
information: title and description of the dataset, a set of keywords
describing the dataset, release date, publisher, contact point including name
and email, contents coverage (such as survey, genetic, clinical, aggregated,
quality of life data), multilingual information, origin of the data, target
user of the data (e.g. for general use, for computer use), versioning
information, publication scope and license.
## Standards and metadata
Strategy for data standardization will differ depending on the structure of
the data stored within S4R. We can differentiate:
* Raw data, stored as UTF-8 that will be enriched through the Linked Data approach (with help of RDF 4 and tools developed by Linda Project 5 ).
* Genetic data, stored in the following formats (as far as possible): Human Genome Variation Society (HGVS 6 ) Nomenclature, Human Gene Nomenclature Committee (HGNC 7 ), Reference Sequences NCBI (RefSeq) and Logical Observation Identifiers Names and Codes (LOINC 8 ) (a format-checking sketch follows this list).
* Clinical data will use HL7 to ensure transferability between systems, and will be enriched with Human Phenotype Ontology (HPO).
* Quality of life data will be enriched with help of a Quality of Life Ontology initially referenced from WHO ICF structures 9 .
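As a minimal sketch of how conformance to these formats could be checked (the DMP itself does not mandate any tooling), the snippet below validates one simple class of HGVS variant descriptions; real HGVS covers many more variant types, so this is illustrative only:

```python
import re

# Minimal sketch: check that a variant string follows the basic HGVS
# pattern for simple coding-DNA substitutions, e.g. "NM_004006.2:c.4375C>T".
# Real HGVS also covers deletions, duplications, insertions and more,
# so a production system should use a dedicated parser instead.
HGVS_CDNA_SUBSTITUTION = re.compile(
    r"^[A-Z]{2}_\d+\.\d+"   # RefSeq accession with version, e.g. NM_004006.2
    r":c\.\d+"              # coding-DNA position
    r"[ACGT]>[ACGT]$"       # reference base > alternate base
)

def looks_like_hgvs_substitution(variant: str) -> bool:
    return HGVS_CDNA_SUBSTITUTION.match(variant) is not None

print(looks_like_hgvs_substitution("NM_004006.2:c.4375C>T"))  # True
print(looks_like_hgvs_substitution("chr7:140453136A>T"))      # False
```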
Metadata must be provided redundantly for both human and computer interpretation. We consider two types of metadata content: a) overall features, where dataset metadata should include the content described in Section 2.2; and b) specific features, where a dataset with panel data should include specific metadata about variables and samples, including information such as units, variable explanation, variable long name and variable short name.
Metadata will be provided, to the extent that it is possible, adopting the
Metadata Standards Directory Working Group directives at RD-Alliance 10 .
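To make the panel-data case concrete, a variable-level metadata record could look like the following when serialized in the JSON format the project already uses for questionnaire data; all field names and values here are illustrative assumptions, not prescribed by this DMP:

```python
import json

# Illustrative variable-level metadata record for a panel dataset;
# every field name and value here is an assumption for the sake of example.
variable_metadata = {
    "short_name": "qol_score",
    "long_name": "Quality of life total score",
    "explanation": "Aggregate score derived from the quality-of-life questionnaire",
    "units": "points (0-100)",
    "dataset_uri": "https://example.org/s4r/dataset/qol/v1",  # hypothetical URI
    "version": "1.2",
}

print(json.dumps(variable_metadata, indent=2))
```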
## Data sharing
S4R's ultimate goal is to share the data generated by the end of the project in order to allow its open use. S4R will consider different data sharing schemes depending on the contents of the data.
For specific third-party requests for aggregated data whose use has not been approved by the patient or his/her legal representative in the Consent Form, the Coordinator of the project can, whenever advice is needed, consult an _ad hoc_ Data Access Committee (S4RDAC). A specific meeting may be set up to decide whether a new reuse of said data will be requested from the owners and a re-consent process initiated. Data sharing will be supervised by the Data Access Committee (S4RDAC), which will be constituted at the end of year 2.
S4RDAC will be formed under the following structure:
* One representative of the coordinating institution of the project
* One representative from the Ethics Committee of FSJD
* One representative of the patient organizations related to the specific condition
* One data scientist
* One member from clinical research
* One legal advisor
All authorizations by the S4RDAC should be taken by unanimous voting.
Project data access will be available through two approaches: a) bulk download; b) data access following REST (REpresentational State Transfer) architectures. REST APIs will be maintained by the technical partners of the project (UPC, OMADA and FSJD). REST APIs will be described and versioned within the platform. Older API versions will be maintained alongside the current API definition for the full lifetime of the project.
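A minimal sketch of what identified, versioned API access could look like from a client's perspective follows; the base URL, path, parameters and token mechanism are all assumptions for illustration, since the DMP only states that APIs are versioned and that access is identified:

```python
import requests

# Hypothetical sketch of versioned, authenticated access to an S4R REST API.
# Pinning the version in the path means /v1 keeps working after /v2 ships,
# matching the commitment to maintain older API versions above.
BASE_URL = "https://api.share4rare.example/v1"   # hypothetical endpoint

response = requests.get(
    f"{BASE_URL}/datasets/qol/records",
    headers={"Authorization": "Bearer <access-token>"},  # identified access
    params={"page": 1, "page_size": 100},
    timeout=30,
)
response.raise_for_status()
records = response.json()
```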
Dataset publication will be submitted to the authorization of the S4RDAC that
will approve/deny requests for accessing the data. The committee will evaluate
data access proposals and define a guideline of requirements for access
authorization.
All data access should be performed under identification of the accessing
user.
## Archiving, presentation and security
All data will be physically stored within FSJD premises that are aligned with
Sant Joan de Déu Hospital Information Department Systems where physically is
located the hosting system. Anonymized datasets will also be stored at the UPC
for data analysis and scientific analysis. Releases of public datasets will be
published on the S4R project and also uploaded to the EU Open Data Portal 11
, Google Public Data 12 and the Registry of Research Data Repositories 13
, depending on the nature and target of the dataset release.
Initially, data will be processed at UPC and FSJD. Automated analysis will be carried out at HSJD by the S4R automated analysis software. Non-anonymous data will be analyzed exclusively within FSJD premises throughout all phases of the S4R project.
S4R will provide feedback data comparing individual measurements with population measurements for different diseases in a restricted-to-user environment. Statistical population data (free of individual identification) will be offered within the S4R platform to users of any research community (private environment). S4R will also offer APIs exposing data under controlled access to ensure full anonymity.
Hosting, persistence and access will be managed by FSJD and UPC, with help of
virtual machines and data processing clusters under high availability. UPC as
a public institution follows the instructions of the APDcat (Autoritat
Catalana de Protecció de Dades) regarding the data analysis of its activity
which includes several H2020 projects.
The long-term value of the data will be ensured by following best practices.
FSJD and UPC will provide with means for restricted physical access to the
data server and computing systems for avoiding unauthorized access to data
from S4R.
Long-term and large-scale digital archiving of selected primary data will be ensured by perpetual storage at the university library system (UPC) and FSJD. Back-up systems will consist of weekly automated data back-ups offered by UPC (for anonymous data) and FSJD. The target data for back-up will be primary data.
## Versioning
Dataset and API versioning will be enforced. As a large part of the data comes from questionnaires, changes to questionnaires must propagate to different versions of the data. Versioning will propagate downstream through the analysis pathway, including secondary data, statistical models, plots and reports. All reports should include comprehensive information on the versions employed for the report contents, including data, questionnaire and software versions.
## Quality assurance processes
S4R Quality Assurance and Quality Control (QA/QC) will consider the following four practices:
1. QA/QC publications. Documents describing best practices for questionnaire creation, data input and data analysis will be written and maintained by UPC/FSJD.
2. Training. QA/QC publications will be explained in training sessions to the relevant stakeholders.
3. Verification tests. Automated controls in software will be in place to ensure that data input will comply with the QA/QC defined in the QA/QC publications.
4. Data exploration tests. Custom analyses will be performed by data scientists to check for errors through visual inspection (scatterplots, mapping, distribution plots, and others) and automated analysis (multivariate outlier detection algorithms, and others); a minimal sketch follows this list.
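A minimal sketch of one such automated exploration test, using the univariate interquartile-range rule (the variable name, data and threshold are toy assumptions; the project may rely on multivariate methods, as noted above):

```python
import numpy as np

# Minimal sketch of an automated data-exploration test: flag potential
# outliers in a numeric questionnaire variable using the interquartile
# range (IQR) rule with the conventional k = 1.5 fence.
def iqr_outliers(values: np.ndarray, k: float = 1.5) -> np.ndarray:
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (values < lower) | (values > upper)

qol_scores = np.array([55, 60, 58, 62, 61, 99, 57, 3])  # toy data
print(np.flatnonzero(iqr_outliers(qol_scores)))          # indices 5 and 7
```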
# KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA
These principles can be applied to any project that produces, collects or
processes research data. As indicated in Guidelines on Data Management in
H2020 14 , scientific research data should be easily:
## Discoverable
The data and any associated software produced and/or used in the project
should be discoverable (and readily located), and identifiable by means of a
standard identification mechanism (e.g. Digital Object Identifier).
## Accessible
Information about the modalities, scope and licenses (e.g. licensing framework
for research and education, embargo periods, commercial exploitation, etc.) in
which the data and associated software produced and/or used in the project are
accessible should be provided.
## Assessable and intelligible
The data and any associated software produced and/or used in the project
should be assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. the minimal datasets are handled
together with scientific papers for the purpose of peer review, data are
provided in a way that judgments can be made about their reliability and the
competence of those who created them).
## Usable beyond the original purpose: open data use
The data and any associated software produced and/or used in the project should be usable by third parties even long after the collection of the data (e.g. data are safely stored in certified repositories for long-term preservation and curation; they are stored together with the minimum software, metadata and documentation to make them useful; the data are useful for wider public needs and usable for the likely purposes of non-specialists).
## Interoperable to specific quality standards
The data and any associated software produced and/or used in the project
should be interoperable, allowing data exchange between researchers,
institutions, organizations, countries, etc. (e.g. adhering to standards for
data annotation, data exchange, compliant with available software
applications, and allowing re-combinations with different datasets from
different origins).
# LEGAL ASPECTS
## Spanish National Law
Fundació Sant Joan de Déu (FSJD), coordinator of the Share4Rare project, is subject to Spanish legislation on data protection; the applicable law is Organic Law 15/1999, of December 13, on the Protection of Personal Data (LOPD), together with Royal Decree 1720/2007, of December 21, which approves the Regulation implementing Organic Law 15/1999 (RLOPD). Both norms remain in force in the precepts not repealed by Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, repealing Directive 95/46/EC (GDPR).
FSJD has conducted all the checks required under the LOPD, including the mandatory biannual data protection audits. The last audit dates from 2016 and was conducted by the external consultant Faura-Casas, Auditores Consultores.
## GDPR
FSJD is in the process of adapting to the GDPR and the new guiding principles of the Regulation, which entered into force in May 2016 and has applied since May 2018.
The GDPR is a directly applicable standard that requires no internal transposition nor, in most cases, implementing or development norms. For this reason, FSJD assumes it as the reference standard.
Two elements of a general nature constitute the GDPR's greatest innovation for FSJD (and for Share4Rare): the principle of _proactive responsibility_ and the _risk approach_ .
The aspects that directly affect the Share4Rare project under the prism of the GDPR are:
* Legitimation basis for data processing:
* The platform will have a specific form to request access as active user in the private communities where users will share personal stories and experiences. At this level of interaction, non-clinical or personal data will be donated and stored in the platform.
* The website will host the informed consent for clinical data donation and data processing by Share4Rare. This consent will identify the legal basis on which the processing operations will be carried out, following the new requirements of the GDPR. This document will be approved by the Ethics Committee of FSJD.
* The consent is unequivocal and explicit -following and in compliance with article 9.2.a) of the GDPR.
* The information is provided in a concise, transparent, intelligible and easily accessible manner, with clear and simple language.
* Exercise of rights:
Whether in forms or in internal FSJD procedures, the exercise of rights - access, rectification, erasure, objection, the right to be forgotten, restriction of processing and portability - will be facilitated in a visible, accessible, simple and free manner.
* Data processors:
FSJD will take appropriate measures in selecting potential data processors, in a way that guarantees, and is in a position to demonstrate, that data processing is carried out in accordance with the GDPR (principle of proactive responsibility).
* Measures of proactive responsibility:
This is one of the main novelties, and the Foundation and Share4Rare must
perform and keep in mind:
* Risk analysis.
* Registration of activities. FSJD has a registry of processing operations which contains the information established by the GDPR on issues such as:
  * Name and contact information of the controller and the Data Protection Officer.
  * Purpose of the processing.
  * Description of categories of interested parties and categories of data processed.
  * International data transfer.
  * Security measures.
* Data protection by design and by default:
The procedural measures have been conceived in terms of data protection from the very moment the data processing of Share4Rare was designed. Such measures are reflected in the Share4Rare platform, which processes only the necessary data with regard to data quality, the extent of the processing, the retention periods and the accessibility of the data.
* Security measures:
FSJD (for the Share4Rare platform) has established the appropriate technical
and organizational measures to guarantee an adequate level of security based
on the risks detected. The technical and organizational measures have been
established taking into account:
* The state of the art.
* The costs of implementation.
* The nature, scope, context and purposes of the processing.
* Any risks to rights and freedoms.
* FSJD has a data breach procedure, covering any incident that causes the accidental or unlawful destruction, loss or alteration of personal data transmitted, stored or otherwise processed, or the unauthorized communication of, or access to, said data.
* The Foundation will prepare, in compliance with the requirements of the GDPR, a DPIA (Data Protection Impact Assessment) prior to the implementation of Share4Rare, as the processing entails a high risk to the rights and freedoms of the interested parties.
* The Foundation, as an obligated entity, set up the figure of the Data Protection Officer in May. The functions performed, both in the Foundation and in the exercise of Share4Rare, are:
  * Informing and advising FSJD and its employees on data protection.
  * Supervising compliance with internal data protection policies.
  * Cooperating with supervisory authorities.
  * Acting as a point of contact with supervisory authorities.
* Data processing with minors:
The Consortium is aware that it is dealing with a sensitive group of patients. The data will always be donated by adult patients (where the natural history of the disease allows them to be included) or, in the case of paediatric patients, by their parents or legal guardians. Considering this, S4R takes the corresponding measures, among them:
* The information offered to interested parties in relation to the processing or the exercise of rights must be especially concise, transparent and intelligible, and provided in clear and simple language.
* Particular care in the context of data erasure.
* Consent will be valid from the age of 18, given directly by the patient, and only where the patient has no cognitive impairment preventing him or her from participating in the platform.
* Security measures and controls to ensure identities.
# CONSENT FORM
Informed consent will be mandatory for all users of the platform who contribute to data donation, whether on behalf of a pediatric patient (parents or legal guardians) or directly as an adult patient. This process is aligned with ethics principles and GDPR principles.
The document has been approved by the Ethics Committee of FSJD, as coordinator of the project and the party legally responsible for the data of Share4Rare platform users.
The process of obtaining the user's consent will include these steps:
1. Sign-up form. Users must accept the privacy terms of Share4Rare in accordance with the GDPR and fill in an initial list of fields in order to identify the different roles in the platform (patient or caregiver).
2. The user will then validate his/her email and accept the terms of participation of the platform operating policy. Afterwards the user will go through a secondary process that finishes with the signature of the consent form.
3. Completing a questionnaire about the disease, whose answers feed the electronic consent form, will be mandatory for users.
4. A paper copy of the consent form needs to be signed and sent by postal mail to the coordinator of the project.
The standard consent form itself is attached in the appendix, together with its approval by the Ethics Committee of FSJD.
# SCIENTIFIC PUBLICATIONS
The Project’s authorship policy will follow the rules for academic
publications. The ICMJE 15 recommends that authorship be based on the
following 4 criteria:
* Substantial contributions to the conception or design of the work; or to the acquisition, analysis, or interpretation of data for the work; and
* Drafting the work or revising it critically for important intellectual content; and
* Final approval of the version to be published; and
* Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
All those designated as authors should meet all four criteria for authorship,
and all who meet the four criteria should be identified as authors. These
authorship criteria are intended to preserve the status of authorship for
those who deserve credit and can take responsibility for the work.
1. Authorship positions and the Corresponding Author will be decided, ideally, before the work is started, by the respective members of the Project Team Board. They will also be expected as individuals to complete conflict-of-interest disclosure forms.
2. According to ICMJE, the corresponding author is the one individual who takes primary responsibility for communication with the journal during the manuscript submission, peer review, and publication process, and typically ensures that all the journal’s administrative requirements are properly completed, although these duties may be delegated to one or more co-authors. The corresponding author should be available throughout the submission and peer review process to respond to editorial queries in a timely way, and should be available after publication to respond to critiques of the work and cooperate with any requests from the journal for data or additional information should questions about the paper arise after publication.
3. All authors will reserve the right to withdraw from authorship at any time. All acknowledgements must be with the consent of the persons involved.
4. A person who has contributed to Share4Rare publications but does not meet all four criteria for authorship of the manuscript should be listed in its acknowledgements section.
5. Free and comprehensive acknowledgement of individuals and groups who have given support should be done wherever possible (i.e. we gratefully acknowledge…).
## Internal procedure for Publications review
The recommended internal review process is:
1. Authors write a manuscript and Work Package Leader (WP Lead) sends a draft to Project Team
2. All the WP leaders that form the Project Team will review the manuscript
3. Authors update the manuscript and send it to the Project Team for the final approval
4. Submission of the final version
Between phases 2 and 3, the maximum deadline to respond will be 2 weeks.
# Appendix A – Informed consent form (template)
# Appendix B – Ethics Committee approval of the informed consent form
1441_FANDANGO_780355.md
# EXECUTIVE SUMMARY
Fake news is a hot issue in Europe and worldwide, particularly in relation to political and social challenges. The state of the art still lacks a systematic approach to address the aggressive emergence of fake news and post-truth evaluations of facts and circumstances.
FANDANGO aims at contributing semi-automatic approaches that can aid humans in evaluating the trustworthiness of news items (potentially fake ones).
To achieve this goal, FANDANGO will collect a large volume of data and provide sophisticated machine learning approaches to aid investigation and validation. Different data sources will be used, leveraging an open software stack that includes a wide range of big-data-oriented technologies.
This document was supposed to be the FANDANGO Data Management Plan (DMP), and as such it was meant to describe in particular how research data will be produced, collected or processed by the FANDANGO project (as well as procedures and decisions to make data _FAIR_ ).
For this reason this deliverable is also structured following the official guidelines set forth in Horizon 2020 for similar documents.
However, while studying and working on this document, one circumstance began to stand out and attract our managerial attention: while FANDANGO has no interest in processing personal data as such, it will need to massively process data about emerging news, especially data related to potentially untrustworthy (fake) news. In doing so, some processing steps may offer the potential opportunity to infer personal data.
The simplest example of such a risk is assessing that a personal Twitter account usually publishes untrustworthy news, and then needing to keep this information for future decision-making about other news published by the same person.
At the present stage of research we are not yet in a position to evaluate and put in place specific measures to counter this type of risk, and sharing data for scientific purposes under these conditions would exacerbate the risk.
For this reason we are:
* asking for the opt-out option,
* submitting this deliverable still in a draft version, with only the information collected at the stage when the opt out option was considered.
Ethical aspects remaining after the opt-out choice are dealt with in the D9.1 and D9.2 documents. In particular, the information presented in D9.2 bears some resemblance to the information provided in this document.
## 1\. DATA SUMMARY
**1.1. WHAT IS THE PURPOSE OF THE DATA COLLECTION/GENERATION AND ITS RELATION
TO THE OBJECTIVES OF THE PROJECT?**
In short, FANDANGO will collect data about potential news items whose trustworthiness is uncertain (i.e. are they real news or fake news?) and will generate assessment scores for the probability of each being genuine or fake.
If we call S_i such fakeness scores (where i = 1, 2, …, N and N >> 1), FANDANGO will generate each S_i by optimally combining partial scores S_{i,j}, where j = 1, 2, 3, 4 indexes the outputs of specific software modules studied and developed as part of FANDANGO's original results (in WP4, tasks 4.1 to 4.5). Each module performs an assessment on the basis of a specific criterion, thus computing a partial fakeness score.
This process is depicted in Figure 1 below: news items (News 1, News 2, …, News n, with n >> 1) are ingested from web sites, services with REST APIs, RSS sites, social networks, partners' data and open data into the FANDANGO Data Lake Open Architecture (HDFS, Spark/Spark Streaming, MLlib, Apache Zeppelin, Neo4J, Kafka, HBase, Hive, Elastic Search, Kibana); the WP4 modules (deliverables D4.1 to D4.5) compute the partial scores S_{i,1} to S_{i,4}, which are combined into the overall score S_i and exposed through the FANDANGO User Interface and Siren Investigate.
_Figure 1_
Final users of the FANDANGO platform are working on specifying their needs and priorities more clearly; these will be addressed while performing the research work in compliance with the approved FANDANGO DoA.
Preliminary user results show that our user champions would rather see FANDANGO as a set of tools instead of one completely integrated software solution. The tools deemed most useful are:
* news verification tool
* photo/video verification tool
* alert system
In addition, our users stated that FANDANGO should never make a final decision about the trustworthiness of information, but rather help journalists do so. This is the reason why FANDANGO results are outlined in the diagram above not only in terms of the overall fakeness score S_i (where i = 1, 2, …, N and N >> 1) but also in terms of the partial scores S_{i,j} (j = 1, 2, 3, 4).
Please note that FANDANGO is dealing with a big-data-class problem, since the sheer number of potential news items to be examined (n) may be very large, and each item will be analyzed by considering all of its text and multimedia content (which may itself be large), together with potentially many past texts or content fragments related to past news items (real or fake).
This is why FANDANGO will need to leverage an integrated big data platform based on Open Source middleware.
It is worth noting that FANDANGO's target users will be professional journalists who need to evaluate the genuineness of a potential news item in a short time. Since the target user is a professional, it is very likely that the S_i fakeness scores, as well as the S_{i,j} partial scores, will be treated as decision-making aids, with the final decision still relying on human judgement.
**1.2. WHAT RESEARCH DATA DO WE COLLECT AND FOR WHAT PURPOSE?**
FANDANGO will collect data about potential fake news for the sole purpose of evaluating the effectiveness of algorithms and checking their ability to _partially_ automate the process of deciding whether each news item is fake or not.
To achieve this main objective, several minor objectives will need to be met:
* ingest cross-domain and cross-lingual data sources of different natures into the FANDANGO platform
* provide state-of-the-art algorithms for fake-news-related feature extraction (i.e. computing the S_{i,j} partial fakeness scores)
* provide a higher-level fake news evaluation (i.e. computing the S_i fakeness scores)
* back-track the propagation of potential fake news, determining the original sources and the diffusion points, for source scoring regarding fake news distribution
Such algorithms will analyse not only text content but also images and videos, in both structured and unstructured formats.
FANDANGO will leverage a Data Lake architecture to manage all relevant data
found in relevant data sources (a preliminary selection of data sources is
identified in DoA Section 1.3.4.4, but will be enlarged during the project).
FANDANGO partners are aware of the fact that the Open Research Data Pilot
applies primarily to the data needed to validate the results presented in
scientific publications and that other data can be provided on a voluntary
basis.
**1.3. WHAT RESEARCH DATA DO WE GENERATE AND FOR WHAT PURPOSE?**
In FANDANGO, all collected data will be processed and analysed by a set of software modules to extract markers and cues that reveal fake or misleading news.
As already stated, four analysis modules will be part of the FANDANGO toolset:
1. The Spatio-temporal analytics and out-of-context fakeness markers module will be responsible for analyzing news posts and finding duplicate or near-duplicate posts from the past or referring to other geographic/physical locations or contexts. In fact, a common case of fake news is the re-posting of a real past piece of news that is no longer relevant or has been removed from its original context. Such spatio-temporal or out-of-context correlations can generate strong fakeness markers (i.e. generating S_{i,1}).
2. The Multilingual text analytics for misleading messages detection module will handle multilingual content and score the text as potentially misleading or not. To establish such scoring ability it will digest data from the public web as well as existing and well-maintained knowledge bases (such as YAGO, DBpedia, Geonames, etc.) to identify contradictions and potentially intentional errors (i.e. generating S_{i,2}).
3. The Copy-move detection on audio-visual content module will detect the manipulation of images and videos to modify their visual content. This module will leverage deep learning architectures to identify such content and the pool of near-duplicate content and visuals that were used as sources for creating the fake object. Synthetic data and publicly available big image datasets will be used to train the models. Moreover, state-of-the-art audio analysis algorithms will be deployed to detect modified or voice-over attacks in news videos (i.e. generating S_{i,3}).
4. The Source credibility scoring, profiling and social graph analytics module will profile the sources of news and apply graph analytics to detect paths and nodes that tend to produce fake news and spread them widely on the public web (i.e. generating S_{i,4}).
To fuse the outputs of the above-mentioned modules, a machine-learnable approach will be used for overall fake news scoring (i.e. generating S_i): a learnable score function will learn how to weight the partial scores and what data to use from the data lake to decide on the fakeness of a news post. The task will apply existing, successful predictive-analytics deep learning architectures in order to score news posts incrementally and update the score as new data populate the data lake, thus providing hints from the very first appearance of a post.
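As a toy illustration of such a fusion step (not the project's actual WP4 architecture, which may be a deep network), the four partial scores could be combined by a learned weighting such as logistic regression; all data below is synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch: learn weights that fuse four partial fakeness scores
# S_{i,1}..S_{i,4} into an overall score S_i. Training data is synthetic;
# FANDANGO's real fusion model may differ substantially.
rng = np.random.default_rng(0)
partial_scores = rng.uniform(0, 1, size=(500, 4))         # rows: news items
labels = (partial_scores.mean(axis=1) > 0.5).astype(int)  # toy ground truth

fusion = LogisticRegression().fit(partial_scores, labels)

new_item = np.array([[0.9, 0.7, 0.8, 0.6]])               # S_{i,j} for one item
s_i = fusion.predict_proba(new_item)[0, 1]                # overall score S_i
print(f"S_i = {s_i:.2f}")
```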
Finally, for the visualisation and analysis of fake news, FANDANGO will provide a set of front-end web applications and investigative intelligence tools focused on identifying case studies; these tools are, however, suitable for any kind of fake news discovery application. The Siren platform is a commercial product that delivers a unique investigative experience for solving real-world, data-driven problems, serving analysts, investigators and data scientists. It allows users to identify relationships across multiple datasets, accessible via search, dashboard analytics, knowledge graphs and real-time alerts, enabling journalists and investigative agents to obtain contextual information and elaborate on their analysis.
**1.4. WHAT TYPES AND FORMATS OF DATA WILL THE PROJECT GENERATE/COLLECT?**
The FANDANGO project will leverage a Data Lake architecture that will store all data types found in the identified data sources, i.e. free text, structured and unstructured data, images, videos and audio data.
The data types we will be handling are plain text, images, videos, unstructured JSON files, and structured data from open data databases.
**1.5. WILL YOU RE-USE ANY EXISTING DATA AND HOW?**
We will reuse existing data mainly for the machine learning training sets; some other data may be kept to refine evaluations of the trustworthiness of specific news items (“ground truth”).
As an example, CERTH will be reusing existing publicly available datasets that
are found in many publications and provide a reference for comparison with
other algorithms.
A list of possible datasets we will be using is the following:
* Moments (http://moments.csail.mit.edu/)
* Imagenet (www.image-net.org/)
* MIT Places (http://places.csail.mit.edu/)
* 20bn-Something (https://20bn.com/datasets/something-something)
* Coverage (https://github.com/wenbihan/coverage)
* MS-COCO (http://cocodataset.org/#home)
* COMOFOD (http://www.vcl.fer.hr/comofod/)
* EUREGIO Image forensics challenge (http://euregiommsec.info/image-forensics-challenge/)
* Image manipulation dataset (https://www5.cs.fau.de/research/data/image-manipulation/)
* NIST media forensics challenge (https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018)
* SULFA (http://sulfa.cs.surrey.ac.uk/)
* REWIND (https://sites.google.com/site/rewindpolimi/downloads/datasets)
As another example, UPM will use the existing data stored in the FANDANGO platform for the graph analysis tasks. In addition, UPM will employ the data provided by the ingestion process (crawler) to carry out the proper data wrangling and transformation so that the data is in the correct format for both machine learning and deep learning procedures.
6. **WHAT IS THE ORIGIN OF THE DATA?**
The main goal of the FANDANGO project can be pursued by aggregating data from different data sources into a suitable Data Lake.
7. **WHAT IS THE EXPECTED SIZE OF THE DATA?**
FANDANGO will deal with big-data volumes for ingested and homogenized data, and a very different volume for generated data (e.g. source trustworthiness and fakeness scoring).
8. **TO WHOM MIGHT IT BE USEFUL ('DATA UTILITY')?**
Mainly to other Research Projects.
## 2\. FAIR DATA
All considerations about the FAIRness of data are postponed because of the opt-out request.
### 3\. DATA SECURITY
All considerations about the long-term preservation and curation of data are postponed because of the opt-out request.
### 4\. ETHICAL ASPECTS
Ethical aspects remaining after the opt-out choice are dealt with in D9.1 and
D9.2 documents.
1442_EPICA_780435.md
# 1\. INTRODUCTION
This document presents the Data Management Plan (DMP) describing how the data generated within the project are processed and preserved during EPICA's lifetime. As the project will control, handle and process sensitive data from different sources, the need for appropriate data management is highly important.
The research data management procedures will be further described in future releases of the DMP, as the project develops and the data collection methods are established and adjusted according to EPICA's ongoing operations and the project's maturity.
This DMP is based on the H2020 Programme Guidelines on FAIR Data Management in
Horizon 2020. The goal is that the DMP will ensure that project generated data
is findable, accessible, interoperable and reusable, also allowing project
data to be generated, stored and managed during the complete project lifetime,
subject to changes in consortium policies and methodology.
The DMP is interconnected to deliverable D1.2 - “Ethics, Data Protection and
Privacy Management Plan”, especially regarding the procedures and policies of
collecting and protecting personal and sensitive data. In this regard GDPR
compliance is critical, and self-declarations of compliance from all partners
are referenced in the Appendixes of the DMP.
This document is further based on the terms and conditions established in the
Grant Agreement and its Annexes, as well as the applicable articles in the
Consortium Agreement.
The DMP is a deliverable intended to be used by all project partners to ensure quality assurance of project processes and outputs and to prevent possible deviations from the project work plan as described in the EPICA DoA.
The DMP also provides an analysis of the main elements of the data management conducted within the EPICA project framework and, when complied with, ensures coherent management of the project data generated amongst and by the consortium during the project.
_**Figure 1:** The data life cycle analyzed (University of Virginia Library,
Research Data Services) _
2\. DATASET AND STORAGE DESCRIPTION
The purpose of the collected and generated data is to use it for carrying out the tasks described in the DoA, as well as to validate and assess the EPICA ePortfolio and its relevant components.
The data can be divided into personal and anonymized data. At this stage of the project, it is not yet exhaustively known which formats the data will have, but so far the following data formats have been utilized:
1. .docx and equivalent
2. .pdf and equivalent
3. .jpeg and equivalent
4. .xls/xlsx and equivalent
5. .csv and equivalent
6. handwritten forms
7. geo-positioned data
8. business cards in filing cabinets and/or rolodexes
9. voice recordings (digital format)
Furthermore, the participants will fill out surveys via online tools such as QuestBack, SurveyMonkey and Google Forms, and the answers will be another data source for researchers within the project.
Data from the pilots is furthermore being reused in order to optimize the
final version accordingly.
The data will be generated and/or provided by the participants, the consortium and partner university staff, and will be stored on a specially designated Google Shared Drive (a restricted database with access log) organized in folders, where the project coordinator controls access.
In addition, the consortium members are storing working documents on their
institution’s databases and IT-services – as well as on their own desktops in
the interim.
All EPICA project partners have identified the datasets utilized so far – and
envisioned utilized in the future. The list is provided below, further to be
elaborated on in the following DMP revisions.
| # | Aggregate Dataset Name | Responsible Partner | Related WP |
|---|---|---|---|
| 1 | Newsletter Subscribers List | ICWE + relevant processor | WP3 |
| 2 | Participant Surveys | OUC + relevant processor | WP4, WP6 |
| 3 | Pedagogical Requirements | OUC + relevant processor | WP4 |
| 4 | Legal Requirements | AVU, ICDE + relevant processor | WP4 |
| 5 | Technical Requirements | MYD + relevant processor | WP5, WP4 |
| 6 | Business Requirements | MYD, ICDE, AVU | WP2 |
| 7 | Other (Working Documents, other etc.) | All | WP1, WP7 |
The above list is indicative of the data, and the categorization of the data, that the EPICA project will produce. It is subject to alteration and might change in the next versions of the DMP in light of project developments.
3\. GENERAL PRINCIPLES CONCERNING IPR & PDP
Project partners have specifically retained the Intellectual Property Rights (IPR) on their technologies and data, on which their economic sustainability relies. As a legitimate result, the EPICA consortium, and each partner individually, must protect these rights and data and consult the concerned partner(s) before publishing or disseminating data.
All necessary measures should be taken to prevent unauthorized access to the data, and all data repositories used by the project therefore include secure protection of sensitive data.
A holistic security approach is furthermore to be undertaken to protect the
three main pillars of information security:
1. confidentiality
2. integrity
3. availability
The security approach will consist of a running assessment of security risks
followed by an impact analysis when necessary. This analysis will be performed
on the personal information and data processed by the proposed system, their
flows and any risk associated to their processing.
For the majority of the data acquisition activities to be carried out in the project, it is necessary to collect basic personal data (e.g. full name, contact details, background and more), although the project will avoid collecting such data unless deemed necessary, employing anonymization or pseudonymization methods where possible.
Such data will be protected in compliance with the EU's General Data Protection Regulation (GDPR), which aims at protecting personal data. National legislation applicable to the project will also be strictly followed, but the GDPR framework is considered best practice for this purpose unless the project's research discovers conflicting frameworks in other applicable jurisdictions. As of the third revision of the Data Management Plan, other relevant jurisdictions in Africa have been mapped (subtask T4.2). In this process, no evidence of a stricter level of data privacy was identified in Kenya, Tanzania and Uganda, and as such the GDPR level of data privacy remains the consortium's _modus operandi_ .
All data collected by the project will be done after giving data subjects full
details on the acquisition to be conducted, the data processing and after
obtaining an active and informed consent through a reliable method of
recording data suitable for future storage.
_**Figure 2:** The three pillars of information security (ISO 27001) _
4\. FAIR DATA
The consortium is striving to make the project data FAIR as described in the guidelines provided by the European Commission and its Directorate-General for Research & Innovation for H2020 programmes, and the plan will address this point in greater detail in the fourth revision of the data management plan.
This is also applicable to the continuous developed metadata, which is
recorded in terms of contributor identification and version history.
Certain datasets cannot be shared (or need to be shared under restrictions), due for example to legal and other contractual reasons. Specific beneficiaries have stipulated that their data be kept closed, with reference to the provisions made in the consortium agreement.
In order to maintain FAIR management of the data, the consortium will dedicate its efforts to ensuring that:
1. All project outputs and otherwise produced project data is discoverable with relevant and logical metadata in a manner that allows for a lean discovery of the data in question. Search keywords should be added to the metadata where possible/reasonable, and at a minimum contain information about the time of the last edit, the author(s), the topic and institutional affiliation. The file type for text files shall be .pdf, .docx or .gdoc to ensure searchability and indexability, and the language used should be general and easily understandable.
2. All project data is identifiable and locatable by means of a standard identification mechanism. All files shall be named with reference to the deliverable and/or subtask the data is tied to or generated in relation to. All file names should be in English and uniformly marked with the institution responsible for the data (abbreviation).
3. All documents subject to edits and changes should have clear version numbers, with a change log presenting an overview of the previous versions. Version numbers should start at 0.0 and increase by 0.1 for every version (a small validation sketch follows this list).
4. All project data shall be made available openly by default, subject to restrictions in the Grant Agreement as well as when data privacy concerns takes precedence. ICDE as the project coordinator acts as the repository for all project generated data, and stores this on the shared storage database. Data is accessible (upon deposition) to other project partners via this platform (Google Shared Drive). Third parties may access subject to PCT approval (acting as an ad-hoc Data Access Committee in these questions), in which ICDE as administrator generates the necessary credentials to access the shared drive. The current solution, where ICDE is the project data repository shall be reviewed by M36.
5. The repository shall in the legitimate interest of protecting project data ascertain the identity of the person accessing data (access logs)
6. All data produced in the project should be interoperable, allowing for data exchange and re-use between the partners. To achieve this, only well-known formats should be used when storing data. Data stored in the MS Office file format family is to be sought, along with generally recognized open file formats based on the XML and CSV file structure.
7. All abbreviations used in the generated project data must be outlined in an annex or preamble to the relevant deliverable or text, in order to limit and map the project specific ontologies or vocabularies
8. If not in conflict with the Grant Agreement, all produced data shall be released under the Creative Commons licence "Attribution (CC BY) 4.0" to permit the widest re-use possible. The Grant Agreement and adjacent documents, such as the Consortium Agreement, take precedence.
9. Research data not involving protected (CA) background, sideground and foreground shall be made available to third parties as quickly as possible. In particular, research eligible for publication in journals should be submitted for publication, with notification sent to ICWE (dissemination partner).
10. Quality assurance of the data produced under the project shall be carried out by way of partner validation. When a final draft is ready (text, multimodal etc.), one other partner shall review the data to ensure its quality. In order to cover the costs for this, the reviewing partner must be related to the task and work package the data pertains to – so that it may report and claim its costs for the time spent on reviewing and ensuring the quality of the data.
11. Every partner is responsible for its own data management, but the project coordinator (ICDE) has an overarching responsibility to monitor and implement changes to protocols accordingly. The Legal and Data Protection Officer (LDPO) is the person responsible for following up on observed discrepancies from the plan, or acting upon notification received from a consortium partner. If a partner becomes aware of a breach of the DMP (by itself or others), the LDPO shall be notified immediately. The LDPO/DPO shall keep a breach log recording all incidents concerning deviations from the Data Management Plan.
12. By month 36 a detailed plan for the long-term preservation of the data shall be developed, assessing the costs and potential value of preservation, who decides what data will be kept, how, and for how long.
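As an illustration of the naming and versioning conventions in items 2 and 3, the following minimal Python sketch composes a compliant file name. The helper, its name and its separator characters are editorial assumptions, not prescribed by the plan:

```python
# Hypothetical helper (not project code): composes a file name carrying the
# deliverable/subtask reference and the responsible institution's abbreviation
# (item 2 above), plus a 0.x version number (item 3). Separators are assumptions.
def project_file_name(deliverable: str, topic: str, institution: str,
                      version: str, ext: str = "docx") -> str:
    return f"{deliverable}_{topic}_{institution}_v{version}.{ext}"

# Example: project_file_name("D1.4", "DataManagementPlan", "ICDE", "0.3")
# -> "D1.4_DataManagementPlan_ICDE_v0.3.docx"
```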
Under the scope of EPICA's data management plan, good data management is not a
goal in _itself_, but rather a foundation leading to knowledge discovery and
innovation; as such, the recording, management and storage of data and
knowledge enables re-use by the community after the data is published and
disseminated, subject to the aforementioned IPR-related and contractual
restrictions.
_**Figure 3:** The FAIR Guiding Principles (Wilkinson et al., Scientific Data
2016) _
# 5. DATA TRANSFER
In order to enable collaboration, exchange of project data and contributory
analysis, data must be transferred cross-border between multiple jurisdictions
and applicable data management regulations.
The EPICA project is in this regard subject to the GDPR framework – which
also regulates cross-border transfer of certain types of data.
In terms of collaborative efforts, the platforms utilized are Google Drive,
Docs and Sheets, as well as certain Microsoft Office online and local services.
These providers are incorporated in the United States, and in combination with
the consortium partners, the data is at any applicable time exchanged between
these identified countries:
1. Spain
2. Germany
3. Norway
4. Tanzania
5. Kenya
6. Uganda
7. United States
Intermediaries like AWS and other ISPs are considered to fall within the "Safe
Harbor" category, and as such are not a party to the transfer and exchange of
data. The legal requirements in TZ, KE and UG were mapped in D4.3, for which a
preliminary report was made available in May 2019. The report identified no
major barriers to the deployment and implementation of the MYD ePortfolio in
Tanzania, Uganda and Kenya. The national regulatory frameworks of these
countries do not have in place regulations that exceed the General Data
Protection Regulation (GDPR) level of requirements. Regarding intermediary
liability, the framework and subsequent practice seem to be harmonized and in
line with international developments, and the local legal requirements are
found to be either less stringent, or absent entirely.
The United States is furthermore under the Privacy Shield framework, and as
such there are regulations directly governing the legalities of EU/U.S. data
exchange/transfer. For the exchange between African- and European-based project
partners, the DMP is considered sufficient to provide "appropriate
safeguards" as to the protection and privacy policies applied to the data,
with reference to GDPR art. 46 and the preliminary legal requirements report.
_**Figure 4:** Structure of EPICAs data transfer relationships. _
# 6. DATA SECURITY
Considering that EPICA processes sensitive data, and consequently to ensure
data confidentiality, the following encryption standard has been analyzed and
is utilized:
_ISO/IEC 18033-2:2006 Information technology -- Security techniques --
Encryption algorithms -- Part 2: Asymmetric ciphers._
Both data at rest and data in flight will be encrypted using the AES symmetric
cipher while secure key exchanges will be performed using an asymmetrical
cipher of at least 2048 bits in length.
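The following minimal Python sketch illustrates the hybrid scheme just described – data encrypted with the AES symmetric cipher, and the AES key exchanged via an asymmetric (RSA) cipher of at least 2048 bits. It uses the widely available `cryptography` package; all names are illustrative, and this is a sketch under those assumptions rather than EPICA's actual implementation:

```python
# Illustrative hybrid encryption: AES-256-GCM for the payload, RSA-2048 (OAEP)
# to wrap/exchange the AES key. Not EPICA project code.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's asymmetric key pair (at least 2048 bits, as the DMP requires).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_transfer(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt data with a fresh AES key, then wrap that key with RSA-OAEP."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(aes_key, OAEP)
    return wrapped_key, nonce, ciphertext

def decrypt_after_transfer(wrapped_key: bytes, nonce: bytes,
                           ciphertext: bytes) -> bytes:
    """Unwrap the AES key with the private RSA key, then decrypt the payload."""
    aes_key = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)
```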
EPICA has an explicit plan for how the consortium will handle data generated
or provided by the participants (below, Fig. 5). The figure also illustrates
how different data types are handled and stored, with special emphasis on the
different approaches to anonymized and pseudo-anonymized data samples. These
differences are especially important due to the different ethical problems
these respective data types raise, as described in Chapter 2 of the deliverable
D1.2 Ethics, Data Protection and Privacy Management Plan.
The main risks identified are project partners, knowingly or unknowingly,
giving access to or sharing sensitive files with unauthorized persons outside
the project group.
The database in which the project data is stored is furthermore subject to an
access and change log.
_**Figure 5:** EPICA Data Storage Plan _
# 7. ETHICAL ASPECTS
See the D1.2 Ethics, Data Protection and Privacy Management Plan.
# 8. DATA MANAGEMENT PLAN FOR DATASETS
This form is to be completed and sent to the Project Data Protection Officer
before the acquisition of data is started – and is to form the basis for the
structure of any project dataset.
<table>
<tr>
<th>
**EPICA Dataset Template for Processing of Data**
</th> </tr>
<tr>
<td>
**Data Identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
</td> </tr>
<tr>
<td>
Source
</td>
<td>
</td> </tr>
<tr>
<td>
**Partners Activities and Responsibilities**
</td> </tr>
<tr>
<td>
Owner of Data
</td>
<td>
</td> </tr>
<tr>
<td>
Data Collection
</td>
<td>
</td> </tr>
<tr>
<td>
Data Analysis
</td>
<td>
</td> </tr>
<tr>
<td>
Data Storage
</td>
<td>
</td> </tr>
<tr>
<td>
Related WP(s) and Task(s)
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Metadata Information
</td>
<td>
</td> </tr>
<tr>
<td>
(Estimated) Volume of Data
</td>
<td>
</td> </tr>
<tr>
<td>
Format Standards
</td>
<td>
</td> </tr>
<tr>
<td>
**Data Exploitation and Sharing**
</td> </tr>
<tr>
<td>
Data Exploitation (purpose)
</td>
<td>
</td> </tr>
<tr>
<td>
Data Access Policy
</td>
<td>
</td> </tr>
<tr>
<td>
Data Sharing Policy
</td>
<td>
</td> </tr>
<tr>
<td>
Embargo Periods
</td>
<td>
</td> </tr>
<tr>
<td>
Personal Data
</td>
<td>
</td> </tr>
<tr>
<td>
Special Personal Data
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Duration of Data Storage
</td>
<td>
</td> </tr>
<tr>
<td>
Location of Data Storage
</td>
<td>
</td> </tr> </table>
# 9\. CONCLUSION
This Data Management Plan provides an overview of the framework of the data
management pertaining the EPICA project. Moreover, it describes the expected
sources, from where the data will be sourced from and the compliance to the
FAIR principles, cf. Section 4.
This framework is dynamic and expected to be enriched and elaborated within
the project lifetime. As the project is in the first 18 months of its cycle,
it has proven difficult to make estimates regarding the volume of user data
that will finally be preserved in the developed ePortfolio, as well as the
costs required for its preservation. This will be an area of focus in the
period between M18 and M36, and is closely tied to advances in the development
of the ePortfolio. Concerning the project data, however (requirements,
business development information and similar/other), the DMP in its current
form draws up the boundaries for the processing and management of the data.
The work to be performed in the coming months is expected to generate useful
data that will provide more accurate information on the data management. The
document will continue to be updated as EPICA progresses, and one final
version of D1.4 will be developed based on the findings available during the
corresponding period.
These versions will be delivered:
1. ~~Second version: M12~~
2. ~~Third version: M18~~
3. Fourth version: M36
_**Figure 6:** Schedule for current and future revised EPICA DMPs_
# 10. APPENDIXES
Self-declarations of GDPR compliance from project partners in non-EU/EEA
countries, for the purpose of data transfer and processing, are available upon
request from the EPICA DPO.
**_Letter template/text:_**
To be provided on the institution's letterhead to ICDE as coordinator:
**Recipient:**
_EPICA Legal, Ethics and Data Protection Officer_
_(NAME)_
_ICDE - International Council for Open and Distance Education Drammensveien
211, 0281 Oslo, Norway_
**Headline:** _Certification Attesting the Commitment to the EPICA Data
Management Framework_
**Text:**
_This is to certify that the_ ***NAME OF INSTITUTION/COMPANY*** _has
implemented the mandatory legal measures in relation to data protection and
operate in accordance with the established guidelines by national legislation,
the EU General Data Protection Regulation (GDPR) and the at any time
applicable project data management framework (DMP) as a partner in the EPICA
project_ .
**Signature:** _Your signature and title, and preferably institution’s
stamp/seal_
---

1444_Easy Reading_780529.md (Horizon 2020)
# Executive Summary
The present document is deliverable "D9.7 Data Management Plan" of the Easy
Reading project, which is funded by the European Union's Horizon 2020
Programme under Grant Agreement #780529.
The purpose of this document is to provide the plan for managing the data
generated and collected during the project. The Data Management Plan (DMP)
describes the data management life cycle for all data sets to be collected,
processed and/or generated by a research project. It covers:
* The handling of research data during and after the project
* What data will be collected, processed or generated
* What methodology and standards will be applied
* Whether data will be shared/made open and how
* How data will be curated and preserved
The DMP is currently in an initial state, as the project has just started.
Following the EU’s guidelines regarding the DMP, this document may be updated
- if appropriate - during the project lifetime (in the form of deliverables).
The DMP currently identifies the following data as research data generated
during the project:
* Structure of the user profile
* Usage statistics
* Anonymized user profile data
* User evaluations
* Administrative metadata
Most data sets will be provided openly on public web servers, via a REST-API
1 for real-time data, or by other means of provision. As the user profiles may
contain sensitive data even if they are anonymized, it is at this stage
unclear what will be openly available and what will not.
# Introduction
This document is the Data Management Plan (DMP). The consortium is required to
create the DMP because the Easy Reading project participates in the Open
Research Data pilot. The DMP describes the data management life cycle for all
data sets to be collected, processed and/or generated by a research project.
## Scope
The present document is the Deliverable 9.7 “D9.7 – Data Management Plan”
(henceforth referred to as D9.7) of the Easy Reading project. The main
objective of D9.7 is to provide the plan for managing the data generated and
collected during the project.
According to the EU’s guidelines regarding the DMP, the document may be
updated - if appropriate - during the project lifetime (in the form of
deliverables).
## Audience
The intended audience for this document is the Easy Reading consortium and the
European Commission.
# Data Summary
The Easy Reading framework will improve the cognitive accessibility of
original digital documents by providing real time personalisation through
annotation (using e.g. symbol, pictures, video), adaptation (using e.g.
layout, structure) and translation (using e.g. Easy-to-Read, Plain Language,
symbol writing systems). The framework provides these (semi-)automated
services using HCI techniques (e.g. pop-ups/Text-To-Speech (TTS)/captions
through mouse-over or eye-tracking) allowing the user to remain and work
within the original digital document. This fosters independent access and
keeps the user in the inclusive discourse about the original content. Services
adapt to each user through a personal profile (sensor based tracking and
reasoning of e.g. the level of performance, understanding, preferences, mood,
attention, context and the individual learning curve).
During the project, data will be generated to improve the Easy Reading
framework, to model the capabilities and preferences of the user and to
evaluate the success of the project. The purpose of the data
collection/generation can be subdivided into the following points:
* **Modelling the user:** The model of the user is required as a basis to provide services on top of this information which helps users with cognitive disabilities browsing the web.
* **Framework performance monitoring and improvement** : Collected data will be used to improve the tool. Evaluations of usage statistics for example will be used to determine which functions were more helpful than others, which configurations were used and on which content areas issues occurred for the user.
* **Matching services to user needs:** Based on user profile data and usage data, suggestions for services/functions can be made automatically.
* **Pre-configuration of functions to simplify web content:** Depending on user profile data and usage data, functions can be pre-configured to provide helpful support and a good user experience from the start when installing a new function.
* **Adjusting functions according to user profile:** Besides the initial configuration of the functions, adjustments and fine-tuning on-the-fly using usage statistics will be possible too.
* **Learning about the target group:** From a research perspective, major contributions in the field of web accessibility and people with cognitive disabilities can be made by collection and analysis of usage statistics.
* **Deducing rules for cognitively accessible web content:** Based on these findings, rules for cognitively accessible web content may be deduced and support web developers and content creators to make website easier to understand for everyone.
## Types and Formats of Data
Currently the following data sets have been identified. As mentioned before,
these are subject to change during the project lifetime.

* Structure of the user profile:
  * Cognitive capabilities of a user
  * Current mood of a user
  * Level of confusion
* Usage statistics:
  * Which target group uses which kinds of services
  * Accumulated usage statistics
  * Time of the day
  * Mood
  * Disabilities / Capabilities
* Anonymized user profile data:
  * Function configuration preferences: Describes which configuration of a function the user uses
  * Time consumption per UI-element: Describes at which elements of the content the user spends most of the time
  * Level of confusion per element: Describes if there are elements of the content which especially confuse the user
  * Understandability of elements: Describes which parts of the content the user understands/are easy to interact with, and which are not
* User evaluations: Evaluations will be conducted during the course of the project to ensure that the requirements of this manifold user group are considered adequately from the very beginning of the project. These evaluations involve user satisfaction tests to ensure that the user interface as well as the features of the tool meet the actual needs of people with cognitive disabilities.
* Administrative metadata: e.g. when and how data was created
The main data exchange format for data sets will be JSON, while the data
itself is stored in a relational database. Other kinds of structured data
will probably also be made openly available via a REST-service 2 using JSON 3
as the data format. Manually created data, like evaluations or data used in
publications, will be made available as PDFs.
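As an illustration of this provision model, the sketch below serves aggregated data as JSON from a relational database through a REST endpoint. It assumes a Flask/SQLite stack and an invented table; the project's actual service, endpoint path and schema are not yet defined:

```python
# Illustrative sketch only (not the Easy Reading API): exposing accumulated
# usage statistics from a relational database as JSON via a REST endpoint.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/usage-statistics")
def usage_statistics():
    con = sqlite3.connect("easyreading.db")  # hypothetical database file
    try:
        rows = con.execute(
            "SELECT FunctionName, UsageCount FROM UsageStatistic"
        ).fetchall()
    finally:
        con.close()
    # One JSON object per aggregated statistic, never per individual user.
    return jsonify([{"function": name, "count": count} for name, count in rows])

if __name__ == "__main__":
    app.run()
```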
## Reuse of existing data
At this stage of the project, no existing data is planned to be reused. This
might be subject to change during the lifespan of the project.
## Origins of Data
Most data will be generated and retrieved during the actual usage of the tool,
either by user interaction or by tracking of the user’s actions. In addition,
relevant data will be created due to configuration processes either by the
user or a caregiver. Evaluations will also provide further useful data.
* Data generated by user interaction: During user interaction, data can and will be collected to achieve the goals mentioned before. Amongst others, the following data may be useful:
  * The functions the user prefers to use
  * The way the user uses functions and how the tool and its features are configured
  * The kind of websites the user visits. This is very sensitive data of course, so some other data might be used instead: for example not the actual website, but measures like the complexity of the website layout or the complexity of the text.
  * The time users spend on a website, or parts of a website. Again, this is of course very sensitive data and due to ethical considerations, other measures which reflect the most interesting outcomes will be used instead.
  * Level of confusion of the user
* Tracking of the user with sensors will be possible:
  * Mouse and keyboard tracking
  * Head and eye tracking
* Due to client-side configuration of functions, data will also be generated
* Data will also be generated by carers who can do an initial configuration, or by users of the target group who can use a wizard independently, which allows them to manually add information about capabilities and preferences
* User evaluations through questionnaires, interviews and other means.
## Expected Size of the Data
At this state of the project, the expected size of the data is unknown, as the
user profile and its structure is not fully defined and implemented.
## Data Utility
The data will be useful to the project (consortium), to other research
projects in a similar field which are concerned with people with cognitive
disabilities consuming web content and for companies which want to create
products or content of this kind with people with cognitive disabilities in
mind.
# FAIR Data
The research data generated by the Easy Reading project should be 'FAIR';
findable, accessible, interoperable and re-usable.
## Findable Data
* **Discoverability of data:** Since the user and usage data is stored in a relational database, it can be accessed using SQL queries. The database structure/schema will be made available in some sort of wiki.
* **Identifiability of data:** Not specified at this stage of the project
* **Naming conventions:** Most of the data will be stored in a relational database. Therefore standard SQL database naming conventions are being used (see the sketch after this list):
  * Singular names for tables
  * Singular names for columns
  * Schema name as table prefix (e.g.: SchemeName.TableName)
  * Pascal casing (a.k.a. upper camel case)
* **Search keywords:** Not specified at this stage of the project
* **Clear versioning:** Not specified at this stage of the project
* **Metadata creation standards used:** The use of specific standard for metadata creation is not yet decided upon.
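A minimal sketch of the naming conventions above, under the assumption of an SQLite backing store; the table and columns are invented for illustration, since the actual schema is not public at this stage:

```python
# Illustration only: a table following the conventions above – singular table
# and column names, Pascal casing. SQLite has no schemas by default, so the
# SchemeName.TableName prefix is shown in a comment.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE UserProfile (        -- would be e.g. EasyReading.UserProfile
        UserProfileId INTEGER PRIMARY KEY,
        CognitiveCapability TEXT,
        CurrentMood TEXT,
        LevelOfConfusion INTEGER
    )
""")
con.close()
```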
## Accessible Data
Data generated by the Easy Reading project contains sensitive data. For
example the user profiles themselves, even if they are anonymized, are
considered sensitive. Therefore the consortium is very cautious on making this
data openly accessible. Currently following data might be made openly
available:
* structure of the user profile
* user evaluations
* accumulated usage statistics (in contrast to individual user statistics)
* deduced data from evaluations as part of publications
At this stage of the project it is not fully decided if individual usage
statistics, anonymized user profile data and administrative metadata will be
made openly accessible due to the sensitive nature of this kind of data.
Openly available data will be made public by following means:
* Publications (evaluations, accumulated usage statistics)
* Direct download (user profile structure)
To access openly available data no special software or method will be required
at the current stage. For publications and evaluations, a standard PDF viewer
is sufficient. Usage statistics and the user profile structure can be
displayed using a JSON scheme parser. This data and its associated metadata,
documentation and code are deposited on the project website.
**Data restrictions:** There are no restrictions to openly available data.
Other data is currently only available for consortium members and it needs to
be decided which parts of this data will be made available and in which way.
## Interoperable Data
Data is structured (relational database, JSON), but due to the highly
sensitive nature of parts of this data, it is not openly available at this
point in time. However, in the future interoperability of less sensitive parts
of the data is easily possible (e.g. parts of usage statistics with REST
service and data exchange format JSON). Data types are not fully decided at
this stage of the project and will evolve over the course of the project, but
standard types will of course be applied as often as possible.
## Data Reusability
To ensure FAIR use of data and to ensure widest reuse possible, openly
available data generated by the project is planned to be licensed under the
Apache License 2.0. Third parties will be able to use this openly available
data.However, it is at this stage of the project not possible to specify a
date when the data is available for reuse. Neither is it fully decided which
parts of the data will be openly available. For instance, parts of the usage
statistics might be deemed as sensitive data in terms of privacy.
# Allocation of Resources
Costs for making the project data FAIR are covered by the project budget. The
project leader (JKU) coordinates the data management of the project. The costs
and potential value of long-term preservation have not yet been determined due
to the early stage of the project. This will be dealt with later over the
course of the project.
# Data Security
As sensitive data is stored and transferred, data security is of utmost
importance for the Easy Reading project. At the moment, data used by the
framework is stored on cloud server infrastructure from IBM (IBM Bluemix).
Later on in the project this could be moved to Amazon AWS. Both platforms
provide sufficient measures and tools to ensure data security. Data will be
stored on servers in the EU.
* IBM Bluemix Cloud Security: _https://console.bluemix.net/docs/security/index.html#security_
* Amazon AWS Cloud Security _https://aws.amazon.com/security/_
Regular cron-job backups of the relational database will ensure easy data
recovery. Data transfer will be done exclusively via HTTPS/WSS (WebSocket
Secure).
# Ethical Aspects
At the current stage of the project all ethical aspects are covered in section
5 “Ethics and Security” of the Grant Agreement of the project.
# Conclusions
The purpose of this document is to provide the plan for managing the data
generated and collected during the project; The Data Management Plan.
Specifically, the DMP describes the data management life cycle for all data
sets to be collected, processed and/or generated by a research project. It
covers:
* the handling of research data during and after the project
* what data will be collected, processed or generated
* what methodology and standards will be applied
* whether data will be shared/made open and how
Following the EU's guidelines regarding the DMP, this document may be updated
- if appropriate - during the project lifetime (in the form of deliverables).
Due to the sensitive nature of some data collected, it is unclear at this
point of the project if all data will be made openly available. Data sets that
will be openly provided to the public will be hosted on web servers or
provided in real time via REST endpoints. Finally, data sets will be preserved
after the end of the project on the pilot’s web sites, on web servers or other
web-based solutions.
---

1449_VOICI_785401.md (Horizon 2020)
# Executive Summary
A Data management plan for VOICI is described. Generally FAIR principles will
be followed (findable, accessible, interoperable and reusable data). The
consortium or the Topic Manager may, however, decide that individual data
items are too sensitive for FAIR to apply, if this is justified by privacy of
persons or commercial exploitation of results. Data will be deposited on the
Zenodo repository. The data will mainly be speech and cockpit noise recordings
obtained in the Audio Evaluation Environment (the VOICI lab).
**About the project:** The main objective of VOICI is to demonstrate the
technology that implements an intelligent natural crew assistant in a cockpit
environment up to TRL 3. This is implemented through the following _specific
objectives_ as stated in the DoA:
**Obj. 1:** Develop a non-intrusive voice acquisition system that allows
separating speakers from each other and filtering speakers from background
noise. The ambition is to acquire speech input of sufficient quality from both
(i) crew headsets and (ii) an ambient microphone array system.
**Obj. 2:** Develop an audio evaluation environment emulating the audio
environment in the cockpit.
**Obj. 3:** Develop a high-end speech recognizer that reaches a word error
rate (WER) of 5% in harsh environment. This engine will recognize not only
aircrew requests but also inputs from the ATC radio communication.
**Obj. 4:** Develop an intelligent agent that naturally interacts with the
aircrew and the existing aircraft system through pre-defined flight scenarios
and that is aware of the flight situation through communication with other
cockpit sub-systems and flight procedures 1 .
**Obj. 5:** Develop the whole system in such way that it can be ultimately
embarked in the cockpit without external dependencies with external cloud-
based services.
**Obj. 6:** Test and evaluate the performances of the system under several
flight phases with the defined highlevel requirements.
# Introduction
## Purpose and structure of the document
The document describes how the VOICI consortium aims to manage scientific data
within the project, with emphasis on data that will be openly accessible in
accordance with the Horizon 2020 Open Research Data Pilot. The report is based
on _H2020 templates: Data management plan v1.0 – 13.10.2016._
## Intended readership
The document is mainly intended for Clean Sky 2, the Topic Manager and the
VOICI consortium, but is open to any interested reader.
# Data Summary
Data are gathered to facilitate development of technology for
* noise suppression for speech-input (WP1-2)
* speech recognition for targeted vocabulary in noisy environment (WP2)
* natural language understanding of operational requests and dialog management system (WP3)
* computer generated speech as output from dialog systems (WP3)
A target cockpit has been selected by Thales as Topic Manager: a Dassault
Falcon 2000 business jet. To aid technology development and evaluation an
Audio Evaluation Environment (AEE) is established in WP1: a laboratory that
recreates the geometry and noise conditions of the target cockpit.
## Pre-existing data to be used
1. LibriSpeech ASR corpus from _http://www.openslr.org/12/_ to train an English speech recognition engine.
2. Falcon 2000 aircraft dimensions to build the AEE (approximate, from drawings and photos).
3. TTS Voice portfolio of Acapela Group
## Data anticipated to be obtained/generated in VOICI – and their planned accessibility
For details about data already published see the tables in Appendix A.
Remaining items for publication in the below list will be reported in the same
manner.
1. Cockpit noise recorded in real Falcon 2000 jets, effectively a single microphone per flight session.
See Appendix A.2.
_Format_ : wav _Size_ : 0.1-1 gigabytes (GB) per recording
_Origin_ : Thales, SINTEF _Dissem. level_ : Public
_Publication_ : May 2019.
2. Speech recorded in the Audio Evaluation Environment (AEE): Speech as defined in section 2 played back over a Head and Torso Simulator (HATS) and recorded via headset microphone
_Format_ : wav + txt for transcription _Size_ : up to 150GB
_Origin_ : _http://www.openslr.org/12/_ _Dissem.level_ : Public 0
_Publication anticipated_ : Dec 2019
3. Speech recorded in the Audio Evaluation Environment (AEE): Speech played back over a Head and Torso Simulator (HATS) and recorded via microphone arrays
_Format_ : wav + txt for transcription _Size_ : up to 150GB per microphone
_Origin_ : _http://www.openslr.org/12/_ _Dissem.level_ : Public 0
_Publication anticipated_ : Dec 2019
4. Noise in the AEE, based on (i) above, recreated using multiple distributed loudspeakers, and recorded by the same microphones as in (i) and (iii) above.
_Format_ : wav _Size_ : up to 150GB per microphone
_Origin_ : Thales, SINTEF _Dissem.level_ : Public 0
_Publication anticipated_ : Dec 2019
5. Impulse responses between different sound sources (speech, noise) and receivers (microphones).
_Format_ : wav _Size_ : typically 0.1-1 megabytes (MB) per response
_Origin_ : SINTEF _Dissem.level_ : Public
_Publication anticipated_ : Dec 2019
6. Recordings from a professional speaker (the voice talent) to create the Assistant voice
_Format_ : wav _Size_ : 2 GB
_Origin_ : ACAPELA _Dissem. level_ : Confidential
7. Speech data recorded by pilots using operational requests (ATC communications and/or voice assistant requests).
_Format_ : wav + txt for transcription _Size_ : X GB
_Origin_ : Thales _Dissem.level_ : Confidential (Commercial)
**Utility of data** : The purpose of all data is to facilitate technical
development within the project. All the above items are likely to be of
interest to the academic community carrying out research on microphone arrays,
speech recognition and dialog systems. In particular, interest is expected for
aviation applications.
Access to data is described in section 3.
# FAIR data
VOICI aims to make project data findable, accessible, interoperable and
reusable (FAIR) in accordance with the guidelines of the Horizon 2020 Open
Research Data Pilot. The consortium or the Topic Manager may, however, decide
that individual data items are too sensitive for FAIR to apply, if this is
justified by privacy of persons or commercial exploitation of results. This
will be implemented as follows: The institution responsible for a data item
will notify the others by email when it is ready for publication and await
objections for four weeks. The data will then be promptly published unless
objections are received. In the latter case publication will be abandoned. The
responsible institution may then ask the Steering Committee to decide, in
dialog with the Topic Manager. The final decision will in this case be based
on the Consortium Agreement, i.e. mainly founded on consensus principles.
## Making data findable, including provisions for metadata
VOICI will use the _Zenodo repository_ as the main tool to comply with the
H2020 Open Access mandate. A VOICI community will be established. All public
data sets and scientific articles/papers will be uploaded to this community in
Zenodo and enriched with standard Zenodo metadata, including Grant Number and
Project Acronym. Data sets that are not public will be stored at the relevant
partner. Relevant keywords will be assigned to each data set.
Zenodo provides version control and assigns DOIs to all uploaded elements.
**Naming conventions:**
Data will be named using the following naming conventions:
_Descriptive text CleanSky2_VOICI_DeliverableNumber_UniqueDataNumber_
_Descriptive text CleanSky2_VOICI_PublicationNumber_UniqueDataNumber_
_Data set folder CleanSky2_VOICI_DatasetNumber_ UniqueDataNumber_
For each speaker, a unique folder will be created (for protection of speakers'
identity, see section 5) _CleanSky2_VOICI_DatasetUser_UniqueUserID_
**Digital Object Identifiers (DOI)**
DOI's for all datasets will be reserved and assigned with the DOI
functionality provided by Zenodo. DOI versioning will be used to assign unique
identifiers to updated versions of the data records.
**Metadata**
Metadata associated to each published dataset will by default be
* Digital Object Identifiers and version numbers
* Bibliographic information
* Keywords
* Abstract/description
* Associated project and community
* Associated publications and reports
* Grant information
* Access and licensing info
* Language
**Metadata** for speech corpus (TIMIT-like metadata)
1. General presentation : template from default metadata (see above)
2. Corpus speaker distribution : native speaker (specify dialect regions) or not (specify mother language), male or female, number of speakers and their age, pilot or not.
3. Corpus text material : description of the sentences used by the speakers, distribution of sentences per speaker.
## Making data openly accessible
All data sets with dissemination level "Public" will be uploaded to Zenodo and
made open and free of charge.
Publications and underlying data sets will be linked through persistent
identifiers (DOIs).
Metadata, including licences for individual data records as well as record
collections, will be harvestable using the OAI-PMH protocol by the record
identifier and the collection name. Metadata is also retrievable through the
public REST API. The data will be available through www.zenodo.org, and hence
accessible using any web browsing application.
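For illustration, metadata for records in a Zenodo community can be retrieved through the public REST API as sketched below; the community slug is taken from Appendix A, and the snippet is an editorial sketch rather than project tooling:

```python
# Minimal sketch: list DOIs and titles of records in a Zenodo community
# through the public REST API (community slug assumed from Appendix A).
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"communities": "h2020_cleansky_voici", "size": 10},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit.get("doi"), "-", hit["metadata"]["title"])
```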
Data sets with dissemination level "Confidential" will not be shared due to
commercial exploitation and/or person privacy protection. Zenodo's mechanism
for time-limited embargo will be considered in individual cases.
## Making data interoperable
Zenodo uses JSON Schema as internal representation of metadata and offers
export to other popular formats such as Dublin Core, MARCXML, BibTeX, CSL,
DataCite and export to Mendeley. The data record metadata will utilise the
vocabularies applied by Zenodo. For certain terms these refer to open,
external vocabularies, e.g.: license (Open Definition), funders (FundRef) and
grants (OpenAIRE). Reference to any external metadata is done with a
resolvable URL.
## Increase data re-use (through clarifying licences)
VOICI will enable third parties to access, mine, exploit, reproduce and
disseminate (free of charge for any user) all public data sets, and regulate
this by using _Creative Commons Licences._
As default, the CC-BY-SA license will be applied for public VOICI data. This
license lets others remix, tweak, and build upon published work even for
commercial purposes, as long as they credit the original work and license
their new creations under the identical terms. This license is often compared
to “copyleft” free and open source software licenses. All new/derived work
will carry the same license as the original, so any derivatives will also
allow commercial use. This does not preclude use of less restrictive licenses
such as CC-BY, or more restrictive licenses such as CC BY-NC, which does not
allow commercial usage. This will be assessed in each case.
For data published in scientific journals the dataset will be made available
simultaneously with granting of open access for the paper or the preprint. The
data will be linked to the paper. For data associated with public deliverables
data will be shared after approval of the deliverable by the EC.
Open data will be reusable as defined by their licenses. Data classified as
confidential will as default not be reusable due to commercial exploitation or
privacy of persons.
The public data will remain re-usable for an unlimited time, limited only by
the lifetime of the Zenodo repository. This is currently the lifetime of the
host laboratory CERN, which has an experimental programme defined for at least
the next 20 years. In case Zenodo is phased out, its policy is to transfer
data/metadata to other appropriate repositories.
# Allocation of resources
VOICI uses standard tools, and a free of charge repository. The costs of data
management activities are limited and will be covered by the project grants.
Potential resource needs to support reuse of data after the active project
period will be solved from case to case.
SINTEF is the lead for WP 4 Dissemination, communication and exploitation.
# Data security
**Repository - data security is as specified by the Zenodo**
1. Versions: Data files are versioned. Records are not versioned. The uploaded data is archived as a Submission Information Package. Derivatives of data files are generated, but original content is never modified. Records can be retracted from public view; however, the data files and record are preserved.
2. Replicas: All data files are stored in CERN Data Centres, primarily Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis.
3. Retention period: Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental programme defined for the next 20 years at least.
4. Functional preservation: Zenodo makes no promises of usability and understandability of deposited objects over time.
5. File preservation: Data files and metadata are backed up nightly and replicated into multiple copies in the online system.
6. Fixity and authenticity: All data files are stored along with an MD5 checksum of the file content. Files are regularly checked against their checksums to assure that file content remains constant (a generic sketch of such a check follows this list).
7. Succession plans: In case of closure of the repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject based repositories.
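The fixity check in item 6 can be sketched generically as follows; this is an editorial illustration of the technique, not Zenodo's internal code, and the file paths are assumptions:

```python
# Generic fixity check: verify a file against its stored MD5 checksum.
import hashlib

def md5_of(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(path: str, expected_md5: str) -> bool:
    """True if the file content still matches the checksum recorded at upload."""
    return md5_of(path) == expected_md5
```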
### Use of recorded speech data
Data management procedures will protect the confidentiality of human speakers
taking part in the VOICI project. A unique subject number will be assigned to
each speaker immediately after informed consent has been obtained. This number
will serve as the speaker’s identifier in the validated database. The
speaker’s recorded speech will be stored under this number. Only the partners
will be able to link the speakers’ data to a specific subject via an
identification list kept private among the partners. Data protection and
privacy regulations will be observed in capturing, forwarding, processing, and
storing subjects’ data. Speakers will be informed accordingly and will be
requested to give their consent on data handling procedures in accordance with
national regulations and the EU General Data Protection Regulation (GDPR).
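A minimal sketch of this pseudonymisation step (illustrative only, not VOICI code; the folder naming reuses the convention from section 3.1):

```python
# Each consenting speaker receives a unique subject number; the link between
# number and identity is kept in a private list held by the partners.
import uuid

identification_list: dict[str, str] = {}  # subject number -> identity (private)

def register_speaker(speaker_identity: str) -> str:
    """Assign a unique subject number after informed consent has been obtained."""
    subject_number = uuid.uuid4().hex[:8]
    identification_list[subject_number] = speaker_identity
    # Recordings are stored under the subject number only, e.g. in a folder
    # following CleanSky2_VOICI_DatasetUser_UniqueUserID:
    return f"CleanSky2_VOICI_DatasetUser_{subject_number}"
```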
# Ethical aspects
Other than what is mentioned above, under "recorded speech data", no ethical
or legal issues have been identified, that can have an impact on data sharing.
# Appendix A: Data generated
A Zenodo community has been established: **(H2020 CleanSky JU) VOICI**
_https://www.zenodo.eu/communities/h2020_cleansky_voici_
The following data, A.1 and A.2, have been prepared and made available on this
community:
## A.1 Clean Sky 2 VOICI project information
<table>
<tr>
<th>
Name of IADP/ITD/TA/TE2/Domain
</th>
<th>
SYSTEMS ITD
</th> </tr>
<tr>
<td>
Data Storage
</td>
<td>
ZENODO
</td> </tr>
<tr>
<td>
Link to repository
</td>
<td>
_https://zenodo.org/_
</td> </tr>
<tr>
<td>
Dataset Identifier
</td>
<td>
DOI: 10.5281/zenodo.2658911
</td> </tr>
<tr>
<td>
Relevant Keywords
</td>
<td>
Cockpit, Noise, Crew assistant
</td> </tr>
<tr>
<td>
Data Licence
</td>
<td>
Creative Commons Attribution-ShareAlike 4.0 International
</td> </tr>
<tr>
<td>
Date for Data Publication
</td>
<td>
2019-05-03
</td> </tr>
<tr>
<td>
Date of data collection
</td>
<td>
2019-05-03
</td> </tr>
<tr>
<td>
Data Version
</td>
<td>
Zenodo DOI versioning
</td> </tr>
<tr>
<td>
Data Preservation time
</td>
<td>
Lifetime of Zenodo
</td> </tr>
<tr>
<td>
Name of the Data Set Responsible (DSR)
</td>
<td>
(owner of the data) Tor Arne Reinen
</td> </tr>
<tr>
<td>
DSR e-mail
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
DSR Telephone
</td>
<td>
+47 48288362
</td> </tr>
<tr>
<td>
Funding body(ies)
</td>
<td>
European Union’s H2020 through Clean Sky 2 Programme.
</td> </tr>
<tr>
<td>
Grant number
</td>
<td>
_785401_
</td> </tr>
<tr>
<td>
Partner organisations
</td>
<td>
SINTEF, Multitel, sensiBel, Acapela
</td> </tr>
<tr>
<td>
Project duration
</td>
<td>
Start: 2018-03-01 End: 2020-02-28
</td> </tr>
<tr>
<td>
Date DMP created
</td>
<td>
2018-09-28
</td> </tr>
<tr>
<td>
Date last update
</td>
<td>
2019-10-11
</td> </tr>
<tr>
<td>
Version
</td>
<td>
No. 2
</td> </tr>
<tr>
<td>
Name of the DMPR (responsibilities for data management of the
IADP/ITD/TA/TE2)
</td>
<td>
NA
</td> </tr>
<tr>
<td>
DMPR e-mail
</td>
<td>
NA
</td> </tr>
<tr>
<td>
DMPR Telephone
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Description of the research
</td>
<td>
The main objective of VOICI is to demonstrate the technology that implements an intelligent natural crew assistant in a cockpit environment up to TRL 3.
</td> </tr>
<tr>
<td>
Data Collection
</td>
<td>
Description of the VOICI project, as background for other data items.
</td> </tr>
<tr>
<td>
Existence of similar data
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Nature of the Data
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Data Type
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
.pdf
</td> </tr>
<tr>
<td>
Data Size
</td>
<td>
1.6 MB
</td> </tr>
<tr>
<td>
Number of files
</td>
<td>
1
</td> </tr>
<tr>
<td>
Descriptive file
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Data File
</td>
<td>
VOICI project info.pdf
</td> </tr>
<tr>
<td>
Quality/Accuracy
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Unit measurement system
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Potential Users
</td>
<td>
Universities, Research Centers
</td> </tr>
<tr>
<td>
Ethical Issue
</td>
<td>
NA
</td> </tr> </table>
## A.2 Cockpit noise Falcon 2000LXS TRH-OSL CleanSky2 VOICI Data 1
<table>
<tr>
<th>
Name of IADP/ITD/TA/TE2/Domain
</th>
<th>
SYSTEMS ITD
</th> </tr>
<tr>
<td>
Data Storage
</td>
<td>
ZENODO
</td> </tr>
<tr>
<td>
Link to repository
</td>
<td>
_https://zenodo.org/_
</td> </tr>
<tr>
<td>
Dataset Identifier
</td>
<td>
DOI: 10.5281/zenodo.2660112
</td> </tr>
<tr>
<td>
Relevant Keywords
</td>
<td>
Cockpit, Noise, Crew assistant
</td> </tr>
<tr>
<td>
Data Licence
</td>
<td>
Creative Commons Attribution-ShareAlike 4.0 International
</td> </tr>
<tr>
<td>
Date for Data Publication
</td>
<td>
2019-05-03
</td> </tr>
<tr>
<td>
Date of data collection
</td>
<td>
2018-09-13
</td> </tr>
<tr>
<td>
Data Version
</td>
<td>
Zenodo DOI versioning
</td> </tr>
<tr>
<td>
Data Preservation time
</td>
<td>
Lifetime of Zenodo
</td> </tr>
<tr>
<td>
Name of the Data Set Responsible (DSR)
</td>
<td>
(owner of the data) Tor Arne Reinen
</td> </tr>
<tr>
<td>
DSR e-mail
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
DSR Telephone
</td>
<td>
+47 48288362
</td> </tr>
<tr>
<td>
Funding body(ies)
</td>
<td>
European Union’s H2020 through Clean Sky 2 Programme.
</td> </tr>
<tr>
<td>
Grant number
</td>
<td>
_785401_
</td> </tr>
<tr>
<td>
Partner organisations
</td>
<td>
SINTEF, Multitel, sensiBel, Acapela
</td> </tr>
<tr>
<td>
Project duration
</td>
<td>
Start: 2018-03-01 End: 2020-02-28
</td> </tr>
<tr>
<td>
Date DMP created
</td>
<td>
2018-09-28
</td> </tr>
<tr>
<td>
Date last update
</td>
<td>
2019-10-11
</td> </tr>
<tr>
<td>
Version
</td>
<td>
No. 2: Year 1 review
</td> </tr>
<tr>
<td>
Name of the DMPR (responsibilities for data management of the
IADP/ITD/TA/TE2)
</td>
<td>
NA
</td> </tr>
<tr>
<td>
DMPR e-mail
</td>
<td>
NA
</td> </tr>
<tr>
<td>
DMPR Telephone
</td>
<td>
NA
</td> </tr>
<tr>
<td>
Description of the research
</td>
<td>
The main objective of VOICI is to demonstrate the technology that implements an intelligent natural crew assistant in a cockpit environment up to TRL 3.
</td> </tr>
<tr>
<td>
Data Collection
</td>
<td>
Cockpit noise recording, Falcon 2000 flight Trondheim – Oslo. Pilot speech and pitchtrim signal removed.
</td> </tr>
<tr>
<td>
Existence of similar data
</td>
<td>
Not known
</td> </tr>
<tr>
<td>
Nature of the Data
</td>
<td>
Experimental data
</td> </tr>
<tr>
<td>
Data Type
</td>
<td>
Audio file
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
.wav
</td> </tr>
<tr>
<td>
Data Size
</td>
<td>
290 MB
</td> </tr>
<tr>
<td>
Number of files
</td>
<td>
2 (data + descriptive file)
</td> </tr>
<tr>
<td>
Descriptive file
</td>
<td>
Cockpit noise Falcon 2000LXS TRH-OSL description.pdf
</td> </tr>
<tr>
<td>
Data File
</td>
<td>
Recording 6_low_freq_boost_remove_speech_2_WoodPk_reinsert_16b.wav
</td> </tr>
<tr>
<td>
Quality/Accuracy
</td>
<td>
Calibrated recording, as specified in Descriptive file
</td> </tr>
<tr>
<td>
Unit measurement system
</td>
<td>
SI
</td> </tr>
<tr>
<td>
Potential Users
</td>
<td>
Universities, Research Centers
</td> </tr>
<tr>
<td>
Ethical Issue
</td>
<td>
NA
</td> </tr> </table>
---

1453_NATHENA_785520.md (Horizon 2020)
<table>
<tr>
<th>
Designing and manufacturing a complex core structure well adapted to the inner
thermal phenomena seems to be a promising way to increase performance.
Accordingly, the NATHENA project aims at developing new complex inner
structures for heat exchangers.
The NATHENA project will focus on the design and development of a complex
compact heat exchanger, made by additive manufacturing, that best addresses
thermal performance.
These new compact air-air heat exchangers will provide an efficient thermal
management system dedicated to hybrid propulsion systems.
Two types of material will be studied regarding heat exchanger use: Aluminum
for low temperature range and Inconel for high temperature range.
The set objectives (see targets below) will be reached using calculation and
multi-physical simulation (thermomechanical-fluidic) applied to evolutionary
latticed and thin-walled structures combined optionally with fins to form a
matrix of complex structures.
Predictive models and/or laws will be developed for pressure and temperature
drop.
Topological and parametric optimization will be carried out in an iterative
way towards the most efficient model.
Through sample tests and the finite element method, calculation correlations
will be carried out to ensure the relevance and validity of the basic
structural choices as well as their combinations.
_Targets_ (a consistency check follows this table):
* Delta temperature: 200°C to 400°C
* Flow: 0.01kg/s to 2kg/s
* Power: 0.5 to 500kW
* Reynolds number: 400 to 10000
* Pressure drop: 100mBar max
* Size: up to 500x300x300mm
</th> </tr> </table>
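As a rough editorial consistency check of these targets (assuming air with $c_p \approx 1005\ \mathrm{J/(kg\,K)}$; this is an illustration, not a figure from the project), the standard heat-duty relation gives

$$\dot{Q} = \dot{m}\, c_p\, \Delta T, \qquad 2\ \mathrm{kg/s} \times 1005\ \mathrm{J/(kg\,K)} \times 250\ \mathrm{K} \approx 500\ \mathrm{kW},$$

which matches the upper ends of the stated flow, temperature and power ranges.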
# _2.2 ACTORS OF THE PROJECT_
<table>
<tr>
<th>
**Scientific Coordinator**
</th>
<th>
**LIEBHERR-AEROSPACE TOULOUSE SAS,** established
in 408 avenue des Etats Unis, 31200 Toulouse, France, ("Topic Manager" in the
meaning of the CS2 Grant
Agreement for Partners), represented by :
* Elodie HERAIL, R&T and Development Programs, Program manager
* Dr. Gregoire HANSS, Acoustic & Aerodynamics Manager
</th> </tr> </table>
<table>
<tr>
<th>
**Project Partners**
</th> </tr>
<tr>
<td>
**Project Leader / Coordinator**
</td>
<td>
SOGECLAIR AEROSPACE SAS, established in AVENUE ALBERT DURAND 7, BLAGNAC 31700,
France, represented by :
* Patricia SANDRE, Innovation Department Manager
* Serge RESSEGUIER, Innovation project Manager
</td> </tr>
<tr>
<td>
Entity Description: SOGECLAIR aerospace SAS (SGA-F), as part of the holding
SOGECLAIR SA, is a major partner in engineering and a prime contractor for the
aerospace industry for each and all of its domains of expertise and product
line.
SGA-F is the French part of the division “SOGECLAIR aerospace”. With a total
team of nearly 610 highly qualified people, SGA-F relies on a variety of
different partnership modes and sites to ensure its development for the
benefit of its customers.
Our services come in various forms, quality consultancy and management in the
areas of :
* Aerostructures,
* Systems Installation,
* Configuration and Product Data Management,
* Equipment,
* Manufacturing engineering.
</td> </tr>
<tr>
<td>
**Participant 1**
</td>
<td>
ADDUP, established in 5 RUE BLEUE ZONE INDUSTRIELLE DE LADOUX, CEBAZAT 63118,
France, represented by :
\- Albin EFFERNELLI, R&D Engineer
</td> </tr>
<tr>
<td>
Entity Description : AddUp was created in 2016 as a joint venture of two large
companies, Fives and Michelin, as a provider of complete industrial metal 3D
printing solutions:
* Machine design and production, integration into a full production line, from powder management to the finished part.
* Customer assistance on metal part production, to support additive manufacturing investment projects or additional production needs,
* A cross-functional service activity, including re-design of parts and additional services associated to the machine offer, to help industrial companies find the right technological and financial solutions.
</td> </tr> </table>
<table>
<tr>
<th>
**Participant 2**
</th>
<th>
TEMISTH SAS, established in 45 rue Frédéric Joliot-Curie, MARSEILLE 13382,
France, represented by :
* Dr. Jean-Michel HUGO: CEO and R&D Manager
* Dr. Damien SERRET: Business and Innovation
Manager
</th> </tr>
<tr>
<td>
Entity Description: TEMISTH is a spin-off of the IUSTI Laboratory (CNRS UMR
7343), created in 2012 and specialized in the energy efficiency of thermal
systems. The company develops numerical tools and virtual concepts dedicated
to heat exchanger production by Additive Manufacturing. The entity is mainly
headed by Dr. Jean-Michel HUGO and Dr. Damien SERRET, named above.
TEMISTH is working on new kinds of heat exchangers produced by additive
manufacturing and/or coupled with traditional processes. Our skills are based
on thermal modeling of heat transfer (convection, diffusion, radiation and
chemical reaction) coupled with fluid flow (turbulence), heat exchanger design
(technology review, sizing, innovative materials) and thermal
characterization. We propose to our customers to reduce the conception and
prototyping cycle using our own innovative design tools, which allow
generating in a short period a pre-design that can be used for fast
prototyping and testing, to make a proof of concept and analyze it. At this
step we propose a new cycle of conception to generate the best, customized
design at a high readiness level.
The industrial fields in which we operate are numerous: Aeronautics,
Aerospace, Transport, Oil & Gas, Electronics.
</td> </tr>
<tr>
<td>
**Participant 3**
</td>
<td>
INSTITUT VON KARMAN DE DYNAMIQUE DES FLUIDES, established in CHAUSSEE DE
WATERLOO 72, RHODE SAINT GENESE 1640, Belgium, represented by :
* Pr. Jean-Marie BUCHLIN, Head of EA Department
* Philippe PLANQUART, Research Manager EA
Department
</td> </tr>
<tr>
<td>
Entity Description : The von Karman Institute for Fluid Dynamics was founded
in 1956 by Professor Theodore von Karman as an international centre combining
education and research for citizens of NATO countries, under its motto
"High-Level Training in Research by Research". The IVKDF offers the
following educational programs: Lecture Series / Short Courses / Colloquia,
Short Training, University Master Thesis, Research Master in fluid dynamics,
Doctoral Program and Applied Research Program.
The VKI undertakes and promotes research on experimental, computational and
theoretical aspects of liquid and gas flows in the fields of the aeronautic,
aerospace, turbomachinery, environment and industrial and safety processes.
About fifty different specialized test facilities are available, some of which
are unique or the largest in the world. Research is carried out under the
direction of the faculty and research engineers, sponsored mainly by
governmental and international agencies as well as industries.
The IVKDF activity in the field of heat transfer has been and continues to be
rich. It includes
applications to aeronautics/aerospace, turbomachinery and industrial
processes. It concerns both the organization of international events and
fundamental and applied researches. As examples, one can quote the following
thematic areas:
· Thermal storage in packed beds
· Design of fluidized bed heat exchangers
· Thermohydraulics phenomena in saturated active porous media
· Heat pipe heat exchangers
· Ribbed heat exchangers
· Impinging-jet heat exchangers
· Engine Bypass Flow Heat Exchangers
· Tubular heat exchanger for hydraulic mockup
· Air/Hydrogen precooler heat exchanger · Multi-roll heat exchanger
To carry out such studies, optical measurement techniques relying on the
liquid crystal and more particularly infrared thermography have been
developed, sometimes in tough operating conditions. A state of the art of the
IR thermography application to IVKDF studies is proposed in the paper
“Convective Heat Transfer and Infrared Thermography (IRTh)” by Buchlin, J.-M.
Journal of Applied Fluid Mechanics; 3; 1. January 2010”
</td> </tr>
<tr>
<td>
**Responsibility for data and update of the DMP**
</td>
<td>
SOGECLAIR aerospace SAS as Coordinator.
</td> </tr> </table>
# _2.3 RESOURCES NEEDED_
<table>
<tr>
<th>
**Material resources implemented**
</th> </tr>
<tr>
<td>
The data management during the project will not require, a priori, the
acquisition or installation of specific equipment. As the project is aimed at
industrial development, few data will be made public.
On the other hand, a secure exchange platform common to the partners has been
set up.
</td> </tr>
<tr>
<td>
**Human and training needs**
</td> </tr>
<tr>
<td>
There is no recruitment or training planned for data management to this day,
given the amount of data to be made public. These data will mainly be
scientific publications made by the Von Karman Institute or publishable “pdf
format” documents delivered to the European Commission.
</td> </tr>
<tr>
<td>
**Financial valuation of needs**
</td> </tr>
<tr>
<td>
The potential overhead associated with data management will be estimated
during the project. It should not be significant given the data to be made
public, and will be supported by each partner's existing data management means
for private data.
</td> </tr> </table>
# 3. Phase 2 – STORAGE, SHARING, PROTECTION AND DISSEMINATION DURING THE PROJECT
## 3.1 GENERAL INFORMATION ON THE DATA
As a reminder, this project is an industrial project. The data and results are
intended to lead to results that will be exploited by the scientific
coordinator according to the rules set out in the consortium agreement and in
the implementation agreement.
However, by mutual agreement between the partners and the scientific
coordinator, we have chosen to share some results by proposing public reports
that will provide information on the scientific process without giving
quantitative information.
The intellectual property concerning the data is managed in the implementation
agreement and the consortium agreement.
The project will draw on the experience and expertise of each partner and the
input data provided by the scientific coordinator.
Each dataset manager will be responsible for providing other partners with
data in formats that are neutral or compatible with the software provided for
the project.
The type and nature of data will be produced according to the following items:
  * Numerical simulations and design: the principal software packages used for these steps are CATIA V5, Star-CCM+, ANSYS Fluent, Patran/Nastran, Excel for spreadsheets and Word for reports. These data will mainly be generated by SOGECLAIR aerospace SAS and TEMISTH according to their respective project data management procedures.
* Preparation of manufacturing and manufacturing: these data will be managed and generated according to the internal procedure of ADDUP.
* Development of test fixture and testing implementation: these data will be managed and generated according to the internal procedure of the Von Karman Institute for Fluid Dynamics.
## 3.2 STORAGE AND SHARING DURING THE PROJECT
The generation, management and protection of data will be managed by each
partner according to its internal quality-security procedures.
In order to harmonize file names and ensure better version tracking, a file
naming procedure has been defined and shared between partners.
## 3.3 RISKS, SECURITY, DATA ETHICS
The main risks identified are data loss and breaches of confidentiality.
In order to manage the risks inherent in data security, rules on the
publication of data have been defined in the consortium agreement and in the
implementation agreement validated by all partners. In addition, data exchange
between partners is organized through the use of a dedicated and secured
platform.
Given the data that will be generated during the project, the project does not
raise ethical issues.
## 3.4 DISSEMINATION AND ARCHIVING
As a reminder, this is an industrial project. The data and results are
intended to be exploited by the scientific coordinator according to the rules
set out in the consortium agreement and in the implementation agreement.
Apart from the specific market defined in the implementation agreement, and in
accordance with the rules set out in the consortium agreement, the data and
results generated by the Nathena project may be used by the partners to
propose new products or innovative developments in various markets.
# 4\. Phase 3 - DISSEMINATION AND ARCHIVING AFTER THE PROJECT
## 4.1 IDENTIFICATION OF THE DATASETS
<table>
<tr>
<th>
**Number of datasets to be archived and / or disseminated.**
</th>
<th>
There will be at least two data sets corresponding to the two materials used
for the development of the prototypes.
</th> </tr>
<tr>
<td>
**Specific links or relationships between datasets**
</td>
<td>
The two datasets (one for each material) will be obtained with the same
methodology but will generate two different solutions (prototypes).
</td> </tr>
</table>
## 4.2 PROTECTION - EXCEPTION OF DISSEMINATION
<table>
<tr>
<th>
**Reasons why datasets might not be disseminated**
</th>
<th>
Data and results to be industrially exploited by the scientific coordinator
(framework defined in the Implementation Agreement), and potentially by
partners in areas out of the Implementation Agreement and defined in the
Consortium Agreement.
</th> </tr> </table>
## 4.3 DESCRIPTION OF TECHNICALLY HOMOGENEOUS DATASETS
Nothing to report at this stage of the project
## 4.4 DESCRIPTION OF TECHNICALLY HETEROGENEOUS DATASETS (intellectual coherence)
Nothing to report at this stage of the project
## 4.5 SORTING AND DATA ARCHIVING
Nothing to report at this stage of the project.
Sorting and archiving will conform to the internal procedures of the partners
and to the retention periods imposed by the Consortium Agreement and the
Implementation Agreement signed by all partners.
# INTRODUCTION
## Motivation
The DTOceanPlus project participates in the Pilot on Open Research Data
launched by the European Commission (EC) along with the H2020 programme. This
pilot is part of the Open Access to Scientific Publications and Research Data
programme in H2020. The goal of the programme is to foster access to research
data generated in H2020 projects. The use of a Data Management Plan (DMP) is
required for all projects participating in the Open Research Data Pilot.
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, facts or numbers collected to be examined
and considered, and as a basis for reasoning, discussion, or calculation. In a
research context, examples of data include statistics, results of experiments,
measurements, observations resulting from fieldwork, survey results, interview
recordings and images. The focus is on research data that is available in
digital form.
As a user progresses through the stages of creating a design in DTOceanPlus,
they will require access to reference data to support decision-making.
Moreover, a database of long-standing reference data will collect all the
relevant information produced by the research and demonstration activities in
the project. Essentially, it will contain a catalogue of components, vessels,
ports and equipment, as well as the associated features for assessments of
designs such as performance, cost, reliability, environmental or social impact
ratings. Indeed, user consultation responses [1] highlighted the need for
transparent access to this kind of data.
Additionally, the underlying data needed to validate the results presented in
scientific publications will, insofar as possible, be considered for open
access publication [1].
Nevertheless, data sharing in the open domain can be restricted where there is
a legitimate reason to protect results that can reasonably be expected to be
commercially or industrially exploited. In this sense, the Commission applies
the principle of 'as open as possible, as closed as necessary' and allows
partial opt-outs due to IPR concerns, privacy/data protection concerns or
other legitimate reasons. Strategies to limit such restrictions could include
anonymising or aggregating data, agreeing on a limited embargo period or
publishing selected datasets.
## Purpose of the Data Management Plan
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the Consortium with regard to the
project research data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
**FIGURE 1.1: RESEARCH DATA LIFE CYCLE (ADAPTED FROM UK DATA ARCHIVE [2] )**
The DMP is not a fixed document, but will evolve during the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the final version of the DMP. It is an update of the version
submitted in October 2018 (D9.10). It has been produced following the EC
guidelines for projects participating in this pilot and the additional
considerations described in ANNEX I: KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH
DATA.
## Research data types in DTOceanPlus
The data types that will be produced during the project are based on the
Description of the Action (DoA) and its expected results.
On this basis, Table 1.1 reports a list of categories of
research data that DTOceanPlus will produce. These research data types have
been defined, including data structures, sampling and processing requirements,
as well as relevant standards. This list may be adapted with the addition or
removal of datasets in the final version of the DMP to take into consideration
the project developments and scientific publications. A detailed description
of each dataset is given in the following sections of this document.
**TABLE 1.1: DTOCEANPLUS TYPES OF DATA**
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset category**
</th>
<th>
**Lead partner**
</th>
<th>
**Related WP(s)**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
SK, ET and ED Components
</td>
<td>
TECNALIA
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Environmental and Social Acceptance
</td>
<td>
FEM
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Logistics and Marine Operations
</td>
<td>
WavEC
</td>
<td>
WP5
</td> </tr> </table>
Specific datasets may be associated with scientific publications (i.e.
underlying data), public project reports and other raw data or curated data
not directly attributable to a publication. Datasets can comprise both
collected, unprocessed data and analysed, generated data. The policy for open
access is summarised in Figure 1.2.
**FIGURE 1.2: RESEARCH DATA OPTIONS AND TIMING**
Research data directly linked to the proprietary technologies or projects used
for the validation of the design tools will not be released in the open domain
as they can compromise the commercialisation prospects of industrial partners.
The rest of research data will be deposited in an open access repository.
When the research data is linked to a scientific publication, the provisions
described in ANNEX II: SCIENTIFIC PUBLICATIONS will be followed. Research data
needed to validate the results presented in the publication should be
deposited at the same time for “Gold” Open Access 1 or before the end of the
embargo period for “Green” Open Access 2 . Underlying research data will
consist of selected parts of the general datasets generated, and for which the
decision of making that part public has been made.
Other datasets will relate to public reports or be useful for the research
community. They will be selected parts of the general datasets generated, or
full datasets, and will be published as soon as they become available.
## Roles and responsibilities
Each DTOceanPlus partner must respect the policies set out in this DMP.
Datasets must be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared are easily available, but also that backups are performed, and that
proprietary data are secured.
EDP CNET, as WP7 leader, will ensure dataset integrity and compatibility for
its use during the validation of the design tools by different partners.
Registration of datasets and metadata is the responsibility of the partner
that generates the data in the WP. Metadata constitutes an underlying
definition or description of the datasets, which facilitates finding and
working with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data.
Quality control of these data is the responsibility of the relevant WP Leader
(particularly WP4-5-6-7), supported by the Project Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data.
Last but not least, all Consortium members must consult the concerned
partner(s) before publishing, in the open domain, any data that can be
associated with an exploitable result.
# DATA COLLECTION, STORAGE AND BACK-UP
One of the main outputs of this DMP is to identify research datasets that are
needed in ocean energy designs. They must be generic enough to be reusable in
multiple projects (unlike project-specific data such as site and machine
characterisation). For that purpose, a database of long-standing reference data will
collect all the relevant information produced by the research and
demonstration activities in the project. Essentially, it will contain a
catalogue of components, vessels, ports and equipment, as well as the
associated features for assessments of designs such as performance, cost,
reliability, environmental or social impact ratings. Three main categories for
open datasets have been identified:
* Logistics and marine operations: data on vessels, equipment, ports and operations.
* Components: PTO (Power Take-Off), mooring, electrical cabling.
* Environmental and social acceptance: stressors and materials.
Logistics and marine operations datasets will provide information on the
supporting systems to an ocean energy system throughout its lifecycle. The
environmental and social acceptance will gather key context data to enable
decision-making. Finally, the components datasets define the properties and
give data on main assessments (performance, reliability, cost). In this way,
they might gather pieces of information used in SLC (System Lifetime Cost),
RAMS (Reliability, Availability, Maintainability and Survivability) and SPEY
(System Performance and Energy Yield) modules.
It is important to point out that the DTOceanPlus project will produce
datasets that are not tied to any single commercial supplier, usually creating
catalogues from different sources of information. By combining these sources,
the project will create reference data that are not bound to a specific
provider. In particular, the DTOceanPlus project will produce reference data
resulting from:
* Supplier datasheets.
* Literature review.
* Model fitting.
* Fundamental relationships.
* Default values.
The DMP must guarantee the integrity of data during the project. To avoid any
undesirable information loss, regular back-ups or replication in different
locations should be implemented.
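As a concrete illustration of this kind of routine replication, the minimal
sketch below copies a dataset directory to a second, timestamped location. The
paths are purely illustrative placeholders, not the project's actual
procedure.

```python
import shutil
from datetime import datetime
from pathlib import Path

def replicate(dataset_dir: Path, mirror_root: Path) -> Path:
    """Copy a dataset directory to a second location, timestamped so that
    earlier replicas are preserved rather than overwritten."""
    stamp = datetime.now().strftime("%Y%m%dT%H%M%S")
    target = mirror_root / f"{dataset_dir.name}_{stamp}"
    shutil.copytree(dataset_dir, target)  # raises if target already exists
    return target

# Illustrative call; both paths are placeholders, not project locations:
# replicate(Path("data/DS_Station_Keeping"), Path("/mnt/mirror"))
```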
The following sections describe the different categories for open datasets
that will be produced in the course of the project.
## Logistics and Marine Operations Data
The suitable design of offshore Logistics and Marine Operations (LMO) is
paramount to establish the global design of a particular project. Apart from
the physical components and systems, a full characterisation of a wide range
of vessels, equipment and port data is required. As a consequence, the
following reference data have been identified:
* Activities
* Operation Types
* Terminals
* Vessels
  * Equipment (i.e. Piling, Protection, Burial, Drivers, ROV)
Among the various features to be captured are the following:
  * Physical description: dock space, loading capacity, storage area, cranes, vessel size & speed, bollard / winch pull, operating limits, crew, drivers, ROV, duration and location of the operations, relations between vessels and equipment …
  * Quantitative rating: use costs, average fuel consumption, noise level, …

A short description of the LMO dataset is given below.
**TABLE 2.1: LOGISTICS AND MARINE OPERATIONS DATA**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Logisitics_Marine_Operations
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Activities, operation types, terminals, vessels and equipment. The dataset
is characterised by physical descriptions and quantitative ratings.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Supplier datasheets, literature review and model fitting
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• CSV, MS Excel, SQL, JSON
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• N/A
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• <1 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Catalogue / Database
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Regular back-ups on local and/or cloud-hosted servers
</td> </tr> </table>
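To illustrate how the CSV flavour of such a catalogue might be consumed, the
sketch below parses a hypothetical vessel file into typed records. The column
names (`name`, `day_rate_eur`, `max_hs_m`, …) are assumptions made for
illustration only, not the project's actual schema.

```python
import csv
from dataclasses import dataclass

@dataclass
class VesselRecord:
    """One row of a hypothetical vessel catalogue in the LMO dataset."""
    name: str
    vessel_class: str
    day_rate_eur: float    # quantitative rating: use cost
    fuel_t_per_day: float  # quantitative rating: average fuel consumption
    max_hs_m: float        # operating limit: significant wave height

def load_vessels(path: str) -> list[VesselRecord]:
    """Parse the CSV flavour of the catalogue into typed records."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            VesselRecord(
                name=row["name"],
                vessel_class=row["vessel_class"],
                day_rate_eur=float(row["day_rate_eur"]),
                fuel_t_per_day=float(row["fuel_t_per_day"]),
                max_hs_m=float(row["max_hs_m"]),
            )
            for row in csv.DictReader(f)
        ]
```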
## Components Data
The physical characterisation of low-level data components provides key
information to drive the design decisions of ocean energy subsystems, devices
and full array projects. Availability of a large family of components will
significantly facilitate design optimisation. Default values will be provided
insofar as they are necessary for completing an ocean energy design but
difficult to determine.
Usually components will comprise balance of plant (e.g. mooring lines and
shackles, power cables, connectors and switchgear) and off-the-shelf
components (e.g. generator and motors, gearboxes, hydraulic cylinders,
turbines, accumulators).
The following sub-sections describe the components data associated to station
keeping, energy transformed and delivery systems.
### Station Keeping component data
The physical characterisation of Station Keeping (SK) components provides key
information to drive the design decisions of ocean energy mooring systems. The
available data comprises the following components:
* Buoys
* Shackles
* Swivels
* Anchors
* Chains
* Wire ropes
* Synthetic ropes
Among the various component features, the material, mass, sizing and main
physical properties of components, as well as feasible combinations between
anchors and soil types will be captured.
A short description of the station keeping dataset is given below.
**TABLE 2.2: STATION KEEPING COMPONENT DATA**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Station_Keeping
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• SK components (i.e. Buoys, Shackles, Swivels, Anchors, Chains, Wire ropes
and Synthetic ropes), physical features and feasible combinations between
anchors and soil types
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Supplier datasheets and literature review
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• CSV, MS Excel, SQL, JSON
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• N/A
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• < 1 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Catalogue / Database
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Regular back-ups on local and/or cloud-hosted servers
</td> </tr> </table>
### Energy Transformation component data
To drive the design decisions of the ocean Energy Transformation (ET) system for a
device or for a full array project, the available dataset comprises at least
the following components:
* Turbine
* Power generator
* Power converter
Among the various component features, the following will be captured: the
material, mass, sizing and main physical properties of ET components;
performance and energy yield characteristics (e.g. efficiency curve);
reliability, availability, maintainability and survivability data (e.g.
failure rate, design limits); and lifetime costs (e.g. cost of manufacture,
assembly, replacement and repair).
DTOceanPlus will require quantitative ratings of various performance
parameters at component level to derive aggregated figures for subsystems,
devices and ultimately the whole array. Benchmarks and thresholds for
Structured Innovation and Stage Gate Design Tools may also be considered
within this category.
A short description of the ET dataset is given in Table 2.3.
**TABLE 2.3: ENERGY TRANSFORMATION COMPONENT DATA**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Energy_Transformation
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• ET components (i.e. Turbine, Generator, Power converter), physical features,
performance and energy yield, reliability, availability, maintainability and
survivability and lifetime costs.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Supplier datasheets and literature review
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• CSV, MS Excel, SQL, JSON
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• N/A
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• < 1 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Catalogue / Database
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Regular back-ups on local and/or cloud-hosted servers
</td> </tr> </table>
### Energy Delivery component data
The available data related to the Energy Delivery (ED) system will comprise
the following components:
* Switchgear
* Collection point
* Transformer
* Dry mate connector
* Wet mate connector
* Dynamic cable
* Static cable
As in the case of the Energy Transformation dataset, the following will be
captured: the material, mass, sizing and main physical properties of ED
components; performance and energy yield characteristics; reliability,
availability, maintainability and survivability data; and lifetime costs.
A short description of the ED dataset is given below.
**TABLE 2.4: ENERGY DELIVERY COMPONENT DATA**
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Energy_Delivery
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• ED components (i.e. Switchgear, Collection point, Transformer, Dry/Wet mate
connectors, Dynamic cable and Static cable), physical features, performance
and energy yield, reliability, availability, maintainability, survivability
and lifetime.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Supplier datasheets and literature review
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• CSV, MS Excel, SQL, JSON
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• N/A
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• < 1 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Catalogue/Database
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Regular back-ups on local and/or cloud-hosted servers
</td> </tr> </table>
## Environment & Social Acceptance Assessment Data
Reference data may be required to assess ocean energy projects in their
context and take global design decisions. One of the assessments in
DTOceanPlus is the Environmental and Social Acceptance (ESA). For this reason,
the available dataset comprises the following categories:
* Endangered species
* Materials
* Job Creation
Among the various features, characteristics related to environmental and
social acceptance, such as stressors and CO2 emissions, will be captured.
A short description of the ESA dataset is given below.
**TABLE 2.5: ENVIRONMENTAL & SOCIAL ACCEPTANCE DATA **
<table>
<tr>
<th>
**Reference/Name**
</th>
<th>
• DS_Environmental_SocialAcceptance
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
• Characteristics related to environmental and social acceptance of materials
and endangered species.
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
• Supplier datasheets and literature review
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
• Derived
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
• CSV, MS Excel, SQL, JSON
</td> </tr>
<tr>
<td>
**Software**
</td>
<td>
• N/A
</td> </tr>
<tr>
<td>
**Estimated size**
</td>
<td>
• <1 GB
</td> </tr>
<tr>
<td>
**Storage**
</td>
<td>
• Catalogue / Database
</td> </tr>
<tr>
<td>
**Back-up**
</td>
<td>
• Regular back-ups on local and/or cloud-hosted servers
</td> </tr> </table>
# DATA STANDARDS AND METADATA
The following standards should be used for data documentation:
* DNV-RP-J301 [3] : Subsea Power Cables in Shallow Water Renewable Energy Applications.
* DNVGL-OS-E301 [4] : it contains criteria, technical requirements and guidelines on design and construction of position mooring systems. The objective of this standard is to give a uniform level of safety for mooring systems, consisting of chain, steel wire ropes and fibre rope.
* IEC TS 62600-10 [5] : technical specification for assessment of mooring system for Marine Energy Converters (MECs).
* IEC TS 62600-30 [6] : technical specification on electrical power quality requirements for wave, tidal and other water current energy converters.
* IEC TS 62600-100 [7] : technical specification on power performance assessment of electricity producing wave energy converters.
* IEC TS 62600-200 [8] : Electricity producing tidal energy converters - Power performance assessment.
* ISO 14224:2006 [9] : collection and exchange of reliability and maintenance data for equipment.
Metadata records will accompany the data files in order to describe and
contextualise them, helping external users to understand and reuse the data.
DTOceanPlus will adopt the DataCite Metadata Schema [10] , a domain agnostic
metadata schema, as the basis for harvesting and importing metadata about
datasets from data archives. The core mission of DataCite is to build and
maintain a sustainable framework that makes it possible to cite data through
the use of persistent identifiers.
The following metadata should be created to identify datasets:
* Identifier: A unique string that identifies the dataset.
* Author/Creator: The main researchers involved in producing the data in priority order.
* Title: A name or title by which a data is known.
* Publisher: The name of the entity that holds, archives, publishes prints, distributes, releases, issues, or produces the data.
* Publication Year: The year when the data was or will be made publicly available.
* Subject: Subject, keyword, classification code, or key phrase describing the resource.
* Contributor: Name of the funding entity (i.e. "European Union" & "Horizon 2020").
* Size: Unstructured size information about the dataset (in GBs).
* Format: Technical format of the dataset (e.g. csv, txt, xml, etc.).
* Version: The version number of the dataset.
* Access rights: Provide a rights management statement for the dataset. Include embargo information if applicable.
* Geo-location: Spatial region or named place where the data was gathered.
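As a minimal sketch of how these fields could be assembled into a
DataCite-style record, consider the snippet below; every concrete value is an
illustrative placeholder, including the DOI, not a real project identifier.

```python
# Illustrative DataCite-style record built from the fields listed above;
# all values are placeholders invented for this example.
dataset_metadata = {
    "identifier": "10.5281/zenodo.0000000",   # placeholder DOI
    "creators": ["Surname, Name (Partner)"],
    "title": "DS_Station_Keeping - mooring component catalogue",
    "publisher": "DTOceanPlus Consortium",
    "publicationYear": 2021,
    "subjects": ["ocean energy", "mooring", "component catalogue"],
    "contributors": ["European Union", "Horizon 2020"],
    "sizes": ["<1 GB"],
    "formats": ["csv"],
    "version": "1.0",
    "rights": "CC-BY 4.0",
    "geoLocations": ["Europe"],
}
```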
# DATA SHARING AND REUSE
During the life cycle of the DTOceanPlus project, datasets will be stored and
systematically organised in a relational database tailored to comply with the
requirements of WP7. The database schema and the queryable fields will also be
made publicly available to database users as a way to better understand the
database itself.
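To make the idea of a published schema concrete, the sketch below defines one
hypothetical catalogue table using Python's sqlite3 module; the table and
column names are assumptions for illustration, since the actual WP7 schema is
defined within the project.

```python
import sqlite3

# Hypothetical example of one catalogue table; the real WP7 schema may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS sk_component (
    id        INTEGER PRIMARY KEY,
    category  TEXT NOT NULL,  -- e.g. 'anchor', 'chain', 'shackle'
    material  TEXT,
    mass_kg   REAL,
    mbl_kn    REAL,           -- minimum breaking load
    unit_cost REAL
);
"""

conn = sqlite3.connect("dtoceanplus_catalogue.db")  # placeholder file name
conn.executescript(SCHEMA)
conn.commit()
conn.close()
```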
In addition to the project database, relevant datasets will be also stored in
ZENODO [11] , which is the open access repository of the Open Access
Infrastructure for Research in Europe, OpenAIRE [12] .
All collected datasets will be disseminated without an embargo period unless
linked to a green open access publication. Data objects will be deposited in
ZENODO under:
  * Open access to data files and metadata, with data files provided over standard protocols such as HTTP and OAI-PMH.
  * Use and reuse of data permitted.
  * Privacy of its users protected.
By default, data access policy will be unrestricted unless otherwise
specified. The generic Creative Commons CC-BY licenses will be used. This
license allows:
* Sharing - copy and redistribute the material in any medium or format.
* Adapting - remix, transform, and build upon the material for any purpose, even commercially.
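As an illustration of such a deposit, the hedged sketch below pushes a dataset
to ZENODO through its public REST API; the access token and file name are
placeholders, and the exact endpoints should be checked against the current
ZENODO API documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "..."  # placeholder: personal access token from zenodo.org

# 1) Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2) Upload a data file to the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("DS_Station_Keeping.csv", "rb") as fp:  # placeholder file name
    requests.put(f"{bucket_url}/DS_Station_Keeping.csv", data=fp,
                 params={"access_token": TOKEN}).raise_for_status()

# 3) Attach minimal metadata (dataset, open access, CC-BY) before publishing.
metadata = {"metadata": {
    "title": "DS_Station_Keeping - mooring component catalogue",
    "upload_type": "dataset",
    "description": "Reference data produced by the DTOceanPlus project.",
    "creators": [{"name": "DTOceanPlus Consortium"}],
    "license": "cc-by-4.0",
    "access_right": "open",
}}
requests.put(deposition["links"]["self"], params={"access_token": TOKEN},
             json=metadata).raise_for_status()
```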
# DATA ARCHIVING AND PRESERVATION
The DTOceanPlus project database will be designed to remain operational for 5
years after the project end. By the end of the project, the final dataset will
be transferred to the ZENODO repository, which ensures sustainable archiving
of the final research data.
Items deposited in ZENODO will be retained for the lifetime of the repository,
which is currently the lifetime of the host laboratory CERN, whose
experimental programme is defined for at least the next 20 years. Data files
and metadata are backed up nightly and replicated in multiple copies in the
online system. All data files are stored along with an MD5 checksum of the
file content, and files are regularly checked against their checksums.
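The same checksum discipline can be applied to local project copies. Below is
a minimal sketch, assuming a hypothetical layout in which a `<name>.md5` file
sits next to each data file; the file names are illustrative.

```python
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names for illustration:
data_file = Path("DS_Energy_Delivery.csv")
expected = Path("DS_Energy_Delivery.csv.md5").read_text().split()[0]
assert md5_of(data_file) == expected, "checksum mismatch - restore from backup"
```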
1. **Executive Summary**
The D1.4 Data Management Plan (DMP) is a framework that describes how to work
with the data and datasets that will be generated during the project's
lifecycle, including access rights management, storage, backups, data
ownership and principles of collaboration within research teams, industrial
partners and public bodies. The DMP includes information about data types and
formats of generated/collected data, and specifies methods for data gathering,
processing, sharing, and archiving. The plan also documents data management
activities associated with the SPEAR project. A list of the various types of
data that the SPEAR consortium expects to collect and create is also
presented.
The project will collect the following types of data: network traffic,
operating system shell commands, keystrokes, communications and syslogs
collected from devices in the smart grid (sensors, gateways, etc.);
quantitative data related to day-to-day activity (event data produced after
processing collected raw data); and cyber attack and threat data for
information sharing through an anonymous channel/repository. In particular,
data will be obtained from direct observation, industrial enterprises, field
instruments, experiments, and compilations of data from other studies.
The expected data volume will be approximately 150 GB. The document will be
updated regularly with the aim of improving the data management life cycle for
all data generated, collected or processed by the SPEAR project.
2. **Introduction**
The SPEAR consortium joins the Pilot on Open Research Data, which is supported
by the European Commission through the Horizon 2020 programme. The
SPEAR consortium supports the concept of open science, and shares an
optimistic assessment of the prospects of this concept for introducing
innovative solutions to the European economy, with the re-use of scientific
data on a wider scale. Thus, all data obtained during the implementation of
the SPEAR project can be published in open access mode, subject to the
additional conditions and principles described in this document below.
#### 2.1 Scope and objectives of the deliverable
The purpose of the Data Management Plan (DMP) deliverable is to provide
relevant information concerning the data that will be collected, used, stored,
and shared by the partners of the SPEAR project.
The SPEAR project aims at developing an integrated solution of methods,
processes, tools and supporting tools for (see Fig. 1):
1. Timely detection of evolved security attacks such as Advanced Persistent Threat (APT), Man-in-the-Middle (MiTM), Denial of Service (DoS) and Distributed DoS (DDoS) attacks, using big data analytics, advanced visual techniques for anomaly detection and smart trust security management.
2. Developing an advanced forensic readiness framework, based on smart honeypot deployment, that will collect attack traces and prepare actionable evidence for court, while also ensuring privacy for the users.
3. Elaborating and implementing an anonymous channel for securing smart grid stakeholders during the exchange of sensitive information about cyber-attack incidents, preventing information from leaking.
4. Performing risk analysis and proposing cyber hygiene procedures, while building EU-wide consensus by collaborating with European and global security agencies, standardization organizations, industrial partners and smart grid companies across Europe.
5. Exploiting the research outcomes in further critical infrastructure (CIN) domains and creating competitive business models for utilizing the implemented security tools with smart grid operators and actors across Europe.
**Figure 1 - SPEAR aims diagram**
#### 2.2 Structure of the deliverable
The report is structured in six chapters:
Chapter 1: Executive summary, including the purpose and the context of this
deliverable.
Chapter 2: Introduction concerning the scope of this deliverable.
Chapter 3: An overview of general principles for participation in the pilot on
open research data, IPR management and security as well as data protection,
ethics and security in SPEAR project.
Chapter 4: An overview of the data management framework along with the
specification of the dataset format, the dataset description methods,
definition of standards and metadata, approaches and policies for data
sharing, archiving and presentation. Datasets list for SPEAR new components is
also enclosed.
Chapter 5: Description of datasets from SPEAR partners.
Chapter 6: Conclusions
#### 2.3 Relation to other activities in the project
The following diagram illustrates the relationship between the seven main
activities of the SPEAR project.
1. Project Management and Coordination
2. Use Case Preparation
3. Cyber Attack Detection
4. Forensic Readiness
5. EU-Wide Consensus
6. Integration and Development
7. Dissemination and Exploitation
**Figure 2 - The main activities of the SPEAR project**
**3\. General Principles**
The SPEAR project stands for data openness and sharing; hence we are committed
to making all data collected during the project available for use as fully and
as promptly as possible, within the limits of personal privacy and commercial
confidentiality, following the FAIR Data Principles.
#### 3.1 Participation in the Pilot on Open Research Data
##### 3.1.1 Data Availability
All the project data will be publicly available. However, different access
levels for different types of data will be allocated. For security reasons,
sensitive data, such as personal data regulated by data protection rules, will
be obscured. Recordings and notes from meetings and workshops as well as
survey results will be anonymized. All anonymized data will be available in
open-access mode. Technical details of the attacks, from the anonymous
repository of smart grid incidents, will be available for everyone. The types
of data and rules will be specified in the following sections.
##### 3.1.2 Open Access to Scientific Publications
All scientific publications will be open, unless special requirements or
constraints force non-open publication.
##### 3.1.3 Open Access to Research Data
To meet the open access policy and be accessible to the research and
professional community, research data will be uploaded to and stored on
Zenodo, the EC-recommended publications and data repository. Research data
archiving and availability will be guaranteed by the Zenodo digital
repository.
#### 3.2 IPR management and security
The SPEAR consortium consists of industrial partners from both the private and
public sectors, all of them preserving intellectual property rights on their
technology, technical solutions and data. Given this, the SPEAR consortium
will pay particular attention to the protection of data, and will consult with
the concerned parties prior to data publication.
IPR data management will be conducted within SPEAR PM. The collection and/or
processing of personal data is managed by the Data Protection Officer.
Within the project, a number of data models will be created to support the
various SPEAR modules, e.g. the Visual-based IDS. These models will also be
populated during the execution of the pilots in SPEAR end-user
infrastructures. Where necessary, data will be exported in anonymized form
(except for data models that do not raise any privacy concern). In addition,
the DMP is complemented by a section of the SPEAR website where the public
versions of the data models / datasets are uploaded. This website will be
created by CERTH (M12).
#### 3.3 Data Protection, Ethics and Security
No data will be collected or processed prior to the finalization of the
respective deliverables and the relevant Consent Forms.
**4\. Data Management Framework**
SPEAR will develop a data management framework for the deliverables that are
part of the project, to be shared in the publicly accessible Confluence
repository. This repository will provide to the public, for each dataset that
becomes publicly available, a description of the dataset along with a link to
a download section. The portal will be updated each time a new dataset has
been provided by the research teams and partners, collected, and made ready
for public distribution.
To reach out to industrial partners and smart grid companies across Europe, an
anonymous repository of incidents and threats will be developed and an
anonymous channel for exchanging sensitive information about cyber-attack
incidents will be launched.
The data lifecycle related to the work packages (WPs) of the SPEAR project is
shown in Figure 3.
**Figure 3 - Project Data lifecycle**
#### 4.1 Format of datasets
For each dataset the following characteristics will be specified:
**Table 1 - Format of Datasets**
<table>
<tr>
<th>
X PARTNER Name_New Component/Existing Tool Name
</th> </tr>
<tr>
<td>
Dataset Information
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_ <Mention an indicative reference name for your produced dataset> _
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_ <Mention the produced datasets with a brief description and if they contain
future subdatasets> _
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
_ <From which device and how the dataset will be collected. Mention also the
position of installation> _
</td> </tr>
<tr>
<td>
Beneficiaries services and responsibilities
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_ <Partner Name> _
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_ <Partner Name> _
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_ <Partner Name> _
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_ <Partner Name> _
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_ <e.g. WP3, T3.4> _
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_ <Provide the status of the metadata, if they are defined and their content>
_
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_ <Mention the data format if it is available, the potential data volume and
refer also to the standards concerning the communication and the data
transfer> _
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_ <Purpose of the data collection/generation and its relation to the
objectives of the project> _
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_ <Access for partners & access for the public _
_(open access >, refer to the data management portal if available and to
dissemination acitivities> _
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_ <Provide if available the data sharing policies, the requirements for data
sharing, how the data will be shared and who will decide for sharing> _
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_ <Who will be the owner of the collected information, define the adherence to
partner policies and mention any potential limitations> _
</td> </tr> </table>
#### 4.2 Description of methods for dataset description
The datasets will be generated by the project research team as well as
industrial partners.
All incident-related data will be entered manually and will be stored in one
anonymous repository.
Folders will be organized in a hierarchical structure.
Files will carry an identifier and a version number following this structure:
project name, dataset name, ID, place and date; a sketch of such a naming
convention is given below.
Keywords will be added by using the thesaurus.
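A minimal sketch of such a naming convention, assuming an underscore separator
and a two-digit version suffix (both illustrative choices, not the
consortium's binding rule):

```python
from datetime import date

def dataset_filename(project: str, dataset: str, ds_id: str,
                     place: str, when: date, version: int) -> str:
    """Build a file name from the components listed above:
    project name, dataset name, ID, place and date, plus a version."""
    stem = "_".join([project, dataset, ds_id, place, when.isoformat()])
    return f"{stem}_v{version:02d}"

# e.g. 'SPEAR_NetworkTraffic_DS01_UC2-Substation_2019-03-01_v01'
print(dataset_filename("SPEAR", "NetworkTraffic", "DS01",
                       "UC2-Substation", date(2019, 3, 1), 1))
```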
### 4.3 Standards and metadata
For common project data the following standards and metadata will be applied:
**Table 2 - Standards and Metadata**
<table>
<tr>
<th>
_**Purpose** _
</th>
<th>
_**Standard** _
</th>
<th>
_**Link** _
</th> </tr>
<tr>
<td>
_**Recording information about research activity** _
</td>
<td>
CERIF (Common
European Research
Information Format)
</td>
<td>
_http://rd-alliance.github.io/metadatadirectory/standards/cerif.html_
</td> </tr>
<tr>
<td>
_**Data exchanging** _
</td>
<td>
Data Package
</td>
<td>
_http://rd-alliance.github.io/metadatadirectory/standards/cerif.html_
</td> </tr>
<tr>
<td>
_**Data citation and retrieval purposes** _
</td>
<td>
DataCite Metadata Schema
</td>
<td>
_http://rd-alliance.github.io/metadatadirectory/standards/datacite-metadataschema.html_
</td> </tr>
<tr>
<td>
_**Data authoring, deposit, exchange, visualization, reuse, and preservation**
_
</td>
<td>
OAI-ORE (Open
Archives Initiative
Object Reuse and
Exchange)
</td>
<td>
_http://rd-alliance.github.io/metadatadirectory/standards/oai-ore-open-archivesinitiative-object-reuse-and-exchange.html_
</td> </tr>
<tr>
<td>
_**Data registration** _
</td>
<td>
DOI (Digital Object
Identifier)
</td>
<td>
_https://fairsharing.org/biodbcore-001020/_
</td> </tr> </table>
### 4.4 Data sharing
All research data will be shared in the publicly accessible Confluence
repository using the descriptive metadata provided by this repository. To
enable identification and citation, all research data will be supported by
DOIs.
For all other cases, in accordance with project policy, credentials are needed
in order to obtain information from the repository.
**Table 3 - Data Types and Repositories for Storage and Sharing Data**
<table>
<tr>
<th>
_**Data types** _
</th>
<th>
_**Users** _
</th>
<th>
_**Repository** _
</th>
<th>
_**Type of** _
_**Repository** _
</th>
<th>
_**Link** _
</th>
<th>
_**Access** _
</th> </tr>
<tr>
<td>
_**Research data, e.g. statistics, visualization analytics, measurements,
survey results, results of experiments available in digital form**_
</td>
<td>
_University researchers_
</td>
<td>
_University of Reading_
_Research Data_
_Archive_
</td>
<td>
_External_
</td>
<td>
_http://www.reading.ac.uk/reas-RDArchive.aspx_
</td>
<td>
_Open_
</td> </tr>
<tr>
<td>
_**Publications** _
</td>
<td>
_All_
</td>
<td>
_Zenodo_
</td>
<td>
External
</td>
<td>
_https://zenodo.org/_
</td>
<td>
_Open_
</td> </tr>
<tr>
<td>
_**Project documentation** _
</td>
<td>
_SPEAR_
_Partners_
</td>
<td>
_Confluence_
</td>
<td>
External
</td>
<td>
_https://space.uowm.gr/confluence_
</td>
<td>
</td> </tr>
<tr>
<td>
_**Security-related data, e.g. network traffic data and syslogs, operating
system shell commands, abnormal network traffic dataset, database records
that track the changes in reputation and trust of home nodes over time, and
cyber attack and threat data**_
</td>
<td>
_SPEAR_
_Partners_
</td>
<td>
_Anonymous repository, SPEAR webcloud_
</td>
<td>
Internal
</td>
<td>
</td>
<td>
_Closed_
</td> </tr>
</table>
### 4.5 Archiving and preservation (including storage and backup)
In accordance with EC FAIR (Findable, Accessible, Interoperable, and Re-
usable) Policy and Horizon 2020 Data Management Guidance, SPEAR project data
will be archived and preserved in open formats. For this reason, the data will
remain re-usable until the repository withdraws the data or goes out of
business.
All project-related data will be stored in _Confluence_ repository.
### 4.6 Datasets List
**Table 4 - Datasets List for SPEAR New Components**
<table>
<tr>
<th>
_**SPEAR** _
_**New** _
_**Component** _
_**Name** _
</th>
<th>
_**Subcomponents Name** _
</th>
<th>
_**Related Task** _
</th>
<th>
_**Partner** _
</th>
<th>
_**SPEAR Pilot** _
</th>
<th>
_**Produced Datasets** _
</th> </tr>
<tr>
<td>
_**SPEAR - SIEM** _
</td>
<td>
_**OSSIM SIEM, SIEM Basis (Data collector)** _
</td>
<td>
_**T 3.1** _
</td>
<td>
_**TEC** _
</td>
<td>
_UC1 - The Hydro Power Plant Scenario_
_UC2 - The Substation Scenario_
_UC3 - The combined IAN and HAN scenario_
_UC4 - The Smart Home Scenario_
</td>
<td>
**OSSIM is an open-source SIEM:**
**_https://www.alienvault.com/products/ossim_**
_Network traffic data and syslogs from the devices in Smart grid scenarios._
_Event data produced after processing collected raw data (network traffic data
and syslogs)._
</td> </tr>
<tr>
<td>
_**SPEAR - SIEM** _
</td>
<td>
_**BDAC** _
</td>
<td>
_**T 3.2** _
</td>
<td>
_**SURREY** _
_**UOWM** _
_**CERTH** _
</td>
<td>
_ALL_
</td>
<td>
_Normal and abnormal network traffic dataset, including different types of
modern attacks, application-layer attacks and several network traffic
features._
</td> </tr>
<tr>
<td>
_**SPEAR - SIEM** _
</td>
<td>
_**Visualbased IDS** _
</td>
<td>
_**T 3.3** _
</td>
<td>
_**CERTH** _
</td>
<td>
_ALL_
</td>
<td>
_Visualization of multiple attributes of network traffic as well as common
attributes among the records, the features extracted from the data, the
(dis-)similarities among them and the combination of multiple types of
features in clusters._
</td> </tr>
<tr>
<td>
_**SPEAR - SIEM** _
</td>
<td>
_**GTM** _
</td>
<td>
_**T 3.4** _
</td>
<td>
_**SURREY CERTH** _
</td>
<td>
_ALL_
</td>
<td>
_A set of database records that tracks the change in reputation and trust of
home nodes over time._
_A set of database records that tracks the change in reputation and trust of
nodes over time._
</td> </tr>
<tr>
<td>
_**SPEAR - FRF** _
</td>
<td>
_**AMI** _
_**HONEYP** _
_**OTS** _
</td>
<td>
_**T 4.3** _
</td>
<td>
_**TEC** _
</td>
<td>
_UC2- The_
_Substation_
_Scenario_
</td>
<td>
_Network traffic data, operating system shell commands, keystrokes,
communications and syslogs._
</td> </tr>
<tr>
<td>
_**SPEAR - FRF** _
</td>
<td>
_**PIA framework** _
</td>
<td>
_**T 4.4** _
</td>
<td>
_**ED** _
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
_**SPEAR - FRF** _
</td>
<td>
_**Forensic** _
_**Database** _
_**Services** _
</td>
<td>
_**T 4.5** _
</td>
<td>
_**ED** _
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
_**SPEAR - CHF** _
</td>
<td>
_**SPEAR-RI** _
</td>
<td>
_**T 5.1** _
</td>
<td>
_**TEC** _
</td>
<td>
</td>
<td>
_Cyber attacks and threats data_
</td> </tr> </table>
## 5\. Description of Datasets
The SPEAR data management repository will enable project partners and research
teams to manage and distribute their public datasets through a common cloud
infrastructure in a secure and efficient manner. The datasets in the
repository will provide a holistic list of data resources, generic and
easy-to-handle datasets, and the ability to move on to industrial datasets.
Datasets are to be identifiable, with provision for segregating access rights
and with accessible backups.
### 5.1 Datasets for SPEAR-SIEM
##### 5.1.1 Datasets for OSSIM SIEM
**Table 5 - TEC-SIEM Basis (Data Collector)**
<table>
<tr>
<th>
**TEC-_SIEM Basis (Data collector)_ **
</th>
<th>
</th> </tr>
<tr>
<td>
**Dataset Information**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_network traffic, syslog and event dataset for BDAC and Visual IDS_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_The dataset includes network traffic data and syslogs from the devices in
Smart grid scenarios, and also event data produced after processing collected
raw data (network traffic data and syslogs)._
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
* _In: Smart grid systems of the use case scenarios_
  * _How: Wireshark, Suricata, AlienVault OSSIM, syslog protocol (RFC 5424)_
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td>
<td>
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_SURREY, UOWM, CERTH, 0INF, TEC, SH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP3, T3.1_
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata not yet defined._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Proprietary format using common data model of SPEAR_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The dataset will be used for the anomaly detection algorithms of the big data
analytics component (T3.2) and visual IDS component (T3.3)_
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WP and tasks as defined in the
DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form (e.g. security mechanisms will be
studied since the collected data needs to fulfil forensics requirements) in
servers indicated by the pilots or the technology providers._
</td> </tr> </table>
##### 5.1.2 Datasets for BDAC
**Table 6 - CERTH – Big Data Analytics Component**
<table>
<tr>
<th>
**CERTH-Big Data Analytics Component**
</th>
<th>
</th> </tr>
<tr>
<td>
**Dataset Information**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_Smart Home network traffic dataset for anomaly detection_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_The dataset includes both normal and abnormal network traffic and several
network traffic features to be used for anomaly detection._
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
* _In: Smart devices, gateways and sensors of the smarthouse_
* _How: Wireshark, AlienVault OSSIM_
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td>
<td>
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_SURREY_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_SURREY, CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP3, T3.2_
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata not yet defined._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Proprietary format using common data model of SPEAR._
_Data volume In = number of smart devices x time duration of capture x type of
network traffic._
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The dataset will be used for the anomaly detection algorithms of the big data
analytics component_
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WP and tasks as defined in the
DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form (e.g. encrypted) in servers indicated
by the pilots or the technology providers._
</td> </tr> </table>
**Table 7 - SURREY – Big Data Analytics Component**
<table>
<tr>
<th>
**SURREY - Big Data Analytics Component**
</th>
<th>
</th> </tr>
<tr>
<td>
**Dataset Information**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_network traffic dataset for anomaly detection_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_The dataset includes both normal and abnormal network traffic and several
network traffic features to be used for anomaly detection._
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
* _In: Use case devices, gateways and sensors from the pilots_
* _How: Wireshark, AlienVault OSSIM_
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td>
<td>
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_UOWM_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_UOWM, SURREY, CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP3, T3.2_
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata not yet defined._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
* _Proprietary format using common data model of SPEAR_
* _Data volume In = number of devices x time duration of capture x type of network traffic_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The dataset will be used for the anomaly detection algorithms of the big data
analytics component_
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WP and tasks as defined in the
DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form (e.g. encrypted) in servers indicated
by the pilots or the technology providers._
</td> </tr> </table>
##### 5.1.3 Datasets for Visual-Based IDS
**Table 8 - CERTH – Visual-based IDS**
<table>
<tr>
<th>
**CERTH_Visual-based IDS**
</th> </tr>
<tr>
<td>
**Dataset Information**
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_Smart Home clustered network traffic dataset_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
* _In: Real-time network traffic capture_
* _Out: Visualization points and coordinates_
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
* _In: Smart devices, sensors, gateways_
* _How: Wireshark, AlienVault OSSIM_
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_SH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_SH, CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP3, T3.3_
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Graph coordinates, timestamp_
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
* _Proprietary format using common data model of SPEAR_
* _Data volume In = number of smart devices x time duration of capture x type of network traffic_
* _Data volume Out = number of nodes x graph space dimensions x frequency and amount of communications_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The datasets will be used for the visual identification of normal/abnormal
activities in the network in the pilot sites._
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WP and tasks as defined in the
DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form (e.g. encrypted) in servers indicated by the pilots or the technology providers._
</td> </tr> </table>
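The volume formulas above multiply the number of devices, the capture duration and the type of network traffic. As a hedged illustration of the "In" formula, interpreting "type of network traffic" as a per-type byte rate (an assumption, as are the example rates):

```python
# Rough estimate following the "Data volume In" formula in Table 8.
# The per-type byte rates are illustrative assumptions, not DMP figures.
BYTES_PER_SECOND = {"modbus": 2_000, "mqtt": 500, "http": 10_000}


def estimate_in_volume(n_devices: int, duration_s: int, traffic_type: str) -> int:
    """Bytes captured ~ number of devices x capture duration x per-type rate."""
    return n_devices * duration_s * BYTES_PER_SECOND[traffic_type]


# e.g. 20 smart-home devices capturing one hour of MQTT traffic:
print(estimate_in_volume(20, 3_600, "mqtt") / 1e6, "MB")  # -> 36.0 MB
```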
##### 5.1.4 Datasets for GTM
**Table 9 - CERTH – GTM**
<table>
<tr>
<th>
**CERTH_GTM**
</th> </tr>
<tr>
<td>
**Dataset Information**
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_Smart home’s nodes reputation over time_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_A set of database records which capture the change of reputation and trust of
smart home’s devices, sensors and gateways over time._
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
_Smart devices, sensors, gateways_
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_SURREY_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_SURREY_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_SURREY, CERTH_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP3, T3.4_
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Type of device, timestamp of reputation change_
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Proprietary format using common data model of SPEAR_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The dataset will be used for the validation of GTM component in the smart
home scenario._
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WPs and tasks as defined in the DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form (e.g. encrypted) in servers indicated by the pilots or the technology providers._
</td> </tr> </table>
### 5.2 Datasets for SPEAR-FRF
##### 5.2.1 Datasets for AMI Honeypots
**Table 10 - TEC-AMI HONEYPOTS**
<table>
<tr>
<th>
**TEC-AMI HONEYPOTS**
</th>
<th>
</th> </tr>
<tr>
<td>
**Dataset Information**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_System activity_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_The dataset includes network traffic data, operating system shell commands,
keystrokes, communications and syslogs._
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
* _In: UC2- The Substation Scenario_
* _How: As a basis, open-source honeypots can be used (conpot, CryPLH…)_
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_TEC, SCH_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP4, T4.3_
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata not yet defined._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Proprietary format using common data model of SPEAR_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The dataset will be used to identify cyber attacks, to collect intelligence about attack strategies and possible countermeasures, and to serve as deception technology against attackers._
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WPs and tasks as defined in the DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form (e.g. security mechanisms will be
studied since the collected data needs to fulfil forensics requirements) in
servers indicated by the pilots or the technology providers._
</td> </tr> </table>
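The honeypot dataset above combines network traffic, shell commands, keystrokes and syslogs, and is intended to yield intelligence about attack strategies. A minimal sketch of one such analysis step, assuming a simple tab-separated log layout (timestamp, session id, command) rather than the still-undefined SPEAR common data model:

```python
# Sketch only: the log layout (timestamp<TAB>session<TAB>command) is an
# assumption; the SPEAR common data model is not defined at this stage.
from collections import Counter


def top_attacker_commands(log_path: str, n: int = 10) -> list[tuple[str, int]]:
    """Rank shell commands captured by the honeypot to profile attack strategies."""
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 3:
                counts[fields[2]] += 1
    return counts.most_common(n)
```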
### 5.3 Datasets for SPEAR-CHF
##### 5.3.1 Datasets for SPEAR-RI
**Table 11 - TEC-SPEAR-RI**
<table>
<tr>
<th>
**TEC-SPEAR-RI**
</th>
<th>
</th> </tr>
<tr>
<td>
**Dataset Information**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset / Name
</td>
<td>
_Cyber attacks and threats data_
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
_The dataset includes cyber attack and threat data for information sharing through an anonymous channel/repository._
</td> </tr>
<tr>
<td>
Dataset Source
</td>
<td>
_In: Smart grid systems of the use case scenarios_
</td> </tr>
<tr>
<td>
</td>
<td>
_How: to be defined. Options include manual entry by a system operator/administrator, or automatic generation by the IDS with manual confirmation by a system operator/administrator._
</td> </tr>
<tr>
<td>
**Beneficiaries services and responsibilities**
</td> </tr>
<tr>
<td>
Beneficiary owner of the component
</td>
<td>
_TEC (UOWM, 8BL – to be defined)_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data collection (if different)
</td>
<td>
_TEC, UOWM, 8BL_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data analysis (if different)
</td>
<td>
_TEC, UOWM, 8BL_
</td> </tr>
<tr>
<td>
Beneficiaries in charge of the data storage (if different)
</td>
<td>
_TEC, UOWM, 8BL_
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
_WP5, T5.1_
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
_Metadata not yet defined._
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
_Proprietary format using common data model of SPEAR_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
_The dataset will be used for the threat intelligence information sharing
among industrial partners._
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level
(Confidential, only for members of the Consortium and the Commission Services)
/ Public
</td>
<td>
_The datasets will be confidential and only for the members of the
consortium._
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
_The datasets can be shared to support other WPs and tasks as defined in the DoA._
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): where? For how long?
</td>
<td>
_Data will be stored in a suitable form in servers indicated by the pilots or
the technology providers._
</td> </tr> </table>
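The table notes that records may be entered manually or generated by the IDS and confirmed by an operator, and that sharing happens through an anonymous channel/repository. As an illustration only, a record could be stripped of operator-identifying fields before being pushed to the shared repository; every field name below is an assumption.

```python
# Sketch only: all field names are assumptions; the SPEAR common data model
# and the anonymous sharing channel are still to be defined.
import hashlib


def anonymise_threat_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymise the reporting site."""
    shared = {k: v for k, v in record.items()
              if k not in {"operator", "site_name", "ip_ranges"}}
    # A stable pseudonym keeps repeated reports from one site linkable
    # without revealing which industrial partner submitted them.
    shared["site_id"] = hashlib.sha256(record["site_name"].encode()).hexdigest()[:12]
    return shared
```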
# Executive summary
This document “D10.2 Data Management Plan (DMP)” is a deliverable of the
PROTAX project, which is funded by the European Union’s H2020 Programme (Grant
Agreement Number 787098).
This document follows the template provided by the European Commission in the
Participant Portal. 1
The aim of PROTAX is to reach advanced and harmonised levels of organisation
and networking and develop a validated and tested set of law enforcement tools
which will be instrumental in capacity building and for an effective counter
tax strategy and solidarity in the EU for the long term.
PROTAX considers the importance of making research data accessible and
available for sharing among interested stakeholders and plans on using the
existing data archives and services to ensure proper curation, preservation
and sharing of collected and generated data.
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
Consortium with regard to the project research data.
The data management plan includes data protection. Ownership of data,
information and knowledge generated through PROTAX is included in the
consortium agreement. Confidentiality agreements will be signed as relevant,
in particular for the data collected under Task 2.1 (Identify relevant
stakeholders in all 28 Member States). PROTAX will put special emphasis on
anonymisation and encryption when needed to prevent unauthorised access,
accidental deletion or corruption. Each partner will remain owner of its
intellectual and industrial property right over pre-existing know-how.
Knowledge generated by the project shall be the property of the partner
responsible for the work leading to this knowledge. Intellectual Property
Rights (IPR) of the project results shall be ruled as per the consortium
agreement.
Regarding publication of project deliverables and results in reviewed
publications, PROTAX will apply the route to open access publishing to support
the maximum openness to and accessibility of results. PROTAX will present the
results of the project through publications in peer-reviewed journals by
implementing the gold open access route. The partners have chosen the gold
route on the assumption that visitors to the websites of journals will find it
easier and more accessible to simply download our articles free of charge. In
addition, for visitors to the partners’ individual websites and the project
website, we will provide a link to the online journals to further improve the
widest possible accessibility to our published articles.
# Glossary of Terms
<table>
<tr>
<th>
Term
</th>
<th>
Explanation
</th> </tr>
<tr>
<td>
Data collection
</td>
<td>
The process of gathering information or data
</td> </tr>
<tr>
<td>
Data Management Plan
</td>
<td>
A plan that includes information on the handling of research data during and
after the end of the project, what data will be collected, processed and/or
generated, which methodology and standards will be applied, whether data will
be shared or made open access and how data will be curated and preserved
(including after the end of the project). ( _H2020 Guidelines on FAIR Data
Management, 2016_ )
</td> </tr>
<tr>
<td>
Metadata
</td>
<td>
Data that describes other data
</td> </tr>
<tr>
<td>
Open Access
</td>
<td>
Open access (OA) refers to the practice of providing online access to
scientific information that is free of charge to the end-user and reusable.
'Scientific' refers to all academic disciplines. In the context of research
and innovation, 'scientific information' can mean peer- reviewed scientific
research articles (published in scholarly journals) or research data (data
underlying publications, curated data and/or raw data). ( _H2020 Guidelines to
the rules on open access to Scientific Publications and Open Access to
Research Data in Horizon 2020, 2017_ )
</td> </tr>
<tr>
<td>
Personal Data
</td>
<td>
Any data relating to an identified or identifiable natural person (‘data
subject’); an identifiable natural person is one who can be identified,
directly or indirectly, in particular by reference to an identifier such as a
name, an identification number, location data, an online identifier or to one
or more factors specific to the physical, physiological, genetic, mental,
economic, cultural or social identity of that natural person 2
</td> </tr>
<tr>
<td>
Research Data
</td>
<td>
Information, in particular, facts or numbers, collected to be examined and
considered as a basis for reasoning, discussion, or calculation.
( _H2020 Open Access Guidelines, 2017_ )
</td> </tr>
<tr>
<td>
Scientific Information
</td>
<td>
Can mean peer-reviewed scientific research articles (published in scholarly
journals) or research data (data underlying publications, curated data and/or
raw data). ( _H2020 Guidelines to the rules on open access to Scientific
Publications and Open Access to Research Data in Horizon 2020, 2017_ )
</td> </tr> </table>
_Table 2: Glossary of Terms_
# 0 Introduction
The DMP is not a static document, but will evolve over the lifespan of the
project, particularly whenever significant changes arise such as dataset
updates or changes in Consortium policies.
This document is the first version of the DMP, delivered in Month 6 of the
project. It includes an overview of the datasets to be produced by the
project, and the specific conditions that are attached to them. The next
versions of the DMP will get into more detail and describe the practical data
management procedures implemented by the PROTAX project. At a minimum, the DMP
will be updated in Month 18 (October 2019) and Month 36 (April 2021).
This document has been produced following the EC guidelines for projects participating in this pilot and the additional considerations described in ANNEX I: KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA 3 .
# 1 Data Summary
* _What is the purpose of the data collection/generation and its relation to the objectives of the project?_
* _What types and formats of data will the project generate/collect?_
* _Will you re-use any existing data and how?_
* _What is the origin of the data?_
* _What is the expected size of the data?_
* _To whom might the data be useful ('data utility')?_
The purpose of the DMP is to provide an analysis of the main elements of the data management that will be used by the PROTAX Consortium with regard to the project’s data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and IPR requirements.
_Figure 1: Research Data Life Cycle_
For this first release of the DMP, the data types that will be produced during the project are based on the Description of the Action (DoA) and on the results obtained in the first months of the project.
Based on these considerations, Table 3 reports a list of indicative types of
research data that PROTAX will produce. These research data types have been
mainly defined in WP10, including data structures, sampling and processing
requirements, as well as relevant standards. This list may be adapted with the
addition or removal of datasets in the next versions of the DMP to take into
consideration the project developments. A detailed description of each dataset
is given in the following sections of this document.
<table>
<tr>
<th>
Data
</th>
<th>
Format
</th>
<th>
Origin (WP, task)
</th>
<th>
Expected size
</th>
<th>
Utility
</th>
<th>
Data users
</th>
<th>
Access level
</th> </tr>
<tr>
<td>
Contact list (information obtained from PROTAX partners from publicly
available data)
</td>
<td>
.xls
</td>
<td>
WP 10
</td>
<td>
variable
</td>
<td>
For engagement, dissemination and communication
activities
</td>
<td>
PROTAX
partners
</td>
<td>
Currently restricted to
PROTAX
partners
</td> </tr>
<tr>
<td>
Focus Groups response data
</td>
<td>
.doc,
.xls
</td>
<td>
WP 2
</td>
<td>
variable
</td>
<td>
For research purposes, e.g., state-of-the-art reviews, socioeconomic impact
assessments, identification and analysis of ethical issues
</td>
<td>
PROTAX
partners
</td>
<td>
Currently restricted to
PROTAX
partners
</td> </tr>
<tr>
<td>
Data from interviews
</td>
<td>
.doc,
.xls,
audio
</td>
<td>
WP 2, 3, 4, 5, 6
</td>
<td>
variable
</td>
<td>
For research purposes, e.g., state-of-the-art reviews, socioeconomic impact
assessments, identification and analysis of ethical issues
</td>
<td>
PROTAX
partners
</td>
<td>
Currently restricted to
PROTAX
partners
</td> </tr>
<tr>
<td>
Data from legal research (e.g.
reviews, analyses)
</td>
<td>
.doc,
.xls,
.pdf
</td>
<td>
WP 1, 3, 4, 5, 6
</td>
<td>
variable
</td>
<td>
To inform the legal analysis
</td>
<td>
PROTAX
partners
</td>
<td>
Currently restricted to
PROTAX
partners, will be published via deliverables
</td> </tr>
<tr>
<td>
Reports (including deliverables)
</td>
<td>
.doc,
.xls,
.pdf
</td>
<td>
All WPs
</td>
<td>
variable
</td>
<td>
Research and engagement
</td>
<td>
PROTAX
partners
</td>
<td>
Largely open access except where restricted or of confidential nature
</td> </tr>
<tr>
<td>
Input of stakeholders; result of questionnaire
</td>
<td>
.doc,
.xls
</td>
<td>
WP 2
</td>
<td>
variable
</td>
<td>
To integrate the views from the stakeholder board and other stakeholders into
the development of the codes by sending out draft versions and questionnaires
</td>
<td>
PROTAX
partners
</td>
<td>
Currently restricted to
PROTAX
partners
</td> </tr>
<tr>
<td>
Communication materials
</td>
<td>
.pdf,
.jpg, other audiovisual formats
(e.g. html, mp4,…)
</td>
<td>
WP 2, 7, 8
</td>
<td>
variable
</td>
<td>
Communications with external audiences
</td>
<td>
PROTAX
partners
</td>
<td>
Open, post publication
</td> </tr>
<tr>
<td>
Codes and Frameworks
</td>
<td>
.doc,
.xls
</td>
<td>
WP 11
</td>
<td>
variable
</td>
<td>
Improvement and
enhancement of ethical and legal frameworks
</td>
<td>
PROTAX
partners
</td>
<td>
open
</td> </tr>
<tr>
<td>
Scientific publications
</td>
<td>
.doc,
.docx,
.pdf
</td>
<td>
WP 9
</td>
<td>
variable
</td>
<td>
Research, communication, impact
</td>
<td>
PROTAX
partners, external audiences
</td>
<td>
open
</td> </tr> </table>
_Table 3: PROTAX preliminary overview of Data Types (October 2018)_
Specific datasets may be associated with scientific publications (i.e. underlying data), public project reports and other raw data or curated data not directly attributable to a publication. The policy for open access is summarised in the following picture.
We will conduct “desktop research”, review literature and use openly accessible statistical data from various institutions (e.g. the European Social Survey) to help substantiate our findings.
We will conduct case studies (about cases already judged, whose content is therefore publicly accessible), focus groups and interviews (in which notes are taken and recordings transcribed).
The expected size of the data varies by type. Scientific documents and (delivery) reports will be small in size, though delivery reports might have short data appendices (which will also not be significant in size). Data received via qualitative (and quantitative) research will be larger; it will include photos, recordings of interviews and their transcriptions. Data from other (public) sources may also be included.
The PROTAX project data will be useful to:
* Stakeholders
* Academics and University departments and Institutes that could use the PROTAX data for research and teaching purposes.
* Journalists and journalist practitioners
* International Organisations (UN, WTO, etc...)
* LEA, Tax Authorities
* Tax Practitioners (Lawyers, ‘Tax Advisors, Accountants, Corporations)
* NGOs
* Policy makers
## PROTAX personal data mapping
The table below identifies the activities which will involve the collection of personal data and describes the purposes of the collection, the types of data that will be processed, storage formats, modes of collection, sharing, location, accountability and access arrangements.
<table>
<tr>
<th>
**Activity/task/ WP (and purpose)**
</th>
<th>
**Type of personal data being processed**
</th>
<th>
**Storage format**
</th>
<th>
**Mode of collection**
</th>
<th>
**Sharing**
</th>
<th>
**Location (office, cloud, third parties)**
</th>
<th>
**Accountability**
</th>
<th>
**Access**
</th> </tr>
<tr>
<td>
Task 2.1, Stakeholder identification and analysis (contact list)
Purpose: to develop contacts for PROTAX
project research and engagement
activities
</td>
<td>
Name, title, organisation, e-mail id, gender, key activities/relevance to PROTAX, website, social media handles (optional) (information already in the public domain)
</td>
<td>
xls
</td>
<td>
From project partner networks, from publicly available sources only, and the subscription form on the PROTAX website
</td>
<td>
Internal only; restricted
</td>
<td>
SharePoint
</td>
<td>
CU
</td>
<td>
Restricted to PROTAX core EU partners. Coventry University (John Callen) will function as gatekeeper. Password protected.
</td> </tr>
<tr>
<td>
PROTAX
events
</td>
<td>
Name, title, organisation, email
</td>
<td>
xls,
doc, pdf
</td>
<td>
From
PROTAX
contact list and public domain
</td>
<td>
Internal only; restricted
</td>
<td>
SharePoint
</td>
<td>
Partner managing the event
</td>
<td>
Partners organising the event
</td> </tr>
<tr>
<td>
Focus
Groups (WP
2)
</td>
<td>
Participants anonymised, voice recordings taken
</td>
<td>
Audio files, .doc, .docx, .xls
</td>
<td>
Focus Groups
</td>
<td>
Internal, restricted
</td>
<td>
SharePoint
</td>
<td>
CU
</td>
<td>
Restricted to PROTAX core EU partners. Coventry University (John Callen) will function as gatekeeper. Password protected.
</td> </tr>
<tr>
<td>
Interview-related personal data (interviews in WP2, WP3, WP4, WP5 and WP6)
</td>
<td>
First name, last name, email, expertise, position, phone number or virtual id.
</td>
<td>
.docx, .doc
</td>
<td>
PROTAX
contact list and publicly available information
</td>
<td>
PROTAX partners carrying out interviews only
<td>
CU SharePoint
</td>
<td>
Partners carrying out interviews.
</td>
<td>
PROTAX
partners only
</td> </tr> </table>
_Table 4: PROTAX personal data mapping_
PROTAX partners will adhere to their own institutional policies and procedures
on data management. The table below illustrates this further.
<table>
<tr>
<th>
Partner
</th>
<th>
Institutional policy and procedures on research data management
</th> </tr>
<tr>
<td>
Coventry
University - UK
</td>
<td>
Their policies are available via these links:
_https://www.coventry.ac.uk/legal-documents/information-security-policy/_ and
_https://www.coventry.ac.uk/Global/09-aboutus/GDPR/Data%20Protection%20Policy%20V4.pdf_
_John Callen will be the lead person responsible for research data management
and management of compliance in PROTAX._
</td> </tr>
<tr>
<td>
Trilateral
Research - UK
</td>
<td>
Trilateral Research’s institutional policies and procedures are specified in
its internal Policies and Procedures document (last update, October 2017).
Trilateral follows established guidelines in relation to any project work
undertaken, which involves data collection, storage and transfer. Any personal
data collected is stored on a secure, private, cloud-based server that is
maintained on a routine basis. Any personal data collected is anonymised and
data subjects are provided with a pseudonymisation number. All access to
cloud-based server files is granted by invitation only; there is a log
register and related licences for each person on the cloud. Trilateral
encrypts access to the network via state-of-the-art network management tools,
ensuring that only authorised Trilateral staff may access the shared network
environment and assets on the network. Trilateral project members store their
laptops (and any other device used for PROTAX) securely when unattended (at
home or during travel); complete regular backups of locally-stored data;
password protect any sensitive files, including any that may include company
financial or banking information, or personal data for staff or customers;
encrypt home office network access; and install and regularly update anti-
virus software. No project data will be stored locally on Trilateral members’
devices. Any transfer of
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
sensitive data only takes place over encrypted connections, using password
protections and access controls in the case of uploads and downloads to and
from repositories. Trilateral is completing the process of becoming GDPR-
compliant, which will be finalised before the deadline. Trilateral Research is
accredited under the UK government Cyber Essentials scheme.
_David Wright will be the lead person responsible for research data management
and management of compliance in PROTAX._
</th> </tr>
<tr>
<td>
Vienna Centre of Societal Security
(VICESSE) - AT
</td>
<td>
Vicesse’s institutional policies and procedures are specified in its internal
Policies and Procedures document (last update, September 2018). Vicesse
follows established guidelines in relation to any project work undertaken,
which involves data collection, storage and transfer. Any personal data
collected is stored on a secure, private, cloud-based server that is
maintained on a routine basis. Any personal data collected is anonymised and
data subjects are provided with a pseudonymisation number. All access to
cloud-based server files is granted by invitation only; there is a log
register and related licences for each person on the cloud. Vicesse encrypts
access to the network via state-of-the-art network management tools, ensuring
that only authorised Vicesse staff may access the shared network environment
and assets on the network. Vicesse project members store their laptops (and
any other device used for PROTAX) securely when unattended (at home or during
travel); complete regular backups of locally-stored data; password protect any
sensitive files, including any that may include company financial or banking
information, or personal data for staff or customers; and install and
regularly update anti-virus software. Any transfer of sensitive data only
takes place over encrypted connections, using password protections and access
controls in the case of uploads and downloads to and from repositories.
_Regina Kahry will be the lead person responsible for research data management
and management of compliance in PROTAX._
</td> </tr>
<tr>
<td>
Austrian
Ministry of
Finance (BMF)
\- AT
</td>
<td>
Their policy is available via this link:
_https://www.bmf.gv.at/services/datenschutz.html_
</td> </tr>
<tr>
<td>
Austrian
Ministry of
Justice
(BMVRDJ) - AT
</td>
<td>
The Austrian Ministry of Constitutional Affairs, Deregulation, Reforms and
Justice (BMVRDJ) has recently established new rules on data management and
data protection (“Erlass vom 24. April 2018 über die allgemeine Gewährleistung
des Datenschutzes im BMVRDJ und in den nachgeordneten Dienststellen
(Datenschutzerlass)”). Any personal data collected is stored on a secure,
private server that is maintained on a routine basis. BMVRDJ encrypts access
to the network via state-of-the-art network management tools, ensuring that
only authorised staff may access the network environment and assets on the
network. Laptops (and any other device used for PROTAX) are secured by
multiple passwords and are only accessible via use of ID cards. No project
data will be stored locally on BMVRDJ’s members’ devices. Any transfer of
sensitive data only takes place over encrypted connections.
</td> </tr>
<tr>
<td>
Ministry of
Finance
(MFIN) - MT
</td>
<td>
The Financial Intelligence Analysis Unit (FIAU), as an intelligence agency, is prohibited by law from divulging information relating to the affairs of the Unit (Article 34(1) of the Prevention of Money Laundering Act - Cap. 373 of the Laws of Malta). This would include specific details of its data management (such as collection, storage and dissemination
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
channels used by the Unit). Having said that, below please find an extract from the Scope of the FIAU's Information Security Policy (v2.0, effective as from 14.09.2015):
The Financial Intelligence Analysis Unit (FIAU) maintains information that is
secret and sensitive. It relies on the information it collects, stores and
processes to carry out its legal obligations and activities effectively and
efficiently in the area of the prevention of money laundering, prevention of
funding of terrorism and the carrying out of compliance monitoring relating to
such activities. The confidentiality of the information is also protected by
virtue of the Prevention of Money Laundering Act (Cap. 373 of the Laws of
Malta). The preservation of the integrity, confidentiality and availability of
its information and systems underpin the Agency’s ability to carry out its
legal obligations and to safeguard its reputation. The exposure of secret and
sensitive information to unauthorised individuals could cause irreparable harm
to the FIAU and its employees. Additionally, if the FIAU information were
tampered with or made unavailable, it could impair its ability to carry out
its operations.
_Daniel Frendo will be the lead person responsible for research data
management and management of compliance in PROTAX._
</th> </tr>
<tr>
<td>
Estonian Tax and Customs
Board (ETCB) -
EE
</td>
<td>
Section 26 of the Estonian Taxation Act imposes the protection of tax secrecy.
The tax authorities and officials and other staff thereof are required to
maintain the confidentiality of information concerning taxable persons,
including all media (decisions, acts, notices and other documents) concerning
the taxable persons, information concerning the existence of media, business
secrets and information subject to banking secrecy, which is obtained by the
authorities, officials or other staff in the course of verifying the
correctness of taxes paid, making an assessment of taxes, collecting tax
arrears, conducting proceedings concerning violations of tax law or performing
of other official or employment duties (hereinafter tax secrecy). The
obligation to maintain tax secrecy continues after the termination of the
service or employment relationship.
According to subsection 6 (7) of Personal Data Protection Act concerning
physical persons there is a general obligation to notify the person concerned
of the data collected. The principle of individual participation requires that
the data subject shall be notified of data collected concerning him or her,
the data subject shall be granted access to the data concerning him or her and
the data subject has the right to demand the correction of inaccurate or
misleading data. This is not specifically applicable in tax matters.
Taxation Act does not require the consent of the person who is the object of a
request for information. In accordance with section 30 of Taxation Act, the
tax authority may disclose information subject to tax secrecy without the
consent of a taxable person:
1. to the competent bodies of a foreign state in respect of a resident taxpayer
in that state concerning information relevant to tax proceedings under the
conditions provided for in an international agreement;
2. to bodies of the European Union and Member States thereof which are competent to exchange information relating to taxable persons pursuant to the procedure prescribed in the legislation of the European Union;
3. Processing of personal data for scientific research is regulated by section 16 of the Personal Data Protection Act. Permission for conducting scientific research must be requested from the Data Protection Inspectorate if in the process of the scientific research data that are not non-personalized are used without the consent of the
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
person. If in the process of scientific research also sensitive personal data
are processed, processing of sensitive personal data must be registered with
the inspection separately. No permission by the inspection is required if
personal data are processed in scientific research with the consent of the
person. Even so, with scientific research carried out on the basis of a
consent, in the process of which sensitive personal data are processed,
processing of sensitive data must be registered with the inspection. If
scientific research is carried out with non-personalized data (i.e., a person
is marked with a feature which does not allow identification of the person),
the data are not deemed to be personal data for the purposes of the law and,
therefore, use of such data does not require consent of the person, permission
of the inspection or registration of processing of sensitive personal data.
For the implementation of this provision, personal data must be coded before
these are handed over to the person carrying out the scientific research.
Permission for scientific research can be requested by submitting an
application in Estonian to the inspection.
• Relevant legal acts:
1. Taxation Act
2. Public Information Act
3. Personal Data Protection Act
4. Electronic Communications Act
5. EU General Data Protection Regulation
**Standards we implement**
ISKE (Three-Level IT Baseline Security System) is the information security standard developed for the Estonian public sector. According to Government of the Republic Regulation no. 273 of 12 August 2004, ISKE is compulsory for state and local administration organisations that handle databases/registers. The goal of ISKE implementation is to ensure a security level sufficient for the data processed in IT systems. The necessary security level is achieved by implementing standard organisational, infrastructural/physical and technical security measures. The preparation and development of ISKE is based on a German information security standard, the IT Baseline Protection Manual (IT-Grundschutz), which has been adapted to match the Estonian situation. Estonia's regulatory authority that manages ISKE has added some Estonia-specific content. In particular, ISKE contains additional content relevant to Estonia's national identification cards, the X-Road and a new cloud module.
Framework documents:
1. Information security policy (06.08.2015 No 103)
2. IT Services agreement between ETCB and the Information Technology Centre for the Ministry of Finance
3. General security rules
4. Data processing overview
Security measures:
1. Confidentiality and non-disclosure agreements are signed by all employees.
2. "Ordinary users do not have admin. rights and can`t install software.
All changes made by sys admin are stored in log file. Before implementing
changes it needs to be approved by different parties and notified before
actual upgrade takes place."
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
3. Different roles have different access rights depending on the duties
4. Access rights are centrally managed and provided after approval by the manager.
5. All incidents must be reported to our Helpdesk. Helpdesk manages the incident-solving process.
6. Errors and faults are reported to Helpdesk and registered in our online application, converted to problems if needed. Risk assessment is conducted twice a year.
7. Removable computer media is marked accordingly when taken into use. Media is erased securely or destroyed physically when no longer used, depending on the media type.
8. No shared networks
9. Network communications are secured using TLS and VPN. In the case of VPN, all traffic is encrypted, as opposed to TLS, which encrypts just specific traffic.
10. There is a process in place whereby the Tax and Customs Board's Internal Control Department reviews users' access rights on a regular basis and uses expert models to analyse audit log failures.
11. Network is segregated by using firewalls
12. Remote computer nodes are authenticated by VPN and logon mechanisms.
13. Users must report all incidents to helpdesk either by email or phone. Helpdesk then registers the incident and coordinates the resolution.
14. In case of an incident, Helpdesk co-ordinates its resolution using the resources required, working closely with the security officer. Procedures are documented in the corresponding plan.
15. Risk assessment is conducted twice a year. Based on that assessment, risks are registered in our risk management/planning application; mitigation steps are then planned, scheduled and conducted. There are also plans for equipment and software life cycles and replacement.
16. A continuity plan is defined and documented in our internal information system and periodically analysed. Tolerance is one week.
</th> </tr>
<tr>
<td>
Policia
Judiciaria (PJ)
\- PT
</td>
<td>
Polícia Judiciária collaborates in PROTAX as an end-user. We will support
other partners while defining requirements, supporting the development phase
and validating results.
PJ does not collect, process, store or even supply to the consortium any data
respecting to ongoing investigations due to judicial confidentiality.
If any data is supplied by PJ, it will be simulated, fictional or anonymised.
</td> </tr>
<tr>
<td>
An Garda
Siochana
(AGS) - IE
</td>
<td>
Appropriate security measures are taken against unauthorised access to, or
alteration, disclosure or destruction of, personal data and against their
accidental loss or destruction. The security of personal information is all-
important. High standards of security are essential for all personal
information. The nature of security used may take into account the sensitivity
of the data in question.
* Access to information is restricted to authorised staff on a “need-to-know” basis,
* Computer systems are password protected,
* Information on computer screens and manual files is kept hidden from the public,
* Back-up procedures are in operation for computer held data, including off-site back-up,
* All waste papers, printouts, etc. disposed of carefully by shredding,
* All employees must lock their computer on each occasion when they leave the workstation,
* Personal security passwords are not disclosed to any other employee of An Garda
Síochána,
</td> </tr>
<tr>
<td>
</td>
<td>
• All Garda Síochána premises are secure when unoccupied. An Garda Síochána complies fully with the provisions of the Data Protection Act 2018.
</td> </tr> </table>
_Table 5: Institutional Policies_
# 2\. FAIR data
The European Commission recommends that Horizon 2020 beneficiaries “make their
research data findable, accessible, interoperable and reusable (FAIR), to
ensure it is soundly managed” 4 . Based on this guidance, this section
outlines how PROTAX will operationalise this.
The figure below illustrates the FAIR Guiding principles:
_Figure 2: The FAIR guiding principles_
## 2.1 Making Data findable, including provisions for metadata
### Internal provisions
PROTAX project documents and administrative data are stored in a centralised
online repository – SharePoint – provided by Coventry University that
is accessible to all the partners working on the project. The manager of
PROTAX (John Callen) is the repository owner. Administrative access rights are
given to additional members of Coventry University staff (Umut Turksen,
hereafter referred to as SharePoint administrator). SharePoint administrators
will manage access rights and monitor folders and file names to ensure the
data repository is consistent. To make data findable and reusable, the
following measures are put in place:
* **Location:** All documents will be stored in relevant folders. There are ten master folders:
Work Packages, Contractual Guidance, Project Governance, Project Reporting,
Project
Meetings, Partner Details, Project Management, Deliverables, PROTAX evolution
and
Templates. Each of these folders has sub-folders (e.g., the WP folder has a folder each for WPs 1-10) where related documents can be stored in a variety
of formats e.g., Word documents, PDFs, Excel spreadsheets, PowerPoints or
other standard data formats. Each PROTAX partner is responsible for storing
documents related to their work in the project in the correct location.
* **Naming of files** : The file names will include a short title of the document and version number (of creation or revision) to make them uniquely identifiable and distinguishable. This will ensure any partner requiring the information can easily find it (see further below).
* **Reports and documents** : All reports and documents will also contain information on: authors and contributors, clear version numbering, and key words.
* **Search functionality** : SharePoint has a search functionality that enables users to search and find documents with ease.
* **Survey data (Focus Groups)** : Participants in the Focus Groups will remain anonymous. Contact details or any other personal information will not be stored with the contents of the focus groups.
### External provisions
The following provisions will ensure that PROTAX outputs are findable
externally:
* All public deliverables (in some cases redacted versions) and outputs will be published on the PROTAX website and in agreed institutional or other repositories.
* A digital object identifier (DOI) will be assigned to datasets for effective and persistent citation when it is uploaded to an institutional repository. This DOI can be used in any relevant publications to direct readers to the underlying dataset.
* Search keywords will be provided for every deliverable and report.
* All partners will be advised of the availability of data, changes to data and their location to facilitate access and wider sharing (as deemed fit).
* _Naming Convention_
For report deliverables (document code: D), the document identifier will have
the following format:
PROTAX_WPX.X_DX.X_<DocumentName>
All other documents, the document identifier will have the following format:
PROTAX_WPX.X_<DocumentName>
* _Version numbers_
We will also provide clear version numbers for continuously updated deliverables (i.e. the DMP, which will be updated in Months 18 and 36). The updated versions will be named as follows:
PROTAX_WPX.X_DX.X_<DocumentName>_V<NumberAndDateOfVersion>
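As a minimal sketch of this convention (the exact version tag, e.g. "2-20190430", is an assumption, since the DMP leaves its layout open):

```python
# Sketch of the PROTAX naming convention; the version-tag format is assumed.
from typing import Optional


def document_id(wp: str, name: str, deliverable: Optional[str] = None,
                version: Optional[str] = None) -> str:
    """Build a PROTAX document identifier following the convention above."""
    parts = ["PROTAX", f"WP{wp}"]
    if deliverable:
        parts.append(f"D{deliverable}")
    parts.append(name)
    if version:
        parts.append(f"V{version}")
    return "_".join(parts)


print(document_id("10.3", "DMP", deliverable="10.2", version="2-20190430"))
# -> PROTAX_WP10.3_D10.2_DMP_V2-20190430
```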
## 2.2 Making data openly accessible
### Open access to scientific publications
Per clause 25 of the PROTAX Grant Agreement (GA), each PROTAX beneficiary will
ensure open access (free of charge online access for any user) via gold open
access routes 5 to all peer-reviewed scientific publications relating to its
results. Per the GA 6 , beneficiaries will:
* as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; the beneficiary will also aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications.
* ensure open access to the deposited publication — via the repository — at the latest: (i) on publication, if an electronic version is available for free via the publisher, or (ii) within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
* ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication. The bibliographic metadata must be in a standard format and must include all of the following (see the illustrative record below): the terms “European Union (EU)” and “Horizon 2020”; the name of the action, acronym and grant number; the publication date and length of embargo period, if applicable; and a persistent identifier.
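For illustration, a deposited record carrying the required bibliographic metadata might look as follows. The field names and placeholder values are assumptions; only the acronym and grant number are taken from this document.

```python
# Illustrative metadata record; field names and placeholder values are
# assumptions, except the acronym and grant number, which are PROTAX's own.
publication_metadata = {
    "funders": ["European Union (EU)", "Horizon 2020"],
    "action_acronym": "PROTAX",
    "grant_number": "787098",
    "action_name": "<full name of the action per the Grant Agreement>",
    "publication_date": "2019-06-01",             # illustrative
    "embargo_months": 0,                          # gold OA: none
    "persistent_identifier": "doi:10.xxxx/xxxxx", # assigned on deposit
}
```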
PROTAX partners will discuss governance of open access requirements and their
implementation further in 2018 (refer to _PROTAX deliverables D9.1
Dissemination and exploitation Plan and D9.2: Communication plan_ ).
### Open access to research data
In line with PROTAX GA 7 clause 25, PROTAX will provide open access to research data, unless an exception to open access applies, e.g., if the achievement of the project’s main objective would be jeopardised by making those specific parts of the research data openly accessible, if doing so would not be in line with data protection requirements, or if the data constitute a trade secret.
The raw data from the focus groups will not be made publicly accessible.
Before sharing any data – either with the consortium or externally – we will
ensure that no disclosive 8 information is included.
The raw data from the focus groups (i.e., audio recordings and note-takers’
notes) will not be made publicly available. This decision is based on the
difficulty in truly anonymising audio recordings. If we were to make these
publically available, we would need participants to give explicit consent
(acknowledging that their data could be used to identify them). This is not
considered ideal as it could affect engagement with the study, both at
recruitment and during fieldwork. The raw data from the panels will be used to
produce reports (i.e., deliverables in WPs 1-8) that will be publicly
available and used more widely by the PROTAX partners to support their work on
the project.
Public deliverables and outputs (redacted, if needed) will be published on the
PROTAX website.
After EC approval of deliverables (and not before the relevant interim and
final reviews) and/or the end of the project, PROTAX will deposit its
deliverables on the PROTAX homepage and take measures to make it possible for
third parties to access, mine, exploit, reproduce and disseminate, free of
charge for any user.
Within the consortium we will share our Deliverables, Contributions and other
relevant information and data via the Coventry University SharePoint. This
share point is primarily a document management and storage system which is
highly configurable. To access this, all project partners have to be invited
to the platform by the Project Manager (John Callen) and are able to log in
with a personal pin or password. Within the PROTAX project all partners have
read/write rights to the root (all folders) and each partner has their own
private space which only they can access.
Nevertheless, data sharing in the open domain can be restricted where there is a legitimate reason to protect results that can reasonably be expected to be
commercially or industrially exploited. Strategies to limit such restrictions
will include anonymising or aggregating data, agreeing on a limited embargo
period or publishing selected datasets.
As we use office applications (such as MS Office or Open Office), no specification of software tools or documentation is needed. In rare cases where data is analysed statistically, easily accessible tools or software will be used (e.g. SPSS, R).
There is no need for a Data Access Committee, as we don’t use personal data
for publications, reports and deliverables and the data we get via the focus
groups are anonymised.
## 2.3 Making data interoperable
PROTAX partners will exchange information using a variety of means, e.g.,
e-mail, SharePoint and password-protected local storage and will select the
sharing platform as appropriate for the purpose.
To allow data exchange and reuse between researchers, institutions,
organisations, countries, etc., PROTAX ensures data interoperability through
the consistent use of common, standardised file formats [See Table 3,
Preliminary overview of data types]. The consortium uses file formats that,
even when originating in or primarily used with propriety software and/or
code, are accessible with open source software. When available and not
otherwise in conflict with data security, data protection or processing
measures and requirements, the consortium will use open source software
applications. Through its use of common, standardised file formats and
software, PROTAX aims to facilitate any legitimate and lawful data re-
combinations with different datasets from different origins.
As we use standard office software (mainly MS Office and Open Office), we will not seek to make our data interoperable any further with other research datasets. Standard office software will enable data exchange and re-use between researchers, institutions, organisations, stakeholders and countries. The project will avoid generating its own ontologies and vocabularies.
## 2.4 Increase data re-use (through clarifying licenses)
_Re-use of existing data_
Some of PROTAX’s work may, if appropriate and needed, re-use (aggregate, synthesise or analyse) existing materials (e.g., figures, tables, quotations) from existing literature (academic, policy or other documents); in such cases, they will be properly referenced and acknowledged, and any necessary permissions for re-use will be obtained. We will use literature (both academic and press articles) relevant to the tasks.
_Increasing re-use of PROTAX results_
The deliverables developed during the project will be publicly accessible via
the PROTAX website and the institutional open access repositories.
PROTAX deliverables use a Creative Commons Attribution 4.0 International
License 9 . According to this, a user can share (i.e., copy and redistribute
the material in any medium or format) or adapt (remix, transform, and build
upon the material for any purpose, even commercially), under the following
terms:
* The user must give appropriate credit, provide a link to the license, and indicate if changes were made. A user may do so in any reasonable manner, but not in any way that suggests the licensor endorses the user or their use.
* The user may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
In line with the PROTAX Consortium Agreement, the results of work performed
within Work Packages in the Project are owned by the party that generate(s)
them. Data delivered by the subcontracting party to one or more Parties shall
be exclusively owned jointly by the beneficiaries. The former parties shall
require their subcontracting parties concerned to assign ownership to them on
any results achieved including intellectual property rights on such results or
to be vested on such results within the framework of any research assignment
by the former Parties. Joint ownership is governed by GA Article 26.2 and
unless otherwise agreed:
* each of the joint owners shall be entitled to use their jointly owned results for non-commercial research activities and academic teaching on a royalty-free basis, and without requiring the prior consent of the other joint owner(s), and
* each of the joint owners shall be entitled to otherwise exploit the jointly owned results and to grant non-exclusive licenses to third parties (without any right to sub-license), if the other joint owners are given:(a) at least 45 calendar days’ advance notice; and (b) fair and Reasonable compensation.
Intellectual property issues will be re-visited (if needed) in the next update
to this plan and the final version of the DMP and will be monitored together
with the PROTAX Project Management Committee (PMC).
The PROTAX consortium has internally specified a data quality assurance policy and processes under the ambit of Task 10.3, which is devoted to quality
assurance 10 .
# 3\. Allocation of resources
* _What are the costs for making data FAIR in your project?_
For the whole project, there is a total of €22.500 dedicated to providing FAIR
data. Each partner is allocated €2.500.
* _How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions)._
This is included as a service purchase in the grant agreement, with €2.500 for each partner (total €22.500).
* _Who will be responsible for data management in your project?_
Vicesse (Vienna Centre for Societal Security) is in charge of administering the Data Management Plan for the PROTAX project.
* _Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?_
The data from the PROTAX project will be stored long-term in the institutional repositories of the partners.
The PROTAX project has a specific task dedicated to data management - “Task
10.3: Administer the project’s data management.” This task is led by Vicesse
and supported by contributions from the University of Twente and Uppsala
University. A total of 0.5 person-months has been allocated to this task
(including D.10.3 Quality Assurance Plan) over the duration of the project.
The first deliverable, _D10.2 Data management plan_ (i.e., this deliverable),
will be submitted in October 2018 (month 6 of the project) to the European
Commission.
This deliverable will be updated and a revised version, i.e., _D10.2 Final
revised data management plan_ will be delivered in April 2021 (month 36 of the
project) to the European Commission. The plan will be reviewed prior to the
project’s interim review (currently scheduled for month 18) and final review
(month 36) of the project; updates will be made to take into account new data,
changes in consortium policies, and changes in consortium composition and
external factors (e.g., new consortium members joining or old members
leaving).
The final version of the DMP (i.e., D10.2) will further describe how data from
the PROTAX project will be managed in a sustainable manner.
# 4\. Protection of personal data
_Purposes and legal basis of personal data processing_
The project will collect and process personal data only if, and insofar as, it
is necessary for its research and engagement activities i.e., research,
consultations, interviews and events, and to share its findings and results
with stakeholders via mailings, the website and newsletters. Our primary legal
basis for processing personal data will be an individual’s consent.
Individuals will have the right to withdraw consent at any time without any
negative consequences.
Table 4 above sets out the PROTAX activities which will collect personal data, the purposes of the collection, the types of data that will be processed, storage formats, modes of collection, sharing, location, accountability and access arrangements.
PROTAX will mostly collect personal data that is largely available in the
public domain. Project partners and the project subcontractor will collect
such data from respondents from EU and non-EU countries.
Personal data may be collected from members of the consortium, members of
external organisations or individuals in their capacities as experts,
respondents or participants. Use of such data will be in line with legal and
ethical standards described in _D11.1 Ethical, social and privacy issues in
PROTAX_ and this deliverable.
_Data minimisation, storage and retention_
PROTAX will minimise the amount of data collected and processed, and the
length of time it retains the data. According to GDPR requirements, the
personal data collected in PROTAX will be adequate, relevant and limited to
what is necessary in relation to the purposes for which they are processed.
PROTAX partners will ensure that personal data about an individual is
sufficient for the purpose it holds it for in relation to that individual, and
PROTAX will not hold any more information than what is properly needed to
fulfil that purpose.
PROTAX will store personal data securely on password-protected computers.
Personal data will only be used for the specific purpose for which it was
collected (e.g., workshop management, travel arrangements) and will be deleted
immediately after that purpose is fulfilled, unless legally required to be
retained (noting here that the PROTAX Grant Agreement requires project data to
be archived correctly for at least five years after the balance of the project payment is
paid). Published interviews, survey and panel reports will not contain any
personal data or reference to personal data.
PROTAX will comply with ethical principles and applicable international, EU
and national law (in particular, Directive 95/46/EC and, once it applies, the
EU General Data Protection Regulation 2016/679). For activities for which
informed consent is required, we will provide research participants with a
clear description of PROTAX activities and clear information on the procedures
that will be used for data control and anonymisation.
Using the PROTAX participant information sheet and informed consent form [see
Annex] and GDPR-compliant data protection notices [see Annex], PROTAX will give
participants information about how the project will collect, use, retain and
protect their data during the project.
_Rights of individuals_
Individuals will have the following rights:
* Right to request from the PROTAX data controllers access to the personal data PROTAX holds that pertains to them.
* Right to request the controllers to rectify any errors in personal data to ensure its accuracy.
* Right to request the controllers to erase their personal data.
* Right to request the controllers to restrict the future processing of their personal data, or to object to its processing.
* Right to data portability - upon request the data controller will provide a data subject with a copy of the data PROTAX has regarding them in a structured, commonly used and machine-readable format (a minimal sketch of such an export follows this list).
* As the processing of personal data occurs on the basis of consent, individuals will have the right to withdraw their consent at any time, and PROTAX will cease further processing activities involving their personal data. (However, this will not affect the lawfulness of any processing already performed before consent was withdrawn).
* Right to lodge a complaint with a supervisory authority, such as their national data protection authority.
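Purely as an illustration of the data-portability right above, and not as part of PROTAX's actual tooling, a minimal Python sketch of such a structured, machine-readable export could look as follows; the record fields and file name are hypothetical:

```python
import json
from datetime import date

def export_subject_data(subject_records: dict, path: str) -> None:
    """Write one data subject's records to a machine-readable JSON file."""
    envelope = {
        "exported_on": date.today().isoformat(),
        "records": subject_records,  # everything the project holds on this subject
    }
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(envelope, fh, ensure_ascii=False, indent=2)

# Hypothetical participant record kept for workshop management.
export_subject_data(
    {"name": "Jane Doe", "role": "workshop participant", "consent_given": True},
    "subject_export.json",
)
```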
If partners consider a plan to re-use personal data, they will give
participants information about this as soon as it becomes available and give
them the opportunity to consent or withdraw their data. During the project,
PROTAX will give participants the option to withdraw themselves and their data
at any time. As part of each communication the participant receives, PROTAX
will give her or him the opportunity to opt out of further communications and
have their data deleted from the project’s records.
If the project uses secondary personal data, it will only do so from a public
source or such source as is authorised for such use (either specifically for
our research and engagement activities or generally for any secondary use).
All partners of the consortium will adopt good practice data security
procedures. This will help avoid unforeseen usage or disclosure of data,
including the mosaic effect (i.e., obtaining identification by merging
multiple sources). Measures to protect data include access controls via secure
log-ins, installation of up-to-date security software on devices, regular data
backups, etc. Section 6 of this document further covers data security aspects.
Recorded information (audio and/or visual) will be given special consideration
to ensure that privacy and personal identities are protected. Participants
will be provided with a consent form [see Annex] to read and sign if they will
be photographed or recorded visually (e.g., video) during PROTAX activities.
The signed forms will be kept on file for inspection.
The PROTAX consortium will carefully assess the benefits and burdens of
collecting and processing sensitive personal data 42 before conducting the
public opinion surveys and panels of citizens. 43 If the need to collect and
process such data arises (e.g., to establish eligibility for participation
according to recruitment criteria), PROTAX will seek the explicit consent of
data subjects. Data subjects will be able to opt out of the focus groups at
any stage.
In keeping with best practices for data security, Coventry University
SharePoint will store focus group responses in a secure location in the file
system that only project staff can access. Data files will contain no
disclosive information, so individual respondents cannot be identified from
them.
_International data transfers_
The PROTAX consortium does not expect to transfer personal data outside the
EU. In the case this position changes, we will comply with the GDPR
requirements and ensure that personal data is only transferred outside of the
EU in compliance with the conditions for transfer set out in Chapter V of the
GDPR 11 .
_Data controllers_
The project’s data controllers will make the necessary notifications and/or
obtain the necessary authorisations for collecting and processing data. Upon
request, the project co-ordinator will provide copies of these authorisations
to the European Commission. For the purpose of personal data processing from
data subjects involved in PROTAX research and engagement activities, the data
controllers for PROTAX are:
* Umut Turksen, Coventry University, [email protected]
* David Wright, Trilateral Research Ltd, [email protected]
* John Callen, Coventry University, [email protected]
* Regina Kahry, Vienna Centre for Societal Security, [email protected]
# 5\. Data security
For the duration of the project, PROTAX partners will store PROTAX project
data in a SharePoint repository, hosted by Coventry University. The SharePoint
repository is password protected and only invited current project members with
passwords may access it. All incoming and outgoing network communication with
SharePoint is encrypted using a Quovadis-verified certificate 12 . When
partners leave PROTAX, their access to the PROTAX repository will remain
available but will be changed from Read/Write to Read-only.
The database is backed up incrementally every 30 minutes and fully once a day.
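The SharePoint backups themselves are managed by Coventry University's infrastructure; purely to illustrate the difference between the full and incremental schedules above, a minimal local sketch in Python (directory names hypothetical) could look like this:

```python
import shutil
import time
from pathlib import Path

SOURCE = Path("project_data")   # hypothetical data directory
BACKUPS = Path("backups")

def full_backup() -> None:
    """Copy the whole data directory (the daily job)."""
    shutil.copytree(SOURCE, BACKUPS / f"full_{int(time.time())}")

def incremental_backup(since: float) -> None:
    """Copy only files modified after `since` (the 30-minute job)."""
    target = BACKUPS / f"incr_{int(time.time())}"
    for path in SOURCE.rglob("*"):
        if path.is_file() and path.stat().st_mtime > since:
            dest = target / path.relative_to(SOURCE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)

stamp = time.time()
full_backup()
# ...thirty minutes later, a scheduler would run:
incremental_backup(since=stamp)
```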
A separate disaster recovery system is also in place; more detail is provided
below.
Figure 3: Disaster Recovery 13
In addition to the SharePoint repository, partners may store local copies of
research data on their institutional servers and or business cloud-based
servers with access controls, encryption or password protection. Partners will
follow their institutional security safeguards.
All partners will as a minimum:
* ensure PROTAX research data stored with them on their institutional servers is regularly backed up.
* ensure devices and data are safely and securely stored, and access controls are defined (e.g., via encryption, password protection, restriction of number of persons with access) at the user level.
* support good security practices by protecting their own devices and installing and updating antimalware software, anti-virus software and enabling firewalls.
* (In case personal data is processed), ensure appropriate security and confidentiality of the personal data, including for preventing unauthorised access to or use of personal data and the equipment used for the processing (GDPR).
* where necessary, the controller or processor of personal data will evaluate the risks inherent in the processing and implement measures to mitigate those risks (e.g., encryption; a minimal encryption sketch follows this list) and ensure an appropriate level of security, including confidentiality, taking into account the state of the art and the costs of implementation in relation to the risks and the nature of the personal data to be protected (GDPR).
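To make the encryption measure above concrete, the following is a minimal sketch (not any partner's mandated tooling) of encrypting a local research-data file at rest in Python, assuming the third-party `cryptography` package is installed; file names are hypothetical, and secure storage of the key is left to each partner's institutional policy:

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it separately from the encrypted data.
key = Fernet.generate_key()
Path("secret.key").write_bytes(key)
fernet = Fernet(key)

# Encrypt a local research-data file at rest.
plaintext = Path("interview_notes.txt").read_bytes()
Path("interview_notes.txt.enc").write_bytes(fernet.encrypt(plaintext))

# Decryption later requires the stored key.
restored = Fernet(Path("secret.key").read_bytes()).decrypt(
    Path("interview_notes.txt.enc").read_bytes()
)
assert restored == plaintext
```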
After PROTAX ends, the responsibility concerning data security of the PROTAX
datasets will lie with the owners/managers of the repositories where these are
stored.
# 6\. Ethical aspects
* _Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA)._
All ethical, social and privacy issues in PROTAX are discussed in D11.1. The
latter has a public dissemination level and is therefore accessible to all
partners.
* _Is informed consent for data sharing and long-term preservation included in questionnaires dealing with personal data?_
We will include information about data sharing and long-term preservation in
our questionnaires.
_D11.1 Ethical, social and privacy issues in PROTAX_ sets out in detail
how the consortium will manage potential ethical issues according to
applicable regulatory frameworks, ethical and data protection standards and
other ethics requirements. All partner organisations signed letters of
compliance, confirming their adherence to the EMP. Even so, certain
regulations, principles, standards and requirements merit special emphasis in
this deliverable.
_PROTAX_ partners will comply with Article 34 of the Grant Agreement 787098 —
_PROTAX_ , which states that all activities must be carried out in compliance
with ethical principles. Consequently, partners will conduct research in
accordance with fundamental principles of research integrity, such as those
described by ALLEA in its _European Code of Conduct for Research Integrity_ 14 .
These principles are reliability, honesty, respect and accountability.
Furthermore, the partners will avoid misconduct, namely, fabrication,
falsification or plagiarism.

14 All European Academies, _European Code of Conduct for Research Integrity_ ,
Revised Edition, May 2017. http://www.allea.org/wp-content/uploads/2017/05/ALLEA-European-Code-of-Conduct-for-Research-Integrity-2017.pdf
In keeping with the highest standards of research integrity, and to ensure the
privacy, safety and dignity of data subjects, PROTAX partners and the project
subcontractor will provide participants with project information sheets and
consent forms in a language and in terms fully understandable to them [See
Annex]. These forms will describe the aims, methods and implications of the
research, the nature of the participation and any benefits or risks (e.g., to
privacy) that might be involved. The forms will explicitly affirm that
participation is voluntary and that participants have the right to refuse to
participate and to withdraw their participation, or data, at any time, without
any consequences. The forms will outline how partners will collect and protect
data during the project (e.g., use of anonymisation), and then destroy it or
reuse it (with consent). The form will indicate the procedures to be
implemented in the event of unexpected findings. Researchers will ensure that
potential participants have fully understood the information and do not feel
pressured or forced to give consent.
In addition to the preceding ethical safeguards, PROTAX partners will conform
to the applicable rules and aims of Data Protection Directive 95/46/EC and the
EU General Data Protection Regulation 2016/679, its successor. These
regulations are complemented by PROTAX partners acting in accordance with
applicable national legislation and data protection-related regulations.
# 7\. Responsibilities
The planning and overall co-ordination of the data management task will be the
responsibility of Trilateral Research. Each project partner who handles and is
responsible for data collected, stored or used will ensure compliance with the
strategy outlined in this document.
VICESSE will review and revise this plan, consult with partners and implement
any corrective actions, if required. Revisions to the DMP may become necessary
in the following cases: new or unanticipated datasets become available,
existing datasets are re-classified into a different data sharing category due
to emerging/newly discovered data privacy or commercial concerns, or external
factors, including changes to data protection law, the removal of a project
partner, or technological advancements that could impact data security. PROTAX
partners should notify Trilateral Research if any such cases arise and advise
of any updates to their institutional data management policies and procedures
that might have an impact on PROTAX‘ data management.
# 8\. Management of compliance
VICESSE will oversee compliance with the data management plan along with
Coventry University (project co-ordinator) and Trilateral Research. Each
PROTAX partner will be responsible for adhering to the strategy and procedures
outlined in this document and other relevant documents (e.g., the PROTAX
ethical monitoring protocol).
# 9\. Other issues
* _Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones?_
We do not make use of other procedures for data management.
# 10\. Summary and Outlook
This deliverable presented the PROTAX consortium’s plan to manage the
production, collection and processing of its research data and scientific
publications.
This deliverable will be reviewed by the consortium in the final year of the
project, and an updated version will be generated in April 2021 (month 36).
Updates will be made to the plan to take into account new data, changes in
consortium policies, and changes in consortium composition and external
factors (e.g., new consortium members joining or old members leaving).
Each project partner handling and responsible for data collected, stored or
used in PROTAX will ensure compliance with the strategy outlined in this
document.
Annex
**Participant Information Sheet**
This project is funded by the EU. This publication has been produced with the
financial support of the European Union's H2020 research and innovation
programme under grant agreement No 787098. The contents of this publication
are the sole responsibility of the authors and can in no way be taken to
reflect the views of the European Commission.

©PROTAX, 2018 - 2021

This work is licensed under a Creative Commons Attribution 4.0 International
License
1462_PRECRIME_787703.md
## Contents
**1 Data Set Description**

**2 Making Data Findable**

**3 Making Data Openly Accessible**

**4 Making Data Interoperable**

**5 Increase Data Re-use**

**6 Allocation of Resources and Data Security**

**A MIT License**
# Data Set Description
The project will produce software prototypes and will collect experimental
data when applying the testing techniques under investigation to existing
software systems. Hence, the following types of data items will be managed in
the project:
1. **software prototypes** , implementing the testing techniques investigated in the project;
2. **systems under test** , mostly open source systems, but possibly also closed source industrial systems;
3. **train and test sets** , used to train the systems under test, as well as the **trained models** ;
4. **test scenarios** generated for the systems under test;
5. **metrics** collected for the systems under test to quantify the effectiveness and efficiency of the proposed testing techniques.
Items of type (2) may be available as open source projects from public
software repositories, such as GitHub, or may not be publicly available
because of confidentiality restrictions imposed by the owners of such systems.
In both cases, the involved artefacts are not under the control of this
project. If publicly available, the project will reference the public
repositories where they can be obtained. In any case, it is not the project's
responsibility to manage such data.
Similarly, items of type (3) may or may not be publicly available. Usually,
train and test sets are provided with the software systems that use them in
the training/testing phase, so they undergo the same availability restrictions
as the systems under test (see item (2)). Open source systems that need data
for training are usually accompanied by train and test sets, which are stored
in the same repository as the software itself. On the other hand, industrial
systems that need data for training/testing may not come with publicly
accessible data sets. Whenever public data sets are available for
training/testing, the project will reference them explicitly, but managing
these data is likewise not the project's responsibility.
Items of type (1), (4), (5) are produced by the project and their management
is described in the following sections. To satisfy the FAIR principles, this
project intends to create a public repository in GitHub for each research
prototype developed during the project. The repository will store the software
implementing the prototype, as well as the replication package needed to
reproduce the experimental results, including the generated test scenarios and
the collected metrics.
Table 1 summarizes the features of the data managed by the project. The
estimated volume is per experiment. The estimation was obtained based on
comparable experiments conducted in the past.
| **Type** | **Origin** | **Format** | **Volume** |
| --- | --- | --- | --- |
| Software prototype | Precrime | source code (text) | 100MB |
| Systems under test | reused (third party) | source code | N/A |
| Train and test sets | reused (third party) | system dependent | N/A |
| Test scenarios | Precrime | source code (text) | 100MB |
| Metrics | Precrime | CSV (text) | 10MB |
Table 1: Main features of the managed data; volumes estimated per
system/experiment
# Making Data Findable
The main source of metadata information for Precrime's software prototypes,
test scenarios and metrics will be the README file created inside the
corresponding GitHub repository, where the data are stored. The README file
will be in markdown (md) format and will include the following information:
* acronym and name of the prototype tool;
* instructions for tool users, including (1) prerequisites, (2) installation instructions, (3) execution instructions;
* versioning, authorship and licensing information;
* steps for the reproduction of the experimental results, including instructions for the re-execution of test scenarios;
* description of the metrics collected to assess the performance of the tool in the experiments.
# Making Data Openly Accessible
All data generated by the Precrime project will be made openly accessible by
storing them in Precrime's GitHub repositories. GitHub
(https://github.com) is a commercial hosting service mostly used to store
source code. It offers free accounts, often used to host open source projects,
and it supports distributed version control, source code management, access
control, issue tracking, feature requests and wiki pages. In the software
engineering research community it is widely used for the permanent storage of
research prototypes and of experimental packages. Hence, it is the ideal data
repository to give maximum visibility to the project's outcome.
The data reused from third parties – namely, systems under test and train/test
data sets – are not under the control of the project. If publicly available,
they will be referenced in the README file of the Precrime repository storing
the experiments based on such systems/data. However, we anticipate that
industrial systems and industrial train/test data sets may not be publicly
available. On the other hand, for the validation of the project's outcome it
is quite important to apply the resulting research prototypes both to open
source, publicly available systems and to industrial, possibly closed source,
systems, because the target end users of Precrime's research consist of
software developers, possibly working within commercial software companies.
# Making Data Interoperable
Research prototypes and test scenarios will be published in source code
format. In Precrime we intend to adopt widely used programming languages, such
as Java and Python, for which well-established standards and compilers are
freely available. This ensures maximum interoperability at the code level.
For what concerns the metrics collected in the experiments, we will represent
such data in the CSV format. This format is accepted by most spreadsheet
applications and can be read by many data processing tools and libraries, such
as those available in R and Python.
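As a simple illustration of why CSV eases reuse (the file and column names here are hypothetical, not Precrime's actual schema), such metrics can be consumed with Python's standard library alone:

```python
import csv
from statistics import mean

# Hypothetical metrics file produced by one experiment run.
with open("metrics.csv", newline="") as fh:
    rows = list(csv.DictReader(fh))

# Each row records one test scenario and whether it exposed a failure (0/1).
failure_rate = mean(float(row["failed"]) for row in rows)
print(f"{len(rows)} scenarios, failure rate {failure_rate:.2%}")
```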
# Increase Data Re-use
All the software and data produced by the project will be made available under
the MIT License, so as to ensure ample opportunities for reuse and
modification by other researchers. The MIT License enables other scientists to
copy and modify the licensed software, making it easy to reuse broadly.
Scientists can freely build upon, enhance and reuse the software for any
purpose, with the only key restriction that the software must be attributed to
its creators via inclusion of the license itself in all copies. The MIT
License adopted by Precrime is reported in Appendix A.
BSc and MSc students involved in the Precrime research will be asked to sign a
copyright transfer agreement to let Precrime publish their work in accordance
with the project’s open data policy. In particular, the software prototypes
produced by BSc and MSc students that contribute to the project’s research
will be released under the MIT License as any other software produced by the
project.
# Allocation of Resources and Data Security
GitHub ensures long-term, secure data preservation at no cost. In addition to
such data storage, Precrime performs periodic (weekly) backup of all (private
and public) project data, using the cloud storage device provided by USI.
Hence, project data will be securely stored both on GitHub and on USI’s cloud
storage device, thus ensuring data replication and geographic distribution,
for a time span that can be estimated as at least a few decades past the end
of the project.
# A MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1465_D-NOSES_789315.md
# Introduction
The D-NOSES project, funded under the topic _H2020-SwafS-23-2017_
_Responsible Research and Innovation (RRI) in support of sustainability and
governance, taking account of the international context_ , will reverse the
way in which odour pollution is commonly tackled. It will empower citizens to
become a driving force for change through RRI, citizen science and co-creation
tools to map and measure the problem, and co-design solutions with key
quadruple helix stakeholders.
D-NOSES aims to kickstart a much needed collaborative journey to tackle the
problem of odours at a global scale by developing coordinated local case
studies in 10 European and non-European countries (pilots). Several project
actions will guarantee a high impact and project sustainability. With the aim
of putting odour pollution on the map, the International Odour Observatory
(IOO) will be created to promote engagement and public participation. In the
IOO, all relevant data and information will be gathered, mapped and made
available, granting access to information to allow for the implementation of
Principle 10 of the Rio Declaration. The App OdourCollect will also be used to
gather odour observations from engaged citizens, meaning that citizens will
not only have, for the first time, access to information on odour pollution,
but will become data generators.
All this means that the data will be collected from different sources and
different stakeholders, as described in _Deliverable_ _7.2 Project website,
branding and templates_ , including data collected by citizens.
The results of the D-NOSES project will improve the management of odour
problems, after the validation of the proposed innovative, bottom-up
methodology to monitor, for the first time, the real perception of nuisance in
the impact area of odour emitting activities. The analysis of the results of
each pilot (at least 10 pilots in at least 10 different countries) will be
used to co-create DIY Guidelines for Project Replicability and standard
criteria for future odour regulations at different levels, together with the
Green Paper and the Strategic Roadmap for Governance in odour pollution, which
will pave the way for capacity building and improved governance.
D-NOSES, at its core, is a Citizen Science project. As such, no data or
metadata standards for citizen science have been released yet, although some
initiatives are working on this, such as _Working Group #5 of the Citizen
Science COST Action CA15212_ 4 5 , led by the CSA 3 , or the European Citizen
Science Association (ECSA), a partner of
D-NOSES with which Ibercivis collaborates closely to further develop the above-mentioned standards.
mentioned standards, working on this. These first steps are following open
standards; for example: “ _WG5’s_ _specific objective for the second period
(1.5.2017-30.4.2018) is to contribute to develop an ontology of citizen-
science projects (including a vocabulary of concepts and metadata) to support
data sharing among citizen-science projects. WG5 will coordinate with
activities on data and service interoperability carried out in Europe,
Australia and the USA (e.g., the CSA’s international Data and Metadata Working
Group [http://citizenscience.org/ association/about/working-groups/]), and
will take into account existing standards, namely Open Geospatial Consortium
(OGC) standards (via the OGC Domain Working Group on Citizen Science), ISO/TC
211, W3C standards (semantic sensor network/Linked Data), and existing
GEO/GEOSS semantic interoperability. WG5 will investigate the best format to
publish the ontology.” 6 _
Partial results on how to manage data or metadata in Citizen Science projects
in a FAIR way have been produced by those initiatives, which will be used in
D-NOSES where possible. The outcome of our experience in producing, validating
and managing citizen science data will be reported to the WGs of the Citizen
Science COST Action to contribute and improve the work already done.
3 _https://www.wilsoncenter.org/sites/default/files/wilson_171204_meta_data_f2.pdf_
4 _https://www.cs-eu.net/sites/default/files/media/2018/04/2018.03%20WG%20meeting%20in%20Milan%20%28COST%20Action%20CA%2015212%29%20-%20minutes.pdf_
5 _https://www.cs-eu.net/sites/default/files/media/2018/06/COST-WG5-GenevaDeclaration-Report-2018.pdf_
6 _https://www.cs-eu.net/sites/default/files/media/2018/04/2018.03%20WG%20meeting%20in%20Milan%20%28COST%20Action%20CA%2015212%29%20-%20minutes.pdf_
# 2\. Data Summary
Within the D-NOSES project there will be two main sources of data. The first
one, directly produced by the consortium, will consist mainly of documentation
relevant to methodologies, data analysis, metadata definitions, etc. This type
of data will be released under free licenses whenever possible, such as
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) 7 . This data will
be available on the D-NOSES main web page 8 .
On the other hand, some of the data will be generated by citizens reporting
odour episodes through our app. Right now, a new version of the app is under
development, including a powerful back office for validating the data gathered
in the pilot case studies. In this document we refer to the current (legacy)
version of the app, OdourCollect, which was developed in 2016 after receiving
funding in the context of the _MyGeoss_ _Project - Applications for your
Environment_ , from the Joint Research Centre 9 .
This data is stored in an SQL database and can be downloaded in an anonymized
form under the CC BY-SA 4.0 License. The tables in Annex I describe this dataset.
In addition, the Community Maps platform will enable citizens to map cases
where they are affected by odour issues in their communities, as well as other
information deemed relevant for the different pilots. Community Maps supports
constructing digital representations of physical space through participatory
action. Its map interface provides a way to add new data as well as to edit
and delete existing data. Community Maps is a single-page front-end
application built on top of GeoKey, to which it connects via the public API.
It is able to retrieve and store public and private information that is
visualised on the map. If private information is to be used, OAuth2
authentication is required to authorise the user. At the core of the back-end
is a PostgreSQL relational database system with geospatial capabilities that
stores all information relevant to run the platform.
7 _https://creativecommons.org/licenses/by-sa/4.0/_
8 _dnoses.eu_
9 _http://digitalearthlab.jrc.ec.europa.eu/app/odourcollect_
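As an illustration of that client/API split, a client could retrieve project data from a GeoKey instance over HTTP, attaching an OAuth2 bearer token only when private data is needed. The host name and endpoint path below are assumptions for the sketch, not D-NOSES production values:

```python
import requests  # pip install requests

BASE_URL = "https://geokey.example.org/api"  # hypothetical GeoKey host
TOKEN = None  # set to an OAuth2 access token to read private data

headers = {}
if TOKEN:
    headers["Authorization"] = f"Bearer {TOKEN}"

# List the projects visible to this client (public ones if no token is sent).
resp = requests.get(f"{BASE_URL}/projects/", headers=headers, timeout=30)
resp.raise_for_status()
for project in resp.json():
    print(project.get("id"), project.get("name"))
```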
# 3\. Fair Data
The term **FAIR** was launched at a Lorentz Workshop in 2014, and the
resulting principles were published in 2016 10 . The term FAIR describes a set
of guiding principles to make data **Findable,** **Accessible, Interoperable,
and Reusable** . We will follow the guidelines described below, but we are
aware that, as stated in the _Guidelines on FAIR Data Management in Horizon
2020_ 11 , participating in the ORD Pilot of the H2020 Programme
_“does not necessarily mean opening up all your research data. Rather ORD pilot
follows the principle “as open as possible, as closed as necessary” and
focuses on encouraging sound data management as an essential part of research
best practice”._
The current version of this deliverable reflects the D-NOSES Data Management
Plan as designed at this stage of the project. It has to be taken into account
that we are still in the process of developing some of the project tools, such
as the International Odour Observatory, where Community Maps will be
integrated, and the new version of the App OdourCollect. We will be defining
further issues in relation to data management, both in terms of openness and
data/metadata ontologies, and updates to the Data Management Plan will be
provided as new versions of the current deliverable, always guaranteeing that
the project data is FAIR. The final version of the deliverable is foreseen
before the first Reporting Period, once all the project tools have been
created, are running and have been validated.
**_DATA FINDABLE_ **
The concept of findability refers to the ability of other users to locate
information; it means that we will provide the necessary metadata to help in
the identification of the different datasets generated in each pilot and those
provided by citizens outside these pilots. Following the open-access article
_The FAIR Guiding Principles for scientific data management and stewardship_
12 , published by Mark D. Wilkinson et al. in Scientific Data, we will,
where possible:
* assign a globally unique and persistent identifier to (meta)data
* describe data with rich metadata
* include clearly and explicitly in the metadata the identifier of the data it describes
* register or index (meta)data in a searchable resource

10 _https://www.force11.org/group/fairgroup/fairprinciples_
11 _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
12 _https://www.nature.com/articles/sdata201618#bx2_
When possible, the data will be stored in an SQL database, anonymized and
linked to the web page. It will be openly downloadable in CSV format during
the life of the project. Periodically, anonymized (meta)data will be uploaded
to Zenodo, providing a DOI (Digital Object Identifier) for each dataset
generated. Using DOIs will allow us to _edit/update the record's files after
they have been published_ .
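A minimal sketch of such an anonymized export, assuming a local SQLite copy of the observation table and hypothetical column names (the production system is a server-side SQL database, and the exact anonymisation rules will be set with the new back office):

```python
import csv
import sqlite3

# Hypothetical local copy of the odour-observation table.
conn = sqlite3.connect("odourcollect.db")
rows = conn.execute(
    # Select only non-identifying columns: no user id, name or exact address.
    "SELECT date, odour_type, intensity, latitude, longitude FROM observations"
)

with open("observations_anonymized.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["date", "odour_type", "intensity", "latitude", "longitude"])
    writer.writerows(rows)

conn.close()
# The resulting CSV can then be linked from the web page and deposited on
# Zenodo, which assigns a DOI to each uploaded dataset version.
```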
We will search for other datasets which can be used for the purposes of the
project, such as meteorological data.
**_DATA ACCESSIBILITY_ **
Four main tools will be used to provide access to the project data:
* The project web page (see more details on the structure and the contents on Deliverable 7.2)
* The International Odour Observatory (see more details on the structure and the contents on Deliverable 7.2)
* The OdourCollect mobile App, to generate collaborative odour maps
* The D-NOSES Community Mapping tools, which will integrate odour observations with other relevant project data and make it available online for public access.
As in the previous point, following Wilkinson et al., we will apply the
following rules where possible:

* (meta)data will be retrievable by their identifier using a standardized communication protocol:

○ the protocol is open, free and universally implementable

○ the protocol allows for an authentication and authorization procedure, where
necessary

* metadata will be accessible, even when the data are no longer available.
**_DATA INTEROPERABILITY_ **
As previously stated, D-NOSES' biggest challenge in relation to data management
is that data and/or metadata standards have not yet been defined for Citizen
Science projects. However, we will follow the partial results that have come
out of the above-mentioned Working Groups of the Citizen Science COST Action.
In particular, within D-NOSES, when possible:
* (meta)data will use a formal, accessible, shared, and broadly applicable language for knowledge representation.
* (meta)data will use vocabularies that follow FAIR principles
* (meta)data will include qualified references to other (meta)data
**_DATA RE-USE_ **
On a case-by-case basis, it will be agreed among all consortium partners
whether the data produced by the consortium and/or data produced by the
engaged citizens will be licensed under Creative Commons International CC BY
4.0, with no embargo, to enable re-use. Exceptions may occur in some of the
pilots in relation to specific requirements of the different quadruple-helix
stakeholders in each country. In those cases, other re-use licenses will be
adopted to fulfil all requirements. In particular, D-NOSES will follow these
guidelines, when possible:
* (meta)data will be richly described with a plurality of accurate and relevant attributes
* (meta)data will be released with a clear and accessible data usage license
* (meta)data will be associated with detailed provenance
* (meta)data will meet domain-relevant community standards
# 4\. Allocation of Resources and Data Security
The consortium will use Ibercivis servers to store the data in an SQL database
in a FAIR way. Regarding Community Maps/GeoKey, data will be collected and
stored on Mapping for Change servers, from where they will be pushed to the
Ibercivis servers using the GeoKey API. When required, data controller/data
processing agreements will be established.

The data obtained during the project will, when possible, also be uploaded in
anonymized form to the free-of-charge Zenodo repository. The handling of the
local servers and the Zenodo repository, as well as all data management issues
related to the project, falls under the responsibility of the Coordinator.
Data preservation is guaranteed for 15 years through unfunded effort by
Ibercivis.
Francisco Sanz, the Executive Director of Ibercivis, is responsible for data
management within the D-NOSES project, specifically for this deliverable D1.6,
and also for the associated ethics deliverables D8.1 (informed consent
procedures for the identification and recruitment of research participants)
and D8.2 (collection and processing of personal data). He will also take care
of the revision of this document before M15 (v1.1) and M36 (v1.2). The PI of
each partner will have the responsibility for implementing the data management
plan in relation to the project actions. Each D-NOSES partner shall be
responsible for following the policies described in this DMP.
The data will be stored on Ibercivis Foundation's servers, on hosts with a
RAID 1 hard disk system and daily backups. This guards its conservation
against any eventuality.
# 5\. Ethical Aspects
The D-NOSES consortium further confirms that each partner will check with
their national legislation/practice and their local ethics committee, which
provide guidelines on data protection and privacy issues in terms of both
data protection and research procedures, in relation to any of the proposed
public engagement and potential volunteer research activities. Any procedures
for electronic data protection and privacy will conform to Directive (EU)
2016/680 and Regulation (EU) 2016/679 on the protection of personal data, and
their enactments in national legislation.
Ethical approval for studies with volunteer participants (such as correlation
of public feedback) will be sought from the University of Zaragoza Ethics
Committees in line with institutional procedures at ECSA or UCL (the partners
with extensive experience in volunteer research). There will be:
* No collection of data on a citizen without permission.
* Information will only be used for the purposes covered by agreement, and will not be retained except as required for these purposes.
* Information will not be made public or provided to third parties without explicit permission.
* Contractual and technical controls will be applied to prevent information becoming inadvertently available to third parties.
Informed consent will be obtained from any volunteer, especially those
participating in WP5 with the provision of on-line forms supported by the
necessary information for the individual to make a voluntary informed decision
about whether or not to participate in any of the evaluation feedback
sessions. Any electronic information collected and mined will be anonymised to
prevent the identification of individual subjects unless express permission is
granted.
More details on the Ethics requirements in relation to informed consent
procedures and protection of personal data will be provided in deliverables
8.1 and 8.2.
# 6\. Other
Each partner will provide the Project Coordinator with copies of the opinion
or confirmation by the competent Institutional Data Protection Officer and/or
the authorisation or notification by the National Data Protection Authority
(whichever applies according to the Data Protection Directive and the national
law). As highlighted in the JRC document _Survey_ _report: data management in
Citizen Science Projects,_ we will pay attention not only to legal aspects in
the different countries but also to cultural aspects. We will also follow the
legislation on personal data in the GDPR (2016/679). Two deliverables - D8.1
and D8.2 - will cover all aspects related to the GDPR.
# 7\. Definitions, acronyms and abbreviations
**CSV: C** omma **S** eparated **V** alues is a text file format that uses a
comma to separate values
**CSA: C** itizen **S** cience **A** ssociation
**DMP:** Data Management Plan
**D-NOSES:** Distributed Network for Odour Sensing, Empowerment and
Sustainability
**DOI:** Digital Object Identifier is a persistent identifier used to uniquely
identify objects, standardized by the ISO
**ECSA:** European Citizen Science Association
**FAIR:** Research data that is findable, accessible, interoperable and re-
usable. These principles precede implementation choices and do not
necessarily suggest any specific technology, standard, or implementation-
solution.
**JRC:** Joint Research Centre
**Metadata:** data that provides information about other data. Three types of
metadata can be distinguished, including descriptive metadata, structural
metadata and administrative metadata.
**OGC:** Open Geospatial Consortium
**Open data:** Research data that can be freely used, re-used and
redistributed by anyone for any purpose. Open data is free of restrictions
from copyright, patents or other mechanisms of control.
**PPSR_CORE** : core metadata standard for Citizen Science and **P** ublic
**P** articipation in **S** cientific **R** esearch
**RRI:** Responsible research and Innovation
**SQL:** **S** tructured **Q** uery **L** anguage is a domain-
specific language used in programming and designed for managing data held in
a relational database management system
**W3C:** World Wide Web Consortium
**WP:** Work package
1466_VIPRISCAR_790440.md
# 1\. INTRODUCTION
This document describes the initial **Data Management Plan** (DMP), as
Deliverable 8.4 on Month 6, customized for the VIPRISCAR project, funded by
the BBI-JU (The Bio-Based Industries Joint Undertaking) under the Grant
Agreement (GA) No. 790440.
The purpose of this DMP is to ensure that the data generated and collected in
the VIPRISCAR project follow the **FAIR** data management policy, meaning
making data findable, accessible, interoperable, and reusable. According to
the guidelines provided by the EU Horizon 2020 programmes (European Commission,
2018), the following information is included in this DMP:
* Methods to handle the research data during and after the end of the project
* Descriptions of the datasets that will be collected, processed, and/or generated, such as data type, format, volume, source, etc.
* Methodologies and standards that will be adopted for the data management
* Level of accessibility/confidentiality of the data
* Methods to curate and preserve the data during and after the end of the project
Nevertheless, some important remarks should be noted. The DMP is intended as a
tool to help the project maintain good data management practice. In addition,
according to Article 29.3 of the GA, the open access to research data
obligation does not apply to the VIPRISCAR project, meaning the research data
collected and/or generated in the project does not have to be made openly
accessible. Hence, the rules to apply in this case are those of the IPR
strategy and exploitation plan. Research data dissemination shall not hinder
the ability of the partners to file for a patent. More details will be
provided in the first version of the exploitation plan, deliverable D8.7, due
Month 6.
# 2\. DATA SUMMARY
## 2.1 Purpose of Data Generation and Collection
The purpose of data generation and collection in the VIPRISCAR project is to
achieve the objectives of the project: improve the manufacturing process of
isosorbide bis(methyl carbonate) (IBMC) from the current technology readiness
level (TRL) 3 to TRL 5 and provide proof-of-principle for the major target
IBMC applications: coatings, adhesives, and medical catheters.
## 2.2 Data Generation and Collection
The majority of the datasets will be generated in work packages (WP) 2 to 7
from the experiments conducted throughout the project lifetime. Descriptions
of the datasets are categorized into both qualitative and quantitative aspects
(as shown in Table 1). A total of 21 datasets have been identified at the
current stage. The information has been collected via questionnaires
distributed to each partner and may be updated in future versions of the DMP
(D8.5, due M24; D8.6, due M36).
#### TABLE 1 DATASET INFORMATION TEMPLATE
<table>
<tr>
<th>
**Work Package**
</th>
<th>
Which WP and deliverable are this dataset related to
</th> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
The name of the dataset should be easily to search and find
</td> </tr>
<tr>
<td>
**Dataset Description**
</td>
<td>
Brief description of the dataset
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
The lead partners responsible for the dataset generation/collection
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
The purpose of the data collection/generation and its relation to the
objectives of the project
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Types of data could be report, paper, interview, expert or organization
contact details, video, audio, presentation, or note
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Data formats could be XLSX, DOC, PDF, PPT, JPEG, OPJ, TIFF
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
The size of the dataset (units: GB/MB) and the number of files
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
The origin of the data
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
Which project participant(s) own the intellectual property right (IPR)
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Identification if any existing data being reused and how they are used
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
To whom the data may be useful
</td> </tr>
<tr>
<td>
**DOI (if known)**
</td>
<td>
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
The keywords associated with the dataset to make it easier to search and find
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
To keep track of changes to the dataset
</td> </tr> </table>
#### TABLE 2 DATASETS INFORMATION FOR WP1
<table>
<tr>
<th>
**Work Package 1**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP1-9, all deliverables
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Deliverables from work package one to nine
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset includes all the deliverable reports from work package one to nine
required in the GA
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA and all the lead partners for each deliverable
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To ensure proper project implementation and to document the results in a proper manner
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Reports
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☒ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: Approx. 40-50
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Partners contribution
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
Involved partners who write the report
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
VIPRISCAR consortium and public if the deliverables are openly accessible
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Deposit location**
</td>
<td>
Openly accessible data will be deposited on the project website. Confidential
report will be deposited in the project intranet.
</td> </tr> </table>
**TABLE 3 DATASETS INFORMATION FOR WP2**
<table>
<tr>
<th>
**Work Package 2**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 2 , Deliverable D2.1 and D2.2
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP2_IBMC process development and validation at lab scale
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will contain data about the conditions of reactions carried out at
TECNALIA for IBMC process development. A complete characterization of products
will also be reported.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To provide B4P and Exergy with enough data for upscaling and techno-economic
analyses of the IBMC process, respectively
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Conditions of reaction
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: To be defined
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Lab experimentation in TECNALIA
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☒ No ☐
These are data obtained before the VIPRISCAR application and used for filing
the patents granted through 2018. They will be used as a starting point for
reaction improvement in WP2.
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
B4P, Exergy
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
IBMC
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 2 , Deliverable D2.3 Process simulation and preliminary up scaling report
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Heat and mass balance; process flow diagram; equipment list
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
1. Heat and mass balance: a document (excel) contains all stream information, including flowrate, temperature, pressure, composition, and physical properties
2. Process flow diagram: a diagram shows all the unit operations in the integrated plant, and the main pipe connections;
3. Equipment list: list of equipment used in the process and the essential equipment information
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
EXERGY
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To complete the deliverable in WP2
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Quantitative data, diagram, list
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: at least 4
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Simulation software, information from partners
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
EXERGY
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All technical consortiums
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Simulation
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr> </table>
**TABLE 4 DATASETS INFORMATION FOR WP3**
<table>
<tr>
<th>
**Work Package 3**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 3 , Deliverable D3.3 Plant up-scaling simulation to industrial scale
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Heat and mass balance; process flow diagram; equipment list; equipment sizing
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
1. Heat and mass balance: a document (excel) contains all stream information, including flowrate, temperature, pressure, composition, and physical properties.
2. Process flow diagram: a diagram shows all the unit operations in the integrated plant, and the main pipe connections.
3. Equipment list: list of equipment used in the process and the essential equipment information.
4. Equipment sizing: sizing calculations of the scaled-up equipment
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
EXERGY
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To complete the deliverable in WP3
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Quantitative data, diagram, list
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: at least 5
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Simulation software, manual calculations, information from partners
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
EXERGY
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All technical consortiums
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Simulation
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr> </table>
**TABLE 5 DATASETS INFORMATION FOR WP4**
<table>
<tr>
<th>
**Work Package 4**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 4 , Deliverable D4.1, D4.2, D4.3
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Vipriscar_WP4
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset consists of a series of hard-copy notebooks with progressive
numbering, referencing files that contain relevant data. It contains synthetic
protocols, formulations and testing conditions/test results.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
AEP
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
The dataset will be used for internal purposes. It will contain design and
synthesis data of new materials from IBMC to be used in coatings and
process/performance data of the obtained coatings.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Description of lab procedures for synthesis and testing, test results,
chemical structures and properties of the materials
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
Other ☒ Hard copy laboratory notebook(s)
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: 100 GB☐ MB☒ Number of files: 10-20
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
The data is generated internally, through design and lab testing
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
AEP POLYMERS
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☒ No ☐
We will use QC and analytical data provided by TECNALIA regarding the received
IBMC samples as a basis for our processes.
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
AEP, GAIKER, TECNALIA, JOWAT, LEITAT
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
WP4, coatings, PUD, NIPU
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 4 , Deliverable D4.1
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
IBMC hydroxyl-polycarbonates and waterborne polyurethane dispersions
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
Procedures for synthesizing IBMC derived hydroxyl-oligocarbonates and
properties of obtained products. Procedures for producing PUDs and properties
of obtained dispersions.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
GAIKER
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To define the fabrication procedure to obtain IBMC derived prepolymers and to
develop PUDs with IBMC for further preparation of coatings.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Report with associated characterization results.
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☐ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐ Other ☐ Click or tap here to
enter text.
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: 10 GB☐ MB☒ Number of files: < 10
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Experimental work
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
GAIKER
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☒No ☐
As a reference for defining experimental conditions and characterization
methods, and as comparative data to define chemical structures and properties.
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
Chemical industry; Manufacturers of coatings/paints/adhesives/sealants;
Scientific researchers
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Isosorbide bis(Methyl Carbonate), bio-based molecule, polycarbonate diol, bio-
based PUDs, bio-based polyurethane
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 4 , Deliverable D4.2
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
IBMC based coatings
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
Procedures for producing IBMC based coatings, characterization of properties
and comparison to reference examples.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
AEP POLYMERS
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To develop coatings from IBMC based PUDs
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Report with associated characterization results.
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☐ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐ Other ☐ Click or tap here to
enter text.
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: 10 GB☐ MB☒ Number of files: < 10
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Experimental work
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
GAIKER
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☒ No ☐
As a reference for defining experimental conditions and characterization
methods, and as comparative data to define chemical structures and properties.
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
Manufacturers of coatings/paints/adhesives/sealants; Scientific researchers
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Isosorbide bis(Methyl Carbonate), bio-based PUDs, bio-based polyurethane, bio-
coating
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒No ☐
</td> </tr> </table>
**TABLE 6 DATASETS INFORMATION FOR WP5**
<table>
<tr>
<th>
**Work Package 5**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 5 , Deliverable D5.1
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP5.1 _Adhesives application proof of principle
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will contain data collection about the selection of the raw
materials and the definition of adhesives applications
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To collect enough data to develop NIPUs (adhesives) from IBMC
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Materials and applications specifications
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: To be defined
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Literature review
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
TECNALIA, JOWAT
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
NIPUs, adhesives
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 5 , Deliverable D5.3-D5.6
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP5.2 NIPUs-based adhesives
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will contain data about the reaction conditions used at TECNALIA
for the development of the NIPUs-based adhesive process. A complete
characterization of the products will also be reported.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To determine the proper conditions for developing NIPUs-based adhesives
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Conditions of reaction and characterization
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: To be defined
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Lab experimentation in TECNALIA
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
JOWAT
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
NIPUs-based adhesives
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr> </table>
**TABLE 7 DATASETS INFORMATION FOR WP6**
<table>
<tr>
<th>
**Work Package 6**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 6 , Deliverable D6.1
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP6- Synthesis of thermoplastic IBMC-based NIPUs
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will contain data about experiments related to the synthesis,
biofunctionalization and characterization of IBMC-based NIPUs.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA, CIKAUTXO
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To provide CIKAUTXO with a biofunctionalized IBMC-based NIPU for processing
into a catheter
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Experiments conditions and results
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: To be defined
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Lab experimentation in TECNALIA
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
CIKAUTXO
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Synthesis, biofunctionalization, toxicity, IBMC, antimicrobial, antithrombotic
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 6 , Deliverable D6.3
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP6_Biocompatibility and bio functionality of the final prototype
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will contain data about the results of the biocompatibility and
biofunctionality evaluations of the final catheter prototype.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To demonstrate the usefulness of IBMC-based NIPUs in catheter production
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Results of experiments
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: To Be Defined
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Lab experimentation carried out by TECNALIA
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
CIKAUTXO
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Biocompatibility, biofunctionality, biocidal, antithrombotic
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr> </table>
**TABLE 8 DATASETS INFORMATION FOR WP7**
<table>
<tr>
<th>
**Work Package 7**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 7 , Deliverable D7.7
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP7_ Health and safety study
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will contain data about the results of a toxicity study on IBMC
and the most promising final product. Results of the bibliographic search on
regulatory issues and standards related to environment, health and safety
concerning IBMC production will also be included.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA, VERTECH
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To identify and evaluate health and safety issues related to VIPRISCAR project
technologies and products to prevent, correct and control potential risks, if
necessary.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Results of experiments and results of bibliographic data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: TBD
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Lab experimentation carried out by TECNALIA and bibliographic research carried
out by Vertech with the support of TECNALIA. Other partners´ contributions.
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☒ No ☐
Bibliographic data establishing the state of the art on all the mentioned issues
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
The whole consortium
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Toxicity, REACH, health, safety, regulation, standard, IBMC
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 7 , Deliverable D7.2
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
LCC data collection
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset**
**Description**
</th>
<th>
All partners will have to fill in the data collection template with the CAPEX
(investment in machinery, external processes, infrastructure), OPEX (specific
costs of waste material, process energy, maintenance, labor force, insurance,
taxes, etc.) and the incomes of the system (the specific price of the main
product and by-products of the process).
</th> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
All partners
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
The data collection will determine cost-effectiveness of the proposed
technologies compared to currently used techniques.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Quantitative data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☐ PDF ☐ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: <100 GB☐ MB☒ Number of files: <15
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
The data comes from the different demo sites. The partners will fill in the
data collection table and send it back to Vertech Group.
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
Technology owners.
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All involved partners.
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
LCC, economic feasibility, economic validation
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 7 , Deliverable 7.3
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
LCA data collection
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
Input and output of all partners processes
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
All partners will have to generate the information and Vertech will collect
it.
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
The data will be used to comprehensively characterize environmental impacts
through the whole life cycle thanks to an LCA.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Quantitative data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☐ PDF ☐ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
Other ☒ CSV
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: <100 GB☐ MB☒ Number of files: <15
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
The data comes from the different demo sites. The partners will fill in the
data collection table and send it back to Vertech Group.
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
Technology owners.
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All involved partners.
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
LCA, environmental impacts, sustainability analysis.
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 7 , Deliverable D7.1 Technical evaluation of VIPRISCAR concepts
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Heat and mass balance
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
This document (Excel) contains all stream information, including flow rate,
temperature, pressure, composition, and physical properties.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
EXERGY
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To complete the deliverable in WP7
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Quantitative data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: at least 5
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Simulation software, manual calculations, information from partners
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
EXERGY
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All technical partners of the consortium
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Simulation
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr> </table>
**TABLE 9 DATASETS INFORMATION FOR WP8**
<table>
<tr>
<th>
**Work Package 8**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 8 , Deliverable D8.11-D8.14
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Dissemination and communication plan
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The plan will contain data related to dissemination and communication issues
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To manage the issues related to dissemination and communication
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Dissemination material
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: at least 10
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Partners contribution
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All audience
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Publications, dissemination, communication
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Deposit location**
</td>
<td>
Through the VIPRISCAR website
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 8 , Deliverable D8.15
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Website
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
Content of the website
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To disseminate the VIPRISCAR project
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Dissemination material
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☒ DOC ☒ PDF ☒ PPT ☒ JPEG ☒ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Partners contribution
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
All audience
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Website, dissemination
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 8 , Deliverable 8.4
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP8_D8.4_Data Management Plan Questionnaires From the Consortium
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
This dataset includes all the questionnaires answered by each partner in the
consortium about the datasets that will be generated within the project
lifetime and how they will be managed during and after the end of the project.
</td> </tr> </table>
<table>
<tr>
<th>
**Responsible partners**
</th>
<th>
All partners are responsible for filling out the questionnaire, which is
designed, distributed, and collected by Vertech
</th> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To produce the data management plan tailor-made for the VIPRISCAR project
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Questionnaires
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☒ PDF ☐ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: >10 GB☐ MB☒ Number of files: Approx. 30
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Project partners
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
Partners who fill out the questionnaire
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
Whole consortium and related stakeholders
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Data management plan, FAIR, findability, accessibility, interoperability,
reusability, data security
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 8 , Deliverable 8.7
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
WP8_D8.7_Exploitation Plan Questionnaires From the Consortium
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
This dataset includes all the questionnaires answered by each partner in the
consortium, covering information about the KERs, IPR strategy and protection,
market analysis, and exploitation.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
All partners are responsible for filling out the questionnaire, which is
designed, distributed, and collected by Vertech
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To produce the exploitation plan tailor-made for the VIPRISCAR project
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Questionnaires
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☒ PDF ☐ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: >10 GB☐ MB☒ Number of files: Approx. 30
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Project partners
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
Partners who fill out the questionnaire
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
Partners involved in each commercial KER
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Exploitable results, exploitation route, intellectual property
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 8 , Deliverable D8.5
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
VIPRISCAR Articles
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
Articles in technical journals.
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To increase the visibility of the project and disseminate outstanding results
related to IBMC-based PUDs and coatings
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Technical paper.
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☐ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: 5 GB☐ MB☒ Number of files: 2
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Experimental work and reporting
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
GAIKER
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☒ No ☐
As a reference for defining experimental conditions and characterization
methods, and as comparative data for defining chemical structures and properties.
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
Chemical industry; Manufacturers of coatings/paints/adhesives/sealants;
Scientific researchers
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Isosorbide bis(Methyl Carbonate), bio-based PUDs, bio-based polyurethanes,
biocoatings
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☐ No ☒
</td> </tr> </table>
**TABLE 10 DATASETS INFORMATION FOR WP9**
<table>
<tr>
<th>
**Work Package 9**
</th>
<th>
</th> </tr>
<tr>
<td>
**Work Package**
</td>
<td>
WP 9 , Deliverable D9.1 and D9.2
</td> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Ethics requirements
</td> </tr>
<tr>
<td>
**Dataset**
**Description**
</td>
<td>
The dataset will collect the ethics requirements that the project must comply with
</td> </tr>
<tr>
<td>
**Responsible partners**
</td>
<td>
TECNALIA
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
To comply with the ethics requirements
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
Authorization of compliance with ethical requirements
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XLSX ☐ DOC ☒ PDF ☒ PPT ☐ JPEG ☐ OPJ ☐ TIFF ☐
</td> </tr>
<tr>
<td>
**Volume**
</td>
<td>
Expected Size: GB☐ MB☒ Number of files: 2
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
Partners contribution
</td> </tr>
<tr>
<td>
**IPR Owner**
</td>
<td>
</td> </tr>
<tr>
<td>
**Re-use existing Data**
</td>
<td>
Yes ☐ No ☒
</td> </tr>
<tr>
<td>
**Beneficiary**
</td>
<td>
BBI-JU
</td> </tr>
<tr>
<td>
**Keywords**
</td>
<td>
Ethics
</td> </tr>
<tr>
<td>
**Version number**
</td>
<td>
Yes ☒ No ☐
</td> </tr> </table>
# 3\. FAIR DATA
The VIPRISCAR project is committed to making the datasets collected or
generated in the project comply with the European Commission’s FAIR data
policy – “Findable, Accessible, Interoperable, Reusable”.
## 3.1 Findability
For published articles, the corresponding journal will assign a Digital Object
Identifier (DOI) as a unique and permanent identification code. In other
cases, the identification mechanism will depend on the repository that the
VIPRISCAR project adopts, if any.
Common naming conventions have been set out in D1.1 Quality Assurance Plan
prepared by project participant TECNALIA for all files stored on the project
archive.
Naming conventions:
VIPRISCAR_<DX.Y/WPX/TX.Y>_<Title>_ <Version>_<Date>.filetype
Where:
<DX.Y> Deliverable number, e.g. “D2.3” for Deliverable 2.3.
<WPX> Work Package identifier, e.g. “WP1” or “WP2”.
<TX.Y> Task number, e.g. “T3.1” for Task 3.1.
<Title> Short description of the document.
<Version> Version identifier, e.g. “v1”.
<Date> Date in “yyyymmdd” format.
Example:
VIPRISCAR_D1.1_Quality Assurance Plan (I)_v1_20180208.docx
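To illustrate the convention, the following minimal Python sketch (not part of
the project tooling; how strictly titles are constrained is our own assumption
for this example) checks whether a file name matches the scheme above:

```python
import re

# Minimal sketch: a regular expression capturing the naming convention above.
NAME_PATTERN = re.compile(
    r"^VIPRISCAR_"
    r"(D\d+\.\d+|WP\d+|T\d+\.\d+)_"  # DX.Y, WPX or TX.Y identifier
    r"[^_]+_"                        # short title (no underscores)
    r"v\d+_"                         # version identifier, e.g. v1
    r"\d{8}"                         # date in yyyymmdd format
    r"\.\w+$"                        # file extension
)

def is_valid_name(filename: str) -> bool:
    """Return True if the file name follows the project convention."""
    return NAME_PATTERN.match(filename) is not None

# The example above passes the check:
print(is_valid_name("VIPRISCAR_D1.1_Quality Assurance Plan (I)_v1_20180208.docx"))  # True
```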
Search keywords of each dataset are provided by the project participants who
generate the datasets to optimize the possibilities for reuse and are noted in
the dataset information table as shown in section 2.2 above.
Other standards used by individual partners to identify datasets, if any, are
listed below:
#### TABLE 11 STANDARDS OF DATASET IDENTIFICATION BY EACH PARTNER
<table>
<tr>
<th>
**Partner Name**
</th>
<th>
**Standards**
</th> </tr>
<tr>
<td>
JOWAT
</td>
<td>
Analysis-ID, Date, person, batch number
</td> </tr> </table>
## 3.2 Accessibility
According to Article 29.1 of the GA, each beneficiary must disseminate the
project results as soon as possible by disclosing them to the public through
appropriate means, unless doing so would infringe its legitimate interests.
Currently, the VIPRISCAR project is considering using Microsoft’s SharePoint
as an intranet/repository to deposit project-related data and documentation.
Key features include ease of managing, sharing and collaborating on files
anywhere, a wide-ranging preview function for more than 270 common file types,
support for team communication and engagement, and automation of repetitive
tasks (Microsoft, 2018).
For scientific publications, each partner must take measures to ensure open
access, meaning providing online access for any user without additional charge
to all peer-reviewed scientific publications relating to its results, in
accordance with Article 29.2 of the GA. The two main publishing approaches to
consider are green and gold open access (Newcastle University, 2018)
(Springer, 2018).
**Green open access:** Also referred to as self-archiving. Authors deposit the
manuscript in their institutional repository or a subject repository with
immediate or delayed open access, making the publication freely accessible to
all users. The deposited version of the publication (usually the final version
accepted for publication) and the terms and conditions of the open access
(e.g. embargo period) depend on the funder or publisher.
**Gold open access:** The final version of the manuscript is permanently and
freely accessible to all users via the publisher’s website right after
publication, without any embargo period. Authors retain the copyright without
most of the permission restrictions associated with green open access.
Research data of the VIPRISCAR project, as mentioned in the previous section,
are not required to be made open access. As one of the results of the
VIPRISCAR project, research data will be owned by the project participants who
generate them, according to Article 26 of the GA. The project coordinator,
together with the responsible partners, will determine how the data collected
and/or generated in the project will be made openly available. Relevant
information to provide in future versions of the DMP (D8.5, due M24; D8.6, due
M36) may include, but is not limited to: the channels for depositing the data
(e.g. repository, website, scientific journals); methods or software required
to access the data, if any; restrictions on use, if any; embargo periods, if
any; the procedures for providing access; etc. Certain datasets may not be
shared, or would be shared under restrictions, on ethical, confidentiality
(Article 36), security-related (Article 37), privacy-related (Article 39), or
IPR and commercial/industrial exploitation (Article 27) grounds. In such
cases, the reasons for the data accessibility constraints will be explained.
Below is the list of the datasets that have been identified as confidential in
order to protect the IP of the results and ensure successful exploitation
after the end of the project.
**TABLE 12 CONFIDENTIAL DATASETS**
<table>
<tr>
<th>
**WP**
</th>
<th>
**Datasets**
</th>
<th>
**Accessibility within the**
**Consortium**
</th> </tr>
<tr>
<td>
WP1-9
</td>
<td>
All deliverable reports except D1.1-D1.4 Quality
Assurance Plan, D1.5-D1.8 Project Management Plan, D7.8-D7.10 European and
local legal and non-legal limitations, barriers and standards for VIPRISCAR
products, D8.4-D8.6 Data Management Plan, D8.11-D8.14 Dissemination and
communication plan, D8.15 Project Website
</td>
<td>
Confidential, only for members of the consortium (including the
Commission Services)
</td> </tr>
<tr>
<td>
WP2-9
</td>
<td>
All data generated within the project
</td>
<td>
Accessible to the partners within the project
</td> </tr>
<tr>
<td>
WP4
</td>
<td>
VIPRISCAR_WP4
</td>
<td>
The consortium will be given access to select portions of the dataset, mainly
concerning test results.
</td> </tr> </table>
Important remark: any partner intending to disseminate its results must notify
the other partners, with sufficient information on the dissemination content,
at least **45** days before the dissemination. Other partners that do not
agree may object within **30** days of receiving the notification and should
provide proper justification explaining why their legitimate interests would
be significantly infringed. In that case, appropriate steps to resolve the
conflict must be taken; otherwise, the dissemination cannot proceed.
## 3.3 Interoperability
The VIPRISCAR project aims to collect and document the data in a standardized
way to ensure the datasets are easy to understand, reuse and interoperate for
the different parties interested in utilizing them. Standard technical
terminology will also be used to facilitate inter-disciplinary
interoperability.
## 3.4 Reusability
Data reusability refers to how easily the data can be re-used for further
research or other purposes. In the VIPRISCAR project, the datasets are highly
reusable in that normally no special methods or software are required to
re-use them. The period during which the research data will remain available
for re-use is not yet defined.
The procedures to ensure the highest data quality and validity include
internal reviews, as well as peer review where articles or documents are
published in scientific journals. Other specific procedures adopted by
partners are listed below:
#### TABLE 13 SPECIFIC QUALITY CONTROL PROCEDURES ADOPTED BY PARTNERS
<table>
<tr>
<th>
**Partner Name**
</th>
<th>
**Standards**
</th> </tr>
<tr>
<td>
JOWAT
</td>
<td>
Good Laboratory Practices
</td> </tr>
<tr>
<td>
AEP
</td>
<td>
International standards (ASTM, ISO, UL94 and others) and written internal
procedures and testing methods
</td> </tr> </table>
Additionally, quality control of data at the different stages of data
collection, data entry or digitalization, and data checking is crucial in the
VIPRISCAR project, since many research experiments will be conducted
throughout the lifetime of the project. The following measures, drawn from the
Good Practice Note on Research Data Management (CGIAR, 2017), are offered as
references for the consortium partners in order to ensure data quality; a
minimal code sketch of such checks is given after the list.

Stage 1: Data collection

* Calibrate the instruments to ensure measurement accuracy
* Take multiple measurements, observations, or samples to ensure data reliability
* Double-check the validity of the records with adequate experts in the relevant domains
* Unify standardized methods and standard operating procedures

Stage 2: Data entry or digitalization

* Set out validation rules in data entry software
* Use controlled vocabularies, ontologies, code lists and choice lists to minimize the probability of human error
* Follow the naming conventions for variables, including names, dates and versions, to avoid confusion

Stage 3: Data checking

* Double-check the coding accuracy and out-of-range values
* Check data completeness and that appropriate naming conventions were used
* Choose random samples to verify consistency with the original data
* Conduct statistical analysis to detect errors or abnormal values
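The following minimal Python sketch illustrates the kind of Stage 2/3 checks
described above. The field names, the controlled vocabulary and the valid
range are hypothetical placeholders for the example, not project-mandated
values:

```python
# Hypothetical controlled vocabulary and valid range for the sketch.
ALLOWED_METHODS = {"FTIR", "NMR", "GPC", "DSC", "TGA"}
TEMPERATURE_RANGE_C = (-80.0, 400.0)

def check_record(record: dict) -> list:
    """Return a list of quality-control issues found in one data record."""
    issues = []
    # Controlled-vocabulary check (Stage 2).
    if record.get("method") not in ALLOWED_METHODS:
        issues.append(f"method '{record.get('method')}' not in controlled vocabulary")
    # Out-of-range check (Stage 3).
    t = record.get("temperature_c")
    if t is None or not (TEMPERATURE_RANGE_C[0] <= t <= TEMPERATURE_RANGE_C[1]):
        issues.append(f"temperature {t} out of range {TEMPERATURE_RANGE_C}")
    # Completeness check (Stage 3).
    if not record.get("operator"):
        issues.append("missing operator name (completeness check)")
    return issues

# One valid and one faulty record:
print(check_record({"method": "FTIR", "temperature_c": 25.0, "operator": "AB"}))  # []
print(check_record({"method": "XYZ", "temperature_c": 999.0, "operator": ""}))
```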
# 4\. DATA SECURITY
Currently, the VIPRISCAR project is considering using Microsoft’s SharePoint
as the intranet/repository to manage, share, and collaborate on the data and
documents related to the project. Three levels of configuration, balancing
security protection against ease of collaboration, are recommended based on
the confidentiality level of the data and documents: baseline, sensitive, and
highly confidential (as shown in Figure 2) (Microsoft, 2018). More details
will be provided in future versions of the DMP if SharePoint is chosen.
Meanwhile, most of the consortium partners have their own provisions in place
for data security within organizations (as listed in the Table 14 below).
### TABLE 14 DATA SECURITY PROVISIONS WITHIN PARTNER'S ORGANIZATION
<table>
<tr>
<th>
**Partner Name**
</th>
<th>
**Data Security Provisions**
</th> </tr>
<tr>
<td>
TECNALIA
</td>
<td>
Access controls: Every worker in TECNALIA has his/her own password-protected
user account to access the systems. The password must satisfy complexity
requirements and shall be changed every 90 days. Access to the network folders
and programs where information is stored/managed depends on user permissions,
which are decided by factors such as division, role in the company, role in
the project, etc. The permissions are managed by administrators only and must
be requested by authorized persons through authorized channels.
Backup: TECNALIA has a two-level backup. The first level is the system
“previous versions” service that allows a user to recover a copy of the work
on his/her own (5 copies a day, kept for two weeks). Moreover, TECNALIA makes
a full backup of the working information every day. There are daily, weekly,
monthly and yearly copies. Recovery from this backup requires a formal
procedure.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Transfer of data: to transfer information, platforms that use security
protocols can be employed, such as OneDrive, SharePoint, or TECNALIA’s
“consigna”, together with information protection tools such as VeraCrypt and
others.
</th> </tr>
<tr>
<td>
JOWAT
</td>
<td>
National regulations
</td> </tr>
<tr>
<td>
CIKAUTXO
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
B4P
</td>
<td>
Regular server back-up of all data
</td> </tr>
<tr>
<td>
AEP
</td>
<td>
The data is stored in a firewalled and password-accessible server and in
online password protected server(s).
Daily back-up on a stand-alone mirrored hard-drive.
</td> </tr>
<tr>
<td>
VERTECH
</td>
<td>
Using the internal company server.
Documents are automatically saved on OneDrive; historical copies can be
accessed on the server.
</td> </tr>
<tr>
<td>
EXERGY
</td>
<td>
Hardware (computers) purchased for performance, reliability and security. All
machines are equipped with Windows Defender, automatically updated, and
password protected.
Password-protected cloud-based central document storage is used for project
documents, plus 2-step authentication protection for administrators.
Automatic file retention and regular electronic backups.
Password-protected email retention.
Guidance on safeguards provided for employees in the handbook, which all
employees are required to review.
Holding and processing of all personal data in line with General Data
Protection Regulation (GDPR) requirements.
</td> </tr>
<tr>
<td>
GAIKER
</td>
<td>
On-premise: from the early stages of the project until it is considered
closed, access to the information is granted only to the staff working
directly on it; there is a single live copy of the information and several
others in backup; the backup data is encrypted and protected with random
passwords of more than 50 characters. The passwords are kept in security
boxes, with physical access controls in place.
Offsite copies: access is restricted to the company’s IT staff, and the
information is encrypted, so that if anyone else accessed it, accidentally or
intentionally, it would be unusable.
</td> </tr>
<tr>
<td>
LEITAT
</td>
<td>
Using internal server
</td> </tr> </table>
# 5\. ETHICAL ASPECTS
The VIPRISCAR project partners are to comply with Article 34 of the GA
concerning ethics and research integrity, covering:
* Ethical principles (including the highest standards of research integrity)
* Applicable international, EU, and national law
In the VIPRISCAR project, no ethical or legal issues that could have an impact
on data sharing have been identified at the current stage.
It is important to note that the EU GDPR has been officially in force since
25 May 2018, aiming to protect and empower the personal data privacy of all EU
citizens, as well as to reshape the way organizations across the region manage
data and approach data privacy.
The GDPR is organized around seven key principles (European Commission, 2016):
* Lawfulness, fairness and transparency
* Purpose limitation
* Data minimization
* Accuracy
* Storage limitation
* Integrity and confidentiality (security)
* Accountability
**Personal data** is information that relates to an identified or identifiable
individual (name, number, location, IP address…). Information which has had
identifiers removed or replaced in order to pseudonymize the data is still
personal data for the purposes of the GDPR.
Hence, if any dataset collected and/or generated in the VIPRISCAR project may
involve data privacy issues, the responsible partner should take note of the
following key changes in the GDPR (GDPR.ORG, 2018) (European Commission, 2018)
and ensure compliance with the regulation. Note that only the relevant changes
are listed below; the consortium shall comply with all applicable GDPR
provisions, not only those listed.
**Conditions for consent**: The request for consent must be provided in an
intelligible and easily accessible form, along with an explanation of the
purpose of the data processing attached to that consent. The language used
must be clear and plain, rather than illegible terms and conditions full of
legalese.
**Increased territorial scope:** The GDPR is applicable if at least one of the
following conditions is met:
* The personal data processing concerns data subjects in the EU
* The personal data controller or processor is located in the EU, regardless of the exact location where the processing takes place
**Data subject rights:**
Breach notification: in case of any data breach that may “result in a risk for
the rights and freedoms of individuals”, the breach notification must be
provided within 72 hours of becoming aware of the breach.
Right to access: data subjects are empowered to request confirmation from the
data controller as to whether personal data concerning them is being
processed, where and for what purpose, and shall receive an electronic copy of
their personal data without additional cost.
Right to be forgotten: data subjects have the right to demand that the data
controller erase their personal data, cease further dissemination, and halt
third parties from processing it, provided that the data is no longer needed
for its original processing purpose or the data subjects withdraw their
consent.
**Privacy by design**: Data controllers shall take data protection into
consideration from the very beginning of system design. Appropriate measures
shall be taken to protect the rights of data subjects; for instance, only data
considered necessary for the completion of the tasks should be held and
processed, and only relevant personnel should be granted access rights for
data processing.
Recommendations on the right to be informed:
* Inform individuals about the collection and use of their personal data.
* Provide individuals with information including: the purposes for processing their personal data, the retention periods for that personal data, and who it will be shared with. This is called the ‘privacy information’.
* Provide privacy information to individuals at the time their personal data are collected from them.
* When you obtain personal data from a source other than the individual, you need to provide the individual with privacy information within one month.
* If you use the data to communicate with the individual, you should provide privacy information at the latest when the first communication takes place.
* When you collect personal data from the individual it relates to, you must provide them with privacy information at the time you obtain their data. You must tell people who you are giving their information to and give them an easy way to opt out.
* The information you provide to people must be concise, transparent, intelligible, easily accessible, and it must use clear and plain language.
* It is often most effective to provide privacy information using a combination of different techniques, including layering, dashboards, and just-in-time notices.
* User testing is a good way to get feedback on how effectively your privacy information is delivered.
* You must regularly review, and where necessary update, your privacy information. You must bring any new uses of an individual’s personal data to their attention before you start the processing.
The checklist (as shown in Table 15) suggests the information to provide when
collecting personal data, either from individuals directly or from other
sources (ICO, 2018).
### TABLE 15 CHECKLIST OF INFORMATION TO PROVIDE WHEN COLLECTING PERSONAL
DATA
<table>
<tr>
<th>
**What information do we need to provide?**
</th>
<th>
</th> </tr>
<tr>
<td>
The name and contact details of your organization
</td>
<td>
</td> </tr>
<tr>
<td>
The name and contact details of your representative
</td>
<td>
</td> </tr>
<tr>
<td>
The contact details of your data protection officer
</td>
<td>
</td> </tr>
<tr>
<td>
The purposes of the processing
</td>
<td>
</td> </tr>
<tr>
<td>
The lawful basis for the processing
</td>
<td>
</td> </tr>
<tr>
<td>
The legitimate interests for the processing
</td>
<td>
</td> </tr>
<tr>
<td>
The categories of personal data obtained
</td>
<td>
</td> </tr>
<tr>
<td>
The recipients or categories of recipients of the personal data
</td>
<td>
</td> </tr>
<tr>
<td>
The details of transfers of the personal data to any third countries or
international organizations
</td>
<td>
</td> </tr>
<tr>
<td>
The retention periods for the personal data
</td>
<td>
</td> </tr>
<tr>
<td>
The rights available to individuals in respect of the processing
</td>
<td>
</td> </tr>
<tr>
<td>
The right to withdraw consent
</td>
<td>
</td> </tr>
<tr>
<td>
The right to lodge a complaint with a supervisory authority
</td>
<td>
</td> </tr>
<tr>
<td>
The source of the personal data
</td>
<td>
</td> </tr>
<tr>
<td>
The details of whether individuals are under a statutory or contractual
obligation to provide the personal data
</td>
<td>
</td> </tr>
<tr>
<td>
The details of the existence of automated decision-making, including profiling
</td>
<td>
</td> </tr> </table>
# 6\. OTHER ISSUES
At the current stage, most of the consortium partners, including GAIKER,
TECNALIA, AEP, LEITAT and VERTECH, have reported no obligation to comply with
additional specific national, funder, sectorial, departmental, or
institutional data management policies.
Certain partners have reported using other procedures for data management:
B4P: Funder regulation
JOWAT: Data management software
More information may be updated in the future versions of the DMP (D8.5, due
M24; D8.6, due M36) regarding the details of the specific policies followed by
those partners as well as other possible issues related to data management if
identified.
# 7\. ALLOCATION OF RESOURCES
According to the guidelines provided by the European Commission (European
Commission, 2018), costs related to open access to research data in the
Horizon 2020 programme are eligible for reimbursement during the project
lifetime, provided the requirements in Article 6 and Article 6 D.3, as well as
the other articles relevant to the chosen cost category, are met.
The planned budget dedicated to data management, as already foreseen in the
GA, together with additional information provided by each partner, has been
gathered in Table 16 below. This information might be completed or evolve in
future versions of the DMP (D8.5, due M24; D8.6, due M36), depending on the
results of the questionnaires collected from the consortium partners.
### TABLE 16 ALLOCATION OF RESOURCES
<table>
<tr>
<th>
**Partner Name**
</th>
<th>
**Descriptions**
</th> </tr>
<tr>
<td>
TECNALIA
</td>
<td>
Open access articles (10k€)
Web page: web domain, picture, video, plugin… (2k€)
</td> </tr>
<tr>
<td>
EXERGY
</td>
<td>
Cost related to open access and IPR (5k€)
</td> </tr>
<tr>
<td>
LEITAT
</td>
<td>
Publication in Open Access (5k€)
</td> </tr> </table>
As for long-term preservation of the datasets, the different internal policies
of each partner are noted in Table 17 and will be updated in future versions
of the DMP (D8.5, due M24; D8.6, due M36) based on the information provided by
the consortium partners.
### TABLE 17 DATA LONG-TERM PRESERVATION POLICIES
<table>
<tr>
<th>
**Partner Name**
</th>
<th>
**Planned Resources**
</th>
<th>
**Decision Maker for Data Preservation**
</th>
<th>
**Preservation Timeframe**
</th> </tr>
<tr>
<td>
TECNALIA
</td>
<td>
Yes
</td>
<td>
Project Manager of VIPRISCAR
</td>
<td>
10 years
</td> </tr>
<tr>
<td>
JOWAT
</td>
<td>
Yes
</td>
<td>
Jowat
</td>
<td>
According to national regulation
</td> </tr>
<tr>
<td>
CIKAUTXO
</td>
<td>
To be
Determined
</td>
<td>
To be Determined
</td>
<td>
To be Determined
</td> </tr>
<tr>
<td>
B4P
</td>
<td>
Yes
</td>
<td>
Board of B4plastics
</td>
<td>
At least 3 years after project termination
</td> </tr>
<tr>
<td>
AEP
</td>
<td>
Yes
</td>
<td>
Project Manager
</td>
<td>
Indefinitely
</td> </tr>
<tr>
<td>
VERTECH
</td>
<td>
No
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
EXERGY
</td>
<td>
Yes
</td>
<td>
Project Manager and Head of department
</td>
<td>
To be confirmed
</td> </tr>
<tr>
<td>
GAIKER
</td>
<td>
Yes
</td>
<td>
**Internal policies.** Project information will be preserved in several
repositories: 1) on-premise storage systems, as repositories for the
information; 2) an on-premise copy of the data, as a first backup copy.
<td>
**Internal policies.** Virtually forever. At least 2 copies of the information
will be preserved for as long as the company exists.
**External repositories:**
Depending on the repository,
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
3. Offsite copy of data (cloud providers, in Dublin and Frankfurt) as an external backup copy of data.
4. External searchable scientific information repositories.
</td>
<td>
for example, if Zenodo is used, it will maintain the information for as long
as the CERN Laboratory exists (at the moment 20+ years guaranteed).
</td> </tr>
<tr>
<td>
LEITAT
</td>
<td>
No
</td>
<td>
Principal investigator of the VIPRISCAR project
</td>
<td>
</td> </tr> </table>
# 8\. EVOLUTION OF THE DATA MANAGEMENT PLAN THROUGHOUT THE PROJECT
This initial DMP will evolve continuously over the lifetime of the project,
and future versions will be provided in Deliverable 8.5 (due M24) and
Deliverable 8.6 (due M36). New questionnaires will be circulated to the
consortium partners in order to capture newly identified datasets, changes to
already identified datasets, and changes to the data management policy within
the consortium (e.g. new innovation potential, a decision to file for a
patent), as necessary.
# Introduction
The ExaQUte project participates in the Pilot on Open Research Data launched
by the European Commission (EC) along with the H2020 program. This pilot is
part of the Open Access to Scientific Publications and Research Data program
in H2020. The goal of the program is to foster access to research data
generated in H2020 projects. The use of a Data Management Plan (DMP) is
required for all projects participating in the Open Research Data Pilot, in
which they will specify what data will be kept for the longer term. The
underpinning idea is that Horizon 2020 beneficiaries have to make their
research data findable, accessible, interoperable and re-usable (FAIR), to
ensure it is soundly managed.
This initiative aims to improve and maximize access to and re-use of research
data generated by Horizon 2020 projects and takes into account the need to
balance openness and protection of scientific information, commercialization
and Intellectual Property Rights (IPR), privacy concerns, security as well as
data management and preservation questions.
Although open access to research data thereby becomes applicable by default in
Horizon 2020, during the ORDP it applies primarily to the data needed to
validate the results presented in scientific publications, although other data
can also be provided by the beneficiaries on a voluntary basis.
Data Management Plans (DMPs) are a key element of good data management,
providing an analysis of the main elements of the data management policy that
will be used by the consortium with regard to the project research data. A DMP
describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and re-usable (FAIR), a DMP
should include information on:
‐ the handling of research data during and after the end of the project;
‐ what data will be collected, processed and/or generated;
‐ which methodology and standards will be applied;
‐ whether data will be shared/made open-access, and
‐ how data will be curated and preserved (including after the end of the
project).
This document is the first version of ExaQUte project’s DMP and has been
elaborated within the first 6 months of the project. If significant changes
arise during the course of the project (such as new data, changes in
consortium policies, etc.), the DMP will have to be updated.
This DMP has been produced following the _Horizon 2020 FAIR Data Management
Plan (DMP) template_ and includes the following sections, as suggested by the
aforementioned guide:
1. Data Summary
2. FAIR Data
3. Allocation of resources
4. Data Security
5. Ethical aspects
6. Other issues
The ExaQUte Management Plan will be updated as the project progresses.
# 1\. Data Summary
The ExaQUte project aims at constructing a framework to enable Uncertainty
Quantification (UQ) and Optimization Under Uncertainties (OUU) in complex
engineering problems, using computational simulations on Exascale systems. The
methods and simulation tools developed in ExaQUte will be applicable to many
fields of science and technology.
In particular, the chosen application focuses on **wind engineering** , a
field of notable industrial interest. The problem to be solved has to do with
the quantification of uncertainties in the simulation of the **response of
civil engineering structures to the wind action** , and the shape optimization
taking into account uncertainties related to wind loading, structural shape
and material behavior.
The project entails numerical simulations of demanding real engineering
problems through the use of different codes and solvers that, given some input
data, produce a file including the values of the relevant parameters that
describe the results of the simulation of the original problem. Thus, the use
and/or generation of large data sets is inherent to the nature of the project,
making it very demanding regarding the amount of data involved.
Having said that, we have identified five main types of data sets that will be
used and/or generated during the span of the project:
‐ data related to the management of the project (such as GA and CA
documentation, review reports, minutes of meetings, deliverables, papers in
journals and communications in conferences, documentation of audits, etc.);
‐ data related to the geometry of the structure to be simulated;
‐ data produced as outcome of the numerical simulation;
‐ data for validation of the simulations;
‐ software.
Specific datasets may be associated to scientific publications, public project
reports and other raw data or curated data not directly attributable to a
publication. Datasets can be both collected, unprocessed data as well as
analyzed, generated data.
Research data linked to exploitable results will not be put into the open
domain if doing so would compromise their commercialization prospects or if
they have inadequate protection; this is an H2020 obligation. The rest of the
research data will be deposited in an open-access repository.
ExaQUte has created an intranet, organized under a GitLab repository at
https://gitlab.com/principe/exaqute, a snapshot of which is shown in Fig. (1).
At the same time, all the developments of ExaQUte will be integrated into the
GitHub page of Kratos:
https://github.com/KratosMultiphysics/Kratos (Fig. 4), which includes a wiki
with the documentation of the project.
Figure 1: Git repository to share documents between partners
In parallel, PU documents related to this project will be uploaded to the
ExaQUte customized repository created under the Open Science Platform
Scipedia, available at https://www.scipedia.com/institution/exaqute.eu, a
snapshot of which is shown in
Fig. (2).
Figure 2: Scipedia repository to share open documents
The project has also created a dedicated webpage for ExaQUte (www.exaqute.eu)
where all the public reports and deliverables will be uploaded as they are
produced (Fig. 3).
Figure 3: ExaQUte webpage with the list of deliverables to be uploaded as they
are produced during the span of the project
All the code from Kratos is publicly available at the GitHub page:
https://github.com/KratosMultiphysics/Kratos (Fig. 4). The same platform also
includes a wiki with the documentation of the project. On this platform, all
the developments of ExaQUte will be integrated.
Kratos adopts open standards for input and output formats, thus simplifying
the exchange of data. In particular, a JSON (JavaScript Object Notation)
format is employed in the definition of the parameters defining the
simulation. Simulation results can be stored either in the proprietary
“.post.bin” format (which can be opened by the GiD software) or in HDF5
format.
Figure 4: ExaQUte Code repository at GitHub
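As an illustration of this JSON-based parameter style, the sketch below writes
and reads a small parameters file with Python's standard json module; the key
names are assumptions for the example, not the authoritative Kratos schema:

```python
import json

# Illustrative only: a minimal parameters file in the JSON style described
# above. The keys ("problem_data", "solver_settings", ...) are placeholders.
parameters = {
    "problem_data": {"problem_name": "benchmark_case", "end_time": 10.0},
    "solver_settings": {"solver_type": "monolithic", "time_step": 0.01},
    "output_settings": {"format": "hdf5", "output_interval": 0.1},
}

# Write the parameters to disk in a human-readable, interchangeable form.
with open("ProjectParameters.json", "w") as f:
    json.dump(parameters, f, indent=4)

# Reading the file back yields an ordinary Python dictionary.
with open("ProjectParameters.json") as f:
    print(json.load(f)["solver_settings"]["solver_type"])  # "monolithic"
```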
## 1.1. Documents and Dissemination material
Documents will consist of all the reports generated during the project,
including all deliverables, publications and internal documents. Microsoft
Word (DOCX) and PDF(preferred) and will be used for final versions, while
intermediate versions can consider the usage of TeX (or LaTeX) files.
ExaQUte will produce dissemination material in a diversity of forms: flyers,
newsletter, public presentations (DOCX, PPTX, PDF or OpenDocument formats),
and videos demonstrating the performance of solvers, algorithms and plugins
(widely used video file formats for distribution, such as MOV or AVI will be
used)
We expect this data to be in the order of dozens of gigabytes, given the size
of the videos (the lion’s share of this type of data) to be included in the
dissemination material.
This data will be useful for those who want to learn about the outcomes of the
project. From the point of view of project management, the documentation will
be useful to EC officers and the consortium to assess the progress of the
project.
_Specific Provisions for Research Publications:_
Project Partners are responsible for the publication of relevant results to
scientific community by Scientific Publications. The data (including
associated bibliographic metadata) needed to validate the results presented in
scientific publications will be deposited in a research data repository. This
data is needed to validate the results presented in the deposited scientific
publication and is therefore seen as a crucial part of the publication and an
important ingredient enabling scientific best practice.
Metadata will maximize the discoverability of publications and ensure the
acknowledgment of EU funding. Bibliographic data mining is more efficient than
mining of full-text versions. The inclusion of metadata is necessary for
adequate monitoring, production of statistics, and assessment of the impact of
H2020.
In addition to basic bibliographic information about deposited publications,
the following metadata information is expected (a minimal machine-readable
sketch is given after the list).
* EU funding acknowledgement:
  * Contributor: "European Union (EU)" & "Horizon 2020".
* Peer Reviewed type (e.g. accepted manuscript; published version).
* Embargo Period (if applicable):
  * End date.
  * Access mode.
* Project Information:
  * Grant number: “800898”
  * Name of the action: “Research and Innovation action”
  * Project Acronym: “ExaQUte”
  * Project Name: “EXAscale Quantification of Uncertainties for Technology and Science Simulation”
* Publication Date.
* Persistent Identifier.
* Authors and Contributors. Wherever possible, identifiers should be unique, non-proprietary, open and interoperable (e.g. through leveraging existing sustainable initiatives such as ORCID for contributor identifiers and DataCite for data identifiers).
* Research Outcome.
* License. The Commission encourages authors to retain their copyright and grant adequate licences to publishers. Creative Commons offers useful licensing solutions.
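As a sketch only, the metadata record listed above could be represented in
machine-readable form as follows; the field names and all concrete values are
illustrative placeholders, not a mandated schema:

```python
# Hypothetical machine-readable form of the expected publication metadata.
publication_metadata = {
    "contributor": "European Union (EU) & Horizon 2020",
    "peer_reviewed_type": "accepted manuscript",
    "embargo_end_date": None,                      # only if an embargo applies
    "access_mode": "open access",
    "grant_number": "800898",
    "action_name": "Research and Innovation action",
    "project_acronym": "ExaQUte",
    "project_name": ("EXAscale Quantification of Uncertainties "
                     "for Technology and Science Simulation"),
    "publication_date": "2018-11-30",              # placeholder date
    "persistent_identifier": "doi:10.xxxx/xxxxx",  # placeholder DOI
    "authors": [{"name": "Jane Doe", "orcid": "0000-0000-0000-0000"}],
    "license": "CC-BY-4.0",
}
```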
The ExaQUte project will support the open-access approach to scientific
publications (as defined in Article 29.2 of the Grant Agreement). Scientific
publications covered by an editorial copyright will be made available
internally to the partners and shared publicly through references to the
copyright owners’ websites.
Whenever possible, a machine-readable electronic copy of the published version
(or the final peer-reviewed manuscript accepted for publication) will be
deposited in a repository for scientific publications as soon as possible, and
at the latest six months after publication. Moreover, the beneficiary should
aim to deposit, at the same time, the research data needed to validate the
results presented in the deposited scientific publications.
CIMNE (through its spin-off Scipedia S.L.) has developed the Scipedia
Publications repository, which is an open-access repository. The repository is
indexed by Google and fulfills international interoperability standards and
protocols to ensure long-term sustainability.
## 1.2. Data related to the geometry of the structure to be simulated
Simulation geometries will be prepared using GiD or other CAD/Preprocessing
software.
Exact geometries will be stored in the open format described in:
https://link.springer.com/article/10.1186/s40323-018-0109-4
The format employs a JSON notation and is hence readily readable and
interchangeable.
Whenever possible (not proprietary geometries) geometries written in this
format will be made available through the website.
## 1.3. Data produced as outcome of the numerical simulation
The ExaQUte project targets the solution of UQ and optimization problems
through the use of variations of Monte Carlo techniques. This essentially
implies running a great many simulations and extracting statistical data from
the outcome of each simulation sample.
For this very reason, intermediate results are never stored: it is preferred
to generate and analyze new data on the fly rather than to store the results
and analyze them later.
The computation outcome, to be stored and made available to the end user, is
thus a “normal” postprocessing output enriched with a statistical
characterization of the results.
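A minimal sketch of this on-the-fly approach, assuming a generic scalar
quantity of interest: each sample is generated, folded into running statistics
(here via Welford's algorithm) and then discarded, so no intermediate results
need to be stored. This illustrates the idea only; it is not the project's
actual UQ implementation:

```python
import math
import random

def run_simulation_sample() -> float:
    """Stand-in for one Monte Carlo simulation run (hypothetical)."""
    return random.gauss(1.0, 0.2)  # e.g. a scalar quantity of interest

# Welford's algorithm: running mean and variance without storing samples.
n, mean, m2 = 0, 0.0, 0.0
for _ in range(10000):
    x = run_simulation_sample()  # generated, analyzed, then discarded
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)

std_error = math.sqrt(m2 / (n - 1) / n)  # standard error of the mean
print(f"mean = {mean:.4f} +/- {std_error:.4f} (n = {n})")
```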
Kratos supports a multiplicity of formats for postprocessing output. Within
ExaQUte, output to native GiD format and to the open HDF5 format will be used.
We note in any case that final results will be made available to the general
public only for selected benchmarking cases. The outcome of other simulations
would not be of interest to the general public.
## 1.4. Data for validation of the simulations
Validation data is typically available in the form of tables of data recorded
by sensors or possibly as video footage.
Whenever possible, sensor input will be stored in HDF5 format so as to
maximize its encapsulation and to make it portable. Videos will be stored
using commonly available codecs.
JPG and PNG will be used to store static images.
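As an illustration, the following sketch (assuming the common h5py library;
the group, dataset and attribute names are hypothetical) stores a sensor time
series in the portable HDF5 format mentioned above:

```python
import numpy as np
import h5py

# Placeholder sensor readings: a 100 Hz signal recorded for one minute.
time = np.linspace(0.0, 60.0, 6000)
pressure = np.random.normal(101.3, 0.5, time.size)

# Write the series into a self-describing HDF5 file.
with h5py.File("validation_sensors.h5", "w") as f:
    grp = f.create_group("sensor_01")
    grp.create_dataset("time_s", data=time)
    grp.create_dataset("pressure_kPa", data=pressure)
    grp.attrs["location"] = "building model, windward face"  # metadata

# Any HDF5-capable tool can later read the data back.
with h5py.File("validation_sensors.h5") as f:
    print(f["sensor_01/pressure_kPa"].shape)  # (6000,)
```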
Only our industrial partner (Str.ucture) may hold some testing data that could
be considered restricted with regard to their industrial interests, and thus
not available to the general public. However, they would make it available to
the consortium when necessary, under their preferred conditions.
## 1.5. Software
ExaQUte produces open-source software, which can be readily downloaded and
compiled from the source repository. Point releases corresponding to the
deliverable (containing both a snapshot of the source and the compiled object
for Linux64) will be made available through the project’s GitLab account.
Released software will be packaged in ZIP format.
The possibility of packaging the software so that it can be automatically
installed as a Linux package or as a pip package will be explored. However, no
guarantee of success can be given in this regard.
# 2\. FAIR Data
## 2.1. Making data findable, including provisions for metadata
To facilitate discoverability (the degree to which something, especially a
piece of content or information, can be found in a search of a file or
database) of the data produced in the project, ExaQUte will establish a
taxonomy for the data generated during the course of the project.
The ExaQUte project will generate data resulting from the simulation results
during the development of the different simulation tools and the final
validation experiments. The data and associated software produced and/or used
in the project should be discoverable (and readily located) and identifiable
by means of a standard identification mechanism (e.g. Digital Object
Identifier). This provision clearly refers to data designed for publication.
Produced data files, plugins and research data will be accompanied by a README
file including who created or contributed to the data, its title, date of
creation and under what conditions it can be accessed. Documentation will also
include details on the methodology used, analytical and procedural
information, any assumptions made, and the format and file type of the data.
In the case of software, it may also include installation instructions and
usage examples. All this information will also be included in the manuscripts,
unless the structure of the document inhibits it (e.g. a journal/conference
paper).
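For illustration only, a README of the kind described above might be generated
as follows; the field values are invented examples:

```python
# Hedged sketch of the dataset README; the fields mirror the list in the
# paragraph above, while the example values are invented.
readme = """\
Title:        Wind-load UQ benchmark, case A (example title)
Creators:     J. Doe (CIMNE), J. Roe (partner institution) (example names)
Created:      2019-05-01
Access:       openly available through the project website
Methodology:  Monte Carlo UQ with Kratos Multiphysics
Assumptions:  stationary inflow (example)
Format:       HDF5 (results), JSON (geometry)
"""
with open("README.txt", "w") as f:
    f.write(readme)
```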
Releases are identified by the Git hash tag associated with the snapshot from
which they were generated. The name also takes into account the type of
compilation (Release, Debug, etc.). Such data can also be queried when
launching the program, for example:
```
>>> from KratosMultiphysics import *
 |  /           |
 ' /   __| _` | __|  _ \   __|
 . \  |   (   | |   (   |\__ \
_|\_\_|  \__,_|\__|\___/ ____/
        Multi-Physics 6.0.0-17e3c693fe-FullDebug
```
In the case of manuscripts, the owner of (i.e. the person responsible for) the
document will control its versioning, while files created by partners
contributing to the original will be named by appending “_initials” to the
filename.
## 2.2. Making data openly accessible
All documents and data that compromise neither IPR nor licensing rights will
be available to the public on the different platforms and repositories
described in Section 1.
Information about the modalities, scope and licenses (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.) in
which the data and associated software produced and/or used in the project is
accessible should be provided.
The data and associated software produced and/or used in the project should be
assessable by and intelligible to third parties in contexts such as scientific
scrutiny and peer review (e.g. the minimal datasets are handled together with
scientific papers for the purpose of peer review, data are provided in a way
that judgments can be made about their reliability and the competence of those
who created them).
## 2.3. Making data interoperable
Interoperability is the ability to access and process data from multiple
sources without losing meaning, and then integrate that data for mapping,
visualization, and other forms of representation and analysis.
The data and associated software produced and/or used in the project should be
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc. (e.g. adhering to standards for data
annotation, data exchange, compliant with available software applications, and
allowing re-combinations with different datasets from different origins).
## 2.4. Increase data re-use (through clarifying licenses)
Data re-use will be facilitated through the repositories of the project.
The consortium has set up quality procedures for internal documents,
deliverables and software. Publications are not considered in the procedure as
they already go through an external refereed process.
Images and videos to be used, as well as those acquired during the project,
will undergo a natural quality control by the RTD partners, who will verify
that the recordings meet the minimum quality requirements needed to run their
algorithms. The quality of images and videos produced during the project will
be assessed by the end-user partners, who will check that the material
complies with industry standards.
In the case of the software produced, quality is guaranteed by several means:
continuous integration performed by the partners, and the integration of tests
that confirm correct behavior.
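As an illustration (not the project's actual test suite), a minimal regression
test of the kind run in continuous integration might look as follows; the
benchmark function, reference value and tolerance are all invented:

```python
import unittest

# Stand-in for a small benchmark simulation whose result is compared
# against a stored reference value within a tolerance.
def lift_coefficient():
    return 0.512

class TestBenchmark(unittest.TestCase):
    def test_lift_coefficient_matches_reference(self):
        self.assertAlmostEqual(lift_coefficient(), 0.512, delta=1e-3)

if __name__ == "__main__":
    unittest.main()
```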
# 3\. Allocation of resources
Each ExaQUte partner has to respect the policies set out in this DMP. Datasets
have to be created, managed and stored appropriately and in line with
applicable legislation. The Project Coordinator has a particular
responsibility to ensure that data shared through the ExaQUte website are
easily available, but also that backups are performed and that proprietary
data are secured.
CIMNE, as Project Coordinator of ExaQUte, will ensure dataset integrity and
compatibility for its use during the project’s lifetime by different partners.
Validation and registration of datasets and metadata is the responsibility of
the partner that generates the data. Metadata constitutes an underlying
definition or description of the datasets, and facilitates finding and working
with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data. Quality control of this
data is the responsibility of the relevant WP leader where the data was
generated, supported by the Project Coordinator.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. WP1 will
provide naming and version conventions.
Last but not least, all partners must consult the concerned partner(s) before
publishing, in the open domain, data that can be associated with an
exploitable result.
All dissemination material produced during the project will be preserved and
made public as soon as possible to let the research community know about
ExaQUte solutions and results at the earliest date.
For the public reports and dissemination material, little extra effort is
foreseen for preservation beyond publishing them in public repositories (see
Section 2.2). It is agreed that these data have to be preserved for a minimum
of 3 years after the project’s end.
# 4\. Data Security
Storage and maintenance of ExaQUte data will be handled according to the data
category, privacy level, need to be shared among the consortium, and size.
This section covers the storage selections for data, independently of whether
the data is to be shared externally. For that purpose, specific storage
systems allowing public access will be selected.
Software data and source code will be stored on a **GitLab** server: a project
management web application offering multiple-project support, version control
(Git), issue tracking, file management, activity feeds and a wiki. Allowing
installation on a partner’s server is an important feature, as it is a project
requisite for internal sharing of software.
The use of Git guarantees that a distributed copy of all the data is available
on every computer that has cloned the repository, thus removing the need for
dedicated backup procedures.
Maintenance of datasets stored in partners’ servers will be carried out
according to the partners’ backup policy.
We do not envision any sensitive data being produced or transferred during
ExaQUte. Only our industrial partner (Str.ucture) could hold some data that is
sensitive with respect to its industrial interests, which it would make
available to the consortium when necessary, under its preferred conditions.
# 5\. Ethical aspects
ExaQUte will neither make use of nor produce any type of data that could be
described as either “sensitive” or raising any ethical issue.
# 6\. Other issues
N/A
1\. **EXECUTIVE SUMMARY**
2\. **DATA MANAGEMENT**
  2.1 **Introduction**
  2.2 **Data Summary**
  2.3 **FAIR DATA**
    2.3.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
    2.3.2 MAKING DATA OPENLY ACCESSIBLE
    2.3.3 MAKING DATA INTEROPERABLE
    2.3.4 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)
  2.4 **ALLOCATION OF RESOURCES**
  2.5 **DATA SECURITY**
  2.6 **ETHICAL ASPECTS**
3\. **OTHER**
4\. **CONCLUSIONS**
5\. **ANNEXES**
# EXECUTIVE SUMMARY
This deliverable, D1.3: Data Management Plan, acts as a detailed and
comprehensive document on the data management plan that is being followed to
guide the use of the various types of data in the project. This deliverable is
linked to VECMA’s Work Package 1: Management. This management plan is a
‘living document’ that will be updated throughout the project, as required.
# DATA MANAGEMENT
## Introduction
This deliverable responds to the standard questions that must be answered to
produce an initial VECMA project data management plan. The data management
plan presented in this document was produced using the DMP Online tool
available at: _https://dmponline.dcc.ac.uk/_ [1] and follows the H2020 DMP
template [2].
## Data Summary
**Provide a summary of the data addressing the following issues:**
* **State the purpose of the data collection/generation**
* **Explain the relation to the objectives of the project**
* **Specify the types and formats of data generated/collected**
* **Specify if existing data is being re-used (if any)**
* **Specify the origin of the data**
* **State the expected size of the data (if known)**
* **Outline the data utility: to whom will it be useful**
The purpose of the VECMA project is to enable a diverse set of multiscale,
multiphysics applications -- from fusion and advanced materials through
climate and migration, to drug discovery and the sharp end of clinical
decision making in personalised medicine -- to run on current multi-petascale
computers and emerging exascale environments with high fidelity such that
their output is "actionable". That is, the calculations and simulations are
certifiable as validated (V), verified (V) and equipped with uncertainty
quantification (UQ) by tight error bars such that they may be relied upon for
making important decisions in all the domains of concern. The central
deliverable will be an open source toolkit for multiscale VVUQ based on
generic multiscale VV and UQ primitives, to be released in stages over the
lifetime of this project, fully tested and evaluated in emerging exascale
environments, actively promoted over the lifetime of this project, and made
widely available in European HPC centres. All data collected, used and
generated by the project is done in support of this objective.
VECMA is a large consortium, comprising not just funded core partners, but
also a network of associate partners who seek to participate in the project's
activities. As such the list of types and formats of data generated within
VECMA will be extensive and dynamic, and includes but is not limited to:
* Formatted/unformatted text
* MOV
* MP4
* Binary
* HDF5
* XLSX
* JPG
* VTK
* PDB
* PSF
* PRMTOP
* XTC
* PDF
* PNG
* EPS
* DICOM
* C3D
VECMA is not actively involved in assembling initial datasets and has a policy
of using data brought into the project by project partners.
The data originates from many different sources. Non-simulation data, used to
build models generally, can originate from clinical data management systems or
DICOM image stores.
Simulation results are generated from computational models, with the focus of
the project being on running these models on high performance computing
resources around Europe. We distinguish at least three kinds of data: 1) data
used to produce input for simulations, 2) data used to verify the results of
simulations, and 3) the results of simulations themselves.
The exact extent of the data the project will need to store is unknown, but it
is anticipated to be in excess of 300TB in total.
The project includes a fast track that will ensure applications are able to
apply available multiscale VVUQ tools as soon as they are available, while
guiding the deep track development of new capabilities and their integration
into a wider set of production applications by the end of the project. The
deep track includes the development of more disruptive and automated
algorithms, and their exascale-aware implementation in a more intrusive way
with respect to the underlying and pre-existing multiscale modelling and
simulation schemes. The data managed and produced within the project is of
immediate use to alpha users and researchers in these areas, and in the longer
term to industrial researchers and a wider scientific community across various
domains. The data generated by the project will typically be generated by
software and workflows developed in the project, and therefore correspond to
specific versions of that software. In addition to our data management
infrastructure, the project has developed a software repository, which acts as
a central store of our project’s software tools. We will use the metadata
associated with data objects to reference the specific version of the code or
workflow used to generate the data, using its software repository URL.
## FAIR DATA
### MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
**Making data findable, including provisions for metadata:**
* **Outline the discoverability of data (metadata provision)**
* **Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
* **Outline naming conventions used**
* **Outline the approach towards search keyword**
* **Outline the approach for clear versioning**
* **Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**
Much of the initial data, at least that used to build models, is held by the
project partners as the result of other projects and research endeavours. As
such, VECMA does not have control over how this data is published and made
available.
Where data is generated by research conducted within the project, we will
mandate that the final results of a simulation can be made discoverable. UCL
has been a participant in the EUDAT and EUDAT2020 projects and became the
first higher education institutional partner to join the EUDAT CDI. We will
therefore leverage the best practice and services which EUDAT provides to make
data discoverable (including the issuing of unique identifiers through the
Handle system or Digital Object Identifiers). This will allow us to exploit
the EUDAT B2FIND catalogue to make data keyword searchable. In addition to
EUDAT resources we will also exploit local institution or other standard
repositories where possible or mandatory to use. Versions of code associated
with publications will additionally be uploaded to the Zenodo repository which
can then be referenced by DOI in metadata.
The EUDAT Consortium follows the OpenAIRE guidelines for Data Archives by
mandating standard minimal metadata and publication of metadata using the OAI-
PMH protocol. Simulation results will be deposited in the B2SHARE service, and
as such VECMA researchers will be compelled to provide a basic metadata record
that complies with the OpenAIRE application of the DataCite Metadata Schema.
In addition, data will be documented with a content- or discipline-specific
metadata record. The data generated by the project will arise from a number of
different interrelated fields; therefore no single metadata standard will
apply to all cases, but we will work with data generators to identify
suitable standards from the Research Data Alliance Metadata Standards
Directory [3].
Where appropriate we will use established community metadata schemas (such as
the Common Information Model, developed by ENES, for climate models). However,
there are no general standards for multiscale models, so we propose to develop
an internal VECMA schema mandating a minimal set of metadata that must
accompany all communicated datasets (the VECMA project name and grant number,
application area, link to the code used and version number where appropriate).
All project-related deposits will use keywords that clearly reflect the
multidisciplinary and multiscale aspects of the generating application.
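As a purely illustrative sketch of such a minimal record (the field names and
values below are ours, not the finalised schema):

```python
# Hedged sketch of the minimal internal VECMA metadata record proposed
# above; field names and example values are illustrative only.
minimal_metadata = {
    "project": "VECMA",
    "grant_number": "800925",
    "application_area": "multiscale fusion simulation",  # example value
    "code_url": "https://example.org/vecma/tool",         # hypothetical URL
    "code_version": "v1.2.0",                             # example value
    "keywords": ["VVUQ", "multiscale", "exascale"],
}
```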
### MAKING DATA OPENLY ACCESSIBLE
* **Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
* **Specify how the data will be made available**
* **Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
* **Specify where the data and associated metadata, documentation and code are deposited**
* **Specify how access will be provided in case there are any restrictions**
Data that relates to published work will be made available after a suitable
embargo period (as defined by the relevant journal). Where specific data is
identified as having legal, ethical or IPR barriers, the VECMA project will
work with the data owners to identify whether the data can be made open after
a period of embargo. We will make use of the features of EUDAT that allow
depositors to choose to keep data private and apply embargo periods.
Data will be made openly available via the B2SHARE repository. This is a user-
friendly, reliable and trustworthy way for researchers to store and share
research data from diverse contexts. It guarantees long-term persistence of
data and allows data, results or ideas to be shared worldwide.
All data hosted within the EUDAT CDI will be advertised through the central
B2FIND catalogue and assigned a persistent identifier. The B2FIND service is a
web portal allowing researchers to easily find and access collections of
scientific data and allowing them to access the data using a web browser. As
well as the metadata mandated by EUDAT, we will provide links to software used
to generate the data (generally VECMA modelling tools), which are listed in
the software catalogue featured on the VECMA project website.
VECMA intends to make use of the B2DROP service provided by EUDAT for sharing
live data internally in the project, which will ease the transition of making
data openly available in future. B2DROP is a tool to store and exchange data
with collaborators and to keep data synchronized and up-to-date. VECMA will
take advantage of the free storage space provided for research data within the
B2DROP framework.
### MAKING DATA INTEROPERABLE
* **Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
* **Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**
In general, data used and created by the VECMA project is stored in standard
formats such as DICOM and PDB. Data will be annotated with the metadata
standards mandated by EUDAT when it is deposited, along with appropriate
standards from the Research Data Alliance Metadata Standards Directory.
Because of the vast array of data types arising from the VECMA project, it is
impossible to define a single interoperability standard, while the project
does not have sufficient human resources available to enforce ontological
annotation. However, we will produce guidance for researchers to annotate
their data using popular ontologies such as SNOMED [4].
### INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)
* **Specify how the data will be licenced to permit the widest reuse possible**
* **Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
* **Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
* **Describe data quality assurance processes**
* **Specify the length of time for which the data will remain re-usable**
We expect core project partners to deposit their data openly using a Creative
Commons version 4.0 licence or equivalent. Unless there is a publication
requirement, IPR or data protection issue, we would expect data to be made
available at the conclusion of the relevant work package within VECMA. We will
also encourage our associate partners to adopt similar policies and promote
these policies at VECMA training events.
The EUDAT B2SHARE service allows data to be shared openly or kept private.
Regardless of whether deposited data are made open or kept private, metadata
records submitted as part of a data deposit are made freely available for
harvest via OAI-PMH protocols. Accessible data is made available directly to
users of EUDAT CDI services through graphical user interfaces and application
programming interfaces.
We will make published data available for third-party use as long as the EUDAT
platform is able to host it.
The use of open standard formats, metadata annotation and workflow
documentation (on the VECMA software portal) will be used to help ensure data
quality prior to deposit.
## ALLOCATION OF RESOURCES
**Explain the allocations of resources, addressing the following issues:**
* **Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
* **Clearly identify responsibilities for data management in your project**
* **Describe costs and potential value of long-term preservation**
As outlined in section 6, we will largely build on the services provided by
the EUDAT project to make our data FAIR compliant. The lead partner UCL
already pays a membership subscription to participate in the EUDAT CDI, which
will be beneficial to the whole consortium, so we don’t anticipate incurring
any further costs to use these services.
Project data management is primarily the responsibility of individuals leading
tasks that generate data within the project but is being overseen by the
Project Technical Manager (Dr Derek Groen) and the Project Applications
Manager (Dr Olivier Hoenen).
We will leverage facilities offered by EUDAT for the long-term preservation of
data.
UCL has previously developed a relationship with the EUDAT data nodes RZG and
EPCC to provide long-term B2SHARE and B2SAFE provision, which we will aim to
make use of in this project. PSNC, one of the VECMA consortium partners, is
also a member of EUDAT. PSNC will provide all the physical storage required
for the project partners to store their data, such as simulation results. To
facilitate this, PSNC has asked each partner to indicate how much storage is
needed and will allocate a total physical storage capacity to be available to
all partners.
## DATA SECURITY
**Address data recovery as well as secure storage and transfer of sensitive
data**
Internally within the project, file-based data will be shared using the B2DROP
service, which uses the HTTPS protocol for secure transfer. Other types of
data, such as DICOM image data, will be stored at a data centre at UCL, making
use of the access control and secure transfer features provided by the service
in question, and taking advantage of UCL’s central data centre management
policies. Other partners, including the HPC centres PSNC and LRZ, have
considerable data storage, some of it free of charge. The VECMA project can
therefore use storage resources provided by EUDAT, PSNC and LRZ.
Data shared and published via the EUDAT CDI will be stored at one or more
partner sites, according to applicable service level agreements and policies.
Backup of data is performed at two levels using the B2SAFE service: multiple
replicas of data are stored at different sites (i.e. geographically and
administratively different); and data may additionally be backed up at an
individual site. Responsibility for the storage and backup at any individual
site lies with the designated site manager.
All EUDAT CDI core sites are large, national or regional data and computing
centres and operate according to good IT governance and information security
principles. Some sites are accredited through the ISO 27001 information
security process and/or have certifications of trustworthiness such as the
Data Seal of Approval, while others are working actively towards it.
## ETHICAL ASPECTS
**To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former**
VECMA does not actively collect data from individuals, and simulation
scenarios are largely based on publicly obtainable/consented data that has
been provided to project partners. For further details on how the project is
handling any ethical issues which arise refer to VECMA D4.5 Ethics Report [5].
Regarding data governance, VECMA is not intended as a facility for the routine
processing of live, identifiable clinical data; it will operate in the
research domain, and all data introduced by users will be required by the
VECMA conditions of use to be pre-processed to render it non- personal, and so
excluded from consideration under current and anticipated future research
governance regulations. VECMA will however act as a Data Controller for the
information relating to the registration and access control of its users, and
such data will be handled in full accordance with appropriate pan-European
legislation.
Regarding data ethics, the VECMA framework is designed to support independent
users in their access to large-scale computational facilities and does not
carry out patient-related research; as a consequence, the VECMA project does
not itself acquire or handle patient-specific clinical data. Rather, it
enables users to work with models, applications and data for which they are
responsible, in the pursuit of their own research goals. Users sharing data
must do so under the terms granted by the data’s original ethical sanction,
and again users will be required by the VECMA conditions of use to reach
documented agreement that the terms of ethical sanction have been met. It is
the case however that ultimately VECMA cannot take responsibility for the
provenance or ethical compliance of data shared through its infrastructure, nor
can it take account of the diverse legislation and the variable interpretation
of European directives that may occur in the various Member States.
However, situations may arise where VECMA will have access to clinical data.
In these situations, VECMA should be considered a _Data Manager_ , which is
delegated by the _Data Provider_ (typically a hospital) to handle clinical
data, for which the data provider has received from the _Data Owner_ (the
patient) the necessary permission to allow the treatment to be accessed by one
or more _Data Consumers_ (typically modelling experts) in order to fulfil a
certain treatment scope. In order to be legally compliant, clinical data
require two things: the permission to treat from the data owner (the patient),
and an adequate protection of confidentiality. This in turn implies:
A: VECMA can handle only clinical data for which access has been granted. All
users are fully responsible for ensuring that the necessary permission has
been acquired. VECMA will assist not-for-profit users such as research
hospitals or universities by providing them with informed consent templates
(written by an expert) that provide the type of permission necessary for a
given treatment using the project’s tools and services.
B: Full anonymisation: when the processing of the data does not require the
distinguishing of one individual patient from another, if necessary VECMA will
provide a server, to be installed behind the hospital firewall, that will
automate the replication of selected data to VECMA storage, while providing
automated semantic annotation according to popular ontologies, and
irreversible anonymisation according to agreed rules. This server will be
managed by the hospital staff.
C: Pseudo-anonymisation via a trusted third party: if the identity of the
patient cannot be entirely removed (for example, for personalised clinical
treatment), the type of infrastructure is the same as (B) above but this time
the data are annotated with a PatientID that remains, within the safety of the
hospital secure network, associated with the patient’s actual identity.
# OTHER
**Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)**
N/A
# CONCLUSIONS
This data management plan will help VECMA project partners to identify the
correct decisions that must be made regarding the use of our data throughout the
project. The plan is a living document and it will be updated at various
project stages, as required.
| Data type | Description | Format | Size | Work package |
| --- | --- | --- | --- | --- |
|  |  | Raw format for more sophisticated final postprocessing. Selected scanner logs. | all active sites; TOTAL 350GB |  |
| **QALY questionnaires** | Questionnaire responses | Structured text format | <1MB; TOTAL <1GB | WP5 (150) |
| **CT scans for hospital standard-of-care treatment** | Human computed tomography images and coded radiology reports | DICOM format for images; structured text format for reports | 250MB; TOTAL 150GB | WP5 (150 x up to 4 scans per patient) |
| **Clinical patient data** | e.g. chemotherapy details extracted from clinical notes and coded before storage to preserve anonymity | XML format | 1MB | WP5 |
| **Prediction model** | Optimised prediction model; outcomes of prediction model | Computer code and spreadsheet of results, e.g. XLSX format | <1GB | WP4 and WP5 |
| **Health economic data** | Predictions of the potential health economic impact of the new metabolic MRI scans | Health economic evaluation data – the CHEER standard (Consolidated Health Economic Evaluation Reporting Standards), which provides guidelines on reporting health economic evaluation data | <1GB | After WP5 |
# _Table 1: List of data types expected to be generated_
The NICI consortium strives to use recognised standards for data documentation
and meta-data production in order to promote re-use of our data outputs beyond
the consortium members. The final decision on which standards to use will be
made by the Management Board and will be included in revisions to this Data
Management Plan once test data sets are available. The following standards
might be applicable and are under consideration:
* Imaging data – The DICOM standard (Digital Imaging and Communications in Medicine - http://medical.nema.org), which is the international standard for medical images and related information (ISO 12052). It defines the formats for medical images that can be exchanged with the data and quality necessary for clinical use. DICOM is one of the most widely deployed healthcare messaging standards in the world. We are developing a set of extensions to DICOM for the new metabolic imaging format which we intend to publish in due course (in WP4).
* Imaging data – Raw data formats defined by the vendors, Philips, Siemens, GE. This data will also be archived to ensure that any parameters missing from the DICOM files can be extracted for the final post-processing stages. This is a form of “insurance”.
* Imaging protocols – Exported in PDF and in binary format from each scanner during monthly QA will be archived to ensure protocol consistency.
* Clinical Study Design, Data collection and Analysis – CDISC standards (http://www.cdisc.org) provide a format to document the entire study in WP5.
* NMR data – the nmrML and nmrCV standards, which are an open mark-up language for NMR data and an MSI-sanctioned NMR controlled vocabulary, to support the nmrML data standard for nuclear magnetic resonance data in metabolomics with meaningful raw data descriptors (http://nmrml.org).
* CT scans and radiographers reports will be obtained from the recruiting hospital systems, most probably in DICOM format and PDF or XML for the reports.
We anticipate that the data from the organoid study and the human metabolic
imaging data will be of future interest to the community. They will provide a
thorough evaluation of the metabolic signatures seen in vitro, and in healthy
subjects (WP3) and in patients with colorectal cancer undergoing chemotherapy
(WP5). We anticipate that the prediction model and health economic predictions
will be of interest to the scientific community, and to experts in health
policy.
**4 FAIR data**
# a. Making data findable, including provisions for metadata
Our dissemination strategy will make data generated during the NICI project
discoverable by stakeholders from academia, clinical medicine, MRI technology
companies and by patient groups.
## _Table 2: Dissemination strategy_
We have created a Zenodo Community ( _https://zenodo.org/communities/nici/_ )
which will allow us to upload all of the smaller open access data sets, with
accompanying metadata to enable them to be found and searched. Zenodo provides
a DOI to all uploaded objects and has a robust system for tracking version
numbers and object metadata. In addition, we are also investigating the option
to use DataLad ( _https://www.datalad.org/_ ), which is a Git-based open source
system for federated data sharing. Finally, we will reference the original
data sets where relevant in scientific publications arising from the NICI
project and on the NICI website. This will ensure that they can be located by
interested parties.
# b. Making data openly accessible
To maximise data utilisation, the consortium will archive all data and provide
open access to all data that are not earmarked for exploitation.
Unless earmarked for exploitation (by the Management Board), data will be
shared. Data will be coded and anonymised in order to protect participants’
privacy. NICI will make use of an open access depository (e.g. the NICI Zenodo
Community, _https://zenodo.org/communities/nici/_ ). Data will be
deposited as soon as possible.
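For illustration only (this is not the approved NICI procedure), the coding
step mentioned above could look like the following sketch, which replaces a
direct identifier with an irreversible code before storage; the salting scheme
and code format are invented:

```python
import hashlib

# Replace a direct identifier with an irreversible code before storage.
# The salt handling and ID scheme here are illustrative only.
def code_patient_id(patient_id: str, project_salt: str) -> str:
    digest = hashlib.sha256((project_salt + patient_id).encode()).hexdigest()
    return f"NICI-{digest[:12]}"

print(code_patient_id("hospital-record-12345", "keep-this-salt-secret"))
```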
Data from MR studies will be uploaded to UCAM for central archiving. We intend
to use the High Performance Hub for Informatics (HPHI) at the Wolfson Brain
Imaging Centre for this purpose. Representatives from each imaging site are in
the process of being granted UCAM computer accounts. These permit access to
the HPHI cluster which has ample storage to host the project data sets. We
also have access to a PACS system and an XNAT instance for making the imaging
data discoverable and sharable in future with third parties.
## c. Restricted-access data sets
Data from the organoid studies in WP1 that is the basis of the restricted
circulation deliverables D1.2 - D1.5 will not be publicly released at this
time. This is to protect the prior IP Rights of the partners developing this
aspect of the project, and to make it possible for us to exploit IP that may
arise during the human study (WP2-5).
Discussions regarding the handling of identifiable clinical data for patients
recruited in WP5 are ongoing. We will seek approval for our proposed means of
handling identifiable clinical data from the recruiting hospitals in WP5 and
as part of our Ethics application for the clinical study.
## 5 Tools for Data Sharing
We intend to extend open-source tools from the imaging community during the
NICI project. For imaging, we intend to extend the NIH “Gadgetron” open source
reconstruction framework to support metabolic imaging. For spectroscopy, we
intend to extend the “Oxford Toolbox for Magnetic Resonance Spectroscopy”
(OXSA) for an integrated workflow with NICI metabolic datasets.
We are now piloting the use of the XNAT database, hosted on the HPHI cluster
at UCAM for data sharing across the consortium.
Imaging research and analysis is increasingly dependent on acquiring data from
large numbers of subjects, which in turn means searching across wide
geographical areas to find enough subjects that meet your study's criteria.
One way to manage this is to collaborate with a number of research
institutions to recruit and image subjects from.
While this is economically more feasible (and friendlier to your subjects), it
introduces a new host of challenges for study coordination: disparate scanning
technologies and devices; nonuniform process for image acquisition and data
handling; the challenge of aggregating all this data into a centralized
system, and then managing access to this data across a large number of
collaborators from outside your institution.
XNAT has evolved to solve this problem.
Because XNAT is a web-based application, it has the built-in capability to be
accessed from anywhere in the world. Necessarily, security and fine-grained
access controls are built into XNAT at the root level. XNAT administration has
been built to support the complexities of multi-center research projects,
including:
* Highly configurable DICOM data importing, to unify data from multiple scan sources.
* Fully audited security.
* Siloed data access for each institution, with the capability of sharing data across all institutions.
* Customization of data queries that fit data into your study protocols.
* Protection from inadvertent PHI on data gathered from multiple sources.
* Reporting tools for study coordination.
Currently, XNAT is supporting a number of high-profile multi-site research
studies, including the _Human Connectome Project_ , the _DIAN study_ of
inherited Alzheimer's Disease, the _INTRUST_ study of post-traumatic stress
disorders, and the _PREDICT-HD_ study of Huntington's Disease.
_**Description of XNAT from <https://www.xnat.org/case-studies/multi-center-
studies.php> ** _
For the novel metabolic imaging methods, we intend to extend the International
Society for Magnetic Resonance in Medicine Raw Data Format (ISMRMRD), which is
a cross-platform format for unprocessed 1H imaging data, to also support the
other nuclei in the NICI project (31P, 23Na, 13C and 19F). Our
adaptations to the ISMRMRD format will be published. Our adaptations to the
software tools that convert from Siemens, Philips and GE proprietary data
formats into ISMRMRD will be published wherever possible. (It is possible that
in some instances these tools will only be available to sites owning a scanner
from the appropriate vendor, and who therefore have a research agreement with
that vendor covering the proprietary data formats used as input for these
conversion steps.)
We envisage at this point the following restrictions on re-use:
1. Existing proprietary rights by the vendors (Siemens, Philips, GE) to raw data formats on their platforms. Our approach will be to convert data into a vendor-independent format (extended ISMRMRD format) for release, so that only the acquiring sites need to use the vendors' IP in these data formats.
2. The results of the NICI clinical study in terms of a predictive classifier for progression free survival will be evaluated for possible IP protection before they are published. This may be a valuable output from the project that should be exploited commercially by the consortium, or used in follow-on research.
3. Identifiable clinical data from patients enrolled in the validation study (WP5) will likely be subject to restrictions imposed by the Research Ethics Committees and by the recruiting hospitals in this study.
Unless data are earmarked for exploitation, they will be shared. The NICI
consortium will make use of two data sharing modalities, dependent on whether
or not data will be made accessible with or without restrictions:
* Open access data sharing: data will be deposited in an open access depository (e.g. at the NICI Zenodo Community https://www.zenodo.org/communities/nici/)
* Restricted open access data sharing: data will be deposited in a data depository that allows for restricted open access once access has been granted by the consortium. Interested parties can apply for access through a web portal made available by UMCU.
The data, including associated metadata, needed to validate the results
presented in scientific publications will be deposited as soon as possible.
Future revisions of this Data Management Plan may specify which intermediate
datasets can be destroyed and at what point in time. We will aim to balance
the benefits of open access to data against the resources available to share
it.
The NICI consortium Management Board will scrutinise data access requests
during their regular teleconferences. Should the number of data access
requests exceed the capacity of this committee, we will appoint a data access
committee, chaired by the Data Manager (now being recruited to UCAM).
Data deposited at Zenodo will be provided with a machine readable license
where possible.
## 6 Making data interoperable
NICI will use recognised standards to promote re-use, reproduction and
interoperability (responsibility of Management Board). This includes DICOM and
raw data formats for imaging data, PDF for protocols, CDISC for clinical
study, raw data formats, nmrML and nmrCV for NMR data, CHEER standard for
health economic evaluation data, RECIST and DICOM standards for CT reports and
coded/anonymised data for personal data. These are listed in detail in Table 1
above.
To make our data interoperable, we have chosen to use:
* Long lived file and storage formats.
* For all data of long-term value, up to a maximum of 50GB in size, we will use the open access depository, Zenodo, to publish them. Zenodo offers facilities for long term data preservation.
* During the project, internal copies of data will initially be stored at the acquiring site, and archived there for disaster recovery. Data will then be copied to the central data archive on the HPHI system at UCAM for sharing between the consortium.
## 7 Increase data re-use
### 7.1 Data archival
We intend to make the data acquired in the NICI project re-usable for 10
years.
Funding the storage of the raw data (approx. 2TB) is beyond the means of the
NICI project. It also exceeds the capacity of the Zenodo platform’s free
tier. However, we will nevertheless archive the raw data at each site
performing data acquisition on a “best effort” basis by the relevant
consortium member. In practice, this is likely to achieve the 10-year archival
goal.
By publishing our data formats in the scientific literature, by releasing open
source code to interoperate with the data, and by using open access
depositories (e.g., Zenodo), we intend to make all the _**processed** _ data
outlive the NICI project. This processed data will be archived for 10 years
(or longer) through the Zenodo platform, or through one of the long-term
institutional data repositories at the University of Cambridge.
### 7.2 Quality assurance
Quality assurance for the data archival system will be developed as the post-
processing and QA pipeline is constructed in WP4. Our aim is to provide open-
source software that can interpret the data acquired during the project and
publish this alongside our scientific findings.
### 7.3 Data licencing and access procedures
Ownership and access to key knowledge is defined in the NICI Consortium
Agreement, which was based on the comprehensive Model Consortium Agreement for
Horizon 2020 (DESCA 2020). In general:
* Each partner remains sole owner of background information. Background information relevant for carrying out the work plan is available, royalty-free, to the consortium.
* Foreground information will reside with the partner(s) that generate the knowledge.
* The consortium is committed to the protection of IPR related to the project, in collaboration with expertise in legal, financing, business development and, if relevant, patenting procedures. Protection of IPR is overseen by the Management Board, focusing on those strategies that convert IP into the highest possible value.
* Philips, GE and Siemens have prior agreements on IPR sharing; UMCU, Philips and MR Coils have prior agreements on IPR; UCAM and Siemens have a Master Research Agreement that covers IPR.
In a later version of this Data Management Plan, we will confirm the licence
and procedures for accessing the restricted data in the NICI data repository.
Our initial working point is to adapt the procedures used successfully by the
UK Dementias network or by ConnectomDB. These are both projects that have
gathered substantial amounts of imaging data in human subjects, providing this
with open access, and involving partner organisations in the EU.
## 8 Allocation of resources
It is not currently clear what the full costs for making our data “FAIR” will
be. Costs for personnel time for the Data Manager will be met from the
personnel budget allocated to UCAM. The Data Manager (recruitment in progress)
will pay special attention to:
1. Data collection and storage
2. Data standards
3. Data sharing, accessibility and exploitation
4. Data preservation and curation
The Data Manager will report regularly to the project Management Board and the
Coordinator.
We intend wherever possible to use centrally-funded repositories, avoiding
adding specific costs in the NICI project budget. The acquiring sites will
need to run the conversion tools to convert their proprietary raw data into
the NICI project’s standardised extended ISMRMRD format and Metabolic Imaging
DICOM format, then upload the data to the UCAM HPHI repository. This will
require personnel time and computing hardware which is available through their
NICI project staff and existing infrastructure at the sites.
The NICI consortium intend to nominate a Data Manager to take overall
responsibility for the execution of this Data Management Plan. This role is
planned to be allocated to a Research Associate now being recruited to the
University of Cambridge (UCAM).
The NICI Management Board will review this Data Management Plan by month 24.
An important aspect of this review will be to assess what data are likely to
be of long-standing value to the community so that these can be properly
archived before the end of the NICI project.
## 9 Data security
Data generated during the NICI validation study will be securely transferred
to the High Performance Hub for Clinical Informatics (HPHI) at the University
of Cambridge for processing and archiving. To access the imaging data users
from each NICI site will make a written application for an account on the HPHI
system. Once granted, this account will give researchers at all the
participating sites appropriate access to the data.
The HPHI was funded by part of the MRC Clinical Research Infrastructure Award
which refreshed the imaging hardware of the Wolfson Brain Imaging Centre, and
has been developed and is managed in close collaboration with the University
High Performance Computing Group. It forms part of the BioCloud initiative for
the integration of medical and biological datasets with high performance
computing facilities. The HPHI imaging nodes are hosted in the West Cambridge
Data Centre with storage and offsite backup provided. We benefit from the
Information Security expertise of the University of Cambridge central IT team.
## 10 Ethical aspects
The requirements for sharing human data, acquired in WP3 and WP5 have not yet
been established. We intend to include our proposals for access to this data
in the ethics applications for the technical development steps in WP3 and WP4,
and for the clinical study in WP5. We will incorporate the ethically approved
procedure in a later revision of this Data Management Plan.
## 11 Other issues
The great potential for exploitation of NICI results requires a sound strategy
for knowledge management, protection and data management to achieve targeted
exploitation that returns the highest value.
| **Result** | **Further research** | **Exploitation of products and processes** | **Exploitation of services** | **Standardisation** |
| --- | --- | --- | --- | --- |
| Predictive biomarker signature for patient stratification using 7T MR imaging technology | X | X | X |  |
| Metabolic MR imaging processing pipeline/software |  | X | X | X |
| Biomarker identification and validation processes | X | X | X |  |
| Validation of MR accessories for deep tissue (phosphorus) imaging | X | X | X | X |
| Sales of 7T upgrades |  | X |  |  |
| Understanding of treatment biology | X |  | X |  |
| Validated prototype Bodycoil |  | X | X | X |
| Metabolic acquisition hardware |  | X | X |  |
## d. General strategy for knowledge management and protection
All IP generated before the start of the project will continue to belong to
the partner that brings in this IP. In addition, NICI creates new knowledge in
the form of results, procedures and possibly patents. The important aspects in
the process of valorising this knowledge are:
* The participants of the NICI consortium are committed to the protection of intellectual property rights (IPR) related to the project results. Continuous assessment of the potential for commercialisation of results obtained in the course of the research will be undertaken in collaboration with expertise in legal issues, financing, business development and, if relevant, patenting procedures. The Technology Transfer Office (TTO) affiliated with UMCU (called Holding UU) and the corresponding units at partners Philips, Siemens, GE, MRCoils and Tesla will provide specific support to the coordinator for the management of IP assets. Philips, GE and Siemens already have agreements on sharing IP, and UMCU, Philips, Tesla and MR Coils also have consortium agreements on IP. Protection of IP rights within the consortium will be assessed separately each time knowledge is generated, and is overseen by the Management Board. Measures for exploitation will be oriented towards market needs to ensure that a strategy is chosen that converts IP into the highest possible value.
* Our general strategy is to provide open access according to the ‘gold’ open access model to all data and results which may be utilized for further research activities by third parties (scientists and developers outside the consortium). Publication of sensitive data, e.g. in the context of commercial exploitation prospects for consortium members will be managed according to the ‘green’ open access route, unless there is a specific benefit of prompt dissemination, or unless disclosure is shown to be impossible during the ethical review. The coordinator will ensure that all those associated with the research, whether staff, students, fellows or visitors, are aware of, and accept, these exploitation requirements.
* The NICI Consortium Agreement, based on DESCA 2020, and approved by all partners provides the specific rules regulating intellectual property in the NICI project.
# DELIVERABLE DESCRIPTION
This document establishes the _Data Management Plan_ (DMP) for data collected,
generated and handled by the NEWTON-g consortium, during and after the project
lifecycle.
The definition of a DMP is crucial for adequately managing and preserving
project data and for making data findable, accessible, interoperable, and
re-usable. The
present document provides all the information needed for implementing the DMP
of NEWTON-g.
The application of the DMP will be periodically checked, and we expect that,
over the course of the project, this document will be reviewed and updated.
# DATA SUMMARY
## Purpose of the data collection/generation and origin of the data
NEWTON-g aims at overcoming the current limits of terrain gravimetry, which are
imposed by the high cost and by the characteristics of currently available
gravimeters. To pursue this objective, an innovative new system for gravity
measurements will be developed in the framework of the project, based on the
joint use of MEMS devices and a quantum gravimeter. After a phase of design
and production of the instrumentation (years 1 and 2), NEWTON-g involves a
field test of the newly developed “ _gravity imager_ ”, which will be carried
out at Mt. Etna volcano, during the last two years of the project.
We expect that the bulk of the data that will be produced in the framework of
the project will come from the sensors in the “ _gravity imager_ ”, after its
deployment on the summit zone of Etna. The latter will include an array of 20
to 30 “ _pixels_ ”, equipped with continuously recording MEMS gravimeters, at
elevations ranging between about 2000 and 3000m a.s.l. The array of relative
MEMS devices will be anchored to an absolute quantum gravimeter, installed
within the area covered by the array, which will also acquire data in a
continuous fashion. Most experimental data in NEWTON-g will thus come from
continuous measurements of the gravity field at each node of the array, during
the phase of field test (years 3 and 4). Besides gravity, other parameters
will be acquired at the nodes of the array, to recognize anomalies due to
sources other than the ones of interest. These complementary parameters
include ambient temperature, atmospheric pressure, rainfall and soil moisture.
Before the deployment of the “ _gravity imager_ ” in the summit zone of Etna,
experimental data are also produced in the framework of laboratory and on-site
tests, aimed at checking whether the characteristics of the devices under
development comply with the requirements, in terms of accuracy, stability and
ruggedness.
In all cases, experimental data involve the measurement of analogue signals by
means of physical detectors; the data are then digitized and stored through
suitable data acquisition software.
Besides experimental data, numerical and analytical modeling will be performed
under NEWTON-g, with the aim of simulating the physical processes behind the
development of measurable gravity changes; hence, theoretical information will
also be produced.
The large majority of the data and products of NEWTON-g will be generated by
the partners, in the framework of the project research. It is likely that
already existing data and software are re-used to undertake project activities
(for example, past gravity data collected by instruments in the monitoring
network of Etna, managed by INGV-CT). Reused data and software will mostly
belong to NEWTON-g partners from previous investigations.
Exceptionally, data from research groups out of the consortium might be used,
if the right to do so is granted by the data owners.
## Data types, formats and sizes
Most experimental and theoretical data produced under NEWTON-g will be in the
form of spreadsheet text files (e.g., *.txt; *.dat; *.csv). As stated before,
the bulk of NEWTON-g data will be produced during the phase of field test,
when the newly developed “ _gravity imager_ ” will be deployed in the summit
zone of Etna volcano. Each “ _pixel_ ” of the imager will most likely acquire
data at a rate of 1Hz, implying an average file size within 10 Mb per day, per
device. Hence, we expect that, during the 2-year deployment interval, the
amount of data generated will be on the order of 200 Gb. Considering
additional data and products generated in the framework of other project
activities, the total size of the project database should not exceed 300 Gb
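A back-of-the-envelope check of this estimate, under the stated assumptions
(the values below are taken from the figures above, using the upper end of the
planned 20- to 30-pixel array):

```python
# Rough check of the data-volume estimate: up to 30 pixels, 1 Hz sampling,
# ~10 MB/day/device, over the 2-year field test.
pixels = 30
mb_per_day_per_device = 10
days = 2 * 365
total_gb = pixels * mb_per_day_per_device * days / 1000
print(f"~{total_gb:.0f} GB of gravity data over the field test")  # ~219 GB
```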
## Data utility
Data generated within NEWTON-g are, in first instance, needed by the partners
to reach the objectives of the project. On the other hand, we foresee that
most of the collected and generated data will be reused for further research
on topics related to terrain gravimetry and volcanology.
# FAIR DATA
## Making data findable
NEWTON-g participates in the ORD Pilot under H2020 and is thus expected to
deposit generated and collected data in an open online data repository. As
reported in Deliverable 1.1 (Data Policy Guidelines), openly shared data and
products will be made available through the ZENODO repository
(https://zenodo.org), an OpenAIRE and CERN collaboration that provides secure
archiving and referability, including digital object identifiers (DOIs).
ZENODO is set up to facilitate the findability, accessibility,
interoperability, and reuse of data sets, making it especially suitable for
ORD projects. Other online FAIR-compliant data repositories will be
considered, depending on the types and formats of the data to share. To that
end, beneficiaries will refer to the Registry of Research Data Repositories
(re3data) and the Directory of Open Access Repositories (OpenDOAR) for useful
listings of repositories that might be suitable for NEWTON-g outputs. When
uploading new material to ZENODO or to another repository, the producing party
will provide all the mandatory and recommended metadata (type of data,
publication date, title, authors, description, terms for access rights, etc.).
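For illustration, a deposition carrying such metadata could be created programmatically through ZENODO's public REST API; the following is a minimal sketch, in which the access token, names, dates and license identifier are placeholders rather than project values:

```python
# Minimal sketch: creating a ZENODO deposition with the mandatory metadata
# via the public REST API (https://developers.zenodo.org). All values below
# are placeholders, not real project credentials or records.
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"

metadata = {
    "metadata": {
        "upload_type": "dataset",
        "publication_date": "2020-01-31",
        "title": "NEWTON-g example dataset",
        "creators": [{"name": "Surname, Name", "affiliation": "INGV"}],
        "description": "Gravity time series acquired during the Etna field test.",
        "access_right": "open",
        "license": "cc-by-4.0",   # license identifier as accepted by ZENODO
    }
}

r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json=metadata)
r.raise_for_status()
print("Deposition created with id:", r.json()["id"])
```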
As for data naming convention, if data are related to a published research,
the file name will include the following items:
* _first author name_
* _article reference (standard journal abbreviation, issue number, year, page)_
* _version_
If the dataset involves a set of data files, it will be shared as a single
compressed folder. In this case, the name of the folder follows the above
convention (as illustrated in the sketch below), while the names of the single
files in the folder are freely chosen by the uploading party, who makes sure
that repetitions are avoided.
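A small illustrative helper, not part of the agreed procedures, shows how such names can be assembled consistently; the author, journal reference and version below are hypothetical:

```python
# Illustrative builder for the naming convention above:
# first author name + article reference + version.
def dataset_name(first_author: str, journal_abbrev: str, issue: str,
                 year: int, pages: str, version: int, ext: str = "zip") -> str:
    reference = f"{journal_abbrev}-{issue}-{year}-{pages}"
    return f"{first_author}_{reference}_v{version}.{ext}"

# e.g. a compressed folder of data underlying a (fictitious) JGR article:
print(dataset_name("Rossi", "JGR", "125", 2021, "1-15", 1))
# -> Rossi_JGR-125-2021-1-15_v1.zip
```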
The naming convention for data not related to published research (e.g., public
presentations, posters, public deliverables), as well as further information
on metadata and on making data findable (approach towards search keywords,
approach for clear versioning, and specification of standards for metadata
creation), will be outlined in subsequent versions of the DMP, which will be
developed as the project progresses and data are identified and collected.
Each task leader will be responsible for depositing relevant data in ZENODO or
another appropriate online repository. Data will be made accessible within one
month of the publication of the corresponding peer-reviewed scientific article
(or similar output) based on those data.
Both during the embargo period and afterwards (see next section), experimental
data obtained through field measurements and laboratory tests, as well as
theoretical data from physical modelling, will be stored in the project Data
Center (an FTP server in the facilities of the coordinator institution). In
this case, metadata will be compliant with the standard that is being developed in
the framework of EPOS (European Plate Observing System; https://www.epos-
ip.org), an infrastructure that is meant to facilitate integrated use of data,
data products, and services from distributed research infrastructures for
solid Earth science in Europe. The EPOS metadata standard follows the CERIF
model, which is fully interoperable with the most common metadata formats.
EPOS is also developing an integrating semantic environment, where domains and
keywords are defined, in order to speed up metadata discovery
(http://wiki.epos-ip.org/index.php/Category:ICTArchitecture).
## Making data openly accessible
The data and products that will be generated in the framework of the project
include:
* experimental datasets resulting from laboratory and on-site tests, during the phases of development of the new devices (years 1 and 2);
* experimental datasets resulting from measurements of gravity and complementary parameters, during the phase of field test at Mt. Etna volcano (years 3 and 4);
* datasets resulting from physical modelling (synthetic data);
* publications and datasets related to published articles;
* public deliverables;
* software;
* demonstrators, videos and photographs related to the dissemination activities;
* technical manuals.
As for the definition of Users, NEWTON-g foresees:

  * Anonymous Users, for data discovery via metadata exploration;
  * Registered Users, for data downloading.

Unpublished NEWTON-g data that could be included in patents, or that could
jeopardize further publications if made immediately open, will be:

  * Embargoed for a given period after their production;
  * Made freely available afterwards.
As reported in Deliverable 1.1 (Data Policy Guidelines), all data produced
under NEWTON-g will be immediately available to the project partners, via a
dedicated Data Center, in order to ensure continuity of the research
activities and smooth cooperation within the consortium. The datasets related
to published articles, i.e., needed to validate the results presented in the
publications, as well as other information resulting from project activities
(public deliverable reports, demonstrator videos and pictures approved for
dissemination by the consortium, technical manuals for future users, etc.)
will be made publicly available through ZENODO (https://zenodo.org), or other
open repositories. Conversely, access to data and products that may hinder
future publications or patents will be restricted to the project partners for
an embargo period of five years after the date of submission to the database,
or two years after the end of the project, whichever occurs first. At the end
of the embargo period, data generated under NEWTON-g will be open and publicly
available for re-use and sharing. During the embargo period, requests from
outside the project consortium to use embargoed data will be considered on a
case-by-case basis by the owner(s) of the requested data, who exercise full
control over granting or refusing access to the data.
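A minimal sketch of this "whichever occurs first" rule follows; the submission and project end dates used here are purely illustrative, not the actual NEWTON-g dates:

```python
# Embargo rule stated above: data become open five years after submission
# to the database or two years after the project end, whichever comes first.
from datetime import date

def embargo_end(submitted: date, project_end: date) -> date:
    five_years_after_submission = submitted.replace(year=submitted.year + 5)
    two_years_after_project = project_end.replace(year=project_end.year + 2)
    return min(five_years_after_submission, two_years_after_project)

print(embargo_end(date(2020, 6, 1), date(2022, 5, 31)))  # -> 2024-05-31
```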
In order to be granted access to NEWTON-g data, external applicants will need
to provide a brief description of their research subject and of how they
intend to use the data. In addition, applicants will have to agree to the Data
Conditions of Use of NEWTON-g, reported in Section 6 (Condition of Use) of
deliverable D1.1 (Data Policy Guidelines). Both during the embargo period and
afterwards experimental dataset will be stored in the project Data Center, a
repository managed by INGV-OE that will be accessible through an FTP service
on public IP address. Users outside the consortium will be asked to provide
basic information to obtain the credentials to log into the FTP server. The
required registration to access the FTP repository will be implemented by a
form on a dedicated web page of the official project website and will involve
the following steps:
* The User fills in a web form supplying basic information (i.e., name, surname, institute, email) and accepts the data policy;
  * The request is validated by the Data Center administrator(s);
  * Credentials are generated and sent to the User;
  * The User logs into the FTP server with their own credentials.
Data, metadata and documentation stored in the Data Center will be organized
in a folder tree. To give everyone an overview of what is stored in the Data
Center of NEWTON-g, metadata will be made freely accessible through (i)
anonymous login (without registration) to the FTP Data Center and (ii) the
project website.
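As an illustration of the two access modes from a user's side, a sketch using Python's standard ftplib is given below; the host name, paths and credentials are placeholders, not the actual Data Center details:

```python
# Sketch of the two FTP access modes described above, using the standard
# library ftplib. Host, paths and credentials are placeholders.
from ftplib import FTP

HOST = "datacenter.example.org"  # placeholder for the INGV-managed server

# (i) anonymous login: browsing the freely accessible metadata only
with FTP(HOST) as ftp:
    ftp.login()                  # anonymous access
    print(ftp.nlst("/metadata"))

# (ii) registered users: credentials issued after web-form validation
with FTP(HOST) as ftp:
    ftp.login(user="granted_user", passwd="issued_password")
    with open("gravity_2021.csv", "wb") as fh:
        ftp.retrbinary("RETR /data/gravity_2021.csv", fh.write)
```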
Most data generated under NEWTON-g do not need specialized software to access
them. One exception is source code, which may need specialized software to be
executed.
## Making data interoperable
Adequate solutions will be adopted to facilitate interoperability of data
generated by NEWTON-g. Experimental and synthetic data stored in the project
Data Center will be compliant with the standards of EPOS, an infrastructure
which is especially meant to foster worldwide interoperability in Earth
sciences (https://www.epos-ip.org). Data publicly available through ZENODO
will include a description (a dedicated field is available in the upload form)
to identify contents and data collection conditions.
Standard vocabularies will be used for all shared data types, to allow inter-
disciplinary interoperability.
## Increasing data reuse
In order to permit the widest reuse of shared data, NEWTON-g will adopt
licensing models, such as Creative Commons (CC). Data can thus be shared and
re-used under terms that are flexible and legally sound.
CC licenses require that users provide attribution (BY) to the creator when
the material is used and shared. The other conditions of CC licenses depend on
how BY is combined with the other three license elements:
* NonCommercial (NC), which prohibits commercial use of the material;
* ShareAlike (SA), which requires that any adaptations of the material are released under the same license;
* NoDerivatives (ND), which does not allow the user to modify the material.
For each product that is shared for re-use, the kind of CC license to adopt
will be chosen by the partner(s) who generated it.
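For reference, combining BY with these elements yields six standard licenses; a small lookup table (using the SPDX identifiers of the CC 4.0 suite) makes the options explicit:

```python
# The six standard CC 4.0 licenses resulting from combining BY with the
# elements listed above, keyed by element set (SPDX identifiers).
CC_LICENSES = {
    ("BY",):            "CC-BY-4.0",
    ("BY", "SA"):       "CC-BY-SA-4.0",
    ("BY", "ND"):       "CC-BY-ND-4.0",
    ("BY", "NC"):       "CC-BY-NC-4.0",
    ("BY", "NC", "SA"): "CC-BY-NC-SA-4.0",
    ("BY", "NC", "ND"): "CC-BY-NC-ND-4.0",
}
print(CC_LICENSES[("BY", "NC")])  # -> CC-BY-NC-4.0
```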
Before being shared for reuse, NEWTON-g outcomes will be quality-checked. The
quality of any shared dataset is the main responsibility of the partner(s) who
produced it. Key concepts of data quality in Earth observation will be
adopted, including: completeness, accuracy, reproducibility, consistency with
other results.
Data and products from NEWTON-g will continue to be re-usable in the long term
after the end of the project.
# ALLOCATION OF RESOURCES
The ZENODO repository is free of charge. The Data Center of NEWTON-g is hosted
in the facilities of the coordinator institution (INGV-CT). The work involved
in preparing and uploading the data sets is part of the scientific work on the
project and is covered by the funding for personnel costs. We estimate it at
0.2 to 0.4 person-months per year per partner.
The cost of creating/supervising/updating the data management is also covered
by the personnel costs.
The project manager, Letizia Spampinato, and the coordinator, Daniele Carbone,
are responsible for the dissemination of the DMP within the NEWTON-g
consortium and for supervising its global implementation. However, partners
are responsible for implementing the DMP at their level: data production,
quality assessment, uploading, providing metadata, etc.
# DATA SECURITY
The data generated and collected in the frame of NEWTON-g are not sensitive,
implying that there are no risks of breaching confidentiality. Indeed, as
reported in Annex 1 of the Grant Agreement: “ _NEWTON-g will not involve
either activities or results raising security issues, or ‘EU-classified
information’ as background or results_ ”.
Data in the project Data Center will be safeguarded through the implementation
of appropriate procedures of regular backup and disaster recovery.
Furthermore, data backup on servers of partner institutions will be
encouraged.
# ETHICAL ISSUES
Not relevant for NEWTON-g data. Indeed, as reported in Annex 1 of the Grant
Agreement: “ _The activities that will be carried out in the frame of the
project do not raise ethical issues_ ”.
1473_VES4US_801338.md
| Relevant WP + task | Data set designation | Type of data and data files | Outlet and format for preservation and sharing |
|---|---|---|---|
| Task-1.1 | PUFA and pigment data | Quantification of omega 3 PUFAs and pigments in the selected species biomass (Excel and text files) | Open access peer-reviewed publication as supplementary material |
| Task-2.1 | Comparative yields and size distribution of EVs isolated and pre-concentrated by TFF | Quantification of EV proteins (BCA) and size distribution analysis by NTA (Excel and text files) | Open access peer-reviewed publication as supplementary material |
| Task-3.2 | Physical and biochemical characterisation of EVs by microfluidic diffusion sizing | Identification of vesicles by lipid and protein staining and of average vesicle sizes (Images and text files) | Open access peer-reviewed publication as supplementary material |
Table 2. Identification of key data sets relevant to year-2 of VES4US for
preservation and sharing

| Relevant WP + task | Data set designation | Type of data and data files | Outlet and format for preservation and sharing |
|---|---|---|---|
| Task-1.2 | Differential EV content of best performing strain grown under varying conditions | Quantitative and qualitative analyses of EV content in cell-free cultivation medium (Excel and text files) | Open access peer-reviewed publication as supplementary material |
| Task-4.2 | Qualitative and quantitative characterization data for functionalization of liposomal model systems according to a click chemistry approach | Quantification of functional groups (assays, Excel and Origin files), dynamic light scattering data (raw data + Origin files), zeta potential measurements, cryo-TEM images, characterization of modified antibody (SDS-PAGE and differential scanning fluorimetry), flow cytometry measurements (raw data + Origin files) | Open access peer-reviewed publication manuscript + supporting information and upload of raw data into open data repository |
| Task-5.1 | Toxicity and bioactive potential of EVs and of their engineered counterparts | Quantitative and qualitative analyses of EV effects (including cytotoxicity and genotoxicity) on several cell lines (Excel and text files) | Open access peer-reviewed publication as supplementary material |
_How will the data be collected or created ?_ Various scientific methods,
protocols and instruments will be used in VES4US. Many will be based on well-
established approaches which are detailed in the scientific literature and are
already part of in-house procedures used in the laboratories of the Principal
Investigators (PIs) within the consortium. Some experiments will use
commercially available kits which are batch-numbered and manufactured with
Quality Control checks. New frontier research will also be carried out in
VES4US, for which protocols have not yet been developed and which could be
considered further for potential licensing. A harmonised structure for data
storage (file naming, folder structure) has been agreed between the PIs. Time
will be set aside to periodically review the consistency and quality of the
data collected (review of data and representation, adequacy of calibration,
measurement repeatability), which will be documented by the PIs, Task Leaders
and Work Package Leaders within VES4US.
_To whom might it be useful ?_ The data generated throughout the project will
be used by the consortium members and may be of interest to the European
Commission services and European Agencies, EU National Bodies, the specialist
niche and broader scientific community as well as the general public. The data
produced as part of VES4US will also be of use to a variety of industrial
actors affiliated with the therapeutic medicine, cosmetic, nutraceutical and
instrumentation manufacturing sectors.
# DOCUMENTATION AND METADATA
_What documentation and metadata will accompany the data ?_ Rigorous data
documentation will be carried out so as to prevent misuse, misinterpretation
or confusion by secondary users and to facilitate understanding and reusing
data. This will include basic details such as:
* who created or contributed to the data,
  * data set title,
  * date of creation,
  * conditions under which specific data can be accessed,
  * details on the methodology used,
  * analytical and procedural information,
  * definitions of variables,
  * units of measurement,
  * assumptions made,
  * format and file type of the data,
  * whether the data are published or not (if so, a link to be added).
The information will be captured at the end of planned experiments and
recorded and stored by the PIs, Task Leaders and/or Work Package Leaders.
Metadata files will be created as ‘readme’ text file to help secondary users
with data localisation and description. The use of metadata standards will be
considered based on the quality of the results; the catalogue of disciplinary
metadata standards maintained by the internationally-recognised centre of
expertise Digital Curation Centre (DCC) will be considered to this effect.
The assignment and management of persistent identifiers (PIDs) for the most
relevant data associated with EV characterisation will be assessed in the
course of the project. A naming convention has been agreed upon for metadata,
datasets and templates: file names will consist of three parts separated by an
underscore: 1) a prefix indicating whether the file is a dataset, metadata or
a template, 2) a root composed of a short description of the file content and
the name of the file provider, and 3) a suffix indicating the date of the last
updated version. An example could look like the following:
VES4US_dataset_EVdistribution_CNR_Palermo_ABed140219.
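A small illustrative helper, assembled only to demonstrate the convention (the description, provider and initials are taken from the example above and are not prescriptive), could look as follows:

```python
# Illustrative builder for the agreed three-part naming convention:
# prefix (dataset/metadata/template), root (description + provider),
# and a suffix with contributor initials and the date of last update.
from datetime import date

def ves4us_name(kind: str, description: str, provider: str,
                initials: str, updated: date) -> str:
    assert kind in {"dataset", "metadata", "template"}
    suffix = initials + updated.strftime("%d%m%y")   # e.g. ABed140219
    return f"VES4US_{kind}_{description}_{provider}_{suffix}"

print(ves4us_name("dataset", "EVdistribution", "CNR_Palermo",
                  "ABed", date(2019, 2, 14)))
# -> VES4US_dataset_EVdistribution_CNR_Palermo_ABed140219
```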
_Where will the data/metadata be located ?_ Until data sets are made available
as supplementary material in publications or a data repository centre is
chosen, all key files pertaining to data sets or publication material will be
stored in dedicated folders on the Google Drive account shared by the VES4US
consortium. Selected information will also be made publicly available on the
VES4US website when appropriate.
# ETHICS AND LEGAL COMPLIANCE
_How will ethical issues be managed ?_ Consent or anonymisation will not be
needed as no human personal data will be used or generated. The use of human
cell lines is standard practice in many research and third-level
institutions. Moreover, the use of an invertebrate model such as C. elegans
raises few ethical concerns for the public and is highly supported by the E.U.
(Resolution on the protection of animals used for scientific purposes,
5/05/2009). The foreseen experiments on human cells and animals (mice and
rats) will be carried out according to the appropriate ethical requirements,
as detailed in part B (chapter 5) of the Grant agreement. As described in the
same section, all related documentation (including copies of authorisations
for the supply of animals and the animal experiments and copies of training
certificates/personnel licences of the staff involved in animal experiments)
are stored and will be provided upon request.
_How will copyright and Intellectual Property Rights issues be managed ?_ No
specific plan for licensing the data is anticipated as of now; this will be
further explored as part of the exploitation plan of VES4US (D.7.3 and D.7.7).
This might be revised depending on the results generated and discussions
amongst the VES4US consortium members. To that end, specific data sharing
might be postponed or partially restricted to protect proprietary information
should licensing or the filing of patents be considered.
# STORAGE AND BACK UP
_How will the data be stored and backed up during the research ?_ Electronic
files will be stored on computers, external storage devices and hard drives
but, most importantly, also placed on shared drives within the host
institution networks, which are automatically backed up periodically and
reviewed by IT services staff. Specifically, a backup of the Google Drive
shared files is periodically (once a week) created on an external hard drive
by the VES4US coordinator. Hard copies of key data will also be kept within
laboratory log-books. Relevant files suitable for sharing will also be made
available via access on the Google Drive specifically set up for the project.
_How will access and security be managed ?_ No sensitive confidential data in
terms of personal information is associated with VES4US; data privacy risk is
therefore not a key issue. Basic security management will be adhered to via
the use of password-protected computers and instruments in restricted access
rooms. Key data files and folders will also be password-protected on a case-
by-case basis. Sensitive data will be identified by PIs, Task Leaders and/or
Work Package Leaders and placed in a dedicated folder on the shared Google
Drive prior to considering their suitability for patenting or publishing.
# SELECTION AND PRESERVATION
_Which data are of long-term value and should be retained, shared, and/or
preserved ?_ No data are anticipated to be subjected to destruction for
contractual, legal or regulatory purposes. Key data generated through the Work
Packages will be selected for long-term retention. This will be informed by
the data deemed suitable for publication or which would be needed as
foundation or validation work for future spin-off experiments. It is
anticipated that data/protocols could be translated and re-used as part of
some of the teaching programmes in place in some of the partner institutions
within VES4US. All staff members involved in VES4US will be required to
prepare data and other files for sharing and preservation, so as to facilitate
data access by secondary users.
_What is the long-term preservation plan for the dataset ?_ Some of the data
will be preserved beyond the period of funding. For example, some strains
might be deposited in international culture collections or biomass/extracts
kept in freezers for future validation by third parties. Key data sets
relevant to EV characterisation will be deposited, during year-3, for long-
term storage in repository centres so that a ‘persistent identifier’ is
associated with the data for easy discoverability; the search for the most
relevant centres will be investigated during the project (e.g. Zenodo,
EVpedia, EVTRACK, ExoCarta or Vesiclepedia) using specific online tools (e.g. Re3data).
Some are free or have reasonable rates, the expenses for which would be
covered by the VES4US budget. Table 3 indicates the data sets that are
anticipated to be deposited in data repository centres.
Table 3. Identification of key data sets probably suitable for deposition in
EV-related data repository centres during the final year of VES4US
| Relevant WP + task | Data set designation | Type of data and data files | Outlet and format for preservation and sharing |
|---|---|---|---|
| Task-3.3 | Proteomic data for natural source EVs | To be determined | Open access data repository centres |
| Task-3.4 | Lipidomic data for EVs purified from natural source cultures | To be determined | Open access data repository centres |
# DATA SHARING
_How will the data be shared ?_ Potential data users will be informed about
the type of data available and their location upon dissemination (peer-
reviewed publications, international conferences, national symposia) and
outreach activities (workshop, secondary school visits, social media
platforms). This information will also be present on the relevant VES4US
webpages as well as the final theses of the postgraduate students recruited,
which will be made available via inter-university library loans. As per
general practice, published data will be made available within 6 months of
publication, and 'green/gold routes' will be given consideration. This
specific aspect will be discussed in more depth during Steering Committee
meetings. Open Access peer-reviewed manuscripts and data sets will also be
uploaded onto scientific networking platforms (e.g. ResearchGate, Zenodo) for
sharing. Requests for access to data will be handled directly during the
lifetime of the project. Thereafter, selected data files will be accessible
via specific repositories (yet to be decided upon). Conditions (e.g.
acknowledging the re-use of the data) will be imposed on potential users
depending on the type, size, complexity and sensitivity of the data sought.
Specific data sharing might also be postponed or partially restricted to
protect proprietary information should licensing or the filing of patents be
considered.
# RESPONSIBILITIES AND RESOURCES
_Who will be responsible for data management ?_ The implementation of the DMP
(data capture, metadata production, data quality, storage and backup, data
archiving and data sharing) will be the responsibility of the VES4US steering
committee, which will periodically review progress. All contributors to the
Tasks, Milestones and Deliverables of VES4US will help compile data sets and
outputs and specify the level of sharing associated with such activities.
_What resources will be required to deliver the Data Management Plan ?_
The level of resources to commit towards the full implementation of the DMP in
VES4US will be reviewed during year-2 of the project, especially when the
identification of a suitable data repository centre is discussed.
# CONCLUSION
VES4US is committed towards the training of a highly qualified workforce to
meet the future needs of the European society and to develop a knowledge-based
economy. Adherence to the FAIR principles of data findability, accessibility,
interoperability and reusability is seen as essential to sustain the continuum
of data generation and interpretation amongst EU-funded projects with finite
life-cycles. VES4US will hence make the data, publications and/or outcomes
generated throughout its duration (and after its completion) accessible to a
variety of relevant end-users such as European Agencies, National Bodies, the
specialist niche and broader scientific community as well as the general
public and industrial actors.
1477_REG GAM 2018_807089.md
REG IADP Data Management Plan
807089_DELIVERABLE_D5.4
# DEFINITIONS
**Background** means any data, know-how or information – whatever its form or
nature (tangible or intangible), including any rights such as intellectual
property rights – that: _(a)_ is held by the beneficiaries before they acceded
to the Agreement, and _(b)_ is needed to implement the action or exploit the
results.
**Results** means any (tangible or intangible) output of the action such as
data, knowledge or information – whatever its form or nature, whether it can
be protected or not – that is generated in the action, as well as any rights
attached to it, including intellectual property rights.
**Dissemination** means the public disclosure of the results by any
appropriate means (other than resulting from protecting or exploiting the
results), including by scientific publications in any medium.
**Open Access** means the practice of providing on-line access to scientific
information that is free of charge to the user and that is re-usable. In the
context of R&D, open access to 'scientific information' refers to two main
categories:
1. Peer-reviewed scientific publications (primarily research articles published in academic journals)
2. Scientific research data: data underlying publications and/or other data (such as curated but unpublished datasets or raw data).
REG IADP DMP retains REG GAM 2018 n.807089 Beneficiaries’ obligations only,
i.e. Art.29.2 “Open access to scientific publications” - _Each beneficiary
must ensure open access (free of charge online access for any user) to all
peer-reviewed scientific publications relating to its results_ .
**Exploitation** means the use of results in further research activities other
than those covered by the action concerned, or in developing, creating and
marketing a product or process, or in creating and providing a service, or in
standardisation activities.
**Communication** is about informing the general public about the existence of
the program and its main outcomes.
**Peer-reviewed Publication** means publications that have been evaluated by
other scholars.
# Preamble
According to H2020 guidelines (reference document “H2020 Programme -
Guidelines to the Rules on Open Access to Scientific Publications and Open
Access to Research Data in _Horizon 2020_ ”), the REG IADP GAM Coordinator
provides the Consortium with a Data Management Plan [REG IADP DMP] for the
years covered by the Work Program, explaining the rules on open access to
scientific peer-reviewed publications that Beneficiaries have to follow.
The Data Management Plan is integrated in the Dissemination one.
# Executive Summary
Open access does not imply that the Beneficiaries are obliged to publish their
results; it only sets certain requirements that must be fulfilled if they do
decide to publish them, and Data Management Plan (DMP) describes the data
management life cycle for the published data.
Projects that opt out are still encouraged to submit a DMP on a voluntary
basis: the case for the REG IADP Project.
REG IADP Consortium DMP rationale (i.e. _Why have open access to publications
in CS2?_ ).
Modern research builds on extensive scientific dialogue and advances by
improving earlier work. The Europe 2020 strategy for a smart, sustainable and
inclusive economy underlines the central role of knowledge and innovation in
generating growth.
Broader access to scientific publications therefore helps to:
* build on previous research results (improved quality of results)
* encourage collaboration and avoid duplication of effort (greater efficiency)
* speed up innovation (faster progress to market means faster growth)
* involve citizens and society (improved transparency of the scientific process).
This is why the EU wants to improve access to scientific information and to
boost the benefits of public investment in research funded under Horizon 2020.
The Commission considers that there should be no need to pay for information
funded from the public purse each time it is accessed or used. Moreover, it
should benefit European businesses and the public to the full.
This means making publicly-funded scientific information available online, at
no extra cost, to European researchers, innovative industries and the public,
while ensuring that it is preserved in the long term.
Under Horizon 2020, the legal basis for open access is laid down in the
Framework Programme and its Rules for Participation. These principles are
translated into specific requirements in the Model Grant Agreement and in the
Horizon 2020 Work Programmes.
REG IADP Consortium aims to ensure open access (free of charge, online access
for any user) to peer-reviewed scientific publications relating to its
results.
In particular, as soon as possible and at the latest on publication, to
deposit a machine-readable electronic copy of the published version or final
peer-reviewed manuscript accepted for publication in a repository for
scientific publications (i.e. an online archive).
Moreover, to ensure open access to the deposited publication and to the
bibliographic metadata that identify it.
Under these definitions, 'access' includes not only basic elements - the right
to read, download and print – but also the right to copy, distribute, search,
link, crawl and mine.
The two main routes to open access are:
**Self-archiving / 'green' open access** – the author, or a representative,
archives (deposits) the published article or the final peer-reviewed
manuscript in an online repository before, at the same time as, or after
publication. Some publishers request that open access be granted only after an
embargo period has elapsed.
**Open access publishing / 'gold' open access** \- an article is immediately
published in open access mode. In this model, the payment of publication costs
is shifted away from subscribing readers.
Costs related to open access are eligible as part of the grant, if they fulfil
the general eligibility conditions specified in the Grant Agreement.
REG IADP Data Management Plan has taken shape by the Horizon 2020 FAIR
(Findable, Accessible, Interoperable and Re-usable) Data Management Plan
template, being inspired by FAIR as a general concept.
The DMP is intended to be a living document in which information can be made
available on a finer level of granularity, through updates as the
implementation of the Project progresses and when significant changes occur:
REG IADP DMP will be updated on yearly base.
To avoid any misconceptions about REG IADP open access to peer-reviewed
scientific publications, the logic flow-chart showing open access to
scientific publications and research data in the wider context of
dissemination and exploitation has to be retained.
# Applicable documents
REG IADP Consortium applicable documents are:
1478_GeoTwinn_809943.md
# Introduction
The major aim of the GeoTwinn project is to significantly strengthen Croatian
Geological Survey (HGI-CGS)’s research potential and capability. HGI-CGS will
benefit from a range of research tools, technologies, software and methods at
the disposal of GEUS and BGS-UKRI. The project will also develop active
collaboration and partnership between people; involving talented scientists
within HGI-CGS and highly productive scientists within GEUS and BGS-UKRI, who
in a number of cases are world-leading experts in their field. Two-way
scientific exchanges and training programs will support HGI-CGS to strengthen
research in four important geoscience subject areas, which are at the core of
most world-leading geological surveys and geological research institutes:
1. 3D geological surveying and modelling;
2. Advanced groundwater flow and contaminant transport modelling;
3. Geological hazards;
4. Geothermal energy.
GeoTwinn, being a Twinning project under the Coordination and Support Action
programme, **will not collect new data** . The training programs will mainly
use existing data already at the disposal of HGI-CGS. Data issued by other
providers (INA, City of Zagreb, etc.) will be used under the conditions
requested by the owners. All results from the GeoTwinn project will be
integrated in the Digital Academic Archives and Repositories ( _DABAR_ ),
under an open access licence.
As defined in _Guidelines on FAIR Data Management in Horizon 2020_ ,
GeoTwinn will ensure that all data produced through the project actions are
findable, accessible, interoperable and reusable (FAIR).
This document is considered to be a living document; therefore it will be
continuously updated during the project.
# Data summary
What is the purpose of the data collection/generation and its relation to the
objectives of the project?
The GeoTwinn project aims to deliver a coordinated and targeted programme of
training, knowledge exchange and collaboration, involving world-leading
experts, to embed cutting-edge research techniques, tools and knowledge within
HGI-CGS, leading to a significant and measurable improvement in their
geoscientific research capability. The data collected for the project will
enable learning of data storage and manipulation, which will enable building
of complex geological models. Namely, the training sessions are based on real
data, enabling development of representative geological models.
What types and formats of data will the project generate/collect?
Data of diverse types will be collected and produced at all stages of the
project. Workflows for data manipulation and storage will be developed
according to the planned deliverables. Types, formats, origin and estimated
size of data are listed below ( **Table 1** ).
Most of the data used for training will mainly originate from HGI-CGS.
**Table 1** : Types and formats, origin of the data and estimated size of data
used/produced in GeoTwinn per WP (*common typical geological import formats,
but not limited to; **common typical geological export + custom formats, but
not limited to).
| WP | Software used | Input formats* | Output formats** | Origin of the data | Size of the data |
|---|---|---|---|---|---|
| 1.1 | GeoVisionary; SIGMA; SKUA-GOCAD; NERC-BGS Groundhog Desktop; Midland Valley MOVE | *.mxd, *.mve, *.sess, *.xml, *.xls, *.dxf, *.dat, *.txt, *.csv, *.xyz, *.tsv, *.asc, *.cps, *.cps3, *.grd, *.flt, *.dem, *.tif, *.tiff, *.jpg, *.jp2, *.gif, *.img, *.png, *.adf, *.dt0, *.bmp, *.bt, *.ntf, *.ter, *.hgt, *.dim, *.dt, *.ecw, *.ers, *.fits, *.lan, *.gis, *.nat, *.vrt, *.bag, *.blx, *.xlb, *.grb, *.kap, *.mpr, *.xpm, *.doq, *.shp, *.kml, *.tab, *.e00, *.map, *.ecl, *.GRDECL, *.DATA, *.GRID, *.EGRID, *.int, *.off, *.db, *.ihf, *.idf, *.gp, *.pl, *.mx, *.ts, *.vs, *.so, *.wl, *.gpx, *.p701, *.ext, *.obj, *.earth, *.bin, *.segp, *.segy, *.sgy, *.seg, *.sg, *.vrml, *.wrl, *.iv, *.3ds, *.zmap, *.3di, *.lin, *.out, *.krg | *.mxd, *.mve, *.sess, *.dat, *.txt, *.csv, *.asc, *.tif, *.tiff, *.jpg, *.jp2, *.gif, *.img, *.png; grids and mesh surfaces, seismic and well formats, 3D pdf, etc. | HGI-CGS, City of Zagreb, Croatian Hydrocarbon Agency | Up to 10 GB, possibly more |
| 1.2 | Landmark DecisionSpace + OpenWorks | (shared with WP 1.1) | (shared with WP 1.1) | (shared with WP 1.1) | (shared with WP 1.1) |
| 2.1 | ArcGIS; GMS | raster, shp, dxf, pdf, xls, csv, dat, txt, asc, dem, tif, tiff, jpg | raster, shp, mxd, pdf, tif, tiff, jpg, txt, asc, dat, csv, xls, gpr | HGI-CGS, DHMZ, Hrvatske vode, Zagrebački holding, Vodovod i odvodnja (VIO), Zaprešić, VG Vodoopskrba, ZGOS | <100 GB |
| 2.2 | R; ArcGIS | txt, csv, xls, raster, shp | txt, csv, xls, pdf, raster, shp | HGI-CGS, freely available data | 1-10 GB |
| 3 | Socet Set; GeoVisionary; ArcGIS | raster, shp | raster, shp, pdf | HGI-CGS, safEarth, freely available data | <100 GB |
| 3 | R; Matlab | txt, csv, xls | txt, csv, xls, pdf | HGI-CGS, safEarth, freely available data | <100 GB |
| 4 | PhreeQC | Excel (.xlsx) | Excel (.xlsx) | DARLINGe project, HGI-CGS | <100 GB |
| 4 | FEFLOW; ArcGIS | raster, shp, dxf, pdf, xls, csv, dat, txt, asc, dem, tif, tiff, jpg | fem, dac, mxd, raster, shp, pdf | DARLINGe project, HGI-CGS | <100 GB |
Will you re-use any existing data and how?
Being a Coordination and Support Action project, GeoTwinn training procedures
will rely **exclusively on existing or previously generated data** .
Knowledge and data produced within this project is also expected to be the
basis for further research activities beyond this project in the host
organisation. The continued collaboration between the partner organisations
and their wider international research networks is also expected.
What is the origin of the data?
Origin of data is shown in the **Table 1** .
What is the expected size of the data?
The expected size of data for each WP is shown in the **Table 1** .
To whom might it be useful ('data utility')?
The project outputs mainly target project partners, scientific and other
communities, including universities and research institutes.
# FAIR data
## Making data findable, including provisions for metadata
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
Metadata for GeoTwinn outputs will be made available through the HGI-CGS
repository within the DABAR (Digital Academic Archives and Repositories)
system, which can be found at _https://repozitorij.hgi-cgs.hr_ .
_DABAR_ provides technological solutions that facilitate the maintenance of
higher education and science institutions' digital assets. DABAR enables open
access publishing, increases the visibility of the content and the institution
itself, offers reliable long-term data storage, and implements and promotes
standard data exchange protocols (OAI-PMH).
Regarding the standard identification mechanism, every published object will
have a unique identifier, called a URN:NBN number.
All results produced in this project will be openly accessible for further
usage.
What naming conventions do you follow?
All repositories in DABAR have implemented an OAI-PMH interface. Supported
schemes are Dublin Core (DC) and Metadata Object Description Schema (MODS).
Will search keywords be provided that optimize possibilities for re-use?
Search key-words will be taken from thesauri of HGI-CGS repository and will be
provided for optimising the possibility of re-usage of the data.
Do you provide clear version numbers?
GeoTwinn will provide clear version numbers for all published documents.
What metadata will be created? In case metadata standards do not exist in your
discipline, please outline what type of metadata will be created and how.
To ensure that objects are searchable, every object placed in the repository
has to be described using a prescribed set of metadata during upload. All
repositories in DABAR have implemented an OAI-PMH interface. The supported
schemes are Dublin Core (DC) and Metadata Object Description Schema (MODS),
which provide standards for our discipline.
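For illustration, Dublin Core records exposed through an OAI-PMH interface can be harvested with a few lines of standard tooling; the sketch below assumes an OAI-PMH endpoint path for the HGI-CGS repository, which is not a confirmed DABAR URL:

```python
# Sketch: harvesting Dublin Core records over OAI-PMH (ListRecords verb).
import requests
import xml.etree.ElementTree as ET

ENDPOINT = "https://repozitorij.hgi-cgs.hr/oai"  # assumed endpoint path

resp = requests.get(ENDPOINT, params={"verb": "ListRecords",
                                      "metadataPrefix": "oai_dc"})
resp.raise_for_status()
root = ET.fromstring(resp.content)
# Dublin Core elements live in the http://purl.org/dc/elements/1.1/ namespace
for title in root.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)
```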
## Making data openly accessible
Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.
All results of the analysis produced during this project will be made openly
available through the HGI-CGS repository which can be found at
_repozitorij.hgi-cgs.hr,_ or at the HGI-CGS webpage at _www.hgi-cgs.hr_ .
Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out.
At this moment there is no reason for opting out of open access regarding
publishing any of the results generated by GeoTwinn project.
How will the data be made accessible (e.g. by deposition in a repository)?
All data will be made accessible by deposition in an open access repository
stated above.
What methods or software tools are needed to access the data?
No special methods or software tools will be needed for accessing the data
because the data will be placed on open access repository in open access
formats (xsd, xslt, etc.).
Is documentation about the software needed to access the data included?
Reports containing data produced by this project will provide list of software
needed for accessing the provided data.
Is it possible to include the relevant software (e.g. in open source code)?
At this point, it will not be possible to release the relevant software as
open source code. However, all results will be published in open formats (xsd,
xslt, etc.) in an open access repository.
Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.
Produced data and documentation regarding that data, will be deposited in a
HGI-CGS’s
repository that can be found at _https://repozitorij.hgi-cgs.hr_ and which is
openly accessible.
Have you explored appropriate arrangements with the identified repository?
There is a possibility for our official HGI-CGS repository to be listed in
OpenDOAR. OpenDOAR (Directory of Open Access Repositories) is a controlled
directory of academic repositories with open access content. It is available
at _http://www.opendoar.org/_ and encompasses various types of repositories.
If there are restrictions on use, how will access be provided?
Open access will be provided for all data produced through GeoTwinn project.
Is there a need for a data access committee?
At this point, we do not believe data access committee will be needed.
Are there well described conditions for access (i.e. a machine readable
license)?
Conditions for access are well described.
## Making data interoperable
Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
As already mentioned, our repository in DABAR has implemented an OAI-PMH
interface which supports the following schemes: Dublin Core (DC) and Metadata
Object Description Schema (MODS), which will be used for describing general
data, i.e. title, author, co-authors, owner of the data, etc.
GeoTwinn also considers using Open Geospatial Consortium (OGC) open standards
or INSPIRE (INfrastructure for SPatial InfoRmation in Europe) standards for
specific, geospatial data. _OGC_ is an international, not for profit
organization, committed to making quality open standards for the global
geospatial community.
_INSPIRE_ is dealing with metadata in the domain of European Spatial Data
Infrastructure. The thematic, semantic background of INSPIRE is environmental
protection.
INSPIRE has not yet made the new version of the HRN ISO 19115:2014 norm
mandatory, but this can be expected in a short period of time (in 2019).
In the way described above, we will assure all data produced by GeoTwinn
project will be interoperable.
What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
GeoTwinn will use Open Geospatial Consortium (OGC) open standards, as well as
OAI-PMH interface which supports DC and MODS schemes, on DABAR.
Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?
In order to provide inter-disciplinary interoperability, we will use standard
vocabularies for all data types in our data sets.
In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?
If the use of uncommon or project-specific ontologies or vocabularies proves
unavoidable, we will provide mappings to more commonly used ontologies.
## Increase data re-use (through clarifying licences)
How will the data be licensed to permit the widest re-use possible?
The use of the DABAR is free of charge for any use, including public, private
and commercial use, according to the license.
All results produced in GeoTwinn will have Creative Commons Attribution (CC
BY) licence which will allow the widest re-use possible.
Any and all Intellectual Property Rights (IPR) in the geological data provided
by the DABAR are and shall remain the exclusive property of their respective
right holders.
When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.
Not applicable: the data will be re-usable from the moment they are published.
Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why.
All data produced by GeoTwinn project will be usable by any third parties,
especially after the end of the project.
How long is it intended that the data remains re-usable?
There is no fixed period of time for which the data are intended to remain
re-usable. Once published in open access in the repository, they are planned
to remain publicly available.
Are data quality assurance processes described?
Not applicable. At this moment, DABAR doesn’t have descriptions regarding
quality assurance processes.
# Allocation of resources
What are the costs for making data FAIR in your project?
At this moment, we cannot estimate the total cost of making data FAIR in our
project.
How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions).
The data will be published in open access free of charge, where possible. In
other cases, authors of scientific publications will use the green or gold
open access option. The costs will be partially covered from the budget of the
project.
Who will be responsible for data management in your project?
The lead partner (HGI-CGS) and its representative will be responsible for data
management. The management decisions should be approved by Coordination Board
of the project.
Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?
The use of the DABAR is free of charge for any use, including public, private
and commercial use, according to the license. All data produced in GeoTwinn
will have Creative Commons Attribution (CC BY) licence which will allow the
widest re-use possible.
# Data security
What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)? Is the data safely
stored in certified repositories for long term preservation and curation?
The data will be safely stored in a certified repository (
_https://repozitorij.hgi-cgs.hr_ ) for long-term preservation. As GeoTwinn
does not use, collect or preserve any sensitive data, no sensitive data will
be stored in the repository.
# Ethical aspects
Are there any ethical or legal issues that can have an impact on data sharing?
These can also be discussed in the context of the ethics review. If relevant,
include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).
There aren’t any ethical or legal issues that could have any impact on data
sharing at this moment. For more information, we have prepared a document
regarding ethic issues, which is also one of the Deliverables for this project
(D7.1 POPD Requirement No. 1) and will be uploaded by the end of March 2019.
Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?
Statements regarding General Data Protection Regulation (GDPR) include
informed consent for data sharing and long term preservation.
# Other issues
Do you make use of other national/funder/sectorial/departmental procedures for
data management? If yes, which ones?
No other national/funder/sectorial/departmental procedures for data management
have been used for purposes of creating this DMP.
1480_Klimator-RSI_811562.md
# INTRODUCTION
The purpose of this document is to describe the improvements and functions
that have been made for the RSI website and to provide the reader with
information on how the webpage, www.roadstatus.info, is structured. The
website was developed in the start-up stages of the project, and the webpage
will be continually updated to include any key technical update throughout the
phase 2 project lifetime, while maintaining the intellectual rights to any
technical developments.
The document is organized and structured to guide and provide the reader with
knowledge to understand the functions of the website.
To visit the webpage, use the following address:
http://www.roadstatus.info/en/home
**1.1 BACKGROUND**
Klimator has developed the innovative Road Status Information (RSI) technology
which uses several live data sources to reliably and accurately monitor and
predict road conditions within the winter seasons.
KLIMATOR-RSI uses real-time data from connected cars (supplied by our partner,
NIRA Dynamics) together with advanced meteorological modelling to increase the
resolution and accuracy of road status data and forecasts. This proprietary
technology allows a whole network of roads to be monitored continuously and
provide specific information on each small segment of road to support road
treatment decision-making. Another use we have been working on for our
technology, is by supplying real-time friction maps for public road users via
NIRA Dynamics for OEM car manufacturers, making the use of roads safer and
providing another revenue stream.
**1.2 PURPOSE OF THE DOCUMENT**
The purpose of this document is to describe in detail the improvements and
functions that have been made for the website and to provide the reader with
information on how the webpage is structured.
# THE STRUCTURE OF THE WEBSITE
In the chapter below the structure of the website is described in three
different categories: front-page, menu and features.
**2.1 FRONT-PAGE**
The front-page of the website contains the most important information. It is
considered highly important that a person visiting the website finds it
user-friendly, meaning that it is easy to find the requested information and
that the user understands the purpose of the site.
## 2.1.1 THE FRONT-PAGE LAYOUT
The layout of the front page is central, since it creates the first
impression the visitor gets, which means that it must be easy to understand
where different types of information are stored. The front-page format is also
important, as it should evoke a feeling in the visitor; in this case, it is
significant to create a feeling of high levels of knowledge, robustness,
innovation and winter road conditions.
To create movement and to convey the impressions above, Klimator has chosen to
work with a “living” front page. This creates movement and the possibility to
present the most important features of RSI without a cluttered feeling. As can
be seen in Figures 1, 2 and 3, all are equipped with arrows which give the
visitor the possibility to control which of the front pages they want to see.
The features that have been selected are an introduction to RSI (see Figure
1), easy access to a demo account (see Figure 2) and a link to our newsletter
and information about where we will be in the near future (see Figure 3).
**Figure 1 A quick-link to an introduction of RSI.**
**Figure 2 A quick-link to where you can sign up for a demo account.**
**Figure 3 A quick-link to where you can sign up for our newsletter.**
Except for the parts described above, the front-page also contains a
non-moving part. The non-moving part can be divided into three groups: the
quick-links, collaborators and the menu bar, which can be seen in Figure 4
below. The two first mentioned are presented in Sections 2.1.2 and 2.1.3. The
menu is presented under 2.2 Menu.
**Figure 4 The non-moving part of the front page. The red circles represent
the three**
**groups of information.**
## 2.1.2 CLEAR EXPOSURE OF HORIZON 2020
It is of great importance to show the viewer that RSI has received funding
from the European Union. Figure 5 below shows how this exposure has been
designed.
**Figure 5. The exposer of Horizon 2020 on the front-page.**
## 2.1.3 QUICK-LINKS
To make it easier for the viewer to find the needed information, quick-links
to products and conferences have been strategically placed on the front page.
**Figure 6 The quick-links that the viewer can find on the frontpage.**
**2.2 MENU**
At the top of the website a menu has been placed. The menu contains important
information that can partly also be found in the quick-links on the front
page. The purpose of the menu is to give the viewer an overview of the content
of the webpage. In Figure 7 below, the menu bar is presented. The colors have
been carefully chosen to match the logo of RSI.
**Figure 7 The menu-bar that is placed at the top of the website.**
## 2.2.1 EXHIBITIONS
By exposing the different exhibitions that Klimator will attend, and by giving
the viewer the possibility to subscribe to our newsletter, Klimator can
provide the viewer with information about the product both through the web and
in person. It is of great importance that information is easy to access.
## 2.2.2 SOFTWARE
RSI as a product is divided into three packages: RSI Basic, RSI Standard and
RSI Pro. To get more information about these, the viewer can use the submenu.
The different packages are described in an educational way to give the viewer
a foundation of information about RSI. The submenu is shown in Figure 8.
**Figure 8 The submenu that provides the viewer the possibility to learn more
about RSI**
**as a product and what the different packages contains.**
## 2.2.3 ABOUT
Under About, the viewer can get some information about Klimator and Nira
Dynamics as a company and RSI as a product.
### 2.2.4 DEMO ACCOUNT
It is of great importance to make the viewer understand what a revolutionary
tool RSI is. We therefore provide anyone who has an interest in RSI with a
free trial account. It is very important that it is easy to subscribe to and
access a demo account. After the viewer has subscribed, a demo user key will
be sent by email.
**Figure 9 In this Figure a part of how to get a demo account is shown.**
### 2.2.5 NEWSROOM
By having a newsroom, the viewer can see the latest updates on what Klimator
does. This is a log of exhibitions, collaborations and other major happenings
for RSI. In Figure 10 below the newsroom is shown. It must be continuously
updated.
**Figure 10 The newsroom with the latest updates.**
### 2.2.6 CONTACTS
To reach out to Klimator, the different co-workers and their contact
information are presented under Contacts. It is of great importance that the
viewer can easily reach out to Klimator with questions about RSI, the company,
the different packages and so on.
**2.3 FEATURES**
The development of website features is an ongoing process. So far, the main
feature is the language feature: at the moment, the website is available in
Swedish and in English.
**2.4 DEVELOPMENT AREAS**
A couple of development areas for the webpage are still ongoing. Klimator
wants to extend the language support to Finnish, Norwegian, French, Spanish,
Lithuanian and German.

On the webpage we will also include information on how the data can be used
for other applications.
# DATA MANAGEMENT PLAN
Klimator has formed a DMP (Data Management Plan). The main reason for the DMP
is to create a formal document that outlines how data are to be handled. The
purpose of the DMP is to ensure that data are well managed in the present and
prepared for preservation in the future.
**3.1 DATA STRATEGY**
The overarching and driving structure for managing data is a data strategy. A
data strategy is essentially a roadmap that identifies how Klimator will
collect, handle, manage and store content. While a data strategy can vary in
its content, a simple strategy explaining each component is vital.

Klimator's roadmap for the data strategy contains the following parts: how to
collect, handle, manage and store content. KLIMATOR-RSI works in a three-step
process:
1. Data collection from multiple sources
2. Data processing in our climate model & interpreter
3. Data representation to the clients via a graphical user interface
The main innovation in our technology is the unique algorithms we have
developed to take the raw data and transform it into a user-friendly format
which has alerts to specifically show which segments of a road network need
treatment. Our RSI technology takes the data from external sources and uses
our climate model to form initial raw information for road surface monitoring
and forecasting and then it is sent through the interpreter model to produce
the final geographically located road surface model. This information is then
transposed using our unique user interface to give our clients a simple but
effective method for monitoring road surface conditions.
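To make the three-step process concrete, here is a minimal Python sketch of the collect-process-represent flow described above. All names (the dummy sources, the friction threshold, the `RoadSegment` type) are illustrative assumptions and do not reflect Klimator's actual climate model or interpreter.

```python
# Illustrative sketch of the RSI three-step data flow; every name here is
# hypothetical and does not reflect Klimator's internal code.
from dataclasses import dataclass

@dataclass
class RoadSegment:
    segment_id: str
    friction: float        # processed friction estimate in [0, 1]
    needs_treatment: bool  # alert flag shown in the user interface

def collect(sources):
    """Step 1: gather raw observations from multiple external sources."""
    raw = []
    for source in sources:
        raw.extend(source())  # each source yields (segment_id, value) tuples
    return raw

def process(raw, friction_threshold=0.3):
    """Step 2: stand-in for the climate model and interpreter."""
    segments = []
    for segment_id, value in raw:
        friction = max(0.0, min(1.0, value))  # clamp to a plausible range
        segments.append(RoadSegment(segment_id, friction,
                                    needs_treatment=friction < friction_threshold))
    return segments

def present(segments):
    """Step 3: reduce model output to what the client interface shows."""
    return [s.segment_id for s in segments if s.needs_treatment]

# Two dummy sources standing in for e.g. B&M, NIRA and RWIS feeds.
sources = [lambda: [("E6-001", 0.8), ("E6-002", 0.2)],
           lambda: [("E45-010", 0.25)]]
print(present(process(collect(sources))))  # ['E6-002', 'E45-010']
```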
Klimator has implemented a barebones data strategy, which outlines who has
access to member data, which computers can be used for downloading reports and
how we use our mobile devices to access data. Klimator also has restrictions
on permissions for anyone who can log into our site from the backend.
Klimator is careful with its data and knows exactly who is interacting with it
and how.
**3.2 DATA COLLECTION**
The data collection is defined by answering the two questions, “What data do
we want to collect?” and “How will we collect that data?”.
As an answer to the first question, the data that we want to collect is
presented in Table 1 below. The second question of how we will collect it can
be answered as follows: all data will be collected in digital format from B&M,
Nira Dynamics, local weather suppliers and RWIS.
**Table 1 The different parts of Klimator's roadmap.**
**3.3 DATA STORAGE**
The data is stored on a trusted platform that is expandable and scalable based
on the company's current and future needs. Klimator uses data storage spaces,
such as Dropbox and Google Drive, which ensure a high security level.
**3.4 DATA SHARING**
The use of raw data is only available in-house; only processed data is
available to third parties. Processed data can be found on the web page, where
the user must sign up and log in with a unique user key. We also provide a
newsletter to the market, which follows the GDPR regulation.
**3.5 DMP COMPONENTS**
To develop a well-working data management plan, Klimator has used the template
_TEMPLATE HORIZON 2020 DATA MANAGEMENT PLAN (DMP)_. This template includes the
areas: data summary, FAIR data, allocation of resources, data security,
ethical aspects and others.
## 3.5.1 DATA SUMMARY
The purpose of the data collection/generation is to enable and generate a new
type of product for different market needs and users in road climate, such as
winter road maintenance, the insurance industry, transportation and logistics,
the automotive industry and the media.
**Figure 11. KLIMATOR climate and interpreter model processing chain.**
Data is absolutely necessary to generate the product RSI. The combination of
the different data sources is what makes RSI a unique product. The specific
types and formats of data generated/collected are presented in Table 2 below.
There is no existing data that is being re-used.

**Table 2 The different parts of Klimator's roadmap.**
The origin of the data can be traced to a few companies: B&M, Nira Dynamics,
local weather suppliers and RWIS. The expected size of the data is estimated
at 35.52 GB. The data is needed to create:
* The Frictionmap
* Nowcast
* Forecast
This means that the data will be useful for the users of the product RSI. The
users can be defined as winter road maintenance, the insurance industry,
transportation and logistics, the automotive industry and the media.
## 3.5.2 FAIR DATA, ALLOCATION OF RESOURCES
Since this is not a research project, the data constitutes an SME's
commercially protected IP. The data is our trade secret and core idea, and is
therefore not available to third parties. However, processed data is available
through our website, which is described under Sections 1 and 2 of this report;
see also www.roadstatus.info/en.
## 3.5.3 DATA SECURITY
The data is stored on a trusted platform that is expandable and scalable based
on the company's current and future needs. Klimator uses data storage spaces,
such as Dropbox and Google Drive, which ensure a high security level. The
security can be divided into three groups: Documentation, Forecast and
Vehicle.
* Documentation is stored on Dropbox and includes texts, Word files, code, Excel files, pictures, GIS data, presentations and reports.
* Forecast output is stored on Amazon S3 and includes the output of the system.
* Vehicle data is stored on Amazon S3. The vehicle data contains all data that is connected to vehicles, friction data and measurement data.
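As an illustration of how one of these groups could be written to Amazon S3 with encryption at rest, the sketch below uses the boto3 library. The bucket name, key layout and file path are invented examples, not Klimator's actual configuration.

```python
# Hypothetical sketch of pushing forecast output to Amazon S3 with
# server-side encryption; bucket and key names are invented examples.
import boto3

s3 = boto3.client("s3")  # credentials resolved from the standard AWS chain

def upload_forecast(local_path, run_date):
    """Upload one forecast output file under a date-based key."""
    key = f"forecast/{run_date}/{local_path.split('/')[-1]}"
    s3.upload_file(
        local_path,
        "klimator-rsi-forecast",  # hypothetical bucket name
        key,
        ExtraArgs={"ServerSideEncryption": "AES256"},  # encrypt at rest
    )
    return key

# upload_forecast("out/forecast_2019-11-05.csv", "2019-11-05")
```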
## 3.5.4 DATA HANDLING
### 3.5.4.1 GENERIC DATA FROM EXTERNAL PROVIDERS
The friction data from vehicles is received from NIRA in an anonymous format.
The data is received and stored as statistics per road segment. No special
protection is needed.
### 3.5.4.2 PRIVATE DATA ACQUIRED FROM CAR USERS
As mentioned earlier, KLIMATOR-RSI uses real-time data from connected cars,
which is supplied directly by our partner, NIRA Dynamics. NIRA Dynamics
handles all data from the cars and has appointed a set of lawyers to ensure
that the acquisition, management and disposal of these data are handled
according to the prevailing rules. The data is therefore anonymous to
Klimator-RSI and cannot be traced back to a private person. Once the anonymous
data is used in the RSI project, it is stored on a trusted platform that is
expandable and scalable based on the company's current and future needs. For
data storage, Klimator uses modelling servers with extra backup on Amazon
Simple Storage Service (S3), which is designed to deliver 99.999999999%
durability ( _https://aws.amazon.com/s3/_ ). The Klimator RSI project is
working towards a Docker solution where the modelling servers can be recreated
and restarted on-the-fly whenever there is need for extra resources or if a
server instance fails.
For documents, Klimator uses storage space such as Dropbox and Google Drive,
which ensure a high security level.
The disposal of all data is handled according to the General Data Protection
Regulation.
## 3.5.5 OTHER
Klimator is following the laws and restrictions of the EU General Data
Protection Regulation (GDPR).
# CONCLUSIONS
The RSI website has been developed in the early stages of the present project.
The website will be updated frequently to include any key technical updates
throughout the phase 2 project lifetime, while maintaining the intellectual
rights to any technical developments. By creating a Data Management Plan, we
can ensure a correct use of the data and knowledge generated by KLIMATOR with
respect to the RSI project.
# BIBLIOGRAPHY
1. Klimator H2020 project Grant Agreement, 2018.
2. Guidelines on FAIR Data Management in Horizon 2020, v2.0, 15.02.2018.
1481_PAINLESS_812991.md
# Introduction
## Purpose of the data collection/generation and its relation to the objectives
The purpose of data collection in PAINLESS is to aid the design of new
telecommunications techniques and technologies, and to examine and verify
their usefulness. The purpose of data generation is likewise to verify the
usefulness of the techniques and the overall success of the project, as well
as to disseminate and demonstrate the outcomes of the project.
## Types and formats of data
PAINLESS may collect a) wireless network traffic data, to aid the development
of telecommunication techniques, b) measurement data from our experiments, to
evaluate and refine our techniques. There is no personal data to be collected
throughout the progress of the program. Any data collection within the
proposed research may involve test data and measurements for the training and
development of communication and energy management algorithms. Within the
training program, data collection may involve attendance statistics and
attendance sheets, cleared by the attendees as per the GDPR. Generated data
may involve a) test/traffic data and results, b) software that implements the
developed algorithms, c) measurement and experimental results, d) papers and
reports of our outcomes. These may be in the formats of raw data sets, new
software and codes, or documents.
## Existing data re-use
Test, measurement or traffic data may be used to develop and refine our
wireless communication and energy management techniques.
## Origin of the data
Traffic and measurement data may already reside with the PAINLESS partners
from previous research or be requested from external parties in the course of
the project. All availability of existing data is subject to the IP
regulations detailed in the CA, and for external partners, subject to IP
agreements if needed.
## Size of the data
Up to a few Gbits of data.
## Data utility
The generated data will be useful to the research community for further
development, to industry for commercialization and standardization, and
indirectly to the public that will benefit from the PAINLESS technologies.
# Activities carried out and results
## Making data findable, including provisions for metadata
* The project publications will be published in IEEE, IET or other journals and conferences, all of which have unique identifiers such as Digital Object Identifiers. Other data such as measurement results and codes to be made accessible (subject to IP restrictions) will have an associated metadata document (stored as a .txt file) which describes key aspects of the data.
* Event listings are stored in a central spreadsheet and individual events are assigned a unique identifier of the format XXX_YYYYMMDD where XXX is the partner short name (as defined in the definitions and acronyms table) and YYYYMMDD is the start date of the event.
* Project deliverables are assigned a unique identifier PAINLESS-DX.XYYYYMMDD. All files made publicly available should reference PAINLESS in their name, and we recommend the convention PAINLESS-xxxxxxxx where xxxxxxx is a meaningful short description. Photographs and audio/visual recordings should be named PAINLESS-XXX-YYYYMMDD-nnnnnnnn where XXX-YYYYMMDD is the event identifier and nnnnn is a brief description of the event/photograph content (a sketch of these naming conventions is given after this list).
* An allowance has been made for this in the project metadata to optimize possibilities for re-use.
* Every Dataset will have an associated text document with its associated metadata.
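The following minimal Python sketch illustrates the naming conventions above. The partner short name and date are invented examples, and these helpers are assumptions for illustration rather than part of any PAINLESS tooling.

```python
# Sketch of the PAINLESS identifier conventions; names and dates are examples.
from datetime import date
import re

def event_id(partner_short_name, start):
    """Event identifier of the form XXX_YYYYMMDD."""
    return f"{partner_short_name}_{start:%Y%m%d}"

def media_name(event, description):
    """PAINLESS-XXX-YYYYMMDD-description for photographs/recordings."""
    partner, day = event.split("_")
    return f"PAINLESS-{partner}-{day}-{description}"

def is_valid_event_id(identifier):
    """Check the XXX_YYYYMMDD pattern (three letters, eight digits)."""
    return re.fullmatch(r"[A-Z]{3}_\d{8}", identifier) is not None

eid = event_id("UCL", date(2019, 9, 16))  # 'UCL_20190916'
print(media_name(eid, "keynote"))         # 'PAINLESS-UCL-20190916-keynote'
print(is_valid_event_id(eid))             # True
```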
## Making data openly accessible
* The only data which de-facto will not be made openly accessible will be data which contains personally identifiable information (e.g. individual evaluation forms). These will be summarised, and any individual forms used for research publications (such as inclusion in ‘user stories’) will be redacted or anonymised before online storage. In addition, datasets, measurements, codes that are IP restricted as per the CA will not be made available in full, but the consortium will strive to make meaningful parts of these available for reproducibility. We will also strive to keep such restricted data to a minimum.
* During the project, a subset of summary data (e.g. event visitor statistics and feedback summaries) will be made accessible by one or more methods below:
* Via newsletters, reports and other publications on the online knowledge sharing platform (togetherscience.eu) developed as part of WP3;
* Via partner’s local websites;
* Via social media;
* The PAINLESS website will provide open-access to the summer-schools proceedings ensuring a wide spread of the results and an increased awareness of the excellence of the PAINLESS network;
* The project’s journal/magazine articles will be made available to the wide public through open access and self-archiving, such as arXiv, OpenAIRE and IEEE Open Access, and we will pursue open access publication venues.
* Detailed data will be available to all consortium partners via the project shared drive (with the exception of individual questionnaires which will be stored at each partner’s premises). The access to this drive is restricted to project partners. Should other individuals wish to access the data for research purposes during the project, it will be openly shared on request. At the end of the project, data to be preserved will be stored in a suitable data repository. At this stage, we are using Microsoft Sharepoint.
* Data will be published using standard file formats (pdf, csv and others).
* With the exception of the knowledge sharing platform, all data will be accessed using standard tools. It is the responsibility of the Beneficiaries to provide appropriate documentation to make measurement results and software readily accessible and reusable.
* Special software is not seen as being a requirement but, should it be needed, we will provide the required open-source tools to access and analyse the data, such as codes implementing our algorithmic solutions or measurement/test results.
* For the duration of the project, any data and associated metadata and documentation will be stored on the shared drive, with no restrictions on use. At this stage, we are using Microsoft Sharepoint. Access conditions will be based on the FAIR principles. Internal or confidential data will only be accessible on a password-controlled central storage facility. For open data, we have not identified a need to identify the person accessing the data.
## Making data interoperable
Data produced in the project are intended to be interoperable; therefore,
standard file formats and inter-disciplinary vocabularies will be used to
facilitate data exchange and re-use. It is envisaged that every dataset will
have metadata, aside from the project publications, which will be open access
and accessible as outlined in the previous section.
## Increase data re-use
It is planned that Creative Commons Licenses will be used for all data to be
preserved. Data will be made available in accordance with what is specified in
the Consortium Agreement Section 9, i.e.:
* Access Rights to Results and Background Needed for the performance of the own work of a Party under the Project are requested and granted on a royalty-free basis.
* Access Rights to Results if Needed for Exploitation of a Party's own Results shall be granted on Fair and Reasonable conditions, subject the Party requiring the grant of such Access making a written request to the Party from which it requires the Access Rights.
* Access Rights not expressly granted by a Party to the Requesting Party shall not be otherwise deemed granted.
* Access Rights to Affiliated Entities will be granted on Fair and Reasonable conditions and upon written bilateral agreement.
All Personally Identifiable Information will be restricted to internal usage
and will not be shared with third parties. For shared information, standard
formats, open-source software and proper documentation will guarantee
re-usability by third parties.

Data will remain re-usable for 10+ years, subject to EC policy changes.

Quality Assurance is the responsibility of the MB of the project.
## Allocation of resources
An allowance of £2,440 has been made by the co-ordinator to cover the project
website and archiving and storage requirements (including manpower to prepare
and manage data as well as storage fees). Any additional costs will be covered
by the project’s common basket.
The MB is the ultimate responsible body for data management.
## Data security
All envisaged data, including any personal data such as individual
questionnaire responses, will be stored in the project's SharePoint, which
will only be accessible on a password-controlled central storage facility.
Personal data will be destroyed at the end of the project or as per GDPR
regulations.
1483_TRI-HP_814888.md
Deliverable D8.8
EXECUTIVE SUMMARY
The TRI-HP EU project aims to develop trigeneration integrated solutions that
combine heating, cooling and electricity generation, based on heat pumps
running with natural refrigerants and using multiple renewable energy sources.

During the TRI-HP project, research data will be generated, collected and even
reused. These data will be made available to the general public through
publication with open access, which is a mandate for H2020 projects. In
addition, TRI-HP participates in the Open Research Data Pilot (ORDP), which is
a program that aims to make data accessible and available for anybody. It
applies primarily to the data needed to validate the results presented in
scientific publications, but other data can also be provided by the
beneficiaries on a voluntary basis, as stated in their Data Management Plans
(DMPs).
Having a DMP is mandatory for projects participating in the ORDP. The purpose
of the DMP is to facilitate good data handling during and after the end of a
project, indicating which data to collect/process/generate, the methodologies
and standards followed, which data will be shared/made open access, and how
data will be curated and preserved.
In the TRI-HP project, the following tasks/activities will generate data that
need to be handled:

* Task 2.3. Barriers, hindrances and incentives towards the social acceptance of TRI-HP systems.
* Task 3.3. Laboratory testing at sample scale (icephobic coatings).
* Task 3.4. Laboratory testing of immersed coaxial tubes with circulating water (icephobic coatings).
* Task 4.2. Testing and optimizing of supercoolers.
* Task 4.4. Testing and optimizing a tri-partite gas cooler.
* Task 4.6. Testing and optimizing a dual-source heat exchanger.
* Task 4.7. Simplified TRNSYS modelling and validation (heat exchangers).
* Task 5.3. Assembly of the prototypes and first experimental campaign.
* Task 5.4. Heat pump upgrading and second experimental campaign.
* Task 5.6. TRNSYS modelling and validation (heat pumps).
* Task 6.2. Experimental validation of the efficiency-self-diagnosis system.
* Task 6.5. Preliminary validation of the advanced management system in a simulation environment.
* Task 7.4. Whole system test and optimization.
* Task 7.6. Model calibration and yearly simulation assessment.
* Task 7.7. Simulation scale-up, cost assessment and extrapolations to EU-28.
It has been decided that these data will be uploaded to the repository
currently used in the project, SWITCHdrive, to allow the access and
collaboration of the project partners. Data and metadata will be added and
prepared following the guidelines indicated in this document.
In addition to this, the Norwegian Centre for Research Data (NSD) repository
will be used to comply with the H2020 ORDP, making research data underlying
publications available. If additional public data not directly related to
publications are uploaded to this repository, this will be indicated in future
updates of this DMP. Datasets will be given a persistent identifier (DOI),
with relevant metadata, and closely linked to the 814888 grant number and the
TRI-HP project acronym. Data are licensed after signature of the "Archiving
Agreement" with NSD, in which the project partners will specify the access and
reuse of the datasets. Data security arrangements are defined for the
SWITCHdrive and NSD repositories. Ethical aspects affecting data sharing have
been considered.
**DMP** Data Management Plan
**DOI** Digital Object Identifier
**ICT** Information and Communications Technology
**IPR** Intellectual Property Right
**NSD** Norwegian Centre for Research Data
**ORDP** Open Research Data Pilot
## 1 INTRODUCTION
### 1.1 DATA MANAGEMENT PLAN
During the TRI-HP project, research data will be generated, collected and even
reused. These data will be made available to the general public through
publication with open access, which is a mandate for H2020 projects. In
addition, TRI-HP participates in the ORDP, which is a program that aims to
make data accessible and available for anybody. It applies primarily to the
data needed to validate the results presented in scientific publications.
Other data can be provided by the beneficiaries on a voluntary basis, as
stated in their DMPs.
Having a DMP is mandatory for projects participating in the ORDP. The purpose
of the DMP is to facilitate good data handling during and after the end of a
project, indicating which data to collect/process/generate, the methodologies
and standards followed, which data will be shared/made open access, and how
data will be curated and preserved.
### 1.2 STRUCTURE OF THE DOCUMENT
The document is structured as follows:
* Section 2 states the data summary and the procedures to upload data to the different repositories.
* Section 3 describes the main principles for FAIR data management in TRI-HP and how it will comply with the H2020 Open Access Mandate.
* Section 4 describes the allocation of resources to make this open access to publications and data possible.
* Section 5 gives a detailed description of data security arrangements.
* Section 6 deals with ethical aspects, if any, connected to data management in the TRI-HP project.
* Section 7 deals with other aspects that do not fit the previous sections.
### 1.3 UPDATES OF THE DATA MANAGEMENT PLAN
Projects evolve while they progress and, thus, it is not realistic to have a
fully detailed DMP with all the answers and information clear in the first
version, which in the TRI-HP project is due in month 7. A DMP is seen as a
living document, in which information is refined in subsequent updates as the
project progresses. The following updates to the DMP are foreseen in the
TRI-HP project's Grant Agreement (814888 project number):

* D8.9. Data Management Plan DMP (update M24). Due by M24 (February 28, 2021).
* D8.10. Data Management Plan DMP (update M48). Due by M48 (February 28, 2023).
## 2 DATA SUMMARY

### 2.1 GENERAL ASPECTS
The objective of the TRI-HP project is to develop trigeneration integrated
solutions that combine heating, cooling and electricity generation, based on
heat pumps running with natural refrigerants and using multiple renewable
energy sources. In order to fulfill this ambitious objective, the different
partners in the consortium will work solving smaller challenges. Work in all
these different aspects/topics will comprise experimental and/or simulation
campaigns. The results from these activities will be either confidential, for
internal use in the project, or public and published with open access to the
general public. As part of the ORDP, in TRI-HP we will make data linked to the
publications available (to validate the results from the publication).
Additional data not linked directly to publications will be included only if
the partners involved in the specific task consider it convenient. For this
purpose, the NSD repository¹ has been chosen.
The TRI-HP consortium foresees various types of data: results of experimental
campaigns and simulations at component and system level,
images/pictures/video from tests, answers from interviews, etc. The formats of
these data will also be diverse.
A particular case is that of the heat pump models developed within WP 5 and
the results obtained with them, which are confidential. However, the partners
involved in this modelling, TECNALIA and NTNU, will use an approach based on
the use of Gitlab and Sourcetree to handle and develop them safely and
collaboratively.
### 2.2 DATA SUSCEPTIBLE TO OPEN ACCESS MANDATE
The data used in the TRI-HP project will be new data and no re-use of data is
planned. The origin of these data will be different tasks, as indicated in
Table 2.1. This table includes a short description of the data expected, the
kind of data and probable formats (using as much as possible the preferred
file formats according to the NSD, shown in Table 2.2), which deliverables are
associated with the task and whether they are linked to any publication. It is
early to define the size of the datasets, so this topic will be included in a
future update. This list is preliminary and will be completed and changed with
the progress of the project.
The data resulting from the activities in the TRI-HP project will be useful
for the following groups:

* Researchers: other researchers will have the possibility to use data for other studies, comparisons, validations, etc.
* Manufacturers: heat pump manufacturers, even competitors, will benefit from TRI-HP's results and conclusions, being a benchmark for their systems and assisting them with strategic decisions concerning their business, products, etc.
* European regulators: some of the results from TRI-HP could be useful for European regulators in order to make decisions concerning which technologies to support, new project calls to launch, etc.
* Final users: even if it is unlikely that the data from this project could be relevant for the average final user, some users could understand the benefits of installing heat pump systems, if the outcome of the project is in line with their goals.
¹ NSD - Norwegian Centre for Research Data is a national archive and center
for research data that aims to ensure open and easy access to research data,
and to improve opportunities for empirical research through a wide range of
information and support services. NSD's core value is that research data is a
collective good that should be shared. For more information, see www.nsd.no.
**Table 2.1:** Research data to be generated in TRI-HP project.

| Task | Deliverable | Short name/description | Kind of data and formats | Publication? | Comments |
|---|---|---|---|---|---|
| Task 2.3 | D2.2 | Surveys on barriers towards TRI-HP systems | Statistical (.xls) | Yes | No personal data will be handled. |
| Task 3.3/3.4 | D3.5 | Test results of icephobic coatings | Test data (.xls, .csv); images | Yes | – |
| Task 4.2 | D4.6 | Test results of supercoolers | Test data (.xls, .csv); images | Yes | – |
| Task 4.4 | D4.3 | Test results of tri-partite gas cooler | Test data (.xls, .csv) | Yes | – |
| Task 4.6 | D4.4 | Test results dual-source heat exchanger | Test data (.xls, .csv); images | Yes | – |
| Task 4.7 | D4.7 | Validation TRNSYS heat exchanger models | Test data (.xls, .csv) | Yes | – |
| Task 5.3 | D5.5 | Test results heat pump prototypes | Test data (.xls, .csv) | Yes | – |
| Task 5.4 | D5.6 | Test results heat pump prototypes refined | Test data (.xls, .csv) | Yes | – |
| Task 5.6 | D5.8 | Validation TRNSYS heat-pump models | Test data (.xls, .csv) | Yes | – |
| Task 6.2 | D6.3 | Validation self-diagnosis efficiency system | Data (.xls, .csv) | Yes | – |
| Task 6.5 | D6.5 | Validation AEM system through simulations | Data (.xls, .csv) | Yes | – |
| Task 7.4 | D7.4 | System test results R290 systems | Test data (.xls, .csv) | Partially via D7.9 | – |
| Task 7.4 | D7.8 | System test results R744 system | Test data (.xls, .csv) | Partially via D7.9 | – |
| Task 7.6 | D7.9 | Energy performance/cost competitiveness of systems | Data (.xls, .csv) | Yes | – |
| Task 7.7 | D7.10 | Benefits TRI-HP systems in Europe | Data (.xls, .csv) | Yes | – |
**Table 2.2:** Type of data and preferred file formats by NSD. Formats most likely to be used are bolded.

| Type of data | Preferred file formats |
|---|---|
| Textual documents | OpenDocument text (.odt); rich text format (.rtf); **PDF/A (.pdf)**; **MS Word (.doc, .docx)** |
| Plain text | Unicode text (.txt) |
| Spreadsheets | PDF/A (.pdf); OpenDocument spreadsheet (.ods); **comma- and semicolon-separated values (.csv)**; **tab-separated values (.txt)**; **Excel (.xls, .xlsx)** |
| Database | Comma- and semicolon-separated values (.csv); tab-separated values (.txt); MS Access (.mdb, .accdb); ANSI SQL (.sql) |
| Tabular/statistical data | PASW/SPSS (.sav, .por); STATA (.dta); SAS (.sas); R (.R, .Rdata, ...) |
| Image | Scalable vector graphics (.svg); **JPEG (.jpg, .jpeg)**; **TIFF (.tif, .tiff)**; **PDF/A (.pdf)** |
| Video | MPEG-2 (.mpg, .mpeg); QuickTime (.mov); **MPEG-4 H264 (.mp4)**; **lossless AVI (.avi)** |
| Audio | WAVE (.wav); MP3 AAC (.mp3) |
| Geospatial information | ESRI shapefile (.shp and similar formats) |
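As a small illustration, a partner preparing an upload could check candidate files against the preferred formats of Table 2.2 before archiving. The abridged format map and the example file names below are assumptions, not project tooling.

```python
# Sketch: flag files whose extension is not among the NSD-preferred formats
# of Table 2.2 (map abridged; file names are invented examples).
from pathlib import Path

PREFERRED = {
    "textual": {".odt", ".rtf", ".pdf", ".doc", ".docx"},
    "spreadsheet": {".ods", ".csv", ".txt", ".xls", ".xlsx"},
    "image": {".svg", ".jpg", ".jpeg", ".tif", ".tiff"},
    "video": {".mpg", ".mpeg", ".mov", ".mp4", ".avi"},
}

def non_preferred(paths):
    """Return the files whose extension is in none of the preferred sets."""
    allowed = set().union(*PREFERRED.values())
    return [p for p in paths if Path(p).suffix.lower() not in allowed]

print(non_preferred(["tests.csv", "photo.jpg", "notes.pages"]))  # ['notes.pages']
```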
### 2.3 UPLOAD INSTRUCTIONS

#### 2.3.1 SWITCHdrive repository
SWITCHdrive has been established by the project coordinator (SPF-HSR) as a
repository to allow for safe sharing of information and data among the
partners in the project. A dedicated folder, _ResearchData_OpenAccess_, has
been established for research data that shall be made available for the
general public and will be uploaded in the NSD repository (see next
sub-section). In addition, project partners are encouraged to upload their
research data, including those not associated with publications and even
confidential within the consortium, to the corresponding WP-folders in order
to prevent any loss of valuable information/data.
#### 2.3.2 NSD repository
It is important to plan archiving in advance, since research data that shall
be made accessible (e.g. linked to publications) must be available at the
latest on article publication. Each partner is responsible for uploading the
datasets created/collected by them. If needed, NTNU, leader of Task 8.4 on
data management, will assist the partners. Partners need to create an NSD
profile, ask for access to the project (contact Ángel Álvarez Pardiñas through
e-mail [email protected]), and upload the data to the repository. As an
alternative, and just in justified cases, project partners could transfer to
NTNU the task of archiving their research data in the repository. In any case,
these data shall first be available in the SWITCHdrive.

To create an account in NSD:

1. Access https://minside.nsd.no/.
2. Choose the log in option (Figure 2.1). Unless access with FEIDE (exclusive for Norway) or eduGAIN is possible, use Login with Google (create an account if needed).
3. If Google log in is selected, write e-mail and password.
4. If the log in information is correct, the user will access his/her profile, which should be similar to that in Figure 2.2.
5. NTNU, as responsible of the DMP, has created a project with the name "TRI-HP. Trigeneration systems based on heat pumps with natural refrigerants and multiple renewable sources". Ask NTNU ([email protected]) to share access to the project.
6. An e-mail will be sent to the user inbox with a hyperlink to open the project and add it to the user's profile.
NSD suggests the following steps to archive data:

1. Prepare data.
   * a) Is it the final version?
   * b) Are there any personal data? This will not apply to the TRI-HP project.
   * c) Is relevant documentation/metadata included? Clarifications on this issue are included in section 3.1.4.
   * d) Language? English (and Norwegian).
   * e) Is the dataset in one of the preferred data formats (Table 2.2)?
   * f) More than one file? An overview of the files must be enclosed with the description of the individual files, i.e. documentation at dataset level (section 3.1.4).
   * g) Is the data quantitative? Variable names and descriptions must be understandable, i.e. documentation at variable level (section 3.1.4).
   * h) Are there transcribed interviews? This will not apply to the TRI-HP project.
2. Deposit data files, using the NSD website created for the TRI-HP project, which can be accessed as explained above. A form shall be filled out to capture the most important information about the project and data.
3. Sign the archiving agreement. The user receives, within two to three working days, an e-mail confirming reception of the data and an archiving agreement. This agreement defines access conditions for the data. Once it is signed and returned to NSD, they start preparing the data and metadata. Confirmation is sent by NSD when the data is available.

**Figure 2.1:** Login alternatives to NSD.

**Figure 2.2:** User website in NSD.
### 3 FAIR DATA
The TRI-HP project works according to the principles of FAIR data (Findable,
Accessible, Interoperable and Reusable). The project aims to maximize access
to the research data generated in the project so that they can be re-used.
This applies to data intended to be public and used in publications. At the
same time, there are datasets that should be kept confidential for commercial
and Intellectual Property Right (IPR) reasons. Details are given in Table 2.1
in Section 2.2.
#### 3.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
**3.1.1 TRI-HP and NSD repository**
TRI-HP will use the TRI-HP project website created in the NSD repository as
the main tool to comply with the H2020 Open Access Mandate and with TRI-HP's
participation in the ORDP program. All scientific articles/papers and public
datasets will be uploaded to this community in NSD, named according to a
convention, with a Digital Object Identifier (DOI) and metadata (see
subsequent subsections).
##### 3.1.2 Naming convention
Data related to publications or deliverables will be named using the following
naming conventions:
_H2020_AcronymProject_DeliverableNumber_DescriptiveTextDataset_UniqueDatasetNumber_Version_
_H2020_AcronymProject_PublicationNumber_DescriptiveTextDataset_UniqueDatasetNumber_Version_
**Example:** H2020_TRI-HP_D4.4_TriPartiteGasCooler_HXs1_1_v1
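As an illustration, a small hypothetical helper can assemble names that follow this convention; the function and its argument names are assumptions for the sake of the example, not project tooling.

```python
# Hypothetical helper assembling dataset names per the TRI-HP convention.
def dataset_name(acronym, doc_id, description, dataset_number, version,
                 programme="H2020"):
    """Build e.g. 'H2020_TRI-HP_D4.4_TriPartiteGasCooler_HXs1_1_v1'."""
    return "_".join([programme, acronym, doc_id, description,
                     str(dataset_number), f"v{version}"])

print(dataset_name("TRI-HP", "D4.4", "TriPartiteGasCooler_HXs1", 1, 1))
# H2020_TRI-HP_D4.4_TriPartiteGasCooler_HXs1_1_v1
```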
**3.1.3 Digital Object Identifiers (DOI)**
DOIs for all datasets will be reserved and assigned with the DOI functionality
provided by NSD. DOI versioning will be used to assign unique identifiers to
updated versions of the data records.
##### 3.1.4 Metadata
As recommended by NSD, metadata should be provided at three different levels.
First, at **project level**, describing the aim of the study, who is
responsible for the project and the methods applied. Second, at **dataset
level**, with an overview of the different files and how they relate to each
other. Third, at **variable level**, in order to make the data understandable
to outsiders.
_Project level_

The metadata at project level shall include (examples/explanations are given
for some categories):

* Title: H2020–TRI-HP–Deliverable/Publication–Descriptive name.
* Institution: institution responsible for the data.
* Responsible: person responsible within the institution.
* Copyright.
* Abstract: short description of the data collected, the purpose behind these data, etc.
* Keywords: help to maximize the possibilities for re-use of the data.
* Dates of collection: Start: YYYY-MM-DD. End: YYYY-MM-DD.
* Kind of data: survey, tests (results, images), simulations, etc.
* Procedures.
* Access: who should be given access, when is data made available, etc.
* Other comments.
_Dataset level_

The metadata at dataset level shall include:

* Name: shall follow the structure _TRI-HP_TaskNumber_Date(YY-MM-DD)_UniqueDatasetNumber_Version_.
* Format: .pdf, .xls, .csv, .jpg, etc.
* Size (optional).
* Date created: shall correspond to the date in the name.
* Date modified (if any).
* Short description of the information in the file.
_Variable level_

This applies to structured and quantitative data. The names given to the
variables shall be as self-explaining as possible, in order to minimize the
information that needs to be given for outsiders to understand the data.
Ideally, the metadata at variable level for a certain dataset could include:

* Variable group.
* Variable name.
* Description.
* Units: SI units are preferred.
* Values: range of values.
#### 3.2 MAKING DATA OPENLY ACCESSIBLE
The H2020 Open Access Mandate aims to make research data generated by H2020
projects accessible with as few restrictions as possible, but also accepts
protecting sensitive data due to commercial or security reasons.

All public datasets (and associated metadata and other documentation)
underlying publications will be uploaded to the NSD repository and made open,
free of charge, at the latest when the publication is available. Other
datasets with dissemination level "Public" will also be made open through the
same repository. Publications and related datasets will be linked through
persistent DOIs. Datasets with dissemination level "Confidential" will not be
shared. Information on the public datasets was included in Table 2.1 in
Section 2.2.

It is expected that most of the data will be accessible using usual software
tools (.pdf readers, text editors, spreadsheet editors and others). In case
any special software is needed, it will be detailed in the corresponding
metadata.
#### 3.3 MAKING DATA INTEROPERABLE

Data within the TRI-HP project are intended to be re-used by other
researchers, institutions, organizations, etc. Thus, the formats chosen for
the datasets shared are widely used and in most cases accessible using open
software applications. Vocabulary will be kept as standardized as possible,
and further explanations will be given in case uncommon terminology is used.
#### 3.4 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)
The TRI-HP project will enable third parties to access, mine, exploit,
reproduce and disseminate (free of charge for any user) all public datasets.
The use of the datasets is specified in the "Archiving Agreement" supplied by
NSD, which is signed by the owner of the dataset after archiving. The
information will be documented as "Deposit Requirement", "Citation
Requirement" and "Restrictions" for further use. NSD offers the possibility to
have some data not freely accessible; these data can then be ordered through a
form, with an access letter and a confidentiality agreement to be signed.
However, this will not be the case with the public data within TRI-HP.
##### 3.4.1 Availability of the TRI-HP research datasets
For data published in scientific journals, the underlying data will be made
available no later than by journal publication and linked to this publication.
Data associated with public deliverables will be shared once the deliverable
has been approved by the European Commission. Public data will remain archived
and re-usable for at least 5 to 10 years in the NSD repository²; NSD's
perspective is to keep data for the "unforeseeable future".

² Minimum stated in the Core Trust Seal requirements.
# 4 ALLOCATION OF RESOURCES
TRI-HP uses standard tools and a free-of-charge repository. The costs of data
management activities are limited to project management costs and will be
covered by the project grant. TRI-HP publications in Open Access journals or
with Open Access agreements will also be covered by the grant. The following
amounts have been allocated by the different partners for this purpose:

* ISOE: 2000 €
* NTNU: 3000 €
* UASKA: 3000 €
* IREC: 3000 €
* SPF-HSR: 3000 €
* TECNALIA: 3000 €

NTNU is responsible for TRI-HP's data management, which is associated with
Task 8.4 from WP 8 - Dissemination and Exploitation. Task 8.4 is led by NTNU.
### 5 DATA SECURITY
The aspects concerning the security of the research data generated/used in the
TRI-HP project are covered in this chapter.

#### 5.1 ACTIVE PROJECT - UTILIZATION OF SWITCHDRIVE REPOSITORY

At the beginning of the TRI-HP project, a SWITCHdrive repository was
established by SPF-HSR to allow for safe sharing of information and data among
the partners in the project. This repository will be active during the period
in which the project is active and beyond, namely until at least one year
after the project ends. Files that have been deleted from the SWITCHdrive
folder are not removed immediately, but moved to the folder "Deleted-files",
from where they can easily be restored if needed. Deleted files are stored in
this folder for 90 days and only after that will they be permanently deleted.
SWITCHdrive has a backup system that can be used to restore the whole system
in case of disaster, but this cannot be claimed for individual restore
requests. Thus, the HSR-SPF network drive and, in addition, a back-up on an
external drive will be used.
Some safety facts about SWITCHdrive are that:

* all data are stored in SWITCH servers in Switzerland,
* there is full compliance with Swiss data protection regulations,
* and there is no data and metadata exchange with other Office companies.
A dedicated folder, _ResearchData_OpenAccess_, has been established for
research data that shall be made available for the general public and will be
uploaded in the NSD repository. In addition, project partners are encouraged
to upload their research data, including those not associated with
publications and even confidential for the consortium, to the corresponding
WP-folders in order to prevent any loss of valuable information/data.
#### 5.2 REPOSITORY - DATA SECURITY AS SPECIFIED FOR NSD

The TRI-HP project has chosen NSD's repository. All scientific publications,
public deliverables, and public research datasets will be uploaded to the NSD
repository and made accessible for everyone.

Some facts concerning information security and maintenance are explained on
NSD's website. These facts are summarised below.
* The purpose of NSD's information security is to secure the data's confidentiality, integrity and accessibility.
  * Confidentiality: data are not accessible to unauthorised persons/systems.
  * Integrity: data are not changed or destroyed by unauthorised means.
  * Accessibility: data resources are available for use when required.
* Access control: NSD keeps an updated overview of who has access to relevant Information and Communications Technology (ICT) systems.
* Training: all NSD employees/users sign the necessary declarations and are given an introduction to NSD's security guidelines and the consequences of breaching the guidelines before they are granted access to an activated user ID for NSD's ICT systems.
* Declaration of secrecy: everyone with access to personal data and/or IT systems that NSD is responsible for is required to sign the company's declaration of secrecy (a new one every third year).
* Backups are made in accordance with the requirements of accessibility. Storage media for the backup are labelled to facilitate finding and recovering it. NSD keeps backup copies separate from the operating equipment/computer room in a locked and fireproof cabinet (external location). To avoid physical wear and tear on tapes/disks/storage media, incremental backups are replaced at expedient intervals. Backup cassettes are used for five weeks. After each period, a complete backup copy is transferred to a secure external location.
* NSD shall document and store all new datasets using Nesstar Publisher¹, or in the most compatible format if not possible in Nesstar. Every other year, NSD reviews the data collection to check and, if relevant, update the file formats.
* Repository lifetime: the minimum repository lifetime is 5 to 10 years, but NSD foresees a repository for the "unforeseeable future".
* CoreTrustSeal: NSD is certified as a credible and reliable archive of research data and has been awarded the CoreTrustSeal. NSD meets requirements connected to:
  * safe operations and continuous access to archived data in a long-term perspective,
  * disciplinary and ethical standards,
  * sufficient funding and expertise,
  * information security,
  * metadata to provide retrieval and reuse,
  * workflows from data submission to data dissemination,
  * citation,
  * licensing and
  * technical infrastructure.

¹ Nesstar Publisher is an advanced data management tool owned and developed by
NSD.
# 6 ETHICAL ASPECTS
Currently, no ethical or legal issues that can have an impact on data sharing
have been identified. Ethical aspects connected to research data generated by
the project will be considered as the work proceeds.

Use and storage of e-mail addresses in TRI-HP's SWITCHdrive repository: An
e-mail address is by definition personal information and covered by the GDPR¹.
The e-mail addresses of project participants are stored in the SWITCHdrive
repository. Only the project participants invited have access. The e-mail
address is a prerequisite to access the project's working area. By accepting
the invitation to SWITCHdrive, participants consent to the use and storage of
their e-mail addresses. E-mail addresses will be deleted when access to the
project area is no longer needed.

SPF-HSR and SWITCH (the organization handling the repository) comply with the
General Data Protection Regulation.
# 7 OTHER ISSUES
No other issues or aspects concerning data management are currently foreseen.

¹ The General Data Protection Regulation (EU) 2016/679 (GDPR).

This project has received funding from the European Union's Horizon 2020
research and innovation programme under grant agreement N. 814888. The sole
responsibility for the content of this paper lies with the authors. It does
not necessarily reflect the opinion of the European Commission (EC). The EC is
not responsible for any use that may be made of the information it contains.

©TRI-HP PROJECT. All rights reserved.

Any duplication or use of objects such as diagrams in other electronic or
printed publications is not permitted without the author's agreement.
1484_M-Sec_814917.md
# 1\. Introduction
This document is developed as part of the M-Sec (Multi-layered Security
technologies to ensure hyper connected smart cities with Blockchain, BigData,
Cloud and IoT) project, which has received funding from the European Union’s
(EU) Horizon 2020 Research and Innovation programme, under the Grant Agreement
number 814917 and by the Commissioned Research of National Institute of
Information and Communications Technology (NICT) under the Grant Agreement
number 19501\.
The purpose of the Data Management Plan (DMP) is to provide an overview of the
available research data arising from the project, the data accessibility,
management and terms of use. The DMP will follow the template that the
European Commission suggests in the _“Guidelines on FAIR Data Management in
Horizon 2020”,_ current version is 3.0, dated 26th July 2016 [EC], consisting
of a set of questions that the project shall address and properly answer with
a level of detail appropriate to the project. 'FAIR' data refers to data that
is Findable, Accessible, Interoperable and Re-usable.
According to these guidelines, the DMP will include the following sections:
1. Data Summary
2. FAIR Data
   * Making data findable, including provisions for metadata
   * Making data openly accessible
   * Making data interoperable
   * Increase data reuse (through clarifying licenses)
3. Allocation of resources
4. Data security
5. Ethical aspects
6. Other issues
This deliverable presents an initial version of the DMP, and it does not
intend to answer all these questions, but to present the information on how
the actual DMP will be put together and its contents when data from the
project will become available. This document will be updated over the course
of the project and will be included within deliverable _“Project Progress
Report”_ updated consequently at the end of each year (M12, M24 and M36).
# 2\. Data Summary
According to DMP guidelines, this section will address the following questions
during the project lifetime:
1. What is the purpose of the data collection/generation and its relation to the objectives of the project?
2. What types and formats of data will the project generate/collect?
3. Will you re-use any existing data and how?
4. What is the origin of the data?
5. What is the expected size of the data?
6. To whom might it be useful ('data utility')?
From all these questions, in this initial DMP we are starting to address the
first two questions, while the other remaining four questions will be analyzed
as soon as the progress of the project provides more concrete information on
the datasets.
_What is the purpose of the data collection/generation and its relation to the
objectives of the project?_
Mainly, data generated during the project’s life will come from the specific
needs of the M-Sec pilots, but also some data will be generated for
measurement and assessment purposes of the M-Sec platform. This data
generation is directly connected with M-Sec project objectives:
* _Objective 1: To design the future decentralized architecture of IoT that will unlock the capacity of smart objects, by allowing to instantly search, use, interact and pay for available assets and services in the IoT infrastructures._
Data and metadata will be generated by risk assessment study for threat and
security threats, and mechanisms to establish seamless hyper-connectivity over
heterogeneous communication channels.
* _Objective 2: To enable seamless and highly autonomous and secure interaction between humans and devices in the context of smart city, through the use of blockchain and for business contexts relevant to specific smart city use cases enabling innovative machine-human and machine-machine interactions._
The content will not be generated by the M-Sec platform; however, the
management of security of some of this content will derive into blockchain
transactions; some of them may contain associated M-Sec metadata. These
metadata will be useful not only for pilots and for the M-Sec platform
evaluation but also for third stakeholders with similar pilots and intending
to adopt M-Sec solution.
* _Objective 3: To engineer new levels of security and trust in large scale autonomous and trust-less multipurpose smart city platforms and integrate privacy enhancing technologies at the design level of M-Sec architecture._
M-Sec platform will implement different mechanisms and security layers in
order to facilitate end-to-end data security. Whether these datasets will be
made publicly available or not will have to be decided case by case depending
on several sharing criteria such as their nature, ownership or exploitability.
Preference will always be given to openness, while private datasets shall be
the exception, properly justified.
* _Objective 4: To create reference implementations of future decentralized IoT ecosystems and validate their viability and sustainability._
M-Sec will create demonstrators and ecosystems in two real IoT environments
(Fujisawa, Japan and Santander, Spain) provided by smart cities through real-
life use cases (six different use cases, 2 at a Europe level, 2 at a Japanese
level and 2 Cross-borders) and from a sensor to business model. In addition, a
novel marketplace will be implemented where smart objects will be able to
exchange information and/or services through the use of virtual currencies.
The availability of such datasets for public domain will be entirely dependent
on each use case. If such datasets are already open they will continue being
open, but those of private nature will not be disclosed unless the
corresponding use case owner has the right to take such decision and decides
to do so.
* _Objective 5: To maximize the impact of the project benefits._
This activity should not generate or manage any specific project dataset.
However, in general, all data related to stakeholders involved in community
building will be made open as long as it does not include any private data,
which will be either anonymized if possible or completely removed prior to
disclosure.
_What types and formats of data will the project generate/collect?_
Data generated by the M-Sec platform will mostly consist of open data sources,
smart cities repositories, and blockchain transactions available on the public
ledger. In addition to the data generated by M-Sec itself, the execution of
the M-Sec pilots will also require accessing and collecting different types of
data related to IoT devices, or data generated by mobile applications being
managed by the M-Sec platform.
Once each specific dataset is identified, the consortium will decide on the
precise format considering that, as explicitly mentioned in the DoA, the main
goal is to, as much as possible, use not only open formats to store the data
but also make the software open to provide the scripts and other metadata
necessary to reuse it.
M-Sec’s technical developments and results will be validated and demonstrated
through six pilot use cases, as defined in deliverable “D2.2 M-Sec pilots
definition, setup and citizen involvement report”[D22]. These pilot use cases
will include several data related activities.
# 3\. FAIR Data
## 3.1 Datasets identification and description
As specified in the guidelines of the European Commission on Data Management,
the data to be made available for open access in Europe will have to be
described using the following dataset description template (see Table 1).
These descriptions will be stored in the project’s internal repository and
will be provided within the periodic Project Progress Report.
**Table 1: Dataset description template**
**Dataset reference and name**: Identifier for the dataset to be produced.

**Dataset description**: Description of the data that will be generated or
collected, its origin (in case it is collected), nature and scale and to whom
it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities
for integration and reuse.

**Standards and metadata**: Reference to existing suitable standards of the
discipline. If these do not exist, an outline on how and what metadata will be
created.

**Data sharing**: Description of how data will be shared, including access
procedures, embargo periods (if any), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling re-use, and
definition of whether access will be widely open or restricted to specific
groups. Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial,
privacy-related, security-related).

**Archiving and preservation**: Description of the procedures that will be put
in place for long-term preservation of the data. Indication of how long the
data should be preserved, what is its approximated end volume, what the
associated costs are and how these are planned to be covered.
## 3.2 Data Management Platforms
Regarding the EU partners, M-Sec will use OpenAIRE [OAIRE] in cooperation with
re3data [RE3], to select the proper open access repository and/or deposit
publications for its research results storage, allowing also for easy linking
with the EU-funded project. This will increase the accessibility to the
obtained results by a wider community, which can be further enhanced by
including the repository in registries of scientific repositories, such as
DataCite [DC] and OpenDOAR [ODOAR], or Zenodo [ZEN]. These are the most
popular registries for digital repositories and, along with re3data, they are
collaborating to provide open research data. For the Japanese partners, as an
approval from NICT is necessary in each case, they will promptly seek to
obtain such approval.
## 3.3 FAIR Data Template
'FAIR' data (Findable, Accessible, Interoperable and Re-usable) aim to provide
a framework to ensure that research data can be effectively reused. During the
project lifetime, and according to DMP guidelines, the following questions
shall be addressed:
### Making data findable, including provisions for metadata
* Outline the discoverability of data (metadata provision).
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used.
* Outline the approach towards search keywords.
* Outline the approach for clear versioning.
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how.
### Making data openly accessible
* Specify which data will be made openly available. If some data is kept closed provide rationale for doing so.
* Specify how the data will be made available.
* Specify what methods or software tools are needed to access the data. Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited. Specify how access will be provided in case there are any restrictions.
### Making data interoperable
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your dataset, to allow inter-disciplinary interoperability. If not, will you provide mapping to more commonly used ontologies?
### Increase data re-use (through clarifying licenses)
* Specify how the data will be licensed to permit the widest reuse possible.
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed.
* Specify whether the data produced and/or used in the project is usable by third parties, in particular after the end of the project. If the re-use of some data is restricted, explain why.
* Describe data quality assurance processes.
* Specify the length of time for which the data will remain re-usable.
This question template will be completed as soon as datasets are defined, as
described in Section 3.1 above, and will be provided within the periodic
deliverable “Project Progress Report”.
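For illustration only, a completed dataset description could also be kept as a
machine-readable record alongside the dataset itself, which would make the
periodic reporting easier to automate. The sketch below shows one possible
encoding in Python; the identifier, field spellings and all values are
hypothetical examples, not prescribed by this DMP.

```python
# Hypothetical machine-readable dataset description mirroring the
# Table 1 template fields. All values are illustrative only.
dataset_description = {
    "dataset_reference_and_name": "MSEC-PILOT1-ENV-001",  # hypothetical identifier
    "dataset_description": "Environmental sensor readings collected during a pilot.",
    "standards_and_metadata": "DataCite Metadata Schema 4.x",
    "data_sharing": {
        "access": "open",
        "repository": "Zenodo",
        "embargo_months": 0,
    },
    "archiving_and_preservation": {
        "retention_years": 5,
        "estimated_volume_gb": 2,
    },
}

if __name__ == "__main__":
    import json
    # Print the record as JSON, e.g. for inclusion in a progress report.
    print(json.dumps(dataset_description, indent=2))
```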
## 3.4 Source Code
M-Sec will make the generated software and its source code available to the
Open Source Community. The M-Sec consortium has not yet identified which Open
Source License will be applied to the source code. However, a dual-license
scheme may be considered in order to protect the business exploitation
perspectives of the partners. Duality means that the free software
distribution mechanism and the traditional software product business are
combined: there is technically only one core product but two licenses, one
for free distribution and free use, and another one for commercial use
(proprietary). The business model will be explained in detail in deliverable
_“D5.6 Market Analysis and Exploitation”[D56]_ .
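As a minimal sketch of how such a dual-licensing scheme is typically signalled
in source files, consider the header below. The actual licences are still to
be selected by the consortium, so both SPDX identifiers are placeholders.

```python
# Illustrative header for a dual-licensed source file. Both licence
# identifiers below are placeholders: the consortium has not yet chosen
# the actual open source or commercial licence.
#
# SPDX-License-Identifier: GPL-3.0-only OR LicenseRef-MSec-Commercial
#
# Users may choose either the free/open licence (with its copyleft
# obligations) or a separately negotiated commercial licence covering
# proprietary use of the same core product.
```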
# 4\. Other Data Management Aspects
The DMP guidelines also refer to the following aspects related to data
management.
## 4.1 Allocation of Resources
All use case partners and technical partners, with their related roles, are
involved in data management activities, either collecting, processing, or
creating datasets, and the corresponding effort is embedded in the tasks in
which they undertake these activities. Hence, all related costs for data
management are already covered by the M-Sec project and no additional
resources will be needed.
## 4.2 Data Security
Any issue regarding the Protection of Personal Data will be included in
deliverable _“D6.4 POPD-Requirement No.4”[D64]_ and hence is not repeated
here. Given that almost all use cases require collection of data from the
field of operation, in addition to personal data protection, M-Sec will use
state-of-the-art technologies for secure storage, delivery and access of
personal information, as well as for managing the rights of the users. In
this way, the consortium aims to ensure that the accessed, delivered, stored
and transmitted content is managed by the right persons, with well-defined
rights, at the right time.
State-of-the-art firewalls, network security, encryption and authentication
will be used to protect collected data. Firewalls prevent connections to open
network ports, and exchange of data will take place through ports known to
the consortium, protected via IP filtering and password. Where possible
(depending on the facilities of each partner) the data will be stored on a
locked server, and all identification data will be stored separately.
A metadata framework will be used to identify the data types, owners and
allowable use. This will be combined with a controlled access mechanism and,
in the case of wireless data transmission, with efficient encoding and
encryption mechanisms, for example WPA2 (Wi-Fi Protected Access II), a
security method for wireless networks that provides stronger data protection
and network access control.
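As a minimal sketch of what encryption at rest can look like, the snippet
below uses the third-party Python `cryptography` package. The record content
and the key handling shown are illustrative assumptions, not the project's
actual setup.

```python
# Minimal sketch of symmetric encryption at rest for collected data,
# using the third-party "cryptography" package (pip install cryptography).
# The record content is a hypothetical example.
from cryptography.fernet import Fernet

# In practice the key would be created once and kept in a secure key
# store, never alongside the encrypted data.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"sensor_id=42;temperature=21.3"
ciphertext = fernet.encrypt(plaintext)

# Only holders of the key can recover the original record.
assert fernet.decrypt(ciphertext) == plaintext
```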
## 4.3 Ethical Aspects
M-Sec partners are to comply with the ethical principles, which state that all
activities must be carried out in compliance with:

1. Ethical principles
2. Applicable international, EU and national law.

All information related to Ethical Aspects will be handled within WP6
“Ethics requirements”, which includes the submission of different deliverables
in M8. WP6 aims to follow up the ethical issues applicable to the M-Sec
project implementations. It includes:
* The procedures and criteria that will be used to identify/recruit participants (Deliverable 6.1 [D61]),
* the informed consent procedures that will be implemented for the participation of humans (deliverable 6.2 [D62]),
* procedures for processing personal data, compilation, pseudonymisation, protection and deletion (deliverable 6.3 [D63]),
* in case of processing personal data, information about the appointment of a Data Protection Officer (DPO) (deliverable 6.4 [D64]),
* and finally, in case personal data is transferred from the EU to Japan or to another non-EU country, or conversely from non-EU countries or international organizations to an EU country (deliverables 6.5 [D65] and 6.6 [D66]), confirmation that such transfers are in accordance with the GDPR, the Japanese personal data protection law (PIPA, Personal Information Protection Act), the manual concerning the handling of personal data stipulated by NICT, and also the laws of the country in which the data was collected.
GDPR and IPR (Intellectual Property Rights) protection issues also have a
dedicated WP (WP5 “ _GDPR, dissemination, exploitation and sustainability”_ ).
This WP will provide a guide for compliance of the M-Sec project results with
GDPR law and the intellectual property rights of the project results.
# 5\. Conclusions
This deliverable gives an insight into the initial Data Management Plan of
M-Sec. It is effectively a guideline covering the different aspects that need
to be addressed as soon as datasets are identified. The document defines how
those datasets have to be properly described. As the project progresses, it
will be identified what kind of data or metadata can be made publicly
accessible to other parties, considering both data generated by the pilots
themselves and data generated by the M-Sec platform.

This deliverable also provides an insight into the conditions under which
source code will be made available and the respective platforms that will
host data. In this context, M-Sec will provide an updated and concrete Data
Management Plan, including the description of the identified datasets, in the
deliverable “Project Progress Report”, which will be submitted at the end of
each year.
1486_BLAZE_815284.md
# 1 EXECUTIVE SUMMARY
The BLAZE Data Management Plan follows the Horizon 2020 DMP template, which
was designed to be applicable to any Horizon 2020 project that produces,
collects or processes research data. This first Data Management Plan
describes the data management principles, strategies and tools, and the BLAZE
data (datasets, “Open Research Data Pilot” (ORDP) outputs and the BLAZE
Demonstrator) that will be produced as part of the project activities and
that are relevant for inclusion in the DMP. The consortium will also aim at
open access when publishing papers and articles.
The DMP is a living document to be updated as the implementation of the
project progresses and when significant changes occur.
# 2 INTRODUCTION
## 2.1 Objectives and scope of the document
The Data Management Plan (DMP) describes the data management life cycle for
the data to be collected, processed and/or generated by the BLAZE project, as
a Horizon 2020 project. The DMP defines the management strategy for data
generated during the project, with the purpose of making research data
findable, accessible, interoperable and re-usable (FAIR).
## 2.2 Structure of the deliverable
The document is structured following the guideline of the H2020 programme on
FAIR Data Management in Horizon 2020 and includes the following information:
* Data Management Plan (DMP) guiding principles and strategy
* Description of BLAZE type of data
* Description of FAIR DATA characteristics including DMP Review Process & data inventory
* Allocation of resources
* Data Security
* Ethical Aspects
* Conclusions
# 3 DATA SUMMARY
The BLAZE Data Management Plan (DMP) aims to provide a strategy for managing
key data generated and collected during the project and optimize access to and
re-use of research data. The DMP is intended to be a ‘living’ document that
will outline how the BLAZE research data will be handled during and after the
project, and so it will be reviewed and updated at regular intervals.
All European Union funded projects must aim to disseminate as much
information as possible. In addition, the BLAZE project has signed up to the
“Open Research Data Pilot”, which means that the consortium is committed to
giving open access to data generated during the project unless this goes
against its legitimate interests. In this regard, the main purpose of the DMP
is to ensure the accessibility and intelligibility of the data generated
during the BLAZE project, in order to comply with the Guidelines of the “Open
Research Data Pilot”. Each data set created during the project will be
assessed and categorized as open, embargo or restricted by the owners of the
content of the data set.
All the data sets, regardless of their categorization, will be stored in each
of the participating entities' databases and in the Google Drive folder
created as the internal database of the partners. In addition, those
categorized as open or embargo will be publicly shared (in the case of
embargo, after the embargo period is over) through the public section of the
project website and **ZENODO** ( _https://zenodo.org/_ ) .
ZENODO is an open access repository for all fields of science that allows
uploading any kind of data file formats, which is recommended by the Open
Access Infrastructure for Research in Europe (OpenAIRE).
## 3.1 Data Management Plan (DMP) guiding principles
The Data Management Plan of BLAZE is realized within Work Package 1.
The BLAZE project data management plan follows the principle of Open Access
according to the Horizon 2020 guideline summarized in the diagram here below.
Figure 1. Open access to research data and publication decision diagram (from
Guidelines to the Rules on Open Access to Scientific publications and Open
Access to Research Data in Horizon 2020)
The other main principles for the BLAZE project DMP are the following:
1. This Data Management Plan (DMP) has been prepared taking into account the template of the “Guidelines on Data Management in Horizon 2020”: _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oadata-mgt_en.pdf_
2. The DMP is an official project Deliverable (D1.3) due in Month 6 (August 2019), but it will be updated throughout the project. This initial version will evolve depending on significant changes arising and periodic reviews at relevant reporting stages of the project.
3. The consortium complies with the requirements of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Guidance on how these regulations interact with open-access data policy can be found here: _https://www.openaire.eu/ordp/_
4. Type of data, storage, confidentiality, ownership, management of intellectual property and access: procedures that will be implemented for data collection, storage, access, sharing policies, protection, retention and destruction will be in line with EU standards as described in the Grant Agreement and the Consortium Agreement.
## 3.2 BLAZE Data Management strategy
As a project participating in the Open Research Data Pilot (ORDP) in Horizon
2020, the DMP’s Data Management strategy of BLAZE project is focused on the
observation of FAIR (Findable, Accessible, Interoperable and Reusable) Data
Management Protocols. This document addresses for each data set collected,
processed and/or generated in the project the following elements:
**Dataset reference and name** : internal project identifier for the data set
to be produced. This will follow the format
_**PNumber_TaskNumber__PartnerName_DataSubset_DatasetName_Version__DateOfStorage**_,
where the project name is BLAZE and the Partner Name represents the name of
the data custodian (WP Lead/Task Leader).
**Dataset description** : description of the data generated or collected,
including its origin (in cases where data is collected), nature and scale and
to whom it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the potential for
integration and reuse.
**Standards and metadata** : reference to existing suitable standards. If
these do not exist, an outline on how and what metadata will be created.
**Data sharing** : description of how data will be shared, including access
procedures, embargo periods (if any), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling reuse, and
definition of whether access will be open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating the type of repository (institutional,
standard repository for the discipline, etc.). In cases where the dataset
cannot be shared, the reasons for this will be stated (e.g. ethical, rules of
personal data, intellectual property, commercial, privacy-related, security-
related).
**Archiving and preservation** (including storage and backup): description of
the procedures to be put in place for long-term preservation of the data,
including an indication of how long the data should be preserved, the
approximate end volume, associated costs, and how these are planned to be
covered.
## 3.3 BLAZE type of data
Among project datasets and deliverables, following categories of outputs are
declared “ORDP” that will be made “Open Access” (to be provided free of charge
for public sharing). These will be included in the Open Research Data Pilot
and thus be managed according to the present DMP:
* Project deliverables D2.2., D3.2, D5.4
* Articles published in Open Access scientific journal
* Conference and Workshop abstracts/articles
Once generated (or collected), these data will be stored in several formats,
which are: Documents, Images, Data, and Numerical codes.
In particular the following project deliverables are relevant:
### D.2.2. "Bio-syngas composition and contaminants that affect SOFC and
related gasifier parameters and bed materials to reduce SOFC hazardous
effects"
Bio-syngas composition and contaminants that affect SOFC operation, and
related gasifier parameters and bed materials to reduce SOFC hazardous
effects. It refers to Task 2.2: identify the operating conditions in terms of
S/B, ER, olivine/dolomite ratios and amounts of sorbents to be added in order
to obtain, at the exit of the gasifier, a produced gas with the best
characteristics, i.e. the highest CGE and carbon conversion (90%), as well as
the lowest contents of tar (a few grams/Nm³ dry) and inorganic contaminant
vapours (tens of ppm) connected to the use of in-bed additives. [ENEA – M12]
### D.3.2 "Report summarising the literature review"
This report aims to select, via literature review, the most representative
syngas compositions and contaminants. The indicators of success are the
identification of at least 5 experimental and 5 simulative international
peer-reviewed papers on gasifiers/hot gas conditioning systems, from which at
least 2 representative compositions and 2 organic and 3 inorganic
representative contaminant levels (preferably experimental data, with the
respective gasification and hot gas conditioning systems) can be selected
that can feed the SOFC with acceptable SOFC efficiency, power density and
durability.
### D.5.4 “Assembled CHP system”
The system, starting from the Hot Syngas Conditioner, will be assembled
incorporating the 25 kWe SOFC-stack from SP_YV (Task 5.2), the heat-driven
anode gas recirculator from EPFL (Task 5.4) and the steam generator. For a
proper integration, all interfaces between the various sub-systems and
components will be described in detail by the different supplying partners. It
will be possible to by-pass the SOFC stack and anode gas recirculator during
testing. Before its delivery the new upscaled gas recirculation device is
characterised and extensively tested in the laboratory at EPFL. Electronic
hardware for system control and the electronic control unit as developed in
Task 5.5 will be integrated. A full i/o test will be done. After completion of
the installation, a phase of checks on each unit will be undertaken,
separately and when needed using auxiliary/synthetic gaseous streams, in order
to verify the functionality of the components, control systems and data
acquisition. All components are manufactured and integrated. The system is
successfully tested for its operability.
Summarising, BLAZE generates and collects the following research data relevant
for the DMP:
<table>
<tr>
<th>
**TITLE**
</th>
<th>
**WP No**
</th>
<th>
**LEAD**
**BENEFICIARY**
</th>
<th>
**NATURE**
</th>
<th>
**DISSEMINATION**
</th> </tr>
<tr>
<td>
D2.2. Bio-syngas composition and contaminants that affect SOFC and related
gasifier parameters and bed materials to reduce SOFC hazardous effects
</td>
<td>
WP2
</td>
<td>
ENEA
</td>
<td>
data sets, microdata, etc.
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D3.2 Report summarising the literature review
</td>
<td>
WP3
</td>
<td>
SP
</td>
<td>
ORDP: Open
Research Data
Pilot
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.4 Assembled CHP system
</td>
<td>
WP5
</td>
<td>
HyGEAR
</td>
<td>
Demonstrator
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Articles published in Open Access scientific journal
</td>
<td>
WP8
</td>
<td>
EUBIA
</td>
<td>
Articles/
Research data
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Conference and Workshop abstracts/articles
</td>
<td>
WP8
</td>
<td>
EUBIA
</td>
<td>
Articles/
Research data
</td>
<td>
Public
</td> </tr> </table>
Table 1. BLAZE research data
# 4 FAIR DATA
## 4.1 Making data findable, including provisions for metadata
Metadata is data about the research data themselves. It enables other
researchers to find data in an online repository and is, as such, essential
for the reusability of the dataset. By adding rich and detailed metadata,
other researchers can better determine whether the dataset is relevant and
useful for their own research. Metadata (type of data, location, etc.) will
be uploaded in a standardized form. This metadata will be kept separate from
the original raw research data.
As described in the project Grant Agreement (Article 29.2), the bibliographic
metadata include all of the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable
* a persistent identifier
BLAZE open data will be collected in an open online research data repository:
**ZENODO** . Its repository structure, facilities and management are in
compliance with FAIR data principles. ZENODO is an OpenAIRE repository that
allows researchers to deposit both publications and data, providing tools to
link them through persistent identifiers and data citations. ZENODO is set up
to facilitate the finding, accessing, re-using and interoperating of data
sets, which are the basic principles that ORD projects must comply with. The
Zenodo repository is provided by OpenAIRE and hosted by CERN. Zenodo is a
catch-all repository that enables researchers, scientists, EU projects and
institutions to:
* Share research results in a wide variety of formats including text, spreadsheets, audio, video, and images across all fields of science.
* Display their research results and get credited by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission.
* Easily access and reuse shared research results.
* Integrate their research outputs with the OpenAIRE portal.
### Search keywords
Zenodo allows users to perform simple and advanced search queries using
keywords. Zenodo also provides a user guide with easy-to-understand examples.
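For illustration, deposits to Zenodo can also be automated through its
documented REST API ( _https://developers.zenodo.org_ ). The sketch below
assumes the Python `requests` package; the access token, file name and
metadata values are placeholders, and error handling is omitted for brevity.

```python
# Hedged sketch of depositing a file on Zenodo via its REST API.
# ACCESS_TOKEN, the file name and the metadata are placeholders.
import requests

ACCESS_TOKEN = "REPLACE_WITH_PERSONAL_TOKEN"  # created in Zenodo account settings
params = {"access_token": ACCESS_TOKEN}

# 1. Create an empty deposition.
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params=params, json={})
deposition = r.json()

# 2. Upload a data file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("BLAZE_D2.2_20200831_V01.csv", "rb") as fp:  # hypothetical file
    requests.put(bucket_url + "/BLAZE_D2.2_20200831_V01.csv",
                 data=fp, params=params)

# 3. Attach minimal metadata before publishing.
metadata = {"metadata": {
    "title": "BLAZE example dataset",
    "upload_type": "dataset",
    "description": "Illustrative deposit only.",
    "creators": [{"name": "Doe, Jane"}],
}}
requests.put(deposition["links"]["self"], params=params, json=metadata)
```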
### Naming conventions
Files and folders in data repositories will be versioned and structured using
a naming convention of the following form: **BLAZE_Dx.y_YYYYMMDD_Vzz.doc**

_Version numbers_

Individual file names will contain version numbers that will be incremented
at each revision (Vzz).
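A minimal sketch of a helper that produces names following this convention is
shown below; the deliverable number, date and version used are illustrative.

```python
# Sketch of a helper building file names per the BLAZE convention
# BLAZE_Dx.y_YYYYMMDD_Vzz.<ext>. All inputs are illustrative examples.
from datetime import date
from typing import Optional

def blaze_filename(deliverable: str, version: int, ext: str = "doc",
                   on: Optional[date] = None) -> str:
    """Return a name such as 'BLAZE_D2.2_20200831_V03.doc'."""
    on = on or date.today()
    return "BLAZE_D{}_{:%Y%m%d}_V{:02d}.{}".format(deliverable, on, version, ext)

print(blaze_filename("2.2", 3, on=date(2020, 8, 31)))
# -> BLAZE_D2.2_20200831_V03.doc
```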
## 4.2 Making data openly accessible
In order to maximise the impact of BLAZE research data, the results are shared
within and beyond the consortium. Selected data and results will be shared
with the scientific community and other stakeholders through publications in
scientific journals and presentations at conferences, as well as through open
access data repositories.
The BLAZE project datasets are first stored and organized in a database by the
data owners (personal computer, or on the institutional secure server) and on
the project database (project website). All data are made available for
verification and re-use, unless the task leader can justify why data cannot be
made openly accessible. To protect the copyright of the project knowledge,
Creative Commons license will be used in some cases. The BLAZE dataset
deliverables are both public (data access policy unrestricted) and they will
be accessible by:
* BLAZE project web site
* Partners database
* OpenAIRE
* ZENODO ( https://zenodo.org ) for ORDP data and datasets
* Open access journals
All data deposited on ZENODO are accessible without restriction for public.
For other data, potential users must contact the IPR team or the data owner in
order to gain access. If necessary, appropriate IPR procedure (such as non-
disclosure agreement) will be used.
## 4.3 Making data interoperable
Partners will observe OpenAIRE guidelines for online interoperability,
including OpenAIRE Guidelines for Literature Repositories, OpenAIRE Guidelines
for Data Archives, OpenAIRE Guidelines for CRIS Managers based on CERIF-XML.
These guidelines can be found at:
_https://guidelines.openaire.eu/en/latest/._ Partners will also ensure that
BLAZE data observes FAIR data principles under H2020 open-access policy:
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-
oadatamgt_en.pdf_
In order to ensure the interoperability, all datasets will use the same
standards for data and metadata capture/creation.
As the project progresses and data is identified and collected, further
information on making data interoperable will be outlined in subsequent
versions of the DMP. In specific, information on data and metadata
vocabularies, standards or methodology to follow to facilitate
interoperability and whether the project uses standard vocabulary for all data
types present to allow interdisciplinary interoperability.
## 4.4 Increase data re-use (through clarifying licences)
Creative Commons licensing will be used to protect the ownership of the
datasets. Both ShareAlike and NonCommercial-ShareAlike licenses will be
considered for the parts of the datasets which the Consortium has decided to
make public.
However, an embargo period may be applied if the data (or parts of data) are
used in published articles in “Green” open access journals. The recommended
maximum embargo period length by European Commission is 6 months.
For datasets deposited on a public data repository (ZENODO) the access is
unlimited.
Restrictions on re-use policy are applied for all protected data (see Figure
1: Open access to research data and publication decision diagram), whose re-
use will be limited within the project partners.
Other restrictions could include:
* the “embargo” period imposed by journals publication policy (Green Open access);
* some or all of the following restrictions may be applied with Creative Commons licensing of the dataset:
* Attribution: requires users of the dataset to give appropriate credit, provide a link to the license, and indicate if changes were made.
* NonCommercial: prohibits the use of the dataset for commercial purposes by others.
* ShareAlike: requires the others to use the same license as the original on all derivative works based on the original data.
An internal quality evaluation process is active throughout the entire
project duration to assess both project data/products and the project process
(see the D1.2 Quality Assurance Plan and Report for project monitoring and
risk management). An internal peer review is performed for the main project
deliverables to guarantee that each deliverable is developed with a high
level of quality. Each WP leader has to submit all produced documents to
another partner, assigned as internal reviewer, to check the quality of the
documents produced.

The project data will remain re-usable for at least 1 year.
## 4.5 DMP Review Process & data inventory
An internal process of quality evaluation and reporting is active throughout
the entire project duration to assess both project data/products and the
project process (see the D1.2 Quality Assurance Plan and Report for project
monitoring and risk management). Results data will also be analysed and
collected throughout the entire project duration. To this purpose, the
Dissemination and Communication Report (see the D8.3 Communication and
Dissemination Plan) will be filled in by each partner approximately every six
months; it includes the description of articles, papers and scientific
publications. Thus, all research data generated and all publications will be
analysed and described using the Data Inventory Table (Annex I), which WP
leaders and partner authors of publications are required to fill in
periodically.

Further updates of the Data Management Plan will include any updates to the
online research data repository where data are collected and shared, as well
as the description of the datasets and research data gradually generated and
collected.
# 5 ALLOCATION OF RESOURCES
Costs related to open-access to research data in Horizon 2020 are eligible for
reimbursement under the conditions defined in the H2020 Grant Agreement, in
particular Article 6 and Article 6.2.D.3, but also other articles relevant for
the cost category chosen. Project beneficiaries will be responsible for
applying for reimbursement for costs related to making data accessible to
others beyond the consortium.
The costs for making data FAIR include:
* Fees associated with the publication of scientific articles containing project’s research data in “Gold” Open access journals. The cost sharing, in case of multiple authors, shall be decided among the authors on a case-by-case basis.
* Project Website operation: to be determined
* Data archiving at ZENODO and on other on line data base: free of charge
* Copyright licensing with Creative Commons: free of charge
The project members of the General Assembly are also responsible for the data
management of the BLAZE datasets and research data, in accordance with each
organization's internal Data Protection Officer (DPO).

Each partner is responsible for the data they produce. Any fee incurred for
Open Access through scientific publication of the data will be the
responsibility of the data owner (author) partner(s).
# 6 DATA SECURITY
The following guidelines will be followed in order to ensure the security of
the data (a sketch of the first guideline follows the list):

* Store data in at least two separate locations to avoid loss of data;
* Encrypt data if it is deemed necessary by the participating researchers;
* Limit the use of USB flash drives;
* Label files in a systematically structured way in order to ensure the coherence of the final dataset.
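As a minimal sketch of the "two separate locations" guideline, the snippet
below copies a file to a second location and, as an extra integrity check
beyond what the list requires, verifies the copy with a SHA-256 checksum. The
paths are hypothetical examples.

```python
# Sketch: keep data in two locations and verify the backup's integrity.
# Paths are hypothetical examples.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as fp:
        for chunk in iter(lambda: fp.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

src = Path("data/BLAZE_D3.2_20200831_V01.csv")    # primary location
dst = Path("backup/BLAZE_D3.2_20200831_V01.csv")  # second location
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(src, dst)
assert sha256(src) == sha256(dst), "backup copy is corrupted"
```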
All project deliverables and data will be stored and shared in the Google
Drive folder restricted to the project consortium. As an initial step, only
the Consortium Partners will have access to the cloud storage where datasets
and metadata are filed. Subsequently, scientific publications and articles,
the dataset deliverables and the final demonstrator research results will be
shared through ZENODO and other databases to promote making the data FAIR.
# 7 ETHICAL ASPECTS
Work package 9 aims to ensure that ethical requirements are met for all
research undertaken in the project, including data management aspects, in
compliance with H2020 ethical standards. All partners will ensure that the EU
standards regarding ethics and data management are fulfilled, in compliance
with the ethical principles (see Article 34) and confidentiality (see Article
36) as set out in the Grant Agreement. In addition:

1. In accordance with the General Data Protection Regulation 2016/679, the data controllers and processors are fully accountable for the data processing operations.
2. Templates for informed consent forms and information sheets are also available. More details in relation to Ethics (and Security) in relation to Data Management can be found in Section 5 of the Grant Agreement.
3. The BLAZE consortium also includes Switzerland as a non-EU consortium member, and project data will be exchanged between the partners at all times during the project.
See the following deliverables for more details:
* D.9.1 H - Requirement No. 1
* D.9.2 POPD - Requirement No. 2
* D.9.3 EPQ - Requirement No. 3
# 8 CONCLUSIONS
This document describes the main principles and guidelines for Data
Management in the BLAZE project. As a living document, it will be updated
throughout the project lifetime. Further updates of the Data Management Plan
will include any updates to the online research data repository where data
are collected and shared, as well as the description of the datasets and
research data gradually generated and collected.
1487_FLOTANT_815289.md
**DISTRIBUTION LIST**

<table>
<tr> <th> Copy no. </th> <th> Company/Organization (country) </th> <th> Name and surname </th> </tr>
<tr> <td> 1 </td> <td> PLOCAN (ES) </td> <td> Ayoze Castro, Alejandro Romero, Octavio Llinás </td> </tr>
<tr> <td> 2 </td> <td> UNEXE (UK) </td> <td> Lars Johanning, Philipp Thies, Giovanni Rinaldi </td> </tr>
<tr> <td> 3 </td> <td> UEDIN (UK) </td> <td> Henry Jeffrey, Anna García, Simon Robertson </td> </tr>
<tr> <td> 4 </td> <td> AIMPLAS (ES) </td> <td> Ferrán Martí, Blai López </td> </tr>
<tr> <td> 5 </td> <td> ITA-RWTH (DE) </td> <td> Thomas Koehler, Dominik Granich, Oscar Bareiro </td> </tr>
<tr> <td> 6 </td> <td> MARIN (NL) </td> <td> Erik-Jan de Ridder, Sebastien Gueydon </td> </tr>
<tr> <td> 7 </td> <td> TFI (IE) </td> <td> Paul McEvoy </td> </tr>
<tr> <td> 8 </td> <td> ESTEYCO (ES) </td> <td> Lara Cerdán, Javier Nieto, José Serna </td> </tr>
<tr> <td> 9 </td> <td> INNOSEA (FR) </td> <td> Mattias Lynch, Rémy Pascal, Hélène Robic </td> </tr>
<tr> <td> 10 </td> <td> INEA (SI) </td> <td> Igor Steiner, Aleksander Preglej, Marijan Vidmar </td> </tr>
<tr> <td> 11 </td> <td> TX (UK) </td> <td> Sean Kelly </td> </tr>
<tr> <td> 12 </td> <td> HB (UK) </td> <td> Ian Walters </td> </tr>
<tr> <td> 13 </td> <td> FULGOR (EL) </td> <td> George Georgallis </td> </tr>
<tr> <td> 14 </td> <td> AW (HR) </td> <td> Miroslav Komlenovic </td> </tr>
<tr> <td> 15 </td> <td> FF (ES) </td> <td> Bartolomé Mas </td> </tr>
<tr> <td> 16 </td> <td> COBRA (ES) </td> <td> Sara Muñoz, Rubén Durán, Gregorio Torres </td> </tr>
<tr> <td> 17 </td> <td> BV (FR) </td> <td> Claire-Julie, Jonathan Boutrot, Jonathan Huet </td> </tr>
</table>
**Acknowledgements**
Funding for the FLOTANT project (Grant Agreement No. 815289) was received from
the EU Commission as part of the H2020 research and Innovation Programme.
The help and support, in preparing the proposal and executing the project, of
the partner institutions is also acknowledged: Plataforma Oceánica de Canarias
(ES), The University of Exeter (UK),The University of Edinburgh (UK), AIMPLAS-
Asociación de Investigación Materiales Plásticos y Conexas (ES), Rheinisch-
Westfaelische Technische Hochschule Aachen
(DE), Stichting Maritiem Research Instituut Nederland (NL), Technology From
Ideas Limited (IE), Esteyco SA (ES), Innosea (FR), Inea Informatizacija
Energetika Avtomatizacija DOO (SI), Transmission Excellence Ltd (UK), Hydro
Bond Engineering Limited (UK), FULGOR S.A., Hellenic Cables Industry (EL),
Adria Winch DOO (HR), Future Fibres (ES), Cobra Instalaciones y Servicios S.A
(ES), Bureau Veritas Marine & Offshore Registre International de
Classification de Navires et Plateformes Offshore (FR).
**Abstract**
Deliverable D9.11 “Data Management Plan” (DMP) is produced within Work
Package WP9, related to the Dissemination and Communication of the FLOTANT
project.

The aim of this FLOTANT DMP is to establish guidelines for the Consortium on
the procedure for collecting and storing data that will be produced in the
framework of the project.

This Data Management Plan presents the types of data and formats that will be
created in the different Work Packages; what methodologies or standards are
used; data availability, i.e. whether it will be open access or confidential;
size; how data will be disseminated during the project and how data will be
available after the conclusion of the project (re-use); to whom it is useful
and who is responsible.
# DATA SUMMARY
This section will provide an overview of the different datasets that will be
created, collected or processed in the FLOTANT project.
<table>
<tr>
<th>
**Type of Data/Format**
</th>
<th>
**Open Access**
</th>
<th>
**Confidential and why**
</th>
<th>
**Size**
</th>
<th>
**How will data be disseminated during**
**project**
</th>
<th>
**How data is available after project (re-use)**
</th>
<th>
**Data utility**
</th>
<th>
**Lead Partner**
</th>
<th>
**WP**
</th> </tr>
<tr>
<td>
Mooring and Anchoring System design / *doc,
*pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to IPR protection and to
preserve legitimate commercial interests.
</td>
<td>
To be defined
</td>
<td>
Deliverable D.2.1
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
TFI
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
Parameter set for hybrid polymer carbon fibre yarns
/ *doc, *pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to IPR protection and to
preserve legitimate commercial interests.
</td>
<td>
To be defined
</td>
<td>
Deliverable D2.2
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
ITA
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
Hybrid polymer carbon fibre mooring cables - 20 tons / 100 tons
/ *doc, *pdf
</td>
<td>
Current FF cable production technology will be combined with the novel anti-
bite and biofouling solutions developed by AIMPLAS. Sensors will also be
embedded into the cable structure for its continuous stress/strain
monitoring.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverables
D.2.3 & D.2.4
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, manufactures and research community
</td>
<td>
ITA
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
Polymer spring
component
design / *doc,
*pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to IPR protection and
to preserve legitimate commercial interests.
</td>
<td>
To be defined
</td>
<td>
Deliverable D.2.5
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
TFI
</td>
<td>
WP2
</td> </tr> </table>
<table>
<tr>
<th>
Active Heave Compensation design / *doc,
*pdf
</th>
<th>
\-
</th>
<th>
Confidential, only for members of the consortium due to IPR and to preserve
legitimate commercial interests
</th>
<th>
To be defined
</th>
<th>
Deliverable D.2.5
</th>
<th>
It will be kept in PLOCAN and lead partner repository
</th>
<th>
Developers and manufacturers
</th>
<th>
AW
</th>
<th>
WP2
</th> </tr>
<tr>
<td>
Integrated sensing/ *doc,
*pdf
</td>
<td>
The results obtained from the testing of D.2.3 & D.2.4 will be analyzed and
published in this report
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.2.7
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
FF
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
Deliver connector
72.5 kV prototype
/ *doc, *pdf
</td>
<td>
Description of the connector which will be manufactured and lab test results.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.3.1
and D3.2
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
HB
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
Insulated core of dynamic 72.5 kV cable for aging testing / *doc,
*pdf
</td>
<td>
The insulated core will be produced in FULGOR manufacturing facilities and
will be verified according to FULGOR specifications (i.e. produced length,
cross section, insulation thickness, DC resistance of conductor, routine
voltage test, partial discharge test etc.) and a report will be issued. The
insulated cable core will be used for the 2-year water aging test according
to Cigre TB722 and a report will be issued as described.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.3.3
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
FULGOR
</td>
<td>
WP3
</td> </tr> </table>
<table>
<tr>
<th>
Final 72.5 kV dynamic cable sample / *doc,
*pdf
</th>
<th>
Development of the complete cable with novel outer armouring, involving the
production of a complete 72.5 kV dynamic submarine cable sample. The complete
cable will be produced and a report will be issued.
</th>
<th>
\-
</th>
<th>
To be defined
</th>
<th>
Project website, social media, deliverable D.3.4
</th>
<th>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</th>
<th>
Developers, research community
</th>
<th>
FULGOR
</th>
<th>
WP3
</th> </tr>
<tr>
<td>
Local cable
component
analysis and fatigue modelling
/ *doc, *pdf
</td>
<td>
Overview of the local cable analysis methods and results. The analysis will
enable a meaningful
comparison against current cable
designs and cable design variations.
The results will provide KPIs for each of the innovation measures and will
allow estimating the overall systems gain.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.3.5
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
FULGOR
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
Structural and
Naval Architecture design basis / *doc, *pdf
</td>
<td>
This data will be a description of
the design criteria, the relevant
standards to be taken into account during the design process, the
verification criteria, the description
of the metocean conditions, the selected turbines, their main
features and the reference sample wind farms to be considered as input through
the design process.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.4.1
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
ESTEYCO
</td>
<td>
WP4
</td> </tr> </table>
<table>
<tr>
<th>
Specifications of a generic wind turbine / *doc,
*pdf
</th>
<th>
To provide a realistic model of the wind turbine for the investigation of the
floater global performance and thus loading in the mooring lines and the power
cable.
</th>
<th>
\-
</th>
<th>
To be defined
</th>
<th>
Project website, social media, deliverable D.4.2
</th>
<th>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</th>
<th>
Developers, research community
</th>
<th>
INNOSEA
</th>
<th>
WP4
</th> </tr>
<tr>
<td>
Naval architecture and structural report / *doc,
*pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to IPR protection and to
preserve legitimate commercial interests.
</td>
<td>
To be defined
</td>
<td>
Deliverable D.4.3 &
D.4.4
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
ESTEYCO
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
Integrated modelling, code-to-code comparison / *doc, *pdf
</td>
<td>
Definition of the floater model and estimation of its performances.
Another main goal is to provide loading input to other work
packages for specific equipment
design i.e. particularly mooring and power cable. This Deliverable will
finally include the description of
scaled model of the FOWT with its
mooring and cable which will be used as input for code-to-code comparison
performed in WP5
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.4.5
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
INNOSEA
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
Dynamic cable
Configuration,
CFD and loadings
/ *doc, *pdf
</td>
<td>
Report of the numerical study focusing on the viscous loading on the dynamic
cable for ULS. Conclusions on the pertinence of using a more advanced method
than the state-of-the-art method will be included in this report.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.4.6
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
MARIN
</td>
<td>
WP4
</td> </tr> </table>
<table>
<tr>
<th>
Feasibility and
economic study for floating substation / *doc,
*pdf
</th>
<th>
Feasibility and economic study of a floating substation aiming to identify
cost drivers and to optimise cost at a wind farm level.
<th>
\-
</th>
<th>
To be defined
</th>
<th>
Project website, social media, deliverable D.4.7
</th>
<th>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</th>
<th>
Developers, research community
</th>
<th>
ESTEYCO
</th>
<th>
WP4
</th> </tr>
<tr>
<td>
Novel mooring components
performance and durability / *doc,
*pdf
</td>
<td>
This will report on the test setup, program and results for large-scale
performance and durability testing of the novel ‘shock absorber’ mooring
components (MSA).
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.5.1
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
UNEXE
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Specifications for performing the reduced scale tests / *doc, *pdf
</td>
<td>
It will contain the information regarding the specifications and
parameters to be tested along the campaign for the floating platform.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.5.2
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
ESTEYCO
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Reduced scale model design and construction / *doc, *pdf
</td>
<td>
Specifications of the design and construction of the scaled model that will be
used for the wave basin model-tests.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.5.3
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
MARIN
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Results of wave tank tests / *doc,
*pdf
</td>
<td>
Data reports and conclusions drawn from the analysis of the wave basin model-
tests.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.5.4
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
MARIN
</td>
<td>
WP5
</td> </tr> </table>
<table>
<tr>
<th>
Report on VIV (hydrodynamic) behaviour / *doc,
*pdf
</th>
<th>
Data reports and conclusions drawn from the analysis of the towing tank model-
tests.
</th>
<th>
\-
</th>
<th>
To be defined
</th>
<th>
Project website, social media, deliverable D.5.5
</th>
<th>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</th>
<th>
Developers, research community
</th>
<th>
MARIN
</th>
<th>
WP5
</th> </tr>
<tr>
<td>
Power cable characteristics /
*doc, *pdf
</td>
<td>
Description of the tests and test results and assessment of test results
according to FULGOR inspection and test plan.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.5.6,
D.5.7 & D.5.8
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
UNEXE &
FULGOR
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Antifouling and
Anti-bite test /
*doc, *pdf
</td>
<td>
The samples exposed during different periods in sea water conditions will be
evaluated according to the methodology described in standards ASTM D3623 and
ASTM D6990 and compared with a sample without anti-bite and anti-fouling
additives.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.5.9
& D.5.10
</td>
<td>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers, research community
</td>
<td>
PLOCAN &
AIMPLAS
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Control system, sensors and supervision system / *doc,
*pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to IPR protection and to
preserve legitimate commercial interests
</td>
<td>
To be defined
</td>
<td>
Deliverable D.6.1
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
INEA
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
Installation and
O&M / *doc, *pdf
</td>
<td>
Information on suitable installation and removal techniques, as well as
suggested O&M strategies,
according to farm design and proposed innovations.
</td>
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, deliverable D.6.2,
D.6.3, D.6.4 & D.6.5
</td>
<td>
This information will be publicly shared through the
FLOTANT website and social media. It will be kept in
PLOCAN repository
</td>
<td>
Developers, research community
</td>
<td>
COBRA &
UNEXE
</td>
<td>
WP6
</td> </tr> </table>
<table>
<tr>
<th>
Techno-economic, environmental and socioeconomic impact assessments / *doc,
*pdf
</th>
<th>
General results on techno-economic, environmental and socio-economic impacts
will be made available to the public to contribute to the advancement in the
understanding of impacts caused by floating wind systems and particularly by
the innovations introduced within FLOTANT.
</th>
<th>
\-
</th>
<th>
To be defined
</th>
<th>
Project website, social media, deliverable D.7.1,
D.7.2, D.7.3 & D.7.4
</th>
<th>
It will be publicly shared through the FLOTANT
website and social media
and it will be kept in PLOCAN and lead partner repository
</th>
<th>
Developers, research community
</th>
<th>
COBRA,
UEDIN &
PLOCAN
</th>
<th>
WP7
</th> </tr>
<tr>
<td>
Design Basis / *doc, *pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to IPR protection and to
preserve legitimate commercial interests
</td>
<td>
To be defined
</td>
<td>
Deliverable D.8.1 &
D.8.2
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
BV
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
Business plan & commercialization strategy / *doc,
*pdf
</td>
<td>
\-
</td>
<td>
Confidential, only for members of the consortium due to legitimate commercial
interests.
</td>
<td>
To be defined
</td>
<td>
Deliverable D.8.4 &
D.8.5
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
COBRA
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
FLOTANT Data
Base/ *xls
</td>
<td>
\-
</td>
<td>
Data will be collected and processed confidentially to preserve legitimate
commercial interests. This primary data will be processed to calculate O&M
strategies, LCOE, LCA and GVA. The result of the processing will be openly
accessible in the CAPEX and OPEX reduction study.
</td>
<td>
To be defined
</td>
<td>
Project Intranet
</td>
<td>
It will be kept in PLOCAN and lead partner repository
</td>
<td>
Developers and manufacturers
</td>
<td>
UNEXE &
UEDIN
</td>
<td>
WP6 &
WP7
</td> </tr>
<tr>
<td>
CDE material /
*doc, *pdf, *mp4, * jpg, *png
</td>
<td>
Dissemination material elaborated for the FLOTANT target audience, reached by
different established CDE measures to maximize project impact.
<td>
\-
</td>
<td>
To be defined
</td>
<td>
Project website, social media, mass media,
deliverable D.9.1, D.9.2,
D.9.3, D.9.4, D.9.5, D.9.7 & D.9.8
</td>
<td>
CDE material will be publicly shared through the FLOTANT website, social media
and
dissemination events. It will be kept in PLOCAN repository
</td>
<td>
Developers, research
community & general society
</td>
<td>
PLOCAN
</td>
<td>
WP9
</td> </tr>
<tr>
<td>
General society personal data / *doc, *pdf
</td>
<td>
\-
</td>
<td>
Restricted due to Data Protection
</td>
<td>
To be defined
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
Developers,
industry, research community
</td>
<td>
PLOCAN
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
Advisory and
Stakeholders Board (ASB) personal data /
*doc, *pdf
</td>
<td>
\-
</td>
<td>
Restricted due to Data Protection
</td>
<td>
To be defined
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
Developers,
industry, research community
</td>
<td>
PLOCAN
</td>
<td>
WP1 &
WP9
</td> </tr>
<tr>
<td>
Social-Acceptance
Survey / *doc,
*pdf
</td>
<td>
\-
</td>
<td>
Restricted due to Data Protection
</td>
<td>
To be defined
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
Developers, industry, research community
</td>
<td>
PLOCAN
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
Workshop and webinars
participants list /
*doc, *pdf
</td>
<td>
\-
</td>
<td>
Restricted due to Data Protection
</td>
<td>
To be defined
</td>
<td>
\-
</td>
<td>
\-
</td>
<td>
Developers, industry, research
community, public administration
</td>
<td>
UEDIN,
UNEXE &
PLOCAN
</td>
<td>
WP9
</td> </tr> </table>
TABLE 1 DATA SUMMARY
# FAIR DATA
FLOTANT Communication, Dissemination and Exploitation actions will focus on
building a stakeholder community that can be sustained and increased during
and after the project lifetime. The consortium will sponsor a broad
dissemination and communication plan for research and policy communities
based on traditional and innovative approaches, including Gold open
publishing and FAIR (Findable, Accessible, Interoperable and Reusable) data
principles.
## Making data findable, including provision for metadata
The FLOTANT project will produce different types of data, which will be
stored on AdminProject as the main repository. AdminProject is a
collaborative portal specifically created for EU-funded projects that
provides several management tools, as well as a repository and data-sharing
point available to all partners with a specific user and password. All data
types will have a clear description, the creation date (yymmdd), the project
name (FLOTANT or FLT), the partner responsible for creating the data, its
format, its version (as a rule, the first version will be v0, and the creator
of the data will be responsible for the version numbering), information on
all modifications of the data, and keywords (metatags). Adequate keywords
will allow data to be findable; a sketch of such a record follows below.
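As a minimal sketch of such a record, the helper below assembles the fields
listed above. The helper name and all values are illustrative assumptions,
not part of the FLOTANT convention itself.

```python
# Sketch of the descriptive fields attached to each FLOTANT data item:
# description, creation date (yymmdd), project tag, responsible partner,
# format, version (first version v0) and keywords. Values are examples.
from datetime import date
from typing import List, Optional

def flotant_record(description: str, partner: str, fmt: str, version: int,
                   keywords: List[str], created: Optional[date] = None) -> dict:
    created = created or date.today()
    return {
        "project": "FLT",                         # project name tag (FLOTANT or FLT)
        "description": description,
        "created": "{:%y%m%d}".format(created),   # yymmdd, per the convention
        "responsible_partner": partner,
        "format": fmt,
        "version": "v{}".format(version),         # first version is v0
        "keywords": keywords,                     # metatags making the data findable
    }

print(flotant_record("Wave basin test results", "MARIN", "pdf", 0,
                     ["floating wind", "model tests"],
                     created=date(2019, 11, 5)))
```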
All documents classified as public will be published on the FLOTANT website
( _www.flotantproject.eu_ ) and may also be published on other social media
platforms, such as Facebook, LinkedIn or Twitter.
The FLOTANT project will ensure that data are findable through the
bibliographic metadata that identify the deposited publication. The
bibliographic metadata will be in a standard format and will include all
items indicated in Article 29.2 of the Grant Agreement.
## Making data openly accessible
FLOTANT project Partners will have to provide open access to all
peer-reviewed scientific publications relating to project results, according
to Article 29.2 of the Grant Agreement and the H2020 Guidelines on Open
Access to Scientific Publications (EC, 2013). There are two possible ways of
publication: green open access or gold open access. Therefore, the authors of
all peer-reviewed scientific publications will choose the most appropriate
way of publishing their results, and any scientific peer-reviewed publication
can be read online, downloaded and printed.
FLOTANT Consortium agrees with the following principles of the Europe 2020
Strategy for a smart, sustainable and inclusive economy, as well as, with the
EC Guidelines on Open Access to Scientific Publications and Research Data in
Horizon 2020:
* Modern research builds on extensive scientific dialogue and advances by improving earlier work.
* Knowledge and Innovation are crucial in generating growth.
* A broader access to scientific publications and data therefore helps to: (1) build on previous research results (improved quality of results); (2) encourage collaboration and avoid duplication of effort (greater efficiency); (3) speed up innovation (faster progress to market means faster growth); (4) involve citizens and society (improved transparency of the scientific process).
For these reasons, FLOTANT partners, in compliance with Article 29.2 of the
EC Grant Agreement, and by means of a combination of the two main routes to
open access (Green and Gold), will ensure open access to all peer-reviewed
scientific publications relating to project results. Figure 1 shows the flow
of FLOTANT data to meet the Open Access policy. To meet this requirement, the
beneficiaries will, at the very least, ensure that any scientific
peer-reviewed publications can be read online, downloaded and printed. Since
any further rights - such as the right to copy, distribute, search, link,
crawl and mine - make publications more useful, beneficiaries should make
every effort to provide as many of these options as possible.
FIGURE 1.REASEARCH DATA FLOW, OPTIONS AND TIMING
FLOTANT proposes a complete range of activities leading to optimal visibility
of the project and its results, increasing the likelihood of market uptake and
ensuring smooth handling of the partners’ IPR, thus paving the way to
knowledge transfer. Internal knowledge management will be facilitated through
a web-based secure collaborative space (the AdminProject intranet described in
D1.2 Project Intranet) for information and document sharing.

FLOTANT partners already have solid individual IPR strategies; the ownership
of the knowledge related to the project (background) has already been
protected under diverse IPR mechanisms, as will the foreground. The project
will follow the provisions of H2020 on knowledge management and protection, as
set out in the Grant Agreement and to be developed in the Consortium Agreement
(CA). Without prejudice to the above, FLOTANT will facilitate the sharing of
main results and public deliverables within and beyond the consortium through
the project website. Nevertheless, open access must remain compatible with IPR
management. The IPR strategy is described in D8.3 “IPR management Plan”, which
is based on the project Consortium Agreement (background IP will belong to the
individual partners, and arising IP specific to an innovation will belong to
the developer partner) and will establish rules for the use of foreground,
side-ground and background knowledge and its distribution within the project,
as well as rules for handling sensitive or confidential information. This IPR
strategy will be focused and specific in order to best protect the innovations
and knowledge developed. For the reasons listed above, two levels of
accessibility have been established, as described in the table below.
<table>
<tr>
<th>
Deliverable No.
</th>
<th>
Title
</th>
<th>
Dissemination level
</th> </tr>
<tr>
<td>
D1.1
</td>
<td>
Project Management Guide
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D1.2
</td>
<td>
FLOTANT intranet portal operative
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D1.3
</td>
<td>
System Engineering Management Plan
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D2.1
</td>
<td>
Mooring and Anchoring System Design
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
Parameter set for hybrid polymer carbon fibre yarns
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D2.3
</td>
<td>
Hybrid polymer carbon fibre mooring cables - 20 tons
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D2.4
</td>
<td>
Hybrid polymer carbon fibre mooring cables - 100 tons
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D2.5
</td>
<td>
Polymer spring component design report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D2.6
</td>
<td>
Active Heave Compensation design report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D2.7
</td>
<td>
Integrated sensing report
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D3.1
</td>
<td>
Deliver connector 72.5 kV prototype
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D3.2
</td>
<td>
Novel connector specifications and lab tests
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
Insulated core of dynamic 72.5 kV cable for aging testing
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D3.4
</td>
<td>
Final 72.5 kV dynamic cable sample
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D3.5
</td>
<td>
Local cable component analysis and fatigue modelling
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D4.1
</td>
<td>
Structural and Naval Architecture design basis
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D4.2
</td>
<td>
Specifications of a generic wind turbine
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D4.3
</td>
<td>
Naval architecture report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D4.4
</td>
<td>
Structural analysis report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D4.5
</td>
<td>
Integrated modelling, code-to-code comparison
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D4.6
</td>
<td>
Dynamic cable Configuration, CFD and loadings
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D4.7
</td>
<td>
Feasibility and economic study for floating substation
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.1
</td>
<td>
Novel mooring components performance and durability
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.2
</td>
<td>
Specifications for performing the reduced scale-tests
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.3
</td>
<td>
Reduced scale model design and construction
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.4
</td>
<td>
Results of wave tank tests
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.5
</td>
<td>
Report on VIV (hydrodynamic) behaviour
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.6
</td>
<td>
Report on mechanical power cable characteristics
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.7
</td>
<td>
Report electrical power cable characteristics
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.8
</td>
<td>
Report on insulated core testing after aging is completed
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.9
</td>
<td>
Detail Antifouling and Anti-bite test plan
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.10
</td>
<td>
Antifouling and Anti-bite test results report
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D6.1
</td>
<td>
Control system, sensors and supervision system report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D6.2
</td>
<td>
Installation processes
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D6.3
</td>
<td>
Marine management strategy & offshore operations
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D6.4
</td>
<td>
Proactive maintenance strategies based on failure prognostic
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D6.5
</td>
<td>
O&M optimization processes
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D7.1
</td>
<td>
LCOE Techno-economic assessment
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D7.2
</td>
<td>
Viability and sensitivity studies on FLOTANT solutions
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D7.3
</td>
<td>
Environmental Life Cycle Assessment
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D7.4
</td>
<td>
Social and Socio-economic assessment
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D8.1
</td>
<td>
Preliminary Design Basis report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D8.2
</td>
<td>
Final approval of the Design Basis
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D8.3
</td>
<td>
IPR management plan
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D8.4
</td>
<td>
Integrated business models report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D8.5
</td>
<td>
Commercialization strategies and Market uptake report
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D9.1
</td>
<td>
FLOTANT initial CDEP
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.2
</td>
<td>
FLOTANT basic CDE package
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.3
</td>
<td>
Initial (Communication & Dissemination) video
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.4
</td>
<td>
Final (Dissemination and Exploitation) video
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.5
</td>
<td>
1st Annual CDEP Update
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.6
</td>
<td>
2nd Annual CDEP Update
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.7
</td>
<td>
3rd Annual CDEP Update
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.8
</td>
<td>
FLOTANT Workshops Report
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.9
</td>
<td>
FLOTANT Webinars Report
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.10
</td>
<td>
FLOTANT Policy Brief
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D9.11
</td>
<td>
Data Management Plan
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D10.1
</td>
<td>
H - Requirement No. 1
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D10.2
</td>
<td>
POPD - Requirement No. 2
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D10.3
</td>
<td>
EPQ - Requirement No. 3
</td>
<td>
Confidential, only for members of the
consortium (including the Commission Services)
</td> </tr> </table>
TABLE 2 DISSEMINATION LEVEL OF THE DELIVERABLES
Finally, other data sets have a restricted level of public accessibility,
either to protect personal data that may be collected within them or in
accordance with the IPR strategy. Other data sets are not considered relevant
for public access, even though there are no specific legal restrictions that
would make them confidential. The restrictions have been described and
justified in Table 1.1.
Personal data will be collected and stored according to point 3 of this
deliverable, under the terms of the General Data Protection Regulation (EU)
2016/679.
## Making data interoperable
The Project Coordinator PLOCAN will be in charge of making sure that
provisions on scientific publications and guidelines on Data Management in
H2020 are adhered to. As indicated, scientific research data should be easily
discoverable, accessible, assessable and intelligible, useable beyond the
original purpose for which it was collected, and interoperable to specific
quality standards.
All the public data that will be produced in the FLOTANT project will use
standard formats, such as Microsoft Office extensions (*.docx, *.xlsx, *.pptx,
etc.) or Portable Document Format from Adobe (*.pdf) for deliverables, papers
and publications; also common extensions for videos (*.mp4 or *.mov) and
pictures (*.jpg or *.png).
Beyond these standard file formats, FLOTANT will not use additional standards
or methodologies to make the data interoperable.
## Increase data re-use (through clarifying license)
All public data produced under the project activities will be made available
as soon as possible, in line with the communication and dissemination plan.
According to Article 29.2 of the Grant Agreement, open data will be stored in
an Open Access repository, such as the project website and other social media
portals, during and after the life of the project.

Open access to scientific publications will be guaranteed, and open data will
be usable by third parties, in particular after the end of the project, since
FLOTANT aims to become a reference case for floating offshore wind developers.
This must be compatible with the details described in the IPR management Plan,
which will always respect the H2020 IPR rules as outlined in Regulation (EU)
No 1290/2013 of the European Parliament and of the Council of 11 December 2013
laying down the rules for participation and dissemination.
# ALLOCATION OF RESOURCES
FLOTANT open data will be available at the project website at least for 5
years after the end of the project.
All consortium-shared and processed data will be stored in secure environments
at the locations of consortium partners, with access privileges restricted to
the relevant project partners.

Among the different options, the following can be highlighted; all are under
contract with PLOCAN:
* Project Intranet AdminProject will serve as the main project management tool and document repository. The following items will be included, among others:
* Legal documentation: Consortium Agreement (CA), Grant Agreement, Description of the Action (DoA).
* Project reporting: internal monitoring, control reports, templates, EC periodic reports and all submitted deliverables.
* Project registers: such as, project detailed implementation plan, Risk Register, Issue Register and Quality Register.
* Project Meetings: will serve the organization of the project's in-person meetings and include all associated documentation, pre- and post-meeting, including logistics, agendas, presentations and minutes.
* Dissemination and Outreach material.
The license for 17 partners and a duration of 40 months has a total cost of
2,000 €.
More information on how AdminProject stores our data is available here:
_https://ap.adminproject.eu/privacy_
* Google Sites allows users to create a website easily, without specialized knowledge. It falls under the Collaborative category of Google Applications, meaning that other users can take part in the website creation process too, which is what makes it such a powerful and valuable tool for teams.
This storage service is under contract with PLOCAN and does not entail any
cost to the FLOTANT project budget.
More information on how Google stores our data is available here:
_https://cloud.google.com/about/locations/_
* Microsoft Teams is a cloud-based team collaboration software. The core capabilities include business messaging, calling, video meetings and file sharing. As a business communications app, Teams enables local and remote workers to collaborate on content in real time and near-real time across different devices, including laptops and mobile devices.
This storage service is under contract with PLOCAN and does not entail any
cost to the FLOTANT project budget.
More information on how Microsoft stores our data is available here:
_https://products.office.com/en-us/where-is-your-data-
located?geo=Europe#Europe_
# DATA SECURITY
Open, restricted and confidential data will be stored as described above, in
the three main locations already listed. Regarding data security, special
considerations apply to personal data:
Data protection:
The key principles that apply to personal data protection are detailed here:
* Data processing will be authorised and executed fairly and lawfully. In case of any detected alteration or unauthorised disclosure, the data subject will be informed without delay.
* It is forbidden to process personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life.
* The data subject will have the right to withdraw consent, on legitimate grounds, to the processing of data relating to him/her. He/she will also have the right to object, on request and free of charge, to the processing of personal data that the controller anticipates processing for the purposes of direct marketing. He/she will finally be informed before personal data are disclosed to third parties for the purposes of direct marketing, and be expressly offered the right to object to such disclosures.
Data retention and destruction:
The data controller (PLOCAN) will enable the data subject to access and
rectify their data and to exercise the 'right to be forgotten' [GDPR, Article
17]. In addition, the controller will not hinder any attempt by the data
subject to transfer the collected data to another controller [GDPR, Article
20].
* Intranet AdminProject.

The intranet will remain active until the end of the FLOTANT project. Once
deactivation is requested, all personal data will be immediately locked and
stored for up to 30 calendar days (to allow recovery from accidental
deletion); after that, all personal data will be permanently deleted (the
resulting deletion dates are sketched after this list).
* Google Sites

After completion of the project, personal data stored in the Google Cloud
account owned by PLOCAN will be deleted. According to the terms of Google
Cloud, deleted files cannot be restored after 180 calendar days.
* Microsoft Teams
Retention policy terms are included here:
_https://docs.microsoft.com/en-us/microsoftteams/retention-policies_
Regarding data destruction, terms are included here:
_https://docs.microsoft.com/en-us/microsoftteams/data-collection-practices_
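As referenced above, the retention windows can be turned into concrete dates. A minimal Python sketch (service labels and the example date are illustrative only):

```python
from datetime import date, timedelta

# Retention windows stated above, in calendar days.
RETENTION_DAYS = {"AdminProject": 30, "Google Cloud": 180}

def last_recovery_date(service: str, trigger_date: date) -> date:
    """Last day on which locked/deleted data can still be recovered."""
    return trigger_date + timedelta(days=RETENTION_DAYS[service])

# AdminProject data locked on 30 June 2022 is recoverable until 30 July 2022.
print(last_recovery_date("AdminProject", date(2022, 6, 30)))
```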
# ETHICAL ASPECTS
FLOTANT project partners will comply with the ethical and research integrity
set out in Article 34 of the Grant Agreement regarding ethics and research
integrity.
The WP10 “Ethics requirements” follows up any ethical aspects, which have been
included in deliverables D.10.1 H Requirement No1, D.10.2 POPD Requirement N2
and finally D.10.3 EPQ Requirement N3.
As a summary of the main topics explained in the three ethics deliverables we
would like to highlight the following items:
* Procedures and criteria that will be used to identify and recruit human participants in the three main groups: General Society; Stakeholders Society and the Advisory and Stakeholders Board; and the Social-Acceptance survey.
* Informed consent procedures. Subjects must voluntarily give their informed consent before participating in a study. In the framework of the FLOTANT project, a Social Acceptance Survey will be performed; this is a clear example of social science research and must comply with all applicable legal frameworks and regulations. Other activities will also handle personal data, such as two-way communication with members of the general public who express a special interest in the project, or with the Stakeholders Society and the Advisory and Stakeholders Board, who will support the project during and after its life.
* Relevance and purpose of the data intended to be collected from external participants, complying with the principle of collecting the minimum amount of personal data absolutely necessary for carrying out the purpose for which the data will be collected and processed.
* Procedures for data collection, storage, protection, retention and destruction.
* Technical and organisational measures that will be implemented to safeguard the rights and freedoms of the data subject participants.
* Environmental evaluation and legal framework, taking into consideration:
* Environmental Strategy of PLOCAN, specifically, adopted measurements to be in compliance with National Law 41/2010 and PLOCAN responsibilities on Environmental Protection and Monitoring (Resolution of the 15th of January).
* PLOCAN certificate ISO 9001 for Quality management System.
* PLOCAN certificate ISO 14001 for Environmental Management System.
* Health and safety procedures of the research staff:
* PLOCAN overall policy for Health and Safety at Work
* PLOCAN specific Health and Safety considerations on land-based facilities
* PLOCAN specific Health and Safety considerations on the offshore facilities
* PLOCAN OHSAS certificate
* MARIN Health and Safety Policy
* MARIN ISO 9001 Quality management systems Certificate
* UNEXE Health and Safety Policy
* UNEXE ISO 9001 Quality management systems Certificate
# Introduction
This deliverable describes the data management life cycle for the data to be
collected, processed and/or generated by the Horizon 2020 project 5G-ALLSTAR.
As part of making research data findable, accessible, interoperable and
reusable (FAIR), it includes information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project).
This deliverable is a living document that will be updated continuously during
the project.
# Data summary
_What is the purpose of the data collection/generation and its relation to the
objectives of the_ _project?_
Data collection and generation pursue several goals. Data will first support
exchanges between partners (meeting reports, emails, spreadsheets…). Data
will also be used as a means of recording project results (mainly through
deliverables) for possible future use. They will then be used for the
demonstration of the project results (simulation and experiment). Finally,
data will be a basis for dissemination of the project outcomes (publications,
slidesets). All data also aim at demonstrating to the EU and to the Korean
government that 5G-ALLSTAR has reached its objectives.
_What types and formats of data will the project generate/collect?_
The project will generate text, spreadsheets, emails, slidesets, software and
algorithms.
_Will you re-use any existing data and how?_
Software (Quadriga) previously developed by Fraunhofer will be used for
channel simulation. This software will be enhanced during the project to meet
the project requirements. Aside from this, the 5G-ALLSTAR project aims at
producing new results, therefore no re-use of existing data is planned.
Naturally, existing literature will be used to compile the state of the art in
each scientific field studied in the project.
_What is the origin of the data?_
All data will be generated during the project. Text will come from
deliverables, reports, publications and press releases. Spreadsheets will
collect, for example, simulation and experiment results. Slidesets will be
generated for physical and phone meetings but also for external presentations.
Software and algorithms will be developed during the project to address
5G-ALLSTAR's research problems.
_What is the expected size of the data?_
At this early stage of the project, the total volume of data and the number of
files cannot be evaluated. This section will be iteratively updated during the
project.
_To whom might it be useful ('data utility')?_
Data will first be useful as a means of exchange between 5G-ALLSTAR partners.
Data will then be used by project reviewers to evaluate the progress of the
project. Each partner will also use the data produced by the project to serve
its company’s objectives (for example, standardization). Finally, data will be
used by the scientific community as a basis for future studies, or by industry
for validating development objectives.
# FAIR data
## Making data findable, including provisions for metadata
_Are the data produced and/or used in the project discoverable with metadata,
identifiable and_ _locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?_
No Digital Object Identifiers are used in the project.
_What naming conventions do you follow?_
For deliverables, the name must follow the pattern
5G-ALLSTAR_Dx.y.docx (or .pdf), with x the work package number and y the
deliverable number within the work package.

Other documents must start with 5G-ALLSTAR_. This must be followed by a date
and a place (if from a meeting), the WP number if relevant, and the type of
document (minutes, agenda, etc.).
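As an illustration only, the convention above can be checked automatically. The following minimal Python sketch (the helper name is hypothetical) validates file names against it:

```python
import re

# Deliverables: 5G-ALLSTAR_Dx.y.docx or .pdf (x = WP number, y = deliverable
# number within the WP); other documents must simply start with "5G-ALLSTAR_".
DELIVERABLE_RE = re.compile(r"^5G-ALLSTAR_D(\d+)\.(\d+)\.(docx|pdf)$")

def check_name(filename: str) -> bool:
    """Return True if the file name follows the project convention."""
    if DELIVERABLE_RE.match(filename):
        return True
    return filename.startswith("5G-ALLSTAR_")

assert check_name("5G-ALLSTAR_D2.1.pdf")       # deliverable, WP2, number 1
assert check_name("5G-ALLSTAR_20190115_Paris_WP3_minutes.docx")
assert not check_name("minutes_WP3.docx")
```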
_Will search keywords be provided that optimize possibilities for re-use?_
Keywords will be provided in deliverables.
_Do you provide clear version numbers?_
The data will be stored in a shared space, using the BSCW (Basic Support for
Cooperative Work) tool provided by Fraunhofer FIT. The BSCW system is based on
the notion of a shared workspace, a joint storage facility that may contain
various kinds of objects such as documents, tables, graphics, spreadsheets or
links to other Web pages. A workspace can be set up, and objects stored,
managed, edited or downloaded, with any Web browser. The BSCW system keeps the
members of a group informed about each other’s relevant activities in a
shared workspace.
This tool provides a versioning capability that will be used in the 5G-ALLSTAR
project.
_What metadata will be created? In case metadata standards do not exist in
your discipline,_ _please outline what type of metadata will be created and
how._
No metadata will be created.
## Making data openly accessible
_Which data produced and/or used in the project will be made openly available
as the default? If_ _certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly_ _separating legal and contractual
reasons from voluntary restrictions._
_Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their_ _data closed if relevant provisions are made in
the consortium agreement and are in line with the_ _reasons for opting out._
All public deliverables will be made openly available as the default. Quadriga
software will be made openly available before the end of the project. Inside
the project, all data produced by the project will be shared between partners.
_How will the data be made accessible (e.g. by deposition in a repository)?_
As described above, the data will be stored in a shared workspace using the
BSCW tool provided by Fraunhofer FIT.
Public documents will be made available in the public section of the
5G-ALLSTAR project.
_What methods or software tools are needed to access the data?_
Please see previous answer.
_Is documentation about the software needed to access the data included?_
The BSCW tool includes very detailed documentation on how to use it.
Furthermore, good-practice rules have been provided to all partners.
_Is it possible to include the relevant software (e.g. in open source code)?_
Aside from the channel simulation software, the software produced during the
project will not be openly accessible. (This answer may be revised during the
project.)
_Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible._
Please see the answer to the question “How will the data be made accessible?”.
_Have you explored appropriate arrangements with the identified repository?_
The data in the repository is arranged in a way that makes it easy to access.
_If there are restrictions on use, how will access be provided?_
For project members, no restrictions on use are foreseen.
_Is there a need for a data access committee?_
There is no need for a data access committee.
_Are there well described conditions for access (i.e. a machine readable
license)?_
The conditions of use of the BSCW tool are available on the web.
_How will the identity of the person accessing the data be ascertained?_
Each person is assigned a login and a password to access the data in the BSCW
tool.
## Making data interoperable
_Are the data produced in the project interoperable, that is allowing data
exchange and re-use_ _between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for_ _formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?_
Data for project-internal usage will use the Microsoft Office formats (i.e.
.docx, .xlsx and .pptx). Text data that will be made available outside the
project will use the PDF (Portable Document Format), which can be read with
open software.
_What data and metadata vocabularies, standards or methodologies will you
follow to make your_ _data interoperable?_
Common data vocabularies, standards or methodologies will be used.
_Will you be using standard vocabularies for all data types present in your
data set, to allow interdisciplinary interoperability?_
Yes.
_In case it is unavoidable that you use uncommon or generate project specific
ontologies or_ _vocabularies, will you provide mappings to more commonly used
ontologies?_
No uncommon or project specific ontologies or vocabularies will be used.
## Increase data re-use (through clarifying licences)
_How will the data be licensed to permit the widest re-use possible?_
Most of deliverables will be public, and therefore freely available on the
project website. The software Quadriga, which will be enhanced during the
project, will be made freely available before the end of the project.
Conference and journal publications will be available on the web, with license
depending on the type of publication (IEEE,…).
_When will the data be made available for re-use? If an embargo is sought to
give time to publish_ _or seek patents, specify why and how long this will
apply, bearing in mind that research data_ _should be made available as soon
as possible._
Public deliverables will be made available as soon as accepted by the
reviewers.
Channel model emulator will be made available on month 18.
_Are the data produced and/or used in the project useable by third parties, in
particular after the_ _end of the project? If the re-use of some data is
restricted, explain why._
Please see previous answer.
_How long is it intended that the data remains re-usable?_
No time restriction is planned.
_Are data quality assurance processes described?_
No data quality assurance processes are available.
# Allocation of resources
_What are the costs for making data FAIR in your project?_ During the project,
for partners:

* BSCW is operated by Fraunhofer HHI free of charge for project partners. BSCW will be shut down 3 months after the project ends. All contained data has to be archived at the partners' premises.
* Domain registration for the project website (5g-allstar.eu) costs 25 € annually and will be covered by Fraunhofer's expenses. The website will remain online for at least 3 years after the end of the project.
_How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions)._
_Who will be responsible for data management in your project?_ Please refer to
the answer to the previous question.
_Are the resources for long-term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?_
This question will be discussed during the project, and this deliverable will
be updated accordingly.
# Data security
_What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?_
Access to the BSCW server is password protected on a per-person level. The
server is located at the Fraunhofer HHI premises in Berlin, Germany. No third
party has access to the stored data without permission. All connections to the
server are encrypted (Certified SSL connection). Weekly incremental backups
are in place in case of hardware failure.
_Is the data safely stored in certified repositories for long-term
preservation and curation?_
BSCW will be shut down 3 months after the project ends. All contained data has
to be archived at the project partners' premises for long-term preservation.
# Ethical aspects
_Are there any ethical or legal issues that can have an impact on data
sharing? These can also_ _be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA)._
There are no ethical or legal issues that can have an impact on data sharing.
_Is informed consent for data sharing and long term preservation included in
questionnaires_ _dealing with personal data?_
No personal data will be used in the project.
# Other issues
_Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?_
We do not make use of other national/funder/sectorial/departmental procedures.
interests in relation to the project outputs, and the introduction of IPR
agreements between partners prior to dissemination of findings.
## Open access in the Grant Agreement
The importance given by the European Commission to the open access issue is
clearly outlined in the SunHorizon Grant Agreement. In particular, Articles
29.2 and 29.3 state the responsibilities of beneficiaries and the actions to
be undertaken in order to ensure open access to scientific publications and to
research data, respectively. The text of the aforementioned articles is
reported below.
<table>
<tr>
<th>
**Article 29.2:** _Open access to scientific publications_
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results.
In particular, it must:
1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;
Moreover, the beneficiary must aim to deposit at the same time the research
data needed to validate the results presented in the deposited scientific
publications.
2. ensure open access to the deposited publication — via the repository — at the latest:
1. on publication, if an electronic version is available for free via the publisher, or
2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
(c) ensure open access — via the repository — to the bibliographic metadata
that identify the deposited publication. The bibliographic metadata must be in
a standard format and must include all of the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; and
* a persistent identifier.
</th> </tr> </table>
**Article 29.3:** _Open access to research data_
Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
(a) deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see
Annex 1);
(b) provide information — via the repository — about tools and instruments at
the disposal of the beneficiaries and necessary for validating the results
(and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex 1, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access.
The confidentiality aspects have been duly taken into account in the
preparation of this document in order not to compromise the protection of
project results and the legitimate interests of project partners.
## Open access in research data pilot
Horizon2020 has launched an **Open Research Data Pilot (ORDP)** aiming at
improving and maximising access to and re-use of research data generated by
projects (e.g. from experiments, simulations and surveys). These data are
typically small sets, scattered across repositories and hard drives throughout
Europe. The success of the EC’s Open Data Pilot is therefore dependent on
support and infrastructures that acknowledge disciplinary approaches on
institutional, national, and European levels. The pilot is an excellent
opportunity to stimulate and nurture the data-sharing ecosystem and has the
potential to connect researchers interested in sharing and re-using data with
the relevant services within their institutions (library, IT services), data
centres and data scientists. The pilot should serve to promote the value of
data sharing to both researchers and funders, as well as to forge connections
between the various players in the ecosystem.
The SunHorizon project recognizes the value of regulating research data
management issues. Accordingly, in line with the rules laid down in the Model
Grant Agreement, the beneficiaries will deposit the underlying research data
needed to validate the results presented in the deposited scientific
publications in a clear and transparent manner.
Open Research Data Pilot project aims at supporting researchers in the
management of research data throughout their whole lifecycle, providing
answers to key issues such as “what”, “where”, “when”, “how” and “who” 1 .
<table>
<tr>
<th>
**WHAT**
</th> </tr>
<tr>
<td>
The Open Data Pilot covers all research data and associated metadata resulting
from EC-funded projects, if they serve as evidence for publicly available
project reports and deliverables and/or peer reviewed publications. To support
discovery and monitoring of research outputs, metadata have to be made
available for all datasets, regardless of whether the dataset itself will be
available in Open Access. Data repositories might consider supporting the
storage of related project deliverables and reports, in addition to research
data.
</td> </tr>
<tr>
<td>
**WHERE**
</td> </tr>
<tr>
<td>
All research data has to be registered and deposited into at least one open
data repository. This repository should: provide public access to the research
data, where necessary after user registration; enable data citation through
persistent identifiers; link research data to related publications (e.g.
journals, data journals, reports, working papers); support acknowledgement of
research funding within metadata elements; offer the possibility to link to
software archives; provide its metadata in a technically and legally open
format for European and global re-use by data catalogues and third-party
service providers based on wide-spread metadata standards and interoperability
guidelines. Data should be deposited in trusted data repositories, if
available. These repositories should provide reliable long-term access to
managed digital resources and be endorsed by the respective disciplinary
community and/or the journal(s) in which related results will be published
(e.g., Data Seal of Approval, ISO Trusted Digital Repository Checklist).
</td> </tr>
<tr>
<td>
**WHEN**
</td> </tr>
<tr>
<td>
Research data related to research publications should be made available to the
reviewers in the peer review process. In parallel to the release of the
publication, the underlying research data should be made accessible through an
Open Data repository. If the project has produced further research datasets
(i.e. not necessarily related to publications) these should be registered and
deposited as soon as possible, and made openly accessible as soon as possible,
at least at the point in time when used as evidence in the context of
publications.
</td> </tr>
<tr>
<td>
**HOW**
</td> </tr>
<tr>
<td>
The use of appropriate licenses for Open Data is highly recommended (e.g.
Creative Commons CC0, Open Data Commons Open Database License).
</td> </tr>
<tr>
<td>
**WHO**
</td> </tr>
<tr>
<td>
Responsibility for the deposit of research data resulting from the project
lies with the project coordinator (delegated to project partners where
appropriate).
## Open access in research data repository
All the data collected from monitoring sensors related both to building
consumptions, weather data and technology performances, will be stored and
preserved in an online monitoring cloud platform with access limited to the
SunHorizon Consortium, managed by SE and intended for internal uses. The
collected data will be also stored in the Consortium repository, hosted in
NextCloud, managed by RINA-C. Particular attention will be paid to the
confidential and/or sensitive data and the consortium will not disclose or
share this information to third parties.
At M18 a preliminary analysis will be performed in order to identify the data
suitable for open access disclosure; this preliminary list will be integrated
and confirmed at the end of the project (M36). Furthermore, it is important to
remark that this Data Management Plan will be updated at each reporting
period.
Concerning the open access of discoverable data, different online public
repository possibilities will be investigated in subsequent stages of the
project. Some examples of suitable repositories under evaluation are shown
below:
* ZENODO (http://www.zenodo.org/) is the open access repository of OpenAIRE (the Open Access Infrastructure for Research in Europe, https://www.openaire.eu/). The goal of OpenAIRE portal is to make as much European funded research output as possible available to all. Institutional repositories are typically linked to it. Moreover, dedicated pages per project are visible on the OpenAIRE portal, making research output (whether it is publications, datasets or project information) accessible through the portal. This is possible due to the bibliographic metadata that must accompany each publication.
* LIBER (www.libereurope.eu) supports libraries in the development of institutional research data management policies and services. It also enables the exchange of experiences and good practices across Europe. Institutional infrastructures and support services are an emerging area and will be linked to national and international infrastructure and funder policies. Building capacities and skills, as well as creating a culture of incentives for collaboration on research data, management are the core targets of LIBER.
# Scientific publications
As reported in the DoA, a dissemination and communication plan has been set up
in order to raise awareness of the project outcomes among a specialized
audience. In this framework, the consortium commits itself to publishing in
peer-reviewed international journals, in order to make the outcomes available
to the scientific community. The partners in charge of dissemination
activities are responsible for the scientific publications, as well as for the
selection of the publishers considered most relevant for the subject matter.
Further details on dissemination activities are already included in D8.3
“Dissemination and stakeholders’ engagement plan”, which is delivered at M6,
and will be included in D8.5 “Report on dissemination and communication
activities”, delivered at M24.
Fully in line with the rules laid down in the SunHorizon Grant Agreement and
reported in Section 2.2.1, each beneficiary will ensure open access to all
peer-reviewed scientific publications relating to its results.

The project will make use of a mix of the three different possibilities for
open access, namely:
1. **Open access publishing** (without author processing charges): partners
may opt for publishing directly in open access journals, i.e. journals which
provide open access immediately, by default, without any charges.

2. **Gold open access publishing:** partners may also decide to publish in
journals that sell subscriptions, offering the possibility of making
individual articles openly accessible (hybrid journals). In such cases,
authors will pay a fee to publish the material in open access, whereby the
highest-level journals offer this option.

3. **Self-archiving / “green” open access publishing**: alternatively,
beneficiaries may deposit the final peer-reviewed article or manuscript in an
online disciplinary, institutional or public repository of their choice,
ensuring open access to the publication within a maximum of six months.
Moreover, the relevant beneficiary will at the same time deposit the research
data presented in the deposited scientific publication into a data repository.
The consortium will evaluate which of these data will be published on the
SunHorizon Open Research Data Platform, mainly according to ethics and
confidentiality considerations.
## Selection of suitable publishers
Each publisher has its own policy on self-archiving (i.e. the act of the
author depositing a free copy of an electronic document online in order to
provide open access to it). Since the publishing conditions of some publishers
might not fit the open access requirements applying to SunHorizon on the basis
of the Grant Agreement, each partner in charge of dissemination activities
will identify the most suitable repository. In particular, beneficiaries will
not choose a repository which claims rights over deposited publications and
precludes access.
At this stage, no specific journal has been identified. Each beneficiary, in
collaboration with the project coordinator, will evaluate whether the
identified journal and its article-sharing policy comply with the consortium
agreement in terms of Open Access. According to consortium partners’ previous
Open Access experience, ELSEVIER journals could be considered a good option.

As an example, the ELSEVIER article-sharing policy is summarized in the table
below 2 .
<table>
<tr>
<th>
</th>
<th>
**Share**
</th> </tr>
<tr>
<td>
**Pre submission**
</td>
<td>
Preprints 1 can be shared anywhere at any time
PLEASE NOTE: Cell Press, The Lancet, and some society-owned titles have
different preprint policies. Information of these is available on the journal
homepage.
</td> </tr> </table>
2 https://www.publishingcampus.elsevier.com/websites/elsevier_publishingcampus/files/Guides/Brochure_OpenAccess_1_web.pdf
<table>
<tr>
<th>
**After acceptance**
</th>
<th>
Accepted manuscripts 2 can be shared:
* Privately with students or colleagues for their personal use.
* Privately on institutional repositories.
* On personal websites or blogs.
* To refresh preprints on arXiv and RePEc.
* Privately on commercial partner sites.
</th> </tr>
<tr>
<td>
**After publication**
</td>
<td>
Gold open access articles can be shared:
* Anytime, anywhere on non-commercial platforms.
* Via commercial platforms if the author has chosen a CC-BY license, or the platform has an agreement with us.
Subscription articles can be shared:
* As a link anywhere at any time.
* Privately with students or colleagues for their personal use.
* Privately on commercial partner sites.
</td> </tr>
<tr>
<td>
**After embargo**
</td>
<td>
Author manuscripts can be shared:
* Publicly on non-commercial platforms.
* Publicly on commercial partner sites 3 .
</td> </tr>
<tr>
<td>
1 Preprint is the initial write-up of the author's results and analysis that
has not yet been peer reviewed or submitted to a journal.

2 Accepted manuscript is the version of the author's manuscript which
typically includes any changes incorporated through the process of submission,
peer review and communications with the editor.

3 For an overview of how and where an author can share an article, it is
possible to check Elsevier.com/sharing-articles
</td> </tr> </table>
## Bibliographic metadata
As mentioned in the Grant Agreement, metadata for scientific peer-reviewed
publications must be provided. The purpose is to maximize the discoverability
of publications and to ensure EU funding acknowledgment.

The inclusion of information relating to EU funding as part of the
bibliographic metadata is also necessary for adequate monitoring, production
of statistics and assessment of the impact of Horizon 2020.
All the following information must be included in the metadata associated with
each SunHorizon publication (an illustrative record is sketched after the
lists below). Information about the grant number, name and acronym of the
action:

* European Union (EU)
* Horizon 2020 (H2020)
* Innovation Action (IA)
* SunHorizon [Acronym]
* Grant Agreement: GA N° 818329
Information about the publication date and embargo period, if applicable:

* Publication date
* Length of embargo period (if any)

Information about the persistent identifier:

* Persistent identifier, if any, provided by the publisher (for example an ISSN number)
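For illustration, a minimal sketch of such a metadata record is shown below as a Python dictionary; all concrete values (dates, identifier) are placeholders, not actual SunHorizon publication data:

```python
# Illustrative bibliographic metadata record; values are placeholders.
publication_metadata = {
    "funding": ["European Union (EU)", "Horizon 2020 (H2020)"],
    "action_type": "Innovation Action (IA)",
    "action_acronym": "SunHorizon",
    "grant_agreement": "GA N. 818329",
    "publication_date": "2020-01-01",          # placeholder date
    "embargo_months": 0,                       # length of embargo, if any
    "persistent_identifier": "ISSN 0000-0000", # placeholder ISSN/DOI
}
```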
# Research Data
Research data refers to data that is collected, observed, or created within a
project for purposes of analysis and to produce original research results.
Data are plain facts. When they are processed, organized, structured and
interpreted to determine their true meaning, they become useful and they are
called information.
In a research context, research data can be divided into different categories,
depending on their purpose and on the process through which they are
generated. It is possible to have:

* Observational data, which are captured in real time, for example sensor data, survey data, sample data.
* Experimental data, which derive from lab equipment, for example resulting from fieldwork.
* Simulation data, generated from test or numerical models.
* Derived data.
Research data may include all of the following formats:

* Text or word documents, spreadsheets
* Laboratory notebooks, field notebooks, diaries
* Questionnaires, transcripts, codebooks
* Audiotapes, videotapes
* Photographs, films
* Test responses
* Slides, specimens, samples
* Collections of digital objects acquired and generated during the research process
* Data files
* Database contents
* Models, algorithms, scripts
* Contents of software applications such as input, output, log files, simulations
* Methodologies and workflows
* Standard operating procedures and protocols
## Key principle for open access to research data
According to the “ _Guidelines on FAIR Data Management in Horizon 2020_ ”,
research data must be _findable_ , _accessible_ , _interoperable_ , _re-
usable_ 2 .
The FAIR guiding principles are reported in the following table 3 .
<table>
<tr>
<th>
**FINDABLE**
</th> </tr>
<tr>
<td>
**F1** (meta)data are assigned a globally unique and eternally persistent
identifier
**F2** data are described with rich metadata
**F3** (meta)data are registered or indexed in a searchable resource **F4**
metadata specify the data identifier
</td> </tr>
<tr>
<td>
**ACCESSIBLE**
</td> </tr>
<tr>
<td>
**A1** (meta)data are retrievable by their identifier using a standardized
communications protocol
**A1.1** the protocol is open, free, and universally implementable
**A1.2** the protocol allows for an authentication and authorization
procedure, where necessary.
</td> </tr>
<tr>
<td>
**A2** metadata are accessible, even when the data are no longer available
</td> </tr>
<tr>
<td>
**INTEROPERABLE**
</td> </tr>
<tr>
<td>
**I1** (meta)data use a formal, accessible, shared, and broadly applicable
language for knowledge representation **I2** (meta)data use vocabularies that
follow FAIR principles
**I3** (meta)data include qualified references to other (meta)data.
</td> </tr>
<tr>
<td>
**RE-USABLE**
</td> </tr>
<tr>
<td>
**R1** meta(data) have a plurality of accurate and relevant attributes.
**R1.1** (meta)data are released with a clear and accessible data usage
license
**R1.2** (meta)data are associated with their provenance
**R1.3** (meta)data meet domain-relevant community standards
</td> </tr> </table>
## Roadmap and procedures for data sharing
SunHorizon will generate a significant amount of data, mainly related to the
eight different demosites (campaign monitoring data, energy consumption,
weather data…). Part of these data could be made available not only for the
purposes of the project but also for other tools and studies, presented in a
specific section of the project website.
To facilitate the publication of project data and, in parallel, guarantee the
confidentiality of the data and the linking with the open research data, a
repository will be developed in order to share the selected project data with
external communities.

Access to this repository (a section of the project website) will be given
after end-user registration and approval from the Project Coordinator. The
website provides a source catalogue, metadata and a description of all the
resources to be shared externally.
According to the aforementioned principles (Section 4.1), information on data
management is disclosed by detailing the following elements (a machine-readable
sketch of such a record is given after this list):
* **Data set reference and name** : Identifier for the data set to be produced.
* **Data set description** : its origin (in case it is collected), nature and scale and to whom it could be useful, whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse will be also included.
* **Standards and metadata** : reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created has to be given.
* **Data sharing** : Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. The repository where data will be stored will be identified, if already existing, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, IP, privacy related, security-related etc.).
* **Archiving and preservation** (including storage and backup): Procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered.
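The following minimal Python sketch illustrates how such a dataset record could be structured; all field names and values are illustrative placeholders, not prescribed by the project:

```python
# Illustrative dataset description template; fields mirror the list above.
dataset_record = {
    "dataset_reference_and_name": "SunHorizon_DS_X.Y",  # hypothetical id
    "description": {
        "origin": "collected",          # collected / generated / derived
        "nature_and_scale": "...",
        "useful_to": "...",
        "underpins_publication": False,
        "similar_data_and_reuse": "...",
    },
    "standards_and_metadata": "none existing; metadata outlined in the DMP",
    "data_sharing": {
        "access_procedure": "registration and coordinator approval",
        "embargo_period": None,
        "repository": "project website section",
        "restriction_reason": None,     # e.g. ethical, IP, privacy-related
    },
    "archiving_and_preservation": {
        "preservation_period": "...",
        "approx_end_volume": "...",
        "costs_and_coverage": "...",
    },
}
```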
Since no data sets have been generated yet at M6, the list above is to be
intended as a guideline for data generated in the future. Obviously, the
sharing of data will be strictly linked to the level of confidentiality of the
data itself. In particular, the level of confidentiality of gathered data will
be checked by the partner responsible for the activity (task leader) in which
the data have been collected, together with the data owners (such as public
authorities, energy providers, industry, associations, etc.), in order to
verify whether the data can be disclosed or not. For this purpose, a written
confirmation to publish data in the SunHorizon Open Access Repository will be
requested via e-mail by the task leader from the data owner. It will be
possible to make such data available only after the confirmation provided by
the data owner has been received.

No confidential data generated within the project will be made available in
digital form.
# Expected Dataset
The purpose of the data collection is to provide an overview of the potential
of the technology packages implemented in the demosites and studied in the
virtual demosites, through predefined KPIs in different areas such as
technology, energy, economy, society and the environment. A preliminary set of
KPIs is already defined in WP2. The data will relate both to technology
performance and to personal data (like building consumption). This chapter
addresses the origin and definition of the datasets that will be produced
during the project for each work package, with the aim of clearly
differentiating which are sensitive and which can be freely distributed, in
addition to other features.
_What types of data will the project generate/collect?_
## **INPUT DATA**
In order to collect all the data necessary as inputs for achieving the
objectives of the project, a survey will be carried out and distributed to
collect information both from building owners and building occupants. In this
regard, data will be collected in office file formats (Word and Excel) and the
areas of reference are:
General information to characterize the building:
* General building information,
* General information on single dwellings.
Information to define the building plants features and uses of occupants:
* Heating system information,
* Cooling system information,
* Domestic Hot Water (DHW) system information,
* Ventilation system information,
* Energy use information
* Electric consumption
* Monitoring systems (Indicate the existence of any of these sensors)
* Control systems
* Internet connection
In addition to the data collected with the surveys, other types of data have
been collected from demosite responsible partners, such as drawings, pictures
and scanned documents. Already existing data will also be used to define a
baseline against which to compare the performance of the SunHorizon solution.
Some data will be extracted from appropriate platforms or sources (climate,
historic building data, etc.) and reused inside the SunHorizon project in
order to reach the described project objectives. The data related to the
building itself (building plant, energy profiles…) will possibly be taken
directly from building owners or from interviews with building occupants.
**MONITORING DATA:** Data collected continuously during project development
Monitoring data will be acquired by designated metering equipment (provided
and deployed by SE) and communicated via encrypted and secured communication
means to the SunHorizon cloud. Data acquired in this way will be stored in the
central database on a dedicated server provided by RINA-C and managed by SE,
applying all necessary data protection and security measures to ensure secure
communication and encryption of the stored data.
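The following sketch only illustrates the general idea of pushing a meter reading to a cloud endpoint over an encrypted (TLS) channel; the endpoint URL, token and payload fields are placeholders, and the actual SunHorizon acquisition chain is defined by SE and RINA-C:

```python
import requests  # third-party HTTP client

reading = {
    "meter_id": "demosite-01",                 # assumed identifier scheme
    "timestamp": "2019-06-01T12:00:00Z",
    "heat_kwh": 3.42,
}

response = requests.post(
    "https://cloud.example.org/api/readings",  # placeholder endpoint
    json=reading,
    headers={"Authorization": "Bearer <token>"},  # restrict access to authorised equipment
    timeout=10,                                # fail fast if the cloud is unreachable
    verify=True,                               # enforce TLS certificate validation
)
response.raise_for_status()                    # surface any server-side rejection
```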
**OUTCOME DATA:** Data generated during the project
Based on the analytics performed on the monitoring data (e.g. consumption
analytics, demand forecasting, supply/demand optimization, etc.), the project
will generate a set of information that will be displayed in the users' app
and will provide end-users with feedback on their consumption.
These data will be presented to the end user in different forms, such as
graphics indicating energy profiles and trends, natural-language messages
suggesting energy conservation measures, or scoreboards for user benchmarking
and performance indication.
Other general data from SunHorizon outcomes:
* Database including local energy prices, energy utility tariffs, gas/hydrogen/fuel prices and costs of BOP components, with all data explicitly anonymized.
* Database of emissions, technologies and costs for the EU-28 countries, drawn from already public databases.
* Useful LCA/LCC databases for the analysis, which are already public.
* Results from stakeholder surveys, properly anonymized and collected after signature of informed consent.
* Dissemination event materials
* Techno-economic framework initial assessment for replication and business model promotion
It is also important to consider that the SunHorizon demosites will be open
for dissemination visits within the so-called SunHorizon Open Days.
At this stage of the project, the main datasets from each work package have
already been identified; the following table specifies relevant aspects such
as the origin of the data, their utility and their format. The datasets will
be updated during the project to build a complete record of SunHorizon
outcomes.
Table 1. Datasets from SunHorizon project
<table>
<tr>
<th>
</th>
<th>
**WP2 SunHorizon use cases scenario definition and demonstration strategy**
</th> </tr>
<tr>
<td>
_Dataset 2.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 2.2
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
* Maps of solar resources
* Mapping of building demand
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
_Dataset 2.2_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 2.3
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
KPIs of demosites for the baseline scenario
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
</td>
<td>
**WP3 PILLAR 1: SunHorizon enabling technologies**
</td> </tr>
<tr>
<td>
_Dataset 3.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 3.1
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Thermal compression HP specifications
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
_Dataset 3.2_
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
_Related project task_
</th>
<th>
Task 3.2
</th> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Adsorption HP specifications
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
_Dataset 3.3_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 3.3
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Hybrid HP specifications
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
_Dataset 3.4_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 3.4
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Hybrid PVT specifications
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
_Dataset 3.5_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 3.5
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
High vacuum thermal panels specifications
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
_Dataset 3.6_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 3.6
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Stratified thermal storage specifications
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
_Dataset 3.7_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 3.7
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Technical datasheet of SunHorizon Technologies packages
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
</td>
<td>
**WP4 PILLAR 2: Functional Monitoring Platform and Optimization Tool**
</td> </tr>
<tr>
<td>
_Dataset 4.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 4.4
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Simulation data from demosites
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
_Confidentiality_
</th>
<th>
Confidential
</th> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.docx
</td> </tr>
<tr>
<td>
</td>
<td>
**WP5 PILLAR 3: Thermal Comfort and Monitoring Data Driven Control System**
</td> </tr>
<tr>
<td>
_Dataset 5.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 5.2
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Thermal comfort data from building demosites
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
</td>
<td>
**WP6 Demonstration at TRL 7**
</td> </tr>
<tr>
<td>
_Dataset 6.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 6.1
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
* Information coming from each demosite
* Boundary conditions for the applicability of SunHorizon solutions
* Monitoring data of H&C before the installation
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx, .pptx
</td> </tr>
<tr>
<td>
_Dataset 6.2_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 6.2
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Environmental analysis of SunHorizon emissions impact
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
_Dataset 6.3_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 6.3
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Data related to set-up: contract negotiation and signature, design,
permitting, procurement and phase assessment
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
_Dataset 6.4_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 6.4
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Monitoring data 6 months after implementation of the SunHorizon solutions
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
_Dataset 6.5_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 6.4
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Monitoring data 12 months after implementation of the SunHorizon solutions
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
_Dataset 6.6_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 6.4
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Monitoring data 18 months after implementation of the SunHorizon solutions
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
</td>
<td>
**WP7 SunHorizon Replication and Exploitation**
</td> </tr>
<tr>
<td>
_Dataset 7.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 7.1
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Information on new barriers for SunHorizon investors
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
_Dataset 7.2_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 7.3
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Feasibility studies on virtual demosites
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx
</td> </tr>
<tr>
<td>
</td>
<td>
**WP8**
</td> </tr>
<tr>
<td>
_Dataset 8.1_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 8.1
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Project identity toolkit (Public reports and presentations of the Project)
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx, .pptx
</td> </tr>
<tr>
<td>
_Dataset 8.2_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
_Related project task_
</td>
<td>
Task 8.2
</td> </tr>
<tr>
<td>
</td>
<td>
_Data_
</td>
<td>
Data related to the two stakeholder workshops with the two Stakeholder Groups
</td> </tr>
<tr>
<td>
</td>
<td>
_Confidentiality_
</td>
<td>
Public
</td> </tr>
<tr>
<td>
</td>
<td>
_Type and Format_
</td>
<td>
.xlsx, .docx, .pptx
</td> </tr> </table>
# Potential exceptions to Open Access
Within the SunHorizon project, five different technology packages will be
studied and validated via specific simulation models, considering the
integration of the different technologies (heat pump, solar panels and thermal
storage) together with the control platform. These prototypes will be kept
confidential until the final release is ready (according to what is reported
in the DoA).
As reported above, the level of confidentiality of the data will be verified
with the data owners, so that only information for which the consortium has
received written permission to publish from the data owners themselves is
disclosed. It is foreseen that some data may be kept confidential and/or
subject to restrictions on their diffusion.
One potential exception to open access concerns the individual specifications
of the different technologies to be implemented during the project, which
relate to the exploitation strategies already described as Background and
Foreground of the project partners in the consortium agreement. Some partners
have indeed already asked to keep these data confidential; the data may
therefore be only partially available.
A further exception concerns energy consumption and production data available
at demosite level, which may be owned by local building owners. These data
will be used to validate the different SunHorizon technology packages at the
level of the validation site, and it is reasonable to assume that part of them
will be kept confidential.
Moreover, in order to define models for evaluating heating and cooling
consumption, the partners responsible for each demosite will use the results
of energy audits carried out in different building typologies. The specific
data used to elaborate the energy audits will be kept confidential, since they
are the property of the citizens themselves, while the models elaborated for
evaluating energy consumption will be publicly available.
Data subject to confidentiality restrictions will be provided by the
participants themselves, industries, local DSOs or heating providers, cities,
etc. They will be stored and protected with state-of-the-art security measures
on the private project cloud platform managed by RINA-C as project
coordinator, accessible only to selected and restricted personnel of the
partners, and will be used to validate the performance of the SunHorizon
innovations.
This list of potential exceptions to open access must be considered
provisional. As reported above, the data management plan will be updated at
each reporting period to reflect the project's evolution. Furthermore, data
collection will be performed in full compliance with European standards and
regulations on the protection of personal data, as already outlined in D1.7
"Ethics Assessment", in order to avoid incidental findings during the analysis
of the data from the eight demosites that could be traced back to personal
habits, preferences, heating and cooling consumption, etc.
# ETHICAL ASPECTS
In the framework of the SunHorizon project, a list of ethics requirements that
the project must comply with has been established, as reported in the specific
deliverables of WP1 and WP9 (D1.7 and D9.1, respectively).
Engagement with end-users will be one of the key components of the project.
Hence, a complete ethics self-assessment has been carried out in order to
ensure that the proposal is compliant with applicable international, European
and national law. Two areas of concern for ethical issues have been
identified: "Humans" and "Personal data". Starting from these considerations,
a set of procedures will be adopted to protect the privacy of the human
end-users involved.
In particular, activities will be carried out in compliance with the highest
ethical principles and fundamental rights dictated in:
1. the Universal Declaration of Human Rights (UDHR, 1948);
2. the EU Charter on Fundamental Rights (CFREU, 2010);
3. the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR, 1950);
4. the Helsinki Declaration in its latest version (2013);
5. the UNESCO Universal Declaration on Bioethics and Human Rights (2005);
6. the European Code of Conduct for Research Integrity (ECCRI, 2011).
With regard to the rights to privacy and to the protection of personal data,
SunHorizon will adhere to:
1. the International Covenant on Civil and Political Rights (ICCPR, 1966);
2. the EU Charter on Fundamental Rights (art. 7 and 8);
3. the European Convention for the Protection of Human Rights and Fundamental Freedoms (art. 8);
4. the CoE Convention No. 108 for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981);
5. the Data Protection Directive (1995/46/EC) and the Directive on Privacy and Electronic Communications (2002/58/EC);
6. the General Data Protection Regulation (GDPR) approved by the EU Parliament on 14 April 2016. Enforcement date: 25 May 2018.
In the framework of SunHorizon, two deliverables connected to ethics
requirements have been foreseen:
* _D9.1 POPD – Requirement No. 1:_
  1. The host institution must confirm that it has appointed a Data Protection Officer (DPO) and that the contact details of the DPO are made available to all data subjects involved in the research. For host institutions not required to appoint a DPO under the GDPR, a detailed data protection policy for the project must be submitted as a deliverable.
  2. Detailed information on the informed consent procedures with regard to data processing must be submitted as a deliverable.
  3. Templates of the informed consent forms and information sheets (in language and terms intelligible to the participants) must be kept on file.
  4. In case of further processing of previously collected personal data, an explicit confirmation that the beneficiary has a lawful basis for the data processing and that the appropriate technical and organisational measures are in place to safeguard the rights of the data subjects must be submitted as a deliverable.
* _D1.7 Ethics Assessment:_ Self-Assessment Report describing how data will be managed in the project in order to avoid any incidental personal data findings, and how to integrate extra-EU partners in the project.
**_Ethical policy_ **
Prior to any data collection activity, all end users, who are strictly
volunteers, shall be informed and given the opportunity to provide their
consent to the monitoring and data acquisition processes.
Moreover, detailed oral and written information about the activities in which
they will be involved shall be given to them.
Therefore, participants will be provided with the following material, written
in their own language:
* A document including a commonly understandable description of the project and its goals, together with the planned activities _(Information Sheet)_
* A written notice of their unrestricted right to withdraw their agreement at any time _(Informed Consent)._
The templates prepared for the above-mentioned documents will be enclosed in
Deliverable 9.1 at M18.
# Conclusions
The present document, deliverable D1.2 Data Management Plan, aims to describe
the data management life cycle for the data to be collected, processed and
created in the framework of the SunHorizon project. All data produced during
the project will be as open as possible, focusing on sound data management for
the sake of best research practice, in order to create added value usable by
other EU initiatives and to foster knowledge and innovative solutions.
In SunHorizon, eight different demosites will be used to demonstrate the
project objectives, and data will be collected throughout the project. Most of
the data relate to: general information characterizing the building;
information defining the building plant features and the uses of occupants;
and general information mapping the occupants' behaviour towards energy
consumption.
Hence, the present document outlines a preliminary strategy for the management
of data generated throughout the SunHorizon project. Considering that this
deliverable is due at month six, few datasets have been generated yet, so it
is possible that some aspects outlined in the present document will need to be
refined or adjusted in the future.
In particular, this document specifies how SunHorizon research data will be
handled in the framework of the project as well as after its completion.
In more detail, the report indicates:
* what data will be collected, processed and/or created, and from whom
* which data will be shared and which will be kept confidential
* how and where the data will be stored during the project
* which backup strategy will be applied for safely maintaining the data
* how the data will be preserved after the end of the project
The present Data Management Plan is to be considered a living document and
will be updated over the course of the project in response to any significant
changes arising during its implementation. Updates to the data management plan
will be reported in the periodic reports at the end of each reporting period.
## 1 Introduction
CICERONE brings together programme owners, research organisations and other
stakeholders to create a platform for efficient Circular Economy programming.
The priority setting and the organisation of the future platform will be
driven by Programme Owners (POs), involved either as project partners or via a
stakeholder network, and the work will be carried out in close cooperation
with research & technology organisations (RTOs), which contribute their
expertise on the main scientific and technological challenges. Consultation
mechanisms will also ensure that all stakeholders (civil society, industry,
innovative SMEs, startups, cities, investors, networks, etc.) are able to
actively contribute.
### 1.1 Purpose of the Data Management Plan
The purpose of this document is to lay out a plan for the management of data
generated/collected in CICERONE. It covers the following:
* Identification of data to be collected/processed/generated
* Methodology and standards to be applied
* Data handling during and after the project
* Sharing, curating and preserving data
At the time of this writing, CICERONE partners have identified 4 data sets to
be included in the DMP at this stage. All of these data sets identified at
this stage will be made openly accessible to the public through repositories
such as Zenodo, and they will be preserved after the end of the project. The
DMP is a living document – if necessary, it will be updated throughout the
project’s lifetime.
### 1.2 Data set properties
Following the guidelines of the EC (EC, 2016), this document contains the
following properties for each of the identified data sets:
1. Name
2. Short description
3. Standards to be applied, metadata
4. Data sharing
5. Curation/archiving/preservation
A short description of each of these properties is provided below.
_1.2.1 Name and reference code_
In order to imbue the names of datasets with easily identifiable meaning that
conveys important information, the following naming convention shall apply:
_CountryCode.DataOwner.Openness.Title_
_CountryCode_ : this string identifies the country to which the data
pertains/where the data was collected using the ISO 3166 Alpha-2 coding
system.
_DataOwner_ : this string identifies the project partner in CICERONE that is
associated with the dataset (data collector/custodian) using the official
abbreviated partner names.
_Openness_ : this string determines whether a given dataset is intended to be
shared with the public as Open Data. It may take the following values:
1. Open: can be accessed, used and shared by anyone without limitations, is accessible on the internet in a machine-readable format, and is free of restrictions on use in its licensing
2. Shared: available to use, but not under an open data license. Restrictions on its use or reproduction may apply (limited to a given group of people or organisations, may not be reproduced without authorisation, etc.)
3. Closed: can only be accessed by its subject, owner or holder

_Title_ : a short and descriptive string to identify the contents of the data.

Using these strings, the name of a dataset would look like this:
_FR.LGI.Open.CommuteHouseholdSurvey_
A dataset with this name would describe a household survey on commuting
preferences conducted in France and curated by LGI.
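The convention lends itself to simple programmatic handling. A minimal sketch, assuming nothing beyond the rules above (the helper functions are illustrative, not project tooling):

```python
OPENNESS_LEVELS = {"Open", "Shared", "Closed"}

def build_dataset_name(country: str, owner: str, openness: str, title: str) -> str:
    """Compose a dataset name; country uses ISO 3166 Alpha-2 codes."""
    if openness not in OPENNESS_LEVELS:
        raise ValueError(f"openness must be one of {OPENNESS_LEVELS}")
    return f"{country}.{owner}.{openness}.{title}"

def parse_dataset_name(name: str) -> dict:
    """Split a dataset name back into its four components."""
    country, owner, openness, title = name.split(".", maxsplit=3)
    return {"country": country, "owner": owner, "openness": openness, "title": title}

name = build_dataset_name("FR", "LGI", "Open", "CommuteHouseholdSurvey")
assert name == "FR.LGI.Open.CommuteHouseholdSurvey"
assert parse_dataset_name(name)["openness"] == "Open"
```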
### 1.3 Data licensing
Without a license to set out the terms of use, data is not truly open. Data
without a license may be publicly accessible, but users do not have the
certainty that they can use and share the data, leaving them in a legal grey
area. Data licensing standards are used to lay out the openness of data sets
in concrete terms, and an open data license gives explicit permission to use
the data both for commercial and non-commercial purpose. There are many types
of licenses to choose from, and this document will not cover them in depth.
The table below provides a summary of common data licenses that will be
considered for use in the project (based on definitions from
opendefinition.org):
<table>
<tr>
<th>
**Name**
</th>
<th>
**Domain**
</th>
<th>
**Attribution**
</th>
<th>
**Sharealike***
</th>
<th>
**Notes**
</th> </tr>
<tr>
<td>
Creative Commons CCZero (CC0)
</td>
<td>
Content, data
</td>
<td>
N
</td>
<td>
N
</td>
<td>
All rights (including those of attribution) waived
</td> </tr>
<tr>
<td>
Open Data
Commons Public
Domain Dedication and Licence (PDDL)
</td>
<td>
Data
</td>
<td>
N
</td>
<td>
N
</td>
<td>
All rights (including those of attribution) waived
</td> </tr>
<tr>
<td>
Creative Commons
Attribution 4.0 (CCBY-4.0)
</td>
<td>
Content, data
</td>
<td>
Y
</td>
<td>
N
</td>
<td>
Credit must be given, a link to the license must be provided, changes made
must be indicated. If these terms are not followed, license may
be revoked
</td> </tr>
<tr>
<td>
Open Data
Commons Open
Database License (ODbL)
</td>
<td>
Data
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
Credit must be given, share-alike must be assured, data may be
redistributed using DRM as long as a
DRM-free version is also released
</td> </tr> </table>
_*Share-alike is the requirement that any materials created using the given
dataset must be redistributed under the same license_
## 2 Description of the data
The following detailed information sheet will be produced for every dataset to
be produced/collected/curated in the project:
<table>
<tr>
<th>
Name of the dataset
</th>
<th>
A name to identify the data, see 1.2.1 for details.
</th> </tr>
<tr>
<td>
Description of the dataset
</td>
<td>
* A brief, easy to understand description of what the dataset contains and what it will be used for in the project
* A list of institutions to whom the data set could be useful outside the project
* Whether the dataset has been/will be used for a scientific publication (if yes, brief details about the content and journal)
* If the dataset is collected, a brief description of its origin and how it was collected will be provided
* Openness of the dataset
* Whether the dataset is anonymised or not
</td> </tr>
<tr>
<td>
Format/license
</td>
<td>
The format in which the data will be available (e.g. .xls, .csv, .txt) will be
provided. The license to be used will also be provided.
</td> </tr>
<tr>
<td>
Archiving/preservation
</td>
<td>
Efforts and means to keep the data available after the end of the project will
be described here, including where/how the data will be preserved, the
duration of preservation, the associated costs and the plans of the consortium
to cover these costs.
</td> </tr> </table>
## 3 Summary of identified datasets
This DMP contains 4 datasets identified by the CICERONE partnership. The
following tables provide information on the various aspects of these datasets.
The sheets completed by partners are provided in Annex I.
### 3.1 Format/license
<table>
<tr>
<th>
Name of the dataset
</th>
<th>
Format
</th>
<th>
License
</th> </tr>
<tr>
<td>
FRLGIOpenPOsSurvey
</td>
<td>
.xls
</td>
<td>
ODbL
</td> </tr>
<tr>
<td>
FIVTTKICOpenPOsSurvey
</td>
<td>
.xls
</td>
<td>
ODbL
</td> </tr>
<tr>
<td>
ESC-KICOpenPOsSurvey
</td>
<td>
.xls
</td>
<td>
ODbL
</td> </tr>
<tr>
<td>
World.Juelich.Open.International benchmark on CE(T1.2, D.1.3)
</td>
<td>
.xls
</td>
<td>
ODbL
</td> </tr> </table>
### 3.2 Archiving/preservation
<table>
<tr>
<th>
Name of the dataset
</th>
<th>
Sharing medium
</th>
<th>
Duration of preservation
</th>
<th>
Costs
</th>
<th>
How
costs
will be covered
</th> </tr>
<tr>
<td>
FRLGIOpenPOsSurvey
</td>
<td>
Zenodo.org
</td>
<td>
Perpetual
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
FIVTTKICOpenPOsSurvey
</td>
<td>
Zenodo.org
</td>
<td>
Perpetual
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
ESC-KICOpenPOsSurvey
</td>
<td>
Zenodo.org
</td>
<td>
Perpetual
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
World.Juelich.Open.International benchmark on CE(T1.2, D.1.3)
</td>
<td>
Zenodo.org
</td>
<td>
Perpetual
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr> </table>
## 4 Data Protection Officer
In accordance with applicable regulations, the host institution (Climate-KIC)
is not required to appoint a Data Protection Officer. A detailed data
protection policy for the project is kept on file.
## 5 Ethical aspects
This Data Management Plan (DMP) was drafted and updated taking into account
the General Data Protection Regulation (GDPR) for the collection, storage and
re-use of the data, in line with the following general principles:
Personal data shall be:
1. processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’);
2. collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), however not be considered to be incompatible with the initial purposes (‘purpose limitation’);
3. adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
4. accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’);
5. kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, in accordance with Article 89(1) subject to implementation of the appropriate technical and organisational measures required by this Regulation in order to safeguard the rights and freedoms of the data subject (‘storage limitation’);
6. processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).
## 6 Restrictions for re-use
Data generated through interviews and surveys will not be re-used directly,
due to privacy concerns. To allow re-use and avoid loss of research data, two
different techniques could be used to disseminate the data while abiding by
privacy regulations.
### 6.1 Anonymization of data
"Anonymization" of data means processing it with the aim of irreversibly
preventing the identification of the individual to whom it relates. Data can
be considered anonymised when it does not allow identification of the
individuals it is related to, and no individuals can be identified from the
data by any further processing of that data or by processing it together with
other information which is available or likely to be available.
There are different anonymization techniques; the two most relevant are described below (a minimal code sketch follows the list):
* Generalisation: generalising data means removing its specificity. For example, consider a table containing household income levels with four figures: €135,000, €60,367, €89,556, and €365,784. One way of generalising these numbers would be to report the values as "between €80,000 and €150,000", "less than €80,000", "between €80,000 and €150,000", and "more than €300,000" respectively. Essentially this means taking exact figures, establishing baseline categories, and then obfuscating the data by assigning each figure to one of the categories in order to remove any sense of specificity from it.
* K-anonymity: a release of data is said to have the k-anonymity property if the information for each person contained in the release cannot be distinguished from that of at least k-1 other individuals whose information also appears in the release. For instance, in a table composed of six attributes (Name, Age, Gender, State of Domicile, Religion and Disease), removing the name and religion columns while generalising the age is a way to effectively k-anonymise the data. Other techniques, such as "masking" or "pseudonymisation", which are aimed solely at removing certain identifiers, may also play a role in reducing the risk of identification. In many cases, these techniques work best when used together.
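A hedged sketch of the generalisation technique from the first bullet; the band boundaries are illustrative, not prescribed by the project:

```python
def generalise_income(income: float) -> str:
    """Map an exact income figure to a coarse, non-identifying band."""
    if income < 80_000:
        return "less than €80,000"
    if income < 150_000:
        return "between €80,000 and €150,000"
    if income < 300_000:
        return "between €150,000 and €300,000"
    return "more than €300,000"

incomes = [135_000, 60_367, 89_556, 365_784]   # the four figures from the example
print([generalise_income(x) for x in incomes])
# ['between €80,000 and €150,000', 'less than €80,000',
#  'between €80,000 and €150,000', 'more than €300,000']
```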
### 6.2 Pseudonymisation of data
"Pseudonymisation" of data means replacing any identifying characteristics of
data with a pseudonym, or, in other words, a value which does not allow the
data subject to be directly identified. Although pseudonymisation has many
uses, it should be distinguished from anonymization, as it only provides a
limited protection for the identity of data subjects in many cases as it still
allows identification using indirect means. Where a pseudonym is used, it is
possible to identify the data subject by analysing the underlying or related
data.
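A minimal sketch (illustrative only; the helper name and pseudonym format are assumptions) of the substitution just described:

```python
import secrets

pseudonym_map = {}   # identifier -> pseudonym; kept separately, under restricted access

def pseudonymise(identifier):
    """Return a stable random pseudonym for a direct identifier."""
    if identifier not in pseudonym_map:
        pseudonym_map[identifier] = "P-" + secrets.token_hex(4)
    return pseudonym_map[identifier]

record = {"name": "Jane Doe", "answer": "agree"}
record["name"] = pseudonymise(record["name"])   # e.g. {'name': 'P-3fa1b2c4', 'answer': 'agree'}
# Anyone holding pseudonym_map can reverse the substitution, which is why
# pseudonymised data is not anonymous.
```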
Task leaders will be responsible for the anonymization of data in CICERONE for
all datasets where this is deemed necessary.
## 7 Personal data transfer and processing
If personal data are transferred from the EU to a non-EU country (NCKU is an
international partner in the project) or to an international organisation,
such transfers will be made in accordance with Chapter V of the General Data
Protection Regulation 2016/679 and will comply with the laws of the country in
which the data were collected.
In case of further processing of previously collected personal data, CICERONE
will ensure that the beneficiary has legal grounds for the data processing and
that the appropriate technical and organisational measures are in place to
safeguard the rights of the data subjects.
# Executive Summary
The Data Management Plan (DMP) of the ESTiMatE project describes the
management of datasets that will be generated as well as the software that
will be used during the lifetime of this project. This document is deliverable
D1.2 from the project and gathers such information. To this purpose the
following information is put forward:
* The datasets generated during the project and their management during and after it.
* The methodologies and standards (if any) that will be applied to manage each of the datasets.
* The storage of the datasets during and after the project, and their accessibility after its conclusion.
Some of the datasets generated in the project are expected to be confidential
and, in consequence, not distributable. Which of them will be made public
still has to be discussed with the Topic Manager of the project. Any relevant
change with regard to the current DMP contained in this document will be
submitted to the Commission.
# Introduction
ESTiMatE is a Clean Sky H2020 project aimed at developing a modelling
strategy, based on CFD simulations, for the prediction of soot, in terms of
chemical evolution and particle formation, under conditions relevant to
aero-engine operation. This DMP describes how data generated during the
project will be managed both during and after it.
The document follows the Horizon 2020 FAIR DMP template and the FAIR data
guiding principles; i.e. data must be Findable, Accessible, Interoperable, and
Re-usable.
# Structure of the ESTiMatE annex
As stated in the Grant Agreement (GA), in the ESTiMatE project several flame
configurations, as well as the atomization process for an air-blast atomizer,
will be measured and simulated, each of them referred to as a case
configuration. On the one hand, detailed experimental information will be
obtained, such as spatial fields of velocity, species, etc., and soot
measurements or particle size distributions, depending on the experiment. On
the other hand, these measurements will be compared with simulations that
require High Performance Computing (HPC) and the application of advanced
combustion and soot models.
In this way, for each case configuration several databases will be created
according to the following general structure (some of the following database
may be omitted depending on the configuration case):
* Boundary conditions for the configuration.
* Experimental measurements of the configuration.
* Simulation set-up for the configuration (constant models, meshes, etc.). ● Simulation results for the configuration.
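As a hedged structural sketch of the list above (the class and all names are illustrative, not the project's actual schema), each case configuration groups its databases, any of which may be omitted:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaseConfiguration:
    """One measured/simulated configuration case and its associated databases."""
    name: str
    boundary_conditions: Optional[str] = None        # repository location, if produced
    experimental_measurements: Optional[str] = None
    simulation_setup: Optional[str] = None           # model constants, meshes, etc.
    simulation_results: Optional[str] = None

flame_a = CaseConfiguration(
    name="FlameConfigurationA",                      # hypothetical case name
    boundary_conditions="repo://flame-a/bc",
    experimental_measurements="repo://flame-a/exp",
)
# Databases not yet produced simply remain None, matching the note above that
# some of them may be omitted depending on the configuration case.
```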
Each configuration has a summary sheet containing its main information,
followed by a second part in which sheets with detailed information about the
repositories are given together with the FAIR metrics. In addition, each code
used in the project has a descriptive sheet with its main characteristics.
This information is given in the annex of this document.
In the following, the items included in each of the different sheets are
described.
## Data summary
In this sheet a summary of the dataset related to one configuration case is
given with the following entries:
<table>
<tr>
<th>
**Item**
</th>
<th>
**Comments/explanation**
</th> </tr>
<tr>
<td>
Project
</td>
<td>
Name of the configuration case
</td> </tr>
<tr>
<td>
Relevant aspects
</td>
<td>
Aspects to be emphasized about the case configuration
</td> </tr>
<tr>
<td>
Codes
</td>
<td>
Codes used for calculations
</td> </tr>
<tr>
<td>
WPs involved
</td>
<td>
Project work packages involved in the configuration case
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Description of the activities carried out in the configuration case
</td> </tr> </table>
Table 1: list of items that describe the main characteristics of each case
configuration.
<table>
<tr>
<th>
**Item**
</th>
<th>
**Comments/explanation**
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Name of the datasets related to the configuration case
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Description of the datasets related to the configuration case
</td> </tr>
<tr>
<td>
Data category
</td>
<td>
Data category according to table 4
</td> </tr>
<tr>
<td>
Repository location
</td>
<td>
Name of the repository where datasets are located
</td> </tr>
<tr>
<td>
FAIR code
</td>
<td>
Average mark for each category of the FAIR metrics
</td> </tr>
<tr>
<td>
References to other datasets/software
</td>
<td>
Name of other referenced datasets/software
</td> </tr> </table>
Table 2: list of items that describe the datasets for each configuration.
## Dataset sheet
The following information is included for each dataset sheet related to a
configuration case:
<table>
<tr>
<th>
**Item**
</th>
<th>
**Comments/explanation**
</th> </tr>
<tr>
<td>
Name
</td>
<td>
Descriptive name to identify the dataset
</td> </tr>
<tr>
<td>
Data category
</td>
<td>
Data category code (see Table Data Category for the corresponding codes)
</td> </tr>
<tr>
<td>
Licence
</td>
<td>
Chosen among the most appropriate ones
</td> </tr>
<tr>
<td>
Repository location
</td>
<td>
Institutional or public repository name and URL, if available
</td> </tr>
<tr>
<td>
Author
</td>
<td>
Data author(s) name(s)
</td> </tr>
<tr>
<td>
Naming
Conventions
</td>
<td>
File names structure and conventions
</td> </tr>
<tr>
<td>
Versioning
</td>
<td>
How and where the version of the dataset can be found
</td> </tr>
<tr>
<td>
Format
</td>
<td>
Standard formats and content standards, definitions, ontologies, etc. Link to
description of format document. General or specific format - libraries or
parsing code
</td> </tr>
<tr>
<td>
Size
</td>
<td>
Estimation of total files size
</td> </tr>
<tr>
<td>
Storage
</td>
<td>
Physical support
</td> </tr>
<tr>
<td>
Archive path
</td>
<td>
Folders structure
</td> </tr>
<tr>
<td>
Associated metadata
</td>
<td>
Reference to metadata standards
</td> </tr>
<tr>
<td>
Provenance
</td>
<td>
Structured dataset origin information
</td> </tr>
<tr>
<td>
Backups needs
</td>
<td>
Periodicity, subsets backup needs analysis, etc.
</td> </tr>
<tr>
<td>
Access permissions
</td>
<td>
Lifecycle dependency: only specific groups of collaborators, all partners,
whole community…
</td> </tr>
<tr>
<td>
Legal/ethical restrictions
</td>
<td>
Privacy and security issues
</td> </tr>
<tr>
<td>
Reproducibility
</td>
<td>
If yes: connection to code and environment
</td> </tr>
<tr>
<td>
Data transfer needs
</td>
<td>
Replicas and periodic transfers to/from other repositories
</td> </tr>
<tr>
<td>
Long term preservation
</td>
<td>
Needs at 3-5-7-10 years (if any)
</td> </tr>
<tr>
<td>
Metadata management
</td>
<td>
Way to access metadata when data are not available
</td> </tr>
<tr>
<td>
Resources need
</td>
<td>
Analysis of resources needs at each step of data lifecycle
</td> </tr>
<tr>
<td>
References to other datasets
</td>
<td>
Name of other referenced datasets
</td> </tr> </table>
Table 3: list of items in the dataset sheet and their definition.
The list of data categories is given here.
<table>
<tr>
<th>
**Data category**
</th>
<th>
**Code**
</th>
<th>
**Name**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
Scientific data
</td>
<td>
1.1
</td>
<td>
Models
</td>
<td>
Data generated by the application of models
</td> </tr>
<tr>
<td>
1.2
</td>
<td>
Experimental
</td>
<td>
Data coming from observation, measurements or produced by
detectors/sensors or by any other experimental device and or activity
</td> </tr>
<tr>
<td>
1.3
</td>
<td>
Synthetic
</td>
<td>
Data generated by a simulation and/or are not obtained by direct measurement
</td> </tr>
<tr>
<td>
1.4
</td>
<td>
Test
</td>
<td>
Datasets (experimental or synthetical) used to validate models
</td> </tr>
<tr>
<td>
Software
</td>
<td>
2.1
</td>
<td>
Libraries
</td>
<td>
Implementation of libraries
</td> </tr>
<tr>
<td>
2.2
</td>
<td>
Applications
</td>
<td>
Development of applications
</td> </tr>
<tr>
<td>
2.3
</td>
<td>
Services
</td>
<td>
Services provided
</td> </tr>
<tr>
<td>
2.4
</td>
<td>
APIs
</td>
<td>
Creation of application programming interfaces
</td> </tr>
<tr>
<td>
Administrative documents
</td>
<td>
3.1
</td>
<td>
Documents
</td>
<td>
Any documentation, either public or private, such as code documentation,
technical notes, etc., not directly mentioned
in the project deliverable list.
</td> </tr>
<tr>
<td>
3.2
</td>
<td>
Internal reports
</td>
<td>
Meeting minutes, internal notes to document the evolution of the project, such
as calendar, resources management, mailing lists, etc.
</td> </tr>
<tr>
<td>
3.3
</td>
<td>
Deliverables
</td>
<td>
Project output documents
</td> </tr>
<tr>
<td>
Other
</td>
<td>
4.1
</td>
<td>
Metadata
</td>
<td>
Any data describing data properties. If they contain scientific information,
they can also be classified as scientific data
</td> </tr> </table>
Table 4: summary of the different data categories.
## Software sheet
In a similar way to the dataset sheet, the software sheet contains a detailed
description of the codes used in the simulations according to the following
table:
<table>
<tr>
<th>
**Item**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
Reference name of the program or workflow
</td>
<td>
Name of the code
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Brief description of the functionality and applicability of the software
</td> </tr>
<tr>
<td>
Author
</td>
<td>
Authors of the software
</td> </tr>
<tr>
<td>
Programming language
</td>
<td>
Programming language(s) used for code implementation
</td> </tr>
<tr>
<td>
Rules and best coding practices
</td>
<td>
Conventions for filenames, link to an external manual, if exists (ex: PEP8,
etc.)
</td> </tr>
<tr>
<td>
Access permissions and license
</td>
<td>
Lifecycle dependency: groups of collaborators, all partners, whole community,
etc.
</td> </tr>
<tr>
<td>
Code size
</td>
<td>
Code size
</td> </tr>
<tr>
<td>
Repository type
</td>
<td>
GitHub, GitLab, Bitbucket, SourceForge...
</td> </tr>
<tr>
<td>
Repository structure
</td>
<td>
Branches, tags, etc.
</td> </tr>
<tr>
<td>
Provenance information
</td>
<td>
Containers, virtual environments
</td> </tr>
<tr>
<td>
Backup and archiving needs
</td>
<td>
If any
</td> </tr>
<tr>
<td>
Legal/ethical restrictions
</td>
<td>
If any
</td> </tr>
<tr>
<td>
Versioning control and rules/workflows managing
</td>
<td>
Specify the repository
</td> </tr>
<tr>
<td>
Code transfer needs and security
</td>
<td>
If any
</td> </tr>
<tr>
<td>
Long term preservation needs
</td>
<td>
Only if applies to a given official release version
</td> </tr>
<tr>
<td>
Documentation and inline comments rules
</td>
<td>
If any
</td> </tr>
<tr>
<td>
Metadata management
</td>
<td>
Available even when the software is not
</td> </tr>
<tr>
<td>
Resources need
</td>
<td>
Requirements for software at each step of the life cycle (access to
repository, computational needs, accessibilities, permissions, ...)
</td> </tr> </table>
Table 5: list of items in the software sheet.
# FAIR data
The FAIR Guiding Principles (Wilkinson et al., 2016; DOI:
10.1038/sdata.2016.18) describe distinct considerations for contemporary data
publishing environments with respect to supporting both manual and automated
deposition, exploration, sharing and reuse. A metric to quantify the degree of
"FAIRness" of each dataset in ESTiMatE has been defined: it yields a
normalized value (between 0 and 1) for each of the four FAIR components. In
turn, each of these values results from assigning a flag value, again between
0 and 1, to each of the FAIR subcomponents defined by Wilkinson et al. (2016)
and listed in Table 6; a minimal computation sketch is given after the table.
<table>
<tr>
<th>
**F**
</th>
<th>
**FINDABLE**
</th>
<th>
</th> </tr>
<tr>
<td>
F.1
</td>
<td>
Persistent Identifiers (PDI)
</td>
<td>
(Meta)data are assigned a globally unique and persistent identifier
</td> </tr>
<tr>
<td>
F.2
</td>
<td>
Rich metadata
</td>
<td>
Data are described with rich
metadata (defined by subcomponent R.1 below)
</td> </tr>
<tr>
<td>
F.3
</td>
<td>
Metadata specifies the PDI
</td>
<td>
Metadata clearly and explicitly include the identifier of the data it
describes
</td> </tr>
<tr>
<td>
F.4
</td>
<td>
Data registered in searchable resources
</td>
<td>
(Meta)data are registered or indexed in a searchable resource
</td> </tr>
<tr>
<td>
**A**
</td>
<td>
**ACCESSIBLE**
</td>
<td>
</td> </tr>
<tr>
<td>
A.1
</td>
<td>
Retrievable by the PDI with a standardized protocol
</td>
<td>
(Meta)data are retrievable by their identifier using a standardized
communications protocol.
</td> </tr>
<tr>
<td>
A.1.2
</td>
<td>
Open, free protocol
</td>
<td>
The protocol is open, free and universally implementable
</td> </tr>
<tr>
<td>
A.1.3
</td>
<td>
Authentication and authorization
</td>
<td>
The protocol allows for an authentication and authorization procedure, where
necessary
</td> </tr>
<tr>
<td>
A.2
</td>
<td>
Metadata availability
</td>
<td>
Metadata are accessible beyond the data availability
</td> </tr>
<tr>
<td>
**I**
</td>
<td>
**INTEROPERABLE**
</td>
<td>
</td> </tr>
<tr>
<td>
I.1
</td>
<td>
Formal, accessible, shared and applicable language
</td>
<td>
(Meta)data use a formal, accessible, shared and broadly applicable language
for knowledge
representation
</td> </tr>
<tr>
<td>
I.2
</td>
<td>
FAIR vocabulary
</td>
<td>
(Meta)data use vocabularies that follow FAIR principles
</td> </tr>
<tr>
<td>
I.3
</td>
<td>
Metadata references
</td>
<td>
Metadata includes qualified references to other metadata
</td> </tr>
<tr>
<td>
**R**
</td>
<td>
**REUSABLE**
</td>
<td>
</td> </tr>
<tr>
<td>
R.1
</td>
<td>
Relevant metadata
</td>
<td>
(Meta)data have plurality of accurate and relevant attributes
</td> </tr>
<tr>
<td>
R.1.1
</td>
<td>
Usage license
</td>
<td>
(Meta)data are released with a clear and accessible data usage license
</td> </tr>
<tr>
<td>
R.1.2
</td>
<td>
Provenance
</td>
<td>
(Meta)data are associated with detailed provenance
</td> </tr>
<tr>
<td>
R.1.3
</td>
<td>
Community standards
</td>
<td>
(Meta)data meet domain-relevant community standards
</td> </tr> </table>
Table 6: definition of the different FAIR components used to quantify the
degree of fairness of each dataset.
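As a loose illustration of the metric just described (the flag values below are invented for the example), the mark for one FAIR component is simply the average of its subcomponent flags:

```python
def component_score(flags):
    """Average the 0-1 flags of one FAIR component's subcomponents."""
    return sum(flags.values()) / len(flags)

# Invented example flags for the Findable subcomponents of Table 6.
findable_flags = {"F.1": 1.0, "F.2": 0.5, "F.3": 1.0, "F.4": 0.0}
print(f"F = {component_score(findable_flags):.2f}")   # F = 0.62
```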
## Making ESTiMatE data Findable
ESTiMatE datasets suited for publication will be made easily citable and
findable through the assignment of Persistent Identifiers.
* The codes will be stored in repositories that support versioning and tags, allowing the identification of official releases and their connection with their outputs.
* Whenever possible, a rich metadata model and registration in disciplinary repositories will be used to allow other scientists to find the datasets produced by the project.
* Given the variety of the project data, the specific solutions and data models adopted for each dataset and software package can be found in the corresponding sheet of this DMP.
## Making ESTiMatE data openly Accessible
Access to the datasets will depend on the specific case and will be described
in the corresponding dataset sheet. Access restrictions will be enforced in
cases where confidential data from the Topic Manager are used or generated.
Metadata will be made available on the web, independently of the accessibility
of the data.
## Making ESTiMatE data Interoperable
The choice of metadata standards and the way to access the data are still
under discussion among the consortium members. Metadata standards will be
chosen to guarantee maximum interoperability.
## Increase ESTiMatE data Re-use
The ESTiMatE open datasets will be licensed under one of the Creative Commons
data licenses (see Table 7).
<table>
<tr>
<th>
</th>
<th>
Allowed
</th> </tr>
<tr>
<td>
**Creative**
**Commons**
</td>
<td>
**Description**
</td>
<td>
**Modification of the content**
</td>
<td>
**Commercial Use**
</td>
<td>
**Free cultural works**
</td>
<td>
**Open**
**definition**
</td> </tr>
<tr>
<td>
CC0
</td>
<td>
Free content, no restrictions
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
BY
</td>
<td>
Attribution
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
BY-SA
</td>
<td>
Attribution+ ShareAlike
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
BY-NC
</td>
<td>
NonCommercial
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
No
</td>
<td>
No
</td> </tr>
<tr>
<td>
BY-ND
</td>
<td>
NoDerivatives
</td>
<td>
No
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
No
</td> </tr>
<tr>
<td>
BY-NC-SA
</td>
<td>
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
No
</td>
<td>
No
</td> </tr>
<tr>
<td>
BY-NC-ND
</td>
<td>
</td>
<td>
No
</td>
<td>
No
</td>
<td>
No
</td>
<td>
No
</td> </tr> </table>
Table 7: data licensing options.
# Allocation of resources
There is no additional cost for making the ESTiMatE datasets FAIR:
* The code performance evaluation datasets of the open source codes of the project will be maintained at BSC facilities and could be included in publications.
* The rest of the open-data will be stored at the project site for at least three years after the end of the project. The infrastructure and personnel funds granted from the European Community will cover the storage, hardware and staff time to manage the servers on which the data will be stored.
# Data security
Each dataset will be evaluated separately, and any exceptional security
measures will be identified and applied. Regular backups will be made to
prevent loss of information.
# Engagement with EUDAT
Solutions for data management and movement will be provided. In particular,
the project will foster the use of EUDAT services to store and publish
research data (B2SHARE), to distribute and store large volumes of data based
on data policies (B2SAFE), to transfer data between data resources and
external computational facilities (B2STAGE), to exploit data citation
(B2HANDLE), which for EUDAT-hosted data is managed through Persistent
Identifiers (PIDs), and to enrich metadata (B2NOTE).
## 1. Introduction
The FOCUS consortium aims to ensure that the highest standards of data
management are respected throughout the project. This document sets out the
consortium's approach to managing data that is collected, generated, and/or
used during the research.
### 1.1 Project overview
In 2015 and 2016 the EU experienced an unparalleled influx of refugees and
migrants. This has posed multiple challenges for social- and health services
and labour markets in host communities, as well as for the lives of the
refugees.
In response to this situation, the vision of the FOCUS project is to increase
understanding of, and provide effective and evidence-based solutions for, the
challenges of forced migration within host communities. This, it is believed,
will contribute to increased tolerance, peaceful coexistence, and reduced
radicalization across Europe and the Middle East.
Based on comprehensive mapping and trans-disciplinary multi-site field
research conducted in Jordan, Croatia, Germany and Sweden, FOCUS explores the
socio-psychological dimensions of refugee- and host-community relations. It
aims to determine the relation between socio-economic and socio-psychological
integration. The project will analyse the socio-economic integration of
refugees, and the consequences of this in host societies.
Knowledge developed in the project will be used to transform and strengthen
existing promising solutions for social- and labour market integration. The
integration solutions will be pilot tested in at least five European countries
by governmental and non-governmental end-users. The solutions are finally
brought together in a “Refugee and Host Community Toolbox”, which will support
policy makers, municipal actors, civil society organisations and other
stakeholders in responding to the needs of refugees and host communities.
In addition, FOCUS undertakes an ambitious programme of engagement with policy
makers, end-users, host communities, refugees and other stakeholders. This
will ensure that FOCUS research and solutions are acceptable and useful for
policy makers, while meeting the needs of end-user organisations and,
ultimately, refugees and host communities.
FOCUS is a three-year project, beginning January 2019. The project is funded
by the European Union through the Horizon 2020 research and innovation
programme (grant agreement number 822401).
### 1.2 Overview of data generated in FOCUS
A good deal of data will be generated in FOCUS. This includes:
* data about the FOCUS project partners, such as internal communications of the consortium (including personal data), e.g. emails, meeting agendas, notes, and minutes, action plans, working documents, etc.;
* external communications between the consortium and members of the FOCUS Advisory Board and Ethics Advisory Board (including personal data), e.g. emails, meeting agendas, notes, and minutes, action plans, working documents, etc.;
* external communications between the consortium and third-party stakeholders (including personal data), e.g. EU Project Officers and appointed reviewers;
* data generated from desk research activities, e.g. mapping documents, literature reviews, thematic or policy analyses and recommendations, etc.;
* tools, frameworks, methodologies, training materials, and operational solutions to build trust between host communities and refugees (the "Refugee and Host Community Toolbox");
* data (including personal data) generated from interaction with research participants, including interviewees, conference/workshop attendees, field-work participants, focus group members, survey respondents, and pilot-testing participants;
* dissemination and communication materials and activities, e.g. planning/strategy/sustainability documents, promotional materials (website, presentations, posters, newsletters, press releases, articles, project-related videos, social-media output), peer-reviewed publications and conference presentations;
* metadata, of various kinds, associated with the generation, processing, or use of any of the above categories of data or research objects.
### 1.3 Data Management Plan: overview
The Data Management Plan (DMP) describes the data management life cycle for
all data collected, processed, and generated during the FOCUS project. It
provides details on:
* the types and formats of data generated, collected, and processed during the project;
* the purposes for which this data is generated, collected and processed;
* how the consortium complies with the principles of “FAIR data management” (i.e. that data should be _findable, accessible, interoperable, and reusable_ ); and how it meets its responsibilities to make its findings available through _open access_ ;
* what resources are allocated to data management in the project, and who is responsible;
* how the data is secured against loss, misuse, corruption, etc.
* ethical aspects of data management in the project and how the consortium meets its data protection responsibilities.
The DMP is a living document. As such it will be updated throughout the
project, in accordance with the timeline set out in _section 1.5_ below.
### 1.4 Ethics Management Plan
Data management raises several ethical issues (concerning, e.g., privacy,
responsibilities to disseminate research findings, etc.). These are addressed
throughout the DMP. There are many other ethical issues raised by a project of
this kind. Since many of these are directly or indirectly related to data
management, and since these issues require a responsive management approach,
the DMP – as a living document – is a suitable place to record them.
Accordingly, we include as an annex to this document the project Ethics
Management Plan (EMP).
The EMP ( _section 8_ ) sets out the management structure that the consortium
has developed in order to best address ethics and research ethics requirements
(including details of the Ethics Advisory Board). It also includes brief but
detailed analysis of ethics issues and challenges that are anticipated in the
project. The consortium recognises that research involving refugees poses
particular challenges. The EMP sets out how we intend to meet these
challenges.
### 1.5 Timetable for updates
The DMP is a living document that will be updated at key moments throughout
the project. The timetable is as follows. Note, however, that we maintain
flexibility and will produce additional “unscheduled” versions of the DMP if
there are significant changes that require an immediate update.
<table>
<tr>
<th>
**Month**
</th>
<th>
**Date**
</th>
<th>
**Version**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
M7
</td>
<td>
31 July 2019
</td>
<td>
DMP (first issue)
</td>
<td>
The vast majority of data collection lies in the future; hence this version is
indicative of plans that are not yet finalised.
</td> </tr>
<tr>
<td>
M24 ( _or after end of WP4 / fieldwork_ )
</td>
<td>
31 December 2020
</td>
<td>
DMP
(intermediate issue)
</td>
<td>
</td> </tr>
<tr>
<td>
M36
</td>
<td>
31 December 2021
</td>
<td>
DMP (final issue)
</td>
<td>
</td> </tr> </table>
### 1.6 This version in context
This version of the DMP is the _first issue_ . This means that it is produced
at a moment in the project when the vast majority of data collection and
analysis lies before us. Accordingly, this version is largely prospective,
less granular in detail than subsequent versions will be, and is only
indicative of our plans for good data management.
## 2. Data Summary
### 2.1 Section summary
2.1.1 What is the purpose of the data collection/generation and its relation
to the objectives of the project?
The vision of the FOCUS Consortium is to increase our understanding of, and
provide effective and evidence-based solutions for, the challenges of forced
migration within host communities and thereby contribute to increased
tolerance, peaceful coexistence, and reduced radicalisation across Europe and
in the Middle East. The FOCUS project aims to conduct state-of-the-art
research on host-community-refugee relations and develop solutions for the
successful coexistence of host communities and refugees. To achieve this, the
FOCUS objectives are centred on three dimensions:
1. Research
2. Solutions
3. Policy engagement
The FOCUS objectives are listed in the Description of Action. For each
objective, it is indicated in which work package the objective will be
addressed, and how it can be verified during the project period whether the
objective has been fulfilled.
<table>
<tr>
<th>
**Dimension**
</th>
<th>
**Objectives**
</th>
<th>
**Indicators and verification method**
</th>
<th>
**WP**
</th> </tr>
<tr>
<td>
Research
</td>
<td>
1\. Contribute to the **evidence base** on understanding refugee/host
community relations through addressing the central research question: _How do
different patterns of the socio-economic integration of refugees influence the
socio-psychological dimensions of refugee- and host-community relations, and
vice-versa?_
</td>
<td>
Comprehensive mapping of available evidence, policies and solutions on forced
migration conducted by M6.
Joint socio-economic and socio-psychological research methodology in place by
M6
Major research programme completed and reports from Jordan, Croatia, Sweden
and Germany in place by M24
A set of socio-economic and socio-psychological indicators to measure
integration are developed and pilot tested by M36
</td>
<td>
_WP2_ _WP3_ _WP4_
</td> </tr>
<tr>
<td>
Solutions
</td>
<td>
2\. **Develop and pilot test solutions** to foster peaceful coexistence
between refugees and host communities
</td>
<td>
Refugee and Host Community Toolbox developed and pilot tested in five European
countries by M30
</td>
<td>
_WP2_ _WP5_ _WP6_
</td> </tr>
<tr>
<td>
Policy engagement
</td>
<td>
3\. Provide an overall framework for **policy makers** to adopt and adapt the
solutions and recommendations for the adoption of effective policies and
practices in diverse settings.
</td>
<td>
At least 20 policy makers at different levels engaged throughout the project
implementation through a series of consultations and interviews.
Network of Host Communities established by M7.
Policy road map in place by M30
Guide for Adaption Solutions in place by M36
</td>
<td>
_WP2_ _WP6_ _WP7_
</td> </tr> </table>
2.1.2 What types and formats of data will the project generate/collect?
The main types of data are: (i) outcomes of desk research (summaries,
literature reviews, analysis of existing datasets, etc.); (ii) feedback from
stakeholder workshops and similar events; (iii) responses to surveys and focus
groups carried out in fieldwork with refugees and host communities, as well as
in pilot-testing activities with end-users; (iv) content and metadata from the
online Network of Host Communities; (v) contact details of project
stakeholders. This data is held, respectively, in the following formats: (i)
.docx, .pdf, .xlsx; (ii) .docx, .pdf, .xlsx; (iii) .xlsx, .csv, .sav, .sps,
.txt; (iv) .csv; (v) .xlsx, with contact details also stored in partners’ email clients and servers. 1
2.1.3 Will you re-use any existing data and how?
We use existing, publicly available datasets (or datasets which are available
on application) in WPs 2, 3, and 4. These are analysed to provide statistical
perspectives on socio-economic and psycho-social aspects of integration. In
Sweden these come from Statistics Sweden (SCB). For Germany we use the
Socio-Economic Panel (SOEP), which provides micro data on refugees in Germany.
2 For the data on the flow of asylum seekers we have used the publicly
available dataset of Destatis 3 , as well as the dataset of the Federal
Office for Migration and Refugees. 4
2.1.4 What is the origin of the data?
Primary data is gathered during fieldwork with refugees and host communities
in Jordan, Croatia, Germany, and Sweden, through pilot-testing for the
developed tools in Austria, Denmark, Germany, Sweden, and the United Kingdom,
and through various stakeholder workshop events. 5 Contact details of
stakeholders are collected from either the existing networks of project
partners or from publicly available sources. Swedish statistics come from
Statistics Sweden (SCB). The German SOEP data, which will mainly be used for
secondary data analysis in WP4, is not publicly available; access to the data
requires an official contract. 6 Existing datasets
used for providing statistical perspectives on socio-economic and psycho-
social aspects of integration are publicly available. Data is contributed to
the Network of Host Communities on a voluntary basis by stakeholders.
2.1.5 What is the expected size of the data?
Data collected during the fieldwork (which is the main focal point for data
collection in the project and the only point at which we expect more than
“low” volume of data collection) is projected to amount to 2,400 host
community survey responses, 2,000 refugee survey responses, and around 16 to 20
focus group transcripts (4 to 5 groups in each of the four fieldwork
countries). Details on data minimisation in fieldwork are
provided in the Fieldwork Data Protection Impact Assessment ( _section 7.1_ ).
2.1.6 To whom might the data be useful (“data utility”)?
Data gathered in the project will be of use to researchers working in relevant
areas, organisations (NGOs, civil society organisations, etc.) active in the
field, and policy-makers.
### 2.2 Types, formats, sources/origins
The following table shows the different **types of datasets** that we expect
to collect, generate, or use in the FOCUS project, the **sources or origins**
of the data, the **format** in which the data sets will be stored, the
**volume** of data expected, the Work Packages and tasks with which the data
sets are associated, and the partners responsible for that dataset/task.
**Table 1:** Types, format and sources of datasets in FOCUS
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset / Type**
</th>
<th>
**Source / Origins**
</th>
<th>
**Format**
</th>
<th>
**Volume**
</th>
<th>
**WP / Task**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
1
</td>
<td>
**Host-community/refugee relations desk research**
</td>
<td>
Desk research: literature reviews, policy analysis, migration data analysis
</td>
<td>
.docx, .pdf, .xlsx,
.csv
</td>
<td>
Low
</td>
<td>
WP2: T2.1, T2.2, T2.3, T2.5
</td>
<td>
**MAU** (WP2, T2.1,
T2.3, T2.5)
**FFZG** (T2.2)
</td> </tr>
<tr>
<td>
2
</td>
<td>
**International (UNHCR, IOM), regional (EU), national asylum/migration flow
data**
</td>
<td>
Publicly available datasets
</td>
<td>
.pdf, .xlsx, .csv
</td>
<td>
Low
</td>
<td>
WP2: T2.5
</td>
<td>
**MAU** (WP2,
T2.5)
</td> </tr>
<tr>
<td>
3
</td>
<td>
**Policy-maker structured interviews data**
</td>
<td>
Structured interviews with govt and non-govt policy makers at EU, MS, and
local levels
</td>
<td>
.docx, .pdf
</td>
<td>
Low
</td>
<td>
WP2: T2.3
WP6: T6.1
</td>
<td>
**MAU** (WP2,
T2.3)
**Q4** (WP6, T6.1)
</td> </tr>
<tr>
<td>
4
</td>
<td>
**End-user semi-structured interviews data**
</td>
<td>
Semi-structured interviews with end-users at local government and NGO levels
</td>
<td>
.docx
</td>
<td>
Low
</td>
<td>
WP2: T2.4
</td>
<td>
**MAU** (WP2,
T2.4)
</td> </tr> </table>
<table>
<tr>
<th>
5
</th>
<th>
**End user workshop data**
</th>
<th>
Workshop with members of the project end-user board
</th>
<th>
.docx, .pdf
</th>
<th>
Low
</th>
<th>
WP2: T2.4
</th>
<th>
**MAU** (WP2,
T2.4)
</th> </tr>
<tr>
<td>
6
</td>
<td>
**Indicators of socio-psychological and socio-economic integration**
</td>
<td>
Analysis of WP2 data/results
</td>
<td>
.docx, .pdf
</td>
<td>
Low
</td>
<td>
WP3: T3.1
</td>
<td>
**FFZG** (WP3,
T3.1)
</td> </tr>
<tr>
<td>
7
</td>
<td>
**National integration-relevant data**
</td>
<td>
Publicly available datasets (such as register, census, and survey data)
</td>
<td>
.pdf, .xlsx, .csv
</td>
<td>
Low
</td>
<td>
WP3: T3.1
WP4: T4.5
</td>
<td>
**FFZG** (WP3,
T3.1)
**CSS** (WP4)
**MAU** (T4.5)
</td> </tr>
<tr>
<td>
8
</td>
<td>
**Methodology workshop data**
</td>
<td>
Workshop with consortium, Advisory Board, Ethics Advisory Board
</td>
<td>
.docx, .pdf
</td>
<td>
Low
</td>
<td>
WP3: T3.2
</td>
<td>
**FFZG** (WP3,
T3.2)
</td> </tr>
<tr>
<td>
9
</td>
<td>
**Fieldwork pilot testing data**
</td>
<td>
Primary data collection from (n=20) host community members and (n=10) refugees
at each study site (Jordan,
Croatia, Germany, Sweden)
</td>
<td>
.xlsx, .csv, .docx,
.pdf
</td>
<td>
Low
</td>
<td>
WP3: T3.3
</td>
<td>
**FFZG** (WP3,
T3.3)
</td> </tr>
<tr>
<td>
10
</td>
<td>
**Fieldwork survey data**
</td>
<td>
Primary data collection, via survey, from n=600 host community member
participants and n=600 refugee participants in each of Jordan, Germany, and
Sweden (i.e. 1,200 per country), and n=600 host community participants and n=200
refugee participants in Croatia
</td>
<td>
.csv, .xlsx, .sav,
.sps
</td>
<td>
Medium
</td>
<td>
WP4: T4.3
</td>
<td>
**FFZG** (WP4,
T4.3)
</td> </tr>
<tr>
<td>
11
</td>
<td>
**Fieldwork focus group data**
</td>
<td>
Primary data collection via 4-5 focus groups in each country (Jordan, Croatia,
Germany, Sweden)
</td>
<td>
.docx, .pdf, .txt
</td>
<td>
Low
</td>
<td>
WP4: T4.4
</td>
<td>
**CSS** (WP4)
**HU** (T4.4)
</td> </tr>
<tr>
<td>
12
</td>
<td>
**Cross-site analysis data**
</td>
<td>
Analysis of datasets 9 & 10.
</td>
<td>
.xlsx, .csv, +TBC
</td>
<td>
Low
</td>
<td>
WP4: T4.6
</td>
<td>
**CSS** (WP4, T4.6)
</td> </tr> </table>
<table>
<tr>
<th>
13
</th>
<th>
**Refugee and Host Community Toolbox, version 1**
</th>
<th>
Selection of tools identified in WP2
</th>
<th>
.docx
</th>
<th>
Low
</th>
<th>
WP5: T5.1
</th>
<th>
**DRC** (WP5, T5.1)
</th> </tr>
<tr>
<td>
14
</td>
<td>
**Toolbox training seminar data**
</td>
<td>
3-day training seminar with pilot-testing participants.
</td>
<td>
.docx
</td>
<td>
Low
</td>
<td>
WP5: T5.3
</td>
<td>
**DRC** (WP5, T5.3)
</td> </tr>
<tr>
<td>
15
</td>
<td>
**Toolbox pilot test data**
</td>
<td>
Primary data collection via pilot testing in different countries
</td>
<td>
.xlsx, .csv, .docx
</td>
<td>
Low
</td>
<td>
WP5: T5.3, T5.4
</td>
<td>
**DRC** (WP5, T5.3,
T5.4)
</td> </tr>
<tr>
<td>
16
</td>
<td>
**Refugee and Host Community Toolbox, version 2**
</td>
<td>
Refined set of tools identified in WP2, honed in WP5 & WP6
</td>
<td>
.docx
</td>
<td>
Low
</td>
<td>
WP5: T5.3
WP6: T6.2
</td>
<td>
**DRC** (WP5, T5.3,
T6.2)
**Q4** (WP6)
</td> </tr>
<tr>
<td>
17
</td>
<td>
**Project videos**
</td>
<td>
Videos taken to promote the project, which may include members of the
consortium speaking about the project, its goals and progress.
</td>
<td>
.mp4 or similar
</td>
<td>
Low
</td>
<td>
WP7: T7.2, T7.4
</td>
<td>
**ART** (WP7)
</td> </tr>
<tr>
<td>
18
</td>
<td>
**CMT (consortium internal) data**
</td>
<td>
Data (content and metadata) generated by the consortium’s use of the Community
Management Tool (CMT) for purposes of project management and interaction
</td>
<td>
.csv & online
</td>
<td>
Low
</td>
<td>
WP1: T1.1
WP7
</td>
<td>
**DRC** (WP1, T1.1)
**ART** (WP7)
</td> </tr>
<tr>
<td>
19
</td>
<td>
**NHC data**
</td>
<td>
Data (content and metadata) generated by the use of the CMT to facilitate the
Network of Host Communities (NHC); data generated from external cooperation
activities (stakeholder workshops, final
conference, etc.)
</td>
<td>
.csv & online
</td>
<td>
Low
</td>
<td>
WP7: T7.1, T7.2, T7.3
</td>
<td>
**ART** (WP7, T7.1,
T7.2, T7.3)
</td> </tr>
<tr>
<td>
20
</td>
<td>
**Stakeholder, end-user,**
**Advisory Board, Ethics Advisory Board member contact details**
</td>
<td>
Publicly available sources
</td>
<td>
.docx, .xlsx, .csv, email-client, partner servers
</td>
<td>
Low
</td>
<td>
WP1: T1.1, T1.3
WP7, T7.1
</td>
<td>
**DRC** (WP1, T1.1)
**AND** (T1.3)
**ART** (WP7, T7.1)
</td> </tr> </table>
### 2.3 Purposes, data utility
The following table specifies the **purposes** for which each of the datasets
identified in Table 1 is collected, generated, and processed, as well as the
**“data utility”** , i.e. an indication of to whom the data might be useful
(outside the specific context of the FOCUS project).
**Table 2:** Purpose and utility of datasets in FOCUS
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset / Type**
</th>
<th>
**Purpose / Output**
</th>
<th>
**Data Utility**
</th> </tr>
<tr>
<td>
1
</td>
<td>
**Host-community/refugee relations desk research**
_Desk research (literature reviews, policy analysis, migration data analysis,
etc.)_
</td>
<td>
This data is collected and analysed to identify current status, trends, and
state-of-the-art knowledge on the socio-economic and socio-psychological
integration of refugees and the impact of refugee migration on host societies.
This feeds into the development of the methodology (e.g. supporting
identification of research questions and relevant indicators) in WP3 for the
fieldwork to be conducted in WP4.
This feeds into the development of the Refugee and Host Community Toolbox, the
Guide for Adapting Solutions, and the Policy Roadmap developed in WP6.
Findings based on this data are presented in deliverable D2.1 ( _Mapping of
host-community/refugee relations_ ).
</td>
<td>
This data will be useful to researchers working in relevant areas.
This data will be useful to organisations (NGOs, civil society organisations,
etc.) active in the field.
This data may be useful to policymakers.
</td> </tr>
<tr>
<td>
2
</td>
<td>
**International (UNHCR, IOM), regional (EU), national asylum/migration flow
data**
_Publicly available datasets_
</td>
<td>
This data is processed in order to map flows and patterns of asylum migration
from Syria.
This feeds into the development of the methodology (e.g. supporting
identification of research questions and relevant indicators) in WP3 for the
fieldwork to be conducted in WP4.
This feeds into the development of the Refugee and Host Community Toolbox, the
Guide for Adapting Solutions, and the Policy Roadmap developed in WP6.
Findings based on this data are presented in deliverable D2.1 ( _Mapping of
host-community/refugee relations_ ).
</td>
<td>
Analysis and outputs based on this publicly available data will be useful to
researchers working in relevant areas.
Analysis and outputs based on this publicly available data will be useful to
organisations (NGOs, civil society organisations, etc.) active in the field.
Analysis and outputs based on this publicly available data may be useful to
policy-makers.
</td> </tr> </table>
<table>
<tr>
<th>
3
</th>
<th>
**Policy-maker structured interviews data**
_Structured interviews with govt and non-govt policy makers at EU, MS, and
local levels_
</th>
<th>
This data is collected and processed in order to conduct a comparative
analysis of integration policies at EU, MS and local levels, including
identification of perceived gaps, challenges, and future policy directions.
This feeds into the development of the Refugee and Host Community Toolbox, the
Guide for Adapting Solutions, and the Policy Roadmap developed in WP6.
Findings based on this data are presented in deliverable D2.1 ( _Mapping of
host-community/refugee relations_ ).
</th>
<th>
Raw data will not be retained (see section 4).
Analysis and outputs based on this data will be useful to researchers working
in relevant areas.
Analysis and outputs based on this data will be useful to organisations (NGOs,
civil society organisations, etc.) active in the field.
Analysis and outputs based on this data may be useful to policy-makers.
</th> </tr>
<tr>
<td>
4
</td>
<td>
**End-user semi-structured interviews data**
_Semi-structured interviews with end-users at local government and NGO levels_
</td>
<td>
Data is collected via semi-structured interviews with end-users in order to
identify and map tools and solutions for implementing successful
host-community/refugee integration and to ensure the ideation of the toolbox is
inclusive of end-user needs, work processes and perspectives.
This feeds into the development of the Refugee and Host Community Toolbox, the
Guide for Adapting Solutions, and the Policy Roadmap developed in WPs 5 and 6.
Findings based on this data are presented in deliverable D2.1 ( _Mapping of
host-community/refugee relations_ ).
</td>
<td>
Raw data will not be retained (see section 4).
Analysis and outputs based on this data will be useful to researchers working
in relevant areas.
Analysis and outputs based on this data will be useful to organisations (NGOs,
civil society organisations, etc.) active in the field.
Analysis and outputs based on this data may be useful to policy-makers.
</td> </tr>
<tr>
<td>
5
</td>
<td>
**End user workshop data**
_Workshop with members of the project end-user board_
</td>
<td>
Data is collected via an end-user workshop in order to identify and map tools
and solutions for implementing successful host-community/refugee integration
and to ensure the ideation of the toolbox is inclusive of end-user needs, work
processes and perspectives.
This feeds into the pilot-testing of the Refugee and Host Community Toolbox in
WP5.
</td>
<td>
The mapping of tools and solutions will be of interest to researchers working
in relevant areas.
The mapping of tools and solutions will be useful to organisations (NGOs,
civil society organisations, etc.) active in the field.
The mapping of tools and solutions may be useful to policy-makers.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
This feeds into the development of the Refugee and Host Community Toolbox, the
Guide for Adapting Solutions, and the Policy Roadmap developed in WP6.
Findings based on this data are presented in deliverable D2.1 ( _Mapping of
host-community/refugee relations_ ).
</th>
<th>
</th> </tr>
<tr>
<td>
6
</td>
<td>
**Indicators of socio-psychological and socio-economic integration**
_Analysis of WP2 data/results_
</td>
<td>
Data will be collected and analysed in WP3 (based on findings from WP2) in
order to identify the most appropriate indicators of socio-psychological
integration and socio-economic effects of refugee migration and integration,
as well as to define precise research questions for the fieldwork in WP4
concerning factors of socio-psychological integration, such as attitudes,
perception and contact between host communities and refugees, and the relation
of these indicators to those of socio-economic integration.
The collection and analysis of this data is essential to research design in
the project.
Findings and developments based on this data are presented in deliverable D3.1
( _Research design and methodology_ ).
</td>
<td>
The research questions and indicators identified may be of interest to
researchers working in relevant areas.
</td> </tr>
<tr>
<td>
7
</td>
<td>
**National integration-relevant data**
_Publicly available datasets (such as register, census, and survey data)_
</td>
<td>
Data will be collected and analysed in WP3 (based on findings from WP2) in
order to identify the most appropriate indicators of socio-psychological
integration and socio-economic effects of refugee migration and integration,
as well as to define precise research questions for the fieldwork in WP4
concerning factors of socio-psychological integration of refugees in host
communities, as well as the related socio-economic factors influencing
integration on a local level.
The collection and analysis of this data is essential to research design in
the project.
Findings and developments based on this data are presented in deliverable D3.1
( _Research design and methodology_ ).
The data will also be used to perform statistical analysis in support of
country-reports and a cross-country comparative report on the socio-economic
integration of refugees in local communities and the socio-
</td>
<td>
The research questions and indicators identified may be of interest to
researchers working in relevant areas.
The four country reports and the cross-site analysis report based on this data
will be of interest to researchers working in relevant areas.
The four country reports and the cross-site analysis report based on this data
will be of interest to organisations (NGOs, civil society organisations, etc.)
active in the field.
The four country reports and the cross-site analysis report based on this data
may be useful to policy-makers.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
economic effects of refugee migration and integration on the host communities
on a set of factors such as the ones described in WP3.
The four country reports and the cross-site analysis report based on this data
will be presented in deliverable D4.3 ( _Cross-site analysis_ ).
</th>
<th>
</th> </tr>
<tr>
<td>
8
</td>
<td>
**Methodology workshop data**
_Workshop with consortium, Advisory_
_Board, Ethics Advisory Board_
</td>
<td>
This data is collected in order to refine and improve the research design for
the project fieldwork. Feedback from consortium members, as well as members of
the Advisory Board and Ethics Advisory Board will be collected, assessed, and
integrated into the methodology as appropriate.
The collection and analysis of this data is essential to research design in
the project.
Findings and developments based on this data are presented in deliverable D3.1
( _Research design and methodology_ ).
</td>
<td>
The research questions and indicators identified may be of interest to
researchers working in relevant areas.
</td> </tr>
<tr>
<td>
9
</td>
<td>
**Fieldwork pilot testing data**
_Primary data collection from (n=20) host community members and (n=10)
refugees at each study site (Jordan,_
_Croatia, Germany, Sweden)_
</td>
<td>
This data is feedback on the survey procedure, needed in order to check the
suitability of the research methodology and approach developed in WP3. This is
an essential pre-step to ensure that the fieldwork conducted in WP4 is
successful. Certain elements of the WP4 research design, such as the
formulation or language adaptation of instruments, may be altered based on this
data to ensure applicability in the main fieldwork study.
</td>
<td>
Due to its low volume, and the fact that survey responses are not retained,
the data itself is not useful outside the context of the project.
</td> </tr>
<tr>
<td>
10
</td>
<td>
**Fieldwork survey data**
_Primary data collection, via survey, from n=600 host community member
participants and n=600 refugee participants in each of Jordan, Germany, and
Sweden (i.e. 1,200 per country), and n=600 host community participants and
n=200 refugee participants in Croatia_
</td>
<td>
This data is collected in order to study the socio-psychological dimensions of
the host community and refugee relations and to analyse the socio-economic
integration of refugees and the consequences of this in the host societies.
The contents of the survey will be determined in WP3 and will focus on socio-
psychological issues such as intergroup relations, perceptions of intergroup
threat, intergroup contacts, social distance and social networking across the
host and refugee groups, views towards social integration and acculturation,
perceptions of the socio-economic effects (costs and benefits) of social and
labour integration of refugees.
The survey will take into consideration a representative sample of a minimum of
600 host community members and 600 refugee
</td>
<td>
This data will be of great interest to researchers working in relevant areas.
Findings based on this data will be useful to organisations (NGOs, civil
society organisations, etc.) active in the field.
Findings based on this data may be useful to policy-makers.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
community members in each country, with the exception of Croatia, where the
sample size of refugees will be 200 people due to the comparatively low number
of refugees from Syria. This sample size is sufficient to reach a 0.95
confidence level with a +/- 4% margin of error (confidence interval); in the
Croatian refugee sample the confidence interval is +/- 5%. (A worked check of
these figures is sketched immediately after Table 2.)
**_Study sites_ **
Four study sites (Germany, Sweden, Croatia, Jordan), focusing on communities
with a high concentration and number of refugees.
**_Target groups_ **
Target groups are host community members and refugees from Syria. The target
group of refugees from Syria is described as forced migrants from Syria who
have been recognized as refugees by UNHCR from 2011 onward in Jordan, or have
received the international protection status (asylum) from 2015 onward for
European countries, and have been living in respective host communities from
the point of receiving this status to date. Inclusion criteria are:
* Age (between 18 and 65 years).
* Refugee/asylum status (must have received positive decision regarding their status).
* Year of receiving refugee status (received after 2015 (2011 in Jordan) qualify for the study). In Jordan the applicable criteria for acknowledging the refugee status will be used.
* Not living in a camp/shared accommodation for refugees.
Host community members are defined as persons who have citizenship or
permanent residency in the respective European country and have been living in
the same host community for at least 7 years (at least since 2013, i.e. two
years prior to the beginning of the migration wave from Syria to Europe). For
Jordan, the host community members are defined as Jordanians, as in Jordan
foreigners cannot receive citizenship or permanent residence. Inclusion
criteria are:
* Age (between 18 and 65 years).
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
* Number of years living in the respective country (more than 7).
* Citizenship or residence (must have one).
Exclusion criteria are:
* Health conditions that prevent normal communication in Arabic or the language of the host community
* Failure to provide informed consent
* Inability to reach the identified target participant after three attempts
* Participants’ refusal to be contacted.
**_Sampling host community participants_ **
Survey of host community members will use two probabilistic sampling
techniques to select the participants. Due to differences among the four sites
with access to registers of host community members, the Random Walk Technique
(RWT) will be used in Germany, Jordan and Croatia. In Sweden, citizen
registries will be used for randomised selection of participants and the
validated interviewing procedures will be followed, as in other similar
population-based studies in Sweden.
In the selected target areas (regions, cities) the sample size will be
proportional to population. Participants will be selected by probability
sampling to ensure the sample structure reflects the areas’ population
characteristics based on available statistics, such as the total male and
female population in the 18 to 65 age group.
The host community members will be sampled using the sampling frame that will
ensure the full probabilistic representativeness. The sampling protocol will
use cluster sampling with several levels of clusters: 1) target geographical
and political entities in each country with highest concentration and number
of Syrian refugees (governorates in Jordan, Federal states in Germany,
counties in
Croatia, municipalities in Sweden), 2) among these clusters select cities with
highest number of refugees, 3) implement a randomized procedure of recruiting
participants using national registries where available and permitted (such as
Statistics Sweden) or the standard random walk technique (RWT) with several
local starting points within
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
the selected cities to ensure probabilistic sample composition that will
reflect the population parameters.
**_Sampling refugee participants_ **
The sampling design for the refugee survey will aim at achieving heterogeneity
to reflect the refugee population parameters, but true probabilistic sampling
is not expected at all study sites. RWT of sampling refugee respondents will
be used if possible in Jordan, while random sampling of refugees based on
registries will be used in Sweden. In Germany and Croatia refugee respondents
will be approached through NGOs that maintain contact with them and if needed
with advertisements and invitations to participate in the study that will be
placed at locations frequented by refugees from Syria.
During initial contact with potential refugee participants the
Information Letter about the study and invitation to participate will be
distributed through NGO channels. Willing participants will send a message
through the NGO intermediaries and will then be contacted.
To minimise potential self-selection and other referral biases, in each area
(region, city) at least five different entry points into the target population
(i.e. NGOs, locations for placing the advertisements and invitations to
participate in the study) will be used.
The refugee study participants will be recruited within the same host
communities as described above. They will be identified using available
national registries where available and permitted (such as Statistics Sweden)
or through partners’ various professional field channels such as social
services, local Red Cross and similar care organizations, and refugee
community groups.
The potential participants will be approached using a combination of
information channels (online, printed, verbal) and invited to participate in
the study.
**_Data collection_ **
Data collection will be conducted in a comparable way across countries using
standard and validated procedures, such as computer assisted telephone
interviewing (CATI), computer assisted personal
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
interviewing (CAPI), or face-to-face paper-and-pencil interviews in the
language preferred by the participants, using the same questionnaire, and in
all cases carried out by trained staff.
**_Quality assurance during data collection_ **
While gathering data, the interviewers will maintain a separate “survey log”
in paper format for each completed and attempted interview. In this log
they will note the address, time, date and outcome of each completed or
attempted interview, whether original or replacement household.
At the end of the interview, the participants will be asked if they agree to
be contacted by the survey supervisor for the purpose of monitoring the work
of the interviewers. If the participant agrees, his/her phone number will be
written in the specific follow-up table together with the participant’s
personal code. This will enable the survey supervisor to verify about 10% of
the completed interviews per interviewer. The telephone numbers will be
randomly selected among the participants who have agreed to be called back. If
selected for the follow-up call, the supervisor will ask the participant if
he/she was interviewed during the previous three days at home (or in case of
refugee participants possibly at other locations) by means of a tablet about
the integration of host community members and refugees. The supervisor will
not be able to identify the individual participant.
In case of irregularities, the personal code will serve to delete this
participant’s data. In such a case, all other interviews done by the same
interviewer will also be deleted. Such an interviewer will be immediately
dismissed and other interviewers will collect data from the replacement
households and participants.
The survey logs will be kept separate from the participants’ responses which
will be entered into the tablet computer during the interview and in no way
will they be linked to the data of an individual participant.
To avoid interviewer bias, none of the interviewers will interview more than
15% of the sample, i.e. a maximum of 90 participants from at least nine
sampling points.
</th>
<th>
</th> </tr> </table>
<table>
<tr>
<th>
11
</th>
<th>
**Fieldwork focus group data**
_Primary data collection via 4-5 focus groups in each country (Jordan,_
_Croatia, Germany, Sweden)_
</th>
<th>
Focus group data is required in order to provide illustrative and profound
information about host community and refugee integration gaps, opportunities
and solutions. This data is collected in order to study the socio-
psychological dimensions of the host community and refugee relations and to
analyse the socio-economic integration of refugees and the consequences of
this in the host societies.
Participants in the qualitative part of the study will be recruited into 4 to
5 focus groups of key informants among the host and refugee community members
in the same cities where the quantitative survey will be done. Both host and
refugee participants will be identified among the general population using
different information channels and reaching out to, for example, schools, work
places, welfare services, job services and other locations where the potential
participants will be approached. The key informants will be defined as
individuals (both women and men, between 18 and 65 years of age), who have
been living in the respective community for at least the past two years, are
aware of the presence of refugees living in the community, and are able to
articulate their experiences and views. The principle of maximal heterogeneity
regarding age, education level and gender will guide the composition of the
focus groups.
The focus groups will be held in the mother tongue of the participants. The
topics will address the same issues as addressed in the survey. It is expected
that 4 to 5 focus groups with host representatives and with refugee
representatives with 5 to 8 members in each group should be sufficient to
achieve the theoretical saturation of data at each study site. Should this
number prove not to be enough, further data collection will be done until such
criterion is achieved. Findings based on this data are presented in
deliverable D4.2 ( _Qualitative studies in host-communities_ ).
</th>
<th>
Raw data (transcripts of focus groups) will not be retained, as it is only of
use within the project context.
The analysis and output based on it will be shared as it may be useful to
researchers working in relevant areas, organisations (NGOs, civil society
organisations, etc.) active in the field, and policy-makers.
</th> </tr>
<tr>
<td>
12
</td>
<td>
**Cross-site analysis data**
_Analysis of datasets 9 & 10_
</td>
<td>
This data is collected in order to study the socio-psychological dimensions of
the host community and refugee relations and to analyse the socio-economic
integration of refugees and the consequences of this in the host societies.
</td>
<td>
This data will be useful to researchers working in relevant areas.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Country-level and cross-site analyses are valuable due to the variety of
cultural and socioeconomic contexts in which the interaction of refugees and
hosting communities occurs. Analysis of datasets 9 & 10 in the light of
country-related issues will enable the consortium to discern common and
divergent findings, including their critical interpretation. This analysis
will take into account the different types of policies that each country
implemented.
Findings based on this data are presented in deliverable D4.3 ( _Cross-site
analysis_ ).
</th>
<th>
This data will be useful to organisations (NGOs, civil society organisations,
etc.) active in the field.
This data may be useful to policymakers.
</th> </tr>
<tr>
<td>
</td>
<td>
**Secondary micro and aggregate data**
_Complementary to the survey data_
</td>
<td>
This data will be sourced from Swedish administrative data and German SOEP
data. The analysis of secondary data will be used to validate the survey data.
</td>
<td>
This data will be of use only in the context of the project.
</td> </tr>
<tr>
<td>
13
</td>
<td>
**Refugee and Host Community Toolbox, version 1**
_Selection of tools identified in WP2_
</td>
<td>
This data (a selection of tools, solutions, methods, approaches, etc. for
encouraging/enhancing trust and integration between host communities and
refugees) is collected in order to begin the development of the first version
of one of the major project outcomes: the Refugee and Host Community Toolbox
(version 1).
The Refugee and Host Community Toolbox will enable municipal actors, civil
society organisations and other stakeholders to foster dialogue and build
trust and resilience among refugees and host communities. The solutions are
based on group and individual models that integrate labour market approaches
to integration with social and psychosocial aspects of integration. The
toolbox will include a focus on local helpers providing practical guidance,
opening doors to local networking, and providing cultural and linguistic
interpretation, easing the way into society and the community.
The solutions are identified in the mapping of existing literature, policies
and solutions (WP2), the multi-site research exploring the socio-psychological
dimensions of refugee- and host-community relations and the socio-economic
integration of refugees (WP4), and,
</td>
<td>
The first version of the Refugee and Host Community Toolbox will be of
interest mainly within the consortium.
Later versions of the Toolbox will be of interest to researchers working in
relevant areas.
Later versions of the Toolbox will be useful to organisations (NGOs, civil
society organisations, etc.) active in the field.
Later versions of the Toolbox may be useful to policy-makers.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
in order to remain current, as an integral part of the development and
adaptation work of the toolbox (WP5).
</th>
<th>
</th> </tr>
<tr>
<td>
14
</td>
<td>
**Toolbox training seminar data**
_3-day training seminar with pilot-testing participants_
</td>
<td>
Data will be collected from a three-day training seminar that will be
organised for level 2 pilot organisations to be trained in the Refugee and
Host Community Toolbox before they conduct the pilot tests in their respective
countries.
Data from the training seminar is necessary both to support improvement of the
Toolbox, and to improve training methods.
</td>
<td>
This data will be of use only in the context of the project.
</td> </tr>
<tr>
<td>
15
</td>
<td>
**Toolbox pilot test data**
_Primary data collection via pilot testing in different countries_
</td>
<td>
Data will be collected from pilot-testing of the Refugee and Host Community
Toolbox with end-users in different countries. The data will be feedback on
the use of the Toolbox.
Analysis of this data (the results of the pilot tests) will result in
essential recommendations for improvements and updates of the operational
level solutions (i.e. the contents of the Toolbox).
The Refugee and Host Community Toolbox will be finalised and presented in
deliverable D6.2 ( _Refugee & Host Community Toolbox _ ).
</td>
<td>
The pilot-test data will be of use only in the context of the project.
The Refugee and Host Community Toolbox, based, _inter alia_ , on the pilot-
test data, will be of interest to researchers working in relevant areas.
The Refugee and Host Community Toolbox, based, _inter alia_ , on the pilot-
test data, will be of great interest to organisations (NGOs, civil society
organisations, etc.) active in the field.
The Refugee and Host Community Toolbox, based, _inter alia_ , on the pilot-
test data, will be of interest to policy-makers.
</td> </tr>
<tr>
<td>
16
</td>
<td>
**Refugee and Host Community Toolbox, version 2**
_Refined set of tools identified in WP2,_
_honed in WP5 & WP6 _
</td>
<td>
The Refugee and Host Community Toolbox will be finalised and presented in
deliverable D6.2 ( _Refugee & Host Community Toolbox _ ).
</td>
<td>
The Refugee and Host Community
Toolbox will be of interest to researchers working in relevant areas.
The Refugee and Host Community
Toolbox will be of great interest to organisations (NGOs, civil society
organisations, etc.) active in the field.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
The Refugee and Host Community Toolbox will be of interest to policymakers.
</td> </tr>
<tr>
<td>
17
</td>
<td>
**Project videos**
</td>
<td>
Videos taken to promote the project, which may include members of the
consortium speaking about the project, its goals and progress.
</td>
<td>
This data is of interest to external parties and is available on the project website.
</td> </tr>
<tr>
<td>
18
</td>
<td>
**CMT (consortium internal) data**
_Data (content and metadata) generated by the consortium’s use of the
Community Management Tool (CMT) for purposes of project management and
interaction_
</td>
<td>
This data includes all communications, records of meetings, planning
documents, etc. generated in the project. Such data is essential to
organising, managing, and carrying out the project.
</td>
<td>
This data is of no use outside the consortium.
</td> </tr>
<tr>
<td>
19
</td>
<td>
**NHC data**
_Data (content and metadata) generated by the use of the CMT to facilitate the
Network of Host Communities (NHC); data generated from external cooperation
activities (stakeholder workshops, final conference, etc.)_
</td>
<td>
The project is ambitious in terms of stakeholder engagement: it aims to
establish an active Network of Host Communities in the field of migration and
forced displacement that will be sustainable in the future. This network shall
connect stakeholders (organisations and individuals) dealing with forced
displacement and facilitate the implementation of policies and the uptake of
research and innovation by end-users. Contact details used by individuals and
organisations to sign up to the NHC, the content (and its associated metadata)
that they upload to the network (using the CMT), are required to ensure that
the NHC is active and engaged. Contact details, as well as content developed
for and from network events (stakeholder workshops, final project conference,
etc.) are similarly required to ensure that members of the NHC are active and
engaged. Encouraging this level of engagement is necessary in order to promote
the long-term sustainability of the NHC.
</td>
<td>
Data from the NHC will be of use to the NHC members (i.e. various stakeholders
in the field of migration and asylum).
</td> </tr>
<tr>
<td>
20
</td>
<td>
**Stakeholder, end-user, Advisory Board, Ethics Advisory Board member contact
details**
_Publicly available sources_
</td>
<td>
This data is necessary in order to run the project advisory boards and to
support stakeholder engagement with the project.
</td>
<td>
This data is of use only in the context of the project.
</td> </tr> </table>
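As an indicative check of the sample-size figures quoted for dataset 10 (this sketch is ours and purely illustrative, not part of the research design), the standard margin-of-error formula for a proportion can be evaluated as follows. The finite population correction, and the example population of N=400, are assumptions introduced here to show one way a sample of 200 can approach a +/- 5% interval.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """Margin of error for a proportion p estimated from a simple random
    sample of size n at the 95% confidence level (z = 1.96). An optional
    finite population correction is applied for small target populations."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

print(f"n=600: +/- {margin_of_error(600):.1%}")   # ~ +/- 4.0%
print(f"n=200: +/- {margin_of_error(200):.1%}")   # ~ +/- 6.9% (uncorrected)
# Illustrative only: with a hypothetical target population of N=400,
# the corrected interval for n=200 narrows to roughly +/- 4.9%.
print(f"n=200, N=400: +/- {margin_of_error(200, population=400):.1%}")
```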
## 3. FAIR data
FOCUS complies with the principles of FAIR data management, i.e. that as much
as possible of our research data is **findable, accessible, interoperable, and
reusable** . This section sets out how we intend to ensure this compliance.
### 3.1 Findability: making data findable, including provisions for metadata
**Table 3:** Findability of datasets in FOCUS
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset / Type**
</th>
<th>
**Data available? (y/n)**
</th>
<th>
**Format in which data is openly available**
</th>
<th>
**Where is the data available?**
</th>
<th>
**Metadata / identifiers**
</th>
<th>
**Keywords**
</th>
<th>
**Naming convention (includes versioning)**
</th> </tr>
<tr>
<td>
1
</td>
<td>
**Host-community/refugee relations desk research**
</td>
<td>
Yes
</td>
<td>
**Report** : D2.1
</td>
<td>
Zenodo 7 , FOCUS community space.
</td>
<td>
Title, author(s), publication date, DOI, keyword(s), funding source and GA
no., title and acronym of action.
</td>
<td>
TBC
</td>
<td>
lead-author-etal_yyyy_shortened-title_v1.0
</td> </tr>
<tr>
<td>
2
</td>
<td>
**International (UNHCR, IOM), regional (EU),**
</td>
<td>
Yes
</td>
<td>
Publicly available datasets
</td>
<td>
na
</td>
<td>
na
</td>
<td>
na
</td>
<td>
na
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**national**
**asylum/migration flow data**
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
3
</td>
<td>
**Policy-maker structured interviews data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Legal/contractual** _ : This includes personal data and so is not shared in raw form for reasons of privacy and data protection.
* _**Mitigation** _ : Note that analyses based on this data are reported in D2.1 (see line 19 below).
</td> </tr>
<tr>
<td>
4
</td>
<td>
**End-user semi-structured interviews data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Legal/contractual** _ : This includes personal data and so is not shared in raw form for reasons of privacy and data protection.
* _**Voluntary** _ : It is also of limited use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D2.1 (see line 19 below).
</td> </tr>
<tr>
<td>
5
</td>
<td>
**End user workshop data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Legal/contractual** _ : This includes personal data and so is not shared in raw form for reasons of privacy and data protection.
* _**Voluntary** _ : It is also of limited use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D2.1 (see line 19 below).
</td> </tr>
<tr>
<td>
6
</td>
<td>
**Indicators of socio-psychological and socio-economic integration**
</td>
<td>
Yes
</td>
<td>
**Report** : D2.1
</td>
<td>
Zenodo, FOCUS community space.
</td>
<td>
Title, author(s), publication date, DOI, keyword(s), funding source and GA
no., title and acronym of action.
</td>
<td>
TBC
</td>
<td>
lead-author-etal_yyyy_shortened-title_v1.0
</td> </tr>
<tr>
<td>
7
</td>
<td>
**National integration-relevant data**
</td>
<td>
Yes
</td>
<td>
Publicly available datasets
</td>
<td>
na
</td>
<td>
na
</td>
<td>
na
</td>
<td>
na
</td> </tr>
<tr>
<td>
8
</td>
<td>
**Methodology workshop data**
</td>
<td>
No
</td>
<td>
**Justification** :
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
* _**Legal/contractual** _ : This includes personal data and so is not shared in raw form for reasons of privacy and data protection.
* _**Voluntary** _ : It is also of limited use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D3.1 (see line 19 below).
</th> </tr>
<tr>
<td>
9
</td>
<td>
**Fieldwork pilot testing data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Voluntary** _ : This is of limited or no use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D3.1 (see line 19 below).
</td> </tr>
<tr>
<td>
10
</td>
<td>
**Fieldwork survey data**
</td>
<td>
Yes (personal data removed)
</td>
<td>
**Report** : D4.1 + anonymised dataset
</td>
<td>
Zenodo, FOCUS community space.
</td>
<td>
Title, author(s), publication date, DOI, keyword(s), funding source and GA
no., title and acronym of action.
</td>
<td>
TBC
</td>
<td>
lead-author-etal_yyyy_shortened-title_v1.0
</td> </tr>
<tr>
<td>
11
</td>
<td>
**Fieldwork focus group data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Voluntary** _ : This is of limited or no use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D3.1 (see line 19 below).
</td> </tr>
<tr>
<td>
12
</td>
<td>
**Cross-site analysis data**
</td>
<td>
Yes (personal data removed)
</td>
<td>
**Report** : D4.3
</td>
<td>
Zenodo, FOCUS community space.
</td>
<td>
Title, author(s), publication date, DOI, keyword(s), funding source and GA
no., title and acronym of action.
</td>
<td>
TBC
</td>
<td>
lead-author-etal_yyyy_shortened-title_v1.0
</td> </tr>
<tr>
<td>
13
</td>
<td>
**Refugee and Host Community Toolbox,**
**version 1**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Voluntary** _ : This is a preliminary version of the Toolbox.
* _**Mitigation** _ : The final version will be openly available and reported in D6.2 (see lines 15 & 19 below).
</td> </tr> </table>
<table>
<tr>
<th>
14
</th>
<th>
**Toolbox training seminar data**
</th>
<th>
No
</th>
<th>
**Justification** :
* _**Voluntary** _ : This is of limited or no use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D6.2 (see lines 15 & 19 below).
</th> </tr>
<tr>
<td>
15
</td>
<td>
**Toolbox pilot test data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Voluntary** _ : This is of limited or no use outside the project context.
* _**Mitigation** _ : Note that analyses based on this data (wholly anonymised) are reported in D6.2 (see lines 15 & 19 below).
</td> </tr>
<tr>
<td>
16
</td>
<td>
**Refugee and Host Community Toolbox, version 2**
</td>
<td>
Yes
</td>
<td>
**Report** : D6.2
</td>
<td>
Zenodo, FOCUS community space.
</td>
<td>
Title, author(s), publication date, DOI, keyword(s), funding source and GA
no., title and acronym of action.
</td>
<td>
TBC
</td>
<td>
lead-author-etal_yyyy_shortened-title_v1.0
</td> </tr>
<tr>
<td>
17
</td>
<td>
**Project videos**
</td>
<td>
Yes
</td>
<td>
**Online video**
</td>
<td>
Project website
</td>
<td>
na
</td>
<td>
na
</td>
<td>
_na_
</td> </tr>
<tr>
<td>
18
</td>
<td>
**CMT (consortium internal) data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Voluntary** _ : This is of limited or no use outside the project context.
* _**Mitigation** _ : Insofar as this data relates to project outcomes, it is covered by line 19 below.
</td> </tr>
<tr>
<td>
19
</td>
<td>
**NHC data**
</td>
<td>
No
</td>
<td>
**Justification** :
* _**Legal/contractual** _ : This includes personal data and so is not shared in raw form for reasons of privacy and data protection and compliance with the CMT terms of use.
* _**Voluntary** _ : It is also of very limited interest to people who are not already members, or interested in becoming a member, of the NHC itself.
* _**Mitigation** _ : Data with value to the community is largely available to NHC members through the platform.
</td> </tr>
<tr>
<td>
20
</td>
<td>
**Stakeholder, end-user,**
**Advisory Board, Ethics Advisory Board member contact details**
</td>
<td>
No
</td>
<td>
**Justification** :
\- _**Legal/contractual** _ : This includes personal data and so is not shared
in raw form for reasons of privacy and data protection.
</td> </tr> </table>
<table>
<tr>
<th>
21
</th>
<th>
**All project deliverables and related publications**
</th>
<th>
Yes
</th>
<th>
**Project deliverables**
**Peer-reviewed publications**
</th>
<th>
Zenodo, FOCUS community space.
</th>
<th>
Title, author(s), publication date, DOI, keyword(s), funding source and GA
no., title and acronym of action.
</th>
<th>
TBC
</th>
<th>
lead-author-etal_yyyy_shortened-title_v1.0
</th> </tr> </table>
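To make the naming convention in Table 3 concrete (lead-author-etal_yyyy_shortened-title_v1.0), the following minimal sketch builds a compliant file name. The helper and its five-word truncation rule are illustrative assumptions, not project tooling.

```python
def dataset_filename(lead_author: str, year: int, title: str,
                     version: str = "1.0", multiple_authors: bool = True) -> str:
    """Illustrative helper following the Table 3 convention:
    lead-author-etal_yyyy_shortened-title_v1.0 (truncation rule assumed)."""
    author = lead_author.lower().replace(" ", "-")
    if multiple_authors:
        author += "-etal"
    shortened_title = "-".join(title.lower().split()[:5])
    return f"{author}_{year}_{shortened_title}_v{version}"

# dataset_filename("Smith", 2020, "Mapping of host community refugee relations")
# -> 'smith-etal_2020_mapping-of-host-community-refugee_v1.0'
```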
### 3.2 Accessibility: making data openly accessible
Data that is not to be made openly accessible is listed above (see Table 3).
Open access to data will be provided via the Zenodo repository, using a
designated FOCUS community space (no special software or other tools are
necessary beyond standard browser and office programmes). Full instructions on
how to use the repository are available on the site. All peer-reviewed journal
articles that are published will either be directly available via open
access on the journal website, or else authors’ versions will be made available
in Zenodo within 12 months 8 of their publication.
Where access to data is restricted, this is either (i) because it is of limited
utility outside the context of the FOCUS project or (ii) because personal data
is involved and, as a matter of best practice and compliance in data
protection, we cannot share data subjects’ personal data. In case (i) there is
simply no value to the community in making the data accessible. In case (ii),
we have either made the datasets available with only anonymised data or have
ensured that the outcomes/analyses of the data are accessible in such a way as
to reveal no personal data. 9
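By way of illustration only, a minimal sketch of the kind of anonymisation step described above is given below. The file and column names are hypothetical, and a production pipeline would need to salt the hash (or drop the codes entirely), since short codes are vulnerable to brute-force reversal.

```python
import hashlib

import pandas as pd

# Hypothetical raw export; the column names are illustrative only.
raw = pd.read_csv("fieldwork_survey_raw.csv")

# Drop direct identifiers before any sharing.
anonymised = raw.drop(columns=["name", "address", "phone_number"])

# Replace the interviewer-assigned personal code with a truncated one-way
# hash, keeping records internally linkable without exposing the code.
anonymised["participant_code"] = anonymised["participant_code"].map(
    lambda code: hashlib.sha256(str(code).encode()).hexdigest()[:12]
)

anonymised.to_csv("fieldwork_survey_anonymised.csv", index=False)
```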
We have not provided instructions on how to gain access to restricted data as
there is no reason anybody should need access. Similarly, there is no
requirement for a data access committee, or for measures to record access
requests. In the unlikely event that an equivalent to a data access committee
is required, the project Steering Committee will take that role, with advice
also sought from the Ethics Advisory Board.
### 3.3 Interoperability: making data interoperable
The data generated in the project falls, broadly, into two categories: (i)
qualitative or report-based data; and (ii) quantitative or database-based data.
The former includes all project deliverables; the latter includes the raw data
(anonymised) from fieldwork conducted in WP4.
All qualitative or report-based data is stored in standard .docx and .pdf
formats. These are readable by all modern computers with standard software
installed (including freely available software). All quantitative or database-
based data is stored in .xlsx or .csv formats. 10 Again, these are readable
by all modern computers with standard software, and are amenable to simple
exchange and re-combination with different datasets by other researchers.
Standard data and metadata vocabularies will be used in order to allow for
inter-disciplinary interoperability. 11
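As a brief illustration of this interoperability (the file and column names below are hypothetical), open tabular formats such as .csv can be read, re-combined with other datasets, and re-exported using freely available tools:

```python
import pandas as pd

# Hypothetical anonymised WP4 survey export, stored as plain .csv.
survey = pd.read_csv("focus_wp4_survey_anonymised.csv")

# Publicly available migration-flow data (cf. dataset 2), also tabular.
flows = pd.read_csv("asylum_migration_flows.csv")

# Open formats make recombination with other datasets straightforward.
merged = survey.merge(flows, on="country", how="left")
merged.to_csv("focus_survey_with_flows.csv", index=False)
```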
### 3.4 Reusability: increasing data re-use (through clarifying licences)
The datasets and deliverables generated in the project will be shared under a
Creative Commons Attribution 4.0 International licence. 12 This allows users
to _**share** _ (copy and redistribute the material in any medium or format)
and _**adapt** _ (remix, transform, and build upon the material for any
purpose, even commercially) under conditions of _**attribution** _ (they must
give appropriate credit, provide a link to the license, and indicate if
changes were made; they may do so in any reasonable manner, but not in any way
that suggests the licensor endorses them or their use) and _**no additional
restrictions** _ (they may not apply legal terms or technological measures
that legally restrict others from doing anything the license permits). This
broad licensing is intended to allow the maximum value to be gained from FOCUS
data by the research community.
Publications produced in the course of the project will be published under
open access terms and will be uploaded to Zenodo.
All datasets and deliverables will be uploaded to Zenodo within 1 year of
their production. The date of production will be signalled either by the date
of submission to the European Commission (for deliverables), or the closure
date of the associated WP (for datasets). This 1 year period is a maximum (we
will strive to make data accessible considerably sooner if possible) and is
designed to allow the consortium researchers adequate time to publish their
findings.
Prior to deliverables or datasets being uploaded to Zenodo, the relevant WP
leader will ensure that quality assurance procedures have been respected. The
whole consortium will be informed in advance of uploading and will have time
to object (on some reasonable ground,
e.g. to allow time to publish); in case of disputes, the project Steering
Committee will decide in accordance with its standard procedures.
Deliverables or datasets uploaded to Zenodo will be available for re-use
indefinitely. Research artefacts uploaded to Zenodo are not editable. However,
as a means of quality assurance, if any project deliverable or dataset is
updated after having been uploaded, the latest version will be uploaded too
(with its own DOI and other metadata, and a clear version number).
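For illustration, the following sketch shows how a versioned deposit could be made through the Zenodo REST API (documented at developers.zenodo.org). The access token, file name, and metadata values are placeholders, not actual project records; the requests package is required.

```python
# Sketch of depositing a dataset on Zenodo; each published record receives
# its own DOI, as noted above. All values below are placeholders.
import requests

TOKEN = "REPLACE-WITH-ZENODO-TOKEN"  # personal access token (placeholder)
BASE = "https://zenodo.org/api"

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions",
                    params={"access_token": TOKEN}, json={}).json()

# 2. Upload the data file to the deposition's file bucket.
with open("focus_dataset_v1.0.csv", "rb") as fh:  # hypothetical file name
    requests.put(f"{dep['links']['bucket']}/focus_dataset_v1.0.csv",
                 data=fh, params={"access_token": TOKEN})

# 3. Attach metadata of the kind listed in the table above.
metadata = {"metadata": {
    "title": "FOCUS dataset (anonymised)",  # placeholder values throughout
    "upload_type": "dataset",
    "description": "Anonymised FOCUS project dataset.",
    "creators": [{"name": "Surname, Name", "affiliation": "Partner"}],
    "keywords": ["FOCUS", "integration"],
    "version": "1.0",
}}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata)

# 4. Publishing mints the DOI and makes the record publicly available.
requests.post(f"{BASE}/deposit/depositions/{dep['id']}/actions/publish",
              params={"access_token": TOKEN})
```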
4\. Allocation of resources
The consortium has allocated resources to the implementation of this data
management plan as follows.
### 4.1 Maintenance of the DMP and responsibility for oversight
Responsibility for drafting and updating the DMP lies with AND, who have been
allocated PMs for this task in WP1. All other partners have sufficient
resources to dedicate time to reviewing and contributing to the DMP. (In
general, each beneficiary has a sufficient allocation of person months (PMs)
to cover the effort required to ensure that the DMP is properly implemented.)
Ultimate responsibility for ensuring that the DMP is respected lies with DRC,
as project coordinator, and, more generally, with the project Steering
Committee.
The main contact points for any questions concerning the DMP are:
* Andrew Rebera (AND), FOCUS ethics manager
* Martha Bird (DRC), FOCUS project coordinator
### 4.2 Costs of ensuring FAIR data management
The cost of publishing open access in a journal which is not open access by
default is typically between 1,000 and 2,000 EUR. Only one FOCUS beneficiary
(DRC) has budgetary resources (2,000 EUR) explicitly dedicated to the costs
associated with open access publishing.
Given that by no means all the highest-impact-factor journals are open access,
partners wishing to publish will have to either:
* find suitable open access journals;
* meet the costs from their own institutions’ resources;
* transfer budget from elsewhere in the consortium’s overall resources.
We recognise that this is not ideal. It has been identified as an issue to be
addressed at the next project Steering Board meeting, and this section will be
updated in the next version of the DMP.
Note that there are no ongoing costs associated with the use of the Zenodo
repository.
5\. Data security
In this section we discuss data security in the sense of _securing the project
research data against loss or corruption_ . We will not here discuss data
security in the context of data protection (i.e. protection of _personal data_
). The latter is discussed in Section 7 below.
### 5.1 Measures to ensure data security
Each partner is responsible for ensuring the security of the data they collect
or generate in the course of the research (See Table 4 for a list of
identified issues and recommended measures to ensure security of data
collected or generated in the course of the research). Each partner will
follow the data security policies prescribed by their own organisation, with
the provision that the following minimum standards will be respected.
**Table 4:** Recommended measures to ensure the security of the data.
<table>
<tr>
<th>
**Issue**
</th>
<th>
**Measures**
</th> </tr>
<tr>
<td>
Digital data backups
</td>
<td>
All electronic data will be backed up on at least one physically distinct
medium (e.g. on a separate server, external hard drive, etc.)
</td> </tr>
<tr>
<td>
Physical data backups
</td>
<td>
All physical data (e.g. papers, informed consent forms, etc.) will be stored
in a secure environment.
</td> </tr>
<tr>
<td>
Recovery procedures (non-personal data)
</td>
<td>
In case data is lost, the partner will submit a description of the event to
the coordinator and the ethics manager (who are responsible for overseeing
implementation of the DMP). The data will be recovered from the available
backups, and new backups created.
</td> </tr>
<tr>
<td>
Recovery procedures (personal data)
</td>
<td>
In case personal data is lost, the partner will submit a description of the
event to the coordinator and the ethics manager (who are responsible for
overseeing implementation of the DMP). The data will be recovered from the
available backups, and new backups created. The ethics manager, with the
support of the Ethics Advisory Board and Steering Committee, will make a
recommendation on whether further steps (e.g. notification of the data
subjects or data protection authorities) are required.
</td> </tr>
<tr>
<td>
Protection of sensitive materials
</td>
<td>
Any personal, confidential, or otherwise sensitive electronic data will be
stored in an encrypted format; any personal, confidential, or otherwise
sensitive physical data will be stored in a locked drawer or filing cabinet
(etc.).
</td> </tr>
<tr>
<td>
Access control
</td>
<td>
All project data, whether personal or not, sensitive or not, will, insofar as
it is stored by partners, be subject to strict access control. Only persons
engaged in the project by the partner will be given access to project data,
and then only with the permission of an authorised team member of the relevant
partner. Once data is deposited in the Zenodo repository, it will be publicly
available. This data is not subject to access control (but note that this data
will not include any personal data).
</td> </tr>
<tr>
<td>
Data transfers (general)
</td>
<td>
There will be no sharing of personal data from fieldwork between partners
within EU Member States. Any such data will be anonymised prior to transfer. The
partners will, in general, keep non-personal data sharing to a minimum. For
additional security, only encrypted data will be transferred.
</td> </tr>
<tr>
<td>
Data transfers (outside EU)
</td>
<td>
Fieldwork is planned in Jordan, and an important project partner (CSS) is
based there. However, as already stated, no personal data will be shared
between partners. In the WP4 data analysis, partners will transfer databases
containing numerical data, but this includes no data that could be linked to
any individual. We will also be transferring anonymised and translated
transcripts of the qualitative data from the focus groups to CSS in order to
be able to develop the coding and perform the cross-site analysis of this
data. Personal data will be retained by the research partner that collects it and
not shared with any other partner. There will be no transfers of personal data
from fieldwork between the EU and non-EU countries.
</td> </tr>
<tr>
<td>
Long term preservation
</td>
<td>
Partners will keep all project data for a period of 5 years after the end of
the project in order to comply with possible reviews or audits. After this
point – unless an internal policy or local institutional or regional/national
law or best-practice recommends or requires otherwise – all personal data
gathered during the project will be destroyed. Non-personal data may be
retained by the partners (but the main project datasets will be available via
the Zenodo repository, in accordance with the positions set out above
(sections 3 & 4)).
</td> </tr> </table>
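As an illustration of the 'encrypted format' requirement in Table 4, the sketch below uses the symmetric Fernet scheme from the Python cryptography package. The file name is hypothetical, and the key handling is deliberately simplified: in practice the key must be stored under access control, separately from the data, following each partner's own security policy.

```python
# Minimal sketch: encrypt a sensitive file at rest with Fernet
# (from the 'cryptography' package). The file name is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store securely, never alongside the data
cipher = Fernet(key)

with open("interview_transcript.docx", "rb") as fh:
    ciphertext = cipher.encrypt(fh.read())

with open("interview_transcript.docx.enc", "wb") as fh:
    fh.write(ciphertext)

# Decryption, by an authorised team member holding the key:
plaintext = cipher.decrypt(ciphertext)
```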
6\. Ethical aspects
This section deals with ethical aspects of the data management practices
discussed above. A full discussion of ethics management more generally – i.e.
covering all aspects of research ethics in the project, not only those
concerning data management – is included in section 8, which presents the
FOCUS project _Ethics Management Plan_ (EMP).
### 6.1 Personal data processing: risk assessment
Processing of personal data will take place in the project as described in
Table 5, below. Please note that a Data Protection Impact Assessment specific
to the planned fieldwork has been conducted. This is reported in _Section 7.1_
.
**Table 5:** Risk assessment for personal data processed in the scope of
FOCUS.
<table>
<tr>
<th>
**WP**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
1
</td>
<td>
**Data subject:** Members of the EAB and Advisory Board.
**Data type:** Name, contact details, and some biographical information (about
career history).
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To manage the boards and arrange travel for attendance at
meetings.
**Ethics manager comments:** Small amount of non-sensitive data, processed
with the data subjects’ explicit consent.
**Risk assessment:** **Low risk.**
</td> </tr>
<tr>
<td>
2
</td>
<td>
**Data subject:** Consortium members.
**Data type:** Name, contact details, professional opinions relative to the
project.
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To carry out the project effectively.
**Ethics manager comments:** Relatively small amount of non-sensitive data,
processed in the course of their contractual employment.
**Risk assessment:** **Low risk.**
</td> </tr>
<tr>
<td>
3
</td>
<td>
**Data subject:** Stakeholders being interviewed or attending workshops.
**Data type:** Name, contact details, and some biographical information (about
career history).
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To get valuable advice, feedback, and perspectives on issues
pertinent to the development of the Toolbox.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Ethics manager comments:** Small amount of non-sensitive data, processed
with the data subjects’ explicit consent.
**Risk assessment:** **Low risk.**
</th> </tr>
<tr>
<td>
4
</td>
<td>
**Data subject:** Members of the EAB and Advisory Board.
**Data type:** Professional opinions as expressed in a workshop.
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To get valuable advice and feedback on the development of the
fieldwork methodology.
**Ethics manager comments:** Small amount of non-sensitive data, processed
with the data subjects’ explicit consent.
**Risk assessment:** **Low risk.**
</td> </tr>
<tr>
<td>
5
</td>
<td>
**Data subject:** Fieldwork participants (refugees and host communities).
**Data type:** Name, contact details, feedback on the proposed fieldwork
survey procedure.
**Volume of data:** Low (n=30 per site; total n=120)
**Sensitivity of data:** Low
**Purpose:** To validate the fieldwork methodology.
**Ethics manager comments:** Small amount of data, processed with the data
subjects’ explicit consent. Note that survey responses are not collected.
**Risk assessment:** **Low risk.**
</td> </tr>
<tr>
<td>
6
</td>
<td>
**Data subject:** Fieldwork participants (survey) (refugees and host
communities).
**Data type:** Name, contact details, responses to the fieldwork survey
(includes small amount of sensitive data).
**Volume of data:** Medium (n=1,200 at each of 3 sites and n=800 at 1 site;
total n=4,400)
**Sensitivity of data:** Some survey questions could directly or indirectly
reveal special categories of data, including ethnic or racial origin,
religion, political opinions, health.
**Purpose:** Main fieldwork of project: to better understand refugee/host-
community integration.
**Ethics manager comments:** A medium amount of data, processed with the data
subjects’ explicit consent. Some sensitive data will be collected. A data
protection impact assessment is required (see _Section 7.1_ below). Ethics
approvals for the research are required.
**Risk assessment:** **Medium risk (pending ethics approvals from competent
research ethics committees).**
</td> </tr>
<tr>
<td>
7
</td>
<td>
**Data subject:** Fieldwork participants (focus groups) (refugees and host
communities).
**Data type:** Name, contact details, feedback from focus group discussions.
**Volume of data:** Medium (4-5 focus groups per site, 16-20 in total)
**Sensitivity of data:** Low.
**Purpose:** Main fieldwork of project: to better understand refugee/host-
community integration.
**Ethics manager comments:** A medium amount of data, processed with the data
subjects’ explicit consent. Sensitive data is not intended to be collected,
but it may be volunteered by participants. This activity is included in the
data protection impact assessment mentioned in line 6. Ethics approvals for
fieldwork (line 6) also cover this activity.
**Risk assessment:** **Low risk (pending ethics approvals from competent
research ethics committees).**
</td> </tr>
<tr>
<td>
8
</td>
<td>
**Data subject:** Stakeholders attending a Toolbox training seminar.
**Data type:** Name, contact details, and some biographical information (about
career history), opinions on the Toolbox.
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To get valuable advice, feedback, and perspectives on issues
pertinent to the development of the Toolbox.
**Ethics manager comments:** Small amount of non-sensitive data, processed
with the data subjects’ explicit consent.
**Risk assessment:** **Low risk.**
</td> </tr>
<tr>
<td>
9
</td>
<td>
**Data subject:** Pilot-testing participants (end-users & stakeholders).
**Data type:** Name, contact details, some background biographical
information, responses to the pilot-test survey questions (details to be
confirmed).
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To get valuable advice, feedback, and perspectives on issues
pertinent to the development of the Toolbox.
**Ethics manager comments:** A low volume of non-sensitive data. Data will be
processed with the data subjects’ explicit consent. Note that procedures for
gathering consent are yet to be confirmed. Data to be collected is not yet
confirmed.
**Risk assessment:** **Low risk (but to be confirmed).**
</td> </tr>
<tr>
<td>
10
</td>
<td>
**Data subject:** Network of Host Communities members.
**Data type:** Name, contact details, contributions to the online network,
associated metadata generated by their online activity.
**Volume of data:** Low.
**Sensitivity of data:** Low.
**Purpose:** To develop a strong, active, sustainable network of stakeholders.
**Ethics manager comments:** Low volume of non-sensitive data. Data will be
processed with the data subjects’ explicit consent.
**Risk assessment:** **Low risk.**
</td> </tr> </table>
Note that where personal data is processed on the basis of data subject
consent, that consent will be clearly documented, in accordance with the
standards demanded by Article 7 of the GDPR (“Conditions for consent”). Where
consent for personal data processing is collected along with consent to
participation in a research activity (e.g. fieldwork), the two consents will
be collected and recorded separately, in accordance with paragraph 2 of
Article 7 of the GDPR. 13
A data protection impact assessment for the fieldwork data collection is
presented in _Section_
_7.1_ below.
Measures to ensure that personal data is processed in compliance with the GDPR
are discussed in _section 8.3_ of the Ethics Management Plan.
No personal data will be included in any dataset or deliverable uploaded to
Zenodo, nor in any article published in the course of the project.
(Again, please note that a full discussion of all ethics issues in the project
is presented in _section 8_ below, Ethics Management Plan.)
13 “If the data subject’s consent is given in the context of a written
declaration which also concerns other matters, the request for consent shall
be presented in a manner which is clearly distinguishable from the other
matters, in an intelligible and easily accessible form, using clear and plain
language” (GDPR, Art.
7.2).
7\. Other Issues
This section contains two subsections. The first presents a **data protection
impact assessment (DPIA)** for the fieldwork data collection. The second
presents the designated contact points in respect of ethics and data
management for each consortium partner.
The consortium is not aware of any other issues that are not addressed either
above in the DMP, here in the DPIA, or below in the _EMP_ .
### 7.1 Fieldwork data protection impact assessment (DPIA)
**DATA PROTECTION IMPACT ASSESSMENT**
**July 2019**
### Introduction: why conduct a DPIA in FOCUS
Article 35 of GDPR establishes the obligation of data controllers to conduct a
Data Protection Impact Assessment (DPIA) when a proposal for processing of
personal data is likely to result in a high risk to the fundamental rights and
freedoms of individuals. The DPIA is a procedure whereby controllers identify
the data protection risks that arise when developing new products and services
or when undertaking any new activities in the course of a project that involve
the processing of personal data. The early identification of the risks
subsequently allows data controllers to take appropriate measures to prevent
or minimise the impact of any identified risks.
In order to determine the level of risk that a particular project carries,
controllers need to conduct a threshold assessment – that is, a preliminary
screening for factors signalling any potential for a widespread or serious
impact on individuals. As described by the UK’s Information Commissioner’s
Office (ICO):
_the important point here is not whether the processing is actually high risk
or likely to result in harm – that is the job of the DPIA itself to assess in
detail. Instead, the question is a more high-level screening test: are there
features which point to the potential for high risk? You are screening for any
red flags which indicate that you need to do a DPIA to look at the risk
(including the likelihood and severity of potential harm) in more detail._ 13
Article 35(3) GDPR requires data controllers to conduct a DPIA, irrespective
of the result of any threshold assessment, when:
1. a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;
2. processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10; or
3. a systematic monitoring of a publicly accessible area on a large scale.
These conditions are not present in the FOCUS fieldwork. 14 However, the
Article 29 Data Protection Working Party (WP29) has noted that the list
contained in Article 35(3) GDPR is not intended to be exhaustive and that
there may be other “high risk” processing operations that should therefore be
subjected to DPIAs. 15 In 2017, the WP29 issued guidelines on the correct
interpretation of Article 35 GDPR, which included a set of criteria to be used
in determining when it is likely that a processing operation would entail a high
risk to the fundamental rights of data subjects. These criteria have been
endorsed by the European Data Protection Board (EDPB) and the European Data
Protection Supervisor (EDPS) through the adoption of a Decision on 16 July
2019. 16 This Decision contains a template that data controllers must review
to determine if their operations merit a DPIA. Article 3 of the EDPS’ decision
states that “[w]hen assessing whether their planned processing operations
trigger the obligation to conduct a DPIA […] the controller shall use the
template in Annex 1 to this Decision to conduct a threshold assessment.”
The nine criteria which may act as indicators of likely high-risk processing
are the following:
1. Systematic and extensive evaluation of personal aspects or scoring, including profiling and predicting.
2. Automated decision making with legal or similar significant effect.
3. Systematic monitoring.
4. **Sensitive data or data of a highly personal nature.**
5. Data processed on a large scale, whether based on number of people concerned and/or amount of data processed about each of them and/or permanence and/or geographical coverage.
6. Datasets matched or combined from different data processing operations performed for different purposes and/or by different data controllers in a way that would exceed the reasonable expectations of the data subject.
7. **Data concerning vulnerable data subjects.**
8. Innovative use or applying technological or organisational solutions that can involve novel forms of data collection and usage.
9. Preventing data subjects from exercising a right or using a service or a contract.
The idea of the template is that if two or more of the criteria in the list
apply, the controller should carry out a DPIA. 17 If the controller
considers that, in the specific case at hand, the risks are nonetheless not
high even though more than one criterion in the template is applicable, they
may choose not to carry out a DPIA. In such a case, the controller shall clearly
document and justify that decision. 18 On the other hand, the ICO advises
that in case of any doubt, or if only one factor is present, a DPIA _should_
be conducted to ensure compliance and encourage best practice. 19
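The counting rule just described is simple enough to state as a short sketch (criteria abbreviated from the list above; purely illustrative, not a substitute for the EDPS template):

```python
# Threshold assessment: two or more applicable criteria indicate that a
# DPIA should be carried out (one, in case of doubt, per the ICO's advice).
CRITERIA = {
    "evaluation/scoring", "automated decisions", "systematic monitoring",
    "sensitive data", "large scale", "matched datasets",
    "vulnerable subjects", "innovative use", "blocking rights/services",
}

def dpia_required(applicable):
    """Return True when the counting rule indicates a DPIA is needed."""
    return len(set(applicable) & CRITERIA) >= 2

# FOCUS fieldwork: criteria 4 (sensitive data) and 7 (vulnerable subjects).
print(dpia_required({"sensitive data", "vulnerable subjects"}))  # True
```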
The FOCUS fieldwork includes collection of sensitive data (religious
background and opinions, some mental health information) from host communities
and refugees in order to analyse how different factors influence the level of
integration of refugees and the perception of and attitudes towards them in
host communities. Moreover, since refugees are considered a _vulnerable group_
under the GDPR, 20 _prima facie_ the project fulfils two of the criteria
listed by the EDPS’ threshold assessment, specifically numbers 4 and 7 (in
bold). Thus, although we do not feel the proposed data processing is high
risk, we nonetheless follow the advice of both the EDPS and ICO by conducting
a DPIA at this stage in order to identify potential risks to the fundamental
rights and freedoms of research participants in FOCUS.
The GDPR describes the minimum structure of a DPIA 21 :
_The assessment shall contain at least:_
1. _a systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller;_
2. _an assessment of the necessity and proportionality of the processing operations in relation to the purposes;_
3. _an assessment of the risks to the rights and freedoms of data subjects referred to in paragraph 1; and_
4. _the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned._ 22
To meet, and then go beyond, these minimum standards, we follow the WP29
approach to DPIA. On this approach, a DPIA is not a one-off event, but an
ongoing process.
**Figure:** The Data Protection Impact Assessment cycle: description of
processing → assessment of necessity and proportionality → measures already
envisaged → assessment of the risks to rights and freedoms → measures
envisaged to address the risks → documentation → monitoring.
The FOCUS DPIA is conducted by AND Consulting Group (AND), with input
collected from the other consortium partners. It relates specifically to Work
Packages 3 and 4 of the project, in which fieldwork (survey and focus groups)
will be conducted with refugees from Syria and with people from host
communities in Jordan, Croatia, Germany, and Sweden in which refugees from
Syria have settled.
### Step 1: Description of the processing
#### High-level description 23
The intention is to conduct academic fieldwork, within the scope of an H2020
project, to identify factors affecting the integration of refugees from Syria
and people from the host communities in which they now live. The fieldwork
involves two streams of data collection: a survey, and focus groups. These
will be conducted with research participants (i.e. refugees from Syria and
members of host communities) in four countries: Sweden, Germany, Croatia, and
Jordan. The data collected from each of these activities will be processed by
academic institutions in each country (which are FOCUS consortium partners):
* Sweden – MAU
* Germany – HU/Charite
* Croatia – FFZG
* Jordan – CSS
These partners are the data controllers, responsible for their own data
collection and processing in their respective countries. Each partner will
have, in effect, two classes of data: (1) responses to the survey or focus
group; (2) copies of informed consent forms. 24 Each partner will
pseudonymise the data of class 1, using standard methods (described further
below). Data of class 2 will be stored securely for a period of not more than
10 years. This period is based on institutional requirements. This data will
not be further processed or shared at all unless either: (i) the data subject
requests that their data be withdrawn from the study or otherwise exercises
their subject access rights (as per GDPR Chapter 3); or (ii) the institution
is subject to a legal requirement requiring processing of this data (e.g. some
sort of project review or audit).
The pseudonymised research data will be collated by CSS, who are also
responsible for the data analysis. The data that is sent to CSS is, from the
perspective of CSS, anonymous. CSS does not have the capacity to re-identify
any data subject from the datasets provided by MAU, HU/Charite, or FFZG. Thus,
while it must be noted that data collected by MAU, HU/Charite, and FFZG will
be transferred outside the EU/EEA, this data is, from the respective
perspectives of MAU, HU/Charite, and FFZG, _pseudonymous_ and, from the
perspective of CSS (and anyone else), _anonymous_.
All data analysis is conducted on the entire dataset. This means that
individual records will not be examined and that individual research
participants will not be identifiable from the combined dataset. The data will
be used to produce insights into factors that affect the integration of the
two target groups. Research outputs based on this data will contain no
personal data whatsoever. The dataset will be made public, but no personal
data will be made public (i.e. it will not be possible to identify any data
subject from the dataset).
The only way in which individual data subjects’ responses can be linked to
their identity is with the use of a unique code, known only to the data
subject and to the partner that collected their data. This is necessary in
order to allow data subjects to request that their data be removed from the
study. The data enabling the linking of a survey response with an identifiable
individual (i.e. the Class 1 data, as defined above) will be stored for no
longer than 10 years. At this point, the data will be irrevocably destroyed.
Publicly available datasets will be stored indefinitely (but, as mentioned,
they contain no personal data whatsoever).
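As a sketch of the unique-code linkage just described (all names and file names invented for illustration), each participant can be assigned a random, non-guessable code that appears on both the consent form and the survey response, while the linking table stays with the collecting partner only:

```python
# Sketch: generate participant codes and a linking table. The linking
# table is kept securely by the collecting partner and never transferred.
import csv
import secrets

def new_participant_code():
    """Random code carrying no information about the subject, e.g. '9F3A01BC'."""
    return secrets.token_hex(4).upper()

with open("linking_table.csv", "w", newline="") as fh:  # hypothetical file
    writer = csv.writer(fh)
    writer.writerow(["participant_name", "code"])
    writer.writerow(["John Smith", new_participant_code()])  # invented name

# The shared survey dataset contains the code only: pseudonymous to the
# collector, effectively anonymous to everyone else.
```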
#### Necessity of a DPIA
The rationale for conducting a DPIA in FOCUS concerning the fieldwork is fully
described in the introduction to this section.
A further note is necessary concerning the data collection and processing
taking place in Jordan, which is in neither the EU nor the EEA. The GDPR
applies “ _to the processing of personal data in the context of the activities
of an establishment of a controller or a processor in the Union, regardless of
whether the processing takes place in the Union or not_ ” 25 and “ _to the
processing of personal data of data subjects who are in the Union by a
controller or processor not established in the Union_ ” 26 . It is
questionable whether personal data processing by CSS in Jordan of Jordanian
residents is subject to the GDPR. Regardless, it was agreed in FOCUS, even at
the stage of writing the project proposal, that we would apply the standards
of the GDPR to all project activities, irrespective of where they take place
and who the data subjects are.
#### What data is collected and processed?
Three kinds of data are collected and processed:
1. Survey responses
2. Focus group contributions
3. Personal details for purposes of recruitment and informed consent
##### Survey responses
There are two surveys. One is addressed to refugees from Syria (henceforth
‘refugees’); one is addressed to members of communities hosting refugees in
Sweden, Germany, Croatia, and Jordan (henceforth ‘host community members’).
The two surveys differ somewhat on the specific questions asked, but the same
surveys are used in each country for refugees and host community members.
The procedure for conducting the surveys is such that, even though the survey
itself does not include the subject’s name or other directly identifying
information, the subject can be identified by drawing a connection between a
unique code on each individual survey response and the same code included on
the informed consent form that the participant completes. The possibility of
this connection is designed into the survey procedure in order to allow that
research subjects are able to withdraw their data from the study even after
they have completed the survey. From a purely data protection perspective, it
may have been desirable to avoid this possibility. However, from a research
ethics perspective, it is very desirable to allow research participants to be
able to withdraw their data. Having balanced the risk of reidentification –
which we consider to be of low likelihood – against the positive impact of
enabling withdrawal of data, we concluded that it was, overall, better for
data subjects that we make this possible.
##### Focus group contributions
There will be two forms of focus group: one with refugees, one with host
community members. The target topic is the overall integration of refugees
from Syria in the relevant country. The discussion will be directed so as to
elicit participants’ perceptions of the process of integration in their
country. As such, it will cover an array of issues ranging from labour market
integration to the extent and nature of interaction between host community
members and refugees.
As with the surveys, the focus group procedure is designed in such a way that
participants’ contributions are not wholly anonymous (though they are
pseudonymised) in order that the participants can withdraw their contributions
after the event, if they so wish.
##### Personal details for purposes of recruitment and informed consent
Recruitment for the surveys is by random walk technique or random sampling.
This does not entail additional processing of personal data. Recruitment for
focus groups is by snowballing methods. In order to facilitate participation,
some contact details (name, email, telephone number) are required.
There is a robust informed consent procedure for the fieldwork. Participants
must provide their name (and a signature, unless an oral consent procedure is
used). The participant’s name will be associated with a unique code number.
The code is included on both the informed consent form (thus, alongside the
participant’s name) and the participant’s survey response. The point of this
code is to provide a link between the participant’s identity and their survey
responses. In the absence of the code, there is nothing in the survey to link
a particular response with a particular person: with the code, a particular
survey can be linked to a particular person. Focus group participants will
have their contributions pseudonymised during the transcription process (e.g.
instead of “John Smith” use “Participant 1”). Researchers will have a record
of which participant is given which number. Again, the point of this is to
provide a link between the participant’s identity and their contributions.
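The transcription step can likewise be sketched (names invented for illustration): speakers are replaced by numbered labels, and the mapping is retained only by the researchers.

```python
# Sketch of transcript pseudonymisation: replace names with numbered labels.
speakers = ["John Smith", "Jane Doe"]  # invented names
mapping = {name: f"Participant {i}" for i, name in enumerate(speakers, 1)}

transcript = "John Smith: I found the language courses helpful."
for name, label in mapping.items():
    transcript = transcript.replace(name, label)

print(transcript)  # "Participant 1: I found the language courses helpful."
# 'mapping' is the researchers' record of which participant has which number.
```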
#### Selection of participants/data subjects
The data subjects are refugees from Syria settled in the respective countries,
or members of host communities in the respective countries. Participants are
selected only on the basis of being a member of one of these groups. They will
be excluded from participation on the basis of either _age_ (only adults
involved) or _asylum status_ (people whose application for asylum is ongoing,
or who have been refused, are not eligible to take part in the study). The
researchers have no pre-existing relationship with the data subjects.
Participation is completely voluntary. It is made clear to potential
participants, via an information sheet, that they are in no way obliged to
take part, that they receive no special benefit from taking part except a
thank you gift of no more than 20€ or equivalent such as shopping coupons
(agreed by the experts in the consortium to be appropriate), and that they may
withdraw from the process at any point (including after their survey, at which
point they may still withdraw their data). 27 The informed consent process
makes absolutely clear to the potential participants what their participation
involves and how their data will be used. Since no further processing of the
data is to be conducted, the data will not be processed in ways that the
participants might not expect.
#### Purpose
The overall purpose of the processing is to better understand the nature of,
and factors affecting, the integration of refugees from Syria in host
communities in Sweden, Germany, Croatia, and Jordan. From this, the consortium
will extrapolate views concerning the nature of, and factors affecting, the
integration of refugees from Syria in host communities in general.
The aggregated dataset will be made publicly available (in anonymised form).
The purpose of this is to enable other researchers to use the data, thereby
maximising its potential to provide value to society. This is in line with the
requirements of the European Commission’s approach to the use of research data
collected with the use of EU public funding. (This is outside the scope of the
GDPR, as the data is anonymous, but it is here noted as a matter of full
transparency.)
#### Legal basis
The legal basis for the processing of personal data in FOCUS fieldwork is
_consent_ , under
GDPR, Article 6(1)(a). This implies the following requirements, derived from
GDPR, Article 7 ( _Conditions for consent_ ).
1. Where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data.
_Each partner is responsible for being able to demonstrate that the data
subject consented to participate. This is achieved by the use of informed
consent forms, stored (in physical format) by the respective partners. Where
an oral consent procedure is used, the forms will be retained (not signed by
the subject), but signed by the person who witnessed the giving of consent._
2. If the data subject’s consent is given in the context of a written declaration which also concerns other matters, the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language.
_The informed consent forms are specific to the survey or focus group (there
are separate information sheets for each activity) and do not address any
other activities. The informed consent forms cover research ethics as well as
data protection. However, the participant must check a specific box to
indicate their consent to the data processing aspects of the research._
3. The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.
_As mentioned above, the activities are explicitly designed to ensure that
consent can be withdrawn and the subject’s data can be removed from the study.
The information sheets that accompany the informed consent form make clear
that the subject has the right to withdraw, even after the activity is
finished. Although the process for withdrawing takes longer than the process
for consenting, it is not significantly more difficult._
4. When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.
_The researcher collecting the informed consent is responsible for assessing
whether the consent is freely given. Nothing else is conditional on the
subject’s consent._
#### Processing of special categories of personal data
GDPR Article 9 in principle prohibits the processing of certain categories of
data, unless certain conditions obtain.
_Processing of personal data revealing racial or ethnic origin, political
opinions, religious or philosophical beliefs, or trade union membership, and
the processing of genetic data, biometric data for the purpose of uniquely
identifying a natural person, data concerning health or data concerning a
natural person’s sex life or sexual orientation shall be prohibited_ . 28
In the FOCUS fieldwork:
* the informed consent and recruitment processes **do not** involve processing of special categories of personal data;
* the focus groups **do not** involve processing of special categories of personal data (however, it is possible that such data may be volunteered by a participant).
* the surveys **_do_ ** involve processing of special categories of personal data.
The surveys **directly ask for** information concerning:
* **health** (‘psychological wellbeing’, ‘access to mental health services’, ‘physical wellbeing’)
* **religion/ethnic origin** (‘what is your religion?’, ‘how often do you attend religious meetings?’, etc.)
* **political opinions** (‘what is your political orientation?’, etc.)
The surveys ask for information that could be **indirectly suggestive** of:
* **ethnic origin** (‘where was your spouse/partner born?’);
* **trade union membership** (‘what is your current profession?’, etc.)
The surveys also ask for data that **could be sensitive** , even though it is
not included as a special category of data:
* **asylum status**
* **‘what are your net earnings…?’**
* **receipt of government welfare**
With regard to these aspects of the proposed processing, we rely on GDPR
Article 9(2)(a): _Paragraph 1 shall not apply if one of the following
applies:_
_(a) the data subject has given explicit consent to the processing of those
personal data for one or more specified purposes, except where Union or Member
State law provide that the prohibition referred to in paragraph 1 may not be
lifted by the data subject;_
As described in the _Legal basis_ section above, we have taken steps to ensure
that data subjects give explicit consent to the personal data processing. The
information sheet makes clear that the survey asks questions that reveal
information about health, religious views, ethnic origin, and political
opinions.
#### Involvement of vulnerable groups
Vulnerability may derive from a number of factors. For example, the CIOMS
guidelines on Health-related Research Involving Humans list the following
potentially vulnerable groups or factors: 29
* capacity to consent;
* individuals in hierarchical relationships;
* institutionalised persons;
* women;
* people receiving welfare benefits or social assistance and other poor people and the unemployed;
* people who perceive participation as the only means of accessing medical care;
* some ethnic and racial minorities;
* homeless persons, nomads, refugees or displaced persons;
* people living with disabilities; people with incurable or stigmatized conditions or diseases;
* people faced with physical frailty, for example because of age and co-morbidities;
* individuals who are politically powerless;
* members of communities unfamiliar with modern medical concepts;
* in some contexts, vulnerability might be related to gender, sexuality and age.
A further difficulty in ensuring adequate protection of the rights and
interests of potentially vulnerable people is that it can be difficult to
assess (a) whether someone is a member of a vulnerable-group category, and (b)
whether their membership of that vulnerable-group category _in fact_ makes
them vulnerable in the specific case. 30
In FOCUS, the most obvious factor affecting potential vulnerability concerns
people who are refugees from Syria, as these people are directly targeted in
sampling. But other factors from the list above should be considered.
<table>
<tr>
<th>
**Directly applicable**
</th>
<th>
**Indirectly applicable**
</th>
<th>
**Not applicable**
</th> </tr>
<tr>
<td>
Homeless persons, nomads, refugees or displaced persons
\- _Refugees from Syria are a target population._
</td>
<td>
Women
\- _Women will certainly be included but should not be vulnerable as such._
</td>
<td>
Capacity to consent
\- _Capacity to consent is a condition of participation._
</td> </tr>
<tr>
<td>
</td>
<td>
People receiving welfare benefits or social assistance and other poor people
and the unemployed
\- _Refugees from Syria may be in receipt of welfare._
</td>
<td>
Individuals in hierarchical relationships \- _Not likely._
</td> </tr>
<tr>
<td>
</td>
<td>
People who perceive
participation as the only means of accessing medical care
\- _Since access to healthcare is an issue in integration, involvement of such
people cannot be excluded._
</td>
<td>
Institutionalised persons - _Not likely._
</td> </tr>
<tr>
<td>
</td>
<td>
Some ethnic and racial minorities
\- _Refugees from Syria are typically ethnic/racial minorities in host
communities_
</td>
<td>
People living with disabilities; people with incurable or stigmatized
conditions or diseases
\- _No more likely than in any other research._
</td> </tr>
<tr>
<td>
</td>
<td>
Individuals who are politically powerless
\- _Refugees from Syria are arguably less well represented politically._
</td>
<td>
People faced with physical frailty, for example because of age and co-morbidities
\- _No more likely than in any other research._
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Members of communities unfamiliar with modern medical concepts
\- _No more likely than in any other research._
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
In some contexts, vulnerability might be related to gender, sexuality and age
\- _No more likely than in any other research._
</td> </tr> </table>
These factors will be reflected in the risk analysis conducted at Steps 4 and
5.
### Step 2: Assessment of necessity and proportionality
#### Necessity
The proposed personal data processing is a necessary step relative to the
stated objective of the FOCUS research project. Here we demonstrate this by
presenting a recapitulation of the overall objectives of the project, as well
as the specific goals of the fieldwork.
The overall goal of the FOCUS project is to “ _increase understanding of, and
provide effective and evidence-based solutions for, the challenges of forced
migration within host communities and thereby contribute to increased
tolerance, peaceful coexistence, and reduced radicalization across Europe and
in the Middle East_ ”. FOCUS also “ _undertakes an ambitious programme of
engagement with policy makers, end-users, host communities, refugees and other
stakeholders_ ”. This is in order to “ _ensure that FOCUS research and
solutions are acceptable and useful for policy makers, while meeting the needs
of end-user organisations and ultimately refugees and host communities_ ”. 31
The fieldwork component of the project can be seen as necessary and integral
to the pursuit of these goals. For it is the fieldwork which enables the FOCUS
researchers to gain first-hand insight into “ _the challenges of forced
migration within host communities_ ”. There is no other means of gathering
this information that would be as effective while processing less personal
data. The insights gained from this research serve as the basis for
the development of evidence-based solutions to promote integration.
The overall project, and the fieldwork in particular, is thus directed towards
a legitimate, societally desirable goal, which is in the public interest,
supported by public funding from the European Commission.
**The survey is a necessary component of the fieldwork** . Surveying is the
most efficient and effective means of gathering feedback from a statistically
significant number of participants. The survey has been developed through a
detailed process. FOCUS WP2 provided key background information on the socio-
economic and socio-psychological integration of refugees and host communities,
as well as analysis of integration policies, tools, and asylum migration
patterns. WP3 used the outcomes of WP2 to develop indicators of integration
and specific research questions. The survey has been designed to target these
indicators and research questions. The highly experienced experts in this
field have leveraged their experience and expertise to ensure that (a) the
survey questions directly and fully address the stated research questions and
that no questions are included which cannot be linked to a specific research
goal.
The **focus groups are a necessary complement to the survey** . The same
topics, indicators, and research questions as are addressed in the survey are
also addressed in the focus groups. But the focus groups provide qualitative
data. It is very valuable to have quantitative data from the survey and
qualitative data from the focus groups. Again, the focus groups have been
designed and planned by experienced experts and will be moderated to ensure
they keep on-topic, not gathering unnecessary personal data from participants.
The **processing of personal details for purposes of recruitment is
necessary** to make the focus groups possible. The **processing of personal
data via informed consent forms is a basic requirement** of good research
ethics.
#### Proportionality
Are these processing activities proportional? Yes. The FOCUS project pursues a
legitimate, societally desirable goal in the public interest, the fieldwork is
a necessary component of the project, and the fieldwork is designed to collect
the minimum viable amount of data from participants. Other things being equal,
and pending the risk analysis presented below, the proposed processing is
necessary and proportional in pursuit of a legitimate goal. Below we support
this assessment by outlining the measures to ensure good practice in data
protection.
### Step 3: Measures already envisaged
#### Data minimisation
The surveys and focus groups have been developed, over an extended period of
months, by senior, highly experienced experts in this field. They have
leveraged all their experience and expertise to ensure that (a) the study
design is geared towards explicitly stated research questions, and (b) that
the content of the surveys and focus groups involves the collection of enough
data to fully address the research questions, but no more than is required.
The number of data subjects will be 4,400 across the four research sites.
These numbers have been agreed as the smallest sample sizes that can reliably
produce the desired results at the necessary level of confidence.
#### Preventing function creep
The purpose of the personal data processing is set out above. To ensure that
these purposes do not ‘creep’, i.e. that the consortium does not use the data
for other, non-stated, purposes, it has been ensured that the data is
pseudonymised (relative to the partner who collected it) and anonymised
(relative to everyone else) prior to aggregation and analysis. The aggregated,
anonymised dataset will be made publicly available at the end of the project.
This means that there is actually little point in using personal data for
unstated purposes as all research goals can be achieved with the anonymised
dataset anyway.
#### Data retention
The survey and focus group contributions are pseudonymised at, or very shortly
after, the point of collection. The pseudonymisation is strong: survey
responses can only be reidentified using the code that is kept securely by the
relevant partner. Data is never shared in non-pseudonymous form. Indeed, when
shared, the data is effectively anonymous because no one except the party that
collected it has access to the code. The only data that directly
identifies the data subject is the informed consent form and the contact
details that are required for recruitment. Contact details will be deleted
after the focus groups have been completed. (Note also that the recordings of
the focus groups will be destroyed as soon as the transcription has been
completed.) That leaves only the informed consent forms. These are retained
for not more than 10 years, in line with institutional requirements.
#### Security and data sharing
Each partner is responsible for ensuring the security of the data they
collect. As established academic institutions, they have the organisational
capacity and expertise to ensure data security. Each partner will follow the
data security policies prescribed by their own organisation, with the
provision that the following minimum standards will be respected.
* _Digital data backups:_ All survey responses and transcriptions of focus group discussions will be backed up on at least one physically distinct medium (e.g. on a separate server, external hard drive, etc.). Recordings are on devices that are not connected to the Internet.
* _Physical data backups:_ All informed consent forms will be stored in a secure environment, in a locked drawer or equivalent. No digital backups or photocopies will be made.
* _Access control_ : All personal data from the fieldwork will be subject to strict access control. Only persons engaged in the project by the relevant partner will be given access to fieldwork data, and then only with the permission of an authorised team member of the relevant partner. Once made publicly available, the anonymous dataset will not be subject to access control (but note that this data will not include any personal data).
* _Data transfers (general)_ : There will be no sharing of personal data from fieldwork between partners within EU Member States (any such data will be anonymised prior to transfer and then transferred in encrypted format).
* _Data transfers (outside EU)_ : As stated, no personal data will be shared between partners. Data transferred to Jordan will be effectively anonymous (it will be personal data only to the research partner that collected it, for whom it will be pseudonymous).
#### Accountability
Each partner responsible for collecting data in the fieldwork is, as data
controller, responsible for demonstrating their compliance with the GDPR. They
are also responsible for ensuring the good practice and compliance of any data
processors working on their behalf.
#### Transparency
Data subjects go through an informed consent process prior to participation
and data collection. This process provides them with information about the
project and about their involvement in it. They will be informed of the
purposes and nature of the data processing, as well as that some special
categories of personal data will be processed. They will also be informed
about the pseudonymisation of their data and given assurances that their data
is not shared except in anonymised form. Full details of the measures to
protect their personal data are provided on a dedicated page on the project
website, to which they are given links.
#### Data subjects’ rights
Data subjects’ rights are set out in GDPR Chapter 3. Participants are provided
with information on how to exercise their rights, by contacting the relevant
data protection officer or contact point, on the informed consent form that
they sign when they agree to participate in the research. They are also
directed to the project website, which contains privacy notices specifically
for the fieldwork. These notices provide full details of all measures taken to
protect their personal data and full instructions on how to exercise their
rights.
#### Compliance of data processors
Each data controller (MAU, HU/Charite, FFZG, CSS) is responsible for ensuring
the compliance of any data processors that they engage (as per GDPR Article
28). Data processors may be employed, via contract, to conduct the survey
fieldwork. People conducting the fieldwork are given training via a manual,
developed within the scope of WP3. This provides guidance on best practice for
the survey.
#### Safeguarding international transfers
Data is pseudonymised prior to transfer. Since only the data controllers who
collected the data have the codes that link particular survey responses to
particular individuals, the data is effectively anonymous to any other party,
including the intended recipients (CSS). In effect then, the data is
transferred in an anonymous state. All data transfers will also be encrypted
for additional security.
#### Measures to minimise impact of processing special categories of data
It was established above that the survey collects the following ‘special
categories’ of data and potentially sensitive data.
<table>
<tr>
<th>
**Category directly asked about**
</th>
<th>
**Specifically**
</th> </tr>
<tr>
<td>
Health
</td>
<td>
‘Psychological wellbeing’, ‘access to mental health services’, ‘physical
wellbeing’
</td> </tr>
<tr>
<td>
Religion/ethnic origin
</td>
<td>
‘What is your religion?’, ‘how often do you attend religious meetings?’, etc.
</td> </tr>
<tr>
<td>
Political opinions
</td>
<td>
‘What is your political orientation?’, etc.
</td> </tr>
<tr>
<td>
**Category indirectly asked about**
</td>
<td>
**Specifically**
</td> </tr>
<tr>
<td>
Ethnic origin
</td>
<td>
‘Where was your spouse/partner born?’
</td> </tr>
<tr>
<td>
Trade union membership
</td>
<td>
‘What is your current profession?’, etc.
</td> </tr>
<tr>
<td>
**Other potentially sensitive data / topics**
</td> </tr>
<tr>
<td>
Asylum status
</td> </tr>
<tr>
<td>
‘What are your net earnings…?’
</td> </tr>
<tr>
<td>
Receipt of government welfare
</td> </tr> </table>
Although this data is sensitive, the impact of processing it is very small.
First, the data is very unlikely to lead to any kind of discrimination because
it will not be shared with any parties who are not engaged in the project and
committed to its positive societal objectives. Second, the data is
pseudonymised (relative to the party that collects it) and anonymised
(relative to the parties to whom it is transferred). When the data is made
public it will be completely anonymous.
#### Measures to minimise impact of involvement on vulnerable groups
Refugees from Syria are a target population of the fieldwork. There are two
main ways in which people from this group could be negatively affected by the
data collection and processing.
Firstly, in conducting the research (survey or focus group), they could be led
to revisit painful episodes in their past or present circumstances, leading to
discomfort or pain. In response to this, it should be noted that the research
is not designed to address any such issues. The likelihood of any such
eventualities is very small. Nonetheless, the consortium has taken steps to
deal with any such issues, if they should occur. The information sheet used in
the informed consent process makes clear that if a participant feels
distressed at any time during or after the survey/focus group, they can
contact professionals at the relevant project partner for support. In
addition, there is a short leaflet available to participants which gives
information about what to do if they feel distressed. 32
Secondly, there is, in principle, a very remote possibility that data
collected in the survey or focus groups could, if leaked, have a negative
impact on the data subject. In response to this, it should be noted firstly
that the partners take appropriate technical and organisational measures to
guard against data leaks. Secondly, the worst effect for participants would be
if their asylum status were challenged on the basis of information revealed by
the research; against this possibility, however, a condition of participation
is that the participant’s asylum status is settled. Thus, the people who are
most vulnerable to this kind of problem are not included among the data
subjects.
A general point that should be considered is that the research in FOCUS has
been designed by experts who are experienced in working with refugees and
migrants. They bring all their experience working with such people, and
sensitivity to the risks, to the project.
It has also been identified that some other vulnerable groups are potentially
indirectly impacted by the research. In all cases, the impact on these groups
is secondary to the direct impact that would be brought about by their status
as refugees from Syria. Therefore, the considerations mentioned above in this
subsection apply.
### Step 4: Assessment of the risks to rights and freedoms
In this section we assess risks to the rights, freedoms, and interests of data
subjects, taking into account the ‘measures already envisaged’ as outlined at
Step 3.
<table>
<tr>
<th>
**Risk ID**
</th>
<th>
**Description**
</th>
<th>
**Likelihood**
**(1-3) 34 **
</th>
<th>
**Severity**
**(1-3) 35 **
</th>
<th>
**Risk Score**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Lower protections in non-EU/EEA legal frameworks for data protection.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
2
</td>
<td>
Data subjects not fully aware of planned data processing.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
3
</td>
<td>
Personal data processed for secondary purposes not originally planned or
communicated to data subjects.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr>
<tr>
<td>
4
</td>
<td>
Data collection creep: additional data that is not specifically required for
the assigned purposes is collected.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
5
</td>
<td>
Data is not accurately recorded.
</td>
<td>
1
</td>
<td>
1
</td>
<td>
1
</td> </tr>
<tr>
<td>
6
</td>
<td>
Pseudonymisation procedures are not effective.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr>
<tr>
<td>
7
</td>
<td>
Data is not securely stored.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
8
</td>
<td>
Conditions for valid consent under the GDPR are not met.
</td>
<td>
2
</td>
<td>
2
</td>
<td>
4
</td> </tr>
<tr>
<td>
9
</td>
<td>
Collection of special categories of data not appropriately communicated to
data subjects.
</td>
<td>
2
</td>
<td>
2
</td>
<td>
4
</td> </tr>
<tr>
<td>
10
</td>
<td>
Special categories of data not suitably protected.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr>
<tr>
<td>
11
</td>
<td>
Data subjects do not understand how to exercise their rights under GDPR
Chapter 3.
</td>
<td>
2
</td>
<td>
2
</td>
<td>
4
</td> </tr>
<tr>
<td>
12
</td>
<td>
Controllers do not adequately respond to data subjects who exercise their
rights under GDPR Chapter 3.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr>
<tr>
<td>
13
</td>
<td>
Data subjects not able to withdraw their data from the aggregated dataset.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
14
</td>
<td>
Data subjects can be reidentified from the aggregated dataset.
</td>
<td>
3
</td>
<td>
2
</td>
<td>
6
</td> </tr>
<tr>
<td>
15
</td>
<td>
Data processors do not comply with their responsibilities.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
16
</td>
<td>
Adequate records of data processing not maintained.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr>
<tr>
<td>
17
</td>
<td>
Personal data transferred to third countries.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr>
<tr>
<td>
18
</td>
<td>
Personal or sensitive data that is not strictly relevant to FOCUS’s research
goals or questions may be volunteered during focus groups.
</td>
<td>
1
</td>
<td>
2
</td>
<td>
2
</td> </tr>
<tr>
<td>
19
</td>
<td>
Leaked data about an individual could negatively affect their asylum
application or status.
</td>
<td>
1
</td>
<td>
3
</td>
<td>
3
</td> </tr> </table>
34. 1 = remote; 2 = possible; 3 = probable.
35. 1 = minimal; 2 = significant; 3 = severe.
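For clarity, the risk score in the final column is simply the product of the likelihood and severity ratings on the 1-3 scales defined in footnotes 34 and 35. A minimal sketch of the arithmetic (hypothetical code, not part of the project's tooling):

```python
# Minimal sketch of the scoring scheme used in the table above: the risk
# score is the product of likelihood (1-3) and severity (1-3), so scores
# range from 1 to 9.
def risk_score(likelihood: int, severity: int) -> int:
    if not (1 <= likelihood <= 3 and 1 <= severity <= 3):
        raise ValueError("likelihood and severity must be rated 1-3")
    return likelihood * severity


# Risk 14 (reidentification from the aggregated dataset): probable (3)
# likelihood and significant (2) severity give the score of 6 in the table.
assert risk_score(3, 2) == 6
```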
### Step 5: Measures envisaged to address the risks
In this section we describe mitigations to address the risks identified at
Step 4. In many cases, sufficient mitigations are described at Step 3 – and
hence already taken into account at Step 4 – and so the residual risk score is
unchanged from the original.
<table>
<tr>
<th>
**Risk ID**
</th>
<th>
**Risk**
**Description**
</th>
<th>
**Mitigation**
</th>
<th>
**Effect** (new likelihood / severity score)
</th>
<th>
**Residual risk score**
(old score in grey)
</th> </tr>
<tr>
<td>
1
</td>
<td>
Lower protections in non-EU/EEA legal frameworks
for data protection.
</td>
<td>
Data processing in Jordan in FOCUS meets GDPR standards. This is a
project-wide policy in FOCUS.
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
2
</td>
<td>
Data subjects not fully aware of planned data processing.
</td>
<td>
Information provided during the informed consent process, with additional
information available on the FOCUS project website (links provided).
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
3
</td>
<td>
Personal data processed for secondary purposes not originally planned or
communicated to data subjects.
</td>
<td>
Personal data is pseudonymised (relative to the collecting party) and
effectively anonymised (relative to any other party). Thus if data is re-used
by any party except the collecting party, it is not personal data. For the
collecting party, access to the codes that break the pseudonymisation is
restricted.
Aggregated and anonymised datasets are published, hence there is no real
incentive to use the non-anonymous data.
</td>
<td>
1 / 3
</td>
<td>
3 (3)
</td> </tr>
<tr>
<td>
4
</td>
<td>
Data collection creep: additional data that is not specifically required for
the assigned purposes is collected.
</td>
<td>
The surveys have been designed by a team of experts with clear research
objectives and stated research questions. The surveys are fixed and will be
reviewed by independent institutional research ethics committees.
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
5
</td>
<td>
Data is not accurately recorded.
</td>
<td>
Data collection is carried out by professionals with training.
</td>
<td>
1 / 1
</td>
<td>
1 (1)
</td> </tr>
<tr>
<td>
6
</td>
<td>
Pseudonymisation procedures are not effective.
</td>
<td>
Pseudonymisation techniques are standard for the field and have been fully
described. The only link between a survey response and the individual who gave
the data is a single code which is held by only the data subject and the
collecting party on the informed consent form. The informed consent forms are
stored in physical format in locked drawers (or equivalent). No digital copies
are taken.
</td>
<td>
1 / 3
</td>
<td>
3 (3)
</td> </tr>
<tr>
<td>
7
</td>
<td>
Data is not securely stored.
</td>
<td>
Each data controller is experienced in fieldwork. They have institutional
standards, in addition to the minimum standards that have been defined in
FOCUS. Technical and organisational security measures will be adopted. All
data transfers will (a) involve only aggregated and (effectively anonymised)
data and (b) be secured by encryption.
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
8
</td>
<td>
Conditions for valid consent under the GDPR are not met.
</td>
<td>
The informed consent process has been designed to ensure that subjects give a
clear and explicit indication of their consent, that the processing of
sensitive data is made plain to them, that they can consult the full details
of the data protection practices on the website and may do so prior to signing
the informed consent form, and that the consent to data processing is distinct
from the general (research ethics) consent for participating in the study.
In some cases, oral consent may be sought from the participants. This is not
unusual in research of this kind as the requirement to sign a form is, for
reasons of cultural difference, considered more sensitive than it typically
would be in research with culturally European subjects. When consent is given
orally it will be witnessed and recorded by the researcher. In all cases,
whether consent is written or oral, the subject will receive written
information and links to the project website. All resources, from the informed
consent form to the information sheet to the website, will be available in the
language of the host communities
(Swedish, German, Croatian, and Arabic).
</td>
<td>
1 / 2
</td>
<td>
2 (4)
</td> </tr>
<tr>
<td>
9
</td>
<td>
Collection of special categories of data not appropriately communicated to
data subjects.
</td>
<td>
The collection of special categories of data has been carefully assessed –
including categories that are not technically included in the GDPR but are
considered potentially sensitive anyway. Information that this data is to be
collected is communicated to the subjects during the informed consent process.
</td>
<td>
1 / 2
</td>
<td>
2 (4)
</td> </tr>
<tr>
<td>
10
</td>
<td>
Special categories of data not suitably protected.
</td>
<td>
See Risk 7.
</td>
<td>
1 / 3
</td>
<td>
3 (3)
</td> </tr>
<tr>
<td>
11
</td>
<td>
Data subjects do not understand how to exercise their rights under GDPR
Chapter 3.
</td>
<td>
During the informed consent process, subjects receive (in writing) information
about their rights, how to exercise them, and who to contact. They are given
the addresses for the relevant pages on the project website, which explains in
full what their rights are and how they can exercise them.
</td>
<td>
1 / 2
</td>
<td>
2 (4)
</td> </tr>
<tr>
<td>
12
</td>
<td>
Controllers do not adequately respond to data subjects who exercise their
rights under
GDPR Chapter 3.
</td>
<td>
Controllers have provided the names of contact points for data protection in
their organisations. Each is an established university or research institute,
with experience and capacity of running research projects of this kind. The
FOCUS consortium also includes data protection experts who can provide advice
to consortium partners on demand.
</td>
<td>
1 / 3
</td>
<td>
3 (3)
</td> </tr>
<tr>
<td>
13
</td>
<td>
Data subjects not able to withdraw their data from the aggregated dataset.
</td>
<td>
The pseudonymisation process has been designed to ensure that the data is as
secure as possible while maintaining a possibility for subjects to withdraw
their data. As long as the subject has their individual code, they can have
their data removed from the aggregated dataset. The organisation responsible
for analysis of the dataset (CSS) will have the code numbers, but no link
between the code numbers and individuals. Hence if the subject contacts the
organisation who collected their data, they can verify their identity and ask
to have their data withdrawn. That organisation can then contact CSS to ask
them to remove the entries for that particular code number without having to
reveal the subject’s identity.
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
14
</td>
<td>
Data subjects can be reidentified from the aggregated dataset.
</td>
<td>
Data subjects _can_ be reidentified from the aggregated dataset, but _only_ by
selected members of the research team at the organisation that collected their
data. _In general_ , data subjects cannot be reidentified from the dataset.
Hence the likelihood that an individual could be reidentified by the
collecting organisation is, by design, high (this preserves the right to
withdraw); but the likelihood that anyone could be reidentified without the
subject’s unique code number is very low.
</td>
<td>
1 / 2
</td>
<td>
2 (6)
</td> </tr>
<tr>
<td>
15
</td>
<td>
Data processors do not comply with their responsibilities.
</td>
<td>
The data processors are employed under contract. In the case of Sweden/MAU,
they are a national body. In all cases they are professionals who have been
duly informed about the project and what they must do. Data controllers are
responsible for engaging only reliable data processors. The data controllers
are experienced in this kind of research and so the risks are very low.
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
16
</td>
<td>
Adequate records of data processing not maintained.
</td>
<td>
The data controllers are experienced in this kind of research and so the risks
of administrative problems such as this are very low.
</td>
<td>
1 / 3
</td>
<td>
3 (3)
</td> </tr>
<tr>
<td>
17
</td>
<td>
Personal data transferred to third countries.
</td>
<td>
The data that is transferred to Jordan will be aggregated and, effectively,
anonymised. Technically, the data is pseudonymised because there exists a code
that links the survey to an individual. But since the recipient in Jordan is
not in possession of and not going to receive those codes, from the
perspective of anyone but the controller who collected the data, it is
anonymous. Therefore personal data is not transferred between partners in the
fieldwork at all.
</td>
<td>
1 / 3
</td>
<td>
3 (3)
</td> </tr>
<tr>
<td>
18
</td>
<td>
Personal or sensitive data that is not strictly
relevant to
FOCUS’s research goals or questions
may be volunteered during focus groups.
</td>
<td>
Focus group moderators are experienced and will ensure that discussion stays
on topic. In case non-relevant sensitive data is volunteered, the moderator
will steer the discussion back on-topic. Such data
will be erased at the point of transcription.
</td>
<td>
1 / 2
</td>
<td>
2 (2)
</td> </tr>
<tr>
<td>
19
</td>
<td>
Leaked data about an individual could negatively affect their asylum
application or status
</td>
<td>
Individuals whose asylum application or status would be vulnerable to such an
event are, by design, not eligible to participate in the survey. Therefore the
likelihood of this happening is very low, and the severity is very low too.
</td>
<td>
1 / 1
</td>
<td>
1 (3)
</td> </tr> </table>
After the mitigations are applied, no risk has a likelihood of greater than 1.
This means that the risks are ‘remote’ and have thus been effectively
mitigated. They are nonetheless monitored, as per Step 7.
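To illustrate the withdrawal mechanism described for Risk 13 above, the following hypothetical Python sketch shows how the analysing partner could delete a subject's records given only the pseudonymous code, without ever learning the subject's identity (function and field names are ours, purely for illustration):

```python
# Hypothetical sketch of the Risk 13 withdrawal flow: the collecting
# partner verifies the subject's identity locally and forwards only the
# pseudonymous code; the analysing partner deletes the matching records
# without ever learning who the subject is.
def withdraw_by_code(aggregated: list[dict], code: str) -> list[dict]:
    """Remove every record bearing the given pseudonymous code."""
    return [row for row in aggregated if row["code"] != code]


dataset = [{"code": "A1F3", "q1": 4}, {"code": "B7C9", "q1": 5}]
dataset = withdraw_by_code(dataset, "A1F3")  # subject A1F3 withdraws
assert all(row["code"] != "A1F3" for row in dataset)
```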
### Step 6: Documentation
This DPIA was conducted as part of the process of preparing for the fieldwork
to be conducted in FOCUS WP3 and WP4. The DPIA itself is included in the FOCUS
project Data Management Plan. As such, it is presented alongside:
* **The Data Management Plan (DMP)** : this explains how data (personal _and_ non-personal) is generated and used during the project, and how the consortium meets its obligations to make as much of its generated data as possible available for public use by other researchers. In covering general data management in the FOCUS project, the DMP is clearly relevant to the DPIA and should be read alongside it.
* **The Ethics Management Plan (EMP)** : this explains how the consortium
addresses the research ethics issues posed by the project. It includes
descriptions of the informed consent processes to be used for the fieldwork.
As such, the EMP is clearly relevant to the DPIA and should be read alongside
it.
### Step 7: Monitoring
This DPIA was conducted by AND-CG. AND-CG is responsible for ethics management
in the FOCUS project. Part of our role, as the project continues, is to
monitor the data processing within the project. We will monitor data
processing against the DMP, the DPIA, and the GDPR more generally. The DMP
will be updated periodically throughout the lifetime of the project. We will
also review the DPIA to ensure compliance with its risk management approach.
[END OF DPIA]
#### 7.2 Consortium ethics and data protection contact points
**Table 6** : Designated data and ethics management contact points in the
FOCUS project.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Data and Ethics Management Contact Point**
</th> </tr>
<tr>
<td>
Danish Red Cross (DRC)
</td>
<td>
Martha Bird, [email protected]
Anouk Boschma, [email protected]
</td> </tr>
<tr>
<td>
AND Consulting Group (AND)
</td>
<td>
Andrew Rebera, [email protected]
</td> </tr>
<tr>
<td>
Faculty of Humanities and Social Science,
University of Zagreb (FFZG)
</td>
<td>
Jana Kiralj, [email protected]
</td> </tr>
<tr>
<td>
Malmö University, Institute for Studies of
Migration, Diversity and Welfare (MAU)
</td>
<td>
Pieter Bevelander, [email protected]
</td> </tr>
<tr>
<td>
University of Jordan, Center for Strategic
Studies (CSS)
</td>
<td>
Walid Alkhatib, [email protected]
</td> </tr>
<tr>
<td>
Berlin Institute for Integration and Migration Research at the Faculty of
Humanities and Social Sciences at Humboldt University of Berlin (HU) /
Charité
</td>
<td>
Anna Brenner, [email protected]
</td> </tr>
<tr>
<td>
Arttic (ART)
</td>
<td>
Andreas Schweinberger, [email protected]
</td> </tr>
<tr>
<td>
Q4 Public Relations (Q4)
</td>
<td>
Peter MacDonagh, [email protected]
</td> </tr> </table>
### 8. Ethics Management Plan
The first principle of the ALLEA _European Code of Conduct for Research
Integrity_ states that: “Good research practices are based on fundamental
principles of research integrity”. 33 In FOCUS, this is also our first
principle.
The key to recognising this principle’s force and, especially, its impact, is
to notice that it focusses not on the inherent value of research integrity as
something which is simply important in its own right (although this is
obviously the case), but rather on the status of research integrity as the
foundation of good, effective research. That is to say, **in order to
successfully pursue our goals in FOCUS, ethics and integrity are not
constraints or obstacles (as they are sometimes mistakenly, if understandably,
seen** 34 **), but values and objectives that make good research possible**.
This Ethics Management Plan describes how the FOCUS project embeds research
ethics and integrity into its activities as a key pillar of effective
research, and, of course, as a protection of the rights of research
participants and other stakeholders.
_Section 8.1_ (‘Ethics Management Structure’) sets out the management
structure that has been established in the project to implement and oversee
research ethics and integrity. It describes the role of consortium partners
and the Ethics Management Team. _Section 8.2_ describes the role of the
external Ethics Advisory Board, who provide independent advice and oversight.
Research in FOCUS is supported by participants from refugee and host community
groups. In all research involving humans, there are some basic principles that
should be respected. A number of sources are recognised as providing solid,
reliable standards for research ethics and integrity. Some of these, such as
_The European Code of Conduct for Research Integrity_ , deal specifically with
research ethics and integrity; others, such as the _Charter of Fundamental
Rights of the European Union_ , are more general statements of human or
fundamental rights. Some key resources are listed below.
**Key resources**
* _The Nuremberg Code_ (1947)
* Council of Europe, _European Convention on Human Rights_ (1950/2010)
* World Medical Association, _Declaration of Helsinki_ (1964/2013)
* _The Belmont Report_ (1979)
* The Council of Europe, Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (The Oviedo Convention) (1997)
* Council for International Organizations of Medical Sciences (CIOMS), _International Ethical Guidelines for Health-related Research Involving Humans_ (2016)
* UNESCO, Universal Declaration on Bioethics and Human Rights (2005)
* European Union, _Charter of Fundamental Rights of the European Union_ (2012)
* ALLEA, _The European Code of Conduct for Research Integrity_ (2017)
Respecting the principles encoded in these documents generally requires such
steps as: gathering genuinely informed consent from participants; carefully
balancing any foreseeable risks to participants against the likely benefits of
the research; ensuring that research findings are not misused; and ensuring
that confidentiality and the privacy rights of participants are protected.
However, **as widely recognised – including by the European Commission DG
Research and Innovation in their _Guidance Note for research with refugees,
asylum seekers, and migrants_ ** 35 **– research with refugees usually
entails further specific commitments, recognising the increased vulnerability
of some potential research participants** .
Moreover, the subject of _integration_ is a sensitive one, particularly
considering current societal and political tensions in Europe. As such,
_section 8.3_ (‘Key ethics issues in FOCUS’) discusses specific concerns
raised by research in this area and indicates how our general principles of
research ethics and integrity can be applied in concrete situations in FOCUS.
This section also includes discussion of the consortium’s compliance with the
General Data Protection Regulation (GDPR). 36
#### 8.1 Ethics management structure
Ethics management in FOCUS is based around a four-tier system of progressively
more independent oversight and advice.
**Table 7** : 4-tiered ethics management structure
<table>
<tr>
<th>
**Tier**
</th>
<th>
**Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
1
</td>
<td>
\- All partners
</td>
<td>
**Tier 1: Acknowledgement of responsibility**
At tier 1 each consortium member commits to ensuring that appropriate
standards of research ethics, research integrity, and privacy are respected
throughout the project and in all its activities.
The partners undertake to ensure that the rights of research participants and
the core values embedded in the EU Charter of Fundamental Rights are respected
and promoted at all stages of the project.
Each partner will appoint at least one person as their ethics point of
contact, who will be responsible for liaising with the Ethics Management Team
(see _section 7.1_ above).
</td> </tr>
<tr>
<td>
2
</td>
<td>
\- AND
</td>
<td>
**Tier 2: Ethics Manager and Ethics Management Team**
Tier 2 sees the basic implementation of internal ethics management in the
project. In FOCUS this is largely (but not only) through Task 1.3.
The Ethics Management Team will be run by AND. 37 The Ethics Management Team
is available to provide partners with information, advice and support for any
ethical or data protection issues that arise during the project.
The Ethics Manager will be Dr Andrew Rebera. Andrew has a DPhil in
Philosophy and is also an IAPP (International Association of Privacy
Professionals) accredited “Certified Information Privacy Manager”
(CIPM). 38 Andrew is highly experienced in collaborative research projects
and specialises in ethics management (i.e. coordinating the development,
implementation, and oversight of management structures aimed at ensuring
excellence in research ethics, data protection, and privacy).
The Ethics Manager will be supported by Mr Dimitris Dimitriou. Dimitris holds
MSc degrees in Health Psychology and Environmental Psychology. He is
specialised in ethics risk management and communication, with
extensive experience in ethics-related research in the fields of ICT and
health risk communication.
The Ethics Management Team will carry out the following activities in the
project:
* draft and periodically update the project’s DMP and EMP;
* coordinate the Ethics Advisory Board (EAB) (see tier 3 below);
* provide detailed research ethics input to the fieldwork and pilot-testing methodologies;
* support partners in obtaining the Research Ethics Committee (REC) approvals for fieldwork;
* develop research ethics validation procedures for fieldwork and pilot testing.
</td> </tr>
<tr>
<td>
3
</td>
<td>
* AND
* EAB
</td>
<td>
**Tier 3: Ethics Advisory Board (EAB)**
The EAB consists of three external (i.e. non-consortium) advisors. Their role
is to provide feedback, advice, and recommendations on relevant ethical,
fundamental rights, privacy and data protection, and societal issues.
The EAB was appointed in the first months of the project, from suggestions put
forward by the consortium. The EAB will meet face-to-face twice during the
project and will otherwise hold virtual meetings at least every 6 months. The
Ethics Manager will coordinate and chair EAB meetings.
The EAB is entitled to: (a) review any project deliverables or relevant
internal working documents; (b) review research protocols; (c) request contact
with any researcher involved in project; (d) take action – including
contacting the coordinator, the EC Project Officer, or other Commission
officers – to ensure that relevant issues are appropriately handled.
(Further details in _section 8.2_ .)
</td> </tr>
<tr>
<td>
4
</td>
<td>
* All partners collecting significant (by volume or sensitivity) personal
data
* AND
* EAB
</td>
<td>
**Tier 4: External Ethics Approvals (Research Ethics**
**Committees)**
Tier 4 concerns measures to collect and submit all required ethics and data
protection approvals from the competent bodies, such as local Research Ethics
Committees (RECs) and national Data Protection Authorities (DPAs).
We anticipate the need for the following REC approvals:
* REC approvals for field research in Germany;
* REC approvals for field research in Sweden;
* REC approvals for field research in Croatia;
* REC approvals for field research in Jordan;
* We also anticipate a potential requirement for REC approvals for pilot testing activities in WP5 (taking place in Denmark, Sweden, Austria, Germany and the United Kingdom). However, since the methodology for the pilot-test will not be prepared until the 2nd year of the project at least, we cannot say with certainty whether approvals will actually be required.
In all decisions as to what approvals are required, we will be guided by the
Ethics Management Team, the many experienced and senior
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
researchers in the FOCUS team, the EAB, and any support from the Project
Officer or project reviewers.
In all applications for approvals, the Ethics Management Team will, as
necessary and as requested, support the lead partners in preparing research
protocols and other supporting documentation; but it remains the
responsibility of the partner leading the research activity to obtain all
required approvals. Authorisations, opinions, and notifications received from
competent bodies will be retained by the lead partners, with copies –
translated if necessary – submitted to the coordinator and the Ethics Manager
upon request. These will be included (e.g. as annexes) in project management
reports to the EC.
The consortium recognises that sensitive data may be collected during
fieldwork. The old Data Protection Directive (95/46/EC) required data
controllers to notify the relevant supervisory body (the national Data
Protection Authority (DPA)) for certain acts of data processing. The GDPR
takes a different approach, with notification being only required in certain
very specific circumstances. Following the data protection impact assessment
reported in _section 7.1_ , we do not need to notify national DPAs of our
processing activities. (This is discussed at a greater level of detail in
_section 8.3_ .)
The Ethics Management Team will use the REC approvals in developing the ethics
validation procedures for fieldwork and pilot testing. This will ensure that
all requirements stipulated by the RECs are implemented.
</td> </tr> </table>
#### 8.2 Ethics Advisory Board (EAB)
The FOCUS consortium aims to ensure that the highest standards of research
ethics and privacy are respected throughout the project, particularly bearing
in mind the risks and challenges associated with research involving refugees
and forced migrants. The Ethics Advisory Board (EAB) is an important component
of the project’s overall ethics management structure. This section is the
basis of the **EAB** **Terms of Reference** that define the EAB’s role, and
set out EAB members’ duties and responsibilities.
##### 8.2.1 Mission
The role of the Ethics Advisory Board (EAB) is to provide the FOCUS consortium
with feedback, advice, and recommendations on relevant ethical, fundamental
rights, privacy, data protection, and societal issues.
##### 8.2.2 Independence
The EAB is independent from the consortium in the sense of having no
significant stake in the success or failure of the project. The EAB’s role is
simply to provide advice on what is required in order to ensure best-practice
with respect to ethics, fundamental rights, privacy, data protection, and
societal issues, particularly taking into account the rights and interests of
any research participants (including personal data subjects). Members will,
when signing up for the EAB, confirm that they are independent of the
consortium and have no relevant conflicts of interest. EAB members will not be
remunerated for their service but will be reimbursed (by the Danish Red
Cross) for travel expenses incurred in the course of their duties.
##### 8.2.3 Membership
The EAB consists of three external (i.e. non-consortium) advisors. EAB members
may be experts in ethics and/or fundamental rights, experts in privacy and
data protection, academics working in areas related to the project, or
representatives of relevant NGOs or CSOs. A formal professional position or
qualification in ethics or privacy is not necessary: the idea is to have an
interdisciplinary group, with different backgrounds, but a common interest in
the ethical issues surrounding research with refugees and migrants. EAB
members serve in a personal capacity (not on behalf of any institution).
Membership of the EAB is voluntary.
##### 8.2.4 Appointment
EAB members will be selected by the Ethics Management Team. Membership is
voluntary and does not constitute employment or affiliation to any consortium
member. If a member resigns their position, the Ethics Management Team will
appoint a replacement as soon as possible.
##### 8.2.5 Termination of membership
An EAB member wishing to resign their position should inform the Ethics
Management Team in writing at the earliest possible moment.
##### 8.2.6 Conflict of interest
EAB members are required to declare any actual or perceived conflict of
interests to the Ethics Management Team.
##### 8.2.7 Meetings
The EAB will meet face-to-face twice during the project and will otherwise
hold virtual meetings at least every 6 months. Agendas will be provided no
less than 10 (ten) working days before commencement of the meeting. Dr Andrew
Rebera will coordinate and chair EAB meetings. A member of the Ethics
Management Team will take minutes (no other recording of the meeting will take
place). Meetings will be attended only by EAB members and the Ethics
Management Team.
##### 8.2.8 Decision-making
Any decisions or recommendations that the EAB provides (e.g. feedback to the
coordinator) will be adopted by majority vote. Note that
while the Ethics Management Team may attend EAB meetings and contribute freely
to discussion, they are not members of the board and have no voting rights:
voting rights are only enjoyed by the EAB members (the role of the Ethics
Management Team is only to serve as a bridge between the EAB and the
consortium). When consensus cannot be reached among the EAB members, decisions
will be taken by majority voting, but minority opinions will be reported in
meeting minutes. If a decision is required between meetings, agreement and
voting will be conducted via email.
##### 8.2.9 Feedback to the consortium
Minutes of EAB meetings will be prepared by the Ethics Management Team and
circulated to the attendees for approval. Opinions will be minuted without
attribution to particular EAB members (Chatham House Rule). EAB members will
be asked to provide feedback and approve the minutes within 10 (ten) working
days. The minutes will include an Executive Summary (summarising discussions
and presenting any EAB decisions or recommendations) which will be shared with
the consortium and the European Commission Project Officer.
##### 8.2.10 Travel
The EAB will be reimbursed for reasonable travel expenses incurred attending
the face-to-face meetings. All travel arrangements and expenses will be handled
by the Danish Red Cross. AND-CG will put EAB members in contact with the
relevant Danish Red Cross team members in order to arrange travel and
reimbursement.
##### 8.2.11 Responsibilities
EAB members shall:
1. Offer advice and recommendations on any ethical, fundamental rights, privacy, or societal issues reported to them by the Ethics Management Team or project coordinator.
2. Support the consortium by providing feedback concerning research ethics standards and privacy/data-management in the development of research methodologies in the project (particularly for the field work [WP3, WP4] and pilot testing [WP5] activities).
3. Review and provide advice concerning research protocols prepared by partners seeking approvals from research ethics committees (RECs) or data protection authorities (DPAs).
4. Review and, as necessary, suggest improvements to the project Ethics Management Plan and relevant sections of the project Data Management Plan.
5. Review and, as necessary, suggest improvements to any project deliverable selected by the Ethics Management Team.
6. Actively participate in EAB meetings (2 face-to-face, teleconference at least every 6 months).
7. EAB members shall not disclose to any third party any confidential information acquired in the course of their participation without the prior approval of AND-CG and/or the project coordinator. In the unlikely event that we need to share sensitive confidential information, members will be asked to sign a non-disclosure agreement.
##### 8.2.12 Powers
The EAB is entitled to:
1. Review any project deliverables, research protocols, or internal working documents.
2. Request contact with any researcher involved in the project.
3. Contact the project coordinator or European Commission project officer. (EAB decisions, recommendations, and opinions will be communicated to the coordinator, consortium and project officer via the Ethics Management Team. However, should the EAB so wish, they may get into contact with anyone involved in the project directly, via the coordinator).
##### 8.2.13 EAB members
The members of the EAB are:
* **Brigitte Lueger-Schuster** , who is an Associate Professor for Clinical Psychology at the University of Vienna. She has a background in psychology, human rights, psychosocial work with refugees, and has been involved with ethics and data protection boards.
* **Julia Muraszkiewicz** , Juris Doctor, who is a Research Manager at Trilateral
Research where she leads the team’s work on human trafficking research and
innovation, working on security, human rights, crisis, gender and privacy-
social impacts of policy and innovative solutions.
* **Mozhdeh Ghasemiyani** , who is a crisis psychologist for Doctors without Borders and the Danish Institute against Torture. Mozhdeh has specialist expertise in trauma, refugees and crises. She has worked in government, local government and NGOs in Denmark, the UK, and the US to improve the treatment of refugees, especially children.
##### 8.2.14 EAB meeting dates
**Table 8** : EAB prospective meeting dates
<table>
<tr>
<th>
**ID**
</th>
<th>
**Dates**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
1
</td>
<td>
22 Mar 2019
</td>
<td>
1st teleconference [agenda and minutes available]
</td> </tr>
<tr>
<td>
2
</td>
<td>
8-9 May 2019
</td>
<td>
1st face-to-face meeting, Zagreb (in conjunction with WP3 workshop) [agenda
and minutes available]
</td> </tr>
<tr>
<td>
3
</td>
<td>
5-9 Sep 2019
</td>
<td>
2nd teleconference
</td> </tr>
<tr>
<td>
4
</td>
<td>
2-6 Dec 2019
</td>
<td>
3rd teleconference
</td> </tr>
<tr>
<td>
5
</td>
<td>
6-10 Apr 2020
</td>
<td>
4th teleconference
</td> </tr>
<tr>
<td>
6
</td>
<td>
1-5 Jun 2020
</td>
<td>
2nd face-to-face meeting, location to be confirmed
</td> </tr>
<tr>
<td>
7
</td>
<td>
3-7 Aug 2020
</td>
<td>
5th teleconference
</td> </tr>
<tr>
<td>
8
</td>
<td>
30 Nov – 4 Dec 2020
</td>
<td>
6th teleconference
</td> </tr>
<tr>
<td>
9
</td>
<td>
5-9 Apr 2021
</td>
<td>
7th teleconference
</td> </tr>
<tr>
<td>
10
</td>
<td>
9-13 Aug 2021
</td>
<td>
8th teleconference
</td> </tr>
<tr>
<td>
11
</td>
<td>
6-10 Dec 2021
</td>
<td>
9th teleconference
</td> </tr> </table>
#### 8.3 Key ethics issues in FOCUS
This section presents research ethics challenges identified in the project.
These will be updated throughout the project lifetime as necessary. The issues
are presented in alphabetical order, not order of importance.
(Please note that the **FOCUS Research Ethics Manual** , developed for the
purposes of the fieldwork research planned to be carried out in the scope of
WP4, provides practical advice and recommendations in relation to most of the
points presented below, but is specific to the fieldwork. The Research Ethics
Manual is included as an _Annex_ to this document.)
##### 8.3.1 Confidentiality
_Challenge_
In conducting research in refugee communities, particularly when using methods
such as snowballing to recruit participants, it is essential to ensure that
all information gathered is kept fully confidential. Besides storing data
securely and data pseudonymisation techniques implemented by all research
teams, researchers should be careful not to reveal information about (or from)
one participant to another. In some circumstances, if a researcher tells a
participant that the previous participant said such-and-such it may be
possible for the participant to figure out who the previous participant was.
It is also important to ensure that all researchers understand and agree to
their responsibilities regarding confidentiality, particularly if they have
been recruited as cultural insiders, translators or interpreters, or from the
target population, in which case their own cultural confidentiality
expectations may be different (Olijiofor 2016: 6-7). Confidentiality must of
course also be respected in the publication of findings.
_Response_
In FOCUS we use experienced and/or trained researchers, who have been fully
briefed on the nature of the project, and on the nature of their
responsibilities regarding research ethics and integrity (see FOCUS Research
Ethics Manual). The briefing will be conducted by the senior researchers of
each partner conducting fieldwork or pilot-testing.
##### 8.3.2 Cross-cultural factors
_Challenge_
Europe is itself a multi-cultural society, but working with refugees from
beyond Europe makes it inevitable that cross-cultural factors will be relevant
in the project, especially in research design and research ethics. Failure to
take cultural differences into account in research design and data collection
risks undermining relationships with participants (Olijiofor et al. 2016: p.
4).
One fundamental challenge – which is quite commonly remarked in the literature
– is that in cross-cultural contexts there may be different understandings and
expectations of what is ethical (e.g. what constitutes coercion). On a more
specific level, cultural differences between researchers and research
participants may lead to misunderstandings, misperceptions, and divergent
expectations concerning informed consent, privacy and so on.
_Response_
Although all FOCUS field research teams already have extensive experience in
research with refugees, cultural insiders will, as far as is possible, be
included on the research teams. Birman (2005: 170) describes this as “the most
important strategy” in working with refugees, since it “ensure[s] that
understanding of the community and culture informs the ways in which [research
ethics] aspects of the study are designed and implemented.”
A cultural insider is someone who knows the language and culture of the target
group in virtue of being a member of that group (Birman 2005: 171-72). The
advantage of a cultural insider is that they have “inside” knowledge of the
target group. But, of course, it is not possible for anyone to have _full_
knowledge of the target group – particularly bearing in mind the issues
discussed elsewhere concerning diversity within target populations – and, as a
speaker of the research language and a member of the research team, the
cultural insider is likely to be considered an outsider by the target group to
a certain extent and in a certain sense. So, the idea of a “cultural insider”
is an abstraction and we do better to think in terms of a continuum, rather
than of a simple insider/outsider binary (Birman 2005: 172).
Cultural insiders are important team members for recruiting participants,
building good relationships, and carrying out field research. Cultural
insiders can also have an important role in research design. This is an
important way of ensuring that their insights are built into the structure of
the study and of the fieldwork. As far as possible, cultural insiders should
be academically trained researchers, as senior as possible. Given the
important and influential role of cultural insiders, it is important that the
cultural insiders in the project are themselves diverse, as far as possible
(e.g. a balance of genders, ages, ethnicities, religions, etc.).
Recruitment of cultural insiders is not always easy. In FOCUS we will strive
to meet the standards outlined above as well as possible. Inevitably, in some
cases it will not be possible to fully meet all standards perfectly. At a
minimum, we will consult cultural insiders regarding the issues mentioned
above (i.e. even if they are not full members of the teams). In all cases, we
will rely on the extensive experience of the FOCUS field research teams in
research with refugees.
##### 8.3.3 Dissemination and Communication of Findings
_Challenge_
The subjects of migration, immigration policy, and the integration and
assimilation of forced migrants and refugees are frequently divisive, often
bound up with other emotive topics such as terrorism and security, and, as
such, are of great political and societal significance. It follows that a
research project such as FOCUS, which addresses itself directly to these
subjects, must pay great attention to the way in which its findings are
disseminated, communicated, and used. Research in this area cannot be morally
neutral (Birman 2005: 155), for it inevitably feeds into ongoing morally
significant debates, not least concerning policymaking. Indeed, it is an
explicit intention of the project that its findings should inform policymaking
in the area. In FOCUS it will be essential to consider how our findings might
be interpreted or used, once they are disseminated.
_Response_
All dissemination materials will be subject to an internal review process by
the consortium before they are cleared for release. Each partner will bring
particular expertise, e.g. research partners can verify scientific quality and
the Ethics Management Team can verify compliance with research ethics and
integrity standards. The consortium is experienced enough to be able to
determine whether a publication is suitable for release. In cases of doubt,
the Advisory Board and the EAB will provide independent advice.
##### 8.3.4 Flexibility and “Learning-as-we-go”
_Challenge_
It is recognised in the literature that, especially in cross-cultural
contexts, no one set of ethical guidelines can ever be a perfect guide to
best-practice (see e.g. Birman 2005: 175; Jacobsen & Landau 2003: sec. V;
Olijiofor 2016: 7). It is important to think deeply and continually about how
to ensure the highest standards of research ethics.
_Response_
We recognise that not every single issue of research ethics and privacy can be
foreseen before the work begins. We have therefore designed the ethics
management structure to ensure that research ethics is constantly monitored
and that there are clear communication channels between the Ethics Manager,
the Ethics Advisory Board, the coordinator, and the partners. This is in
addition to a project-wide commitment to being flexible about how ethical
considerations may influence other tasks, and to “learning-as-we-go”, i.e.
periodically reflecting on our performance, on challenges that have been
encountered, and on how we can be better at being better. While best-practice
can be set out in broad terms, flexibility in how the research is conducted is
both necessary and inevitable.
The FOCUS consortium has put a lot of thought into how research ethics, with
particular sensitivity to the challenges of working with refugees, can be
built into the project (particularly with respect to field work: see WP3 and
WP4). For example, the WP3 methodological workshop, held in Zagreb in May
2019, set out the principal requirements and procedural steps in relation to
the fieldwork, which enabled the Ethics Management Team and the Ethics
Advisory Board to provide targeted feedback and produce key outputs for
partners involved in the research. Since then, the ‘Interviewer Manual’ and
‘Training Manual’ have been developed by research partners following the WP3
workshop, which provide more information on procedural aspects of the research
with refugees and host community members. These further inform the research
ethics feedback that the Ethics Management team can provide. The process is an
ongoing one and that is as it should be.
##### 8.3.5 Incentives to Participate
_Challenge_
In conducting research with refugees, there is likely to be a power
differential between researchers and research participants. Refugees may have
an uncertain legal status, they may have reduced rights or opportunities, they
may be in a difficult economic position, and they may be uncertain of the
rights and obligations of other actors (organisations, public bodies, etc.).
This may, on the one hand, disincentivise them from participating in research, due
to suspicion or mistrust of those conducting or funding the research. On the
other hand, it may lead to misunderstandings of the voluntary nature of
participation. They may feel obliged to participate, or that participation is
somehow linked to other institutional processes they are engaged in (Krause
2017) (such as citizenship applications), or that research interviews are
somehow connected with other interviews that concerned their legal status
(Ellis et al. 2007: 466), or that it is “the done thing” to participate, or
that it makes good sense to participate because it will (they suppose) support
their integration in various concrete ways (e.g. that it will make them more
attractive to employers, etc.).
Moreover, when participants are recruited through gatekeepers or by
snowballing, it can be difficult to fully ascertain people’s reasons for
participating. Researchers may be only dimly – or not at all – aware of the
power relationships between participants and the people or organisations that
introduce them to the project. It should not be taken for granted that those
who introduce new participants to the project understand the research process
in the same way as the project researchers (Olijiofor 2016: 6).
If financial or other incentives or reimbursements are provided these should
be provided equally to all participants in a transparent way. The procedure
for receiving the payment or reimbursement should be made clear to
participants in advance and should be carefully thought through. For example,
proposed payments direct to a bank account may be problematic if individuals
or groups have reduced or no access to banking services; or, if participants
are required to provide a social security number, this may exclude those who
either do not have one or who prefer not to give it.
_Response_
It will be made clear to potential participants, more than once, that their
involvement with the project is in no way whatsoever connected with their
legal status, their economic prospects, etc. That is to say, the overwhelming
incentive to participate is simply to support the research and its aims. This
will be made clear to all potential participants, whose contribution to
research will also be acknowledged in partners’ publications and dissemination
activities. In this context, stakeholders and project partners from all
participating countries benefit equally.
Financial incentives to participate will be clearly stated in the research
design and will be no more than 20€ or equivalent (e.g. shopping coupons). In
the case of focus group sessions, information about monetary reimbursements
for participation in a session is included in both the Letter of Invitation
and the Information Sheet. It is our intention that there be as little
variation as possible in methodology and other factors (e.g. incentives)
across research sites. In practice, complete uniformity is unlikely to be
possible. However, we will strive
to ensure, as much as possible, that the relative value of incentives provided
is constant across sites (e.g. relative to local cost of living).
##### 8.3.6 Informed Consent
_Challenge_
Informed consent procedures are an essential component of responsible
research. Properly designed and implemented, an informed consent procedure
ensures that potential research participants can take the decision to
participate (or not to) on the basis of a sound understanding of: the aims and
scope of the project; the team(s) conducting it and the organisation(s)
funding it; the aims, scope, and specific details of the particular research
activity to which they are invited to contribute; and the risks, benefits, and
any other relevant and likely consequences, of participating.
There are a number of benefits to be gained by implementing a good informed
consent procedure. These include benefits to participants, such as protecting
their rights, and enabling them to identify breaches of or threats to research
ethics standards connected with their participation; and benefits to the
research, such as developing strong and trusting relationships with
participants, potentially leading to greater engagement and higher quality
contributions. So there are many incentives to get informed consent right.
A well-known concern is that informed consent forms can be long, overly
technical, use specialist terms or jargon, and can have a legalistic “small
print” style. This can have the effect of either discouraging participation or
of undermining a participant’s understanding of what they are signing up to
(which is unfair, undermines their autonomy, and sacrifices all the benefits
of a good informed consent procedure). A further shortcoming of poorly
written, legalistic or jargon-filled informed consent forms is that they may
prove difficult to translate effectively – which is to say, in a manner that
is both faithful to the original text but also easily comprehensible to
readers (Birman 2005: 166).
Refugees may be or feel in a precarious legal or immigration position. As
such, they may be willing to participate in the research on an anonymous
basis, but unwilling to sign informed consent forms. It is important that an
informed consent procedure does not obstruct participation. Accordingly, a
flexible approach is required. In cases such as those mentioned above (but
also others, e.g. when a participant has limited literacy) consent can be
provided orally (Olijiofor 2016, 5; Ellis et al. 2007: 467-69). The same
information as is provided on the informed consent form and information sheet
should be communicated orally to the participant (and the same points about
comprehensibility apply). The participant should provide an explicit
indication of consent. If the participant agrees, this process should be
recorded (video or audio). If the participant does not agree, or if this is
not possible, then the process should ideally be witnessed by a second member
of the research team. If there is no second member in the research team, then
it is the researcher’s responsibility to ensure that informed consent
principles apply in practice, including participants’ right to withdraw from
the research and have their data removed at any point. Researchers should keep
records, for all participants, whether fully anonymous or not, of how informed
consent was obtained and what records (forms, videos, etc.) are stored.
Informed consent is a process, not an event. This is most obviously reflected
in the fact that consent is always revocable, i.e. a participant can withdraw
themselves and their data from the study at any time, for any reason. (In
cases of anonymous participation, withdrawal of data may not be possible.) It
also implies that researchers should be alert to signs of distress or
discomfort, and should be sensitive to the needs of participants. It may take
time and dialogue for a participant to agree to take part, and they may need
reassurance and open discussion of concerns after having agreed in order that
they feel comfortable to continue. The interests of the participant always
come before the interests of the work and so it is imperative that researchers
set aside sufficient time to collect informed consent in fair and effective
ways.
_Response_
In FOCUS we will develop a flexible approach to informed consent which is
consistent with both the requirement to clearly record consent and the
requirement to minimise the processing of personal data. In the qualitative
data collection process in WP4, our intention is to collect as little personal
data from participants as possible. The research partners will apply a
pseudonymisation technique to ensure that survey respondents are to almost all
intents and purposes anonymous, but nonetheless retain the possibility to have
their data withdrawn at any time in the research.
Informed consent forms will bear a unique code, corresponding to the same code
on the participant’s questionnaire. This enables the data to be processed
pseudonymously from the point of view of the party collecting the data and
anonymously from the perspective of everyone else. The informed consent forms
will be kept separately from the questionnaires (to minimise the risk of the
two becoming somehow associated).
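As a purely illustrative sketch of how such unique codes might be issued (the names below are hypothetical, not the partners' actual procedure), each participant receives one random, unguessable code that is written on both documents, and no table linking codes to names is ever created:

```python
# Hypothetical sketch: issue one random, collision-free code per
# participant, to be written on both the consent form and the
# questionnaire; no mapping from codes to names is ever stored.
import secrets


def new_participant_code(issued: set[str]) -> str:
    """Generate a short, unguessable code not issued before."""
    while True:
        code = secrets.token_hex(4).upper()  # e.g. '9F2A11C3'
        if code not in issued:
            issued.add(code)
            return code


issued: set[str] = set()
code = new_participant_code(issued)  # printed on both paper documents
```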
Participants in focus groups will provide written informed consent (we need
their contact details in order to organise the workshops, hence obtaining
written informed consent requires gathering no additional data).
The Ethics Management Team has provided feedback and recommendations in the
development of the Information Sheets (and Invitation Letter, in the case of
focus groups). In addition, the Ethics Management Team produced two types of
Informed Consent Forms: a) for interview surveys, and b) for focus groups.
Consent will be granular, meaning
that consent to specific aspects of the research (particularly the data
processing aspects) will be distinct from other acts of consent.
##### 8.3.7 Language and Translation
_Challenge_
Translation – of informed consent forms, information sheets, data collection
tools (e.g. questionnaires) and results – is a difficult but very important
matter. It is obviously important that translations are as accurate as
possible.
Translation of certain terms is likely to be problematic, either because they
are relatively technical (and so the problem is to render them in terms that
preserve the meaning – assuming that a single common meaning is agreed upon
within the consortium – while also being understandable to non-experts), or
because some phrases or terms have cultural connotations that are absent in
either the source or target language, or because – if interpreters are used –
they inadvertently introduce biases into the data (Olijiofor et al. 2016: 4).
_Response_
Data collection in fieldwork will, as necessary, be conducted in the language
of the participants. In EU countries where research will be conducted, there
will be two versions of the questionnaires: one provided in the local
language, and the other in Arabic. Back-translation will be used, whereby the
source document is translated from English into the target language and then
the translated document is translated back into English by a different
translator, and the two English versions are compared to ensure that the sense
has been adequately preserved through the process (Jacobsen & Landau 2003;
Bloch 2004: 145-46). To address issues of unexpected or unintended
cultural connotations, translated materials will be reviewed with cultural
insiders or members of the target group (e.g. community leaders).
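Schematically, the back-translation check works as sketched below (the translation steps stand in for two independent human translators; only the comparison is automated, and all names are hypothetical):

```python
# Schematic sketch of the back-translation check described above. The two
# translation callables are placeholders for independent human translators;
# a low similarity ratio simply flags a passage for human review.
import difflib


def back_translation_check(source_en: str, to_target, to_english) -> float:
    """Translate out and back, then score how well the sense survived."""
    target_version = to_target(source_en)    # translator 1: English -> target
    round_trip = to_english(target_version)  # translator 2: target -> English
    return difflib.SequenceMatcher(None, source_en, round_trip).ratio()
```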
##### 8.3.8 Privacy and Data Protection
_Challenge_
The rights of participants with respect to data protection are set out in
legislation, most notably the EU General Data Protection Regulation (the
GDPR). The GDPR mandates 7 basic principles relating to the processing of
personal data:
* _**lawfulness, fairness and transparency** _ : personal data shall be processed lawfully, fairly and in a transparent manner.
* _**purpose limitation** _ : personal data shall be collected for specified, explicit, and legitimate purposes, and not further processed in a manner incompatible with those purposes.
* _**data minimisation** _ : personal data shall be adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed.
* _**accuracy** _ : personal data shall be accurate and kept up to date; reasonable steps must be taken to ensure that inaccurate data are erased or rectified without delay.
* _**storage limitation** _ : personal data shall be kept in a form which permits identification of data subjects for no longer than is necessary.
* _**integrity and confidentiality** _ : personal data shall be processed securely, including protection against unauthorised or unlawful processing, accidental loss, destruction or damage, using appropriate technical or organisational measures.
* _**accountability** _ : the controller shall be responsible for, and be able to demonstrate compliance with, the above principles.
_Response_
The above basic principles form the basis of the approach adopted in FOCUS.
Data minimisation was reviewed in WP3 during research design. This is
reflected in the DPIA reported in _section 7.1_ above. In general, each
partner is responsible for justifying to the consortium each form of personal
data they propose to collect. The Statement of Compliance with the Principle
of Data Minimisation – endorsed by all consortium partners – describes the
means by which we ensure that any data collected and processed in FOCUS is
relevant and limited to the purposes of the project:
###### Statement of Compliance with the Principle of Data Minimisation
_“The processing of personal data is essential to the fieldwork conducted in
FOCUS. The consortium undertakes to ensure that the minimum amount of data is
processed. To ensure this, a standardised methodology is developed in WP3 to
ensure that all partners collect the same type of personal data by means of a
single instrument, to be developed and used across the different countries
where research takes place. This information will be collected in the Project
Data Management Plan. The Ethics Manager will review these categories and,
with the support of the WP3 work package leader, will ensure that there is
adequate justification for gathering that data. Any categories of data that
cannot be shown to be genuinely necessary for the scientific integrity of the
fieldwork will be rejected: researchers will be instructed to not collect that
data (and research plans adjusted if/as necessary). In case of any doubt or
debate, the Ethical Advisory Board will be called upon to provide a final
decision.”_
As a consortium, we have stipulated that the standards mandated by the GDPR
will be applied in relation to fieldwork in Jordan by non-EU data controllers
even if it is not legally required (CSS reports that it has its own data
protection software and independent offline servers). FFZG, as lead of
research design, confirms that any research activities carried out in Jordan
regarding methodological aspects, data anonymisation/pseudonymisation
techniques, data collection and conduct with the participants, data management
and data protections will be the same as in other countries. No research will
be conducted in Jordan that would not be conducted in the EU country within
exactly the same research framework. We have reviewed data protection
practices in Jordan to determine whether they impose restrictions stronger than
those in the GDPR (they do not).
It is important to note that there will be no transfer of personal data
between consortium partners and therefore – to be quite specific on this point
– no transfer of personal data to or from Jordan. Any data to be transferred
to Jordan for the purposes of between-country analyses will be in aggregated
form only, and effectively anonymous at that stage (reidentification would
only be possible for the party that collected the data).
All research partners have agreed to use a standard pseudonymisation technique
by which each survey participant will be ascribed a numerical code indicating
only the serial number of the completed questionnaire (i.e. from 001 to 999)
at each study site. Data such as the name or address of the participant will
not be recorded on the questionnaire. The questionnaire will not collect any
data that could, in itself, be linked to identifiable individuals. The name of
participants will only appear in signed copies of informed consent forms,
which will be kept separate at all times from the questionnaires.
Participants in the focus groups will not be addressed by their (real) names
if they so choose. The records with their names, which are needed only to
establish contact, will be kept separately from the audio recordings and the
transcripts, in a locked cabinet. As with the survey interviews, each
participant will be ascribed a unique code which will be used in the
transcripts of the focus groups (pseudonymisation). Data in the transcripts
that could lead to identification of an individual will be deleted prior to
data processing. Audio recordings will be destroyed immediately after
producing the transcripts and a protocol testifying this will be kept by the
senior researcher at each fieldwork site.
A strict data minimisation policy will be adopted in the project so that no
personal data that is not strictly necessary is gathered from participants.
The personal data collected via the informed consent form includes the
participant’s name and surname. The Letter of Invitation designed for the
recruitment of focus group participants also requires participants to indicate
their profession and to provide an email address or phone number for contact
on organisational matters regarding the focus group session. As stated above,
any such data are kept separately from
participants’ responses as part of the fieldwork. Whenever participants are
requested to submit personal data, they will be informed of what data is
collected, how it is stored and processed, and by whom.
A small amount of sensitive personal data is collected during the survey
concerning racial or ethnic origin, religious and political beliefs, and
health. This data has been agreed as necessary by experts in the consortium.
It is scrupulously minimised. Participants will be explicitly informed of this
data collection before consenting. This is addressed in the DPIA in _section
7.1_ above.
Participants’ data will not be mined or used for any purposes other than those
explicitly stated and clearly necessary for the relevant activity within
FOCUS. We will comply with the general principles listed above, as well as the
specific requirements of the GDPR, as set out in its later articles and
recitals.
##### 8.3.9 Research Design
_Challenge_
It is a basic principle of research ethics that the risks to participants must
be outweighed by the benefits (to the participants and/or to others) of the
likely outcomes of the research (this formulation is rather too simplistic,
but for present purposes the point holds). The most obvious implication of
this is that research which involves very serious risks to participants should
be likely to have very significant benefits. In designing their research,
therefore, researchers must minimise risks to participants. However, it also
follows from the basic principle that researchers should, in designing their
research, aim to maximise its positive and valuable outcomes. In practice,
maximising positive outcomes and minimising risks to participants is a
balancing act.
That a balance must be struck should not be taken to imply that research
ethics and effective research design are in tension, or that there is a trade-
off to be made: good practice in research ethics promotes effective research
design (Obijiofor et al. 2016; Ellis et al. 2007: 462). For example, a
carefully designed informed consent procedure will build trust between
researchers and participants, making it more likely that participants provide
candid and full responses to researchers’ questions. Or again, careful
attention to diversity and hierarchies of power and social capital within a
target population is good practice in research ethics, but it also makes it
more likely that the data gathered will accurately reflect the diversity of
perspectives in that population. Looking at it from the other side, carefully
designed research methods promote ethics. A number of factors are noted by
Jacobsen & Landau (2003: sec. III), including:
* _**sampling** _ : that the sample should be genuinely representative;
* _**construct validity** _ : that what is measured should be appropriately linked to what it is assumed to show;
* _**objectivity** _ : that researchers’ subjective biases do not intrude into data collection or interpretation;
* _**reactivity** _ : that a researcher’s presence does not influence participants’ responses;
* _**translation** _ : that inaccuracies and biases do not contaminate the data; and:
* _**lack of use of control groups** _ : control groups are often omitted in social science research involving refugees.
Similar requirements, which are not necessarily tailored to refugees, are
identified by a number of other authors – Emanuel et al. (2000), for instance.
Ellis et al. (2007: 462-71) developed these requirements, shifting the focus
onto refugees. They identify the following important factors:
* _**social or scientific value** _ : that the research contributes to increasing knowledge and that the findings be disseminated;
* _**scientific validity** _ : that the research should be methodologically sound, especially given the challenges of cross-cultural contexts;
* _**fair subject selection** _ : that subject selection is driven by the requirements of the research question, not by the ease or difficulty of accessing participants;
* _**favourable risk/benefit ratio** _ : that potential benefits to participants
(individually and collectively) outweigh potential risks;
* _**independent review** _ : that research proposals are reviewed by, e.g., funding bodies, RECs, etc., and by members of the refugee community;
* _**informed consent** _ : that participation is voluntary and on the basis of an adequate understanding of the research project and what is involved in taking part;
* _**respect for potential and enrolled participants** _ : that researchers address the power differential between them and the participants, e.g. by involving cultural insiders in research design.
Research design and research ethics should, therefore, be developed in tandem.
It is also important that research participants are shown respect as persons,
and not objectified or treated as “mere research subjects”. This kind of
respect can be built into the research design by, for example, ensuring that
participants are given time and opportunity to speak their minds freely (and
possibly even on matters that go beyond the research questions), rather than
simply being “mined” for their data (Krause 2017: 20).
_Response_
The Ethics Management Team has contributed significantly to the process by
reviewing materials developed in the scope of WP3, and has involved the
Ethics Advisory Board by having all members attend the WP3 Methodological
Workshop held in Zagreb in May 2019, providing structured feedback and
opinions on several aspects of the research. It should however be
noted that the academic partners responsible for research design and
implementation are senior and highly respected professionals, with extensive
experience of this kind of research design with refugees and other vulnerable
groups.
More information on basic inclusion/exclusion criteria and procedures to be
implemented for the identification and recruitment of research participants
is provided in _section 8.3.14_ (“Selection and recruitment of participants”)
below.
##### 8.3.10 Research Ethics Training
_Challenge_
Any recruited researchers – including interviewers, translators, interpreters,
and anyone else involved in, e.g., identification of participants – should be
given some project-specific training in research ethics issues (cf. Obijiofor
et al. 2016: 8).
_Response_
Each partner will ensure that all researchers and associated people conducting
the research are experienced and/or well-trained in both the scientific and
ethical aspects of their tasks. The Ethics Management Team has developed the
**FOCUS Research Ethics Manual** (see _Annex_ below) which aims to provide
practical guidance to fieldwork interviewers on ethics-related aspects of the
research.
##### 8.3.11 Research integrity
_Challenge_
Research integrity is a central responsibility of all researchers. Some
authors have raised concerns in this field about “advocacy research”, i.e.
“where a researcher already knows what she wants to see and say and comes away
having ‘proved’ it” (Jacobsen & Landau 2003: 187). This is typically well-
meaning, but it undermines the quality of academic and societal debate, and
possibly skews policymaking.
_Response_
In addition to respecting the fundamental principles of research integrity
(see The European Code of Conduct for Research Integrity, ALLEA 2017), we
acknowledge the concerns mentioned above. To mitigate this risk, in FOCUS we
will develop research methodologies in a transparent way (e.g. the research
design is coordinated separately from the research implementation), will
encourage critical discussion of the methodology and of the findings and how
to interpret them, and will ensure that results are effectively communicated
alongside clear and open explanations of how the data was gathered and
interpreted.
##### 8.3.12 Rights of Participants
_Challenge_
The rights of participants with respect to data protection are discussed in
_section 8.3.8_ above. Besides privacy, all other standard rights of research
participants will be respected. These include:
* that participation shall be voluntary;
* that participants shall be clearly and adequately informed of the purpose of the research, what it involves, and how the findings will be used;
* that there is no undercover data collection or use of deceptive practices;
* that participants may withdraw themselves and their data (if possible) from the project at any time and for any reason;
* that participants shall be respected as persons, not merely “research subjects”.
_Response_
The Ethics Management Team has worked in WP3 to ensure that respect for all
such rights is built into the research design. We will also be developing an
“ethics validation procedure”, which will enable us to monitor whether the
rights have been respected in practice. More specifically:
* _**Voluntariness** _ . Participation in any research activity in FOCUS will be entirely voluntary. Researchers will be alert to signs of coercion, especially when participants are introduced to the project through snowballing or through their employer.
* _**Informed consent** _ . Participants who cannot give genuinely informed consent (e.g.
minors) will not be included in the research. All related materials (e.g.
informed consent forms) will be provided in a language with which the
participant is comfortable and can fully understand (where this cannot be
confirmed, the participant will be excluded from the study).
* _**Follow-ups** _ . Participants will be provided the name and contact details of a member of the research team whom they can contact at any stage after the research (e.g. in case of complaint or a request to withdraw data).
* _**Withdrawal** _ . Participants retain the right to withdraw themselves and their data (if possible) at any time for any reason. They will be informed how to indicate this intention.
* _**Risks/benefits of participation** _ . Participants will be briefed as to the possible risks or benefits of participation. Participants will not be placed in any situation in which there is a likelihood of physical or psychological harm. There may be a risk of recalling mentally or emotionally stressful events, but no larger than occurs in their everyday discussions with peers. A benefit of participating is that one supports the research and its goals. If any form of reimbursement is provided, this will be
provided to all participants and with as much equivalence across research
sites as possible (see _section 8.3.5_ ).
* _**Cultural, religious, other issues** _ . Possible cultural, religious, or other issues (e.g. kinds of food provided at workshops) will be identified in advance and measures will be taken in order to avoid any offence or embarrassment.
* _**Respect for participants** _ . Participants will be shown respect as persons, not merely “research subjects” by inviting them to provide feedback (if they wish) on their experience in the research activity to which they contributed. This will be achieved by informal conversations with researchers and/or feedback forms provided to them (some may prefer to talk, some to provide anonymous feedback). The forms will include a freeform open section in which the participant can say whatever they like about their experience. The forms and feedback will be anonymous (though the participant can add their name and contact details if they wish) and entirely voluntary (not obligatory at all).
##### 8.3.13 Security of Researchers
_Challenge_
Researchers conducting fieldwork should be informed of any risks involved.
_Response_
It is not expected that researchers conducting fieldwork will be at any
elevated risk. This will, however, be verified prior to commencement of the
fieldwork. The likelihood of researchers encountering difficult or upsetting
stories will also be assessed and a support framework will be assured by the
individual partners responsible.
##### 8.3.14 Selection and Recruitment of Participants
_Challenge_
It should not be assumed that identifying forced migrant and refugee
populations will be simple (Jacobsen & Landau 2003). Identification of
potential research participants in FOCUS will be a challenge. Strategies to be
employed include snowballing techniques and access to registries. Such methods
are not perfect. In terms of effectiveness, people of uncertain or precarious
immigration status may be reluctant to take part in research studies, fearing
that it may bring them to the attention of the authorities, regardless of who
introduced them to the project; and in terms of quality, such methods are
somewhat susceptible to selection bias and concomitant danger in drawing
generalisations (Olijiofor 2016: 14). In addition to the methodological risks,
snowballing carries the ethical risk of potentially harmful information being
revealed within a participant’s social network. As Jacobsen & Landau (2003:
sec. III) point out, “simply informing a respondent how you obtained a name or
contact information demonstrates a particular kind of link” .
Moreover, determining the target populations within the broader grouping
“forced migrants and refugees” increases in difficulty as the diversity within
the broader group increases. This also raises an issue concerning sample size
and makeup. It is, generally, desirable to collect large samples. To be of the
most value, these samples must be relatively homogenous; but not only can it
be difficult to identify suitable populations, it can also be difficult to
collect samples that represent (at a suitable size) the different subgroups
within the target population. This can lead to minority groups being either
ignored or subsumed into majority groups (Birman 2005: 161; Ellis et al. 2007:
464), or to results which are questionably representative and which do not
allow comparative studies across groups (Jacobsen & Landau 2003: sec. III). It
should be further noted that if (as is not uncommon) there is a dearth of
reliable statistics on, for example, the ethnic or religious makeup of the
population, it will be difficult to conclusively determine whether steps taken
to ensure a balanced and representative sampling have been successful. These
problems have an impact on the reliability of the research findings – which is
particularly problematic when the research is intended to inform policymaking
in a critical area (Birman 2005: 163).
_Response_
It should be noted that such problems will afflict any study of this kind to
some extent. Our approach in FOCUS, which draws on the recommendations of
Birman (2005: 163-4) is to openly acknowledge the problem, to address it as
far as possible in WP3 and WP4, and to carefully record – and include in the
reporting of our findings – the limitations of the research methods. To
address the limitations of snowballing, we will attempt to ensure use of
multiple starting points (Bloch 2004: 149).
We do not underestimate the seriousness of this challenge. As stated
elsewhere, FOCUS is fortunate in benefitting from the involvement, at senior
project positions in research design and implementation, of highly respected
professionals, with extensive experience of this kind of research design with
refugees and other vulnerable groups. We will rely on these colleagues’
experience and expertise, alongside input from the Advisory Board and EAB, to
provide high quality assurances of effective, methodologically and ethically
sound selection and recruitment of participants.
Below, we provide an account of the procedures and criteria that will be used
for identification and recruitment of research participants for the conduct of
surveys and focus groups in the scope of WP4. The following information is
taken from the Training Manual developed by WP3 leaders FFZG.
The four study sites will include Germany, Sweden, Croatia and Jordan, focusing
on communities with high concentration and number of refugees. The survey
target groups include host community members and refugees from Syria living in
the respective communities.
The target group of refugees from Syria is described as forced migrants from
Syria who have been recognized as refugees by UNHCR from 2011 onward in
Jordan, or have received the international protection status (asylum) from
2015 onward for European countries, and have been living in respective host
communities from the point of receiving this status to date. The criteria of
different years of being recognized as a refugee (in Jordan) or receiving
asylum (in Europe) was chosen since the peak of influx of refugees from Syria
to Jordan was in 2013, but the refugees from Syria started arriving in greater
numbers in 2011/2012. The European Union experienced massive increases in
influx of refugees in 2015. The inclusion criteria for assent to the study
are:
* Age – respondents between 18 and 65 years.
* Refugee/asylum status – respondents who have received the decision regarding their status; those whose refugee/asylum application was rejected do not qualify for the study.
* Year of receiving refugee status – respondents who received their refugee/asylum status after 2015 (2011 in Jordan) qualify for the study. In Jordan the applicable criteria for acknowledging the refugee status will be used.
* Not living in a camp/shared accommodation for refugees – respondents who live in a camp or shared accommodation for refugees do not qualify for the study.
Host community members are defined as persons who have citizenship or
permanent residency in the respective European country and have been living in
the same host community for at least 7 years (at least since 2013). The
criterion of length of stay in the same community has been chosen as a sum of
two years prior to the beginning of the migration wave from Syria to Europe
and the number of years passed since, making a total of 7 years. For Jordan,
the host community members are defined as Jordanians, as in Jordan foreigners
cannot receive citizenship or permanent residence. It is important that the
survey participants in the host communities are long-residing individuals in a
respective community to have been able to develop profound experience of
living in and attachment to the community. The inclusion criteria for assent
to the study are:
* Age – participants between 18 and 65 years.
* Number of years living in the respective country – participants living in the host community for more than 7 years.
* Citizenship or residence – participants who have country citizenship or permanent residence.
**Sampling host community participants**
Survey of host community members will use two probabilistic sampling
techniques to select the participants. Due to specific differences among the
four study sites regarding access to registers of host community members, the
Random Walk Technique (RWT) will be used in Germany, Jordan and Croatia. In
Sweden, citizen registries will be used for randomised selection of
participants and the validated interviewing procedures will be followed as in
other similar population based studies in Sweden.
In the selected target areas (regions, cities) the size of the sample will be
proportional to the population of that target area (region, city), and
participants will be selected by probability sampling which will ensure that
the sample structure reflects the areas’ population characteristics based on
available statistics, such as the total male and female population in the 18
to 65 age group.
**Sampling refugee participants**
The sampling design for the refugee survey will aim at achieving heterogeneity
to reflect the refugee population parameters, but true probabilistic sampling
is not expected at all study sites. RWT sampling of refugee respondents will
be used if possible in Jordan, while random sampling of refugees based on
registries will be used in Sweden. In Germany and Croatia refugee respondents
will be approached through NGOs that maintain contact with them and if needed
with advertisements and invitations to participate in the study that will be
placed and published at locations frequented by refugees from Syria.
During the initial contact with potential refugee participants the Information
Letter about the study and invitation to participate will be distributed
through the NGO channels. If they are willing to participate, they will send a
message through the NGO intermediary and will then be contacted.
In order to minimise the potential self-selection and other referral biases,
in each area (region, city) at least five different entry points into the
target population (i.e. NGOs, locations for placing the advertisements and
invitations to participate in the study) will be used.
Data collection will be conducted in a comparable way across countries using
the standard and validated procedures, such as computer assisted personal
interviewing (CAPI) or face-to-face paper-and-pencil interviews in the language
preferred by the participants, using the same questionnaire, and in all cases
carried out by trained staff.
Participants in the qualitative part of the study will be recruited into 4 to
5 focus groups of key informants among the host and refugee community members
in the same cities where the quantitative survey will be done. Both host and
refugee participants will be identified among the general population using
different information channels and reaching out to, for example, schools, work
places, welfare services, job services and other locations where the potential
participants will be approached to assent to the study. The focus group
participants will be modestly reimbursed for their effort. The key informants
will be defined as individuals (both women and men, between 18 and 65 years of
age), who have been living in the respective community at least seven years,
are aware of the presence of refugees living in the community, and are able to
articulate their experiences and views. The principle of maximal heterogeneity
regarding age, education level and gender will guide the composition of the
focus groups. The focus groups will be held in the mother tongue
of the participants.
**Quality assurance during data collection**
While gathering data, the interviewers will maintain a separate “survey log”
in paper format for each completed and attempted interview. In this log
they will note the address, time, date and outcome of each completed or
attempted interview, whether original or replacement household.
At the end of the interview, the participants will be asked if they agree to
be contacted by the survey supervisor for the purpose of monitoring the work
of the interviewers. If the participant agrees, his/her phone number will be
written in the specific follow-up table together with the participant’s
personal code. This will enable the survey supervisor to verify about 10% of
the completed interviews per each interviewer. The telephone numbers will be
randomly selected among the participants who have agreed to be called back. If
selected for the follow-up call, the supervisor will ask the participant if
he/she was interviewed during the previous three days at home (or in case of
refugee participants possibly at other locations) by means of a tablet about
the integration of host community members and refugees. The supervisor will
not be able to identify the individual participant.
In case of irregularities, the personal code will serve to delete this
participant’s data. In such a case, all other interviews done by the same
interviewer will also be deleted. Such an interviewer will be immediately
dismissed and other interviewers will collect data from the replacement
households and participants.
The survey logs will be kept separate from the participants’ responses, which
will be entered into the tablet computer during the interview; in no way will
the logs be linked to the data of an individual participant.
To avoid interviewer bias, none of the interviewers will interview more than
15% of the sample, i.e. a maximum of 90 participants from at least nine
sampling points.
##### 8.3.15 Vulnerability
_Challenge_
Forced migrants and refugees may often face discrimination by virtue of their
status as immigrants. They may also face other forms of discrimination that
exist in their host society (in terms, e.g., of race, gender, religion,
disability, poverty, etc.). Moreover, immigrants from any given region
themselves have diverse backgrounds in terms of ethnicity, socioeconomic
status, religion, and many other factors. This means that an individual may be
vulnerable to discrimination within that group. Research design – particularly
the selection of participants and research assistants – should, as far as
possible, take account of this diversity and potential for discrimination.
This does not necessarily mean that the selected participants should
thoroughly reflect the make-up of the groups studied – although in some cases
it may – but it implies that the problem of gathering appropriate reliable
data from an appropriate range of the target population should be
acknowledged, carefully considered, and effectively addressed.
Refugees are more likely than most others to have experienced traumatic
events. It should be established whether and to what extent the research
activities in which they are asked to participate are likely to cause them
suffering or, at least, to revisit difficult experiences (Krause 2017: 4). It
should also be mentioned that participants in trauma-related research can
benefit from the experience (Ellis et al. 2007: 465). So the risk of
participants revisiting difficult experiences is something to be assessed and
reflected upon, rather than an automatic barrier to research.
_Response_
The Research Ethics Manual (see _Annex_ ) contains information on ethical
factors to consider when working with refugees from Syria. It is, of course,
important to be realistic about what can be offered (Krause 2017: 24-5):
offering to follow up with participants and then not doing it may be harmful
(as well as disrespectful). Hence the same opportunities will be offered to
all participants, regardless of their location, which includes at minimum
providing the contact information of the person available to offer counselling
services at each study site.
At the same time, the consortium will be careful to ensure that participants
are not subject to a sort of condescending paternalistic attitude (Ellis et
al. 2007: 471). As mentioned elsewhere (see _section 8.3.5_ on incentives to
participate), it is important that researchers are sensitive to the power
differentials between them and the research participants, balancing the duty
to protect the interests of participants with respect for their dignity and
autonomy and, as far as possible, promoting a reciprocal approach (Krause
2017: 15) whereby participants also gain from their participation in some
manner (e.g. pride in having supported research that aims to foster better
policymaking).
##### 8.3.16 Incidental Findings
_Challenge_
Incidental findings may arise in different research contexts, and specifically
in human participant research; they involve the collection of data beyond the
aims and scope of the study. It is important to establish appropriate
procedures to handle any such findings and to minimise the risk of their
occurrence in the context of the fieldwork planned in the scope of this
project.
_Response_
We will take all possible steps to minimise the risk of incidental findings,
and to this end we have produced a statement of FOCUS’ Incidental Findings
Policy.
**Incidental Findings Policy**
Before fieldwork commences, each partner conducting fieldwork will establish
whether data that could be inadvertently collected is likely to raise any
ethical or legal issues (e.g. if it concerns criminality, legally grey or
questionable issues, or urgent health issues). We will seek advice from the
Ethical Advisory Board and experts from the consortium to determine the best
policy for dealing with such incidental findings.
The blanket policy for incidental findings that do not raise any of the issues
mentioned above is that they will be immediately deleted. Decisions to delete
incidental data must be approved by a senior researcher in the project.
Whenever incidental findings arise, the researcher must report this to the
task or WP leader and to the Ethics Manager. This is because it is necessary
to establish why the incidental findings arose. If it is due to research
design, then the methodology will be adjusted to prevent future occurrences.
The task or WP leader will maintain records of anonymised cases of incidental
findings and how they were addressed.
### 9\. Bibliography
ALLEA. (2017). _The European Code of Conduct for Research Integrity_ , Berlin:
ALLEA.
Beauchamp, T.L. & Childress, J.F. (2001). _Principles of Biomedical Ethics_ ,
fifth edition, Oxford: Oxford University Press.
Birman, D. (2005). Ethical issues in research with immigrants and refugees, in
J. Trimble, C. Fisher (eds), _Handbook of Ethical Research with Ethnocultural
Populations and Communities_ (SAGE Publications Inc.), pp. 155-177.
Bloch, A. (2004). Survey research with refugees, _Policy Studies_ , 25(2),
139-151.
Carswell, K., Blackburn, P., & Barker, C. (2011). The relationship between
trauma, postmigration problems and the psychological well-being of refugees
and asylum seekers. _International Journal of Social Psychiatry_ , 57(2),
107-119.
Ellis, B.H. et al. (2007). Ethical research in refugee communities and the use
of community participatory methods, _Transcultural Psychiatry_ , 44(3),
459-481.
Emanuel, E.J., Wendler, D., & Grady, C. (2000). What makes clinical research
ethical? _Journal of the American Medical Association_ , 283(20), 2701-2711.
EU/GDPR. (2016). Regulation (EU) 2016/679 of the European Parliament and of
the Council of 27 April 2016 on the protection of natural persons with regard
to the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation), _Official
Journal of the European Union_ , 59, L119.
Jacobsen, K. & Landau, L.B. (2003). The dual imperative in refugee research:
some methodological and ethical considerations in social science research on
forced migration, _Disasters_ , 27(3), 185-206.
Jurlina, P., & Vidovic, T. (2018). _The wages of fear: Attitudes toward
refugees and migrants in Croatia_ . “Empowering Communities in Europe”
project, co-funded by the European Commission. Centre for Peace Studies /
British Council.
Kartal, D., & Kiropoulos, L. (2016). Effects of acculturative stress on PTSD,
depressive, and anxiety symptoms among refugees resettled in Australia and
Austria. _European Journal of Psychotraumatology_ , 7: 28711.
Krause, U. (2017). Researching forced migration: critical reflections on
research ethics during fieldwork, _Refugee Studies Centre Working Paper
Series_ , 123, 1-36.
Laciak, B., & Segeš Frelak, J. (2018). _The wages of fear: Attitudes toward
refugees and migrants in Poland_ . “Empowering Communities in Europe” project,
co-funded by the European Commission. Instytut Spraw Publicznych, British
Council, Warszawa.
Mestheneos, E., & Ioannidi, E. (2002). Obstacles to refugee integration in the
European Union Member States. _Journal of Refugee Studies_ , 15(3), 304-320.
Obijiofor, L., Colic-Peisker, V., & Hebbani, A. (2016). Methodological and
ethical challenges in partnering for refugee research: evidence from two
Australian studies, _Journal of Immigrant & Refugee Studies _ , 0(0), 1-18.
OECD (2016). _Making integration work: Refugees and others in need for
protection_ . OECD Publishing: Paris.
Peschke, D. (2009). The role of religion for the integration of migrants and
institutional responses in Europe: Some reflections. _The Ecumenical Review_ ,
61(4), 367-380.
Sijbrandij, M. et al. (2017). Strengthening mental health care systems for
Syrian refugees in Europe and the Middle East: Integrating scalable
psychological interventions in eight countries. _European Journal of
Psychotraumatology_ , 8: 1388102.
Wong, C.W.S., & Schweitzer, R.D. (2017). Individual, premigration and
postsettlement factors, and academic achievement in adolescents from refugee
backgrounds: A systematic review and model. _Transcultural Psychiatry_ ,
54(5-6), 756-782.
**DATA MANAGEMENT PLAN**
The Data Management Plan of the DEMOS project will be revised and updated
annually.
# Data Summary
DEMOS is built on the assumption that populism is symptomatic of a disconnect
between how democratic polities operate and how citizens perceive their own
aspirations, needs and identities within the political system. As such, DEMOS
explores the practical value of ’democratic efficacy’ as the condition of
political engagement needed to address the challenge of populism. The concept
combines attitudinal features (political efficacy), political skills,
knowledge, and democratic opportunity structures. In order to better
understand populism DEMOS addresses its hitherto under-researched aspects at
micro-, meso-, and macro-levels: its socio-psychological roots, social actors’
responses to the populist challenge, and populism’s effects on governance.
DEMOS focuses not only on the polity, but equally on citizens’ perspectives:
how they are affected by, and how they react to, populism. Politically
underrepresented groups and those targeted by populist politics are a
particular focus, e.g. youth, women, and migrants. As populism has varying
socially embedded manifestations, DEMOS aims at contextualising it through
comparative analysis on the variety of populisms across Europe, including
their historical, cultural, and socioeconomic roots, manifestations, and
impacts. DEMOS develops indicators and predictors of populism and elaborates
scenarios on the interactions of populism with social actors and institutions
both at the national and the EU levels.
DEMOS involves primary data collection through: 1) a cross-national survey
that particularly focuses on implicit and explicit measurement of various
emotions; 2) experiments and quasi- experiments to investigate the relation
between cognitive processing styles and populist attitudes, the effect of
framing political information, the role of anxiety, and the role of
information versus feelings in developing populist arguments; 3) interviews
and focus groups conducted in several countries with individuals that include
citizens with a favorable preference towards populist parties, and targets of
populist discourse (e.g., minorities, women, gay people); and 4) deliberative
polling, a unique method that combines techniques of public opinion research
and public deliberation to construct hypothetical representations of what
public opinion on a particular issue might look like if citizens were given a
chance to become more informed. DEMOS will also implement content analysis,
data mining in social sciences, legal and policy analysis, statistical
analysis, qualitative and quantitative text analysis.
Data will be collected and stored using digital audio recording devices with
the permission of the interviewees and focus group participants. In the event
that respondents do not wish to be recorded, interviews and focus groups will
be undertaken in pairs to enable detailed note- taking. The necessary
documents will be prepared prior to fieldwork, including a letter with
information about the project, anonymity, confidentiality, data sharing, and a
separate letter of informed consent (translated into the languages which will
be used for the interviews and focus groups). To ensure that the content is
understood, the informed consent form will be explained both in writing and
verbally.
The data will potentially be utilized primarily by the scientific community,
but can also be useful for our other target groups: governmental bodies
(policy makers on the EU and the national level); affected professional
communities (journalists, teachers, students); and the civil society (NGOs,
think tanks, foundations) as well as the general public.
In order to ensure fair and transparent processing in respect of the data
subject, taking into account the specific circumstances and context in which
the personal data are processed, the beneficiaries of DEMOS implement
appropriate technical and organisational measures to ensure, in particular,
that:

* factors which result in inaccuracies in personal data are corrected and the risk of errors is minimised; and
* personal data are secured in a manner that takes account of the potential risks to the interests and rights of the data subjects, and that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or measures having such an effect.
# FAIR data
2.1 All beneficiaries of DEMOS undertake the strict responsibility to follow
Regulation (EU) 2016/679 of the European Parliament and of the Council of
27 April 2016 (General Data Protection Regulation, hereinafter GDPR). A
beneficiary from a third country is required to ensure that all relevant
provisions of the GDPR are applied, to provide appropriate safeguards, and to
ensure that enforceable data subject rights and effective legal remedies for
data subjects are available.
The definition of ’personal data’ used by this document is identical to the
definition used by GDPR.
The definition of ’research data’ means any research results generated by the
DEMOS project, excluding personal data.
_2.2 Data storage_
Personal data will be stored on the consortium leader’s (CSS) infrastructure,
except where otherwise required by Union or Member State law; in such cases
the involved beneficiaries must agree on the conditions for storing the
personal data and must ensure that the level of protection of natural persons
guaranteed by the GDPR is not undermined. The infrastructure of CSS is
adequate for conducting large-scale projects heavily dependent on secure data
storage and processing (i.e., 10+ TB storage, IBM blade server cluster).
From the project’s inception, electronic files stored at the consortium
leader’s institution will be password protected and encrypted. In order to
create user-friendly and accessible data, the needed data descriptions,
annotations, contextual information and documentation, metadata labelling and
numeration will be made in all stages directly when data is uploaded onto our
server. Detailed instructions on the procedures for data and working paper
storage and curation will be shared with the consortium partners after
Month 7. Anonymised data will be made available in a repository through the
Research Documentation Centre (RDC) of the CSS HAS and through the project
website. Except for personal data and data classified as “sensitive” or
“confidential,” all data will be made available and accessible for re-use by
other researchers. Metadata will be
harvestable through the Open Archives Initiative Protocol for Metadata
Harvesting system. The data will be identified by DOI. Version numbers will be
provided. Search keywords will be provided, all metadata (data of surveys,
methodology, research tools) and the textual information will be available and
searchable through an internal search engine at the RDC platform. DEMOS aims
to use standard naming conventions, including the following components:
partner name, date, Work Package and keywords. A detailed list of project
target audiences has been included in the Communication, Dissemination, and
Sustainability Plan (CDSP) deliverable.
_2.3. Making data openly accessible_
Research data stored at the consortium leader’s institution will be frequently
updated, backed up, and secured on the cloud system of CSS HAS.
Personal data will be stored exclusively on the cloud system of the consortium
leader and will be made accessible only for researchers working on the DEMOS
project, with two levels of accessibility:
1. researchers: on this level the WP leaders will decide on the limitation of data sharing with the other DEMOS researchers, in agreement with the Principal Investigator and after consulting the Data Protection Officer of the project, always considering the basic principles of data management, especially purpose limitation, data minimisation, accuracy, storage limitation, confidentiality and integrity. In order to enhance awareness of privacy and data security issues, the WP leaders and interested task leaders will receive privacy and data management awareness training, provided by the consortium leader.
2. a different level of accessibility will be provided for the general public: protected research data will be accessible to the general public after being declared final by the leader of the WP, in agreement with the Principal Investigator.
The data subjects will be informed of the existence of research profiling, its
possible consequences and how their fundamental rights will be safeguarded.
LGBTI, ethnic minority and migrant participants will be identified via
national and local NGOs and associations and where possible via Facebook
groups. If the number of participants recruited through these strategies is
low, we will use snowballing to increase the number of potential participants.
That is, profiling will be based on the information the subjects themselves
provide by being members of Facebook groups and being in touch with NGOs. The
former include publicly available information, while the latter will make use
of the mediator role of the NGOs and personal contacts. Focus group
participation will be anonymised. The storage of any personal data will comply
with the GDPR regulations. Researchers will ensure that participants taking
part in the research have decided to do so by their own free will and
following sufficient information. Researchers will secure in advance the
consent of the persons who will take part in the research or of their legal
representatives. Researchers will fully ensure the protection of the
participants’ personal data, according to the national and European
legislation.
Personal data – with special emphasis on the special categories of personal
data listed in Article 9 of the GDPR – will be restricted and accessible to
DEMOS researchers only. By default, all researchers will have access to the
data generated within the WP they are involved in. Access will be controlled
and managed by the Data Protection Officer (DPO) of DEMOS, who will permit and
oversee researchers’ access to research data, in agreement with the Principal
Investigator. The access of researchers to the different data sets will be
documented in an Excel table maintained by the DPO. Data management contact:
Emese Szilágyi, researcher of the DEMOS project, assistant researcher at the
Institute for Legal Studies, CSS HAS. Publicly accessible research data will
be stored on the CSS HAS cloud and available upon request. Users can access
research data freely, with a login and a password. Each publication will have
a working paper version, which will be openly and freely accessible on the
DEMOS repository on the RDC, and will be made available before or latest on
the day of publication, upon agreement with the publisher. Though costs
related to Open Access publishing for scientific papers and publications
produced throughout DEMOS’ lifetime have not been budgeted, all partners will
engage in ‘green’ open access, i.e. self-archiving, whereby published articles
or final peer-reviewed manuscripts are archived by the researcher, or a
representative, in an online repository before, after, or alongside its
publication. Every author is responsible to check the criteria of working
paper versions with the publisher. The working papers will be accessible and
searchable in a readable text format.
_2.4 Interoperable data_
The produced data will be interoperable. Research data – excluding personal
data – will be open for re-use after it has been declared final by the WP
leaders, in agreement with the Principal Investigator. The use of the RDC
repository at the CSS HAS is free. After the end of the project, all research
data will be stored at the RDC repository and will be accessible for 15 years.
Each partner is responsible for storing sensitive data during, and after the
end of the research. Sensitive data must not be stored in non-EU countries
after the end of the project.
_2.5 Personal data stored at a beneficiary’s infrastructure_
Data collected and processed under Task 3.3 (Democratic efficacy and the
youth: the role of schools) and Task 4.4 (Studies on the role of information
versus feelings in developing populist arguments) involves data of minors (age
13-16). The task leader in both cases is the University of Hamburg (UHAM),
whose Member State law and local authorities requested that the data be stored
exclusively on the infrastructure of UHAM. Respecting the request of the
German authorities, the consortium leader and UHAM agreed to proceed with this
exception. The concerned data will be saved on the UHH [UHAM] Share Server, and
access will be granted exclusively for DEMOS researchers concerned with the
task.
To ensure data safety and security, UHAM will hand over to the local
authorities a detailed description of the UHAM IT architecture and servers, as
well as a risk evaluation of potential threats from the perspective of the
researchers concerned, following the Article 30 requirements of the GDPR.
# Data safety and security
_3.1 Data Safety Summary_
Research data will be stored on the server of CSS, located within the CSS
building. Safety archiving takes place on an IBM HSM magnetic tape system,
with weekly incremental saves in addition to the daily saves. The cloud is
hosted on IBM Blade servers and uses a Nextcloud-based application, which is
regularly updated.
_3.2 Data Security and Cybersecurity Measures_
**Central servers:** All areas where there are IT resources for processing and
storing confidential data are considered as closed and protected areas, to
which only 3 people have access. There are cameras with facial identification
in the server rooms. The IT network is under professional control, anything
concerning the extension or modification of the central network can only be
done by professional staff, with the approval of the CSS board.
**Backup:** There is regular backup on the central IT system conducted
automatically, on a daily basis, archiving the data from the system. Should
any error occur, the system notifies the system administrator.
**VPN** is available only to those who have access to the CSS emailing system.
The CSS cloud is accessible through a CSS login and an individual password,
through the address https://file.tk.mta.hu/. The password must consist of at
least 7 characters and must contain lower-case and upper-case letters and a
number. Passwords can be changed at the workstations of the CSS and
through the Webmail system. Expired passwords can only be changed at the CSS
workstation. If a password has expired, it is not possible to access any
subsystem (email, cloud, Intranet, VPN). Users get notified about the
expiration of the password.
**Redundant firewall system:** the firewall system provides regulated
connections between the internal user networks, the networks of
the building and the Internet. There is a multi-layer network segmentation,
the systems of the research institutes are divided into separate units, which
are further divided into internal client and server networks. The hardware
basis is provided by 2 DELL T340 Xeon servers, with HA (keepalived) stateful,
VRRP synchronized netfilter firewall built on a Slackware LINUX basis. The
servers are provided with a 6x1GBit and a 4x10GBit physical interface each,
and they are connected to the DLink client network core switch stack and the
CISCO tools supporting the servers and securing the WAN connections. Important
parameters: 150 logical interface, 80 VLAN, VPN networks (PPTP, L2TP, SSTP,
OpenVPN for clients, IPsec site2site tunnels for external sites), appr. 500
firewall regulations. The VLANs are divided into 8 VRRP groups based on
function and organizational unit. They can be moved freely between the 2
firewalls. The physical connections have been build redundant in all
directions, with LACP or STP protocol.
**Local IT network of the building:** The networks within the system are
realized through VLAN segmentation. The service provision point is realized
through RJ45 sockets on the client side and through virtual switch ports on
the server side. The DHCP server (or relay) service is a centralized task for
each network segment.
**Server infrastructure:**
IBM BladeCenter H frame + 6 × IBM Blade HS22 servers
Physical configuration of the storage system:
DS3524 storage + DS3524 expansion, 48 × 600GB 10k SAS disks – 24TB; BTK: 12TB, TK: 12TB
DS3512 expansion + DS3512 expansion, 24 × 3TB 7.2k SAS disks – 60TB; BTK: 50TB, TK: 10TB
VMware cluster infrastructure:
6 host VMware vSphere 6 with Operations Management Standard for software.
Backup (Veeam) system: The backup system works in a virtual server
environment. The system is capable of restoring data or entire virtual servers
from daily backups going back 30 days.
Monitoring system: the permanent monitoring of the condition of the central IT
tools is part of the system. In case of an error, an automated alert message
is sent to the help desk service.
The server room and its infrastructure are part of the building. Monitoring
its operation, the power supply, temperature and humidity, and managing the
alert messages are the responsibility of the operators.
Archive data storage: CSS operates an archive data system, which has been
developed to store archive data by combining tape and disk technologies.
**Other:**
The MailGateway spam and virus filter at CSS works separately from the other
centres. Similarly, CSS operates its own mail storage on the MS Exchange
platform.
3.3 Data security regarding the data stored at UHAM’s infrastructure:
**Server and Software for Backups:** Data is saved on the UHH Share Server.
Tivoli Storage Manager (TSM) is used as the central backup system.
**Safety archiving:** Data is saved by TSM incrementally overnight.
**Access:** Two sets of login credentials are needed: “user credentials” to
log in at the computer itself and “UHH credentials” to access the UHH Share
Server. As soon as all collected data are stored on UHH Share, UHAM will
provide access for the involved DEMOS researchers.
# 1\. Introduction
SECONDO aims at achieving the following features simultaneously: efficiency,
security, user privacy, and flexibility of contract expressiveness. This
deliverable addresses the main elements of the Data Management policy that
will be used by the project participants regarding all Datasets. It also
establishes some procedural mechanisms for participants with the
responsibilities of Data Controllers and Processors. Through the SECONDO
project, **Data Controllers** and **Processors** are defined as follows [4]:
* **Data Controller** means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law;
* **Data Processor** means a natural or legal person, public authority, agency or other body which processes Personal Data on behalf of the controller;
The DMP establishes a set of guidelines meeting each of the fundamental topics
to be considered. These guidelines cover aspects such as applicable policies,
roles, standards, infrastructures, sharing strategies, Data processing,
storage, retention and structure, legal compliance and compliance with market
standards and best ethical and privacy practices, identification,
accessibility, intelligibility, legitimate use for other purposes. These
guidelines will be adopted at the early stages of the project.
For each Dataset in H2020 the following aspects should be considered:
* Making Data **F** indable,
* Making Data openly **A** ccessible,
* Making Data **I** nteroperable,
* Increase Data **R** e-use,
* Allocation of resources and Data security
The Data collected/generated during the project will be owned by the partners
which have contributed to producing that Data (Data controller). The extent up
to which this Data will be made available and which restrictions will be
imposed on its re-use will be decided on a case-by-case basis by the Data
controller. Moreover, Data controller determines the purposes and means of
Personal Data processing and will decide the purpose for which Personal Data
is required and what Personal Data is necessary to fulfil that purpose.
The partners will comply with the Findable, Accessible, Interoperable,
Reusable (FAIR) guidelines of the H2020 programme, which state that Data will
be made as available as possible, so long as this does not negatively affect
the commercial advantage of the partners. The Horizon 2020 FAIR DMP template [5]
has been designed to be applicable to any Horizon 2020 project that produces,
collects or processes research Data.
The Data will be shared among partners using internal repositories or through
direct communication, and with the public through the project’s website or
public repositories. The Data will be preserved up to **three (3) years**
after the end of the project at the partners’ repositories and cloud
infrastructures, according to each partner’s internal policy.
The SECONDO DMP should be updated, at a minimum, in time with the periodic
evaluation/assessment of the project. Furthermore, the consortium can define a
timetable for review in the DMP itself. According to [5], the SECONDO DMP needs
to be updated over the course of the project whenever significant changes
arise, such as (but not limited to):
* Using a new Dataset
* Changes in consortium policies (e.g. innovation potential, decision to file for a patent)
* Changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
Regarding _Participant Portal H2020 Online Manual_ [6], as part of making
research Data findable, accessible, interoperable and re-usable (FAIR),
SECONDO FAIR DMP should include information on:
* The handling of research Data during and after the end of the project
* What Data will be collected, processed and/or generated?
* Which methodology and standards will be applied?
* Whether Data will be shared/made open access?
* How Data will be curated and preserved (including after the end of the project).
The SECONDO FAIR Dataset Template Questionnaire (Table 6-1) includes a set of
questions that all Data controllers are required to fill in for each Dataset
[3], [7], [8]. The questionnaire template has been reviewed by the Project
Coordinator (UPRC) and the Ethics Board for completeness and compliance with
the FAIR DMP directives.
Zenodo [9] will be used as the project Data and publication repository and
will be linked to the SECONDO project-site at OpenAIRE. Zenodo is a simple and
innovative service that enables researchers, scientists, EU projects and
institutions to share and showcase multidisciplinary research results (Data
and publications) that are not part of existing institutional or subject-based
repositories.
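
As an illustration of this deposit workflow, the Python sketch below creates a
Zenodo deposition, attaches metadata, uploads a file and publishes the record
(which mints a DOI). It is a minimal sketch: the token, file name and metadata
values are placeholders, and the endpoint paths follow Zenodo's public REST API
documentation and should be checked against the current version before use.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "<personal-access-token>"}  # placeholder

# 1. Create an empty deposition
deposition = requests.post(ZENODO_API, params=TOKEN, json={}).json()

# 2. Attach descriptive metadata (illustrative values only)
metadata = {"metadata": {
    "title": "SECONDO example Dataset",
    "upload_type": "dataset",
    "description": "Synthetic Dataset produced for validation.",
    "creators": [{"name": "Doe, Jane", "affiliation": "UPRC"}],
    "license": "cc-by",
    "keywords": ["cyber insurance", "risk analysis"],
}}
requests.put(f"{ZENODO_API}/{deposition['id']}", params=TOKEN, json=metadata)

# 3. Upload the Data file to the deposition's file bucket
with open("dataset.csv", "rb") as fp:
    requests.put(f"{deposition['links']['bucket']}/dataset.csv",
                 params=TOKEN, data=fp)

# 4. Publish: Zenodo assigns a persistent DOI to the record
requests.post(f"{ZENODO_API}/{deposition['id']}/actions/publish", params=TOKEN)
```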
# 2\. SECONDO FAIR Data Principles
## 2.1 Data Summary
As part of making research data Findable, Accessible, Interoperable and Re-
usable (FAIR), a DMP should [6]:
**To be Findable:**
* Data/Metadata are assigned a globally unique and eternally persistent identifier.
* Data are described with rich Metadata.
* Data/Metadata are registered or indexed in a searchable resource. Metadata specify the Data identifier.
**To be Accessible:**
* Data are retrievable by their identifier using a standardized communications protocol.
* the protocol is open, free, and universally implementable.
* the protocol allows for an authentication and authorization procedure, where necessary.
* Metadata are accessible, even when the Data are no longer available.
_Particularly, the SECONDO Database will be accessible through the SECONDO
project for **three (3)** years following the end of the project. During this
period, unless otherwise decided by the consortium members, the Database
functionality will remain the same as during the project duration._
**To be Interoperable:**
* Data/Metadata use a formal, accessible, shared, and broadly applicable language for knowledge representation.
* Data/Metadata use vocabularies that follow FAIR principles.
* Data/Metadata include qualified references to other Data/Metadata.
_There is no standard for allowing Data exchange between researchers,
institutions, organizations, countries, etc. (e.g. adhering to standards for
Data annotation and Data exchange, being compliant with available software
applications, and allowing re-combinations with different Datasets from
different origins). Thus, human interpretation of the Data structure is always
needed to manually create a Data map. However, the utilization of standards
for Data capturing and documented annotation will ease the Data exchange._
**To be Re-usable:**
* Data/Metadata have a plurality of accurate and relevant attributes.
* Data/Metadata are released with a clear and accessible Data usage license.
* Data/Metadata are associated with their provenance.
* Data/Metadata meet domain-relevant community standards.
_SECONDO Data will be licensed under a Creative Commons licence, to the extent
that it may be subject to such licensing (likely CC BY). Applicable Data will
become available at the end of the project. The Data can be re-used by other
scientists and interested parties. Parts of the Data may become available
prior to this as a result of journal publications. There will be no embargo
period._
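
To make these four principles concrete, the sketch below shows what a minimal
Dataset description satisfying them could look like. The field names and
values are illustrative (loosely modelled on common repository metadata such
as DataCite), not a schema mandated by SECONDO; the DOI is hypothetical.

```python
# A minimal, illustrative metadata record for one SECONDO Dataset.
dataset_metadata = {
    # Findable: a globally unique, persistent identifier plus rich metadata
    "identifier": "10.5281/zenodo.0000000",      # hypothetical DOI
    "title": "SECONDO synthetic risk-analysis Dataset",
    "keywords": ["cyber insurance", "risk analysis", "QRAM"],
    # Accessible: retrievable over an open, standardized protocol
    "access_url": "https://doi.org/10.5281/zenodo.0000000",
    "protocol": "https",
    # Interoperable: formats and vocabularies that others can process
    "format": "text/csv",
    "vocabulary": "project glossary mapped to a commonly used ontology",
    # Re-usable: licence and provenance travel with the Data
    "license": "CC-BY-4.0",
    "provenance": "Generated by BDCPM crawlers during project validation",
}
```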
### 2.1.1 Purpose of the Data Collection/Generation and its relation to the objectives of the project
SECONDO will propose a unique, scalable, highly interoperable **Economics-of-
Security-as-a-Service (ESaaS) platform** that encompasses a comprehensive
cost-driven methodology for estimating cyber risks and determining the
residual risks. The SECONDO platform will establish a new paradigm in risk
management for enterprises of various sizes, with respect to the GDPR
framework, while it will enable formal and verifiable methodologies for
insurers that require estimating premiums. **SECONDO will not collect or
process Personal Data to conduct its research. The collection, processing and
use of Personal Data is only admissible if expressly permitted by any legal
provision or if the Data subject has expressly consented in advance.**
## 2.2 Allocation of Resources
The costs for Data preparation to be FAIR are unknown at this stage but will
be estimated in the future. Expenses may consist of additional publication and
documentation costs of the repositories where applicable. Data preparation and
management costs during the project will be covered by the project.
**UPRC** , as the Project Coordinator for SECONDO, will be responsible for DMP
updates, and Data archiving and publication within repositories. No additional
funding is provided for Data management activities for those deciding to
participate in the pilot. Costs relating to open access to research Data will
be eligible as part of the grant, independent from the participation in the
pilot, provided the general eligibility conditions specified in the Grant
Agreement are followed.
## 2.3 Data Sharing
The Data controller will determine the details of how Data will be shared,
including access procedures, embargo periods (if any), outlines of technical
mechanisms for dissemination and necessary software and other tools for
enabling re-use, and definition of whether access will be widely open or
restricted to specific groups. Similarly, the Data controller will identify
the repository where Data will be stored, if already existing and identified,
indicating in particular the type of repository (institutional, standard
repository for the discipline, etc.).
During the project, any potential user that wants to get access will be guided
through the following steps (a minimal workflow sketch is given after the
terms of access below):
* Submit a "Request" to the Dataset controller from the SECONDO consortium. This request will contain:
* Full name
* Organization and department
* Email address
* Description of intended use
* After reviewing the request, if the Data controller approves it, the user will receive an email with a special link to verify the email address.
* Then the user is asked to agree to and sign the following terms of access:
[RESEARCHER_FULLNAME] (the "Researcher") has requested permission to use the
Dataset. In exchange for such permission, Researcher hereby agrees to the
following terms and conditions:
* Researcher shall use the Database only for non-commercial research and educational purposes.
* Data Controller makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.
* Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify Data Controller, including their employees, trustees, officers and agents, against any and all claims arising from Researcher's use of the Dataset, including but not limited to Researcher's use of any copies of copyrighted Dataset that he or she may create from the Dataset.
* Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions.
* Data Controller reserves the right to terminate Researcher's access to the Database at any time and without justification.
* If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
* The law and jurisdiction of the Data Controller’s country shall apply to all disputes under this agreement.
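
The sketch below models this request-verify-sign flow in Python. All names,
the URL and the token scheme are hypothetical; it only illustrates that access
is granted when, and only when, the Data controller has approved the request,
the email address has been verified and the terms of access have been signed.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    # Fields mirror the request form described above
    full_name: str
    organization: str
    email: str
    intended_use: str
    approved: bool = False
    terms_signed: bool = False
    email_verified: bool = False
    # One-time token embedded in the verification link
    verify_token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def approve(req: AccessRequest) -> str:
    """Controller approves; returns the verification link to email back."""
    req.approved = True
    return f"https://secondo.example/verify?token={req.verify_token}"  # hypothetical URL

def verify_email(req: AccessRequest, token: str) -> None:
    # Constant-time comparison avoids leaking the token through timing
    if secrets.compare_digest(token, req.verify_token):
        req.email_verified = True

def may_access(req: AccessRequest) -> bool:
    # Access requires approval, a verified email and signed terms of access
    return req.approved and req.email_verified and req.terms_signed
```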
### 2.3.1 Methods for Data sharing
The methods used to share Data will be dependent on a number of factors such
as the type, size, complexity and sensitivity of Data. Data can be shared by
any of the following methods [10]:
* **Under the auspices of the Principal Investigator**
Investigators sharing under their own auspices may securely send Data to a
requestor or upload the Data to their institutional website. Investigators
should consider using a Data-sharing agreement to impose appropriate
limitations on the secondary use of the Data.
* **Through a third party**
Investigators can share their Data by transferring it to a Data archive
facility to distribute more widely to the scientific community, to maintain
documentation and meet reporting requirements. Data archives are particularly
attractive for investigators concerned about managing a large volume of
requests for Data, vetting frivolous or inappropriate requests, or providing
technical assistance for users seeking to help with analyses.
* **Using a Data enclave**
Datasets that cannot be distributed to the general public due to
confidentiality concerns, or third-party licensing or use agreements that
prohibit redistribution, can be accessed through a Data enclave. A Data
enclave provides a controlled secure environment in which eligible researchers
can perform analyses using restricted Data resources.
* **Through a combination of methods**
Investigators may wish to share their Data by a combination of the above
methods or in different versions, in order to control the level of access
permitted.
_**Note:** During the SECONDO project, Data controllers could use **a
combination of methods** for Data sharing._
## 2.4 Data Security
Regarding the _Guide on Good Data protection practice_ [11], to process Data
in a secure manner, each Data controller must:
* Take technical and organisational measures to prevent any unauthorised access
* Establish clear access rules
* Organise the processing in a way that gives you the best possible control, for example by allowing for tracking of access (logbook)
* If someone processes the Data on your behalf, make sure that this processor ensures appropriate security safeguards.
In practical terms, these measures could result in:
* User authentication: The way to verify the identity of a user
* Access control: Mechanism to allow or deny access to certain Data
* Storage security: Storing Data in a way that prevents unauthorised access (a minimal encryption sketch is given after this list), for example by:
* Operating system controls (authentication & access control)
* Use of passwords to access electronic files (e.g. use the text editor function to save a document password-protected)
* Local encrypted storage (enable full-disk encryption, file-system encryption or text-editor encryption)
* Database encryption: turning Data into a form that makes them unintelligible (for anyone not having access to the key)
* Communication security: Safe electronic communication for transferring the Data can take the following forms:
* Encrypted communication (SSL/TLS) (e.g. use web services whose URL starts with ‘https://’ and not only ‘http://’)
* Firewall systems and access control lists (e.g. make sure the firewall service is enabled on your PC)
* Anti-virus & anti-malware systems
* Protect Data and Data carriers when they are physically transferred (paper notes, laptop etc.).
* Other IT technical controls such as installing security updates, anti-virus protection, local backups, blocking of certain software installation, etc.
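
As one concrete instance of the "local encrypted storage" measure above, the
sketch below encrypts a Data file at rest with the widely used `cryptography`
library (Fernet, an authenticated symmetric scheme). The file name and key
handling are illustrative only; in practice the key must be stored separately
from the Data, for example in a key vault.

```python
from cryptography.fernet import Fernet

# The key is the access-control boundary: whoever holds it can read the Data,
# so never store it next to the encrypted files.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a Data file at rest
with open("dataset.csv", "rb") as fp:
    ciphertext = fernet.encrypt(fp.read())
with open("dataset.csv.enc", "wb") as fp:
    fp.write(ciphertext)

# Decryption (and integrity verification) requires the key
plaintext = fernet.decrypt(ciphertext)
```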
Regarding the guidelines on implementation of open access to scientific
publications and research Data, participants of the ORDP need to take the
following three steps [12]:
* Deposit research Data needed to validate the results presented in scientific publications, including associated Metadata, in the repository as soon as possible. Also, other Data (for instance Data not directly attributable to a publication, or raw Data), including associated Metadata, should be deposited – that is, according to the individual judgement by each project, specified in the Data management plan.
* Take measures to enable third parties to access, mine, exploit, reproduce and disseminate (free of charge for any user) this research Data, for instance by attaching a _Creative Commons Attribution Licence_ (CC BY) to the Data deposited, or by waiving all interests associated with copyright and Database protection.
* Provide information via the chosen repository about the tools available in order for the beneficiaries to validate the results, e.g., specialised software or software code, algorithms and analysis protocols. Where possible, these tools or instruments should be provided.
All SECONDO software/toolkit modules will encapsulate state-of-the-art
security, authentication and authorization mechanisms. The robustness of such
modules is ensured by years of developments in the field (the basic building-
blocks stem from previously funded EU projects or from already functioning
commercial solutions) and will be tested through dedicated penetration /
hacking tests and challenges. In addition, Data protection methods will be
made available through a set of secure APIs and Smart Contracts. Moreover,
privacy-preserving smart contracts will be leveraged to hide sensitive client
information, while secure encryption techniques will be applied to Data
storage.
A conceptual security and privacy taxonomy will be applied, containing
three main Big Data security and privacy principles:
* Data confidentiality topic: safeguarding the confidentiality of Personal Data.
* Data provenance topic: safeguarding the integrity and validation of Personal Data.
* Public policy, social, and cross-organizational topics: safeguarding the specific Big Data and privacy and Data protection requirements.
In SECONDO, a Byzantine-fault-tolerance-like algorithm will be used to
randomly select a group of clients as validators. To achieve security, access
control will be used to guarantee that only registered clients can read
information from the ledger.
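
The document does not spell out the selection algorithm, so the Python sketch
below is only an illustrative stand-in: it combines the two stated
ingredients, restriction to registered clients (access control) and randomized
selection, by ranking clients with a keyed hash of a per-epoch seed.

```python
import hashlib
import hmac
from typing import List

def select_validators(registered: List[str], epoch_seed: bytes, k: int) -> List[str]:
    """Pick k validators per epoch. Only registered clients are eligible,
    and the keyed-hash ranking makes the choice unpredictable without the
    seed, yet reproducible by every honest node that knows it."""
    ranked = sorted(
        registered,
        key=lambda cid: hmac.new(epoch_seed, cid.encode(), hashlib.sha256).digest(),
    )
    return ranked[:k]

# Example: choose 4 validators out of the registered clients for this epoch
validators = select_validators(["c1", "c2", "c3", "c4", "c5", "c6"], b"epoch-42", 4)
```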
### 2.4.1 Data Protection
**As mentioned in SECONDO DOA, no real Data will be used in the context of the
project. However, with SECONDO being a GDPR-compliant platform by design, we
describe the procedures and technical measures that would be applied if real
Data are being processed.**
A key issue in considering observational research using social media is
whether the proposed project meets the criteria for human subjects research,
and if so, what type of review is needed. A human subject is defined by
federal regulations as a living individual about whom an investigator obtains
Data through interaction with the individual or identifiable private
information [13].
An important area of concern with **Social Media Website (SMW)** research is
the protection of confidentiality. Similar to other types of research
involving survey or interview Data, protection of participant identities is
critical. Website research may initially be perceived as lower risk, because
participant information can be collected in absence of some protected
information such as address or phone number. Online Data can present increased
risks; studies that publish direct text quotes from an SMW may directly
identify participants. Entering a direct quote from an SMW into a Google
search engine can lead to a specific Web link, such as a link to that person’s
LinkedIn profile, and thus identify the participant.
Personal Data refers to any information relating to an identified or
identifiable natural person, meaning by identifiable person the one who can be
identified, directly or indirectly, in particular by reference to an
identification number or to one or more factors specific to his physical,
physiological, mental, economic, cultural or social identity. Data is
considered personal when someone is able to connect information to a specific
person, even when the person or entity that is holding the Personal Data
cannot make the connection directly (e.g. name, address, e-mail), but has or
may have access to information allowing such identification (e.g. through
telephone numbers, credit card numbers, license plate numbers, etc.).
The fundamental right to the protection of Personal Data is explicitly
recognised in Article 8 of the Charter of Fundamental Rights of the European
Union, and in Article 16 of the Treaty on the Functioning of the European
Union, according to which everybody has the right to the protection of
Personal Data concerning them. Such Data must be processed fairly for
specified purposes and on the basis of the consent of the person concerned or
some other legitimate basis laid down by law.
If Data controllers intend to process sensitive data in the project, or if
there is a possibility that sensitive data may be processed unintentionally
(see Section 5: Ethical Aspects and Privacy and Security Requirements), a more
solid justification has to be provided by the Data Controllers to the Ethics
Committee.
SECONDO Data processing must be lawful, fair and transparent. It should
involve only Data that are necessary and proportionate to achieve the specific
task or purpose for which they were collected. Therefore, **SECONDO will only
collect the Data that is needed for the research objectives** , since
collecting unnecessary/unrelated Data for the research project may be deemed
unethical and unlawful. The Data are to be processed only for scientific
purposes comprising processing operations that are performed for purposes of
study and systematic research to develop scientific knowledge for the specific
sector addressed by SECONDO.
**SECONDO will not collect or process Personal Data to conduct its research.
Any real users that will take part in the assessment of the implemented
software do not have to provide Personal Data and in case some non-sensitive
Data are needed, the users will be informed and sign the appropriate consent
and agreements.**
To secure the confidentiality, accuracy, and security of Data management, the
following measures will be taken:
* All Personal Data obtained in SECONDO studies will be transmitted to partners within the consortium only after anonymization (a minimal pseudonymisation sketch is given after this list). Keys to identification numbers will be held confidentially within the respective research units. In situations where re-identification of study participants becomes necessary, for example for the collection of additional Data, this will only be possible through the research unit and in cases where informed consent for such cases has been given.
* Personal Data are entered into secure websites. Data are processed only for the purposes outlined in the patient information and informed consent forms of the respective case studies.
Use for other purposes will require explicit patient approval. Also, Data are
not transferred to any places outside the consortium without patient consent.
* None of the Personal Data will be used for commercial purposes, but the knowledge derived from the research using the Personal Data may be brought forward to such use as appropriate, and this process will be regulated by the Grant Agreement and the Consortium Agreement, in accordance with any generally valid legislation and regulations.
* No vulnerable or high-risk groups (e.g. children, adults unable to consent, people in dependency relationships, vulnerable persons) will be addressed during the development and progress of the SECONDO project;
* Persons are approached in their professional capacity;
* The purpose of collecting contact Data of potential stakeholders is to ask them about their willingness to be involved in SECONDO network and for obtaining professional opinions and consultation only;
* Information about the objectives of the project, structuring of the Stakeholder Network and details about Data processing will be provided in advance (as a governance document) to all external stakeholders;
* Minimum and limited amount of Personal Data will be collected;
* Personal contact Data will be kept internally within the SECONDO partners and will not be accessible to external organizations or individuals.
* Personal Data shall always be collected, stored, and exchanged in a secure manner, through secure channels during the project.
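
As a minimal sketch of the anonymization step in the first measure above: a
keyed hash replaces the direct identifier with a stable pseudonym, and only
the research unit, which holds the key, could recompute the mapping. Names and
the key value are placeholders, not part of the SECONDO design.

```python
import hashlib
import hmac

# Plays the role of the "key to identification numbers" held confidentially
# within the research unit; placeholder value, never hard-code real keys.
SECRET_KEY = b"held-only-by-the-research-unit"

def pseudonymize(participant_id: str) -> str:
    """Replace a direct identifier with a stable pseudonym before Data
    is transmitted to other consortium partners."""
    return hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256).hexdigest()[:16]

# The same participant always maps to the same pseudonym, but the mapping
# cannot be inverted without the key held by the research unit.
record = {"participant": pseudonymize("patient-0042"), "score": 17}
```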
Regarding Data confidentiality, SECONDO partners must keep any Data, documents
or other material confidential during the implementation of the project and
for three years after the end of the project.
_**Note**_: The Appendix (Table 8-1, Table 8-2, Table 8-3) has to be filled in
by the SECONDO Data controllers.
# 3\. SECONDO Data Sourcing and Data Sharing
In general, Data may be grouped into four main types based on methods for
collection:
* Observational Data: captured in real time, typically cannot be reproduced exactly.
* Experimental Data: from labs and equipment, can often be reproduced.
* Simulation Data: from models, can typically be reproduced if the input Data is known.
* Derived or Compiled Data: after Data mining or statistical analysis has been done, can be reproduced if analysis is documented.
The categories of Data processed in SECONDO are:
* Experimental
_Dataset captured from a real infrastructure, such as external sources (e.g.
social media and other internet-based sources, including the Darknet), to
establish research activities with a static dataset._
* Simulation
_Dataset captured in real time from a testbed or lab infrastructure to monitor
it and test optimization strategies from internal organisation sources, e.g.
network infrastructure._
* Derived or Compiled Data
_The intelligent **Big Data Collection and Processing Module (BDCPM)** uses
specialised crawlers to acquire risk-related Data._
## 3.1 Overview of Research Objectives (ROs) scenarios
The interactions mapped in the Research Objectives (ROs) scenarios have
determined the Data sources, as well as the connections that will take place
in SECONDO. A short description of each RO’s scenario can be found below:
**RO1. Design and develop an extended risk analysis metamodel.**
One of the key contributions of the SECONDO programme in the area will be the
design, analysis and implementation of a Quantitative Risk Analysis Metamodel
(QRAM) that will utilise advanced security metrics to quantitatively estimate
the exposed cyber risks. To implement the desired functionalities the
following SECONDO modules will be implemented:
* **Risk Analysis Ontology and Harmonisation Module (RAOHM)**
_RAOHM receives the outcomes of the existing risk analysis tools and
harmonises them using a common vocabulary with straightforward definition in
order to be used by QRAM (Leader: UPRC)_
* **Social Engineering Assessment Module (SEAM)**
_SEAM interacts with users to devise their behaviour using penetration testing
approaches and it provides specific numeric results on risky actions, (i.e.
percentage of users that open suspect files or execute Trojans, etc.) (Leader:
UPRC)._
* **Intelligent Big Data Collection and Processing Module (BDCPM)**
_BDCPM uses specialised crawlers to acquire risk-related Data either from
internal organisation sources, e.g. network infrastructure or external sources
such as social media and other internet-based sources, including Darknet.
(Leader: LST)_
**RO2. Design and develop a scenario-based risk management module that
facilitates in both cost-effective risk management and optimised security
investments.**
Cyber Security Investment Module (CSIM) will be designed and implemented. CSIM
will provide decision support for organisations that seek an optimal
equilibrium point (i.e. balance) between spending on cyber security investment
and cyber insurance fees.
CSIM will use the following results/procedures/modules outcome as an input:
* Costs for attacking and defending will be investigated and they will be given as an input to CSIM. (Leader: CUT)
* The outcome of the provided extended QRAM
* The results of BDCPM that provides analytics on Internet sources regarding state-of-the-art security solutions as well as their cost (Leader: LST)
* The outcome of the Game Theoretic Module (GTM) that models all possible attacking scenarios and defensive strategies, (i.e. available security controls), by employing attack graphs (Leader: FOGUS)
* The outcome of the Econometrics Module (ECM) that provides estimates of all kinds of costs of potential attacks and it takes into account costs, (i.e. purchase, installation, execution, etc.), of each possible security control using a set of existing econometric models; (Leader: CUT)
* The outcome of the Continuous Risk Monitoring Module (CRMM) that assesses on a continuous basis the performance of the implemented risk-reducing cyber security controls allowing the adaptation of the cyber insurance contract to the changing IT environment and the evolving cyber threat landscape (Leader: UBI)
**RO3. Design and develop a cyber insurance module that estimates cyber
insurance exposure and derives coverage and premiums.**
The Cyber Insurance Coverage and Premiums Module (CICPM) will compute premium
curves and coverages as a function of the organisation’s security level (can
be used by clients). CICPM will communicate with CRMM for monitoring the
conditions that violate cyber insurance contract agreements toward resolving
conflicts.
CICPM will use the following results/outcomes/policies as an input to propose
the insurance calculation tool:
* The outcome of the proposed QRAM.
* The defending policies selected to be applied in order to provide optimal protection strategies as well as the results of the related econometric parameters that justify the cost effectiveness of the considered security investments (Leader: UPRC).
* The results of analytics on cyber insurance environment and market (Leader: CRO).
* The underwriter’s strategy (Leader: SURREY).
**RO4. Use smart contracts and a blockchain to empower cyber insurance
claim.**
SECONDO will deploy a blockchain, which is a distributed decentralised
Database that maintains continuously growing blocks of Data records, in which
all blocks are tightly chained together against information tampering. SECONDO
will use a private ledger, which provides secure access control on Data
records, to hold an inventory of assets and information regarding security and
privacy risk measurable indicators of an organisation (cyber insurance
client). The ledger will be updated based on information received from CRMM.
By using smart contracts, the traditional physical paper-based process and
endorsement will be turned into digital formats, which brings convenience to
Data management (Leader: SURREY).
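
The tamper-evidence property described here can be illustrated in a few lines
of Python: each block commits to its records and to the hash of the previous
block, so altering any earlier record invalidates every later link. This is a
didactic sketch, not SECONDO's ledger implementation.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block body (records + link to the predecessor)."""
    body = {"records": block["records"], "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(records: list, prev_hash: str) -> dict:
    block = {"records": records, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

genesis = make_block(["asset inventory v1"], "0" * 64)
nxt = make_block(["CRMM indicator update"], genesis["hash"])

# Tampering with an earlier block breaks the link stored in the next one
genesis["records"][0] = "asset inventory v1 (altered)"
assert nxt["prev_hash"] != block_hash(genesis)
```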
## 3.2 SECONDO Data Sources
**In the context of the project, SECONDO will not collect /process Personal
Data to conduct its research.** The major Data sources, as these have been
identified in SECONDO Description Of Action (DOA), are described below.
UPRC and LST are nodes of QRAM. RAOHM (Leader: UPRC), as a main part of the
SECONDO risk analysis module, receives the outcomes of the existing risk
analysis tools and harmonises them using a common vocabulary with
straightforward definitions. Internal organisation sources (e.g. network
infrastructure) and external sources such as social media and other
internet-based sources, including the Darknet, will be used by BDCPM to
acquire risk-related Data. In the context of the QRAM, SEAM (Leader: UPRC)
interacts with users to devise their behaviour using penetration testing
approaches, and it provides specific numeric results on risky actions (i.e.
percentage of users that open suspect files or execute Trojans, etc.).
For the CSIM phase, costs for attacking and defending will be investigated and
given as an input to CSIM (Leader: CUT), together with the results of BDCPM,
which provides analytics on Internet sources regarding state-of-the-art
security solutions as well as their cost (Leader: LST). CSIM will also use the
outcome of the Game Theoretic Module (GTM), which models all possible
attacking scenarios and defensive strategies (i.e. available security
controls) by employing attack graphs (Leader: FOGUS).
ECM provides estimates of all kinds of costs of potential attacks, taking into
account the costs (i.e. purchase, installation, execution, etc.) of each
possible security control, using a set of existing econometric models (Leader:
CUT). CRMM assesses on a continuous basis the performance of the implemented
risk-reducing cyber security controls allowing the adaptation of the cyber
insurance contract to the changing IT environment and the evolving cyber
threat landscape (Leader: UBI).
CICPM will compute premium curves and coverages as a function of the
organisation’s security level (can be used by clients). CICPM will communicate
with CRMM for monitoring the conditions that violate cyber insurance contract
agreements toward resolving conflicts. CICPM will use the QRAM’s outcome,
Cyber insurance ontology (Lead: UPRC), results of analytics on cyber insurance
environment and market (Leader: CRO), the underwriter’s strategy (Leader:
SURREY).
As mentioned before, SECONDO will use a private ledger to hold an inventory of
assets and information regarding security and privacy risk measurable
indicators of an organisation (cyber insurance client). By using smart
contracts, the traditional physical paper-based process and endorsement will
be turned into digital formats, which brings convenience to Data management
(Leader: SURREY).
# 4\. Data Archiving and Preservation (including storage and backup)
The collected Data will be stored in secure servers, only accessible to the
consortium members. If any identifiable Data are required for the research
purposes, access to and distribution of this Data will be granted only after
explicit permission and after the agreement of the user participants.
Authentication will be required to access stored Data on the research site.
Authorized consortium members will have access to the Data after
authentication with a centralized server and on a need-to-know basis.
Consortium members will have access rights to add Data to the identity
Database. No editing or reading rights will be granted to them to prevent
alteration/disclosure of private Data, if a specific permission is not granted
by the respective user participant.
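
A minimal sketch of this add-only access model follows; the role names and
rights are illustrative, not the project's implementation. Consortium members
receive the right to add Data by default, while read or edit rights exist only
as explicit, per-participant grants.

```python
from enum import Flag, auto

class Right(Flag):
    NONE = 0
    ADD = auto()
    READ = auto()
    EDIT = auto()

# Default policy: members may add Data but neither read nor edit it
DEFAULT_RIGHTS = {"consortium_member": Right.ADD}

def authorize(role: str, requested: Right, explicit_grant: Right = Right.NONE) -> bool:
    """Allow an operation only if covered by the default role rights or by
    an explicit grant from the respective user participant."""
    granted = DEFAULT_RIGHTS.get(role, Right.NONE) | explicit_grant
    return requested in granted

assert authorize("consortium_member", Right.ADD)
assert not authorize("consortium_member", Right.READ)
assert authorize("consortium_member", Right.READ, explicit_grant=Right.READ)
```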
All technical partners participating in SECONDO have previous experience in
storing and processing user Data. This implies that all of them have the
appropriate competence and infrastructure to address the processing of SECONDO
user Data. This will assure secure storage, delivery and access of Personal
Data, as well as managing the rights of the users. In this way, there is
complete guarantee that the accessed, delivered, stored and transmitted
content will be managed by the right persons, with well-defined rights, at the
right time.
Depending on each Dataset, the Data archiving and preservation procedures that
will be put in place for long-term preservation of the Data will be the
responsibility of the corresponding Data Controller. This includes the
indication of how long the Data should be preserved, what its approximate end
volume is, what the associated costs are and how these are planned to be
covered. As mentioned before, privacy-preserving smart contracts will be
leveraged to hide sensitive client information, while secure encryption
techniques will be applied to Data storage.
# 5\. Ethical Aspects and Privacy and Security Requirements
Privacy and Data protection are fundamental rights which need to be protected.
Privacy can mean different things in different contexts and cultures. It is
therefore important to detail the purpose of the research according to the
different understandings of privacy.
Moreover, Data protection aims at guaranteeing the individual’s right to
privacy. It refers to the technical and legal framework designed to ensure
that Personal Data are safe from unforeseen, unintended or malevolent use.
Data protection therefore includes e.g., measures concerning collection,
access to Data, communication and conservation of Data. In addition, a Data
protection strategy can also include measures to assure the accuracy of the
Data. In the context of research, privacy issues arise whenever Data relating
to persons are collected and stored, in digital form or otherwise. The main
challenge for research is to use and share the Data, and at the same time
protect personal privacy [7]. In order to ensure respect for Data protection
and privacy, the European University Institute (EUI) has adopted a Data
Protection Policy [14] that must be respected by all EUI members and which is
inspired by the EU Data protection rules. If the research is exclusively
carried out at the EUI’s premises, the applicable Data protection framework is
the EUI’s Data Protection Policy, complemented when necessary by local privacy
and Data protection laws.
In legal terms, ‘processing of Personal Data’ means: ‘any operation or set of
operations which is performed upon personal data, whether or not by automatic
means, such as collection, recording, organisation, storage, adaptation or
alteration, retrieval, consultation, use, disclosure, transmission,
dissemination or otherwise making available, alignment or combination,
blocking, erasure or destruction’ [11]. Additionally, if a study will use
Personal Data on an individual who can be identified, this may fall under the
remit of the Data Protection Act 2018. It is the Host Institution’s
responsibility to ensure that the provisions of the Act are met [15].
Article 2 of the EUI’s Data Protection Policy indicates some categories of
data that are more sensitive than other personal data and therefore require
special treatment (‘Sensitive Data’). Sensitive Data are those revealing
racial or ethnic origin, political opinions, religious or philosophical
beliefs, trade-union membership, genetic Data, biometric Data, Data concerning
health and Data relating to sexual orientation or activity. As a rule, the
processing of sensitive data is prohibited. However, Article 8 of the EUI’s
Data Protection Policy provides for specific circumstances, which allow for
the processing of sensitive data. The most common in research is upon the
**Data subject’s _explicit_ consent**.
As mentioned before, an important area of concern with Social Media Website
(SMW) research is the protection of confidentiality: studies that publish
direct text quotes from an SMW may directly identify participants, since
entering a direct quote into a search engine can lead to a specific web link,
such as that person’s LinkedIn profile.
**SECONDO will not collect or process personal data to conduct its research.
Therefore, a data protection impact assessment shall not be conducted.**
**Nevertheless, the SECONDO consortium and the advisory board will monitor
closely the activities of the project and in case there is a requirement for
collecting/processing personal data a risk evaluation will be conducted.**
The Ethics Board is formed by the following persons, who are closely involved
in ethical procedures within the project and to whom any issue arising during
the project, especially those involving end-users, would be reported.
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Name**
</th> </tr>
<tr>
<td>
UPRC
</td>
<td>
Christos Xenakis
</td> </tr>
<tr>
<td>
SURREY
</td>
<td>
Emmanouil (Manos) Panaousis
</td> </tr>
<tr>
<td>
CUT
</td>
<td>
Michael Sirivianos
</td> </tr>
<tr>
<td>
UBI
</td>
<td>
Dimitrios Alexandrou
</td> </tr>
<tr>
<td>
LST
</td>
<td>
Evangelos Kotsifakos
</td> </tr>
<tr>
<td>
CRO
</td>
<td>
Nikos Georgopoulos
</td> </tr>
<tr>
<td>
FOGUS
</td>
<td>
Dimitrios Tsolkas
</td> </tr> </table>
The Ethics Board will define a proper procedure for informing the Data
subjects about any ethics-related issue (privacy, GDPR compliance, etc.), its
possible consequences and how their fundamental rights will be safeguarded.
The Ethics Board will make sure that the Data subjects have understood this
information by asking for their consent. These procedures will be kept in a
dedicated git repository that will only be accessible by the Ethics Board and
the SECONDO Platform Administrators. This repository was defined in
deliverable D1.1 - Quality Assurance Plan, while the procedures will be
reported in deliverable D6.2 – Platform Assessment.
As mentioned in D8.1_GEN-requirement_no2, Professor **Konstantinos
Lambrinoudakis**, as a member of the Hellenic Data Protection Authority
(HDPA), participates in privacy and GDPR related events, conferences and
talks.
## 5.1 General Data Protection Regulation (GDPR)
If Data controllers intend to use Personal Data that were collected from a
previous research project, they must provide details regarding the initial
Data collection, methodology and informed consent procedure, to the extent
that consent is the appropriate legal basis. They must also confirm that they
comply with the Data protection principles and that they for example have
permission from the Data controller to use the Data in the SECONDO project.
Where the planned use of Data is predicated on the ‘legitimate interests’ of
the Data controller, the nature and purpose of the Dataset must be set out in
detail, together with the safeguards (e.g. anonymisation or pseudonymisation
techniques) that warrant its use in the SECONDO project (GDPR, Article 89).
If Data controllers’ intended Data processing is based on national legislation
or international regulations authorising the research, or if a demonstrable
overriding public interest (e.g. public health, social protection) allows the
use of a particular Dataset, they must make reference to the relevant Member
State or Union law or policy.
According to [16], one of the best ways to mitigate the ethical concerns arising
from the use of Personal Data is to anonymize them so that they no longer
relate to identifiable persons. Data that no longer relate to identifiable
persons, such as aggregate and statistical Data, or Data that have otherwise
been rendered anonymous so that the Data subject cannot be re-identified, are
not Personal Data and are therefore outside the scope of Data protection law.
However, even if the plan is to use only anonymized Datasets, significant
ethics issues may still be raised, and the Database would become rather
unusable. These ethics issues could relate to the origins of the Data or the
manner in which they were obtained. Therefore, the source of the Datasets
intended for use must be specified and any ethics issues that arise must
be addressed. The potential for misuse of the research methodology or findings
must also be considered, as well as the risk of harm to the group or community
that the Data concern.
Where it is necessary to retain a link between the research subjects and their
Personal Data, Data controllers should, wherever possible, pseudonymize the
Data in order to protect the Data subject’s privacy and minimize the risk to
their fundamental rights in the event of unauthorized access. However, because
SECONDO uses only simulated and/or synthetic Data for validation purposes
during the project, no pseudonymisation will be used. Data will be protected
by other means of Data security.
When Personal Data moves across borders outside the Union it may put at
increased risk the ability of natural persons to exercise Data protection
rights in particular to protect themselves from the unlawful use or disclosure
of that information.
National authorities in the Member States are called upon by Union law to
cooperate and exchange Personal Data so as to be able to perform their duties
or carry out tasks on behalf of an authority in another Member State.
Cross-border cooperation and agreements to deliver effective Data protection
are essential, particularly if the EU is to maintain its values and uphold its
principles.
To achieve this, the European Data Protection Supervisor (EDPS) regularly
interacts with EU and international Data Protection Authorities (DPAs) and
Regulators to influence and develop cross-border enforcement.
## 5.2 Security and Authentication Legislation
* **The Directive (EU) 2016/1148 on Network and Information Security (NIS Directive)** provides legal measures to boost the overall level of cybersecurity in the EU and is the first piece of EU-wide cybersecurity legislation. The goal of the NIS Directive is to establish a minimum level of (cyber) security for network and information systems across the EU, particularly for those operating essential services. The Directive addresses specifically operators of essential services and digital service providers. However, it is up to the Member States to assess which entities meet the criteria of the definition of an operator of an essential service; Member States must identify the operators of essential services.
* **The Regulation on ENISA, the "EU Cybersecurity Agency", repealing Regulation (EU) 526/2013, and on Information and Communication Technology cybersecurity certification (Cybersecurity Act)** [17] was adopted by the European Parliament on 12 March 2019. This Act aims to strengthen Europe’s cybersecurity by replacing existing national cybersecurity certification schemes with European schemes which will define security objectives. SECONDO will comply with the Cybersecurity Act’s principles of security by design and by default.
# 6\. The SECONDO FAIR Dataset Template Questionnaire
This section gathers all FAIR forms completed with information from Data
Controllers. The following questionnaires have been addressed by the
responsible partners with a level of detail appropriate to the project’s
progress. The SECONDO FAIR Dataset Template Questionnaire (Table 6-1) includes
a set of questions that all Data Controllers are required to fill in for each
Dataset [3], [7], [8]. The questionnaire template has been reviewed by the
Project Coordinator (UPRC) and the Ethics Board for completeness and
compliance with the FAIR DMP directives.
As mentioned before, the DMP is intended to be a living document in which
information can be made available gradually through successive updates as the
implementation of the project progresses. The Data Controllers will be
responsible for updating their respective tables every time significant
changes occur.
**Table 6-1: SECONDO FAIR Dataset Template Questionnaire**
<table>
<tr>
<th>
**Project Acronym**
</th>
<th>
</th>
<th>
**Project Number**
</th> </tr>
<tr>
<td>
**SECONDO**
</td>
<td>
</td>
<td>
**823997**
</td> </tr>
<tr>
<td>
</td>
<td>
**Description**
</td> </tr>
<tr>
<td>
**Title**
</td>
<td>
</td>
<td>
Name of the Dataset
_Please provide a meaningful name so that we can refer to it unambiguously in
the future_
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
</td>
<td>
SECONDO task/subtask where
Dataset was generated
_Describe the overall setting of the use case in a scenario style, clarify how
things will really happen during pilots, who will be involved, who will
benefit, etc._
</td> </tr>
<tr>
<td>
**Data owner/controller**
</td>
<td>
</td>
<td>
Names and addresses of the organizations or people who own/control the Data
</td> </tr>
<tr>
<td>
**Time period covered by the Dataset**
</td>
<td>
</td>
<td>
Start and end date of the period covered by the Dataset
</td> </tr>
<tr>
<td>
**Subject**
</td>
<td>
</td>
<td>
Keywords or phrases describing the subjects or content of the Data
</td> </tr>
<tr>
<td>
**Language**
</td>
<td>
</td>
<td>
All languages used in the Dataset
</td> </tr>
<tr>
<td>
**Variable list and codebook**
</td>
<td>
</td>
<td>
All variables in the Data files, with description of the variable name,
length, type, values
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
</td>
<td>
Description of Data quality standards and procedures to assure Data quality
</td> </tr>
<tr>
<td>
**File inventory**
</td>
<td>
</td>
<td>
All files associated with the project, including extensions
</td> </tr>
<tr>
<td>
**File formats**
</td>
<td>
</td>
<td>
Format of the file
</td> </tr>
<tr>
<td>
**File structure**
</td>
<td>
</td>
<td>
Organization of the Data file(s) and layout of the variables, where applicable
</td> </tr>
<tr>
<td>
**Necessary software**
</td>
<td>
</td>
<td>
Names of any special-purpose software packages required to create, view,
analyse, or otherwise use the Data
</td> </tr>
<tr>
<td>
**Details on the procedures for obtaining informed consent**
</td>
<td>
</td>
<td>
Please give details on the procedures for obtaining informed consent from the
Data subjects (e.g. providing an information sheet together with the consent
form).
In case of children/minors and/or adults unable to give informed consent,
indicate the tailored methods used to obtain consent. According to the H2020
Guidelines, if the Data subjects are unable to give consent in writing, for
example because of illiteracy, the non-written consent must be formally
documented and independently witnessed. Please explain how you intend to
document oral consent. In the very exceptional case that it can’t be recorded
please give reasons. If you will use deception for another type of Data
subjects, you must obtain retrospective informed and free consent as well as
debrief the participants.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Deception requires strong justification and appropriate assessment of the
impact and the risk incurred by both researchers and participants.
</td> </tr>
<tr>
<td>
**Measures taken to prevent the risk of enhancing**
**vulnerability/stigmatization of individuals/groups**
</td>
<td>
</td>
<td>
_Please indicate any such protective measures (e.g. use of anonymization
techniques, use of pseudonyms, non-disclosure of audio-visual materials, voice
records, etc.)_
</td> </tr>
<tr>
<td>
**Description of the processing operations (i.e. what you do with**
**Personal Data and how)**
</td>
<td>
</td>
<td>
Processing of ‘Personal Data’ means any operation or set of operations which
is performed upon Personal Data, whether or not by automatic means, such as:
•Collection (digital audio recording, digital video caption, etc.)
•Recording
•Organization and storage
(cloud, LAN or WAN servers)
•Adaptation or alteration
(merging sets, amplification, etc.)
•Retrieval and consultation
•Use
•Disclosure, transmission, dissemination or otherwise making available (share,
exchange, transfer, access to the
Data by a third party)
•Alignment or combination
•Blocking, deleting or destruction, etc.
_Please describe in detail the processing operations that you will perform for
conducting your research and give detailed feedback on participants. Indicate
also if a copy of notification/authorization for tracking or observation is
required._
_Any type of research activity may involve processing of Personal Data (ICT
research, genetic sample collection, research activities involving personal
records (financial, criminal, education, etc.), lifestyle and health
information, family histories, physical characteristics, gender and ethnic
background, location tracking and domicile information, etc.), and any method
used for tracking or observing._
</td> </tr> </table>
<table>
<tr>
<th>
1\. Data Summary
</th> </tr>
<tr>
<td>
1.1 Purpose
</td> </tr> </table>
<table>
<tr>
<th>
</th> </tr>
<tr>
<td>
1.2 Types and formats of Data
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
1.3 Re-use of existing Data
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
1.4 Origin
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
1.5 Expected size
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
1.6 Data utility
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
2\. FAIR Data
</td> </tr>
<tr>
<td>
2.1 Making Data findable (Dataset description: Metadata, persistent and unique
identifiers e.g.)
</td> </tr>
<tr>
<td>
2.1.1 Are the Data produced and/or used in the project discoverable with
Metadata, identifiable and locatable by means of a standard identification
mechanism?
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
2.1.2 What naming conventions do you follow?
</td> </tr>
<tr>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
2.1.3 Will search keywords be provided that optimize possibilities for re-use?
</th> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
2.1.4 Do you provide clear version numbers?
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
2.1.5 What Metadata will be created?
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
2.2 Making Data openly Accessible
_which Data will be made openly available and if some Datasets remain closed,
the reasons for not giving access; where the Data and associated Metadata,
documentation and code are deposited (repository?); how the Data can be
accessed (are relevant software tools/methods provided)?_
</td> </tr>
<tr>
<td>
2.2.1 Which Data produced and/or used in the project will be made openly available as the default?

2.2.2 How will the Data be made accessible?

2.2.3 What methods or software tools are needed to access the Data?

2.2.4 Is documentation about the software needed to access the Data included?

2.2.5 Is it possible to include the relevant software?

2.2.6 Where will the Data and associated Metadata, documentation and code be deposited?

2.2.7 Have you explored appropriate arrangements with the identified repository?

2.2.8 If there are restrictions on use, how will access be provided?

2.2.9 Is there a need for a Data access committee?

2.2.10 Are there well-described conditions for access?

2.2.11 How will the identity of the person accessing the Data be ascertained?

**2.3 Making Data Interoperable**
_(which standard or field-specific Data and Metadata vocabularies and methods will be used)_

2.3.1 Are the Data produced in the project interoperable?

2.3.2 What Data and Metadata vocabularies, standards or methodologies will you follow to make your Data interoperable?

2.3.3 Will you be using standard vocabularies for all Data types present in your Data set, to allow interdisciplinary interoperability?

2.3.4 If it is unavoidable that you use uncommon or project-specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?

**2.4 Increase Data re-use**
_(which Data will remain re-usable and for how long; whether an embargo is foreseen; how the Data is licensed; Data quality assurance procedures)_

2.4.1 How will the Data be licensed to permit the widest re-use possible?

2.4.2 When will the Data be made available for re-use?

2.4.3 Are the Data produced and/or used in the project useable by third parties, in particular after the end of the project?

2.4.4 How long is it intended that the Data remain re-usable?

2.4.5 Are Data quality assurance processes described?

**3 Allocation of resources**

3.1 What are the costs for making Data FAIR in your project?

3.2 How will these costs be covered?

3.3 Who will be responsible for Data management in your project?

3.4 Are the resources for long-term preservation discussed?

**4 Data security**

4.1 What provisions are in place for Data security?
_Please indicate any methods considered for secure Data storage and for the transfer of sensitive Data._

4.2 Are the Data safely stored in certified repositories for long-term preservation and curation?

**5 Ethical aspects /** Protection of Personal Data: notification of processing operations

5.1 Are there any ethical or legal issues that can have an impact on Data sharing?

5.2 Is informed consent for Data sharing and long-term preservation included in questionnaires dealing with Personal Data?

5.3 Name of the Processor(s)
_Please indicate the names of any other natural or legal person that may process the Data. If processors can be categorised into groups, please refer to them by group rather than by name; otherwise, indicate their names._

5.4 Lawfulness of Processing
_Data Controllers must process only those Personal Data that are necessary during the project and for a specific purpose. Processing Personal Data that are not essential to the research may even expose Data Controllers to allegations of ‘hidden objectives’, i.e. collecting information with the Data subjects’ permission for one purpose and then using that information for another purpose without specific permission._

5.5 Categories of Data Subjects
_Please indicate the categories of Data subjects involved in the processing operations of the project._

5.6 Categories of Personal Data
_Please list concretely the categories of Personal Data that you will process:_
* _Normal Personal Data: name, home address, e-mail address, location Data, etc._
* _Sensitive Data: religious beliefs, political opinions, medical Data, sexual identity, etc._

5.7 Rights of Data subjects
_Under Article 16 of the EUI’s Data Protection Policy, Data subjects enjoy the following rights concerning their Personal Data:_
* _To be informed whether, how, by whom and for which purpose they are processed_
* _To ask for their rectification, in case they are inaccurate or incomplete_
* _To demand their erasure in case the processing is unlawful or no longer lawful (‘right to be forgotten’)_
* _To block their further processing whilst the conditions under letters b) and c) of this Article are verified_

_Note: Please indicate how you will ensure the Data subjects’ rights. E.g. participants will be free to withdraw at any time without justification; the Data collected prior to the withdrawal will be deleted. In such a case, you may need to ensure the erasure of the collected Data while maintaining anonymity. To do so, you may use a pseudonym for each participant, ensuring that the key to the pseudonyms is password-protected and available only to the Data Controller (see the sketch after this template)._

5.8 Safeguards taken to protect the Data subjects’ identity
_Under Article 2 of the EUI’s Data Protection Policy, identifiable persons can be identified directly or indirectly, in particular by reference to an identification number or to one or more factors specific to their physical, physiological, genetic, mental, economic, cultural or social identity. Please provide details on the measures taken to avoid direct or indirect identification of the Data subjects, e.g. by using anonymisation techniques or pseudonyms. For example: names of the Data subjects will not be disclosed, at any time, in audio recordings or published material; pseudonyms (a reversible system of coding, used in order to be able to re-contact participants if needed) will be used in all documentation; and any additional information that may reveal the identity of participants will be concealed when publishing._
_Destroy any residual information that could lead to the identification of participants at the end of the project. You must explain this procedure clearly to participants during the ‘recruitment’ process._

**6 Other issues**

6.1 Do you make use of other national/funder/sectorial/departmental procedures for Data management? If yes, which ones?
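As an illustration of the reversible pseudonym coding described in points 5.7 and 5.8 above, the following minimal sketch keeps the pseudonym key in a file encrypted under a password held only by the Data Controller. It is a sketch, not a prescribed procedure: all names, file names and parameters are illustrative, and it assumes the third-party `cryptography` package is available.

```python
import base64, json, os, secrets
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def _fernet(password: str, salt: bytes) -> Fernet:
    # Derive a symmetric key from the Data Controller's password.
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=480_000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(password.encode())))


def pseudonymise(participants, password, keyfile="pseudonym_key.enc"):
    """Assign a random pseudonym to each participant and store the
    pseudonym -> identity mapping encrypted under the given password."""
    mapping = {f"P{secrets.token_hex(4)}": name for name in participants}
    salt = os.urandom(16)
    token = _fernet(password, salt).encrypt(json.dumps(mapping).encode())
    with open(keyfile, "wb") as fh:
        fh.write(salt + token)
    # Return identity -> pseudonym, for use in documentation and publications.
    return {name: pseudonym for pseudonym, name in mapping.items()}


def re_identify(pseudonym, password, keyfile="pseudonym_key.enc"):
    """Reverse the coding (e.g. to re-contact a participant if needed)."""
    blob = open(keyfile, "rb").read()
    salt, token = blob[:16], blob[16:]
    mapping = json.loads(_fernet(password, salt).decrypt(token))
    return mapping[pseudonym]
```

Deleting the key file at the end of the project destroys the residual link to participants' identities while leaving the pseudonymised records usable.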
1504_GrowBot_824074.md
# 1 Introduction
This deliverable presents the first version of the Data Management Plan (DMP)
for the GrowBot project. This document provides a preliminary analysis of the
data management policy to be applied by the Partners to datasets generated
within the Project. In particular, the DMP identifies the main data to be
generated within GrowBot, outlining the handling of research data during the
project as well as how and what parts of the datasets will be openly shared.
This document is intended for consortium internal use, aiming to provide
guidance to Project Partners on data management. The DMP is indeed a useful
tool to agree on data processing of the GrowBot project, facilitating the
creation of a common understanding and, where possible, common practices.
This deliverable is submitted to the European Commission in M6 of the first
project year (June 2019, D1.1) and represents a preliminary plan. The document
will be further detailed, updated, and corrected in line with the project life
cycle.
The document follows the EC guidelines and templates for projects participating
in the Open Research Data Pilot:
* H2020 Programme – AGA Annotated Model Grant Agreement - Open access to research data 1
* Guidelines to the Rules on Open Access to Scientific Publications and Open Access to Research Data in Horizon 2020 2
* Guidelines on FAIR Data Management in Horizon 2020 2
* Template for the Data Management Plan 3
* OpenAIRE Research Data Management Briefing Paper 4
* DCC Checklist for writing a DMP 5
The present Data Management Plan also reflects the provisions established by
the project contracts and complements the project exploitation, dissemination
and IPR procedures and decisions defined in different deliverables.
The relationship between the DMP and each key document are described below in
Table 1.
## 1.1 Objectives
According to the EC Guidelines on Data Management in Horizon 2020, scientific
research data should be findable, accessible, interoperable and re-usable
(FAIR):
* **Findable:** Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?
* **Accessible:** Are the data and associated software produced and/or used in the project accessible and in what modalities, scope, licenses (e.g. licencing framework for research and education, embargo periods, commercial exploitation, etc.)?
* **Interoperable:** Are the data produced and/or used in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available [open] software applications, and in particular facilitating re-combinations with different datasets from different origins)?
* **Re-usable:** Are the data produced and/or used in the project useable by third parties, in particular after the end of the project?
### Table 1. Relation to project key documents and deliverables

<table>
<tr>
<th>Document</th>
<th>Access 7</th>
<th>Availability</th>
<th>Relationship to GrowBot DMP</th>
</tr>
<tr>
<td>Grant Agreement: core text</td>
<td>Confidential</td>
<td>
* Participant portal;
* GrowBot repository 8
</td>
<td>
* Article 27 details the obligation to protect results (27.1) and to provide information on EU funding (27.3)
* Article 29 details the obligation to disseminate results, defines open access to research data (29.3) as well as the obligation to provide information on EU funding (29.4) and to exclude Commission responsibility via a disclaimer (29.5)
* Article 36 details confidentiality obligations
* Article 37 details security-related obligations
* Article 39 details obligations to protect personal data
</td>
</tr>
<tr>
<td>Consortium Agreement</td>
<td>Consortium</td>
<td>GrowBot repository</td>
<td>
Chapter 4.1 on the General principles:
“Each Party undertakes to take part in the efficient implementation of the Project, and to cooperate, perform and fulfil, promptly and on time, all of its obligations under the Grant Agreement and this Consortium Agreement as may be reasonably required from it and in a manner of good faith as prescribed by Belgian law.
Each Party undertakes to notify promptly, in accordance with the governance structure of the Project, any significant information, fact, problem or delay likely to affect the Project.
Each Party shall promptly provide all information reasonably required by a Consortium Body or by the Coordinator to carry out its tasks.
Each Party shall take reasonable measures to ensure the accuracy of any information or materials it supplies to the other Parties.”
This is a general declaration of the partners to abide by the rights and obligations set out in the Grant Agreement.
</td>
</tr>
<tr>
<td>Dissemination plan (D11.2)</td>
<td>Public</td>
<td>
* Participant portal;
* GrowBot repository
</td>
<td>
The deliverable deals with a detailed definition of the strategy, the planned activities outlined in the Dissemination, Communication, and Exploitation WP (WP11), and their expected impact. The CoDE plan will be periodically updated according to the progress and emerging results of the project, considering changes in the stakeholders, work context and potential use of results during the project lifetime.
</td>
</tr>
</table>

7. Confidential: limited to Consortium, European Commission, appointed external evaluators and other EU bodies; Consortium: originally conceived as consortium but can be made available to European Commission, appointed external evaluators and other EU bodies if necessary; Public: public and fully open availability
8. _https://www.growbot.eu/login_
## 1.2 DMP management and update
Four DMP deliverables have to be submitted to the European Commission in M6
(June 2019, D1.1), M18 (June 2020, D1.4), M36 (December 2021, D1.7), and M48
(December 2022, D1.10).
Different versions will be identified by a version number and a date. The
version number will be composed of two digits separated by a period: the digit
before the period represents in ascending orders the official versions
submitted to the European Commission as deliverables; digits after the period
represents the periodic internal revisions of such official versions.
Official versions will be stored on the project online repository as PDF
files. An editable word copy of the latest version will also be stored to
facilitate revision and update of the already identified datasets and
policies. If during the project life cycle, a new dataset is identified,
partners can submit a new form through the online tool, automatically
notifying the coordinator. IIT will then be in charge of updating the document
and its annexes, uploading them on the repository and notify the consortium
through the project mailing list system.
# 2 Data summary
## 2.1 GrowBot datasets
For the first version of the project DMP, the analysis is based on eleven
datasets whose key details are summarized in Table 2. The descriptions of each
data set are provided in Annex 2.
### Table 2. Preliminary list of GrowBot datasets
| REF | TITLE | PARTNER | DATA TYPE | WP & TASK | ~ SIZE |
| --- | --- | --- | --- | --- | --- |
| **DS1** | Biomechanical characterization of selected climbing plants | ALU-FR, CNRS, IIT-CMBR | Experimental | WP3: T3.1, T3.3 | 5 GB |
| **DS2** | Bioinspired robot control | TAU, SSSA, IIT-CMBR, CNRS, ALU-FR, GSSI | Results/analysis | WP3: T3.2; WP6: T6.1, T6.2, T6.3 | 5 GB |
| **DS3** | Networking information model | GSSI | Results/analysis | WP3: T3.4 | 5 GB |
| **DS4** | Microfabricated spinner of responsive materials with attachment capabilities | Linari, IIT-POLBIOM, HZG | Results/analysis | WP4: T4.1, T4.3; WP5: T5.1 | 5 GB |
| **DS5** | Multi-filament deposition mechanism | IIT-CMBR | Results/analysis | WP5: T5.2 | 500 MB |
| **DS6** | Micro-extrusion prototype | HZG, IIT-POLBIOM | Results/analysis | WP4: T4.2; WP5: T5.3 | 500 MB |
| **DS7** | Soft “searcher-like” robot | IIT-CMBR, SSSA | Results/analysis | WP5: T5.4 | 1 GB |
| **DS8** | Microbial fuel cells (MFCs) | Bioo | Results/analysis | WP7: T7.1 | 500 MB |
| **DS9** | Plant-robot interfaces for energy harvesting | IIT-CMBR | Results/analysis | WP7: T7.2 | 500 MB |
| **DS10** | Robot integration | IIT, All | Experimental | WP8: T8.1, T8.2 | 10 GB |
| **DS11** | Robot validation | CNRS, All | Experimental | WP9: T9.1, T9.2, T9.3, T9.4 | 10 GB |
## 2.2 General data purpose and utility
The data gathered within the GrowBot project can be useful for several
purposes. In summary:
* Biological research activities aim at deeply investigating the selected biological models of climbing plants in terms of morphology, physiology, anatomy, attachment capability, and biomechanical features (WP3 - Task 3.1 and Task 3.3). These characteristics are needed for identifying key functional “attributes” for the definition of strategic features of robotic artefacts. At the same time, the accurate investigation of biological models will be important for shedding light on unknown biological issues.
* The research activities on plant behaviour and communication aim at studying and analysing plant behaviour (WP3 - Task 3.2) and communication abilities (WP3 - Task 3.4) in order to design innovative control architecture and networking information models for robots (WP6 - Task 6.1, Task 6.2, and Task 6.3). As in the previous topic, the outcome is twofold because the plant abilities can inspire innovative control algorithm and networking information models; and a rigorous biological investigation will contribute to solving biological questions.
* The research activities on the climbing plants’ attachment strategies will also inspire new technological solutions able to perform reversible or permanent attachment on external supports. The artefacts can work as a single attachment device or as attachment components of a more complex robotic system (WP4 - Task 4.3).
* The research activities on the development of smart materials (e.g. responsive materials, multifunctional materials, printable materials, etc.) are crucial for the generation and characterization of innovative materials that can be applied in several different fields (e.g. robotics, architecture, environmental monitoring, etc.) (WP4 - Task 4.1, and Task 4.2).
* The research activities in manufacturing aim at designing innovative 3D additive manufacturing techniques able to manage functional materials (4D printing), multi-materials, and microfibers (WP5 – Task 5.1, Task 5.2, and Task 5.3).
* The research activities on the soft “searcher-like” robot are focused on the design and development of a searcher robotic probe able to explore the surrounding environment, find external supports, and perform grasping/attachment tasks (WP5 - Task 5.4). The developed device can be potentially useful as a monitoring and grasping component of different robotic platforms.
* The research activities on plant energy harvesting aim at investigating the possibility to gather energy from the aerial and underground structure of the plants (WP7 – Task 7.1 and Task 7.2). In this case, the potential spin-off activities can be several in terms of plant energy characterization and technological outcomes.
* The research activities on characterization and validation of materials and prototypes (WP8 – Task 8.1 and Task 8.2; and WP9 – Task 9.1, Task 9.2, Task 9.3, and Task 9.4) represent a valuable source of data for similar research and for stakeholders. These activities aim to provide standard protocols for the evaluation of system performance.
GrowBot datasets will complement the scientific publications related to
the project. Datasets will be accessible through Zenodo and, when possible,
scientific publications will be directly linked to relevant software and data.
All these links will be explicitly maintained through the use of digital
object identifiers (DOI) associated with scientific papers, datasets and
software versions.
A detailed description of each dataset can be found in Annex 2.
GrowBot datasets are expected to have long-term value and utility. They are
fundamental for guaranteeing reproducible research and re-use in similar
research studies.
Moreover, the gathered data may be potentially useful to several external
entities and stakeholders interested in one or more research activities. A
preliminary list of third parties that may find access to our data fruitful:
* Research and scientific community
* Botany and functional biology
* Robotics
* Artificial Intelligence
* Material Science
* Computer Science
* Architecture
* Rescue
* Archaeology
* Industry
* Manufacturing
* Environmental monitoring
* Health-care
* Engineering
* Attachment product design
* Design
Last but not least, our results may be of interest as raw material for
producing educational and training resources. In this way, GrowBot
datasets can contribute to both school and higher education of future
generations.
## 2.3 Data technical details: origin, type, formats, and size
In the majority of GrowBot’s research activities, the partners will tend not
to re-use existing data from the literature, but rather to carry out _ad hoc_
experiments and measurements to generate the needed information, given the
need to address specific project questions.
Although several previous studies, especially in the biological field, have
already carried out investigations similar to GrowBot’s, additional and new
data are necessary to provide results and information that are directly
relevant to the GrowBot objectives.
The data will be gathered by various researchers and different partners as
detailed in Table 2 and Annex 2.
The data generated within the project will be both experimental and
theoretical, both quantitative and qualitative. Datasets will be generated
through various data collection techniques: field work in natural habitats,
experiments, observations, and modelling systems.
In more detail, GrowBot will generate different categories of data:
* **Raw collected data** – not yet subjected to quality assurance or control
* **Validated collected data** – raw data which have been evaluated for completeness, verified for compliance with the standard operating procedure (data protection included) and validated for specific quality
* **Analysed collected data** – validated data which have been processed and analysed through statistical operations
In order to maximise dataset interoperability, management and re-use, the
GrowBot consortium agreed to use, when possible, formats that are
non-proprietary, unencrypted, uncompressed and in common usage by the research
community. Since there are no unique recommendations on best data formats, and
the selected data repository 6 provides no such indication, GrowBot
partners have agreed to follow, when possible, the indications of the UK
Data Archive 7 , recommended by OpenAIRE, as indicated in **Table 3** .
### Table 3. Data recommended format
| Type of data | Recommended formats | Acceptable formats |
| --- | --- | --- |
| Tabular data with extensive metadata (variable labels, code labels, and defined missing values) | SPSS portable format (.por) | Proprietary formats of statistical packages: SPSS (.sav), Stata (.dta), MS Access (.mdb/.accdb) |
| Tabular data with minimal metadata (column headings, variable names) | comma-separated values (.csv); tab-delimited file (.tab); delimited text (.txt) with characters not present in the data used as delimiters; delimited text with SQL data definition statements | widely-used formats: MS Excel (.xls/.xlsx), MS Access (.mdb/.accdb), dBase (.dbf), OpenDocument Spreadsheet (.ods) |
| Textual data | Rich Text Format (.rtf); plain text, ASCII (.txt); Adobe Portable Document Format (PDF/A, PDF) (.pdf) | Hypertext Mark-up Language (.html); widely-used formats: MS Word (.doc/.docx); some software-specific formats: NUD*IST, NVivo and ATLAS.ti |
| Image data | TIFF 6.0 uncompressed (.tif) | JPEG (.jpeg, .jpg, .jp2) if original created in this format; GIF (.gif); TIFF other versions (.tif, .tiff); RAW image format (.raw); Photoshop files (.psd); BMP (.bmp); PNG (.png) |
| Audio data | Free Lossless Audio Codec (FLAC) (.flac) | MPEG-1 Audio Layer 3 (.mp3) if original created in this format; Audio Interchange File Format (.aif); Waveform Audio Format (.wav) |
| Video data | MPEG-4 (.mp4) | AVCHD video (.avchd); OGG video (.ogv, .ogg); motion JPEG 2000 (.mj2) |
| Documentation and scripts | Rich Text Format (.rtf); PDF/UA, PDF/A or PDF (.pdf); XHTML or HTML (.xhtml, .htm); OpenDocument Text (.odt) | plain text (.txt); widely-used formats: MS Word (.doc/.docx), MS Excel (.xls/.xlsx) |
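As a practical illustration of Table 3, the following minimal sketch converts a spreadsheet saved in a widely-used proprietary format into the recommended delimited-text format. The file names are hypothetical, and the sketch assumes the `pandas` and `openpyxl` packages are installed.

```python
import pandas as pd

# Read a dataset saved in a widely-used but proprietary format (acceptable) ...
df = pd.read_excel("measurements.xlsx")  # hypothetical input file

# ... and re-save it as comma-separated values (recommended): non-proprietary,
# unencrypted, uncompressed and UTF-8 encoded.
df.to_csv("measurements.csv", index=False, encoding="utf-8")
```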
The project will generate a large amount of data, with an overall size of
approximately 48 GB. The Zenodo platform recommends a maximum upload size of
50 GB per record; all 11 datasets should fit within this limit.
The consortium does not intend to upload copies of the same data, in order to
avoid creating multiple persistent identifiers for identical content, which
would make referencing and citation difficult.
# 3 FAIR data
## 3.1 Making data Findable
Each GrowBot dataset will be identified with a Digital Object Identifier (DOI)
so that it can be findable and easily citable.
The GrowBot consortium has chosen Zenodo as the repository for the storage of
the datasets. Zenodo provides a DOI for all publicly available uploads. In
particular, DOI versioning allows users to update the datasets while
maintaining correct citation of each version.
Zenodo adopts a linear versioning rule 8 , whereas GrowBot data versioning
will follow a “Major.Minor” numbering rule (e.g. v2.1). An increase of the
number before the period (Major) indicates a substantial change in the
structure and/or content of the dataset. An increase of the number after the
period (Minor) indicates a minimal revision, namely a quality improvement over
the existing version. During the project life, datasets will mainly undergo
minor revisions, although major revisions will be possible beyond the
end of GrowBot.
The consortium has defined a naming convention for the project datasets,
namely:
1. A prefix "GrowBot"
2. "DATA" (short for dataset) followed by a unique chronological number of the project datasets
3. Letter indicating sub-dataset (if applicable)
4. The short title of the dataset
5. Version number
For instance, the first project dataset identified in Annex 2 (DS1,
“Biomechanical characterization of selected climbing plants” in Table 2) will
be named along the lines of:
"GrowBot_DATA1_Biomechanical-characterization_v1.0"
To increase the findability of each dataset and consequent use, search
keywords will be provided once the dataset is uploaded to Zenodo.
Each project record will be annotated with metadata in order to increase data
reuse.
Zenodo follows the JSON metadata schema 9 and DataCite metadata standards,
and already provides key data documentation such as:
* Creators and their affiliation
* Data location and persistent identifier
* Chosen license
* Funding
* Related/alternate identifiers
* Contributors
* References
* Related journals, conferences, books and/or thesis
* Subjects
Moreover, the consortium will provide documentation as complete as possible to
allow third parties to properly understand the data and, where applicable,
replicate the experiments. This will include:
* **Dataset overview** – number of sub-datasets; status of documented data (complete or in progress); eventual plan of the future update
* **Methodological information** – methods used for experimental design, data collection and data processing; instruments and software used; experimental conditions; quality assurance procedures performed on data
* **Software and tools information** – Name of tool/software; reference version; reference URL; optional DOI.
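For illustration, the metadata fields listed above map directly onto Zenodo's deposit metadata. The following is a minimal sketch of creating a deposition draft through Zenodo's REST API; the token, title, creator, and other field values are placeholders, and the authoritative field list should be taken from the Zenodo API documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
ACCESS_TOKEN = "..."  # placeholder: a personal access token with deposit scope

# Illustrative metadata mirroring the documentation fields listed above;
# the values are examples, not actual GrowBot records.
payload = {
    "metadata": {
        "upload_type": "dataset",
        "title": "GrowBot_DATA1_Biomechanical-characterization_v1.0",
        "description": "Dataset overview, methodological and software information.",
        "creators": [{"name": "Surname, Name", "affiliation": "IIT"}],
        "license": "cc-by-4.0",
        "keywords": ["GrowBot", "climbing plants", "biomechanics"],
    }
}

# Create a new deposition draft with the metadata attached; data files are then
# uploaded to the returned bucket link before the record is published.
response = requests.post(ZENODO_API, params={"access_token": ACCESS_TOKEN},
                         json=payload)
response.raise_for_status()
deposition = response.json()
print(deposition["id"], deposition["links"]["bucket"])
```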
## 3.2 Making data openly Accessible
As a general rule, datasets will not be released before the publication date
of the scientific papers, patents, reports, etc. in which the data are reported
for the first time. It is the intention of the GrowBot consortium to make the
datasets publicly available as early as possible after the publication date.
Potential restrictions or embargo periods imposed by the scientific journal
will be respected, in accordance with what is set out in the Grant Agreement
(Art 29.2).
In line with the project’s Intellectual Property Rights (IPR) and exploitation
provisions, the GrowBot consortium has planned different levels of data
confidentiality:
* _Beneficiary institution access_ : The data are not disclosed at all. The partner that chooses this option believes that the dataset contains information that would lose its value if disclosed. This choice aims at protecting the information from any external access in order to exploit data for patents, publications, etc. The confidentiality must be ensured beyond the clauses agreed in the Consortium Agreement.
* _Confidential to the consortium (including EC services and GrowBot Advisory Board):_ This option is applied for data containing confidential information (e.g. exploitable results) requiring IP protection, aimed at eventual exploitation. Confidential to consortium datasets will be deposited on specific repositories (private area of project website www.growbot.eu). These repositories will be accessible uniquely by the Consortium members.
* _Open Access_ : This option is applied when data have no IP restrictions and will be openly available and re-usable.
Although the embargoed or closed-access options provided by Zenodo could be
valid, the consortium agrees that research data linked to exploitable
results will not be deposited there, to avoid compromising their protection or
commercialisation prospects. As clearly specified in Zenodo’s security
provisions, "closed access is not suitable for secret or confidential data"
since these are "stored unencrypted and may be viewed by Zenodo operational
staff" 10 . In such cases, the consortium will store the data in the private
area of the project website or in an institutional repository (if any) with a
proper cybersecurity certificate.
Visibility of and access to publicly shared datasets will be facilitated by
Zenodo’s metadata and search facilities, as well as by the automatic links to
both OpenAIRE 11 and the project’s CORDIS page.
## 3.3 Making data Interoperable
The consortium will strive to collect and document the data in a standardized
way to ensure that datasets can be correctly understood, interpreted, and re-
used.
Documentation describing the main variables included in the datasets will be
provided in order to support the interpretation and re-use.
Standard vocabulary will be used for all data types present in the dataset to
allow inter-disciplinary interoperability. In addition, the documentation will
include a general glossary used to share information about the vocabulary and
general methodologies employed for the generation of the dataset.
## 3.4 Increase data Re-use
In order to clarify the possibility to re-use GrowBot data, the consortium
will provide a specific license for each deposited dataset that claims if the
data have open or restricted access.
Zenodo automatically offers five different licensing options among Creative
Commons Licenses, all foreseeing the attribution requirement to appropriately
credit the authors for the original creation (credit, link to license and
changes indications).
When possible, the consortium’s proposed licence is **Creative Commons
Attribution 4.0 International (CC BY 4.0)** 12 , allowing third parties to
share and adapt the data with no restrictions as long as attribution is
provided.
In case a partner would like to further limit access to the uploaded data,
alternative licenses will be selected, also through the CC license chooser,
among the options offered by Zenodo:
* **Creative Commons Attribution Share-Alike 4.0 International (CC BY-SA 4.0)** 13 – allowing adaptation for any purpose to the work to be shared as long as it is distributed under the same original licence (or a license listed as compatible);
* **Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)** 14 – allowing sharing for any purpose, but forbidding the distribution of derivative work;
* **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** 15 – allowing sharing and adaptation to the work, but limiting the use of the shared work to noncommercial purposes;
* **Creative Commons Attribution-NonCommercial- NoDerivatives 4.0 International (CC BYNC-ND 4.0)** 16 – allowing sharing but restricting both derivative work and commercial use of data.
Although not directly provided through Zenodo, an additional Creative Commons
Attribution license can be applied upon specific request to Zenodo team:
* **Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)** 17 – allowing adaptation to the work to be shared as long as it is distributed for noncommercial purposes and under the same original licence (or a license listed as compatible).
All data will be stored in Zenodo as soon as possible, at the latest upon
publication of the related scientific publication, and will remain re-usable
for the lifetime of the repository, which is currently guaranteed for a
minimum of 20 years.
# 4 Specific software provisions
Generally, the consortium agrees to provide full software and tool
information for all datasets within the documentation. Information on tool
name, version, URL and DOI will thus be added to increase dataset
accessibility and re-usability.
Software plays a key role in GrowBot, and particular provisions should
therefore be considered for software developed as part of the project
activities, in addition to the provisions for access and rights agreed by
partners in the GrowBot Consortium Agreement (Art 9.8, §1).
The partner(s) involved in software development will evaluate the possibility
of uploading the code to GitHub, directly linked to the Zenodo platform, as
indicated in Annex A3.2.
# 5 Allocation of resources
At this preliminary stage of the project, the only costs foreseen for data
management are related to:
* the working time needed to set up and perform the data collection, including synchronisation of devices, and analysis activities
* the working time needed to set up local and shared data collection devices/servers
* the working time needed to write documentation, metadata, etc.
The project coordinator is in charge of the DMP from both the scientific and
technical perspective. IIT’s role includes the release of the first version as
well as its regular update.
Validation and registration of datasets and metadata, as well as backing up
data for sharing through open access repositories is the responsibility of the
partner that generates the data in the WP. Each partner will identify a
specific responsible person for each dataset. Quality control of these data is
the responsibility of the relevant WP leader, supported by the Project
Coordinator. Each partner should respect the policies set out in this DMP.
Finally, in line with Grant Agreement (art 29.1) and Consortium Agreement (art
8.4.2.1), a beneficiary that intends to disseminate its results must give
advance notice to the other beneficiaries of — unless agreed otherwise — at
least 45 days, together with sufficient information on the results it will
disseminate. Any other beneficiary may object within — unless agreed otherwise
— 30 days of receiving notification, if it can show that its legitimate
interests in relation to the results or background would be significantly
harmed. In such cases, the dissemination may not take place unless appropriate
steps are taken to safeguard these legitimate interests.
# 6 Data security
As previously stated, each partner is in charge of backing up the data that
will be openly shared through Zenodo.
Once uploaded to Zenodo, data will also be stored in the CERN Data Centre in
multiple independent online replicas. Long-term preservation is guaranteed:
even in the unlikely event that Zenodo ceases operations, migration of the
content to other repositories is planned.
For data that cannot be uploaded to Zenodo because they are not publicly
shareable, each institutional ICT infrastructure guarantees the preservation
and safety of the stored data in compliance with its internal information
security policy.
1505_CityxChange_824260.md
# Executive Summary
This deliverable constitutes the second version of the Data Management Plan
for the +CityxChange project. It specifies Data Governance and handling of
data in the project, what types of data are expected to be generated in the
project, if and how it will be made open and accessible for verification and
re-use. It will also specify how it will be curated and preserved, with
details such as ethical, privacy, and security issues.
All beneficiaries are informed of the applicable regulations around human
participation, informed consent, data processing, data security, and the
pertinent regulations such as GDPR or H2020 Ethics or FAIR guidelines. When
personal data collection or processing is started, the DMP information will be
updated accordingly to include updated data summaries, consent forms,
compliance, and institutional approval where necessary. Processing of personal
data will respect the Data Protection Principles. This document provides an
overview of data handling in the project and provides the initial guidelines
for the project. The project will support openness according to the EU FAIR
approach and the principle "as open as possible, as closed as necessary"
together with the project ambition of “Open by Default”.
This document is an update of D11.5: Data Management Plan - Initial Version
and supersedes that document.
# 1 Introduction
This deliverable presents the first update to the Data Management Plan (DMP)
for the
+CityxChange project. This is the second version of the DMP and an update to
D11.5: Data Management Plan - Initial Version.
It describes overall Data Governance in the project, including the lifecycle
of data to be collected, generated, used, or processed within the project and
the handling of data, including methodologies, data sharing, privacy and
security considerations, legal and regulatory requirements, informed consent,
open access, for during and after the project. The Deliverable is part of Task
11.6: Delivery of Data Management Plan and is linked with Task 11.2: Delivery
of Consortium Plan, and Task 11.1: Project Management. It is further linked to
Ethics Deliverables D12.1 H - Requirement No. 1 on Human Participants and
D12.2 POPD - Requirement No. 2 on Protection of Personal Data. Some content
from
D11.5, D12.1, D12.2, and the Description of Action (DoA) is reiterated here.
+CityxChange has a strong commitment in place for maximizing dissemination and
demonstration of the value of the implemented targets and measures. Strong
dissemination of results, sharing of data, communication, and replication are
a key success factor in making the project results more accessible,
attractive, evaluable, replicable, and implementable for a broad set of
stakeholders. The project aims to make research data findable, accessible,
interoperable and re-usable (FAIR) in line with the H2020 Guidelines on
FAIR Data Management 1 . +CityxChange participates in the Pilot on Open
Research Data (ORD) and thus delivers this Data Management Plan to define how
the project will implement data management, dissemination, and openness
according to the principle "as open as possible, as closed as necessary"
together with the project ambition of “open by default”.
The consortium will provide Open Data and Open Access to results arising from
the project to support a number of goals, namely: benchmarking with other
projects and comparison of developed measures; improving dissemination,
contribution to the Smart Cities Information System (SCIS), and exploitation
of data and results; improving access and re-use of research data generated
within the project; and knowledge sharing with citizens, the wider public,
interested stakeholders, cities, industry, and the scientific community.
The project is built around transparency and openness: 86% of its 148
deliverables are open and only 20 are confidential, which greatly supports
outreach and replication. Deliverables are expected to be used both internally
and externally, to inform the project and its team members about activities
and results, and to inform external stakeholders and potential collaborators
and replicators. This means that documentation is written with a focus on
usefulness for the project, the European cities, and other stakeholders.
Such outreach will also be supported through the inter- and extra-project
collaboration between SCC1 projects in WP9.
In addition, +CityxChange aims to fulfil all ethical requirements and
acknowledges that compliance with ethical principles is of utmost importance
within H2020 and within Smart Cities and Communities projects that involve
citizens and other actors, especially regarding human participants and
processing of personal data. As such, the beneficiaries will carry out the
action in compliance with: ethical principles (including the highest standards
of research integrity); and applicable international, EU and national law.
Beneficiaries will ensure respect for people and for human dignity and fair
distribution of the benefits and the burden of research, and will protect the
values, rights and interests of the participants. 2
All partners are aware of the H2020 Rules of Participation (Sections 13, 14)
and the Ethics clauses in Article 34 of the Grant Agreement and the obligation
to comply with ethical and research integrity principles set out therein and
explained in the annotated Model Grant Agreement 3 . The project will respect
the privacy of all stakeholders and citizens and will seek free and fully
informed consent where personal
citizens and will seek free and fully informed consent where personal
identifiable data is collected and processed. Processing of personal data will
respect the Data Protection Principles.
Data provided by the project will support a range of goals, such as improving
dissemination and exploitation of data and results; improving access and reuse
of research data; and knowledge sharing with citizens, the wider public,
interested stakeholders, and the scientific community. Documentation and
research data repositories will follow the H2020 best practice, with a focus
on open access, peer-reviewed journal articles, conference papers, and
datasets of various types.
This document is based on the main formal project description of the Grant
Agreement and additional documentation built so far in the project. The
+CityxChange project is part of the H2020 SCC01 Smart Cities and Communities
Programme. The related documents for the formal project description are the
Grant Agreement Number 824260 - CityxChange “Positive City ExChange”
(Innovation Action) entered into force 01.11.2018, including the core
contract, Annex 1 Part A (the Description of Action, DoA: beneficiaries, work
packages, milestones, deliverables, budget), Annex 1 Part B (Description of
project, work, background, partners), Annexes (Supporting documentation,
SEAPs, BEST tables, Dataset mapping, etc.), and Annex 2 - Budget. In addition,
the Consortium Agreement of +CityxChange, entered into force 01.11.2018,
details the Consortium Governance and relations of beneficiaries towards each
other. It includes IP-relevant background, including existing data sources.
The parts about open data, security, and privacy processes are taken from the
internal living documentation on ICT governance.
2. REGULATION (EU) No 1290/2013 (Rules for participation and dissemination in H2020) https://ec.europa.eu/research/participants/data/ref/h2020/legal_basis/rules_participation/h2020-rule s-participation_en.pdf
3. EU Grants: H2020 AGA — Annotated Model Grant Agreement: V5.0 – 03.07.2018 General MGA
For the role of the Data Manager, the Coordinator has appointed the Project
Manager. As part of the responsibilities of the Project Management Team, the
Data Manager will review the +CityxChange Data Management Plan and revise it
annually or when otherwise required with input from all partners.
This public document describes the current status of the DMP at the time of
delivery,
October 2019. It will be refined by future deliverables of the DMP and updates
in individual
Work Packages, especially around ICT in WP1 and Monitoring & Evaluation in
WP7.
This document represents the current state of the DMP document and supersedes
the previous document, D11.5: Data Management Plan - Initial Version, to which
this is an update. Specific changes to the previous version are as follows:
* Updates on city processes
* Updates from a workshop at Consortium Meeting
* Updates from T1.1/T1.2 work on ICT ecosystem and integration
* Added initial project-specific partner processes
# 2 Ethics, Privacy, and Security Considerations
+CityxChange is an innovation action. It is a complex, cross-sectoral, and
interdisciplinary undertaking that involves stakeholders from widely varying
backgrounds. Furthermore, it is a city-driven project, putting cities and
their citizens in the focus.
This means that a majority of data collection and human participation happens
through activities around automated data collection in energy and mobility
scenarios, monitoring and evaluation, as well as citizen participation,
stakeholder engagement, events or peer-to-peer exchanges in developing and co-
creating innovative solutions. The approach and structure of the project leads
to diverse data being collected and generated using a range of methodologies.
As the data is heterogeneous, a number of methodologies and approaches can be
used.
## Ethics Considerations
Most of the 11 Demonstration Projects in the +CityxChange Lighthouse Cities
will require data processing and most require evaluation involving human
research subjects and the collection of personal data. The ethics self-
assessment and Ethics Summary Report identified three ethical issues: 1) human
participation, 2) personal data collection of data subjects, and 3) potential
tracking or observation of participants. Details on these are given in D12.1
and D12.2 and summarised below.
The details for each demonstration case are summarised in the following table
(from
D12.1).
| Identified Demonstration Projects | Human Participants | Collection of personal data | Tracking or observation of participants |
| --- | --- | --- | --- |
| Residential, office, multi-use buildings, Norway | X | X | X |
| Energy data, building level, Norway | X | X | X |
| Energy data, system level, Norway | X | X | X |
| Transport data, Norway | X | X | X |
| Community Engagement, Norway | X | X | X |
| Residential, office, multi-use buildings, Ireland | X | X | X |
| Energy data, building level, Ireland | X | X | X |
| Energy data, system level, Ireland | X | X | X |
| Transport data, Ireland | X | X | X |
| Community Engagement, Ireland | X | X | X |
All activities within +CityxChange will be conducted in compliance with
fundamental ethical principles and will be underpinned by the principle and
practice of Responsible Research and Innovation (RRI) 4 . RRI is important in
the Smart City context, where projects work to transform processes around
cities and citizens. Through the +CityxChange approaches of Open Innovation
and Quadruple Helix collaboration, societal actors and stakeholders will work
together to better align the project outcomes with the general values, needs
and expectations of society. This will be done throughout the project, with a
focus within WP9 and WP10 and the city Work Packages. The project uses open
data and openness as part of Open Innovation 2.0 and for stakeholder
participation through measures such as open data, open licences, public
deliverables, hackathons, outreach, living labs, existing innovation labs.
The consortium confirms that the ethical standards and guidelines of Horizon
2020 will be rigorously applied, regardless of the country in which the
research will be carried out, and that all data transfers will be permissible
under all necessary legal and regulatory requirements. This was detailed in
D12.1 and D12.2 and will be followed up in the following section. No major
changes from the status of D11.5 have taken place.
All proposed tasks are expected to be permissible under the applicable laws
and regulations, given proper observance of requirements. Where appropriate
information and consent of all stakeholders and citizens is mandated, the
consortium will ensure that all necessary procedures are followed,
particularly with regard to the signing, collation, and storing of all
necessary Informed Consent Forms prior to the collection of any data. All
involved stakeholders and citizens will be informed in detail about measures
and the consortium will obtain free and fully informed consent.
All necessary actions will be taken within the project management and by all
beneficiaries to ensure compliance with applicable European and national
regulations and professional codes of conduct relating to personal data
protection. This will include in particular
4 EU H2020 Responsible research & innovation
https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-
research-innovation
Directive 95/46/EC regarding data collection and processing, the General Data
Protection Regulation (GDPR, 2016/679), and respective national requirements,
ensuring legal and regulatory compliance. Ethics considerations will feed into
research and data collection protocols used in the project. This will include
collection and processing of personal data as well as surveys and interviews.
For all identified issues, in line with the above standards, ethical approvals
will be obtained from the relevant national data protection authorities and/or
institutional boards.
In line with existing regulations by the university partners relevant for
social science research, the mapping of the ID and the person will be
safeguarded and will not be available to persons other than the ones working
with the data. This will minimise the risks of ethical violations. Since data
stemming from other kinds of research might be de-anonymized and reconnected
to a person, discipline-specific study designs aim to mitigate or remove this
risk as well for different types of data collection. Results may be used in
anonymised or aggregated form for analysis and subsequent publication in
project reports and scientific papers. All beneficiaries will handle all
material with strict care for confidentiality and privacy in accordance with
the legal and regulatory requirements, so that no harm will be done to any
participants, stakeholders, or any unknown third parties. NTNU as the
coordinator has internal guidelines that comply with GDPR and these will be
followed in its data management.
In addition to relevant national data protection authorities, the university
partners have separate institutional ethics boards or respective national
research boards, which will ensure the correct implementation of all human
participation and data protection procedures and protocols around social
science research. In detail, this includes for Ireland the University of
Limerick Research Ethics Governance and respective Faculty Research Ethics
Committees, and for Norway the Norsk samfunnsvitenskapelig datatjeneste (NSD)
-
National Data Protection Official for Research.
As an example of NTNU processes, we describe sample guidelines for
interviews. Assume that the interviewees' quotations will include their
role and the date of the interview. Before interviews are conducted, the
interviewees will be asked to sign a letter of consent, in which they certify
that they are aware that the interview will be recorded and that the resulting
report will reflect their role and the date of the interview, unless
interviewees wish to say something off the record; those parts will be quoted
anonymously. In addition, the researchers will store the collected data in a
safe place, on a personal computer secured with a passcode. The interviewees
will also be informed that the information will be kept confidential and
inaccessible to others. In Norway, any individual researcher is obliged to
familiarise himself/herself with the Research Ethics Act, research ethics
guidelines and information from the Norwegian Social Science Data Services
(NSD) concerning the Data Protection Official scheme and the processing of
personal data, and must submit the respective notification form at least 30
days prior to commencing data collection. Therefore, the NSD report will be
provided before the data collection process. The Lighthouse Cities Limerick
(IE) and Trondheim (NO)
will closely collaborate with their local universities. The Follower Cities
Alba Iulia (RO), Písek (CZ), Smolyan (BG), Sestao (ES), and Võru (EE) will
follow similar procedures for any potential replication of demonstration
projects. Details will be developed within the respective tasks, initially in
WP3, and input into ongoing versions of this DMP.
## Ethics Requirements and Confirmations
Recruitment and informed consent procedures for research subjects will fulfil
the following requirements (cf. D12.1):
1. The procedures and criteria that will be used to identify/recruit research participants.
2. The informed consent procedures that will be implemented for the participation of humans.
3. Templates of the informed consent/assent forms and information sheets (in language and terms intelligible to the participants).
4. The beneficiary currently foresees no participation of children/minors and/or adults unable to give informed consent. If this changes, justification for their participation and the acquirement of consent of their legal representatives will be given in an update of the DMP and relevant documentation within the respective tasks.
In addition, for the processing of personally identifiable data the following
requirements will be observed (cf. D12.2):
1. The contact details of the host institution’s DPO are made available to all data subjects involved in the research. Data protection policy for the project will be coordinated with the
DPO.
2. A description of the technical and organisational measures that will be implemented to safeguard the rights and freedoms of the data subjects/research participants as well as a description of the anonymisation/pseudonymisation techniques that will be implemented.
3. Detailed information on the informed consent procedures linked to the above in regard to data processing.
4. Templates of the informed consent forms and information sheets (in language and terms intelligible to the participants) linked to the above regarding data processing.
5. The project currently does not foresee profiling. In case this changes, the beneficiary will provide explanation how the data subjects will be informed of the existence of the profiling, its possible consequences and how their fundamental rights will be safeguarded in an update of the DMP.
6. The beneficiaries will explain how all of the data they intend to process is relevant and limited to the purposes of the research project (in accordance with the ‘data minimisation’ principle).
7. The project does not foresee the case of further processing of previously collected personal data. In case this changes, an explicit confirmation that the beneficiary has lawful basis for the data processing and that the appropriate technical and organisational measures are in place to safeguard the rights of the data subjects will be submitted in an update to the DMP.
## Recruitment of Participants and Informed Consent Procedures
The project will engage with a multitude of participants and stakeholders in
different Work Packages and Tasks. This runs from an open to highly targeted
activities, co-creation workshops, citizen engagement, outreach activities,
stakeholder and citizen groups, and other activities. The Deliverable on Human
Participants D12.1 H - Requirement No. 1 has described general guidelines on
the processes to be used. The current drafts of informed consent forms are
shown in the Annex of D12.1. The updates to these will be included in future
versions of this DMP.
More detailed requirements and documentation will be generated before the
start of any activity involving participation of humans being the subjects of
the study, while fully operating within local, national, and EU regulations.
These forms will be detailed and tailored to the individual demonstration
projects within the Lighthouse cities, in the official language of the
country/city where the demonstration takes place, and include demonstration-
specific aspects and referring to the relevant regulations on data protection
and/or other legislation if applicable.
For all applicable physical meetings and consortium events, we will inform
participants that pictures will be taken, and participants will have to
actively consent, with an option to opt out of pictures being used in
project-specific communication. This also concerns photographic evidence of
events, demonstrations, etc. that is done throughout the project and may be
needed for documentation of task and milestone completion. This will also be
taken up with WP10 on communication and WP9 on inter-project collaboration
with regards to documentation of events.
## Data Privacy and Personal Data
Detailed requirements and descriptions of the technical and organisational
measures that will be implemented to safeguard the rights and freedoms of the
data subjects/research participants will be described by tasks that implement
them. Where necessary, data will be anonymised or pseudonymised.
Data minimisation principles will be followed in line with applicable
legislation. The relevance of data collected for tasks will be
considered 5 6 .
As the project will include the participation of numerous cities requiring
multiple data measurements per city, the actual project beneficiaries,
external stakeholders and citizens involved will vary between tasks. The
project will respect the privacy of all stakeholders and citizens and will
seek free and fully informed consent where personally identifiable data is
collected and processed as described above, implementing suitable data
handling procedures and protocols to avoid potential identification of
individuals. This process will include participants’ data in activities that
use techniques such as questionnaires, interviews, workshops, or mailing
lists, as well as automatic building, energy, and mobility data collection.
5. H2020 Ethics and Data Protection http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/ethics/h2020_hi_ethics-data-protection_en.pdf
6. EU, Principles of the GDPR: What data can we process and under which conditions? https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/principles-gdpr/what-data-can-we-process-and-under-which-conditions_en
The +CityxChange consortium is aware of potential issues arising from data
aggregation from different sources, scales, flows, and devices. Data collected
in the project will thus be anonymised and aggregated as close to the source
as possible. In certain cases, personal data avoidance and minimisation can
eliminate and/or reduce identifiability. For example, energy consumption with
a high temporal resolution can be used to identify personal daily patterns and
routines when gathered at an individual household level. Aggregate data either
with lower temporal resolution (e.g. once a day) or with a lower geographical
resolution (e.g. energy consumption on a district level as is directly
available for energy providers) mitigates this risk. The same approach will be
implemented for mobility data, which can incorporate a much higher level of
personal information and will need to be treated with adequate anonymisation
and aggregation methods.
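To illustrate the aggregation approach described above, the following is a
minimal sketch that reduces hourly per-household readings to daily
per-district totals; all field names and values are illustrative only, not the
project's actual data model.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative hourly per-household readings; field names are placeholders.
readings = [
    {"household": "h1", "district": "A", "ts": "2019-10-23T14:00", "kwh": 1.8},
    {"household": "h2", "district": "A", "ts": "2019-10-23T15:00", "kwh": 2.1},
    {"household": "h3", "district": "B", "ts": "2019-10-23T14:00", "kwh": 0.9},
]

# Aggregate to (district, day): the household key is dropped entirely,
# lowering both temporal and geographical resolution before publication.
daily_by_district = defaultdict(float)
for r in readings:
    day = datetime.fromisoformat(r["ts"]).date()
    daily_by_district[(r["district"], day)] += r["kwh"]

for (district, day), kwh in sorted(daily_by_district.items()):
    print(district, day, round(kwh, 2))  # publish aggregates only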
## Data Protection Officers and GDPR compliance
As Coordinator and host institution, NTNU confirms that it has appointed a
Data Protection Officer (DPO) and the contact details of the DPO will be made
available to all data subjects involved in the research (see D12.2).
Respective partners will also follow their internal data protection and
European GDPR regulations. In line with the GDPR, individual beneficiaries are
responsible for their own data processing, so the respective beneficiaries are
to involve their own DPOs, who will ensure implementation of and compliance
with the procedures and protocols in line with internal processes and national
regulations. This also includes options to withdraw consent and procedures
that must be in place to deal with privacy violations in a timely manner.
Processing of personal data will respect the Data Protection Principles as set
out in the GDPR: lawfulness, fairness and transparency; purpose limitation;
data minimisation; accuracy; storage limitation; integrity and
confidentiality; and accountability.
Each beneficiary is reminded that under the General Data Protection Regulation
2016/679, the data controllers and processors are fully accountable for the
data processing operations, which means that every beneficiary is ultimately
responsible for their data collection and processing. Any violation of the
data subject rights may lead to sanctions as described in Chapter VIII,
Articles 77-84.
## Data Security
The beneficiaries will implement technical and organisational measures to
ensure privacy and data protection rights in the project.
All ICT systems to be developed will be designed to safeguard collected data
against unauthorized use and to comply with all national and EU regulations.
Engineering best practices and state-of-the-art data security measures will be
incorporated as well as GDPR considerations, and respective guidelines and
principles. Ultimately, each partner is responsible for their own information
security in developed systems, but for overall guidelines, replication
blueprints, and documentation, the ICT ecosystem architecture (WP1, T1.1/T1.2)
will incorporate this aspect in the overall development as part of data
governance in D1.2: Report on the architecture for the ICT ecosystem, due in
M24 and currently under development.
Information security management, which is central to the undertaking of the
project, will follow the guidelines of relevant standards, e.g., ISO/IEC 27001
and 27002 (Code of practice for information security management), to ensure
confidentiality, integrity, and availability. It will additionally include the
Directive on security of network and information systems (‘Cybersecurity
directive’, NIS-Directive 2016/1148) on the security of critical
infrastructures and the ePrivacy Directive 2002/58/EC, as well as European
Union Agency for Network and Information Security (ENISA) guidance. In addition, data
storage will fully comply with the national and EU legal and regulatory
requirements. Partners will ensure and document that used cloud infrastructure
complies with applicable regulations.
## City Processes on Privacy and Security
All project beneficiaries have existing and operational policies regarding
potential ethics issues as well as privacy and security regulations or will
ensure their provision for the tasks where they are necessary.
In addition to the cities, the solution providers in +CityxChange have their
own data protection routines established in their existing operations and in
their development and test activities in the project. They are responsible for
establishing compliance with the GDPR and other data protection and security
regulations. They will further support and implement guidelines from and with
the ICT tasks in WP1 and this DMP.
In the following, we discuss overall city procedures. Details on Demo Projects
and partners will be given in further updates of this DMP as far as they can
be made available.
TK is currently in the process of establishing a formal privacy policy. It
uses internal tools to ensure internal control and audit and to keep track of
all processes around personal data. TK will ensure that it has legal consent,
updated routines, and valid risk and vulnerability analyses, in compliance
with EU and Norwegian law. It has a Data Protection Officer (DPO) responsible
for the municipality and an assistant DPO in each business area, following
national Datatilsynet regulations. Following these regulations, TK has a
project under the municipal director of organisation to ensure compliance with
the GDPR and future Norwegian personal data privacy act regulations; TK
continuously aims to maintain compliance. TK has a strong focus on privacy and
security when it comes to ICT systems, including IoT, encryption, etc. Work is
based on ISO 27001 and complies with all relevant national and EU policies. TK
has a dedicated Security Architect role and relies on an operational provider
for the internal cloud, who is bound by SLAs. TK is one of the initiators of
and participants in the Norwegian municipal sector (KS) investigation into a
municipal CSIRT (Computer Security Incident Response Team). CSIRTs are one of
the key elements of the NIS Directive.
LCCC has updated its Data Protection Policy to one that is in line with the
GDPR and the Data Protection Act 2018. A new Data Protection Officer (DPO)
role has been created for GDPR compliance. An existing staff member with
auditing experience has been appointed to the full-time role and will ensure
compliance with the requirements of the Irish Data Protection Commissioner.
The DPO is currently auditing the organisation for
GDPR compliance. This work is being carried out in conjunction with the
Digital Strategy Programme Manager. LCCC is currently reviewing its Data
Processors Agreements with all its suppliers that access data. A database of
data sets, access, security, business processes, anonymisation etc. is being
documented through this audit and captured into the organisation's CRM system.
LCCC has strict security policies to protect its systems and data, handled by
the ICT Network Team. LCCC complies with the NIS directive by taking
appropriate technical and organisational measures to secure network and
information systems; taking into account the latest developments and consider
the potential risks facing the systems; taking appropriate measures to prevent
and minimise the impact of security incidents to ensure service continuity;
and notifying the relevant supervisory authority of any security incident
having a significant impact on service continuity without undue delay.
Alba Iulia Municipality is compliant with the Data Protection Regulation (EU)
2016/679 and has implemented a formal privacy policy. The municipality
elaborated privacy policy notifications for every employee regarding the new
Data Protection Regulation and dedicated a section to it on the official web
page. Internal tools will ensure internal control and audit and keep track of
all processes around personal data. Alba Iulia will ensure that it has legal
consent, updated routines, and valid risk and vulnerability analyses, in
compliance with EU and Romanian law. A Data Protection Officer (DPO) is
appointed for all the municipality departments in line with the GDPR and
ensures compliance with national regulations by the National Supervisory
Authority for Personal Data Processing. AIM follows its security policy for
ICT use within the municipality, organised by the IT team and the head of IT,
with an outsourced contract for server management and maintenance; the latest
audit was carried out in 2018. The NIS Directive was transposed into local
law, aligning Romania with the common European framework for responding to
cyber security incidents.
Písek has developed an analysis of municipal processes and their compliance
with the GDPR. The City Council approved an inner policy directive for the
GDPR on 2018-10-05 (decision no. 290/18). A DPO role has been assigned in the
City Bureau since 2018-03-01, in line with the national Office for Personal
Data Protection 11 and Act No. 101/2000 Coll., on the Protection of Personal
Data (currently amended to meet the GDPR conditions). The Security Policy and
IS Security Management Plan is handled by the IT department and the IT
Management Committee in reference to Act No. 365/2000 Coll., on Public
Administration Information Systems. The NIS Directive is reflected in Act No.
181/2014 Coll., on Cyber Security; the Decree of the National Security
Authority (NBÚ) No. 316/2014 Coll., the Cyber Security Order; and the Decree
of the NBÚ and Ministry of the Interior (MVČR) No. 317/2014 Coll., on
Important Information Systems and their Criteria.
The Municipality of Sestao and Sestao Berri comply with all relevant regional,
national, and European legislation around data security and privacy, in line
with the Spanish data protection authority AGPD 12 and the Basque data
protection authority AVPD 13. The latter is working on guides for the
adaptation of public administrations to the General Data Protection Regulation
(GDPR) for the Basque municipalities. The respective Spanish regulations are
followed (Organic Law 3/2018, of December 5, on the Protection of Personal
Data and guarantee of digital rights 14).
The data protection role (Delegado de Protección de Datos) is taken by the
General Register of the City of Sestao (Registro General del Ayuntamiento de
Sestao). Detailed data handling for different data sources of the municipality
is described in an extensive list on data use, justification, and rights 15.
In Smolyan, the policies for information security management are part of the
Integrated Management System of the Municipality; they comply with the
international standards ISO 9001:2008, ISO 14001:2004 and ISO 27001:2013, for
which the municipality is
certified. They are implemented by the Information Security Working Group. A
Personal Data Protection System, complying with Regulation (EC) 2016/679 of
the European Parliament and of the Council of 27 April 2016 has been adopted
by the Municipality of Smolyan. The system has been documented, implemented
and maintained through 9 procedures/policies that include internal
regulations, technical and organizational measures, which the Municipality of
Smolyan applies. The system for the protection of personal data is approved by
Order № РД-0455/23.05.2018 of the Mayor of Smolyan Municipality. It is
constantly being improved, both in the case of significant changes in the
legal framework and in other related circumstances.
11. Úřad pro ochranu osobních údajů, Czech Republic, https://www.uoou.cz/en/
12. Agencia Española de Protección de Datos - AGPD, Spain, https://www.aepd.es/
13. Agencia Vasca de Protección de Datos - AVPD, Basque Country, http://www.avpd.euskadi.eus/s04-5213/es/
14. Ley Orgánica 3/2018, de 5 de diciembre, de Protección de Datos Personales y garantía de los derechos digitales, https://www.boe.es/buscar/doc.php?id=BOE-A-2018-16673
15. http://www.sestao.eus/es-ES/Institucional/Paginas/informacion-adicional.aspx
A DPO has been appointed, following regulations from the Commission for
Personal Data
Protection 16 and working with the Information Security Working Group. The
Personal Data Administrator is responsible for the compliance of the
processing of personal data, as required by European and national legislation.
It links with the Bulgarian Law for protection of personal data (The Privacy
Act) and the Act for Access to Public Information. A Network Security
Management and Remote Access Policy is based on ISO 27001:2013 with respect to
the protection of the information on the network, the supporting
infrastructure and the establishment of rules for configuring the internal
servers owned and managed by the municipality of Smolyan. It connects to the
Management Policy of the Municipality of Smolyan as well as a total of nine
Information
Security Management Policies, which are part of the Integrated Management
System of the
Municipality.
Võru follows its own privacy policy with its ISKE working group and data
protection working group. Specialists have additional tasks to supervise
implementation of the privacy policy in the organisation, following the rules
of the Estonian Data Protection Inspectorate 17. The DPO has mapped the
current situation, works with documentation, and suggests changes if needed.
The national principles are observed in the coordination of the respective
draft law, including recommendations from the Information Systems Authority.
## Project-Specific Partner Processes on Privacy and Security
### Building data and stakeholder/citizen engagement
For a number of tasks within the PEB for Limerick, data from building owners
is needed, for example yearly or monthly energy bills, and floorplans or
detailed blueprints of buildings. At later stages, detailed personal data may
be needed as well. Data has been difficult to obtain from the building owners,
and the level of approvals required from different parties was at a level not
previously anticipated. In addition, these activities needed to be aligned
with plans and actions for citizen engagement, so that building owners are not
surprised by requests and can react positively to them, in line with overall
stakeholder engagement by the project. This also shows that collecting data is
part of the overall interaction with the communities and needs to be
integrated into those plans. An overall MoU is being developed for
interactions with building owners.
16. Комисия за защита на личните данни, Bulgaria, https://www.cpdp.bg/en/
17. Andmekaitse Inspektsioon, Estonia, https://www.aki.ee/en
Relevant risks on data availability and GDPR compliance in collecting data
have been added to the project risk table.
### Data Privacy Impact Assessments
As part of the project work, Limerick is planning a Data Privacy Impact
Assessment (DPIA) for WP4. This process may be replicated later by the other
cities.
Main questions include:
* Is personal data protected?
* How can we manage this?
* In what scenarios will we collect data?
* What Smart Grid application are we building?
* The status of Data Controller or Data Processor needs to be clarified with energy partners
In addition, partners are examining the Data Protection Impact Assessment
Template for Smart Grid and Smart Metering systems (2018). Specifically, for
Smart Grid applications, non-exhaustive examples of personal data that give
rise to conducting a DPIA are:

* Consumer registration data
* Usage data (energy consumption, in particular household consumption, demand information, and time stamps), as these provide insight into the daily life of the data subject
* Amount of energy and power provided to the grid (energy production), as this provides insight into the available sustainable energy resources of the data subject
* Profile of types of consumers, as this might influence how the consumer is approached
* Facility operations profile data (e.g. hours of use, how many occupants at what time, and type of occupants)
* Frequency of transmitting data (if bound to certain thresholds), as this might provide insight into the daily life of the data subject
* Billing data and the consumer’s payment method
### Workshop on Privacy and Smart City Data Model Structure
At the Consortium Meeting in Limerick on 23rd of October 2019, a workshop was
held on privacy and Smart City Data Model Structure. It focused on knowledge
sharing, challenges, and identification of possible solutions.
During this workshop 6 main points were discussed in relation to data
management and interoperability of the systems developed as part of the
project:
1. Enterprise Architecture
2. Data integration
3. City Data, Open Data portals, APIs
4. Data Protection Impact Assessments
5. Informed consent
6. DMP and open research data
During the project, multiple partners will be creating new services that need
to use data. A key question is how to ensure data exchange between partners in
the long term; responsibility may be formalised through data exchange
contracts. We also need to create a narrative for citizens to understand how
the enterprise architecture is applied in order to protect their personal data
and interests. The new services and solutions developed by +CityxChange for
the LHCs will further have to be replicated to the FCs. As stated above, the
project will follow the EU rules on the GDPR; the legal basis for personal
data processing must always be identified. The discussion of DPIAs is detailed
in the subsection above.
# 3 Data Management, Sharing and Open Access
+CityxChange will distinguish four key categories of data arising from the
project:
* **underlying research data** : data necessary for validation of results presented in scientific papers, including associated metadata, which works hand in hand with the general principle of openness of scientific results. The consortium will provide timely open access to research data in project-independent repositories and link to the respective publications, to allow the scientific community to examine and validate the results based on the underlying data. +CityxChange has a commitment to publish results via Gold Open Access and has allocated a budget for it. The deposition of research data will depend on the type and channel of publication, ranging from associating data with a publication at the publisher, university or national research data repositories, or the use of the OpenAIRE infrastructure, following the H2020 best practice, with particular focus on peer-reviewed journal articles, conference papers, and datasets of various types.
* **operational and observational data** : This category includes curated or raw data arising from the implementation, testing, and operation of the demonstrators (operational data), and data from related qualitative activities, such as surveys, interviews, fieldwork data, engagement activities (observational data). +CityxChange will make this data available in coordination with the ICT ecosystem and respective partner repositories, opening it up for project partners and stakeholders, and to citizens and interested third parties to support engagement and innovation (WP3), where possible and allowed under regulations and privacy issues.
* **monitoring and evaluation data** : This data will specifically be captured to track KPIs of the project performance in WP7 and will be regularly reported and published to the Smart Cities Information System (SCIS) 19 in a clearly defined and open way. In addition, monitoring data will be available in the project’s M&E system (for system and data description, see D7.3: Data Collation, Management and Analysis Methodology Framework; D7.4: Monitoring and Evaluation Dashboard; ongoing reporting will be described in D7.5: Data Collection and Management Guideline Reports 1; D7.6: Reporting to the SCIS system 2; and the subsequent Deliverables).
* **documentation, instruments, and reusable knowledge** : This concerns general and specific documentation of the project and demonstration/implementation projects, including tools, methods, instruments, software, and underlying source code needed to replicate the results. A number of collaboration and document management tools will be used, ranging from collaboration solutions, source code repositories (e.g. git) over document stores to the project website (WP10). Clean and consistent documentation and publication will support dissemination impact. All public Deliverables will be published on the project website 20 in Open Access with open licenses.
19. EU Smart Cities Information System (SCIS) http://smartcities-infosystem.eu/
20. +CityxChange Knowledge Base: https://cityxchange.eu/knowledge-base/
## Data Handling Descriptions
Apart from other mechanisms within the project, such as communication,
outreach, citizen participation, peer-to-peer learning workshops and networks,
measures such as sharing of data, documentation, and results will be an
important contributing factor to the project goals. The project will ensure
that research data is ‘findable, accessible, interoperable and reusable’
(FAIR), in line with the H2020 Guidelines on FAIR Data Management.
The following describes the guidelines and expectations for relevant data sets
along with detailed description, metadata, methodology, standards, and
collection procedure. Further details are types of data, data formats and
vocabularies, storage, deadlines for publication, data ownership rules, and
detailed decisions regarding data management and protection. Issues to be
defined will be, for example, the confidentiality needs of utility providers,
the privacy needs of citizens, commercialisation and cybersecurity issues,
together with general ethical, legal, and regulatory considerations and
requirements.
At the time of delivery, most tasks have not yet fully defined the type and
structure of the data that they need or will generate or can make available.
Part of these tasks is also considered and documented in the overall ICT
ecosystem architecture and interface Tasks (T1.1 and T1.2) and in the KPI
development and data collection in WP7 on Monitoring and Evaluation. Regarding
data governance, main areas of concern are Open data, Open data models, Clear
definitions of data ownership and accessibility, Data audit process to support
transparency, Change management guidelines to track the data changes,
Standardised rules and guidelines.
As part of the DMP, information on storage, processing, protection,
dissemination, retention, and destruction will be collected and documented.
For this, individual Tasks within the Work Packages will specify and implement
approaches related to data collection, management, and processing measures
that are most appropriate based on data avoidance, especially concerning
personally identifiable aspects of data sets, in coordination with Task T11.6
for the DMP.
Individual data collection will be handled by the involved partners and cities
in the Work Packages, keeping much data processing close to the source and
within the originating partners, while providing a loosely coupled overall
architecture through suitable architecture choices and guidelines.
Architectural details will be described by the ICT ecosystem Tasks T1.1, T1.2
in WP1.
To ensure maximum use and quality of open research data and re-use of existing
data for example from city Open Data Portals, the project will base much of
the internal collaboration on structured research data sets collected in
standardized formats in collaboration with WP1/2/3, WP7 and WP10/11. This will
help ensure that deposited datasets can be evaluated internally as well as
regarding their use for the scientific community (‘dogfooding’: an
organisation using its own products and services internally; in this case,
also avoiding duplicate work by making as much data as possible available in
structured formats for internal use and external dissemination). Such an
approach should also support outreach activities such as hackathons, by
enabling low-barrier access for external stakeholders. Where possible,
research data and associated metadata (standardised as Dublin Core, W3C DCAT,
or CSVW) will be made available in common standard machine-readable formats
such as Linked Open Data (LOD) in coordination with T1.2, enabling it to be
linked to other public datasets on the Web and to facilitate discovery and
automatic processing. Example approaches include the ESPRESSO framework 21,
the Open ePolicy Group, and others, to be detailed in WP1. In addition, data
must also be interoperable to facilitate ease of access and exchange. As set
out in the new EU ‘Interoperability Framework’ 22, this is vital to the
functioning of pan-European business and to the impact of H2020 projects.
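As an illustration of the metadata standards named above, the following is a
minimal sketch of a DCAT dataset description serialised as JSON-LD; the
title, keywords, and URLs are hypothetical placeholders, not actual project
datasets.

```python
import json

# Hypothetical dataset description using DCAT and Dublin Core terms.
dataset = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "District-level daily energy consumption (example)",
    "dct:description": "Daily aggregated energy consumption per district.",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:keyword": ["energy", "smart city", "open data"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:downloadURL": "https://example.org/data/energy-daily.csv",
        "dcat:mediaType": "text/csv",
    },
}

print(json.dumps(dataset, indent=2, ensure_ascii=False))
```

Dataset-level annotations of this kind are what make deposited data
discoverable by harvesters and linkable to other public datasets.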
For all tasks, digital copies of all data will be stored for a minimum of
three years after the conclusion of the grant award or after the data is
released to the public, whichever is later. All information and data gathered
and elaborated will be suitably described in the respective Deliverables. All
public Deliverables will be made available and archived on the project website
and through the EU Community Research and Development Information Service
(CORDIS) for the project 23. The project aims to make research data and
publications freely available through Open Access and suitable repositories.
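The retention rule above can be read as a simple date computation, sketched
below with illustrative dates only; this is a reading aid, not project
tooling.

```python
from datetime import date
from typing import Optional

def retention_deadline(grant_end: date, released: Optional[date] = None) -> date:
    """Three years after grant conclusion or public release, whichever is later."""
    start = max(grant_end, released) if released else grant_end
    # Simplified year arithmetic; a leap-day start would need special handling.
    return start.replace(year=start.year + 3)

# Hypothetical dates, not the project's actual schedule.
print(retention_deadline(date(2023, 10, 31), released=date(2024, 3, 1)))  # 2027-03-01
```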
Pending detailed descriptions, the following table shows the data handling
summary template for use within the DMP and within Tasks for documentation:
21. Espresso – systEmic standardisation apPRoach to Empower Smart citieS and cOmmunities http://espresso.espresso-project.eu/
22. The New European Interoperability Framework | ISA² - Promoting seamless services and data flows for European public administrations, 2017, https://ec.europa.eu/isa2/eif_en
23. Positive City ExChange | Projects | H2020 | CORDIS | European Commission, https://cordis.europa.eu/project/rcn/219210/factsheet/en
Template for data handling and management summary (to be made into a table in
the shared document space when examples are available)
<table>
<tr>
<th>
Task/Demo/Activity
</th>
<th>
Task Name/Demo Name/Task Links
</th> </tr>
<tr>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
Purpose and relevance of data collection and relation to objectives
</td>
<td>
</td> </tr>
<tr>
<td>
Methodology
</td>
<td>
</td> </tr>
<tr>
<td>
Data source, data ownership
</td>
<td>
</td> </tr>
<tr>
<td>
Standards, data formats, vocabularies
</td>
<td>
</td> </tr>
<tr>
<td>
Storage
</td>
<td>
</td> </tr>
<tr>
<td>
Security & Privacy considerations
</td>
<td>
</td> </tr>
<tr>
<td>
Exploitation/Dissemination
</td>
<td>
</td> </tr>
<tr>
<td>
Dissemination Level,
Limitations, Approach, Justification
</td>
<td>
</td> </tr>
<tr>
<td>
Stakeholders
</td>
<td>
</td> </tr> </table>
## Access Rights and Procedures
In line with the Consortium Agreement and the Grant Agreement, research
results are owned by the partner that generates them. However, the stated aim
is to make data and results publicly available, whenever possible. Further
access rights and regulations are set forth in the Consortium Agreement as
rights and obligations of partners. In particular, Consortium partners will
give each other access to data that is needed to carry out the project.
Partners will furthermore give each other access under fair and reasonable
conditions to exploit their results. For other affiliated entities, access can
be granted under fair and reasonable conditions for data and research output,
as long as it is not covered by the Open Access conditions, provided such
access is in line with the project goals and confidentiality agreements. Data
published or otherwise released to the public will include disclaimers and/or
terms of use as deemed necessary.
The protection of intellectual property rights, detailed terms for access
rights, and collective and individual exploitation of IP are agreed upon in
the Consortium Agreement (Section 8 page 19, Section 9 page 21, Section 10
page 26) and Grant Agreement (Section 3, page 43). Some Deliverables will
include project internals which do not need to be public. Some others will
include detailed specifications for the software tools and methodologies;
these will remain confidential as per the Deliverable designation as they
contain potentially patentable information.
Any data relating to the demonstration sites, e.g. metered data or utility
bills, will remain the property of the demonstration sites and will only be
shared with the permission of the demonstration site owner. Aggregated data for
purposes of Monitoring and Evaluation will be shared under open licenses (cf.
Section Dissemination).
Software licenses will aim to be as open as possible, with Creative Commons
for documentation and GNU-style licenses for software as a default. For
example, GPLv3 (GNU General Public License), MIT, and Apache are well-known
open licenses, with the GPL additionally using a share-alike model for sharing
only under the original conditions (reciprocal license).
Adaptations are expected for commercial partners to align with their IPR
strategies. A balance is needed between openness and the need for
marketability, patenting, and other IPR issues. This will be handled by the
industry partners together with the cities, and is also linked to WP8 on
Replication and the Innovation Manager in the Project Management Team.
## Open Access to publications
The dissemination activities within the project will include a number of
scientific and other publications. +CityxChange is committed to dissemination
and the principle of Open Access for scientific publications arising from the
project, in line with the H2020 Guidelines to Open Access. It further aims to
make research data open as described above. A budget has been set aside for
the academic partners to support Gold Open Access publishing.
Publication of scientific papers will be encouraged by the +CityxChange
consortium. For cases where it may interfere with seeking protection of IPR or
with publication of confidential information, a permission process for
publishing any information arising from the project is put in place in the
Consortium Agreement. Notification needs to be given at least 45 days before
the publication, with objections subject to the rules of the Consortium
Agreement.
The project aims for Gold Open Access publication of scientific peer-reviewed
papers where possible and will adopt a Green Open Access strategy as a
fallback. At the minimum, this
will include self-archiving of publications in known centralised or
institutional repositories, for example the NTNU institutional archive NTNU
Open 28, the UL Institutional Repository 29, or OpenAIRE 30. Authors will
ensure appropriate bibliographic metadata is published as well, where
possible. It will be in a standard format and include the terms "European
Union (EU)" & "Horizon 2020"; the name of the action, acronym & grant number
as below; publication date, length of the embargo period, if applicable; and a
persistent identifier. These requirements are also codified in Article 29.2 of
the Grant Agreement on Open Access.
Authors will aim to retain copyright and usage rights through open licenses,
such as the Creative Commons Attribution License (CC-BY 4.0/CC-BY-SA) 31;
otherwise, publisher agreements to a similar effect will be pursued. Project participants
will ensure that all publications acknowledge the EU H2020 funding and the
name and grant number of the project, including the standard disclaimer as is
also found on the title page of this document (+CityxChange, project number
824260). Deliverables are public by default through a Creative Commons CC-
BY4.0 license. Other CC licenses can be applied after consultation. External
third-party material will be labeled as such, to clearly identify such content
and exclude it from the free use given for consortium-generated material. This
can be done by excluding such content in the general license statement and by
identifying copyright information next to third-party material included in
documents 32.
## Open Research Data and Open City Data
Quality-assured data is a cornerstone of scientific research and of industry
and city developments. Research data should be freely, publicly, and
permanently available where possible and appropriate to support validation of
results and re-use of data, for example in research, development, and open or
citizen science, as well as Open Innovation.
+CityxChange participates in the Pilot on Open Research Data (ORD) 33 and will
thus aim to provide open access to raw and aggregated curated datasets. The
project aims to make research data findable, accessible, interoperable and re-
usable (FAIR) in line with the H2020 Guidelines on FAIR Data Management.
28. https://www.ntnu.edu/ub/research-support/open-access
29. https://ulir.ul.ie/
30. https://www.openaire.eu/
31. Creative Commons License Attribution 4.0 International (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
32. For example, in the license label at the beginning: “CC-BY4.0 Creative Commons Attribution, except where otherwise noted.” and a full copyright and attribution next to third-party content in the document.
See also the CC guidelines:
https://wiki.creativecommons.org/wiki/Marking/Creators/Marking_third_party_content
33. H2020 Programme Guidelines to the Rules on Open Access to Scientific Publications and Open
Access to Research Data in Horizon 2020
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-
oa-pil ot-guide_en.pdf
Data will be made accessible for verification and reuse through appropriate
channels and repositories. Limits of access and availability are to be given
in individual data descriptions and will be further developed within the
project with the aim of greater openness.
Where research data is made available, it will be made available in recognised
repositories such as OpenAIRE or Zenodo, or local repositories of universities
or national research institutes, with possible assistance from national OA
desks.
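As an illustration of depositing data in such a repository, the following is a
minimal sketch using the public Zenodo REST API; the access token and all
metadata values are placeholders, and field names should be checked against
the current Zenodo API documentation before use.

```python
import requests

TOKEN = "replace-with-a-personal-access-token"  # hypothetical placeholder

# Create a new deposition with minimal metadata; field names follow the
# publicly documented Zenodo deposit API but should be re-checked.
resp = requests.post(
    "https://zenodo.org/api/deposit/depositions",
    params={"access_token": TOKEN},
    json={"metadata": {
        "title": "Example aggregated demonstration dataset",
        "upload_type": "dataset",
        "description": "Anonymised, aggregated example data.",
        "creators": [{"name": "Doe, Jane", "affiliation": "NTNU"}],
    }},
    timeout=30,
)
resp.raise_for_status()
print("Created deposition id:", resp.json()["id"])
```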
Apart from research data repositories, the partner cities in +CityxChange are
working on or running their own City Open Data Portals, where general data
arising from the project should also be made available. Data may also be
federated into research repositories or other systems. The Lighthouse Cities
have a strong interest in this and will focus on open data through existing or
new projects.
Insight.Limerick.ie is the Limerick Data as a Service platform that integrates
data about
Limerick from multiple sources and provides open access to linked open data
and open APIs at _http://insight.limerick.ie/_. Data is available for
viewing in charts and maps and also as open format downloads. While no formal
open data policy is being enforced, the concept of making data available as
open data is being encouraged throughout the workforce. Open data published
here will also become available in the national open data portal
www.data.gov.ie.
Trondheim has set up an open data portal based on CKAN. It is available at
_https://data.trondheim.kommune.no_. In TK, there is a general drive
towards making more data available. TK has a wealth of data and is in the
process of opening up as much non-personally-identifiable data as possible,
even though much data is unfortunately locked in vendors' systems without a
proper API to get the data out. TK is part of a national research project,
SamÅpne, that looks into the barriers to opening up municipal data and is
working on a solution. Data is and will also be made available in the national
open data portal _http://data.norge.no/_
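Since the Trondheim portal is CKAN-based, its datasets can be discovered
programmatically through CKAN's standard action API. The following is a
minimal sketch of such a query; the search term is a placeholder, and actual
dataset availability on the portal will vary.

```python
import requests

# Query the portal's CKAN search endpoint for up to five matching datasets.
resp = requests.get(
    "https://data.trondheim.kommune.no/api/3/action/package_search",
    params={"q": "energi", "rows": 5},
    timeout=10,
)
resp.raise_for_status()
for pkg in resp.json()["result"]["results"]:
    print(pkg["name"], "-", pkg.get("title", ""))
```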
The Follower Cities are working towards Open Data, and are already using a
variety of processes and tools to make data available.
Smolyan uses the National Portal for Open Data, as required by the Access to
Public Information Act. The Open Data Portal is a single, central, public web-
based information system that provides for the publication and management of
re-use information in an open, machine-readable format, along with relevant
metadata: _https://opendata.government.bg/?q=smolyan_
Písek follows the national level guideline for Open Data publishing and is
preparing its publication plan as part of Smart Písek. Initial solutions are
implemented for new information systems: _https://smart.pisek.eu/portal.html_
Alba Iulia is building an open data portal as one component of its smart city
portfolio. It is being tested and will be published when sufficient data is
available. Since open data underpins innovation and out-of-the-box solutions
in any area, Alba Iulia is an early partner in the Open Energy project,
developed by one of the Alba Iulia Smart City Pilot Project partners,
CivicTech (an IT-based NGO). This is the first platform for open energy
consumption data in public institutions. Its purpose is to monitor this
consumption transparently, which will enable the identification of better
patterns for consumption prediction, facilitate the transfer of good
institutional practices, encourage investment in energy efficiency, and in the
future support responsible electricity consumption across society. At this
point the open data platform is not yet published, as the partner encountered
difficulties in funding the development of this solution. Being a pilot
project with no financial involvement on behalf of Alba Iulia Municipality, it
depends entirely on the local partners’ team efforts.
Sestao and Võru currently have no portals of their own.
The project aims to make anonymised data sets public, but will aim to strike a
balance between publication of data and privacy and confidentiality issues.
When in doubt, the consortium will refrain from publishing raw datasets and
only report aggregate measures. Decisions will be made on a case-by-case basis
by senior researchers to ensure that privacy, anonymity, and confidentiality
are not breached by publication of datasets or any other type of publication.
In addition, ongoing consultation with the relevant Data Protection Offices
will be ensured during the lifetime of the project.
This will also ensure that data is preserved, available, and discoverable. In
any case of data dissemination, national and European legislation will be
taken into account. To ensure free and open access with clear licensing, the
project will mostly adopt Creative Commons licenses ranging from attribution
to share-alike licenses (such as CC-BY 4.0/CC-BY-SA 4.0).
As above, publications will have bibliographic metadata attached where
possible, which is extended to research data. Where possible, research data
and associated metadata will be made available in common standards and
possibly as Linked Open Data. Annotations will be at minimum at the dataset
level, to support interoperability of data.
There is currently no separate operating budget for this: it will be covered
as part of the budget for website and platform management, will use existing
infrastructure at the Coordinator, and will be achieved by the cities through
their Open Data portals (see above), by other partners, or through free and
open repositories.
## Document Management
As noted in the overall consortium plan (D11.1), documents in the consortium
are handled in one overall platform for easy collaboration and findability of
overall project documentation. The project has set up a shared file repository
in the form of an Enterprise installation of Google Drive, including
collaborative editing tools for documents, spreadsheets, and presentations.
The instance is made available by Trondheim Kommune and is compatible with all
applicable regulations. The repository is accessible by invitation. Access
will be granted to registered members of the consortium. Generally, it is not
recommended to share highly sensitive data on this system.
The handling of sensitive documents will be coordinated with the DPO of the
host partner. The partners have internal repositories and processes for
dealing with such sensitive data and how it can be shared for research (see
also next section on archiving).
Additional sharing and development tools can be set up by specific tasks if
needed, such as version control software that is outside the scope of the
overall platform, but will be documented and linked there.
## Archiving and Preservation
Deliverables will be archived on the project website. The internal datasets
will be backed up periodically so that they can be recovered (for re-use
and/or verifications) in the future. Published datasets, raw or aggregated,
will be stored within internal and external repositories and thereby ensure
sustainability of the data collection. Records and documentation will be in
line with common standards in the research fields to ensure adherence to
standards, practices, and data quality. Data will be retained for three years
after the conclusion of the grant award or after the data are released to the
public, whichever is later.
The LHCs LCCC and TK together with NTNU as the Coordinator will ensure long-
term data curation and preservation beyond the project period. It will be
implemented as sustainability of the monitoring and evaluation platform and
data. This is linked to WP7 and prepared in T7.6 on migration of the
monitoring system, and as sustainability of the project documentation and
website, linked to WP10 and WP11.
# 4 Dissemination and Exploitation
Dissemination and exploitation of the project outputs and results are an
important step towards achieving the project goals. This is done in
cooperation with WP10 on Dissemination and Communication, WP9 on Inter- and
Intra-Project Collaboration, WP11 on Project Coordination, and all partners.
As detailed above, data will be made as open as possible. All consortium
partners together take responsibility for exploitation and dissemination of
results and for ensuring their visibility and accessibility. Implementing the
FAIR data principles will support the openness and re-use of data and
therefore dissemination and replication.
The following dissemination channels are expected to be used and maintained
during and after the project:
<table>
<tr>
<th>
Dissemination type
</th>
<th>
Usage
</th>
<th>
Policy
</th> </tr>
<tr>
<td>
Website
</td>
<td>
Main reference point for project dissemination and data description
</td>
<td>
Creative Commons where applicable. External rights clearly marked.
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Deliverables to the EU and the public. Disseminated through the project
website cityxchange.eu and the EU
Cordis system.
</td>
<td>
Dissemination level set per deliverable, public by default and open with
Creative Commons Attribution CC-BY4.0. 86% of 148 deliverables are public, 20
are confidential.
</td> </tr>
<tr>
<td>
Social Media
</td>
<td>
Support of communication
activities
</td>
<td>
To be decided. Creative
Commons where applicable.
</td> </tr>
<tr>
<td>
Newsletters
</td>
<td>
Regular updates and links to website and other channels
</td>
<td>
Creative Commons where applicable.
</td> </tr>
<tr>
<td>
Publications
</td>
<td>
Scientific and other publications arising from the project
</td>
<td>
Open Access to publications as detailed above.
</td> </tr>
<tr>
<td>
Benchmarking, Monitoring & Evaluation, KPIs
</td>
<td>
Monitoring of indicators for project and city performance
</td>
<td>
Aggregate KPI data can be openly and publicly reported to SCIS, in line with
the overall SCIS policy and license (updated with the updated SCIS license for
dissemination). Limitations due to privacy and data policies may apply.
General data governance issues around this will be followed up in future
versions of the DMP and in WP1 and WP7. The license for KPI data inside the
+CityxChange M&E system and the data to be reported into SCIS will be CC-BY4.0
Creative Commons Attribution (https://creativecommons.org/licenses/by/4.0/).
Raw data or supporting data and documentation for achieving targets (for
example for survey-based indicators or detailed personally identifiable data
from single areas) will be kept confidential. This will be detailed in the WP7
methodology.
</td> </tr>
<tr>
<td>
Research data as laid out in Data Management section
</td>
<td>
Underlying research data of the project
</td>
<td>
Open Access with limitations due to privacy, as detailed above, in accordance
with the FAIR guidelines on Data Management in H2020.
</td> </tr>
<tr>
<td>
Any other data
</td>
<td>
TBD
</td>
<td>
Wherever possible, open through Creative Commons or other open licenses. 'As
open as possible, as closed as necessary'; and ‘open by
default’.
</td> </tr> </table>
# 5 Conclusion
This deliverable constitutes the second DMP for +CityxChange at the time of
delivery by October 2019. The Project Management Team will regularly follow up
with the consortium members to refine and update the DMP. Responsibilities
reside with NTNU and all consortium members.
More detailed procedures, descriptions, forms, etc. will be added as they
become available through the ongoing work in the respective Work Packages. The
next update will include detailed data summaries for the work that is being
started in that period, and with more detailed partner processes and
descriptions of data sets and consent procedures.
The DMP will be updated at least annually, with the next regular update due in
M24 as D11.16 Data Management Plan 3.
# Executive Summary
This deliverable constitutes the initial Data Management Plan for the
+CityxChange project (824260). It specifies Data Governance and handling of
data in the project, what types of data are expected to be generated in the
project, whether and how it will be made open and accessible for verification
and re-use, how it will be curated and preserved, and details ethical,
privacy, and security issues.
All beneficiaries are informed of the applicable regulations around human
participation, informed consent, data processing, data security, and the
pertinent regulations such as GDPR or H2020 Ethics or FAIR guidelines. When
personal data collection or processing is started, the DMP information will be
updated accordingly to include updated data summaries, consent forms,
compliance, and institutional approval where necessary. Processing of personal
data will respect Data Protection Principles.
This document provides the overview of data handling in the project and
provides the initial guidelines for the project. The project will support
openness according to the EU principle "as open as possible, as closed as
necessary" together with the project ambition of “Open by Default”.
# Section 1: Introduction
This deliverable presents the initial Data Management Plan (DMP) for the
+CityxChange project (824260). This is the first version of the DMP. It
describes overall Data Governance in the project, including the lifecycle of
data to be collected, generated, used, or processed within the project and the
handling of data, including methodologies, data sharing, privacy and security
considerations, legal and regulatory requirements, informed consent, open
access, for during and after the project. The Deliverable is part of Task
11.6: Delivery of Data Management Plan and is linked with Task 11.2: Delivery
of Consortium Plan, and Task 11.1: Project Management. It is further linked to
Ethics
Deliverables D12.1 H - Requirement No. 1 on Human Participants and D12.2 POPD
- Requirement No. 2 on Protection of Personal Data. Some content from D12.1
and D12.2 and the Description of Action (DoA) is reiterated here.
+CityxChange has a strong commitment in place for maximizing dissemination and
demonstration of the value of the implemented targets and measures. Strong
dissemination of results, sharing of data, communication, and replication are
a key success factor in making the project results more accessible,
attractive, evaluable, replicable, and implementable for a broad set of
stakeholders. The project aims to make research data findable, accessible,
interoperable and re-usable (FAIR) in line with the H2020 Guidelines on FAIR
Data Management 1 . +CityxChange participates in the Pilot on Open Research
Data (ORD) and thus delivers this Data Management Plan to define how the
project will implement data management, dissemination, and openness according
to the principle "as open as possible, as closed as necessary" together with
the project ambition of “open by default”.
The consortium will provide Open Data and Open Access to results arising from
the project to support a number of goals, namely: benchmarking with other
projects and comparison of developed measures; improving dissemination,
contribution to the Smart Cities Information System (SCIS), and exploitation
of data and results; improving access and re-use of research data generated
within the project; and knowledge sharing with citizens, the wider public,
interested stakeholders, cities, industry, and the scientific community.
The project is built around transparency and openness. 86% of 148 deliverables
are open, only 20 are confidential, which is a great support for outreach and
replication. Deliverables are expected to be used both internally and
externally, to both inform the project and its team members about activities
and results, and to infirm external stakeholders and potential collaborators
and replicators. This means that documentation is written with a focus on
usefulness for the project and the European Cities and other stakeholders.
Such outreach will also be supported through the inter- and extraproject
collaboration between SCC1 projects in WP9.
In addition, +CityxChange aims to fulfil all ethical requirements and
acknowledges that compliance with ethical principles is of utmost importance
within H2020 and within Smart Cities and Communities projects that involve
citizens and other actors, especially regarding human participants and
processing of personal data. As such, the beneficiaries will carry out the
action in compliance with: ethical principles (including the highest standards
of research integrity); and applicable international, EU and national law.
Beneficiaries will ensure respect for people and for human dignity and fair
distribution of the benefits and burden of research, and will protect the
values, rights and interests of the participants. All partners are aware of
the H2020 Rules of Participation 2 (Sections 13, 14) and the Ethics clauses
in Article 34 of the Grant Agreement and the obligation to comply with ethical
and research integrity principles set out therein and explained in the
annotated Model Grant
Agreement 3 . The project will respect the privacy of all stakeholders and
citizens and will seek free and fully informed consent where personally
identifiable data is collected and processed. Processing of personal data will
respect Data Protection Principles.
Data provided by the project will support a range of goals, such as improving
dissemination and exploitation of data and results; improving access and reuse
of research data; and knowledge sharing with citizens, the wider public,
interested stakeholders, and the scientific community. Documentation and
research data repositories will follow the H2020 best practice, with a focus
on open access, peer-reviewed journal articles, conference papers, and
datasets of various types.
This document is based on the main formal project description of the Grant
Agreement and additional documentation built so far in the project. The
+CityxChange project is part of the H2020 SCC01 Smart
Cities and Communities Programme. The related documents for the formal project
description are the Grant Agreement Number 824260 - CityxChange “Positive City
ExChange” (Innovation Action) entered into force 01.11.2018, including the
core contract, Annex 1 Part A (the Description of Action, DoA: beneficiaries,
work packages, milestones, deliverables, budget), Annex 1 Part B (Description
of project, work, background, partners), Annexes (Supporting documentation,
SEAPs, BEST tables, Dataset mapping, etc.), and Annex 2 - Budget. In addition,
the Consortium Agreement of +CityxChange, entered into force 01.11.2018,
details the Consortium Governance and relations of beneficiaries towards each
other. It includes IP-relevant background, including existing data sources.
The parts about open data, security, and privacy processes are taken from the
internal living documentation on ICT governance.
For the role of the Data Manager, the Coordinator has appointed the Project
Manager. As part of the responsibilities of the Project Management Team, the
Data Manager will review the +CityxChange Data Management Plan and revise it
annually or when otherwise required with input from all partners.
This public document describes the status of the DMP at the time of delivery,
January 2019. It will be refined by future deliverables of the DMP and updates
in individual Work Packages, especially around ICT in WP1 and Monitoring &
Evaluation in WP7.
# Section 2: Ethics, Privacy, and Security Considerations
+CityxChange is an innovation action. It is a complex, cross-sectoral, and
interdisciplinary undertaking that involves stakeholders from widely varying
backgrounds. Furthermore, it is a city-driven project, putting cities and
their citizens in the focus.
This means that a majority of data collection and human participation happens
through activities around automated data collection in energy and mobility
scenarios, monitoring and evaluation, as well as citizen participation,
stakeholder engagement, events or peer-to-peer exchanges in developing and co-
creating innovative solutions.
The approach and structure of the project leads to diverse data being
collected and generated using a range of methodologies. As the data is
heterogeneous, a number of methodologies and approaches can be used.
## Ethics Considerations
All 11 Demonstration Projects in the +CityxChange Lighthouse Cities will
require data processing and most require evaluation involving human research
subjects and the collection of personal data. The ethics self-assessment and
Ethics Summary Report identified three ethical issues: 1) human participation,
2) personal data collection of data subjects, and 3) potential tracking or
observation of participants. Details on these are given in D12.1 and D12.2 and
summarised below.
The details for each demonstration case are summarised in the following table
(from D12.1).
<table>
<tr>
<th>
**Identified Demonstration Projects**
</th>
<th>
**Human**
**Participants**
</th>
<th>
**Collection of personal data**
</th>
<th>
**Tracking or observation of participants**
</th> </tr>
<tr>
<td>
Residential, office, multi-use buildings, Norway
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Energy data, building level,
Norway
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Energy data, system level, Norway
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Transport data, Norway
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Community Engagement, Norway
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Residential, office, multi-use buildings, Ireland
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Energy data, building level, Ireland
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Energy data, system level, Ireland
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Transport data, Ireland
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
Community Engagement,
Ireland
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr> </table>
All activities within +CityxChange will be conducted in compliance with
fundamental ethical principles and will be underpinned by the principle and
practice of Responsible Research and Innovation (RRI) 4 . RRI is important
in the Smart City context where projects work to transform processes around
cities and citizens. Through the +CityxChange approaches of Open Innovation
and Quadruple Helix collaboration, societal actors and stakeholders will work
together to better align the project outcomes with the general values, needs
and expectations of society. This will be done throughout the project, with a
focus within WP9 and WP10 and the city Work Packages. The project uses open
data and openness as part of Open Innovation 2.0 and for stakeholder
participation, through measures such as open licences, public deliverables,
hackathons, outreach, living labs, and existing innovation labs.
The consortium confirms that the ethical standards and guidelines of Horizon
2020 will be rigorously applied, regardless of the country in which the
research will be carried out, and that all data transfers will be permissible
under all necessary legal and regulatory requirements. This was detailed in
D12.1 and D12.2 and is followed up in the sections below.
All proposed tasks are expected to be permissible under the applicable laws
and regulations, given proper observance of requirements. Where appropriate
information and consent of all stakeholders and citizens is mandated, the
consortium will ensure that all necessary procedures are followed,
particularly with regard to the signing, collation, and storing of all
necessary Informed Consent Forms prior to the collection of any data. All
involved stakeholders and citizens will be informed in detail about measures
and the consortium will obtain free and fully informed consent.
All necessary actions will be taken within the project management and by all
beneficiaries to ensure compliance with applicable European and national
regulations and professional codes of conduct relating to personal data
protection. This will include in particular the General Data Protection Regulation (GDPR, 2016/679), which has replaced Directive 95/46/EC on data collection and processing, and respective national requirements, ensuring legal and regulatory
compliance. Ethics considerations will feed into research and data collection
protocols used in the project. This will include the collecting and processing
of personal data as well as surveys and interviews. For all identified issues,
in line with the above standards, ethical approvals will be obtained from the
relevant national data protection authorities and/or institutional boards.
In line with the university partners' existing regulations for social science research, the mapping between participant IDs and persons will be safeguarded and will not be available to anyone other than those working with the data. This will minimise the risk of ethical violations. Since data stemming from other kinds of research might be de-anonymised and reconnected to a person, discipline-specific study designs aim to mitigate or remove this risk for different types of data collection as well.
anonymised or aggregate form for analysis and subsequent publication in
project reports and scientific papers. All beneficiaries will handle all
material with strict care for confidentiality and privacy in accordance with
the legal and regulatory requirements, so that no harm will be done to any
participants, stakeholders, or any unknown third parties. NTNU as the
coordinator has internal guidelines that comply with GDPR and these will be
followed in all data management.
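As an illustration of how such an ID-to-person mapping can be kept apart from working data, the following minimal sketch (all names and the key are hypothetical, not a prescribed project tool) derives stable pseudonyms with a keyed hash, so that researchers work only with pseudonymous records while the key, and thus the mapping, stays with the data controller:

```python
import hashlib
import hmac

# Hypothetical illustration: the secret key is held only by the data
# controller (e.g. in a key vault), never alongside the research data.
SECRET_KEY = b"key-held-only-by-the-data-controller"

def pseudonymise(participant_id: str) -> str:
    """Derive a stable pseudonym from a participant ID with a keyed hash
    (HMAC-SHA256). Without the key, the pseudonym cannot be recomputed
    or reversed, so the working dataset alone does not reveal identities.
    """
    digest = hmac.new(SECRET_KEY, participant_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Researchers receive only records keyed by the pseudonym:
record = {"participant": pseudonymise("NO-TRD-0042"), "survey_score": 7}
print(record)
```

Destroying the key makes the pseudonyms practically irreversible, which is one common step from pseudonymisation towards anonymisation.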
In addition to relevant national data protection authorities, the university
partners have separate institutional ethics boards or respective national
research boards, which will ensure the correct implementation of all human
participation and data protection procedures and protocols around social
science research. In detail, this includes for Ireland the University of
Limerick Research Ethics Governance and respective Faculty Research Ethics
Committees, and for Norway the Norsk samfunnsvitenskapelig datatjeneste (NSD)
- National Data Protection Official for Research. The Lighthouse Cities
Limerick (IE) and Trondheim (NO) will closely collaborate with their local
universities. The Follower Cities Alba Iulia (RO), Písek (CZ), Smolyan (BG),
Sestao (ES), and Võru (EE) will follow similar procedures for any potential
replication of demonstration projects. Details will be developed within the
respective tasks, mostly in WP3, and input into ongoing versions of the DMP.
## Ethics Requirements and Confirmations
Recruitment and informed consent procedures for research subjects will fulfil
the following requirements (cf. D12.1):
1. The procedures and criteria that will be used to identify/recruit research participants.
2. The informed consent procedures that will be implemented for the participation of humans.
3. Templates of the informed consent/assent forms and information sheets (in language and terms intelligible to the participants).
4. The beneficiary currently foresees no participation of children/minors and/or adults unable to give informed consent. If this changes, justification for their participation and the acquirement of consent of their legal representatives will be given in an update of the DMP and relevant documentation within the respective tasks.
In addition, for the processing of personally identifiable data the following
requirements will be observed (cf. D12.2):
1. The contact details of the host institution’s DPO are made available to all data subjects involved in the research. Data protection policy for the project will be coordinated with the DPO.
2. A description of the technical and organisational measures that will be implemented to safeguard the rights and freedoms of the data subjects/research participants, as well as a description of the anonymisation/pseudonymisation techniques that will be implemented.
3. Detailed information on the informed consent procedures linked to the above in regard to data processing.
4. Templates of the informed consent forms and information sheets (in language and terms intelligible to the participants) linked to the above regarding data processing.
5. The project currently does not foresee profiling. In case this changes, the beneficiary will provide an explanation of how the data subjects will be informed of the existence of the profiling and its possible consequences, and how their fundamental rights will be safeguarded, in an update of the DMP.
6. The beneficiaries will explain how all of the data they intend to process is relevant and limited to the purposes of the research project (in accordance with the ‘data minimisation’ principle).
7. The project does not foresee further processing of previously collected personal data. In case this changes, an explicit confirmation that the beneficiary has a lawful basis for the data processing, and that the appropriate technical and organisational measures are in place to safeguard the rights of the data subjects, will be submitted in an update to the DMP.
## Recruitment of Participants and Informed Consent Procedures
The project will engage with a multitude of participants and stakeholders in
different Work Packages and Tasks. These engagements range from open to highly targeted: co-creation workshops, citizen engagement, outreach activities, stakeholder and citizen groups, and more. The Deliverable on Human Participants, D12.1 H - Requirement No. 1, describes general guidelines on the processes to be used. The current drafts of informed consent forms are shown in the Annex of D12.1.
The updates to these will be included in future versions of the DMP.
More detailed requirements and documentation will be generated before the start of any activity involving human participants as study subjects, while fully operating within local, national, and EU regulations. These forms will be detailed and tailored to the individual demonstration projects within the Lighthouse Cities, written in the official language of the country/city where the demonstration takes place, and will include demonstration-specific aspects, referring to the relevant regulations on data protection and/or other legislation where applicable.
For all applicable physical meetings and consortium events, we will inform participants that pictures will be taken, and participants will have to actively consent, with an option to opt out, before pictures are used in project-specific communication. This also concerns photographic evidence of events, demonstrations, etc. taken throughout the project, which may be needed to document task and milestone completion. This will also be taken up with WP10 on communication and WP9 on inter-project collaboration with regard to documentation of events.
## Data Privacy and Personal Data
Detailed requirements and descriptions of the technical and organisational
measures that will be implemented to safeguard the rights and freedoms of the
data subjects/research participants will be described by tasks that implement
them. Where necessary, data will be anonymised or pseudonymised.
Data minimisation principles will be followed in line with applicable legislation. The relevance of the data collected for each task will be assessed 5 , 6 .
As the project will include the participation of numerous cities requiring
multiple data measurements per city, the actual project beneficiaries,
external stakeholders and citizens involved will vary between tasks. The
project will respect the privacy of all stakeholders and citizens and will
seek free and fully informed consent where personally identifiable data is
collected and processed as described above, implementing suitable data
handling procedures and protocols to avoid potential identification of
individuals. This will include participants’ data in activities that use
techniques such as questionnaires, interviews, workshops, or mailing lists as
well as automatic building, energy, and mobility data collection.
The +CityxChange consortium is aware of potential issues arising from data
aggregation from different sources, scales, flows, and devices. Data collected
in the project will thus be anonymised and aggregated as close to the source
as possible. In certain cases, personal data avoidance and minimisation can
eliminate and/or reduce identifiability. For example, energy consumption with
a high temporal resolution can be used to identify personal daily patterns and
routines when gathered at an individual household level. Aggregate data either
with lower temporal resolution (e.g. once a day) or with a lower geographical
resolution (e.g. energy consumption on a district level as is directly
available for energy providers) mitigates this risk. The same approach will be
implemented for mobility data, which can incorporate a much higher level of
personal information and will need to be treated with adequate anonymisation
and aggregation methods.
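A minimal sketch of this aggregation step, assuming pandas and hypothetical column names and readings, could look as follows; household identifiers and sub-daily patterns are dropped before any data is shared:

```python
import pandas as pd

# Hypothetical 15-minute smart-meter readings at household level.
readings = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2019-01-01 00:00", "2019-01-01 00:15",
        "2019-01-01 00:00", "2019-01-01 00:15",
    ]),
    "household_id": ["h1", "h1", "h2", "h2"],
    "district": ["A", "A", "A", "A"],
    "kwh": [0.12, 0.10, 0.31, 0.28],
})

# Aggregate to one value per district and day before sharing: household
# identifiers and sub-daily usage patterns are no longer present.
aggregated = (
    readings
    .groupby(["district", pd.Grouper(key="timestamp", freq="D")])["kwh"]
    .sum()
    .reset_index()
)
print(aggregated)
```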
## Data Protection Officers and GDPR compliance
As Coordinator and host institution, NTNU confirms that it has appointed a
Data Protection Officer (DPO) and the contact details of the DPO will be made
available to all data subjects involved in the research (see D12.2).
Respective partners will also follow their internal data protection and
European GDPR 7 regulations. In line with GDPR, individual beneficiaries are
responsible for their own data processing, so the respective beneficiaries are
to involve their own DPOs, who will ensure the implementation and compliance
of the procedures and protocols in line with internal processes and national
regulations. Processing of personal data will respect the Data Protection Principles as set out in the GDPR: lawfulness, fairness, and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.
Each beneficiary is reminded that under the General Data Protection Regulation
2016/679, the data controllers and processors are fully accountable for the
data processing operations. Any violation of the data subject rights may lead
to sanctions as described in Chapter VIII, Articles 77-84.
## Data Security
The beneficiaries will implement technical and organisational measures to
ensure privacy and data protection rights in the project.
All ICT systems to be developed will be designed to safeguard collected data
against unauthorized use and to comply with all national and EU regulations.
Engineering best practices and state-of-the-art data security measures will be incorporated, as well as GDPR considerations and the respective guidelines and principles. Ultimately, each partner is responsible for their own information
security in developed systems, but for overall guidelines, replication
blueprints, and documentation, the ICT architecture and ecosystem (WP1,
T1.1/T1.2) will incorporate this aspect in the overall development as part of
_data governance_ .
Information security management, which is central to the undertaking of the
project, will follow the guidelines of relevant standards, e.g., ISO/IEC 27001
and 27002 (Code of practice for information security management), to ensure
confidentiality, integrity, and availability. It will additionally include the
Directive on security of network and information systems (‘Cybersecurity
directive’, NIS-Directive 2016/1148) on the security of critical
infrastructures and the ePrivacy Directive 2002/58, as well as European Union
Agency for Network and Information Security (ENISA) guidance. In addition,
data storage will fully comply with the national and EU legal and regulatory
requirements. Partners will ensure and document that used cloud infrastructure
complies with applicable regulations.
## City Processes on Privacy and Security
All project beneficiaries have existing and operational policies regarding
potential ethics issues as well as privacy and security regulations or will
ensure their provision for the tasks where they are necessary.
In addition to the cities, the solution providers in +CityxChange have their
own data protection routines established in their existing operations and in
their development and test activities of the project. They are responsible for establishing compliance with GDPR and other data protection and security regulations. They will further support and implement guidelines from the ICT
tasks (WP1) of +CityxChange.
In the following, we detail the city procedures. Details on all partners will
be given in further updates of the DMP as far as it can be made available.
TK is currently in the process of establishing a formal privacy policy. It
uses internal tools to ensure internal control and audit and to keep track of
all processes around personal data. TK will ensure that it has legal consent,
updated routines, valid risk and vulnerability analysis, in compliance with EU
and Norwegian law. It has a Data Protection Officer (DPO) in the municipality
and an assistant DPO in each business area, following National Datatilsynet 8
regulations. Following these regulations, TK has a project under the municipal
director of organization to ensure compliance with GDPR, and future Norwegian
personal data privacy act regulations; TK continuously aims to maintain
compliance. TK has a strong focus on privacy and security when it comes to ICT
systems, including IoT, encryption, etc. Work is based on ISO 27001 and it
complies with all relevant national and EU policies. It has a dedicated role
of Security Architect and relies on an operational provider for the internal
cloud, who is bound by SLAs. TK is one of the initiators of, and participants in, the Norwegian municipal sector (KS) investigation into a municipal CSIRT (Computer Security Incident Response Team). A CSIRT is one of the key elements of the NIS directive.
LCCC has updated its Data Protection Policy to one that is in line with GDPR
and the Data Protection Act 2018. A new role has been created for GDPR
compliance for the Data Protection Officer - DPO. An existing staff member
with auditing experience has been awarded the full time role and will ensure
compliance with the requirements of the Irish Data Protection Commissioner 9
. The DPO is currently auditing the organisation for GDPR compliance. This
work is being carried out in conjunction with the Digital Strategy Program
Manager. LCCC is currently reviewing its Data Processors Agreements with all
its suppliers that access data. A database of data sets, access, security,
business processes, anonymisation etc. is being documented through this audit
and captured into the organisation's CRM system. LCCC has strict security
policies to protect its systems and data, handled by the ICT Network Team.
LCCC complies with the NIS directive by taking appropriate technical and
organisational measures to secure network and information systems; taking into account the latest developments and considering the potential risks facing the systems; taking appropriate measures to prevent and minimise the impact of
security incidents to ensure service continuity; and notifying the relevant
supervisory authority of any security incident having a significant impact on
service continuity without undue delay.
Alba Iulia Municipality is compliant with the Data Protection Regulation (EU)
2016/679. It has implemented a formal privacy policy. The municipality has issued privacy policy notifications to every employee regarding the new Data Protection Regulation and dedicated a section of the official web page to it. Internal tools will ensure internal control and audit and keep track of all processes around personal data. Alba Iulia will ensure
that it has legal consent, updated routines, valid risk and vulnerability
analysis, in compliance with EU and Romanian law. A Data Protection Officer
(DPO) is appointed for all the municipality departments in line with GDPR and
ensures compliance with national regulations by the National supervisory
Authority for personal data processing 10 . AIM follows its security policy
for ICT use within the municipality organized by the IT team and the head of
IT, with outsourced contract for server management and maintenance, and the
latest audit carried out in 2018. The NIS Directive was transposed into local
law, aligning Romania with the common European framework for responding to
cyber security incidents.
Písek has developed an analysis of municipal processes and their compliance with GDPR. The City Council approved an internal policy directive for GDPR on 2018-10-05 (decision no. 290/18). The role of DPO has been assigned since 01.03.2018 in the City Bureau, in line with the national Office for Personal Data Protection 11 and Act No. 101/2000 Coll., on the Protection of Personal Data (currently amended to meet the GDPR conditions). The Security Policy and IS Security Management Plan is handled by the IT department and the IT Management Committee in reference to Act No. 365/2000 Coll., on Public Administration Information Systems. The NIS
Directive is reflected in Act No. 181/2014 Coll., on Cyber Security, the
Decree of the National Security Authority (NBÚ) No. 316/2014 Coll., the Cyber
Security Order; Decree of NBÚ and Ministry of the interior (MVČR) No. 317/2014
Coll., on Important Information Systems and their Criteria.
The Municipality of Sestao and Sestao Berri comply with all relevant regional, national, and European legislation on data security and privacy, in line with the Spanish data protection authority AGPD 12 and the Basque
data protection authority AVPD 13 . The latter is working on guides for the
adaptation of public administrations to the General Data Protection Regulation
(GDPR) for the Basque municipalities. The data protection role (Delegado de
Protección de Datos) is taken by the General Register of the City of Sestao
(Registro General del Ayuntamiento de Sestao). Detailed data handling for
different data sources of the municipality is described in an extensive list
on data use, justification, and rights. 14
In Smolyan, the policies for information security management are part of the
Integrated Management System of the Municipality; they comply with the
international standards ISO 9001:2008, ISO 14001:2004, and ISO 27001:2013,
for which the municipality is certified. They are implemented by the
Information Security Working Group. A Personal Data Protection System,
complying with Regulation (EC) 2016/679 of the European Parliament and of the
Council of 27 April 2016 has been adopted by the Municipality of Smolyan. The
system has been documented, implemented and maintained through 9
procedures/policies that include internal regulations, technical and
organizational measures, which the Municipality of Smolyan applies. The system
for protection of personal data is approved by Order № РД - 0455 / 23.05.2018
of the Mayor of Smolyan Municipality. It is continuously improved, both in response to significant changes in the legal framework and in other related circumstances.
A DPO has been appointed, following regulations from the Commission for
Personal Data Protection 15 and working with the Information Security
Working Group. The Personal Data Administrator is responsible for the
compliance of the processing of personal data, as required by European and
national legislation.
The system links with the Bulgarian Law for Protection of Personal Data (the Privacy Act) and the Act for Access to Public Information. A Network Security
Management and Remote Access Policy is based on
ISO 27001:2013 with respect to the protection of the information on the
network, the supporting infrastructure and the establishment of rules for
configuring the internal servers owned and managed by the municipality of
Smolyan. It connects to the Management Policy of the Municipality of Smolyan
as well as a total of nine Information Security Management Policies, which are
part of the Integrated Management System of the Municipality.
Võru follows its own privacy policy with its ISKE working group and data
protection working group. Specialists are additionally tasked with supervising implementation of the privacy policy in the organisation, following the rules of the Estonian Data Protection Inspectorate 16 . The DPO has mapped the current situation, maintains documentation, and suggests changes where needed. National principles are observed in the coordination of the respective draft law, including recommendations from the Information Systems Authority.
# Section 3: Data Management, Sharing and Open Access
+CityxChange will distinguish four key categories of data arising from the
project:
* **underlying research data** : data necessary for validation of results presented in scientific papers, including associated metadata, which works hand in hand with the general principle of openness of scientific results. The consortium will provide timely open access to research data in project-independent repositories and link to the respective publications, to allow the scientific community to examine and validate the results based on the underlying data. +CityxChange has a commitment to publish results via Gold Open Access and has allocated a budget for it. The deposition of research data will depend on the type and channel of publication, ranging from associating data with a publication at the publisher, university or national research data repositories, or the use of the OpenAIRE infrastructure, following the H2020 best practice, with particular focus on peer-reviewed journal articles, conference papers, and datasets of various types.
* **operational and observational data** : This category includes curated or raw data arising from the implementation, testing, and operation of the demonstrators (operational data), and data from related qualitative activities, such as surveys, interviews, fieldwork data, engagement activities (observational data). +CityxChange will make this data available in coordination with the ICT ecosystem designed in WP1 and respective partner repositories, opening it up for project partners and stakeholders, and to citizens and interested third parties to support engagement and innovation (WP3), where possible and allowed under regulations and privacy issues.
* **monitoring and evaluation data** : This data will specifically be captured to track KPIs of the project performance in WP7 and will be regularly reported and published to the Smart Cities Information System (SCIS) 17 in a clearly defined and open way.
* **documentation, instruments, and reusable knowledge** : This concerns general and specific documentation of the project and demonstration/implementation projects, including tools, methods, instruments, software, and underlying source code needed to replicate the results. A number of collaboration and document management tools will be used, ranging from collaboration solutions, source code repositories (e.g. git) over document stores to the project website (WP10). Clean and consistent documentation and publication will support dissemination impact. All public Deliverables will be published on the project website in Open Access with open licenses.
## Data Handling Descriptions
Apart from other mechanisms within the project, such as communication,
outreach, citizen participation, peer-to-peer learning workshops and networks,
measures such as sharing of data, documentation, and results will be an
important contributing factor to the project goals. The project will ensure
that research data is ‘findable, accessible, interoperable and reusable’
(FAIR), in line with the H2020 Guidelines on FAIR Data Management.
The following describes the guidelines and expectations for relevant data sets
along with detailed description, metadata, methodology, standards, and
collection procedure. Further details include types of data, data formats and vocabularies, storage, deadlines for publication, data ownership rules, and detailed decisions regarding data management and protection.
Issues to be defined include, for example, the confidentiality needs of
utility providers, the privacy needs of citizens, commercialisation and
cybersecurity issues, together with general ethical, legal, and regulatory
considerations and requirements.
At the time of delivery, most tasks have not yet fully defined the type and
structure of the data that they need or will generate or can make available.
Part of these tasks is also considered and documented in the overall ICT
architecture and interface
Tasks (T1.1 and T1.2) and in the KPI development and data collection in WP7 on
Monitoring and Evaluation. As part of the DMP, procedures for storage, processing, protection, dissemination, retention, and destruction will be collected and documented.
For this, individual Tasks within the Work Packages will specify and implement
approaches related to data collection, management, and processing measures
that are most appropriate based on data avoidance, especially concerning
personally identifiable aspects of data sets, in coordination with Task T11.6
for the DMP.
Individual data collection will be handled by the involved partners and cities
in the Work Packages, keeping much data processing close to the source and
within the originating partners, while providing a loosely coupled overall
architecture through suitable architecture choices and guidelines.
Architectural details will be described by the ICT ecosystem Tasks T1.1, T1.2
in WP1.
To ensure maximum use and quality of open research data and re-use of existing
data for example from city Open Data Portals, the project will base much of
the internal collaboration on structured research data sets collected in
standardized formats in collaboration with WP1/2/3, WP7 and WP10/11. This will help ensure that deposited datasets are also evaluated internally with regard to their use for the scientific community (‘dogfooding’: an organisation using its own products and services internally; in this case, also avoiding duplicate work by making as much data as possible available in structured formats for internal use and external dissemination). Such an approach should
also support outreach activities such as hackathons, by enabling low-barrier
access for external stakeholders. Where possible, research data and associated
metadata (standardised as Dublin Core, W3C DCAT, or CSVW) will be made
available in common standard machine-readable formats such as Linked Open Data
(LOD) in coordination with T1.2, enabling it to be linked to other public
datasets on the
Web and to facilitate discovery and automatic processing. Example approaches
include the ESPRESSO framework 18 , Open ePolicy Group, and others to be
detailed in WP1. In addition, data must also be interoperable to facilitate
ease of access and exchange. As set out in the new EU ‘Interoperability
Framework’ 19 , this is vital to the functioning of pan-European business
and to impact for H2020 projects.
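As an illustration of such dataset-level metadata (a sketch only; the title, publisher, and URL are placeholders, not actual project datasets), a DCAT/Dublin Core description can be serialised as machine-readable, linkable JSON-LD from Python:

```python
import json

# Hypothetical dataset description using DCAT and Dublin Core terms,
# serialised as JSON-LD; all field values below are placeholders.
dataset_metadata = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Aggregated district energy consumption (example)",
    "dct:description": "Daily energy consumption aggregated per district.",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dct:publisher": "+CityxChange consortium",
    "dcat:keyword": ["energy", "smart city", "open data"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:mediaType": "text/csv",
        "dcat:downloadURL": "https://example.org/data/energy-daily.csv",
    },
}

print(json.dumps(dataset_metadata, indent=2))
```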
For all tasks, digital copies of all data will be stored for a minimum of
three years after the conclusion of the grant award or after the data is
released to the public, whichever is later. All information and data gathered
and elaborated will be suitably described in the respective Deliverables. All
public Deliverables will be made available and archived on the project website
and through the EU Community Research and Development Information Service
(CORDIS) for the project 20 . The project aims to make research data and
publications freely available through Open Access and suitable repositories.
Pending detailed descriptions, the following table shows the data handling
summary template for use within the DMP and within Tasks for documentation:
Template for data handling and management summary
<table>
<tr>
<th>
Task/Demo/Activity
</th>
<th>
Task Name
</th> </tr>
<tr>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
Purpose and relevance of data collection and relation to objectives
</td>
<td>
</td> </tr>
<tr>
<td>
Methodology
</td>
<td>
</td> </tr>
<tr>
<td>
Data source, data ownership
</td>
<td>
</td> </tr>
<tr>
<td>
Standards, data formats, vocabularies
</td>
<td>
</td> </tr>
<tr>
<td>
Storage
</td>
<td>
</td> </tr>
<tr>
<td>
Security & Privacy considerations
</td>
<td>
</td> </tr>
<tr>
<td>
Exploitation/Dissemination
</td>
<td>
</td> </tr>
<tr>
<td>
Dissemination Level, Limitations, Approach, Justification
</td>
<td>
</td> </tr>
<tr>
<td>
Stakeholders
</td>
<td>
</td> </tr> </table>
## Access Rights and Procedures
In line with the Consortium Agreement and the Grant Agreement, research
results are owned by the partner that generates them. However, the stated aim
is to make data and results publicly available, whenever possible. Further
access rights and regulations are set forth in the Consortium Agreement as
rights and obligations of partners. In particular, consortium partners will
give each other access to data that is needed to carry out the project.
Partners will furthermore give each other access under fair and reasonable
conditions to exploit their results. For other affiliated entities, access can
be granted under fair and reasonable conditions for data and research output,
as long as it is not covered by the Open Access conditions, provided such
access is in line with the project goals and confidentiality agreements. Data
published or otherwise released to the public will include disclaimers and/or
terms of use as deemed necessary.
Regarding the protection of intellectual property rights, detailed terms for
access rights and collective and individual exploitation of IP are agreed upon
in the Consortium Agreement (Section 8 page 19, Section 9 page 21, Section 10
page 26) and Grant Agreement (Section 3, page 43).
Some Deliverables will include project internals that do not need to be
public. Some others will include detailed specifications for the software
tools and methodologies; these will remain confidential as per the Deliverable
designation as they contain potentially patentable information.
Any data relating to the demonstration sites, e.g. metered data or utility bills, will remain the property of the demonstration sites and will only be shared
with the permission of the demonstration site owner.
Aggregated data for purposes of Monitoring and Evaluation will be shared under
open licenses (cf. Section Dissemination).
Software licenses will be kept as open as possible, with Creative Commons for documentation and GNU-style licenses for software as a default. For example, GPLv3 (GNU General Public License) 20 , MIT 21 , or Apache 22 are open and permissive licenses, with GPL additionally using a share-alike model that allows sharing only under the original conditions (reciprocal license).
Adaptations are expected for commercial partners, aligned with their IPR strategy. A balance is needed between openness and the needs of marketability, patenting, and other IPR issues. This will be handled by the industry partners
together with the cities and is also linked to WP8 on Replication and the
Innovation Manager in the Project Management Team.
## Open Access to publications
The dissemination activities within the project will include a number of
scientific and other publications. +CityxChange is committed to dissemination
and the principle of Open Access for scientific publications arising from the
project, in line with the H2020 Guidelines to Open Access 23 . It further
aims to make research data open as described above. A budget has been set
aside for the academic partners to support gold open access publishing.
Publication of scientific papers will be encouraged by the +CityxChange
consortium. For cases where publication may interfere with seeking protection of IPR or
with publication of confidential information, a permission process for
publishing any information arising from the project is put in place in the
Consortium Agreement. Notification needs to be given at least 45 days before
the publication, with objections subject to the rules of the Consortium
Agreement.
The project aims for Gold Open Access publication of scientific peer-reviewed
papers where possible and will adopt a Green Open Access strategy as a
fallback. At the minimum, this will include self-archiving of publications in known centralized or institutional repositories, for example the NTNU institutional archive NTNU Open 25 , the UL Institutional Repository 24 , or
OpenAIRE 25 . Authors will ensure appropriate bibliographic metadata is
published as well, where possible. It will be in a standard format and include
the terms "European Union (EU)" & "Horizon 2020"; the name of the action,
acronym & grant number as below; publication date, length of the embargo
period, if applicable; and a persistent identifier.
These requirements are also codified in Article 29.2 of the Grant Agreement on
Open Access.
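As a minimal sketch (field names are illustrative, not a prescribed schema; the date, embargo, and identifier values are placeholders), the bibliographic metadata required above could be captured as a structured record before deposit:

```python
# Illustrative record of the bibliographic metadata fields listed above;
# the date, embargo, and identifier values are placeholders.
publication_metadata = {
    "funder": "European Union (EU)",
    "programme": "Horizon 2020",
    "action_name": "Positive City ExChange",          # name of the action
    "acronym": "+CityxChange",
    "grant_number": "824260",
    "publication_date": "2019-06-01",                 # placeholder
    "embargo_period_months": 0,                       # 0 if no embargo applies
    "persistent_identifier": "doi:10.1234/example",   # placeholder DOI
}
```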
Authors will aim to retain copyright and usage rights through open licenses,
such as Creative Commons Attribution License (CC-BY 4.0 26 /CC-BY-SA); otherwise, publisher agreements to similar effect will be pursued. Project
participants will ensure that all publications acknowledge the EU H2020
funding and the name and grant number of the project, including the standard
disclaimer as also found on the title page of this document (+CityxChange,
project number 824260). Deliverables are public by default through a Creative
Commons CC-BY 4.0 license. Other CC licenses can be applied after consultation.
_External third-party material_ will be labeled as such, to clearly identify
such content and exclude it from the free use given for consortium-generated
material. This can be done by excluding such content in the general license
statement and by identifying copyright information next to third-party
material included in documents 27 .
## Open Research Data and Open City Data
Quality-assured data is a cornerstone of scientific research and of industry
and city developments. Research data should be freely, publicly, and
permanently available where possible and appropriate to support validation of
results and re-use of data for example in research, development, and open or
citizen science as well as Open Innovation.
+CityxChange participates in the Pilot on Open Research Data (ORD) 28 and
will thus aim to provide open access to raw and aggregated curated datasets.
The project aims to make research data findable, accessible, interoperable and
re-usable (FAIR) in line with the H2020 Guidelines on FAIR Data Management.
Data will be made accessible for verification and reuse through appropriate
channels and repositories. Limits of access and availability are to be given
in individual data descriptions and will be further developed within the
project with the aim of greater openness.
Where research data is made available, it will be made available in recognized
repositories such as OpenAIRE or Zenodo, or local repositories of
universities or national research institutes, with possible assistance from
national OA desks.
Apart from research data repositories, the cities in +CityxChange are working
on or running their own City Open Data Portals, where general data arising
from the project should also be made available. Data may also be federated
into research repositories or other systems. The Lighthouse Cities have a
strong interest in this and will focus on open data through existing or new
projects.
Insight.Limerick.ie is the Limerick Data as a Service platform that integrates
data about Limerick from multiple sources and provides open access to linked
open data and open APIs at
_http://insight.limerick.ie/_ . Data is available for viewing in charts and
maps and also as open format downloads. While no formal open data policy is
being enforced, the concept of making data available as open data is being
encouraged throughout the workforce. Open data published here will also become
available in the national open data portal www.data.gov.ie.
Trondheim is setting up an open data portal based on CKAN. At the time of
writing, a temporary test version is available on
_https://open.trondheim.kommune.no/_ . During the start of 2019, the interface
will be changed and new data will be added. In TK, there is a general drive
towards making more data available. TK has a wealth of data and is currently in the process of opening up as much non-personally-identifiable data as possible. Data is, and will continue to be, made available in the national open data portal _http://data.norge.no/_ .
The Follower Cities are working towards Open Data, and are already using a
variety of processes and tools to make data available.
Smolyan uses the National Portal for Open Data, as required by the Access to
Public Information Act. The Open Data Portal is a single, central, public web-
based information system that provides for the publication and management of
re-use information in an open, machine-readable format, along with relevant
metadata: _https://opendata.government.bg/?q=smolyan_
Písek follows the national level guideline for Open Data publishing and is
preparing its publication plan as part of Smart Písek. Initial solutions are
implemented for new information systems:
_https://smart.pisek.eu/portal.html_
Alba Iulia is building an open data portal as one component of its smart city
portfolio. It is being tested and will be published when sufficient data is
available. Since open data underpins innovation and out-of-the-box solutions in any area, Alba Iulia is an early partner in the Open Energy project, developed by CivicTech (an IT-based NGO) as one of the Alba Iulia Smart City pilot projects. This is the first platform for open energy-consumption data from public institutions. Its purpose is to monitor consumption transparently, which will enable better consumption-prediction patterns, facilitate the transfer of good institutional practices, encourage investment in energy efficiency, and, in the future, support responsible electricity consumption across society.
Sestao and Võru currently have no portals of their own.
The project aims to make anonymised data sets public, but will aim to strike a
balance between publication of data and privacy and confidentiality issues.
When in doubt, the consortium will refrain from publishing raw datasets and
only report aggregate measures. Decisions will be made on a case-by-case basis
by senior researchers to ensure that privacy, anonymity, and confidentiality
are not breached by publication of datasets or any other type of publication.
In addition, ongoing consultation with the relevant Data Protection Offices
will be ensured during the lifetime of the project.
This will also ensure that data is preserved, available, and discoverable. In
any case of data dissemination, national and European legislation will be
taken into account. To ensure free and open access with clear licensing, the
project will mostly adopt Creative Commons licenses ranging from attribution
to share-alike licenses (such as CC-BY 4.0/CC-BY-SA 4.0).
As above, publications will have bibliographic metadata attached where
possible, which is extended to research data. Where possible, research data
and associated metadata will be made available in common standards and
possibly as Linked Open Data. Annotations will be at minimum at the dataset
level, to support interoperability of data.
There is currently no separate operating budget for this; it will be covered by the budget for website and platform management, by existing infrastructure at the Coordinator, at the cities (for example through their Open Data portals, described above), and at other partners, or through free and open repositories.
## Document Management
As noted in the overall consortium plan (D11.1), documents in the consortium
are handled in one overall platform for easy collaboration and findability of
overall project documentation. The project has set up a shared file repository
in the form of an Enterprise installation of Google Drive, including
collaborative editing tools for documents, spreadsheets, and presentations.
The instance is made available by Trondheim Kommune and is compatible with all
applicable regulations. The repository is only accessible by invitation.
Access will be granted to registered members of the consortium.
Generally, it is recommended not to share highly sensitive data on this system, insofar as it needs to be shared at all. The handling of sensitive documents will be coordinated with the DPO of the host partner. The partners have internal repositories and processes for dealing with such sensitive data and for how it can be shared for research.
Additional sharing and development tools can be set up by specific tasks if
needed, such as version control software that is outside the scope of the
overall platform, but will be documented and linked there.
## Archiving and Preservation
Deliverables will be archived on the project website. The internal datasets
will be backed up periodically so that they can be recovered (for re-use
and/or verifications) in the future. Published datasets, raw or aggregated,
will be stored within internal and external repositories and thereby ensure
sustainability of the data collection. Records and documentation will be in
line with common standards in the research fields to ensure adherence to
standards, practices, and data quality. Data will be retained for three years
after the conclusion of the grant award or after the data are released to the
public, whichever is later.
The LHCs LCCC and TK, together with NTNU as the Coordinator, will ensure long-
term data curation and preservation beyond the project period. It will be
implemented as sustainability of the monitoring and evaluation platform and
data, linked to WP7 and prepared in T7.6 on migration of the monitoring
system, and as sustainability of project documentation and website, linked to
WP10 and WP11.
# Section 4: Dissemination and Exploitation
Dissemination and exploitation of the project outputs and results are important steps towards achieving the project goals. This is done in cooperation with
WP10 on Dissemination and Communication, WP9 on Inter- and Intra Project
Collaboration, WP11 on Project Coordination, and all partners. As detailed
above, data will be made as open as possible. All consortium partners together
take responsibility for exploitation and dissemination of results and to
ensure visibility and accessibility of results. Implementing FAIR data
principles will support the openness and re-use of data and therefore
dissemination and replication. Different dissemination channels are expected to be used and maintained during and after the project, as shown in the
following table:
<table>
<tr>
<th>
**Dissemination type**
</th>
<th>
**Usage**
</th>
<th>
**Policy**
</th> </tr>
<tr>
<td>
**Website**
</td>
<td>
Main reference point for project
dissemination and data description
</td>
<td>
Creative Commons where applicable. External rights clearly marked.
</td> </tr>
<tr>
<td>
**Deliverables**
</td>
<td>
Deliverables to the EU and the public. Disseminated through the project
website cityxchange.eu and the EU Cordis system.
</td>
<td>
Dissemination level set per deliverable, public by default and open with
Creative Commons Attribution CCBY4.0. 86% of 148 deliverables are public, 20
are confidential.
</td> </tr>
<tr>
<td>
**Social Media**
</td>
<td>
Support of communication
activities
</td>
<td>
To be decided. Creative
Commons where applicable
</td> </tr>
<tr>
<td>
**Newsletters**
</td>
<td>
Regular updates and links to website and other channels
</td>
<td>
Creative Commons where applicable
</td> </tr>
<tr>
<td>
**Publications**
</td>
<td>
Scientific and other publications arising from the project
</td>
<td>
Open Access as detailed above
</td> </tr>
<tr>
<td>
**Benchmarking, Monitoring & Evaluation, KPIs**
</td>
<td>
Monitoring of indicators for project and city performance
</td>
<td>
Aggregate KPI data can be openly and publicly reported to SCIS, in line with the overall SCIS policy and license (as updated for dissemination). Limitations due to privacy and data policies may apply. General data governance issues around this will be followed up in future versions of the DMP and in WP1 and WP7. Raw data or supporting data and documentation for evidence archiving (for example for survey-based indicators or detailed personally identifiable data from single areas) will be kept confidential. This will be detailed in the WP7 methodology.
</td> </tr>
<tr>
<td>
**Research data as laid out in Data Management section**
</td>
<td>
Underlying research data of the project
</td>
<td>
Open Access with limitations due to privacy, as detailed above, in accordance
with the
FAIR guidelines on Data
Management in H2020
</td> </tr>
<tr>
<td>
**Any other data**
</td>
<td>
TBD
</td>
<td>
Wherever possible, open through Creative Commons or other open licenses. 'As
open as possible, as closed as necessary'.
</td> </tr> </table>
# Section 5: Conclusion
This deliverable constitutes the initial DMP for +CityxChange at the time of delivery, January 2019. The Project Management Team will regularly follow up
with the consortium members to refine and update the DMP. Responsibilities
reside with NTNU and all consortium members.
More detailed procedures, descriptions, forms, etc. will be added as they
become available through the ongoing work in the respective Work Packages. The
next update will include detailed data summaries for the work that is being
started in that period.
The DMP will be updated at least annually, with the next regular update due in
M12 as D11.7 Data Management Plan 2. Updates will include more detailed
partner processes and descriptions of data sets and consent procedures.
# Introduction
## The FITGEN Project
FITGEN aims at developing a functionally integrated e-axle ready for
implementation in third generation electric vehicles. It is delivered at TRL
and MRL 7 in all its components and demonstrated on an electric vehicle
platform designed for the European market (A-segment reference platform). The
e-axle is composed of a latest generation Buried-Permanent-Magnet Synchronous
Machine, driven by a SiC-inverter and coupled with a high-speed transmission.
It is complemented by a DC/DC-converter for high voltage operation of the
motor in traction and for enabling super-fast charging of the 40-kWh battery
(120 kW-peak) plus an integrated AC/DC on-board charger. The e-axle also
includes a breakthrough cooling system which combines the water motor/inverter
circuit with transmission oil. The FITGEN e-axle delivers significant advances
over the 2018 State of the Art:
* 40 % increase of the power density of the e-motor, with operation up to 18,000 rpm;
* 50 % increase of the power density of the inverter, thanks to the adoption of SiC-components;
* affordable super-fast charge capability (120 kW-peak) enabled by the DC/DC-converter, integrated with single- or 3-phase AC/DC-charger;
* increase of the electric driving range from 740 to 1,050 km (including 75 minutes of charging time) in real-world freeway driving with the use of auxiliaries.
The FITGEN e-axle will enter the market in the year 2023, reaching a
production volume target of 200,000 units/year by 2025 and of 700,000
units/year by 2030. It is designed to be brand-independent and to fit
different segments and configurations of electric vehicles, including hybrids.
The FITGEN consortium includes one car-maker and three automotive suppliers
for motor, power electronics, and transmission, reproducing the complete
supply chain of the e-axle. Their expertise is leveraged by the partnership
with research institutions and academia, constituting an ideal setup for
strengthening the competitiveness of the European automotive industry.
The aim of deliverable D8.1 is to describe the project management structures and procedures intended to ensure that the above-mentioned objectives are met and that the results and deliverables of the project are of high quality, fulfilling the specifications set in the description of work and the grant agreement. Hence, D8.1 is the document defining the quality assurance procedures for the FITGEN project. To enter into force, the quality plan must be accepted by the full FITGEN consortium. Furthermore, it is intended as a dynamic document that is kept up to date as the needs of the project evolve and emerge at the general assembly meetings.
## Scope of the quality plan
The quality plan encompasses the description of the quality assurance
procedures and is addressed to the project partners for the successful
development of the FITGEN project. Hence, the quality plan will guide all consortium partners responsible for preparing and amending deliverables (e.g. WP leaders, Task leaders), the steering committee, the quality coordinator (who is responsible for reviewing completed or updated parts of the quality plan), and any consortium partner responsible for approving work done by third parties to complete deliverables.
## Description of the process
As an integral part of management planning, the quality plan is prepared in
the early project phase to provide the consortium with the guidelines and
conditions for the execution of the technical activities. To ensure the
applicability of the quality plan at any time, the coordinator should perform
quality reviews throughout the duration of the FITGEN project and shall ensure
that the quality plan is available to all involved partners and that the
requirements regarding quality assurance are met.
# Governance structure
## Overall structure
Figure 1 depicts the governance structure of the FITGEN project. As defined in
the Consortium Agreement (CA), the main organizational structure of the
consortium comprises the general assembly (GA), the coordinator and the
steering committee (SC).
**Figure 1. Governance structure**
## General assembly
The General Assembly (GA) is the decision-making body of the consortium.
Decisions can refer to all administrative and technical questions of the
project. The GA consists of at least one member per each consortium partner.
<table>
<tr>
<th colspan="8">
**General Assembly**
</th> </tr>
<tr>
<td>
AIT
</td>
<td>
CRF
</td>
<td>
TEC
</td>
<td>
BRU
</td>
<td>
POLITO
</td>
<td>
ST-I
</td>
<td>
GKN
</td>
<td>
VUB
</td> </tr> </table>
**Table 1. Members of the General Assembly**
## Project coordinator
AIT is the acting Project Coordinator (PC) for FITGEN. The PC is the legal
entity acting as intermediary between the consortium and the Funding
Authority. The Coordinator shall, in addition to its responsibilities as a
Party, perform the tasks assigned to it as described in the Grant Agreement
and the Consortium Agreement.
## Steering committee
The Steering Committee (SC) assists the GA and the Coordinator in all
administrative, technical and quality issues. The SC consists of the
Coordinator (AIT) plus one representative of the WP leaders of the consortium,
i.e. CRF, ST-I, BRU, TEC, POLITO. The project-quality assurance will be
reviewed subsequently during the SC meetings considering:
* the results from project audits and from internal audits;
* the official project deliverables (reports and prototypes);
* the corrective action requests and the preventive actions;
* any project prototype deficiencies and subsystem/part problems, project participants' staff training, and adequacy for the tasks undertaken;
* level of used resources per category and adequacy of spent resources per task.
<table>
<tr>
<th colspan="6">
**Steering Committee**
</th> </tr>
<tr>
<td>
AIT
</td>
<td>
CRF
</td>
<td>
ST-I
</td>
<td>
BRU
</td>
<td>
TEC
</td>
<td>
POLITO
</td> </tr> </table>
**Table 2. Members of the Steering Committee**
## Governing bodies and responsibilities
### Quality Coordinator
AIT will also cover the role of Quality Coordinator, who shall assist and facilitate the work of the SC. The Quality Coordinator will monitor the progress of the FITGEN project and report to the SC any significant deviations in terms of results, quality, timing, and resources spent. Further, the Quality Coordinator will ensure that all project outcomes (such as material used in presentations, conferences, and workshops) have the same high level of quality.
### Work Package Leader
The Work Package (WP) Leaders are responsible for the achievement of the
related WP objectives. Their role is to coordinate all efforts of the
participants in the WP and to monitor the progress by checking status and task
quality. The leading beneficiaries for each WP are defined in the GA and
listed here as follows.
<table>
<tr>
<th>
**WP No.**
</th>
<th>
**Work Package Title**
</th>
<th>
**Lead Participant Short Name**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Electrical architecture of the e-axle
</td>
<td>
CRF
</td> </tr>
<tr>
<td>
2
</td>
<td>
Power electronics: SiC-inverter, DC/DC-converter and on-board charger
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
3
</td>
<td>
High speed permanent magnet electric motor and transmission
</td>
<td>
BRU
</td> </tr>
<tr>
<td>
4
</td>
<td>
Cooling circuit design and control of the e-axle
</td>
<td>
TEC
</td> </tr>
<tr>
<td>
5
</td>
<td>
Prototyping, testing and qualification of e-axle components
</td>
<td>
AIT
</td> </tr>
<tr>
<td>
6
</td>
<td>
Integration of the e-axle into the A-segment platform, demonstration and final
assessment
</td>
<td>
CRF
</td> </tr>
<tr>
<td>
7
</td>
<td>
Exploitation, Dissemination and Communication
</td>
<td>
POLITO
</td> </tr>
<tr>
<td>
8
</td>
<td>
Project Management
</td>
<td>
AIT
</td> </tr> </table>
**Table 3. WP Leaders**
### Task leaders
The role and responsibility of task leaders is similar to that of the WP
leaders but at the Task level (e.g. monitoring and coordinating the technical
progress of the task). The task leaders report to the WP leader. If issues arise, the WP leader discusses them with the task leader and proposes a solution.
# Quality Assurance - Reporting
## 6-months progress reporting
To ensure quality and compliance with the project schedule in each WP, each WP leader is requested to report the technical and financial status of their own WP in written form to the PC every six months, coinciding with the date set for the GA. The reporting begins in Month 7. To achieve a consistent flow of information, the status report shall be sent to the PC no later than 2 weeks after the end of the progress reporting period. The following information shall be
included in the technical section:
* Performed work and achieved results;
* Status of each task;
* Status of each deliverable;
* Status of each milestone;
* Gap analysis to original project plan;
* Assessment of compliance to original project timeline;
* If applicable, countermeasures to regain compliance to original timeline;
* Outlook on work items and targets of next reporting period.
The financial content shall be reported by using an Excel template. This sheet
needs to be submitted along with the technical progress information. The
templates to report the technical and financial status will be made available
by the coordinator in due time.
## Progress report to the EC
At the end of each Reporting Period, progress reports must be submitted to the
EC. According to the Grant Agreement, delivery dates are Month 18 + 60 days
and Month 36 + 60 days. The reports need to include the technical and
financial progress. Reports will be created by the Coordinator with the
support of all WP leaders. The internal 6-month progress reports shall be used
as basis for these documents. To achieve a timely delivery of the reports to
the EC, the following timeline shall be followed:
* 60 days (approximately 8 weeks) before the submission deadline, i.e. at the end of the reporting period: the Coordinator requests contents from the WP leaders by email;
* 6 weeks before submission deadline: WP leaders receive feedback from respective Task leaders;
* 4 weeks before submission deadline: WP leaders provide draft reports to Coordinator;
* 2 weeks before submission deadline: Coordinator sends out feedback on report draft;
* at submission deadline, i.e. 60 days after the end of the reporting period: the Coordinator submits the progress report to the EC.
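Since all milestones in this timeline are fixed offsets from the end of a reporting period, the schedule can be derived mechanically. The sketch below (Python; the example date assumes the January 2019 project start shown in Table 7) is one possible way to compute the milestone dates, not an official project tool:

```python
from datetime import date, timedelta

def reporting_timeline(period_end: date) -> dict:
    """Derive the internal reporting milestones from the end of a reporting
    period; the submission deadline is 60 days after the period end."""
    deadline = period_end + timedelta(days=60)
    return {
        "coordinator requests contents from WP leaders": period_end,
        "WP leaders receive feedback from task leaders": deadline - timedelta(weeks=6),
        "WP leaders provide draft reports to coordinator": deadline - timedelta(weeks=4),
        "coordinator sends feedback on report draft": deadline - timedelta(weeks=2),
        "coordinator submits progress report to the EC": deadline,
    }

# Example: the first reporting period ends at Month 18 (June 2020).
for step, day in reporting_timeline(date(2020, 6, 30)).items():
    print(f"{day.isoformat()}  {step}")
```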
# Quality Assurance – Creation of Deliverables
## Dissemination level
In FITGEN, the deliverables can fall under two different confidentiality
levels:
* Confidential (CO): Only accessible for consortium members (including the Commission Services);
* Public (PU).
Each Deliverable is assigned a dissemination level (DL), as per the Table below.
<table>
<tr>
<th>
**Deliverable number**
</th>
<th>
**Deliverable name**
</th>
<th>
**WP**
</th>
<th>
**Short name of lead participant**
</th>
<th>
**Type**
</th>
<th>
**DL**
</th>
<th>
**Delivery date (Month)**
</th> </tr>
<tr>
<td>
D1.1
</td>
<td>
Driving cycles specification and end-user requirements
</td>
<td>
WP1
</td>
<td>
CRF
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
4
</td> </tr>
<tr>
<td>
D1.2
</td>
<td>
Architecture and interface of the e-axle/charger
</td>
<td>
WP1
</td>
<td>
BRU
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
6
</td> </tr>
<tr>
<td>
D1.3
</td>
<td>
Reliable and scalable design ready for mass manufacturing and dismantling
</td>
<td>
WP1
</td>
<td>
VUB
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
9
</td> </tr>
<tr>
<td>
D2.1
</td>
<td>
Design of SiC-inverter, DC/DC-converter and on-board charger
</td>
<td>
WP2
</td>
<td>
ST-I
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
18
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
Electrical architecture and interfaces
</td>
<td>
WP2
</td>
<td>
AIT
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
21
</td> </tr>
<tr>
<td>
D3.1
</td>
<td>
E-axle specification input
</td>
<td>
WP3
</td>
<td>
CRF
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
9
</td> </tr>
<tr>
<td>
D3.2
</td>
<td>
BPM-SM and transmission
development
</td>
<td>
WP3
</td>
<td>
GKN
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
18
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
BPM-SM, SiC-inverter and
transmission integration
</td>
<td>
WP3
</td>
<td>
ST-I
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
24
</td> </tr>
<tr>
<td>
D4.1
</td>
<td>
Control system design
</td>
<td>
WP4
</td>
<td>
TEC
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
24
</td> </tr>
<tr>
<td>
D4.2
</td>
<td>
Cooling system design and integration
</td>
<td>
WP4
</td>
<td>
AIT
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
24
</td> </tr>
<tr>
<td>
D5.1
</td>
<td>
Report on the prototyping of the components
</td>
<td>
WP5
</td>
<td>
BRU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
24
</td> </tr>
<tr>
<td>
D5.2
</td>
<td>
Integration of the components and bench qualification
</td>
<td>
WP5
</td>
<td>
TEC
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
30
</td> </tr>
<tr>
<td>
D6.1
</td>
<td>
E-axle integration report
</td>
<td>
WP6
</td>
<td>
CRF
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
33
</td> </tr>
<tr>
<td>
D6.2
</td>
<td>
Verification of the vehicle
functionalities
</td>
<td>
WP6
</td>
<td>
CRF
</td>
<td>
R
</td>
<td>
CO
</td>
<td>
33
</td> </tr>
<tr>
<td>
D6.3
</td>
<td>
Vehicle testing report
</td>
<td>
WP6
</td>
<td>
AIT
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
36
</td> </tr>
<tr>
<td>
D6.4
</td>
<td>
E-axle assessment report (TRL, MRL and LCA)
</td>
<td>
WP6
</td>
<td>
VUB
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
36
</td> </tr>
<tr>
<td>
D7.1
</td>
<td>
Dissemination strategy
</td>
<td>
WP7
</td>
<td>
POLITO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
6
</td> </tr>
<tr>
<td>
D7.2
</td>
<td>
Project website and communication strategy
</td>
<td>
WP7
</td>
<td>
AIT
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
6
</td> </tr>
<tr>
<td>
D7.3
</td>
<td>
Exploitation strategy
</td>
<td>
WP7
</td>
<td>
BRU
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
21
</td> </tr>
<tr>
<td>
D7.4
</td>
<td>
Final report and summary of published documents
</td>
<td>
WP7
</td>
<td>
POLITO
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
36
</td> </tr>
<tr>
<td>
D8.1
</td>
<td>
Quality plan, contracts and reports
</td>
<td>
WP8
</td>
<td>
AIT
</td>
<td>
R
</td>
<td>
PU
</td>
<td>
3
</td> </tr> </table>
**Table 4. Deliverables.**
## Templates
The official template for deliverables can be found on the FITGEN SharePoint
at the link:
_https://portal.ait.ac.at/sites/FITGEN/SitePages/FITGEN%20Home.aspx?RootFolder=%2Fsites%2FFITGEN%2_
_FShared%20Documents%2FDocument%20templates
&FolderCTID=0x012000BA32B24263A1BC4E82338CD7 _ _8794D48E
&View=%7B0CAA86DF%2DDA3A%2D469F%2DA229%2D9CCD602B7033%7D _
These templates must be used by all project partners.
## Reviewing and approval
To assure good quality deliverables, they need to be reviewed and checked
before submission. This review shall be done by the PC and previously
appointed reviewers. The following timeline shall be followed for the
submission of deliverables:
* 8-to-6 weeks before the deadline: the coordinator reminds the deliverable owner of the upcoming submission deadline;
* 4 weeks before the deadline: the deliverable owner submits the draft to the appointed reviewer;
* 3 weeks before the deadline: the appointed reviewer proposes an amended version to the deliverable owner;
* 2 weeks before the deadline: the deliverable owner submits the draft to the PC;
* From 2 weeks before the deadline and until the deadline: the PC and the deliverable owner work together to finalise the deliverable. The final version of the deliverable is always uploaded in SYGMA by the PC.
Reviewers were nominated for each deliverable at the KoM. The guidelines
for their selection are as follows: the reviewer must be a representative of
the entity most involved in the task(s) to which the deliverable belongs, and
must be different from the deliverable owner and the PC. Reviewers for
each Deliverable are proposed by the PC. The GA needs to be consulted for
final approval of the Reviewers. The list of nominated reviewers is reported
below.
<table>
<tr>
<th>
**Deliverable number**
</th>
<th>
**Deliverable name**
</th>
<th>
**WP**
</th>
<th>
**Leader**
</th>
<th>
**Reviewer**
</th>
<th>
**Approver**
</th> </tr>
<tr>
<td>
D1.1
</td>
<td>
Driving cycles specification and end-user requirements
</td>
<td>
WP1
</td>
<td>
CRF
</td>
<td>
POLITO
</td>
<td>
AIT
</td> </tr>
<tr>
<td>
D1.2
</td>
<td>
Architecture and interface of the e-axle/charger
</td>
<td>
WP1
</td>
<td>
BRU
</td>
<td>
POLITO
</td> </tr>
<tr>
<td>
D1.3
</td>
<td>
Reliable and scalable design ready for mass manufacturing and dismantling
</td>
<td>
WP1
</td>
<td>
VUB
</td>
<td>
POLITO
</td> </tr>
<tr>
<td>
D2.1
</td>
<td>
Design of SiC-inverter, DC/DC-converter and on-board charger
</td>
<td>
WP2
</td>
<td>
ST-I
</td>
<td>
BRU
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
Electrical architecture and interfaces
</td>
<td>
WP2
</td>
<td>
AIT
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D3.1
</td>
<td>
E-axle specification input
</td>
<td>
WP3
</td>
<td>
CRF
</td>
<td>
BRU
</td> </tr>
<tr>
<td>
D3.2
</td>
<td>
BPM-SM and transmission
development
</td>
<td>
WP3
</td>
<td>
GKN
</td>
<td>
BRU
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
BPM-SM, SiC-inverter and
transmission integration
</td>
<td>
WP3
</td>
<td>
ST-I
</td>
<td>
BRU
</td> </tr>
<tr>
<td>
D4.1
</td>
<td>
Control system design
</td>
<td>
WP4
</td>
<td>
TEC
</td>
<td>
VUB
</td> </tr>
<tr>
<td>
D4.2
</td>
<td>
Cooling system design and integration
</td>
<td>
WP4
</td>
<td>
AIT
</td>
<td>
TEC
</td> </tr>
<tr>
<td>
D5.1
</td>
<td>
Report on the prototyping of the components
</td>
<td>
WP5
</td>
<td>
BRU
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D5.2
</td>
<td>
Integration of the components and bench qualification
</td>
<td>
WP5
</td>
<td>
TEC
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D6.1
</td>
<td>
E-axle integration report
</td>
<td>
WP6
</td>
<td>
CRF
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D6.2
</td>
<td>
Verification of the vehicle
functionalities
</td>
<td>
WP6
</td>
<td>
CRF
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D6.3
</td>
<td>
Vehicle testing report
</td>
<td>
WP6
</td>
<td>
AIT
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D6.4
</td>
<td>
E-axle assessment report (TRL, MRL and LCA)
</td>
<td>
WP6
</td>
<td>
VUB
</td>
<td>
ST-I
</td> </tr>
<tr>
<td>
D7.1
</td>
<td>
Dissemination strategy
</td>
<td>
WP7
</td>
<td>
POLITO
</td>
<td>
VUB
</td> </tr>
<tr>
<td>
D7.2
</td>
<td>
Project website and communication strategy
</td>
<td>
WP7
</td>
<td>
AIT
</td>
<td>
POLITO
</td> </tr>
<tr>
<td>
D7.3
</td>
<td>
Exploitation strategy
</td>
<td>
WP7
</td>
<td>
BRU
</td>
<td>
POLITO
</td> </tr>
<tr>
<td>
D7.4
</td>
<td>
Final report and summary of published documents
</td>
<td>
WP7
</td>
<td>
POLITO
</td>
<td>
VUB
</td> </tr>
<tr>
<td>
D8.1
</td>
<td>
Quality plan, contracts and reports
</td>
<td>
WP8
</td>
<td>
AIT
</td> </tr> </table>
**Table 5. Deliverables’ reviewers.**
# Quality Assurance – Management of Risks
Proper risk management is key to the execution of FITGEN. The critical
risks for the implementation of the project have been identified and reported
in Table 6. Descriptions of the risks (including the level of likelihood, i.e.
low/medium/high), the WPs involved and risk-mitigation measures are
preliminarily identified, to the best of the consortium's knowledge. The
risk management table will be revised at each GA; updates will be made if
new risks arise during the execution of the technical activities. In that
case, appropriate mitigation measures will also be indicated, ensuring an
appropriate response to the changing environment.
<table>
<tr>
<th>
**Description of risk (indicate level of likelihood: Low/Medium/High)**
</th>
<th>
WP(s) involved
</th>
<th>
Proposed risk-mitigation measures
</th> </tr>
<tr>
<td>
**Low**
Vehicle validation platform
(demonstrator) not available.
</td>
<td>
All WPs
</td>
<td>
The commitment of CRF to provide the donor vehicle will be recorded in the CA.
</td> </tr>
<tr>
<td>
**High**
Project partner cannot provide e-axle and/or component prototype (and/or mock-
up parts for first-level testing) in time or within budget.
</td>
<td>
WP3
WP4
</td>
<td>
Continuous monitoring of the FITGEN project progress by task and work
package leaders (i.e. timely reporting to the SC of any significant deviations
in terms of results) to take corrective and/or preventive actions against any
prototype deficiencies or problems in terms of subsystems/parts/mock-up parts.
</td> </tr>
<tr>
<td>
**Low**
Developed and realized components of the e-axle do not reach the expected or
simulated behavior (power, efficiency, etc.).
</td>
<td>
WP3
WP4
</td>
<td>
Testing of all relevant prototypes on the testbench will be performed prior to
vehicle integration test.
</td> </tr>
<tr>
<td>
**Medium**
SotA data of the vehicle and its components cannot be assessed in the required
depth of detail.
</td>
<td>
WP1
</td>
<td>
A non-disclosure agreement will be set up, which must be signed by and commits
all project partners. This allows the partners to share data with the
consortium and to make relevant data accessible.
</td> </tr>
<tr>
<td>
**Medium**
Temperatures exceed limits in integrated e-drivetrain, especially in
electronics/power electronics compartment.
</td>
<td>
WP4
WP5
</td>
<td>
Accurate loss calculation with possible crosscheck between different design
tools in the consortium (analytical and finite elements) and first level
measurements will be applied. Temperature sensors will be inserted in
prototypes. Derating and safety functions will be included in the control
system.
</td> </tr>
<tr>
<td>
**Medium**
Prototype components get damaged during first or second level testing.
</td>
<td>
WP5
WP6
</td>
<td>
Spare parts for all critical components will be purchased/produced before
starting the tests. Test sequences will start with the lowest power rating and
end with short-term overload tests (the involved partners are very experienced
in testing).
</td> </tr>
<tr>
<td>
**High**
Packaging of the developed components and systems leads to box volume problems
in the vehicle validation platform.
</td>
<td>
WP2
WP3
WP4
</td>
<td>
Exchange of coarse CAD data and simulation models (vehicle, modules and
components) must start at the very beginning of the project.
</td> </tr>
<tr>
<td>
**High**
The investigations show that the benefits of the proposed technological
improvements have a lower impact on the vehicle (energy consumption, vehicle
weight, comfort or maximum driving range) than expected.
</td>
<td>
WP1
WP6
WP7
</td>
<td>
Suitable types and the right combination of the novel technologies must be
found, and synergistic effects must be used in order to maximize the impact on
the vehicle.
Before integrating the new technologies into the vehicle, their operating
behavior will be analyzed, and the expected benefit will be adapted.
Recognizing deviations from the planned improvements early makes it possible
to find solutions that balance the lower benefit with other components or
technologies.
</td> </tr> </table>
**Table 6. Critical risks for implementation**
# External communications and publications
## Logo
Figure 2 shows the official FITGEN project logo. This logo was presented
by the GA during the Kick-off meeting of FITGEN and subsequently approved by
the consortium. The use of the official project logo is required on external
and internal publications.
The project logo is located on the FITGEN SharePoint at the link:
_https://portal.ait.ac.at/sites/FITGEN/_layouts/15/start.aspx#/SitePages/WP7%20-_
_%20Exploitation%2C%20Dissemination%20and%20Communication.aspx?RootFolder=%2Fsites%2FFITGEN%_
_2FWP7%20documents%2FT7%2E2%20%2D%20Website%2C%20social%20media%20and%20communicatio_
_n%20towards%20stakeholders%20and%20citizens%2FLogos%20%2D%20FITGEN
&FolderCTID=0x01200034 _ _E985CF7C09D7459B979B5536FE7CE3
&View=%7B7757C34F%2DF1C2%2D48AE%2D9052%2D59B52F00DC4 _ _A%7D_
## Templates
To ensure that presented contents are clearly connected to FITGEN and to
create a recognition factor for the project itself, the official project
presentation template must be used. It can be found on the FITGEN SharePoint
at the link:
_https://portal.ait.ac.at/sites/FITGEN/SitePages/FITGEN%20Home.aspx?RootFolder=%2Fsites%2FFITGEN%2_
_FShared%20Documents%2FDocument%20templates
&FolderCTID=0x012000BA32B24263A1BC4E82338CD7 _ _8794D48E
&View=%7B0CAA86DF%2DDA3A%2D469F%2DA229%2D9CCD602B7033%7D _
## Rules
On all project publications, the funding by the European Union needs to be
acknowledged. This includes the usage of the FITGEN project logo and the EU
flag in sufficiently high resolution. For the acknowledgement itself, the
following sentence is mandatory:
_This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No. 824335._
Additionally, dissemination documents can include the following disclaimer(s):
_The content of this publication is the sole responsibility of the Consortium
partners listed herein and does not necessarily represent the view of the
European Commission or its services._
_This publication reflects only the author’s view and the Innovation and
Networks Executive Agency (INEA) is not responsible for any use that may be
made of the information it contains._
## Procedures
Before executing any formal publication or external communication, the PC
needs to be informed in advance. The PC gives final confirmation of the
content and visual appearance. To ensure an orderly procedure, the following
deadlines shall be met:
* 6 weeks before submission: notification to the Coordinator and distribution to the Project Consortium;
* 2 weeks before submission: summarised feedback and approval from the Project Consortium.
The PC regards the publication or communication as authorised if no objection
from the partners is received within the feedback period. However, the
publishing party needs to receive a written confirmation of that approval
before any material can be submitted or communicated.
# Communication and meeting management
## Communication
Standard working communication shall be done via phone or email. Important
communication and exchange of information (e.g. to announce the release of new
deliverables, to notify the project partners about the availability of new
information and events, or to circulate meeting agendas) should be done via
email to enable tracking and follow-up. To enable the coordinator to maintain
an overview of the entire project, the coordinator contacts shall be included
in CC on all technical and administrative e-mails.
For project documents, especially large ones, the project SharePoint should be
used as the definitive repository:
_https://portal.ait.ac.at/sites/FITGEN/_layouts/15/start.aspx#/_
Instead of sending attachments via email, good practice is to upload them to
the appropriate folder on the SharePoint and to reference them (hyperlink) in
the email.
## Bi-weekly plenary calls
Web meetings are a powerful tool for staying in frequent touch with partners
via display-aided telephone conferences (e.g. GoToMeeting). Partners need only
basic equipment (i.e. a telephone and a standard workstation) to use
this type of meeting environment. Bi-weekly web meetings are organized by the
PC. In addition to these regular meetings, spontaneous web meetings at short
notice are possible at any time to save resources (e.g. travel budget and
time) and for WP-dedicated discussions. The same principles apply to web
meetings as to physical meetings: all required documents, including an agenda
and a participant list, must be shared with the attendees before the meeting.
## Face-to-Face meetings
The main pillar of communication in the project will be Face-to-Face
meetings. To foster the personal exchange of project participants across all
WPs, these meetings will be held on a regular basis. Two types of meetings
shall be held during the project:
* General Assembly meetings;
* Steering Committee meetings.
The following target dates/locations have been set for the GA meetings:
<table>
<tr>
<th>
**Meeting ID**
</th>
<th>
**Project Month**
</th>
<th>
**Location**
</th> </tr>
<tr>
<td>
GA - 1 (Kick-Off Meeting)
</td>
<td>
M1 (Jan. 2019)
</td>
<td>
AIT (Vienna, AT)
</td> </tr>
<tr>
<td>
GA - 2
</td>
<td>
M7 (July 2019), 2nd/3rd July (tentative)
</td>
<td>
ST-I (Catania, IT)
</td> </tr>
<tr>
<td>
GA - 3
</td>
<td>
M13 (Jan. 2020)
</td>
<td>
BRU (Sennwald, CH)
</td> </tr>
<tr>
<td>
GA - 4 (MidTerm Meeting)
</td>
<td>
M19 (July 2020)
</td>
<td>
AIT (Vienna, AT)
</td> </tr>
<tr>
<td>
GA - 5
</td>
<td>
M25 (Jan. 2021)
</td>
<td>
TEC (Bilbao, ES)
</td> </tr>
<tr>
<td>
GA - 6
</td>
<td>
M31 (July 2021)
</td>
<td>
VUB (Brussels, BE)
</td> </tr>
<tr>
<td>
GA - 7 (Final Meeting)
</td>
<td>
M36 (Dec. 2021)
</td>
<td>
CRF (Torino, IT)
</td> </tr> </table>
**Table 7. GA meetings.**
Ordinary meetings of the Steering Committee shall be held at least quarterly.
Extraordinary meetings can be called at any time upon written request of any
member of the SC. At each meeting, the location for the following meeting
shall be discussed and decided by the GA.
## Meeting minutes
Meeting minutes shall be prepared by the PC. After the meeting, the minutes
will be distributed among all participants and the coordinator within 10
calendar days. The partners should send comments on the minutes within 10
working days. Within a further 2 working days, the final revised meeting
minutes shall be circulated.
# Electronic Data Management
## Document creation
To ensure compatibility and open access to all electronic project documents,
common standards for data formats need to be defined. Electronic project
documents shall be created using the Microsoft Office (2013 or later)
software suite. The following data formats need to be used:
* Text documents: Microsoft Office Word Document (.docx);
* Presentations: Microsoft Office PowerPoint Presentation (.pptx);
* Spreadsheets: Microsoft Office Excel Workbook (.xlsx).
All documents shall use the English (United Kingdom) language. Common rules
for file names need to be followed. File names need to comply with the
following rule:
* FITGEN_Index_DocName_Date_Version_Partner.ext; with the following meanings:
* Index: number of the WP or deliverable, e.g. WP1 or D1.4;
* DocName: short name suitable for content identification, e.g. KickOff;
* Date: date of document creation, e.g. 2017-11-06;
* Version: version number, e.g. V1;
* Partner: acronym of the partner responsible for the document, e.g. AIT;
* ext: file extension, e.g. .docx.
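Because the naming rule is fully mechanical, compliance can be checked automatically. The following sketch (Python; the regular expression is an assumption inferred from the rule and examples above, not an official FITGEN tool) illustrates such a check:

```python
import re

# Pattern inferred from the naming rule above:
# FITGEN_Index_DocName_Date_Version_Partner.ext
FILENAME_RE = re.compile(
    r"^FITGEN"
    r"_(?:WP\d+|D\d+\.\d+)"      # Index: WP number or deliverable, e.g. WP1 or D1.4
    r"_[A-Za-z0-9-]+"            # DocName: short content identifier, e.g. KickOff
    r"_\d{4}-\d{2}-\d{2}"        # Date of creation, e.g. 2017-11-06
    r"_V\d+"                     # Version, e.g. V1
    r"_[A-Z-]+"                  # Partner acronym, e.g. AIT or ST-I
    r"\.(?:docx|pptx|xlsx)$"     # agreed Microsoft Office formats
)

def is_valid_filename(name: str) -> bool:
    """Return True if a file name follows the FITGEN naming convention."""
    return FILENAME_RE.match(name) is not None

assert is_valid_filename("FITGEN_WP1_KickOff_2017-11-06_V1_AIT.docx")
assert not is_valid_filename("kickoff_notes_final.docx")
```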
## Data transfer and storage
Presentations and general documents shall be shared via SharePoint. This
system is administered and maintained by the PC. After invitation by the PC,
the storage location can be accessed via the following URL:
_https://portal.ait.ac.at/sites/FITGEN/_layouts/15/start.aspx#/_
**Figure 3. FITGEN SharePoint.**
# Conclusions
Procedures and standards to be used in the FITGEN project to guarantee the
quality of its outcomes are formulated in D8.1, in full compliance with all
contractual requirements framed in the GA and CA.
# Risk Register
<table>
<tr>
<th>
**Risk No.**
</th>
<th>
**What is the risk**
</th>
<th>
**Probability of risk occurrence** 1
</th>
<th>
**Effect of risk** 2
</th>
<th>
**Solutions to overcome the risk**
</th> </tr>
<tr>
<td>
n.a.
</td>
<td>
n.a.
</td>
<td>
n.a.
</td>
<td>
n.a.
</td>
<td>
n.a.
</td> </tr> </table>
# Project partners
<table>
<tr>
<th>
**Participant No.**
</th>
<th>
**Participant short name**
</th>
<th>
**Participant organization name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
1 (Coordinator)
</td>
<td>
**AIT**
</td>
<td>
AIT Austrian Institute of Technology GmbH
</td>
<td>
Austria
</td> </tr>
<tr>
<td>
2
</td>
<td>
**CRF**
</td>
<td>
Centro Ricerche FIAT SCPA
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
3
</td>
<td>
**TEC**
</td>
<td>
Fundacion Tecnalia Research & Innovation
</td>
<td>
Spain
</td> </tr>
<tr>
<td>
4
</td>
<td>
**BRU**
</td>
<td>
BRUSA Elektronik AG
</td>
<td>
Switzerland
</td> </tr>
<tr>
<td>
5
</td>
<td>
**POLITO**
</td>
<td>
Politecnico di Torino
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
6
</td>
<td>
**ST-I**
</td>
<td>
STMicroelectronics SRL
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
7
</td>
<td>
**GKN**
</td>
<td>
Guest, Keen and Nettlefolds
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
8
</td>
<td>
**VUB**
</td>
<td>
Vrije Universiteit Brussel
</td>
<td>
Belgium
</td> </tr> </table>
_This project has received funding from the European Union’s H2020 research
and innovation programme under Grant Agreement no. 824335._
_This publication reflects only the author’s view and the Innovation and
Networks Executive Agency (INEA) is not responsible for any use_
_that may be made of the information it contains._
## Appendix A – Quality Assurance
The following questions should be answered by the WP Leader, the reviewers and
the coordinator as part of the Quality Assurance Procedure. Questions answered
with NO should be explained. The author will then produce an updated version
of the Deliverable. Only when all reviewers have answered all questions with
YES can the Deliverable be submitted to the EC.
NOTE: For public documents, this Quality Assurance part will be removed before
publication.
<table>
<tr>
<th>
**Question**
</th>
<th>
**Deliverable Leader**
</th>
<th>
**Peer reviewer**
</th>
<th>
**Coordinator**
</th> </tr>
<tr>
<td>
</td>
<td>
Michele DE GENNARO
(AIT)
</td>
<td>
Boschidar GANEV
(AIT)
</td>
<td>
Michele DE GENNARO
(AIT)
</td> </tr>
<tr>
<td>
**1. Do you accept this deliverable as it is?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr>
<tr>
<td>
**2. Is the deliverable completely ready (or are any changes required)?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr>
<tr>
<td>
**3. Does this deliverable correspond to the DoW?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr>
<tr>
<td>
**4. Is the Deliverable in line with the FITGEN objectives?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr>
<tr>
<td>
**a. WP Objectives?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr>
<tr>
<td>
**b. Task Objectives?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr>
<tr>
<td>
**5. Is the technical quality sufficient?**
</td>
<td>
YES
</td>
<td>
YES
</td>
<td>
YES
</td> </tr> </table>
# Introduction
The purpose of this document is to provide a data management plan (DMP) for
ReachOut. According to the guidelines of the European Commission [EC13a], “the
purpose of the DMP is to support the data management life cycle for all data
that will be collected, processed or generated by the project.”
The ReachOut project is a Coordination and Support Action that aims to help
H2020 projects in the area of software technologies to establish their
software ecosystems by providing them with all the necessary resources to
conduct beta-testing campaigns: a technical infrastructure for the publication
of beta releases, questionnaires and collaterals; a comprehensive framework
for the development of beta-testing campaigns; and outreach activities to
promote the platform for the collective benefit of the projects.
Hence, the data management plan relates to data provided by the projects for
their beta-testing campaigns and the software evaluation results.
# Data Management Objectives
The guideline [EC13a] provides a checklist of objectives to be taken into
account when defining data management principles. In the following, we relate
our approach to these:
<table>
<tr>
<th>
Objective
</th>
<th>
Description
</th>
<th>
ReachOut actions
</th> </tr>
<tr>
<td>
Discoverable
</td>
<td>
Are the data and associated software produced and/or used in the project
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier)?
</td>
<td>
The ReachOut platform provides a list of the beta-testing campaigns including
their name, description, the organiser and technical details.
</td> </tr>
<tr>
<td>
Accessible
</td>
<td>
Are the data and associated software produced and/or used in the project
accessible and in what modalities, scope, licenses (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.)?
</td>
<td>
The software tested in the campaigns and its documentation are provided under
the license selected by the research project organising the campaign. The
software evaluation results will remain private and accessible only by the
campaign organisers.
</td> </tr>
<tr>
<td>
Assessable and
intelligible
</td>
<td>
Are the data and associated software produced and/or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review; is data
provided in a way that judgements can be made about
their reliability and the competence of those who created them)?
</td>
<td>
The software and documentation for beta-testing campaigns are provided by the
projects. It will be publicly accessible at the ReachOut platform.
</td> </tr>
<tr>
<td>
Useable beyond the original purpose for which it was collected
</td>
<td>
Are the data and associated software produced and/or used in the project
useable by third parties even a long time after the collection of the data
(e.g. is the data safely stored in certified repositories for long-term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for the wider
public needs and usable for the likely purposes of non-specialists)?
</td>
<td>
The ReachOut project will not use repositories certified for long-term
storage. The project aims to help other H2020 projects to improve the quality
of the software they produce. The results of beta-testing will remain
confidential and accessible only to the project which organised the testing
campaign. The tested software and its documentation belong to the projects as
well; they will follow their own data management plans.
</td> </tr>
<tr>
<td>
Interoperable to
specific quality standards
</td>
<td>
Are the data and associated software produced and/or used in the project
interoperable, allowing data exchange between researchers, institutions,
organisations, countries, etc. (e.g. adhering to standards for data annotation
and data exchange, compliant with available software applications, and
allowing recombinations with different datasets from different origins)?
</td>
<td>
ReachOut will not produce any new data that would be publicly available to
other researchers or institutions, apart from the deliverables about project
activities, which will be published on the project web site.
</td> </tr> </table>
# Data Collection and Quality
The ReachOut project will help H2020 projects to organise their beta-testing
campaigns. It will provide a technical platform in which the projects will be
able to publish their testing campaign information and provide links to the
software to be tested and its documentation. The feedback of beta-testers will
be collected with the help of project/software-specific surveys. The ReachOut
project team will actively support the testing campaign organisers and the
beta-testers in producing the data needed for the success of the campaign.
Hence, data quality is achieved by a continuous dialogue between the projects
and beta-testers on the one hand, and the ReachOut consortium on the other.
# Data Sharing
The software to be tested and its documentation will be shared through the
ReachOut platform, but only for the period of the beta-testing campaign. The
users will get access to the software to be tested, its documentation and the
instructions regarding the testing procedure.
# Publications and Deliverables
Publications produced in the ReachOut project, and deliverables (with
dissemination level “public”) after approval by the European Commission, will
be published on the ReachOut web site as fast as possible. Hence, ReachOut
will provide (gold) open access publishing whenever this is possible [EC13b].
# Executive Summary
This document outlines the management lifecycle for the data that will be
collected, processed or generated in the scope of the MyPal Action, based on
the guidelines provided by the EU. The MyPal Consortium is committed to an “
_as open as possible and as closed as necessary_ ” approach, focusing on
potential personal privacy issues.
This approach depends heavily on national and European legislation (e.g. the
General Data Protection Regulation - GDPR) and a robust ethics background, due
to the sensitivity of the data that MyPal will manage. The three main policy
axes for the data management plan (DMP) in MyPal are summarized as follows:
1. Data and research results produced at each Action task will be considered for publication using open-access scientific journals and/or open data repositories.
2. Protection of sensitive data is a priority according to ethical and legal constraints, therefore, each dataset will be thoroughly reviewed with respect to its potential access/sharing. Preferably, data will be published in an aggregative fashion (e.g. average values) not referring to specific persons. When data referring to individuals are decided to be published in order to facilitate further research, these must be anonymized and thoroughly examined for potential privacy issues.
3. The deliverable provides a clear and detailed approach regarding data management. However, as the project evolves, the DMP will also evolve as part of the respective activities, adapting to potentially new data, defining explicit rules for specific datasets, etc.
Five datasets have been identified in the current stage of the project, which
the DMP accounts for:
* MyPal-ADULT clinical study
* MyPal-CHILD clinical study
* Focus groups to extract user requirements
* Systematic and Mapping Review of the use of PRO systems for cancer patients
* Internal Expert Questionnaires for technical design
The MyPal clinical studies are expected to produce the two most important and
sensitive datasets of the Action.
While the currently presented DMP clearly outlines the MyPal Consortium’s data
management policy, it should not be considered as a “fixed” document. On the
contrary, the MyPal DMP should be considered as a live/evolving document
during the lifespan of the Action as data are generated and processed,
expected to be updated regularly in the future.
# Introduction
MyPal aims to foster palliative care for cancer patients via Patient Reported
Outcome (PRO) systems, while focusing on their adaptation to the personal
needs of the cancer patient and his/her caregiver(s). To this end, MyPal
aspires to empower cancer patients and their caregivers in capturing their
symptoms/conditions more accurately, communicating them in a seamless and
effective way to their healthcare providers and, ultimately, shortening the
time to action through the prompt identification of important deviations in
the patient’s state and Quality of Life (QoL).
The project’s ambition is to exploit advances in digital health to support
patients, family members and healthcare providers in gaining value through a
systematic and comprehensive PRO-based intervention and, therefore, provide a
paradigm shift from passive patient reporting (through conventional PRO
approaches) to active patient engagement (via personalized and integrated care
delivery) across the entire cancer trajectory.
MyPal will demonstrate and validate the proposed intervention in two different
patient groups, i.e. adults suffering from hematologic malignancies and
children with solid tumours or hematologic malignancies, hence targeting
different age groups and cancer types, through carefully designed clinical
studies that will be conducted in diverse healthcare settings across Europe.
MyPal-ADULT will be a randomized controlled trial (RCT) and MyPal-CHILD an
observational study. As MyPal intends to produce and exploit sensitive
personal health data, data management becomes a top priority, also regulated
by legislation and ethics focusing on patient privacy protection (e.g. the
General Data Protection Regulation – GDPR 1 and the ICH-GCP Guidelines EU
Clinical Trial Directive (2001/20/EG) 2 ). In parallel, as MyPal
participates in the Pilot on Open Research Data 2 , the need for a clear
data management approach which enables open access and reuse of research data
becomes imperative.
This deliverable describes the project’s Data Management Plan (DMP) concerning
the data processed, generated and preserved during and after MyPal, as well as
related concerns arising from their usage. The deliverable aims to define a
framework outlining the MyPal policy for data management. In particular, this
deliverable covers topics like information about the data, metadata content
and format, policies for access, sharing and re-use and long-term storage and
data management.
The deliverable is organized into following sections:
* Section 2 refers to the overall data management approach of the project.
* Section 3 provides details regarding the data sharing approach applied in MyPal.
* Section 4 describes the datasets identified so far and potential risks with respect to data management.
* Section 5 concludes the report.
# Rationale
The main guidelines used to define this DMP are summarized in Table 1.
**Table 1: DMP guideline documents.**
<table>
<tr>
<th>
**Document title**
</th>
<th>
**Link**
</th> </tr>
<tr>
<td>
European Research Council (ERC) - Guidelines on Implementation of Open Access
to Scientific Publications and Research Data in projects supported by the
European Research Council under Horizon 2020
</td>
<td>
_http://ec.europa.eu/research/participants/data/ref/h2020/othe r/hi/oa-
pilot/h2020-hi-erc-oa-guide_en.pdf_
</td> </tr>
<tr>
<td>
European Commission, Directorate-General for Research & Innovation -
Guidelines on FAIR Data Management in Horizon 2020
</td>
<td>
_http://ec.europa.eu/research/participants/data/ref/h2020/gran
ts_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
</td> </tr>
<tr>
<td>
European Commission, Directorate-General for Research & Innovation - Data
management
</td>
<td>
_http://ec.europa.eu/research/participants/docs/h2020-fundingguide/cross-
cutting-issues/open-access-data-management/datamanagement_en.htm_
</td> </tr>
<tr>
<td>
European Commission, Horizon 2020 Data Management Plan (DMP) template
</td>
<td>
_http://ec.europa.eu/research/participants/data/ref/h2020/othe
r/gm/reporting/h2020-tpl-oa-data-mgt-plan-annotated_en.pdf_
</td> </tr> </table>
Since MyPal will handle personal and sensitive health data, legal obligations
significantly affect the respective data management processes. To this end,
legal and ethical restrictions of the project as a whole have been identified
in deliverable “D1.1: MyPal Ethics”: the Convention on Human Rights with
regard to the applications of Biomedicine (the Oviedo Convention), the
Declaration of Helsinki, the EU Charter of Fundamental Rights, the ICH-GCP
Guidelines EU Clinical Trial Directive (2001/20/EG) and the ESMO Clinical
Practice Guidelines for Supportive and Palliative Care, which are intended to
provide the user with a set of recommendations for the best standards of
cancer care. The principles of “Privacy by Design”, i.e. a development method
for privacy-friendly systems and services, will be complied with.
General Data Protection Regulation 679/2016 (GDPR) will be the main legal text
followed for data privacy of the participants, as privacy is one of the main
concerns surrounding participation of humans, collection and processing of
data. More specifically, MyPal needs to comply with the following principles
specified in Article 5 of the GDPR:
* Minimization: Only data that is necessary for this research will be collected.
* Lawfulness, fairness and transparency of processing.
* Accuracy of the data, i.e. providing the right to the participant to erase or modify inaccurate data.
* Storage limitation.
* Integrity and confidentiality (security of the data).
* Safety of data by a roles and rights management.
* Data privacy by pseudonymization/anonymization (see the sketch below).
* Accountability of the data controller.
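As an illustration of the pseudonymization principle above, one common technique (shown here as a hedged sketch in Python, not as the method MyPal prescribes) is to replace direct identifiers with a keyed hash: records remain linkable across visits, while re-identification requires a secret key held separately by the data controller. All names and values below are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be generated securely and
# held by the data controller only, never stored with the data.
SECRET_KEY = b"held-by-the-data-controller-only"

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym via a keyed hash (HMAC)."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical record: the direct identifier is replaced before any sharing.
record = {"patient_id": "GR-PAGNI-0042", "hads_score": 11}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # the same patient always maps to the same pseudonym
```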
According to the above guidelines and the respective legal artefacts, the
MyPal DMP has to fulfil the following three main requirements:
2. Must provide open access and facilitate finding of research data as widely as possible applying the principles of H2020 Open Research Data Pilot (ORDP).
3. Must be delivered by the 6th month of the project.
The first two requirements might be considered contradictory, as they lead to
a delicate balance between openness and data protection. Furthermore, the
delivery of the DMP by the 6th month of the project requires the definition
of a DMP early in the project time schedule, before all project aspects are
defined (e.g. clinical studies detailed plan, datasets detailed definition,
technical decisions regarding MyPal ePRO platform etc.).
Therefore, in order to define a clear and practical data management process,
the MyPal Consortium identified the following three policy axes regarding its
DMP:
1. Data and research results produced at each project task will be considered for publication using open-access scientific journals and/or open data repositories.
2. Protection of special categories of data (sensitive) will be a priority according to ethical and legal constraints, and therefore each dataset will be thoroughly reviewed with respect to access policies. For example, data will preferably be published in an aggregative fashion (e.g. average values) not referring to specific persons (see the sketch after this list). Where publication of data referring to individuals is planned in order to facilitate further research, these must be anonymized and thoroughly examined for potential personal privacy issues.
3. The DMP deliverable provides a clear and detailed approach regarding data management. However, as the project evolves, the DMP might also evolve as part of the respective activities, adapting to potentially new datasets, defining explicit rules for specific datasets and assuring the overall data quality management. Therefore, as project implementation progresses, the presented DMP will be iteratively adjusted in order to reflect these changes.
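To make the aggregation principle in axis 2 concrete, the sketch below (Python; field names, sites and values are purely illustrative assumptions) publishes only per-site averages rather than person-level rows. Note that aggregates over very small groups can still be identifying, which is one reason each dataset is reviewed individually:

```python
from statistics import fmean

# Hypothetical person-level records that never leave the clinical site.
records = [
    {"site": "PAGNI", "hads_score": 11},
    {"site": "PAGNI", "hads_score": 7},
    {"site": "FN BRNO", "hads_score": 9},
    {"site": "FN BRNO", "hads_score": 12},
]

def aggregate_by_site(rows):
    """Publishable aggregate: the mean score per clinical site, with no
    person-level rows and no direct identifiers."""
    by_site = {}
    for row in rows:
        by_site.setdefault(row["site"], []).append(row["hads_score"])
    return {site: round(fmean(scores), 1) for site, scores in by_site.items()}

print(aggregate_by_site(records))  # {'PAGNI': 9.0, 'FN BRNO': 10.5}
```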
As project implementation progresses, the presented DMP will be iteratively
adjusted in order to reflect changes (e.g. new datasets) and assure the
overall data quality management. The proposed data management lifecycle can be
summarized in the following steps, applied as part of each project activity
(i.e. WP, task etc.) or across project activities, as suitable (Figure 1).
1. Data management process definition
As a first step, the employed dataset will have to be defined. The need for
the respective dataset and a specific plan for its management must be defined
(e.g. description of the dataset, definition of Data Protection Officer – DPO
according to GDPR, definition of the consent process etc.). Furthermore, data
management risks will have to be identified and elaborated through a suitable
threat analysis/risk management approach. Finally, approval by the respective
national or local bioethics committees will be pursued.
2. Data collection
Collection of data will be performed using suitable methods (e.g. surveys),
applying strict confidentiality rules as well as legal and ethical
restrictions. DPO and the respective partners will make sure that the data
collection process is appropriately applied, while measures will be taken to
apply the collection in as unobtrusive a manner as possible.
3. Data processing
As data processing might involve data transformation processes, special care
will be taken to avoid data tampering, enabling tracing the original raw data.
Furthermore, as data processing might require data exchange among partners,
use of external IT infrastructure etc., confidentiality guarantees among the
engaged institutions as well as technical information security best practices
measures will be applied.
4. Data publication
Prior to data publication, the real need for the publication of personal data
will be evaluated. By principle, MyPal consortium has decided that data should
be published in an aggregated fashion. However, if there is a clear need for
personal data to be published, they will be anonymized. Furthermore, MyPal
published data will also comply with open data standards and be accompanied by
metadata (e.g. dates, provenance information etc.) to comply with FAIR
principles 3 .
5. Data maintenance
The MyPal Consortium will maintain data obtained in the project for 15 years,
including both published and originally collected raw datasets. The general
principle is that data should be maintained in the sites where they have been
collected or created. Furthermore, raw data which are not useful for further
research or validation purposes will be deleted in order to minimize potential
personal privacy risks.
**Figure 1: MyPal data management policy**
In order to minimize information security risks, especially regarding the
clinical studies’ data, the MyPal Consortium is seriously considering the
option of hosting sensitive data centrally in the computational infrastructure
of CERTH, due to its information security capabilities and the fact that it is
ISO 27000-certified regarding Information Security Management 4 . While this
policy decision significantly affects the DMP presented at this stage of the
project, it is currently being reviewed for potential conflicts with national
or European legislation and the local sites’ bioethics committees and
therefore cannot be considered final.
While this deliverable outlines the consortium policy in terms of the DMP,
data will be processed and therefore handled accordingly as part of the
respective work packages (WPs) and tasks. Therefore, DMP could be adjusted
based on the results of other WPs, e.g. the results regarding the intervention
design activities in WP2, the ethics management in WP9 and the risk management
activities as part of WP8. To this end, the DMP will be updated at the end of
each reporting period (in months 18, 36 and 42 of the project) to depict
information on a finer level of granularity as the implementation of the
project is progressing.
# Data sharing
The MyPal data sharing policy requires that each dataset be thoroughly
reviewed before being published/shared. While special actions referring to
the respective dataset might be explicitly defined, the MyPal policy requires
the following for each dataset:
* Definition of data owner(s);
* Definition of incentives concerning the data providers;
* Identification of user groups and the access policies concerning the data;
* Definition of access procedure and embargo periods;
* Compliance with the corresponding legal and ethical framework.
As data sharing and publishing of research results in an open-access fashion
is a priority for MyPal, with regard to each project activity (i.e. WP, task
etc.), the following steps are planned:
1. Select what data should be retained to support validation of the Action finding or datasets that might be considered useful for further research, including research out of the MyPal scope;
2. Deposit the research data into an online open-access research data repository (at least the data that are for public use). Repositories will be investigated for each dataset individually, evaluating the options in order to promote the specific dataset’s visibility and further reuse, using the most appropriate data standards, the most appropriate access control schemes, and satisfying legal requirements. Such options include:
* institutional research data repository, if available;
* external data archive or repository already established in the MyPal research domain (to preserve the data according to recognized standards);
* the European sponsored repository: _http://zenodo.org/_ ;
* other data repositories (searchable here: _http://www.re3data.org_ ), if the aforementioned ones are not eligible.
3. License the data for re-use (Horizon 2020 recommendation is to use CC0 or CC BY);
4. Provide information on the tools needed for validation, i.e. everything that could help a third party in validating the data (e.g. code, an excel macro etc.). Independent of the selected repository, the authors will ensure that the repository:
* Gives the submitted dataset a persistent and unique identifier to ensure that research outputs in disparate repositories can be linked back to particular researchers and grants;
* Provides a landing page for each dataset, with metadata and guiding information;
* Helps track if the data has been used by providing access and download statistics;
* Keeps the data available in the long term;
* Provides guidance on how to cite the data or relevant MyPal work;
5. Check if the above steps are compatible with the main DMP, and act accordingly (including potential updates to the DMP per se).
These steps will be adapted for each dataset identified as part of MyPal
activities and each case will be examined separately, in order to select the
most suitable online repository. The respective dataset owner will have the
main responsibility for the data sharing process.
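For step 2, depositing to Zenodo can be scripted against its REST API. The sketch below follows the API as described at developers.zenodo.org; endpoint paths, response fields and metadata keys should be verified against the current documentation, and all titles, names and file names are placeholders, not MyPal artefacts:

```python
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "..."  # personal access token, kept out of version control

# 1) Create an empty deposition.
dep = requests.post(f"{ZENODO}/deposit/depositions",
                    params={"access_token": TOKEN}, json={}).json()

# 2) Attach minimal metadata; Zenodo mints a persistent DOI on publication.
metadata = {"metadata": {
    "title": "MyPal example dataset (aggregated, anonymised)",
    "upload_type": "dataset",
    "description": "Illustrative aggregated dataset.",
    "creators": [{"name": "Doe, Jane", "affiliation": "CERTH"}],
}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata)

# 3) Upload the data file into the deposition's file bucket.
with open("aggregates.csv", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/aggregates.csv",
                 params={"access_token": TOKEN}, data=fh)

# Publication is deliberately left as a manual step here, so the dataset
# owner can review the record before the DOI becomes permanent.
```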
The clinical sites engaged in data collection and management as a whole are:
* Università Vita-Salute San Raffaele, Milan, Italy (USR).
* University Hospital Brno, Czech Republic (FN BRNO).
* University Hospital of Crete, Greece (PAGNI).
* Karolinska Institute, Stockholm, Sweden (KI).
* Hannover Medical School, Hannover, Germany (MHH).
* Universität des Saarlandes, Saarbrucken, Germany (USAAR).
* Geniko Nosokomeio Thessalonikis G.Papanikolaou (GPH), linked third-party to CERTH, which will also be involved in data collection as a backup site in the case when not enough patients could be recruited by other clinical partners.
A number of tools (advanced PRO systems via mobile/desktop apps and games,
self-management tools, psychoemotional assessment) will be developed for the
collection, processing and storage of personal sensitive data. These tools
will be further supported by electronic and paper-based questionnaires,
gamification methods for children and advanced user interface approaches (e.g.
embodied conversational agents offering voice-based interaction). The
appropriate questionnaires for use in palliative care that will be employed
for the reporting include:
* Purpose-made questionnaires to assess the acceptability of ePRO system by patients/family.
* Purpose-made questionnaires to assess signs and symptoms and promptness of response by healthcare providers, level of care and level of communication.
* Standardised questionnaires such as:
* EuroQoL EQ-5D, a QoL measure (to be used as a cost-effectiveness evaluation tool),
* European Organization for Research and Treatment of Cancer quality of life questionnaire (EORTC QLQ-C30)
* EORTC PAT-SAT C-33 to assess satisfaction with care received in the hospital setting or in the clinic
* Brief Pain Inventory (BPI) to assess pain
* Hospital Anxiety and Depression Scale (HADS) to assess anxiety and depression
* Edmonton Symptom Assessment Scale (ESAS) to assess patients’ symptoms
## Policies for data access and sharing
MyPal partners will deposit the research data needed to validate the results
presented in the submitted scientific publications. This timescale applies for
data underpinning the publication and results presented _._ Research papers
written and published during the funding period will be made available with a
subset of the data necessary to verify the research findings. The consortium
will then make a newer, complete version of data, available within 6 months of
Action completion. This embargo period is requested to allow time for
additional analysis and further publication of research findings to be
performed.
Other data (not underpinning the publication) will be shared during the Action
life following a granular approach to data sharing and releasing subsets of
data at distinct periods rather than waiting until the end of the Action, in
order to obtain feedback from the user community and refine it as necessary.
Access schemes for these data are very important, especially regarding the
data produced by clinical processes, explicitly related with specific
individuals, as they could lead to important privacy issues. Therefore, MyPal
policy provides for partial or controlled publication of research data,
including:
* Authentication systems that limit read access to authorized users only;
* Procedures to monitor and evaluate access requests one by one. A user must complete a request form stating the purpose for which they intend to use the data;
* Adoption of a Data Transfer Agreement that outlines conditions for access and use of the data.
The policy for access and sharing of data will be defined on a per dataset
fashion. In general, anonymised and aggregate data will be made freely
available to everyone, whereas sensitive and confidential data will only be
accessed by specific authorised users.
## Open access to research data
Open access to research data refers to the right to access and re-use digital
research data generated by Actions. EU expects funded researchers to manage
and share research data in a manner that maximizes opportunities for future
research and complies with relevant best practices. Therefore, the MyPal data
sharing policy promotes publishing of dataset which have at least one of the
following characteristics:
* the dataset has clear scope for wider research use;
* the dataset is likely to have long-term value for research or other purposes;
* the dataset has broad utility for reference and use by research communities;
* the dataset represents a significant output of the research project;
* the dataset does not expose information which could be used to cause harm to individuals (e.g. patients), especially focusing on their confidentiality, according to applying legislation and research ethics.
Openly accessible research data generated during the MyPal Action must be
disseminated and available for access free of charge, for use cases like data
mining, further research exploitation, reproduction and validation of already
conducted research and its conclusions etc.
The EC emphasizes the need for publishing data in compliance with the FAIR
principles (Findable – Accessible – Interoperable – Reusable) 5 . The
requirements of “Findable” and “Accessible” practically correspond to having
data openly available on the internet. However, the requirements of
“Interoperable” and “Reusable” imply the need for: (a) metadata providing an
in-depth description of the data to facilitate the clear definition of their
scope and potential value, and (b) use of standards to enable automatic reuse
and processing through IT systems.
The exact definition of these metadata and standards will be part of each
dataset publication process, as it might be related with the specific dataset
use cases. Publishing data as simple spreadsheets might be suitable for simple
data analysis, whereas publishing data using other more complex formats might
be a necessity depending on their scope. For example, CDISC 6 might be
selected as a more appropriate data format in order to publish produced data
in clinical trial registries, or Resource Description Framework – RDF 7
might be selected to facilitate the reuse of published data in Knowledge
Graphs according to the Linked Data paradigm.
Since most of the data in MyPal will be produced from local systems, their
publication metadata will include provenance information, including the
following:
* Time stamp for when the data was generated;
* Data owner (including contact information);
* Data producing authority;
* Dataset versioning information explaining potential changes between versions;
* Licensing information;
* A unique identifier for the dataset (e.g. Digital Object Identifier - DOI).
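A minimal sketch of such a provenance record, covering the fields listed above, could be serialized as JSON; all field names and values here are illustrative assumptions, not a MyPal schema:

```python
import json

provenance = {
    "generated_at": "2020-06-30T12:00:00Z",      # time stamp of generation
    "owner": {"name": "Dataset Owner", "contact": "owner@example.org"},
    "producing_authority": "CERTH",
    "version": "1.1",
    "version_notes": "Corrected units of pain scores relative to v1.0",
    "license": "CC-BY-4.0",
    "identifier": "doi:10.5281/zenodo.0000000",  # placeholder DOI
}
print(json.dumps(provenance, indent=2))
```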
Documenting datasets, data sources and the methodology used for acquiring the
data establishes the basis for the interpretation and appropriate usage of the
data. Each generated/collected and deposited dataset will include
documentation to help users to re-use it. If limitations exist for the
generated data, these restrictions will be clearly described and justified.
Potential issues that could affect how data can be shared and used may include
the need to protect participant confidentiality, comply with informed consent
agreement, protect Intellectual Property Rights, submit patent applications
and protect commercial confidentiality. Possible measures that may be applied
to address these issues include encryption of data during storage and
transfer, anonymisation of personal information, development of Data Transfer
Agreements that specify how data may be used, specification of embargo
periods, and development of procedures and systems to limit access to
authorised users only.
## Open access to scientific publications
Open access to scientific publications refers to free of charge online access
for end users in order to promote further research without barriers. Open
access will be achieved through the following steps:
1. All papers will be deposited at least by the time of publication to a formal repository for scientific papers ( _https://www.openaire.eu/participate/deposit/idrepos_ ). If no other suitable repository is found, the European sponsored repository for scientific papers will be used: _http://zenodo.org/_ .
2. Authors can choose to pay “author processing charges” to ensure open access publishing, but still they have to deposit the paper in a formal repository for scientific papers.
3. Authors will ensure open access via the repository to the bibliographic metadata identifying the deposited publication. More specifically, the following will be included:
* The terms “ _European Union (EU)_ ” and “ _Horizon 2020_ ”;
* “ _MyPal: Fostering Palliative Care of Adults and Children with Cancer through Advanced Patient Reported Outcome Systems_ ”, Grant agreement number 825872;
* Publication date, length of embargo period if applicable; and
* A persistent identifier.
4. Each case will be examined separately in order to decide on self-archiving or paying for open access publishing.
It should be noted that as part of the MyPal publication policy, published
papers should also include the following acknowledgement: “ _The research
leading to these results has received funding from the European Union’s
Horizon 2020 research and innovation programme under grant agreement No
825872-MyPal_ ” and display the EU emblem.
## Archiving and preservation
According to the MyPal data management policies, datasets will be maintained
for 15 years at the clinical sites where they have been collected or created.
Clinical study data are excluded from this rule: due to their high
sensitivity, the MyPal consortium decided to host them centrally at CERTH,
which holds an ISO 27000 Information Security Management certification. To
ensure high-quality long-term management and maintenance of the datasets, the
consortium will implement procedures to protect information over time. These
procedures will permit a broad range of users to easily obtain, share and
properly interpret both active and archived information, and they will ensure
that information is:
* Kept up-to-date in content and format so that it remains easily accessible and usable;
* Protected from catastrophic events (e.g., fire and flood), user error, hardware failure, software failure or corruption, security breaches, and vandalism.
Regarding the second aspect, solutions dealing with disaster risk management
and recovery, as well as with regular backups of data and off-site storage of
backup sets, are always integrated when using the official data repositories
(i.e., _http://zenodo.org/_ ); the partners will ensure the adoption of
similar solutions when choosing an institutional research data repository.
Partners are encouraged to claim costs for resources necessary to manage and
share data; these will be clearly described and justified. Arrangements for
post-action data management and sharing must be made during the life of the
Action. Services for long-term curation and preservation, such as POSF (Pay
Once, Store Forever) storage, will be purchased before the Action ends.
# Datasets
The datasets described here are those identified by the 6th month of the
project and therefore cannot be considered definitive. However, these dataset
descriptions are provided to give a clear pathway on how the overall MyPal
data management policy might be applied throughout the project.
The users engaged in the MyPal project can be categorized as follows:
* Adult patients
* Child patients
* Informal carers (typically patients’ family members)
* Healthcare professionals
At the current stage of the project, the data involved can be categorized as
follows:
* Electronic Health Records
* Personal demographics
* Lifestyle/behavioral data
* Responses to structured questionnaires
* Psychosomatic information
According to GDPR Article 4.1. “personal data is defined as any information
relating to an identified or identifiable natural person (‘data subject’); an
identifiable natural person is one who can be identified, directly or
indirectly, in particular by reference to an identifier such as a name, an
identification number, location data, an online identifier or to one or more
factors specific to the physical, physiological, genetic, mental, economic,
cultural or social identity of that natural person”. In the context of MyPal,
this refers to the following identifiers: names, email addresses, medical
records, home/work addresses, phone numbers and other data linked directly to
individual users. Sensitive data constitute a large part of this collection:
for example, questionnaire-based tools and applications will be used to
monitor patients’ stress, anxiety, depression, and the related negative impact
of the disease on their lives and social relations. At this stage of the
project, the collection of sensor data for objective physical activity
assessment (e.g. step counts) is also planned.
The description of the identified datasets and the respective data management
activities follows the rationale of the Horizon 2020 DMP template 8 while, for
each dataset, the respective document based on the ERC template 10 is provided
as an appendix. Since the most important project activities for the presented
DMP are the two clinical studies, which have not yet been fully designed (i.e.
the MyPal ADULT randomized controlled trial (RCT) and the MyPal CHILD non-
interventional observational study (OS)), the respective dataset descriptions
cannot yet be finalized and are therefore described in a more abstract manner.
## MyPal-ADULT clinical study
### Data Summary
MyPal-ADULT is an RCT planned to involve 300 patients with hematologic
malignancies, starting on month 17 and ending on month 42 of the project. Two
groups of adult patients will be involved: (a) an intervention group that will
use the MyPal ePRO system and (b) a control group that will receive typical
palliative care if desired.
The user categories engaged in this clinical study's data management process
are: (a) adult patients and (b) healthcare professionals. As part of the MyPal
ADULT clinical study, the EORTC QLQ-C30 General Questionnaire and EQ-5D
(Czech, Greek, Italian and Swedish versions) will be used to measure the
improvement of patients' quality of life every month for the first six months
and at the end of the clinical study, providing psychosomatic, behavioural and
lifestyle information for the patient. In addition, patient and healthcare
professional demographic data, along with medical history and necessary EHR
information, will be employed. These data will enable the calculation of
scores for various well-defined scales (e.g. the EORTC Satisfaction with
Cancer Care questionnaire).
More specifically, MyPal-ADULT dataset is expected to include the following
subsets:
* Assessment dataset: all the data collected for the assessment of the study outcomes that are associated with the endpoints of the study – electronically collected assessment scale data (i.e., questionnaire data regarding QoL, satisfaction with care, etc.) and clinical information (e.g., overall survival).
* Intervention dataset: all the data collected as part of the developed eHealth intervention: includes electronically collected assessment scale data concerning cancer-related symptoms (e.g., Brief Pain Inventory, Edmonton Symptom Assessment Scale, etc.), lifestyle data (daily steps, sleep quality) coming from sensor devices, clinical data (e.g., diagnosis), treatment/medication plan, etc.
The data collection process will entail answering the respective
questionnaires using electronic or other means (e.g. interviews), while also
employing the MyPal ePRO platform. The collected data will typically be stored
in spreadsheet files (e.g. CSV or Microsoft Excel files) and will be used by
the consortium researchers to produce statistics assessing the planned
intervention's feasibility and its improvement of overall patient quality of
life. The size of the collected data is estimated at 500 MB.
Data processing will mostly focus on the calculation of the various scores
regarding the improvement of the patient's quality of life and various
statistical measures. No special needs for data processing have been
identified. Therefore, it is assumed that the respective index scores or the
patient group statistics can be calculated at each clinical site with no need
for data exchange. If data need to be exchanged among consortium members (e.g.
for validation or other purposes), they will only include anonymized
information, after written guarantees regarding data security have been
provided.
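A minimal sketch of the kind of anonymisation step that would precede such an exchange is shown below; the file and column names are hypothetical, and a real anonymisation procedure would also have to address quasi-identifiers (e.g. rare diagnoses or exact dates):

```python
# Minimal sketch: stripping direct identifiers and pseudonymising the
# patient identifier before sharing questionnaire data.
# Requires: pip install pandas
import hashlib
import pandas as pd

df = pd.read_csv("adult_study_responses.csv")  # hypothetical file

# Replace the patient identifier with a salted one-way hash; the salt
# stays at the originating site so the mapping cannot be reversed by
# the receiving partner.
SALT = "site-specific-secret"
df["patient_id"] = df["patient_id"].astype(str).apply(
    lambda pid: hashlib.sha256((SALT + pid).encode()).hexdigest()[:16]
)

# Drop direct identifiers entirely.
df = df.drop(columns=["name", "email", "phone", "address"], errors="ignore")

df.to_csv("adult_study_responses_anon.csv", index=False)
```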
### FAIR data
_**Making data findable, including provisions for metadata** _
MyPal-ADULT data will be available online after anonymization, accompanied by
suitable metadata to facilitate discovery via search engines. Furthermore,
suitable openly accessible data repositories will be selected to store the
produced data using persistent and unique identifiers (e.g. DOIs), in order to
enable unambiguous identification of the dataset and referencing in future
research, either by the MyPal consortium or by other researchers.
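As an indicative sketch of how a DOI-minting repository could be used programmatically, the snippet below creates a deposition via the Zenodo REST API; the access token and all metadata values are placeholders, and the endpoint behaviour should be checked against the current Zenodo documentation:

```python
# Minimal sketch: creating a Zenodo deposition so that an uploaded
# dataset receives a persistent DOI. Requires: pip install requests
import requests

ZENODO_TOKEN = "..."  # placeholder personal access token

response = requests.post(
    "https://zenodo.org/api/deposit/depositions",
    params={"access_token": ZENODO_TOKEN},
    json={
        "metadata": {
            "title": "MyPal-ADULT anonymized study data",  # hypothetical
            "upload_type": "dataset",
            "description": "Anonymized ePRO questionnaire data.",
            "creators": [{"name": "MyPal consortium"}],
        }
    },
)
response.raise_for_status()
print("Created deposition:", response.json()["id"])
```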
_**Making data openly accessible** _
While scientific or data publications cannot be fully defined at the current
stage of the project, the MyPal consortium is committed to publishing
anonymized data to the extent possible, using openly accessible data
repositories. Similarly, open access scientific journals will be employed for
scientific publications. As an indicative publication plan, the following
three data or research-result publications can be outlined:
1. The clinical study’s _research protocol_ will be registered in the EU Clinical Trials Register (EU-CTR) 9 and could also be published in an open access scientific journal such as “JMIR Research Protocols” 10, “BMJ Open” 11, “BMC Cancer” 12 or “International Journal of Clinical Trials” 13, always in accordance with the International Committee of Medical Journal Editors (ICMJE) recommendations 14.
2. The MyPal-ADULT produced _datasets_ could be published (after thorough anonymization) on clinicaltrials.gov 17. The “Janus clinical trial data repository” 15 will also be evaluated as a potential repository in order to maximize future data reuse. However, since Janus currently focuses on data “_submitted to FDA as part of regulatory submissions_”, it might not be relevant to the MyPal-ADULT study. Other repositories will also be evaluated (indicative lists are published by UCL 16, Cancer Research UK 17, and Nature Scientific Data 21). In any case, prior to data publication, the respective data repository will be evaluated regarding its adherence to the FAIR principles, ensuring that MyPal remains compatible with the goals of the ORDP.
3. The overall evaluation of the MyPal-ADULT study will be published in an open access scientific journal, possibly referring to the respective datasets. “JMIR” 18 and “JMIR mHealth and uHealth” 23 have been identified as potentially suitable journals.
_**Making data interoperable** _
Data will be published in open, non-proprietary formats (e.g. CSV files) to
facilitate further reuse and data validation without the need for vendor-
specific tools and software. Furthermore, widely accepted terminologies and
vocabularies will be used to the extent possible (e.g. ICD for diagnoses, ATC
for drugs and MedDRA for adverse drug events) to enable unambiguous semantic
interpretation of the data.
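An illustrative coded export is sketched below; the record itself is hypothetical, while the ICD-10 and ATC codes shown are real codes used purely as examples:

```python
# Minimal sketch: exporting a coded record to CSV using standard
# terminologies, so published data can be interpreted unambiguously
# without vendor-specific tools.
import csv

rows = [
    {
        "patient_pseudonym": "a3f9c2d1",   # hypothetical pseudonym
        "diagnosis_icd10": "C91.0",        # acute lymphoblastic leukaemia
        "medication_atc": "L01BA01",       # methotrexate
        "qlq_c30_global_score": 58.3,      # hypothetical questionnaire score
    }
]

with open("coded_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```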
_**Increase data re-use (through clarifying licences)** _
The full data underlying scientific publications will be available at the
moment of publication. The full dataset will be available within 6 months of
Action completion, to allow time for additional analysis and potential further
result publications while protecting the consortium's Intellectual Property
Rights (IPR). Data will be reusable from the moment they are published. Data
will be licensed under appropriate open access licenses based on Creative
Commons. To assure quality, the MyPal consortium will apply proper internal
review processes. Furthermore, peer-reviewed scientific publications will be
pursued to assure high-quality interpretation of the produced data.
### Allocation of resources
A DPO will be designated for each clinical site participating in the study, in
compliance with the GDPR. As already outlined in the overall MyPal policy,
data (both processed and raw collected data) will be maintained for 15 years,
either locally (for low-risk data) or centrally at CERTH, which is certified
for its information management processes (for high-risk data). Costs for
long-term storage of results are eligible in the context of MyPal, and the
respective partners will have the responsibility of cost management.
### Data security
Data management risks/threats can be identified based on a widely used
information security threat analysis model, the STRIDE model 19, as follows:
* _Spoofing_ could refer to intercepting information for any reason, violating patient’s privacy or the MyPal consortium’s work in terms of confidentiality in order to cause harm (e.g. personal harm) or gain benefits (e.g. intellectual property rights’ issues).
* _Tampering_ could refer to altering collected data. Tampering includes modifications conducted either by mistake or on purpose, in order to cause harm to the patient or MyPal consortium.
* _Repudiation_ refers to the risk of falsely denying the validity of an action, denouncing responsibility for it. In MyPal context, an example could refer to a clinician denying that he/she is responsible for a clinical act in order to avoid legal or other consequences.
* _Information disclosure_ could refer to revealing information either to harm the patient or MyPal consortium or to provide other kind of benefits to the malicious user (e.g. financial benefit via disclosing information to insurance companies).
* _Denial of Service_ could refer to stealing data in order to stop the service of MyPal (e.g. the MyPal ePRO platform) either to cause harm to the patient or the MyPal consortium, or in order to provide other kind of benefits for the malicious user (e.g. for competition reasons).
* _Elevation of privilege_ could refer to providing access to a non-legitimate user in order to exploit collected data.
To mitigate such information management risks, the produced data will be
hosted by CERTH (instead of the respective clinical sites), which holds an ISO
27000 certification for its information security practices.
### Ethical aspects
The overall design of the study will be approved by all clinical sites' local
research ethics committees, as well as by any authorities defined by local or
European laws. Regarding ethics and legal issues, the overall process will be
governed by widely accepted best practices, enumerated as follows:
* Ethical principles of the Declaration of Helsinki
* The General Data Protection Regulation (GDPR)
* ICH-GCP Guidelines
* EU Clinical Trial Directive (2001/20/EC)
* ICH E9 statistical principles for clinical trials
* ESMO Clinical Practice Guidelines for Supportive and Palliative Care
* Guidelines of the National Consensus Project for Quality Palliative Care
* NCCN Guidelines Insights: Palliative Care, Version 2.2017
The consent process will provide documents to be signed after information
sheets regarding MyPal have been provided to potential participants.
Furthermore, a questions-and-answers session will be conducted enabling the
clarification of patients’ concerns and providing explanations to them. As
also explained in the clinical study definition, a copy of the informed
consent form will be given to the subject and the original will be placed in
the subject's medical record. Draft versions of the information sheets and the
respective consent documents have already been defined in the context of the
deliverable “D1.1: MyPal Ethics” and are also provided in this DMP for easy
reference in Appendix A and Appendix B.
Appendix C provides the data management plan for the MyPal ADULT study dataset
following the respective ERC template.
## MyPal-CHILD clinical study
### Data Summary
MyPal-CHILD is planned as an observational, non-interventional clinical study
of the MyPal ePRO-based early palliative care system in 100 paediatric
oncology patients (6-18 years of age). The clinical study will start at month
17 and end at month 42 of the project, focusing on two groups of patients:
paediatric patients with Acute Lymphoblastic Leukaemia (ALL) and paediatric
patients with solid cancers. Even though MyPal-CHILD is a less intrusive
observational study compared to MyPal-ADULT, which is an RCT, similar
principles apply.
The user categories engaged in this clinical study's data management process
are: (a) child patients, (b) healthcare professionals, and (c) informal
carers. As part of the MyPal-CHILD clinical study, the Impact on Family Scale
and the EORTC PATSAT – C33 Parent version questionnaire will be used to assess
the informal carers’ burden, priorities and satisfaction regarding healthcare
services. In addition, patient and healthcare professional demographic data,
along with medical history and necessary EHR information will be employed.
More specifically, the MyPal-CHILD dataset is expected to include the
following subsets:
* Assessment dataset: all the data collected for the assessment of the study outcomes that are associated with the endpoints of the study – electronically collected assessment scale data (i.e., questionnaire data regarding QoL, satisfaction with care, etc.) and clinical information (e.g., overall survival).
* Intervention dataset: all the data collected as part of the developed eHealth intervention: includes electronically collected assessment scale data concerning cancer-related symptoms (e.g., Memorial Symptom Assessment Scale, etc.), information obtained via a serious game planned to be developed, lifestyle data (daily steps, sleep quality) coming from sensor devices, clinical data (e.g., diagnosis), treatment/medication plan, etc.
The data collection process will entail answering the respective
questionnaires using electronic or other means (e.g. written answers or
interviews), while also employing the MyPal ePRO platform. The collected data
will typically be stored in spreadsheet files (e.g. CSV or Microsoft Excel
files) and will be used by the consortium researchers to produce statistics
assessing the planned intervention's feasibility and its improvement of
overall patient quality of life. The size of the collected data is estimated
at 100 MB.
Data processing will mostly focus on the calculation of the various scores
regarding the improvement of the patient's quality of life, the burden on the
informal carers, and various statistical measures. No special needs for data
processing have been identified at this stage of the project; therefore, it is
assumed that the respective index scores or the patient group statistics can
be calculated at each clinical site with no need for data exchange. If data
need to be exchanged among consortium members (e.g. for validation or other
purposes), they will only include anonymized information, after written
guarantees regarding data security have been provided.
### FAIR data
_**Making data findable, including provisions for metadata** _
MyPal-CHILD data will be available online after anonymization, accompanied by
suitable metadata to facilitate discovery via search engines. Furthermore,
suitable openly accessible data repositories will be selected to store the
produced data using persistent and unique identifiers (e.g. DOIs), in order to
enable unambiguous identification of the dataset and referencing in future
research, either by the MyPal consortium or by other researchers.
_**Making data openly accessible** _
While research result or data publications cannot be defined in detail at the
current stage of the project, the MyPal consortium is committed to publishing
anonymized data to the extent possible, using openly accessible data
repositories. To this end, open access scientific journals will be employed
for scientific publications. As an indicative publication plan, the following
three dataset or research-result publications can be outlined:
1. The clinical study’s _research protocol_ could be published in an open access scientific journal like “JMIR Research Protocols”, “Journal of Paediatric Haematology / Oncology” 20 , “Paediatric Haematology and Oncology” 21 and “Paediatric Haematology Oncology Journal” 22 .
2. Since no repositories directly related with the scenario of MyPal CHILD study could be identified (at least in this stage of the project), the produced _datasets_ could be published (after thorough anonymization) in local institutional or other open access repositories (e.g. zenodo). It should be noted, that in any case, prior to data publication, the respective data repository will be evaluated regarding its adherence to FAIR principles, ensuring that MyPal is compatible with the goals of ORDP.
3. The overall evaluation of the MyPal-CHILD study will be published in an open access scientific journal, possibly referring to the respective datasets. “JMIR”, “Journal of Paediatric Haematology / Oncology”, “Paediatric Haematology and Oncology” and “Paediatric Haematology Oncology Journal” have been identified as potentially suitable journals.
_**Making data interoperable** _
Data will be published in open, non-proprietary formats (e.g. CSV files) to
facilitate further reuse and data validation without the need for vendor-
specific tools and software. Furthermore, widely accepted terminologies and
vocabularies will be used to the extent possible (e.g. ICD for diagnoses, ATC
for drugs and MedDRA for adverse drug events) to enable unambiguous semantic
interpretation of the data.
_**Increase data re-use (through clarifying licences)** _
The full data underlying scientific publications will be available at the
moment of publication. The full dataset will be available within 6 months of
Action completion, to allow time for additional analysis and potential further
result publications while protecting the consortium's IPR. Data will be
licensed under appropriate open access licenses based on Creative Commons. To
assure quality, the MyPal consortium will apply proper internal review
processes. Furthermore, peer-reviewed scientific publications will be pursued
to assure high-quality interpretation of the produced data.
### Allocation of resources
A DPO will be designated for each clinical site participating in the study, in
compliance with the GDPR. As already outlined in the overall MyPal policy,
data (both processed and raw collected data) will be maintained for 15 years,
either locally (for low-risk data) or centrally at CERTH, which is certified
for its information management processes (for high-risk data). Costs for
long-term storage of results are eligible in the context of MyPal, and the
respective partners will have the responsibility of cost management.
### Data security
Similarly to the MyPal-ADULT study, data management risks/threats can be
summarized using the STRIDE model as follows:
* _Spoofing_ could refer to intercepting information for any reason, violating patient’s privacy or the MyPal consortium’s work in terms of confidentiality.
* _Tampering_ could refer to altering collected data either by mistake or on purpose, in order to cause harm to the patient or MyPal consortium.
* _Repudiation_ refers to the risk of falsely denying the validity of an action, denouncing responsibility for it. In MyPal context, an example could refer to a clinician denying that he/she is responsible for a clinical act in order to avoid legal or other consequences.
* _Information disclosure_ could refer to revealing information either to harm the patient or MyPal consortium or to provide other kind of benefits to the malicious user (e.g. financial benefit via disclosing information to insurance companies).
* _Denial of Service_ could refer to stealing data in order to stop the service of MyPal (e.g. the MyPal ePRO platform) either to cause harm to the patient or the MyPal consortium, or in order to provide other kind of benefits for the malicious user (e.g. for competition reasons).
* _Elevation of privilege_ could refer to providing access to a non-legitimate user in order to exploit collected data.
To mitigate such information management risks, and similarly to the MyPal-
ADULT study, the produced data will be hosted by CERTH (instead of the
respective clinical sites), which holds an ISO 27000 certification for its
information security practices.
### Ethical aspects
The study design will be approved by all clinical sites' local research ethics
committees, as well as by any authorities defined by local or European laws.
As for the MyPal-ADULT study, regarding ethics and legal issues the overall
process will be governed by widely accepted best practices, enumerated as
follows:
* Ethical principles of the Declaration of Helsinki
* The General Data Protection Regulation (GDPR)
* ICH-GCP Guidelines
* EU Clinical Trial Directive (2001/20/EC)
* ICH E9 statistical principles for clinical trials
* ESMO Clinical Practice Guidelines for Supportive and Palliative Care
* Guidelines of the National Consensus Project for Quality Palliative Care
* NCCN Guidelines Insights: Palliative Care, Version 2.2017
The consent process will provide consent documents to be signed after
information sheets regarding MyPal have been provided to potential
participants. Furthermore, a questions-and-answers session will be conducted,
enabling the clarification of patients' concerns and providing explanations to
them. As also explained in the clinical study definition, a copy of the
informed consent form will be given to the subject and the original will be
placed in the subject's medical record. Draft versions of the information
sheets and the respective consent documents have already been defined in the
context of the deliverable “D1.1: MyPal Ethics” and are also provided in this
DMP for easy reference in Appendix A and Appendix B.
Appendix D provides the data management plan for the MyPal-CHILD study dataset
following the respective ERC template.
## Focus groups to extract user requirements
### Data Summary
In the context of “Task 2.1: MyPal palliative care context and user needs”, a
number of focus group meetings have been conducted at all clinical sites of
the project, involving various stakeholders (e.g. clinicians, patients,
informal carers, etc.). The purpose of these focus groups was to enable a live
discussion, identify potential user requirements, and obtain end-user feedback
regarding the overall idea of MyPal. To this end, all focus groups have been
recorded and analyzed by local clinical partners to extract meaningful
information regarding user requirements. Furthermore, semi-structured
questionnaires in paper form have been used to collect user feedback.
The first level of data processing (i.e. the extraction of useful information
from the sound recordings and the analysis of the questionnaires) has been
conducted locally at each clinical site, due to both legal and practical
restrictions, as the centralized analysis of locally collected data would
require their transcription/translation. The local partners translated the
extracted information into English, and all the information from the clinical
sites has been collected using spreadsheet files created by CERTH to gather
anonymized and aggregated data, which are further analyzed centrally.
Locally collected and stored data include sound recordings and spreadsheet
files (typically in csv format). Centrally analyzed data are also stored in
spreadsheet files and they are used in order to produce graphical
representations of the collected results to facilitate the decisions regarding
the technical design of the system.
The dataset size is estimated at about 5 MB of data in simple CSV format and
about 1 GB of sound recording files.
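A minimal sketch of the kind of graphical summary produced from the aggregated spreadsheets is given below; the file and column names are hypothetical:

```python
# Minimal sketch: turning aggregated focus-group counts into a bar
# chart to support design decisions.
# Requires: pip install pandas matplotlib
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("focus_group_aggregates.csv")  # hypothetical file
# Expected (hypothetical) columns: "requirement", "mentions"

df = df.sort_values("mentions")
plt.barh(df["requirement"], df["mentions"])
plt.xlabel("Number of mentions across focus groups")
plt.title("Most frequently raised user requirements")
plt.tight_layout()
plt.savefig("requirements_frequency.png")
```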
### FAIR data
_**Making data findable, including provisions for metadata** _
Focus group raw data (i.e. sound recordings and initial questionnaire
responses) will not be made available online, in order to protect user
privacy. This decision was made to reduce stakeholders' reluctance to
participate in focus groups and to facilitate the free expression of opinions.
However, the results of the respective analysis will be made available with
suitable metadata to facilitate discovery via search engines. Furthermore,
suitable openly accessible data repositories will be selected to store the
produced data using persistent and unique identifiers (e.g. DOIs), in order to
enable unambiguous identification of the dataset and referencing in future
research, either by the MyPal consortium or by other researchers.
_**Making data openly accessible** _
While scientific or data publications cannot be explicitly defined at the
current stage of the project, the MyPal consortium is committed to publishing
anonymized data to the extent possible, using openly accessible data
repositories. Since no purpose-specific data repositories have been
identified, Zenodo or institutional repositories will probably be used to make
the data openly accessible. Similarly, open access scientific journals will be
employed for scientific publications. The analysis of the focus group data is
planned to be published as part of the overall project's “User requirements”
analysis process, and “BMC Medical Informatics and Decision Making” 23 has
already been identified as a potential publication venue.
_**Making data interoperable** _
Data will be published in open, non-proprietary formats (e.g. CSV files) to
facilitate further reuse and data validation without the need for vendor-
specific tools and software. No widely accepted terminologies/vocabularies
could be identified for this purpose.
_**Increase data re-use (through clarifying licences)** _
The full data underlying scientific publications will be available at the
moment of publication. The full dataset will be available within 6 months of
Action completion, to allow time for additional analysis and potential further
result publications while protecting the consortium's IPR. Data will be
licensed under appropriate open access licenses based on Creative Commons. To
assure quality, the MyPal consortium will apply proper internal review
processes. Furthermore, peer-reviewed scientific publications will be pursued
to assure high-quality interpretation of the produced data.
### Allocation of resources
A DPO will be designated for each clinical site participating in the study, in
compliance with the GDPR. As already outlined in the overall MyPal policy,
data (both processed and raw collected data) will be maintained for 15 years.
Costs for long-term storage of results are eligible in the context of MyPal,
and the respective partners will have the responsibility of cost management.
Regarding publication costs, the lead partner of each publication will cover
the publication costs, in cooperation with other partners if needed.
### Data security
Focus group data security issues mostly concern the data stored locally at the
clinical partners' sites. While the data produced by the analysis process are
considered low-risk (as they cannot lead to personal privacy risks), the
original raw data (questionnaire responses and sound recordings) might entail
security risks, e.g. the identification of patients and implicit medical
history information. Therefore, the local clinical sites and the respective
DPOs are responsible for maintaining these data. In cases where a local
clinical site is not able to guarantee the information security of these data,
it will be able to use the CERTH infrastructure (certified for its Information
Security Management approach with ISO 27000), always subject to the local
bioethics committee's approval and compatibility with national and European
laws.
### Ethical aspects
The focus group analysis was designed according to current research ethics
guidelines, as articulated in the European Commission's ‘Ethics for
Researchers’, issued for the 7th Framework Programme (FP7). The only ethics-
driven data management requirement is the need to keep the originally
collected data (i.e. sound recordings and questionnaire responses)
confidential, to protect participants' privacy.
Appendix E provides the data management plan for the focus groups dataset
following the respective ERC template.
## Systematic and Mapping Review of use of PRO systems for cancer patients
### Data Summary
In the context of “T1.2: PRO systems in palliative cancer care”, a Systematic
and Mapping Review is being conducted regarding the applications of PRO
systems for cancer patients. During this process, a large number of related
scientific publications have been reviewed to identify and quantify the
characteristics and trends of PRO approaches in the context of cancer
treatment.

To this end, CERTH, FRAU and FORTH analyze the eligible publications and map
them to a well-defined set of criteria. These data are not sensitive by any
means and are maintained in spreadsheet files in CSV format, with a size
estimated at about 5 MB.
### FAIR data
_**Making data findable, including provisions for metadata** _
The Systematic and Mapping Review data will be available online, accompanied
by suitable metadata to facilitate discovery via search engines. Furthermore,
suitable openly accessible data repositories will be selected to store the
produced data using persistent and unique identifiers (e.g. DOIs), in order to
enable unambiguous identification of the dataset and referencing in future
research, either by the MyPal consortium or by other researchers.
_**Making data openly accessible** _
While scientific or data publications cannot be explicitly defined at the
current stage of the project, the MyPal consortium is committed to publishing
anonymized data to the extent possible, using openly accessible data
repositories. The “Systematic Review Data Repository (SRDR)” 24, provided by
the U.S. Department of Health & Human Services, will be considered for
publishing the collected data. Furthermore, the Systematic and Mapping Review
protocol will be registered in PROSPERO 25. As with the other datasets, open
access scientific journals will be employed for scientific publications.
“Clinical Cancer Informatics” 26 has been identified as a potential target
journal for publication. Finally, the Cochrane library of systematic reviews
27 will also be considered for publication of the respective Systematic and
Mapping Review protocol and results.
_**Making data interoperable** _
Data will be published in open, non-proprietary formats (e.g. CSV files) to
facilitate further reuse and data validation without the need for vendor-
specific tools and software. No widely accepted terminologies/vocabularies
could be identified for this purpose, apart perhaps from MeSH keywords 28,
which are widely used to organize medical scientific publications.
_**Increase data re-use (through clarifying licences)** _
The full data underlying scientific publications will be available at the
moment of publication. The full dataset will be available within 6 months of
Action completion, to allow time for additional analysis and potential further
result publications while protecting the consortium's IPR. Data will be
licensed under appropriate open access licenses based on Creative Commons. To
assure quality, the MyPal consortium will apply proper internal review
processes. Furthermore, peer-reviewed scientific publications will be pursued
to assure high-quality interpretation of the produced data.
### Allocation of resources
Since no personal or sensitive data are involved in this dataset, there is no
need to designate a specific DPO. As already outlined in the overall MyPal
policy, data (both processed and raw collected data) will be maintained for 15
years.

Costs for long-term storage of results are eligible in the context of MyPal,
and the respective partners will have the responsibility of cost management.
Regarding publication costs, the lead partner of each publication will cover
the publication costs, in cooperation with other partners if needed.
### Data security
No personal or sensitive data are involved in this dataset; therefore, no
information security risks arise apart from those concerning the protection of
the MyPal consortium's Intellectual Property Rights (IPR). To this end, CERTH,
which leads this activity, will maintain the data for 15 years, applying its
certified information security management practices; no additional specific
measures are required.
### Ethical aspects
The only ethical aspect of the Systematic and Mapping Review concerns the use
of widely accepted scientific methodologies to ensure the quality of the
produced results and research integrity. To this end, the PRISMA methodology
will be applied 29, along with widely accepted scientific publication best
practices (e.g. the COPE guidelines 30).
Appendix F provides the data management plan for the “Systematic and Mapping
Review” dataset following the respective ERC template.
## Internal Expert Questionnaires for technical design
### Data Summary
In the context of “Task 2.1: MyPal palliative care context and user needs”, a
series of questionnaires has been created and circulated among MyPal
consortium experts in order to facilitate the technical requirements
engineering process and design of the system, the prioritization of ICT system
features, and the answering of open blocking questions regarding the technical
design and development of the ePRO platform.
Data processing has been conducted by CERTH using data collected via Google
Forms, typically in spreadsheet files. These data have been used in the
project's requirements engineering process and to produce graphical
representations of the collected results, facilitating decisions regarding the
technical design of the system. The dataset size is estimated at about 5 MB.
### FAIR data
_**Making data findable, including provisions for metadata** _
The “Internal Expert Questionnaires” raw data will be available online after
anonymization, accompanied by suitable metadata to facilitate discovery via
search engines. Furthermore, suitable openly accessible data repositories will
be selected to store the produced data using persistent and unique identifiers
(e.g. DOIs), in order to enable unambiguous identification of the dataset and
referencing in future research, either by the MyPal consortium or by other
researchers.
_**Making data openly accessible** _
While scientific or data publications cannot be explicitly defined at the
current stage of the project, the MyPal consortium is committed to publishing
anonymized data to the extent possible, using openly accessible data
repositories. Since no purpose-specific data repositories have been
identified, Zenodo or institutional repositories will probably be used to make
the data openly accessible. Similarly, open access scientific journals will be
employed for scientific publications. The analysis of the internal expert
questionnaires is planned to be published as part of the overall project's
“User requirements” analysis process, and “BMC Medical Informatics and
Decision Making” has already been identified as a potential publication venue.
_**Making data interoperable** _
Data will be published in open, non-proprietary formats (e.g. CSV files) to
facilitate further reuse and data validation without the need for vendor-
specific tools and software. No widely accepted terminologies/vocabularies
could be identified for this purpose.
_**Increase data re-use (through clarifying licences)** _
The full data underlying scientific publications will be available at the
moment of publication. The full dataset will be available within 6 months of
Action completion, to allow time for additional analysis and potential further
result publications while protecting the consortium's IPR. Data will be
licensed under appropriate open access licenses based on Creative Commons. To
assure quality, the MyPal consortium will apply proper internal review
processes. Furthermore, peer-reviewed scientific publications will be pursued
to assure high-quality interpretation of the produced data.
### Allocation of resources
Since no personal or sensitive data are involved in this dataset, there is no
need to designate a specific DPO. As already outlined in the overall MyPal
policy, data (both processed and raw collected data) will be maintained for 15
years. Costs for long-term storage of results are eligible in the context of
MyPal, and the respective partners will have the responsibility of cost
management. Regarding publication costs, the lead partner of each publication
will cover the publication costs, in cooperation with other partners if
needed.
### Data security
No personal or sensitive data are involved in this dataset; therefore, no
information security risks arise apart from those concerning the protection of
the MyPal consortium's Intellectual Property Rights (IPR). To this end, CERTH,
which leads this activity, will maintain the data for 15 years, applying its
certified information security management practices; no additional specific
measures are required.
### Ethical aspects
The only ethical aspect of the “Internal Expert Questionnaires” concerns the
use of widely accepted scientific methodologies to ensure the quality of the
produced results and research integrity. To this end, the COPE guidelines will
be applied, along with widely accepted research ethics best practices.
Appendix G provides the data management plan for the “Internal Expert
Questionnaires” dataset following the respective ERC template.
# Conclusions
The purpose of this deliverable is to provide a clear DMP to support the
management lifecycle for all the data that will be collected, processed or
generated by the MyPal Action. This DMP is produced based on EU-provided
guidelines and outlines the MyPal policy regarding research results and data
sharing.
In summary, the MyPal Consortium is committed to an “ _as open as possible and
as closed as necessary_ ” approach, with particular attention to potential
personal privacy issues. This approach depends heavily on national and
European legislation (e.g. the GDPR) and on a robust ethics background, due to
the sensitivity of the data to be managed.
Five datasets have been identified at the current stage of the project, for
which a first data management plan, including potential publications, has been
drafted. The two clinical studies, MyPal-ADULT and MyPal-CHILD, are expected
to produce the two most important and sensitive datasets of the project. Three
more datasets identified as part of ongoing activities are also presented.
While the currently presented DMP clearly outlines the MyPal consortium's data
management policy, it should not be considered a fixed document. On the
contrary, the MyPal DMP should be considered a living document that will
evolve during the lifespan of the Action as data are generated and processed,
and it is expected to be updated regularly.
Regarding the future updates of the presented DMP, the following milestones
are identified:
* The MyPal ADULT and MyPal CHILD study protocols are expected to be finalized in month 10 of the project (October 2019); they will include details about the data to be collected and their management, and are therefore expected to have a significant impact on the DMP
* The end of the 1st reporting period, in month 18, is an update milestone for the DMP
* The end of the 2nd reporting period, in month 36, is also defined as an update milestone for the DMP
* The end of the project, in month 42, is the final update milestone for the DMP
# Abbreviations
ALL: Acute Lymphoblastic Leukaemia
CERTH: Centre for Research and Technology Hellas
COPE: Committee on Publication Ethics
CTR: Clinical Trials Register
DMP: Data Management Plan
DOI: Digital Object Identifier
DPO: Data Protection Officer
ERC: European Research Council
EU: European Union
FACT: Functional Assessment of Cancer Therapy
GB: Gigabyte
GDPR: General Data Protection Regulation
ICMJE: International Committee of Medical Journal Editors
ICT: Information and Communications Technology
IPR: Intellectual Property Rights
MB: Megabyte
ORDP: Open Research Data Pilot
OS: Observational Study
PRO: Patient Reported Outcomes
QoL: Quality of Life
RCT: Randomized Controlled Trial
RDF: Resource Description Framework
SRDR: Systematic Review Data Repository
WP: Work Package
# Appendix A. Information Sheets
<table>
<tr>
<th>
</th>
<th>
**APPENDIX A – INFORMATION SHEETS**
</th> </tr>
<tr>
<td>
ANNEX A.1
</td>
<td>
ADULT PATIENTS Information Sheet
</td> </tr>
<tr>
<td>
ANNEX A.2
</td>
<td>
PARENTS Information Sheet
</td> </tr>
<tr>
<td>
ANNEX A.3
</td>
<td>
ADOLESCENTS 16-18 Information Sheet
</td> </tr>
<tr>
<td>
ANNEX A.4
</td>
<td>
HEALTHCARE PROFESSIONALS Information Sheet
</td> </tr>
<tr>
<td>
ANNEX A.5
</td>
<td>
FAMILY MEMBERS (HEALTHY ADULTS) Information Sheets
</td> </tr>
<tr>
<td>
ANNEX A.6
</td>
<td>
CHILDREN PATIENTS 10-15 Information Sheet
</td> </tr>
<tr>
<td>
ANNEX A.7
</td>
<td>
CHILDREN PATIENTS 6-9 Information Sheet
</td> </tr> </table>
# EXECUTIVE SUMMARY
This document focuses on a specific aspect of the way that the Tech4Win
consortium will operate. It presents information for partners in the
consortium, as well as for external parties, about the processes that the
Tech4Win project shall follow in order to manage the data associated with and
generated by its work. This topic requires a level of detail such that it is
the subject of its own deliverable rather than being part of D8.1, the
deliverable that presents all of the other day-to-day management procedures
for the running of the Tech4Win project. These processes are defined in line
with the guidelines of the European Commission, according to the participation
of Tech4Win in the Open Research Data Pilot. The capability and performance of
the Tech4Win project with regard to this area will be the subject of a short
review within the context of WP7 at each consortium meeting.
# INTRODUCTION
This document presents the initial version of the Data Management Plan (DMP)
for the Tech4Win project. This information has been prepared partially
following the guidance of the UK Digital Curation Centre
(http://www.dcc.ac.uk), an internationally-recognized center of expertise in
digital curation with a focus on building capability and skills for research
data management. The DCC provides expert advice and practical help to research
organisations wanting to store, manage, protect and share digital research
data. This DMP for Tech4Win details:
* the public datasets that the project will generate,
* whether and how they will be exploited or made accessible for verification and reuse,
* how they will be curated and preserved.
Academic papers have been made available as open access for some years
(depending on the host service), while the provision of managed public
datasets is relatively new, at least in the field of materials research. All
commonplace open access mechanisms for academic papers will be followed by the
Tech4Win project as a matter of course. The DMP therefore concerns itself with
the processes by which the project will manage the data that it generates.
Clearly, this includes:
* metadata generation,
* data preservation,
* data storage beyond the end of the project.
In particular, the consortium realizes that its responsibilities under the DMP
are that:
* the Data Management Plan must be defined in the first 6 months of the project,
* there must be interim and final reports on data,
* data identified in DMP must be shared in an online repository,
* appropriate support will be provided to those involved in the pilot.
At a deeper level of detail, the aim of the DMP is that its processes will
lead to:
* better understanding of the data produced as output from the project,
* clarity on how the data is actually used within the project and outside of it,
* continuity in the work of the consortium in the event of staff leaving or joining the project during its lifecycle or, equivalently, staff changing roles within the project during its lifecycle. This includes such areas as:
* avoiding duplication of effort i.e. re-collecting or re-working data,
* enabling validation of results,
* contributing to collaboration through data sharing,
* increasing visibility of output and thereby leading to greater impact. In particular enabling other researchers to cite the datasets generated by the project.
The potentially strong commercial nature of the Tech4Win project means that
the vast majority of its datasets will remain private, with access restricted
to only those partners who are using them, an aspect that resonates with the
terms of the Tech4Win consortium agreement between partners.
# DATA SHARING - DISSEMINATION OF RESULTS AND ASSOCIATED DATASET
The Tech4Win consortium fully embraces the H2020 requirement for Open Access
publishing, following the guidelines presented by the European Commission. The
project will ensure both ‘green’ and ‘gold’ publishing, i.e. self-archiving in
one or more repositories (‘green’) and paying an article processing charge
when publishing in journals (‘gold’). The choice between these two options
will be made on a case-by-case basis, taking into account the potential impact
of the published results.
The project will make its public datasets available through the following
repositories:
* The project website: _http://www.tech4win.eu/_
* The central repository _http://www.zenodo.org_ (as suggested in the Horizon 2020 guidelines), where the project will store (public) deliverables, publications and datasets. The project identifier is Tech4Win.
_Figure 2.1. ZENODO repository._
For internal management purposes the Tech4Win consortium will use a Microsoft
SharePoint site which has been prepared, organized and will be maintained by
IREC, the coordinating organization of Tech4Win. The secure HTTP link for the
Tech4Win SharePoint is easily reached through the open project website.
# ETHICAL AND LEGAL COMPLIANCE
This section addresses the issues of ethical and legal compliance concerning
the datasets produced by the project.
## ETHICAL AND LEGAL COMPLIANCE
None of the data that Tech4Win makes available in the public repositories
mentioned above will contain information on individuals or companies. In
general, the data used in Tech4Win are synthetic where possible and do not
represent any human being or corporate entity. Note also that, during the
project, participants will be given the option to withdraw themselves and
their data at any time. It should be pointed out that the Consortium will do
its best to align the data management with the Data Protection Directive
95/46/EC.
## IPR ISSUES
In accordance with the terms of the CA (consortium agreement), ownership of
any datasets generated resides with the consortium partner(s) who create the
datasets in their research and development work.
# ARCHIVING AND PRESERVATION
The site _http://www.zenodo.org_ provides long-term storage for the datasets
that are placed there. Individual partners may also place datasets (and
academic papers) in the open source systems made available at their
organisations.
# METADATA
The consortium recognizes the Dublin Core Metadata Initiative as a widely used
mechanism by which to record the metadata for each public dataset in the
project. This is a set of 15 terms, endorsed in IETF RFC 5013 and in ISO
Standard 15836-2009. These terms are as follows:
1. Title
2. Creator
3. Subject
4. Description
5. Publisher
6. Contributor
7. Date
8. Type
9. Format
10. Identifier
11. Source
12. Language
13. Relation
14. Coverage
15. Rights
Associated with each public dataset that Tech4Win produces will be a file of
metadata structured as in the following example:
<meta name="DC.Title" content="Test data for Tech4Win experiment 1">
<meta name="DC.Format" content="text; sparse graph representing X">
<meta name="DC.Language" content="en">
<meta name="DC.Publisher" content="Tech4Win Project">
…..
All meta tags are optional in the Dublin Core standard; however, the Tech4Win
project will endeavor to fill in all 15 meta tags for each dataset.
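A minimal sketch of how such a metadata file could be generated for each dataset is given below; the record values are placeholders taken from the example above:

```python
# Minimal sketch: emitting the fifteen Dublin Core meta tags for a
# dataset as an HTML fragment, mirroring the structure shown above.
DC_TERMS = [
    "Title", "Creator", "Subject", "Description", "Publisher",
    "Contributor", "Date", "Type", "Format", "Identifier",
    "Source", "Language", "Relation", "Coverage", "Rights",
]

record = {  # placeholder values for one dataset
    "Title": "Test data for Tech4Win experiment 1",
    "Format": "text; sparse graph representing X",
    "Language": "en",
    "Publisher": "Tech4Win Project",
}

# Emit all fifteen tags, leaving the content empty where not yet filled in.
for term in DC_TERMS:
    print(f'<meta name="DC.{term}" content="{record.get(term, "")}">')
```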
# ROLES AND RESPONSIBILITIES
Roles and responsibilities for maintaining and updating the Data Management
Plan (DMP) are linked to roles within Tech4Win. In principle, the WP leaders
are responsible for keeping the repositories updated, using the inputs
provided by the Parties involved in each WP before each consortium meeting,
under the overall coordination of the PC.
The EIB will check and decide what data can be open without jeopardizing the
effective protection of IPR and generated Foreground.
Parties are requested to deliver, on a six-monthly basis:
* Pre-print manuscripts of any (accepted) publications,
* Slides and posters shown at conferences,
* Raw data supporting paper and deliverable figures,
* PhD dissertations generated in the frame of the project.
In case new personnel are assigned to a relevant role, the corresponding
responsibilities with respect to the DMP are also taken over. For details on
the management roles and structure of Tech4Win, see D8.1. In case the contact
person for data leaves the project, the affiliation of the original contact
person will take over the responsibility and will assign a new contact person.
1\. Executive Summary
The aim of the Data Management Plan, in short DMP, is to manage data used and
generated within the TER4RAIL project. It describes how data 1 will be
collected, processed, stored and managed from the perspective of external
accessibility and long-term archiving. It takes account of the particular
characteristics of the TER4RAIL project, which has features such as diverse
data sources and formats and greater initial uncertainties typical of
coordination and support actions. The DMP is therefore designed for
flexibility to meet emerging needs.
The DMP supports a project which has as its principal aims the establishment
of a research observatory for rail, the correlation of its outputs with
existing roadmaps so they may be improved and updated and setting out the key
argument for the use of rail as the backbone of European mobility.
More precisely, the DMP addresses the following points for each of the
project’s Work Packages:
1. Type of data to be utilised and generated within TER4RAIL. This section identifies and describes the (existing) input data that will be utilised and the output data to be generated by the project.
2. Standards to be used, metadata and quality issues. GDPR and compliance issues are covered as appropriate.
3. How data are exploited and shared/accessed for their verification and reutilisation. The exploitation of data will follow the strategies of each partner concerning their business potential, in accordance with the exploitation plan produced in WP4 and with the access to data by the partners specified in the Consortium Agreement. Specific restrictions and confidentiality aspects are clarified.
4. Data storage and conservation. Where the data will be held and the arrangements and responsibilities for managing, updating and maintaining the data.
2. Abbreviations and acronyms
<table>
<tr>
<th>
**Abbreviation / Acronyms**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
CSA
</td>
<td>
Coordination and Support Actions
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
ERRAC
</td>
<td>
European Railway Research Advisory Council
</td> </tr>
<tr>
<td>
EU
</td>
<td>
European Union
</td> </tr>
<tr>
<td>
GDPR
</td>
<td>
General Data Protection Regulation
</td> </tr>
<tr>
<td>
JU
</td>
<td>
Joint Undertaking
</td> </tr>
<tr>
<td>
MAAP
</td>
<td>
Multi Annual Action Plan
</td> </tr>
<tr>
<td>
NA
</td>
<td>
Not Applicable
</td> </tr>
<tr>
<td>
R&D
</td>
<td>
Research and development
</td> </tr>
<tr>
<td>
S2R
</td>
<td>
Shift2Rail
</td> </tr>
<tr>
<td>
UIC
</td>
<td>
Union Internationale des Chemins de Fer
</td> </tr>
<tr>
<td>
WP
</td>
<td>
Work Package
</td> </tr> </table>
3. Background
This document constitutes the Deliverable D4.2 “Data Management Plan” for the
TER4RAIL project, a 24-month coordination and support action for transversal
exploratory research activities to benefit the railways, within the overall
framework of Shift2Rail, which is developing the fundamental building blocks
that will allow the creation of the future railway interoperable system.
The DMP supports TER4RAIL’s principal aims - the establishment of a research
observatory for rail, the correlation of its outputs with existing roadmaps so
they may be improved and updated and setting out the key argument for the use
of rail as the backbone of European mobility.
The DMP is a vital management tool, particularly as the project has seven
partners from five countries and different disciplines, and the project has a
particularly wide field of activity.
The partners recognise the challenges this type of project poses for data
management. They have therefore committed themselves to devising and honouring
a formal process to ensure that the data are managed and maintained in a way
that mitigates the complexity and diversity of the data sources. This will
ensure an efficient and sustainable process to deliver the project objectives
and the ongoing usefulness of its products for the success of the rail sector
and its contribution to delivering S2R's wider social, economic and
environmental objectives.
4\. Objective
This Data Management Plan (DMP) details what data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved. This document is to be
considered in combination with:
* Section 9 “Access Rights” and Attachment 1 “Background included” of the Consortium Agreement, dealing with access rights and the use of the Workflow Tool.
* Section 3 “Rights and obligations related to background and results” of the Grant Agreement No. 826055, dealing with rights and obligations related to background and results.
The DMP is organised per Work Package (WP) to concretely describe the
contribution of each WP to the outcomes as well as the spin-off potential of
each activity. To understand the data that the project will generate, a brief
overview of the project is given below:
TER4RAIL is a coordination and support action to determine transversal
exploratory research activities, among different actors, that are beneficial for
railways. The Shift2Rail Multi Annual Action Plan (MAAP) will play a central
role in the establishment of future interoperable railway systems suitable for
European society and the environment. However, due to the rapid pace of
technological change and innovation, it is necessary to be aware of the novel
possibilities that can enable increasingly sustainable progress in this
regard.
Additionally, the European railway community is represented by different
actors (industry, academia, users, researchers, and policy makers) with
different perceptions regarding technological applications and different
objectives for the future.
With regard to this context, the work of TER4RAIL is organised as follows:
* TER4RAIL will identify and monitor new opportunities for innovative research and facilitate the cross-fertilisation of knowledge from other disciplines, at what is referred to as the Rail Innovative Research Observatory. Permanent contact with other relevant sectors will have a prominent role in importing disruptive perspectives from other disciplines and facilitating interactions.
* TER4RAIL will determine and assess the existing roadmaps that drive the future of railways and compare them with the interpretations obtained from the observatory. This analysis will indicate the gaps that need to be covered and serve as the anchor for the prospective roadmaps, among others the Shift2Rail MAAP.
* TER4RAIL considers railways as the backbone of future European mobility, as stated in the rail sector’s European Railway Research Advisory Council’s (ERRAC) Rail 2050 Vision published in December 2017, and therefore, it is necessary that TER4RAIL raise arguments that can sustain this essential system. To that end, data analysis and statistical reporting are foreseen and conducted.
* Finally, the work performed under TER4RAIL will be communicated to the transport community, liaising with the Shift2Rail communication team with a correlated communication strategy. A strategy of exploitation of the results will guarantee that these are properly employed in this area with maximum impact.
* TER4RAIL will be able to select and synthesise a considerable amount of information regarding the future of railways and transmit it in a consolidated, improved, clear, and understandable manner. This should facilitate the realisation of TER4RAIL’s ambition of being the CSA of reference for the evolution of EU railways.
The WPs will address the following areas:
* WP1: Rail Innovative Research Observatory
* WP2: Roadmaps
* WP3: Arguments supporting rail
* WP4: Dissemination, exploitation, and knowledge transfer
* WP5: Coordination and management
5\. Data Management at Project Level
### 5.1. Data Collection
Each Work Package Leader is responsible for defining and describing all (non-
generic) datasets specific to their individual work package.
The WP leaders shall formally review the datasets related to their WP when
relevant, and at least at the time of each project periodic report to the
European Commission.
All modifications and additions to the DMP shall be provided to the TER4RAIL
Data Manager Coordinator, UIC, for inclusion in the DMP. Each WP Leader is
responsible for the quality and completeness of the datasets related to their
work package: the quality check will be done at WP level by the WP Leader,
since some of the data must be pre-processed and validated by the WP Leader,
who alone has access to the corresponding raw data (for instance, survey data
to be anonymised/aggregated before being communicated to other partners).
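As an illustration of this pre-processing step, the short sketch below aggregates hypothetical survey responses and strips direct identifiers before the file leaves the WP Leader’s server. The file name and column names are assumptions for the sake of the example, not the project’s actual survey schema.

```python
import pandas as pd

# Load the raw survey export (hypothetical file and columns).
raw = pd.read_csv("survey_raw.csv")

# Drop direct identifiers before anything is communicated to other partners.
identifier_columns = ["name", "email", "organisation"]
anonymised = raw.drop(columns=[c for c in identifier_columns if c in raw.columns])

# Aggregate: only counts per stakeholder group and answer are shared.
aggregated = (
    anonymised.groupby(["stakeholder_group", "q1_answer"])
    .size()
    .reset_index(name="responses")
)

aggregated.to_csv("survey_aggregated.csv", index=False)
```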
### 5.2. Data Archiving & Preservation
A Workflow Tool platform was created to support the work of the consortium
members. The partners have received a link with an invitation to access the
platform.
TER4RAIL partners are strongly encouraged to use the Workflow Tool platform to
share project information. The main functionality to be used is the upload and
download of documents (contact list, deliverables, templates, minutes of
meetings, agendas, presentations, Technical Annex of the Grant Agreement,
etc.).
#### 5.2.1. Data Security and Integrity
Data uploaded to the Workflow Tool shall not be encrypted, irrespective of
whether the data items have been identified for future archiving or not.
All the partners invited to the platform have the same publication rights.
These rights include viewing, modifying and creating documents. All members of
the project have access to all documents and meetings in the tool. The internal
structure of the five WP folders will be determined by the respective Work
Package Leader. The Project Coordinator has overall project administration
rights, enabling administration of the complete project document database.
Data uploaded to the Workflow Tool are protected against disturbances and
possible loss on the server. As a backup, all the information is also stored on
the hard disks of four different computers at EURNEX headquarters.
#### 5.2.2. Document Archiving
The document structure and type definition will be preserved as defined in the
document breakdown structure and work package groupings specified for the
Workflow Tool.
The process of archiving will be based on a data extract performed by EURNEX
within 12 weeks of the formal closure of the TER4RAIL project. Data will be
copied and transferred to a digital repository provided by EURNEX.
### 5.3. Computer file formats
To ensure document compatibility, the following file formats should be used:
* WORD version Microsoft Office 2007 or higher (including the OOXML and ODT formats) for documents;
* EXCEL version Microsoft Office 2007 or higher (including the OOXML and ODT formats) OR
Comma-Separated Values format (CSV) for spreadsheets and databases;
* PowerPoint version Microsoft Office 2007 or higher (including the OOXML and ODT formats) for overhead slides;
* PDF for consolidated releases of project documents;
* ZIP for compressed documents;
* JPEG/PNG for pictures;
* AVI or MPEG-4 for videos;
* MP3 or MPEG-4 for audio.
For any other case, the file-format guidance of the Mendeley Open Data platform
will be applied: _https://data.mendeley.com/file-formats_ .
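As a minimal sketch of how this whitelist could be enforced before a file is uploaded to the Workflow Tool, the snippet below checks a file’s extension against the list above; the extension set is an illustrative reading of that list, and the function is hypothetical rather than part of the Workflow Tool itself.

```python
from pathlib import Path

# Allowed extensions, derived informally from the project's file-format list.
ALLOWED_EXTENSIONS = {
    ".docx", ".odt",          # documents
    ".xlsx", ".csv",          # spreadsheets and databases
    ".pptx",                  # overhead slides
    ".pdf",                   # consolidated releases of project documents
    ".zip",                   # compressed documents
    ".jpg", ".jpeg", ".png",  # pictures
    ".avi", ".mp4",           # videos (MPEG-4)
    ".mp3",                   # audio
}

def is_allowed(filename: str) -> bool:
    """Return True if the file format is on the project whitelist."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

print(is_allowed("T4R-WP1-D1.1-Mapping-001.pdf"))  # True
print(is_allowed("measurements.hdf5"))             # False -> consult Mendeley guidance
```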
### 5.4. File Naming Conventions
Documents produced during the project and uploaded to the Workflow Tool will
be assigned a unique document code.
#### 5.4.1. Document code structure
The identification code contains the five following sections:
**[Project] - [Domain] - [Type] - [Filename] - [Version]**
* [Project] is T4R for all TER4RAIL documents;
* [Domain] is the relevant domain in the Workflow Tool (WP, Task or project body);
* [Type] is one or two letters defining the document category, with the addition of a 1 to 3 digit code (such as a deliverable number as stated in the Grant Agreement, or a dataset number as stated in this Data Management Plan);
* [Filename] is a short description of the document;
* [Version] is a version number starting at 001.
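A short sketch of how the convention could be parsed and checked programmatically is given below; the regular expression is a plausible reading of the convention based on the description and the examples that follow, not an official project tool.

```python
import re

# [Project]-[Domain]-[Type(+code)]-[Filename]-[Version], e.g. T4R-WP1-D1.1-Mapping-001
DOC_CODE = re.compile(
    r"^(?P<project>T4R)-"
    r"(?P<domain>[A-Z0-9]{2,4})-"
    r"(?P<type>[A-Z]{1,2}\d{0,3}(?:\.\d+)?)-"
    r"(?P<filename>[A-Za-z0-9_]+)-"
    r"(?P<version>\d{3})$"
)

def parse_code(code: str) -> dict:
    """Split a TER4RAIL document code into its five sections."""
    match = DOC_CODE.match(code)
    if not match:
        raise ValueError(f"not a valid TER4RAIL document code: {code}")
    return match.groupdict()

print(parse_code("T4R-WP1-D1.1-Mapping-001"))
```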
Examples:
<table>
<tr>
<th>
**PROJECT CODE**
</th>
<th>
**-**
</th>
<th>
**DOMAIN**
**(3-4 letters)**
</th>
<th>
**-**
</th>
<th>
**TYPE**
**(1-2 letters)**
**\+ if applicable:**
**CODE**
**(1-3 digits)**
</th>
<th>
**-**
</th>
<th>
**FILENAME**
**(n letters)**
</th>
<th>
**-**
</th>
<th>
**Version**
**(3 digits)**
</th> </tr>
<tr>
<td>
T4R
</td>
<td>
\-
</td>
<td>
SC
</td>
<td>
\-
</td>
<td>
MA
</td>
<td>
\-
</td>
<td>
5th_April_2019_Minutes
</td>
<td>
\-
</td>
<td>
002
</td> </tr>
<tr>
<td>
T4R
</td>
<td>
\-
</td>
<td>
WP1
</td>
<td>
\-
</td>
<td>
P
</td>
<td>
\-
</td>
<td>
1st_Periodic_Report
</td>
<td>
\-
</td>
<td>
001
</td> </tr>
<tr>
<td>
T4R
</td>
<td>
\-
</td>
<td>
WP1
</td>
<td>
\-
</td>
<td>
D1.1
</td>
<td>
\-
</td>
<td>
Mapping
</td>
<td>
\-
</td>
<td>
001
</td> </tr>
<tr>
<td>
T4R
</td>
<td>
\-
</td>
<td>
WP1
</td>
<td>
\-
</td>
<td>
DA1.1
</td>
<td>
\-
</td>
<td>
List_of_Key_Documents
</td>
<td>
\-
</td>
<td>
001
</td> </tr> </table>
Table 1 - Examples of file naming
#### 5.4.2. Document types
This information will be used to set up the identification code.
Documents are classified among the following types:
<table>
<tr>
<th>
**Letter**
</th>
<th>
**Name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
A
</td>
<td>
Administrative
</td>
<td>
Any administrative document except contractual documents
</td> </tr>
<tr>
<td>
C
</td>
<td>
Contractual document
</td>
<td>
Consortium Agreement, Grant Agreement and their approved amendments
</td> </tr>
<tr>
<td>
D
</td>
<td>
Deliverable
</td>
<td>
Deliverable identified as such under the Grant Agreement
</td> </tr>
<tr>
<td>
DA
</td>
<td>
Dataset
</td>
<td>
Dataset identified as such in the Data Management Plan
</td> </tr>
<tr>
<td>
EC
</td>
<td>
EC document
</td>
<td>
Document provided by EC (general rules, guidelines or EC experts documents)
</td> </tr>
<tr>
<td>
M
</td>
<td>
Model
(template)
</td>
<td>
MS-Office document templates including TER4RAIL visual identity
</td> </tr>
<tr>
<td>
MA
</td>
<td>
Meeting Agenda
</td>
<td>
Meeting Agenda
</td> </tr>
<tr>
<td>
MI
</td>
<td>
Minutes
</td>
<td>
Minutes
</td> </tr>
<tr>
<td>
P
</td>
<td>
Periodic Report
</td>
<td>
All intermediate/periodic reports except those listed as deliverables. May be
a WP intermediate report or a project intermediate report requested by the
Grant Agreement but not listed as deliverable.
</td> </tr>
<tr>
<td>
PR
</td>
<td>
Presentation
</td>
<td>
Presentation
</td> </tr>
<tr>
<td>
T
</td>
<td>
Technical contribution
</td>
<td>
Technical document contributing to a task/deliverable but not part of the
deliverable
</td> </tr>
<tr>
<td>
W
</td>
<td>
Proposal
</td>
<td>
Proposal for changes to the Consortium Agreement or Grant Agreement
</td> </tr>
<tr>
<td>
X
</td>
<td>
External document
</td>
<td>
Document produced by non-members of the project (e.g. papers, reports,
external public deliverables, etc.) that, upon authorisation of the author(s),
is shared with the project due to its relevance.
</td> </tr> </table>
Table 2 - Document types
### 5.5. Data and Shift2Rail
The TER4RAIL deliverables and all other related generated data are
fundamentally linked to planned future Shift2Rail project activity.
The data requirements of this DMP have been developed with the objective of
providing data structures that are uniform and not open to ambiguous future
interpretation, thereby facilitating synergies.
Data shall be specifically selected for archiving based on the criterion that
they are likely to be useful for future Shift2Rail activities.
6\. DMP of WP1: Rail Innovative Research Observatory
### 6.1. Data types
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
T4R-WP1-DA1.1
</td>
<td>
List of Rail R&D key documents
</td>
<td>
EXCEL
</td>
<td>
21KB
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.2
</td>
<td>
Folder containing the public documents of the list of Rail R&D key documents
</td>
<td>
PDF + ZIP
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.3
</td>
<td>
Folder with the confidential documents of the list of Rail R&D key documents
(#6 and #23): #6 is the “Capabilities and
areas of development” of UIC and #23 is “MAIN PUBLIC TRANSPORT TRENDS &
DEVELOPMENTS OUTSIDE EUROPE” from the UITP.
</td>
<td>
PDF + ZIP
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.4
</td>
<td>
Rail stakeholders survey: answers provided at SurveyMonkey – non-personal
information
</td>
<td>
EXCEL
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.5
</td>
<td>
Rail stakeholders survey: answers provided at SurveyMonkey – personal
information: results of Q#33 (Would you like to keep in touch regarding
TER4RAIL activities? If so, feel free to leave here your name and contact
details). The question is accompanied by information concerning GDPR
compliance.
</td>
<td>
EXCEL
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.6
</td>
<td>
Rail stakeholders survey: aggregated answers – result of analysis: included
inside D.1.1.
</td>
<td>
PDF
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.7
</td>
<td>
Database of rail related projects financed under H2020
</td>
<td>
EXCEL
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.8
</td>
<td>
Shift2Rail specific questionnaire / interview
</td>
<td>
WORD
</td>
<td>
\-
</td>
<td>
FFE
</td> </tr> </table>
Table 3 - Existing Data used in WP1
Regarding the “Rail stakeholders survey” (DA1.4 and DA1.5), the following
statement was included at the beginning of the questionnaire: “Answers will be
treated confidentially and results will be aggregated”, making participants
aware of the treatment and use of their answers. The survey includes only one
question affected by the GDPR, Q#33. Before answering this question, the
applicable data protection terms and conditions were presented 2 and
participants had to agree in order to provide an answer. Participants who did
not agree were not able to answer Q#33.
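A minimal sketch of how this consent gate could be applied when separating the survey export into the non-personal (DA1.4) and personal (DA1.5) datasets is shown below; the file and column names are assumptions for illustration, not the real SurveyMonkey export schema.

```python
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical survey export

# Q#33 answers are kept only where the data protection terms were accepted.
consented = responses[responses["q33_consent"] == "agree"]
personal = consented[["q33_contact_details"]]          # DA1.5 (personal data)

# The non-personal dataset never carries the Q#33 columns.
non_personal = responses.drop(columns=["q33_consent", "q33_contact_details"])

personal.to_csv("DA1_5_personal.csv", index=False)     # stays on FFE servers
non_personal.to_csv("DA1_4_non_personal.csv", index=False)
```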
Should additional data be generated in this WP, additions to the DMP will be
made.
### 6.2. Standards, Metadata and Quality Issues
Not applicable.
### 6.3. Data Sharing
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
T4R-WP1-DA1.1
</td>
<td>
Publicly available. Included in M.S. 1 and D.1.1; shared with other
projects/stakeholders that may be interested (so far: the FLEX4RAIL project).
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.2
</td>
<td>
Can be shared with any stakeholder interested in it.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.3
</td>
<td>
Restricted only to Project partners.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.4
</td>
<td>
Not shared. It will be stored on FFE’s internal servers and used only for the
generation of D.1.1, producing aggregated analysis.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.5
</td>
<td>
Not shared. It will be stored on FFE’s internal servers in accordance with the
applicable data protection law 2 . Consent to these data protection terms has
been requested as a requirement for answering this question.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.6
</td>
<td>
Publicly available. Included in D.1.1.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.7
</td>
<td>
Publicly available at the TER4RAIL website and sent by email to all interested
stakeholders requesting it (so far: FLEX4RAIL, ERRAC WG2, Shift2Rail
Secretariat).
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.8
</td>
<td>
Confidential. Q3, Q4, Q5 and Q15 may be incorporated into the aggregated
results of the online survey in an anonymous way. Q8, Q9, Q10 and Q17 will be
shared with project partners as input for T.1.2. The complete questionnaire
will not be shared. It will be stored on FFE’s internal servers and used
internally to align WP1’s activities.
</td> </tr> </table>
Table 4 - Data Sharing in WP1
2 The data provided under Q#33 will be stored and controlled by Fundación de
los Ferrocarriles Españoles (FFE) in compliance with the information set out in
Act 3/2018 on Personal Data Protection and the Guarantee of Digital Rights and
the provisions of the General Data Protection Regulation (Regulation (EU)
2016/679 of 27 April 2016), applying GDPR Article 6.1(a): the data subject has
given consent to the processing of his or her personal data for one or more
specific purposes, with the objective of contacting respondents in case of
requests for further information on the topics addressed by this questionnaire
or the distribution of results and information from the TER4RAIL project.
Personal data will not be published, nor shared with third parties unless
legally required. For further information, or to exercise your rights, please
consult: _https://www.ffe.es/fundacion/aviso_legal_en.htm_ .
### 6.4. Archiving and Preservation
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
T4R-WP1-DA1.1
</td>
<td>
Workflow Tool Project folders. It can be archived and preserved.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.2
</td>
<td>
Workflow Tool Project folders. It can be archived and preserved.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.3
</td>
<td>
Workflow Tool Project folders only accessible to project partners. It will be
deleted once the project ends.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.4
</td>
<td>
FFE internal servers. It will be deleted once the project ends.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.5
</td>
<td>
FFE internal servers. It will be deleted once the project ends.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.6
</td>
<td>
Included in D1.1. Publicly available at the project web. It can be archived
and preserved.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.7
</td>
<td>
Workflow Tool Project folders and project web. It can be archived and
preserved.
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.8
</td>
<td>
FFE internal servers. It will be deleted once the project ends.
</td> </tr> </table>
Table 5 - Archiving and preservation of the data in WP1
### 6.5. Data Management Responsibilities
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
T4R-WP1-DA1.1
</td>
<td>
EURNEX
</td>
<td>
Manages project Workflow Tool folders
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.2
</td>
<td>
EURNEX
</td>
<td>
Manages project Workflow Tool folders
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.3
</td>
<td>
EURNEX
</td>
<td>
Manages project Workflow Tool folders
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.4
</td>
<td>
FFE
</td>
<td>
Stores and guards the data
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.5
</td>
<td>
FFE
</td>
<td>
Stores and guards the data
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.6
</td>
<td>
EURNEX
</td>
<td>
Manages project Workflow Tool folders
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.7
</td>
<td>
EURNEX / UIC
</td>
<td>
Manages project Workflow Tool folders / manages website
</td> </tr>
<tr>
<td>
T4R-WP1-DA1.8
</td>
<td>
FFE
</td>
<td>
Stores and guards the data
</td> </tr> </table>
Table 6 - Data Management Responsibilities in WP1
7\. DMP of WP2: Roadmaps
### 7.1. Data types
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
T4R-WP2-DA2.1
</td>
<td>
SurveyMonkey Delphi Survey Round 1 responses
</td>
<td>
Qualitative and
Quantitative data
</td>
<td>
Unknown at this stage, likely to be <100 MB
</td>
<td>
UNEW: Thomas Zunder
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.2
</td>
<td>
SurveyMonkey Delphi Survey Round 2 responses
</td>
<td>
Qualitative and
Quantitative data
</td>
<td>
Unknown at this stage, likely to be <100 MB
</td>
<td>
UNEW: Thomas Zunder
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.3
</td>
<td>
TER4RAIL Webinars
</td>
<td>
Video data, usually
MPEG-4
</td>
<td>
Unknown at this stage but likely to be >1 GB
</td>
<td>
UNEW: Thomas Zunder
</td> </tr> </table>
Table 7 - Existing Data used in WP2
Participants in the Delphi Survey are asked to give explicit consent for the
use of their responses by explicitly opting in to the following statement. If
they choose not to opt in, the survey ends. This is fully compliant with the
GDPR.
Welcome to TER4RAIL Delphi survey round 1.
[You may wish to maximise this browser window.]
The main objective of TER4RAIL is to reinforce the cooperation between rail-
related stakeholders to improve the efficiency of the research in the rail
sector, in order to facilitate emerging innovative ideas and the cross-
fertilisation of knowledge from other disciplines or of disruptive technology
and innovation. TER4RAIL intends to promote this process by strengthening
transversal exploratory research in Europe for and with a railways perspective
in the frame of multimodality.
The objective of this Delphi survey is to review, support, and improve the
sector roadmaps, in preparation for the next iteration of the roadmapping
process in the railway sector, considering multimodal environments and railway
as the backbone of mobility in the future.
This is the first round questionnaire of the Delphi survey. Your contribution
to the first round will be used to develop a second round questionnaire. We
are therefore interested in broad answers.
Please expand upon your answers whenever appropriate.
<table>
<tr>
<th>
Your opinions expressed in this survey are 'personal to you as an expert'. We
understand that they do not necessarily represent the opinions or policy of
your organisation and will not be used as such. Your response will be treated
in strict confidence, and names of individual respondents or organisations
will not be used in published material or given to third parties. The general
findings of the survey will be published. If you participate in the survey and
enter an email address, a copy of the result will be emailed to you.
Thank you,
If you have queries then please do not hesitate to contact: Thomas Zunder,
[email protected]
Newcastle University
Stephenson Building
Newcastle upon Tyne
NE1 7RU
United Kingdom
Data Protection and Privacy Terms
* we will process all personal data fairly and lawfully
* we will only process personal data for specified and lawful purposes
* we will endeavour to hold relevant and accurate personal data, and where practical, we will keep it up to date
* we will not keep personal data for longer than is necessary
* we will keep all personal data secure
* we will endeavour to ensure that personal data is not transferred to countries outside of the European Economic Area (EEA) without adequate protection
We would like to assure you that your opinion will be held anonymously and
securely. Personal data is asked for and retained for the purpose of the
survey but will not be published or used in an identifiable manner. Survey
results and feedback will be analysed and stored securely within Newcastle
University as well as on SurveyMonkey servers. The anonymised data and results
will be made available publicly.
Since data will be held on SurveyMonkey, the SurveyMonkey Privacy Policy will
apply; please refer to and read the policy here:
_https://www.surveymonkey.com/mp/legal/privacy-policy/_
</th> </tr>
<tr>
<td>
Note that data may be transferred out of the EU as part of the SurveyMonkey
Privacy Policy, see above.
Do you consent to the Data Protection and Privacy Terms above?
Yes No
</td> </tr> </table>
No additional data are planned to be generated in this WP.
### 7.2. Standards, Metadata and Quality Issues
The following standards and metadata are planned to be used for data related
to WP2: Compliance with GDPR. All participants are to be clearly advised of
the privacy policy and the nature of sharing.
### 7.3. Data Sharing
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
T4R-WP2-DA2.1
</td>
<td>
Shared within consortium and to public as anonymised data and summaries only.
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.2
</td>
<td>
Shared within consortium and to public as anonymised data and summaries only.
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.3
</td>
<td>
Shared with public on VIMEO video sharing platform and embedded in TER4RAIL
website.
</td> </tr> </table>
Table 8 - Data Sharing in WP2
### 7.4. Archiving and Preservation
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
T4R-WP2-DA2.1
</td>
<td>
Stored on SurveyMonkey and on Newcastle University secure and password
protected servers. To be shared using the Mendeley open data platform:
_https://data.mendeley.com_ .
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.2
</td>
<td>
Stored on SurveyMonkey and on Newcastle University secure and password
protected servers. To be shared using the Mendeley open data platform:
_https://data.mendeley.com_ .
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.3
</td>
<td>
Stored on Newcastle University secure and password protected servers and VIMEO
video sharing platform as well as embedded into TER4RAIL website.
</td> </tr> </table>
Table 9 - Archiving and preservation of the data in WP2
### 7.5. Data Management Responsibilities
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
T4R-WP2-DA2.1
</td>
<td>
UNEW: Thomas Zunder
</td>
<td>
Principal Research Associate
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.2
</td>
<td>
UNEW: Thomas Zunder
</td>
<td>
Principal Research Associate
</td> </tr>
<tr>
<td>
T4R-WP2-DA2.3
</td>
<td>
UNEW: Thomas Zunder
</td>
<td>
Principal Research Associate
</td> </tr> </table>
Table 10 - Data Management Responsibilities in WP2
8\. DMP of WP3: Arguments supporting rail
### 8.1. Data types
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
T4R-WP3-DA3.1
</td>
<td>
Collection and description of tables, graphs, charts, statistics regarding
rail and non-rail transport, summarized in a descriptive report.
</td>
<td>
Word, Excel, PDF; images/tables/charts in JPG, PNG format
</td>
<td>
Variable
</td>
<td>
Consortium members. All sources/references are properly quoted. The results of
the deliverables also belong to the European Commission and the S2R JU.
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.2
</td>
<td>
Report summarizing analysis and comments to the data collected in 3.1, also
providing insights on bottlenecks and gaps elimination.
</td>
<td>
Word, PDF; images/tables/charts in JPG, PNG format
</td>
<td>
Variable
</td>
<td>
Consortium members. All sources/references are properly quoted. The results of
the deliverables also belong to the European Commission and the S2R JU.
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.3
</td>
<td>
Description of success stories regarding rail, summarized in a handbook.
</td>
<td>
Word, PDF; images/tables/charts in JPG, PNG format
</td>
<td>
Variable
</td>
<td>
Consortium members. All sources/references are properly quoted. The results of
the deliverables also belong to the European Commission and the S2R JU.
</td> </tr> </table>
Table 11 - Existing Data used in WP3
No additional data are planned to be generated in this WP.
### 8.2. Standards, Metadata and Quality Issues
Not applicable.
### 8.3. Data Sharing
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
T4R-WP3-DA3.1
</td>
<td>
Deliverable will be public
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.2
</td>
<td>
Deliverable will be public
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.3
</td>
<td>
Deliverable will be public
</td> </tr> </table>
Table 12 - Data Sharing in WP3
### 8.4. Archiving and Preservation
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
T4R-WP3-DA3.1
</td>
<td>
All documents/data utilized as “sources” for producing the deliverables will
be stored in NEW OPERA archive. Deliverables and other public documents will
be uploaded on the Workflow Tool for consultation.
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.2
</td>
<td>
All documents/data utilized as “sources” for producing the deliverables will
be stored in NEW OPERA archive. Deliverables and other public documents will
be uploaded on the Workflow Tool for consultation.
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.3
</td>
<td>
All documents/data utilized as “sources” for producing the deliverables will
be stored in NEW OPERA archive. Deliverables and other public documents will
be uploaded on the Workflow Tool for consultation.
</td> </tr> </table>
Table 13 - Archiving and preservation of the data in WP3
### 8.5. Data Management Responsibilities
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
T4R-WP3-DA3.1
</td>
<td>
Giuseppe Rizzi (NEW OPERA)
</td>
<td>
Update and maintenance of the data
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.2
</td>
<td>
Giuseppe Rizzi (NEW OPERA)
</td>
<td>
Update and maintenance of the data
</td> </tr>
<tr>
<td>
T4R-WP3-DA3.3
</td>
<td>
Daria Kuzmina (UITP)
</td>
<td>
Update and maintenance of the data
</td> </tr> </table>
Table 14 - Data Management Responsibilities in WP3
9\. DMP of WP4: Dissemination, exploitation, and knowledge transfer
### 9.1. Data types
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
T4R-WP4-DA4.1
</td>
<td>
Images: Images and logos from partners participating in the project.
</td>
<td>
.eps, .ai,
.png,
.jpeg
</td>
<td>
Variable
</td>
<td>
The owner gives permission to
EURNEX as coordinator and to UIC as WP leader to use images for dissemination
purposes of TER4RAIL.
</td> </tr>
<tr>
<td>
T4R-WP4-DA4.2
</td>
<td>
Contact information of persons who have
registered for the final conference
</td>
<td>
html format
</td>
<td>
Variable
</td>
<td>
The data will be collected and processed on UIC servers in accordance with the
provisions of Regulation (EU) 2016/679 of the European Parliament and of the
Council of 27 April 2016 on the protection of natural persons with regard to
the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation).
</td> </tr> </table>
Table 15 - Existing Data used in WP4
No additional data are planned to be generated in this work package.
### 9.2. Standards, Metadata and Quality Issues
The pictures and logos are stored in common formats: vector image formats and
picture compression standards.
### 9.3. Data Sharing
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
T4R-WP4-DA4.1
</td>
<td>
The data will not be shared but some of the image database will be used for
dissemination purposes and therefore will become public.
</td> </tr>
<tr>
<td>
T4R-WP4-DA4.2
</td>
<td>
The data will be collected and processed by UIC only for the logistics purpose
of the final conference and will not be shared outside the consortium.
</td> </tr> </table>
Table 16 - Data Sharing in WP4
### 9.4. Archiving and Preservation
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
T4R-WP4-DA4.1
</td>
<td>
Data will be stored in the Workflow Tool
</td> </tr>
<tr>
<td>
T4R-WP4-DA4.2
</td>
<td>
Data will be stored on UIC servers
</td> </tr> </table>
Table 17 - Archiving and preservation of the data in WP4
### 9.5. Data Management Responsibilities
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
T4R-WP4-DA4.1
</td>
<td>
Christine HASSOUN (UIC)
</td>
<td>
Update and maintenance of the data
</td> </tr>
<tr>
<td>
T4R-WP4-DA4.2
</td>
<td>
Christine HASSOUN (UIC)
</td>
<td>
Update and maintenance of the data
</td> </tr> </table>
Table 18 - Data Management Responsibilities in WP4
10\. DMP of WP5: Coordination and management
### 10.1. Data types
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
T4R-WP5-DA5.1
</td>
<td>
Consortium partners data (Telephone number, email, name, company/institution)
</td>
<td>
.xlsx
</td>
<td>
Small
</td>
<td>
Consortium members
</td> </tr>
<tr>
<td>
T4R-WP5-DA5.2
</td>
<td>
Candidates for the Stakeholders Reference Group (name, company/institution,
email)
</td>
<td>
.xlsx
</td>
<td>
Small
</td>
<td>
Consortium members
</td> </tr> </table>
Table 19 - Existing Data used in WP5
No additional data are planned to be generated in this WP.
### 10.2. Standards, Metadata and Quality Issues
Not applicable.
### 10.3. Data Sharing
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
T4R-WP5-DA5.1
</td>
<td>
Access granted only to consortium partners
</td> </tr>
<tr>
<td>
T4R-WP5-DA5.2
</td>
<td>
Access granted only to consortium partners
</td> </tr> </table>
Table 20 - Data Sharing in WP5
### 10.4. Archiving and Preservation
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
T4R-WP5-DA5.1
</td>
<td>
The data will be erased after the end of the project
</td> </tr>
<tr>
<td>
T4R-WP5-DA5.2
</td>
<td>
The data will be erased after the end of the project
</td> </tr> </table>
Table 21 - Archiving and preservation of the data in WP5
### 10.5. Data Management Responsibilities
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
T4R-WP5-DA5.1
</td>
<td>
Armando Carrillo (EURNEX)
</td>
<td>
Update, maintenance and subsequent erasure of the data
</td> </tr>
<tr>
<td>
T4R-WP5-DA5.2
</td>
<td>
Armando Carrillo (EURNEX)
</td>
<td>
Update, maintenance and subsequent erasure of the data
</td> </tr> </table>
Table 22 - Data Management Responsibilities in WP5
11\. Conclusion
The purpose of the Data Management Plan is to support the data management life
cycle for all data that will be collected, processed or generated by the
TER4RAIL project. For this particular project, the DMP is more important than
usual because data are at the heart of delivering the outputs and they are to
be sourced from a diverse range of origins.
The DMP is not intended to be a static document but is designed to allow for
its own evolution during the lifespan of the project to take account of
emerging needs. This flexibility is important because the transversal project
itself is far-reaching, with diverse and potentially complex data that are yet
to be identified. This document is therefore expected to mature during the
project; more developed versions of the plan could be included as additional
revisions of this deliverable at later stages. The DMP will be updated at
least after the mid-term and final reviews to fine-tune it to the data
generated and the uses identified by the consortium, since not all data or
potential uses are defined at this stage of the project.
1515_CONNECTA-2_826098.md
# INTRODUCTION
The present Data Management Plan (DMP) details what data the project will
generate, whether and how it will be exploited or made accessible for
verification and re-use, and how it will be curated and preserved. This
document should be considered in combination with:
* Articles 9.1, 9.2, 9.3 and attachment 1 of the Consortium Agreement.
* Section 3 (Articles 23, 24, 25, 26, 27, 28, 29, 30 and 31) of the Grant Agreement No. 826098
The DMP is organised per project WP in order to concretely describe the
contribution of each WP to the final outcome as well as the spin-off potential
of each activity.
In order to understand the data of the project, a brief overview of the project
is given below.
## CONNECTA-2 PROJECT OVERVIEW
CONNECTA-2 aims to contribute to Shift2Rail’s next generation of TCMS
architectures and components with wireless capabilities. The research and
development work will address the second phase of activities of the Shift2Rail
Multi-Annual Action Plan (MAAP) on TD1.2 – TD Next Generation TCMS, to reach a
higher TRL (up to TRL5 expected).
This proposal covers the implementation of the new technological concepts,
standard specifications and architectures for train control and monitoring
defined within the CONNECTA-1 project. The project will be developed in five
main blocks of work, which will both reinforce and extend the early work done
in the previous project. These blocks are described below.
* A transversal common block to continue the research in basic technologies for wireless communications, produce new application profiles, explore new solutions for the human-machine interface (HMI) and complete potential open points left at the end of CONNECTA-1 (WP1, WP2).
* A second transversal block for implementing the technologies as defined in CONNECTA-1 (WP3) while defining the testing procedures (WP3, WP4).
* A vertical block for deploying and testing technologies in an urban (heavy metro) laboratory environment (WP5).
* A second vertical block, running in parallel, for deploying and testing technologies in a mainline (regional) laboratory environment (WP6).
* A project wide block to evaluate results (including KPI assessment), disseminate, communicate and exploit (WP7, WP8) as much as possible at this TRL5 level of achievements.
CONNECTA-2 will be divided into eight work packages (WP). Each WP contributes
to the scope of the call S2R-CFM-IP1-02-2018. Figure 1 shows the organisation
of the project.
**Figure 1: Project structure**
The goals of each WP are described below:
* **WP1:** Focuses on the new technologies needed for the technical WPs of the project. These specifications will define the technologies to be implemented and integrated in the urban and/or regional demonstrators, such as the new wireless TCMS communications. Additionally, this WP will work on the definition of the Application Profile for ATO in collaboration with TD2.2. This WP will also support the definition of functions for DMI standardisation and the completion of the Functional Open Coupling regarding the input and output needed for DMI visualisation.
* **WP2:** Works on the implementation of the evolved train-to-ground communication specified in WP1 (new functions of IEC 61375-2-6) and further Application Profiles, which will run on top of the FDF deployed in the demonstrators. Members will also be expected to participate together in interoperability tests of IEC 61375-2-6, to be carried out in the laboratory and the demonstrator. Additionally, this WP will work together with the X2Rail-1 project from IP2 to extend the IEC 61375-2-6 architecture to the “Adaptable Communication” concept coming from that project, allowing TCMS train-to-ground applications to reuse the radio carriers used by signalling applications.
* **WP3:** Specifies and implements components for laboratory demonstrators corresponding to two different application fields: a regional (train) demonstrator and an urban (train) demonstrator. Those demonstrators shall be used in subsequent WPs to provide a proof of concept of the technologies investigated and selected during CONNECTA-1 and the Roll2Rail Lighthouse project, namely the wireless consist network and train-to-wayside communication, Drive-by-Data, the Function Distribution Framework and Functional Open Coupling. The simulation framework defined in CONNECTA-1 will be used for sub-system simulation.
* **WP4:** Defines test cases and test scenarios in order to demonstrate the correct integration of the different technologies and architectures specified and implemented in WP3. These test specifications will be produced for both the urban and regional laboratory demonstrators, with a view to testing the wireless train backbone and consist network, train-to-ground communication, the Drive-by-Data solution, the integration of Application Profiles implemented in the Functional Distribution Framework, the Functional Open Coupling functionality and the Virtual Homologation Framework.
* **WP5:** Integrates the set of components developed in WP3 in a laboratory demonstrator for an urban train application, thus ensuring interoperability of technologies and architectures from different suppliers. Namely, the urban demonstrator will include the wireless train backbone, train-to-ground communication, Drive-by-Data, the Functional Distribution Framework and the Virtual Homologation Framework. For this purpose, a series of simulators and test tools will be implemented, and after the test facilities have been prepared, the tests previously defined in WP4 will be executed and carefully evaluated to check the fulfilment of requirements. This demonstrator will be the basis for future validation and later deployment on real vehicles.
* **WP6:** Integrates and evaluates the outcome (technologies, architectures and components) of WP3 in a laboratory demonstrator for a regional rail environment. Among others, the demonstrator will include the Functional Distribution Framework, the Drive-by-Data solution and train-to-ground communication. Consists inside this demonstrator will be provided by different partners to prove that the chosen concepts yield functional interoperability.
* **WP7:** Seeks to ensure proper dissemination and promotion of the project results, in a way which is consistent with the wider dissemination and promotion activities of Shift2Rail. The objective is to ensure that the outputs of the project are delivered in a form which makes them immediately available for use by the complementary actions, and that all important actors in the European railway sector are informed about the results.
* **WP8:** Focuses on the project management and technical coordination. Its main objectives are to ensure efficient coordination of the project together with the TMT (Technical Management Team) and the Steering Committee. Moreover, this WP coordinates the technical work of the various WPs in order to keep alignment with the overall objectives of the project and with Shift2Rail activities, as well as monitoring the TD1.2 contribution to the overall KPIs of Shift2Rail.
## DATA MANAGEMENT PLAN (DMP) GUIDING PRINCIPLES
The Data Management Plan of CONNECTA-2 is coordinated by Work Package 8, and
is articulated around the following key points:
* The Data Management Plan (DMP) described in this document has been prepared taking into account the template of the Guidelines on Data Management in Horizon 2020 [01]. The elaboration of the DMP will allow CONNECTA-2 partners to address all issues related with IP protection and data. The DMP is an official project Deliverable (D8.2) due in Month 4 (January 2019), but it will be a live document throughout the project. This initial version will evolve depending on significant changes arising and periodic reviews at reporting stages of the project.
* The consortium will comply with Regulation (EU) 2016/679, the General Data Protection Regulation, meaning that beneficiaries will ensure that - if applicable - all the data intended to be processed are relevant and limited to the purposes of the research project (in accordance with the ‘data minimisation‘ principle).
* Procedures that will be implemented for data collection, storage, access, sharing policies, protection, retention and destruction will be in line with EU standards as described in the Grant Agreement and the Consortium Agreement, particularly Article 18 (“Keeping Records - Supporting Documentation”); Article 23 (“Management of Intellectual Property”); Article 24 (“Agreement on background”); Article 25 (“Access Rights to Background”); Article 26 (“Ownership of Results”); Article 27 (“Protection of Results - Visibility of EU funding”); Article 30 (“Transfer and Licensing of Results”); Article 31 (“Access Rights to Results”); Article 36 (“Confidentiality”); Article 37 (“Security-related Obligations”); Article 39 (“Processing of Personal Data”); Article 52 (“Communication between the parties”); and “Annex I – Description of Work” of the Grant Agreement.
## CONNECTA-2 DATA MANAGEMENT POLICY
CONNECTA-2 Data Management Plan applies the FAIR (Findable, Accessible,
Interoperable and Reusable) Data Management Protocols. This document addresses
for each data set collected, processed and/or generated in the project the
following elements:
* **Contribution reference and naming:** Internal project Identifier (ID) for the data set to be produced. This identification code contains the six following sections: [Project] - [Domain] - [Type] - [Owner] - [Number] - [Version], where (see the sketch after this list):
  * [Project] is CTA2 for all CONNECTA-2 documents;
  * [Domain] is the relevant domain in the Cooperation Tool (WP, Task or project body);
  * [Type] is one letter defining the document category;
  * [Owner] is the trigram of the deliverable leader organisation;
  * [Number] is an order number within a domain allocated by the Cooperation Tool when the document is first created;
  * [Version] is the incremental version number, automatically incremented at each upload.
* **Standards and metadata:** Reference to existing suitable standards will be added if any.
* **Contribution description:** Description of the data that will be generated or collected.
* **Data sharing:** Description of how data will be shared, including access procedures and necessary software and other tools for enabling reuse, and definition of whether access will be open or restricted to specific groups.
* **Archiving and preservation:** Description of the procedures that will be put in place for long-term preservation of the data
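As an illustration of the naming scheme (referenced in the first bullet above), the sketch below composes an identifier from its six sections; the zero-padding widths and the example values are assumptions, since the order number and version are actually allocated by the Cooperation Tool.

```python
def make_id(domain: str, doc_type: str, owner: str, number: int, version: int) -> str:
    """Compose a CONNECTA-2 ID: [Project]-[Domain]-[Type]-[Owner]-[Number]-[Version]."""
    return f"CTA2-{domain}-{doc_type}-{owner}-{number:03d}-{version:02d}"

# Hypothetical example: technical contribution 42 of WP3, owned by trigram "ABC".
print(make_id("WP3", "T", "ABC", 42, 3))  # CTA2-WP3-T-ABC-042-03
```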
# DATA MANAGEMENT PLAN
## DATA SUMMARY
CONNECTA-2 will use and generate different types of data. Existing data used
in the project are listed in the following table:
**Table 1: Existing Data used in CTA2**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
CTA2-1.1
</td>
<td>
Measurement data
</td>
<td>
Format: raw data, text files, proprietary formats, e.g. .mat, .xls, …; Units:
e.g. Hz, bits/s, samples/s, m, s, …
</td>
<td>
Data from former measurement campaigns
</td>
<td>
Partner which generated the
measurement
data
</td> </tr>
<tr>
<td>
CTA2-1.2
</td>
<td>
Software
</td>
<td>
</td>
<td>
variable
</td>
<td>
Partner institution
</td> </tr> </table>
Data generated in this project include the following types:
**Table 2: Data Generated in CTA2**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
CTA2-2.1
</td>
<td>
Measurement data
</td>
<td>
E.g. Hz, m, bits/s,
samples/s, … Format: raw
data, text file,
.mat, …
</td>
<td>
From several MB to GB of data from the wireless data transmission tests on
real vehicles
</td>
<td>
Partner institution who executes the measurement
</td> </tr>
<tr>
<td>
CTA2-2.2
</td>
<td>
Software(Code):
Simulations, scripts, etc.
</td>
<td>
e.g. .c, .m, …
</td>
<td>
Several MB
</td>
<td>
The rightful owner according to the contract of purchase
</td> </tr>
<tr>
<td>
CTA2-2.3
</td>
<td>
Source files for
RBD/FTA Calculations
</td>
<td>
TBD
</td>
<td>
Unknown
</td>
<td>
Partner institution who executes the calculations
</td> </tr>
<tr>
<td>
CTA2-2.4
</td>
<td>
Source files for FMECA
</td>
<td>
TBD
</td>
<td>
unknown
</td>
<td>
Partner institution who executes the calculations
</td> </tr>
<tr>
<td>
CTA2-2.5
</td>
<td>
DOORS requirements
</td>
<td>
DOORS
database
</td>
<td>
Unknown MB
</td>
<td>
Shared between the contributing partners
</td> </tr>
<tr>
<td>
CTA2-2.6
</td>
<td>
SysML / UML diagrams
</td>
<td>
MagicDraw,
Enterprise Architect… formats
</td>
<td>
Several MB
</td>
<td>
Partner producing the diagrams
</td> </tr>
<tr>
<td>
CTA2-2.7
</td>
<td>
Test logs
</td>
<td>
Several formats: Text, raw data…
</td>
<td>
From several MB to GB
</td>
<td>
Partner institution who executes the
tests
</td> </tr> </table>
## FAIR DATA
The CONNECTA-2 project will work to ensure, as far as possible, that its data
are ’FAIR’, that is findable, accessible, interoperable and reusable,
according to the points below.
### Making data findable, including provisions for metadata
The CONNECTA-2 project is part of the European Shift2Rail initiative; it is
therefore expected to deposit the generated results in the _Cooperation Tool_
online repository. Within this repository, the deliverables marked as _public_
will be accessible via the Shift2Rail website. Each public deliverable is
accompanied by a title and a short description of its content, which helps
users find the desired material.
Each task leader is responsible for ensuring that the dissemination level of
each deliverable is correctly set. Equally, the deliverables will use
references according to their dissemination level. This means that public
deliverables should not refer to confidential documents whose absence would
invalidate their correct understanding.
### Making data openly accessible
In order to ease future work within the Shift2Rail TD1.2, CONNECTA-2 will make
available all data identified as appropriate (public and confidential) to
future projects (i.e. AWP 2020). The CONNECTA-2 Steering Committee is
responsible for any IPR issues that may appear, and any confidential data
disclosure needs its positive decision.
Task leaders will collect data from each task and the IPR Committee will
review and approve all data that are identified as appropriate for open
access. This process will be carried out on an ongoing basis to facilitate the
publication of appropriate data as soon as possible.
Any additional data besides the foreseen deliverables that are likely to be
shared should be evaluated by the CONNECTA-2 consortium. The Steering
Committee of CONNECTA-2 will assess such justifications and make the final
decision, based on examination of the following elements regarding the
confidentiality of datasets:
* Commercial sensitivity of datasets
* Data confidentiality for security reasons
* Conflicts between open-access rules and national and European legislation (e.g. data protection regulations).
* Sharing data would jeopardise the aims of the project.
* Other legitimate reasons, to be validated by the IPR Committee
Where it is determined that a database should be kept confidential, the
reasons for doing so will be included in an updated version of the DMP. Table
3 illustrates an example of the level of accessibility of CONNECTA-2 data for
future Shift2Rail AWP 2020 TD1.2 projects.
#### Table 3: Level of availability of additional CONNECTA-2 data ( _example_ )
<table>
<tr>
<th>
**Dataset number**
</th>
<th>
**Task number**
</th>
<th>
**Dataset name**
</th>
<th>
**Open / Restricted**
</th>
<th>
**Reason for**
**Restriction**
</th> </tr>
<tr>
<td>
_1_
</td>
<td>
_T5.2_
</td>
<td>
_Report on Regional lab demonstrator Test_
_Platform_
</td>
<td>
_Restricted_
</td>
<td>
_IPR_
_Sensitivities across datasets_
</td> </tr>
<tr>
<td>
_2_
</td>
<td>
_T6.1_
</td>
<td>
_Report on Urban lab demonstrator Test_
</td>
<td>
_Open_
</td>
<td>
_N/A_
</td> </tr>
<tr>
<td>
_3_
</td>
<td>
_T6.2_
</td>
<td>
_Report on Regional lab demonstrator Test_
</td>
<td>
_Open_
</td>
<td>
_N/A_
</td> </tr> </table>
### Making data interoperable
The data types and unique identifiers for the data produced by CONNECTA-2 are
introduced in section 1 and section 2.1. For further data generated during the
project, this information will be outlined in subsequent versions of this
document. In that case, information on data and metadata vocabularies,
standards or methodology to follow to facilitate interoperability will be
defined.
### Increase data re-use (through clarifying licenses)
The CONNECTA-2 project will generate valuable data for subsequent projects in
AWP 2020. Specifically, the experimental results obtained in CONNECTA-2 will
be the basis for the future CFM project starting in 2020.
As the project progresses and data are identified and collected, further
information on increasing data re-use will be outlined in subsequent versions
of the DMP.
## DATA SHARING
Table 4 summarizes the data sharing mechanisms to be used within the
CONNECTA-2 project.
**Table 4: Data Sharing in CONNECTA**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data sharing**
</th> </tr>
<tr>
<td>
CTA2-2.3 / 2.4 / 2.6
</td>
<td>
Cooperation Tool (Project’s online repository)
</td> </tr>
<tr>
<td>
CTA2-2.5
</td>
<td>
DOORS requirements will be shared as ReqIF exchange format, together with the
Microsoft Word version, and stored in Cooperation Tool (Project’s online
repository) for sharing.
</td> </tr>
<tr>
<td>
CTA2-2.2
</td>
<td>
Generated source code and executable files may additionally be shared with the
project partners through an online repository (CVS-like) or FTP.
</td> </tr>
<tr>
<td>
CTA2-2.1 / 2.7
</td>
<td>
Produced test logs and measured data may additionally be shared with the
project partners through FTP if their size exceeds 20 MB; otherwise the
Cooperation Tool will be used.
</td> </tr> </table>
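A minimal sketch of the size-based routing rule for test logs and measurement data follows; the 20 MB threshold comes from Table 4, while the function and channel labels are illustrative only.

```python
import os

SIZE_LIMIT_BYTES = 20 * 1024 * 1024  # 20 MB threshold from Table 4

def sharing_channel(path: str) -> str:
    """Pick the sharing channel for a test log or measurement file."""
    if os.path.getsize(path) > SIZE_LIMIT_BYTES:
        return "FTP"               # large files go through FTP
    return "Cooperation Tool"      # everything else uses the online repository

# Hypothetical usage:
# print(sharing_channel("wp5_test_log_run42.raw"))
```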
## ARCHIVING AND PRESERVATION
Data shall be specifically selected for archiving based on the criterion that
they are likely to be useful for ongoing and future Shift2Rail activities.
During the life of CONNECTA-2, data extraction from the Cooperation Tool will
be supported. Table 5 summarizes the archiving and preservation policies to be
used.
#### Table 5: Archiving and preservation of the data in CONNECTA
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
CTA-2.1 / 2.2 / 2.5 / 2.7
</td>
<td>
Regular backup of data on server, managed by IT departments
</td> </tr>
<tr>
<td>
CTA-2.3 / 2.4 / 2.6
</td>
<td>
Data will be stored on the Cooperation Tool which already has its backup
procedures.
</td> </tr> </table>
## DATA SECURITY
The research outputs of the project will be publicly available on the website
of the project (
_https://projects.shift2rail.org/s2r_ip1_n.aspx?p=CONNECTA-2_ ) unless the
result is marked as confidential in the Grant Agreement.
The reasons to consider results confidential are the following:
* Protection of intellectual property rights regarding new processes, products and technologies that would impact the competitive advantage of the consortium or its members.
* Commercial agreements as part of the procurements of components or materials that might foresee the confidentiality of data.
* Members background knowledge that might foresee the confidentiality of data.
# DMP REVIEW PROCESS & TIMETABLE
Shift2Rail TD1.2 MAAP (deployed by CONNECTA-1, CONNECTA-2 and CONNECTA-3
projects) is based on the V-Model illustrated in Figure 2. This model must be
contextualised to Shift2Rail, and in particular to the MAAP. CONNECTA-2 does
not cover the whole life cycle of the new TCMS generation but only some of the
first activities. Indeed, the project outcomes will reach TRL 4 or 5, but not
higher.
The “V” can thus be split into three parts, each corresponding to a different
call or phase. While the specification, system architecture and subsystem
design correspond to CONNECTA-1, the implementation of the components and
their integration into subsystems are allocated to CONNECTA-2; finally,
everything is put together on the Integrated Technology Demonstrator (ITD) for
system testing in CONNECTA-3.
**Figure 2: Project structure**
Due to the continuous iteration between design and testing of the developed
technologies, it may be necessary to update the design specifications,
including some specifications already finished in the CONNECTA-1 project. _In
order to keep the specification of the NG-TCMS updated along the whole MAAP,
this section will include in subsequent releases (mainly in M24 and M30) any
additional document (not foreseen initially in the project proposal) which
amends or complements any specification already released._
## DMP REVIEW IN M24
This section is temporarily empty until M24 of the project.
## DMP REVIEW IN M30
This section is temporarily empty until M30 of the project.
# CONCLUSIONS
The purpose of the Data Management Plan (DMP) is to support the data
management life cycle for all data that will be collected, processed or
generated by the CONNECTA-2 project. The DMP is not a fixed document, but
evolves during the lifespan of the project. This document is expected to
mature during the project; more developed versions of the plan could be
included as additional revisions of this deliverable at later stages. The DMP
will be updated at least after the mid-term review to fine-tune it to the data
generated and the uses identified by the consortium, since not all data or
potential uses are clear at this stage of the project.
1516_CO-ADAPT_826266.md
# Summary
This deliverable describes how data will be managed in the CO-ADAPT project.
The focus is on guidelines and practices to ensure ethical handling of data,
in particular to protect privacy and confidentiality. The sensitive data
collected come from volunteer participants who have signed an informed consent
form allowing the project to analyse and reuse the data. The project includes
four activities where participant data are collected: the CO-ADAPT
conversational agent application (T2.2, T2.3, T5.4, T6.4), the smart shift
scheduling study (T6.1), the proactive recommender (T2.4, T6.3), and the
adaptive assembly line with cobots (T2.5, T6.2). These four activities can be
considered subprojects; each collects participant data with different
technologies and tools while respecting the guidelines proposed in this
document.
Since the technologies and tools are still being defined, this document does
not specify particular technologies, protocols and formats; these will be
specified in the deliverables concerning each of the four activities.
In conclusion, the plan identifies the activities in the project where data is
collected and clearly proposes guidelines for data management. The main
guidelines include: 1) obtaining ethical approval from local committees for
all data collection activities, 2) obtaining informed consent from all
participants, 3) the right of participants to refuse or withdraw, 4)
confidentiality and anonymization of data, 5) use of state-of-the-art security
in protecting the data, 6) nominating data protection officers, and 7)
training project participants on ethical handling of data.
# Introduction
The project will make use of a mixture of data collection methods.
1. **Codesign and qualitative data.** The focus groups and ethnographic observations are qualitative in nature, relying on rich data that can tell us much about people’s experiences with current technologies and preferred design aspects of the new system. All data collected through these methods will be kept confidential and will be stored on secured servers. If any of the materials in which participants could be identified are to be used in academic or educational (classroom) settings, the participants need to provide separate consent for this use.
2. **Experiments and field studies/trials.** These, whether at work or in personal life, have a more quantitative approach. The project will make use of several behavioural measures (physical activity, heart rate, sleep patterns, work sheet logs, etc.).
The types of data collection for experiments and field trials are summarised
below.
_Figure 1 Overview of the 4 data gathering activities in CO-ADAPT_
The project includes four activities in which participant data is collected:
the CO-ADAPT conversational agent application (T2.2, T2.3, T5.4, T6.4), the
smart shift scheduling study (T6.1), the proactive recommender (T2.4, T6.3),
and the adaptive assembly line with cobots (T2.5, T6.2). These four activities
can be considered subprojects; each collects participant data with different
technologies and tools while respecting the guidelines proposed in this
document.
This deliverable first introduces the **GDPR** in **Section 2**, confirming
that the project follows the principles of the regulation. **Section 3**
introduces the main data management approach, in particular regarding ethical
handling of data. **Section 4** discusses how the project conforms with the
FAIR data use principles. Finally, **Section 5** reports the nominated Data
Protection Officers for all partners that handle and collect data.
# GDPR
As of May 2018, the GDPR applies in the European Union member states, which
creates the obligation for all consortium partners to follow the new rules and
principles. This section describes how the founding principles of the GDPR
will be followed in the CO-ADAPT project.
## Lawfulness, fairness and transparency
_**Personal data shall be processed lawfully, fairly and in a transparent
manner in relation to the data subject.** _
The CO-ADAPT project describes all handling of personal data in this DMP. All
data gathering from individuals will require informed consent of the test
subjects, or other individuals who are engaged in the project. Informed
consent requests will consist of an information letter and a consent form.
This will state the specific causes for the experiment (or other activity),
how the data will be handled, safely stored, and shared. The request will also
inform individuals of their rights to have data updated or removed, and the
project’s policies on how these rights are managed.
The project will anonymise the personal data as far as possible; however, it
is foreseen that this will not be possible in all cases. In those cases,
further consent will be asked to use the data for open research purposes,
including presentation at conferences and publication in journals, as well as
depositing a data set in an open repository at the end of the project.
The consortium will be as transparent as possible in the collection of
personal data. This means that, when collecting the data, the information
leaflet and consent form will describe the kind of information, the manner in
which it will be collected and processed, and if, how, and for which purpose
it will be disseminated and made open access. Furthermore, the subjects will
have the possibility to request what kind of information has been stored about
them, and they can request, within reasonable limits, to be removed from the
results.
## Purpose limitation
_**Personal data shall be collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those
purposes** _
CO-ADAPT project will not collect any data that is outside the scope of the
project. Each researcher will only collect data necessary within their
specific work package.
## Data minimisation
_**Personal data shall be adequate, relevant and limited to what is necessary
in relation to the purposes for which they are processed.**_
Only data that is relevant for the project research questions and the required
coaching strategies will be collected. Since this data can be highly personal,
it will be treated according to all guidelines on special categories of
personal data and won’t be shared without anonymisation or explicit consent of
the patient.
## Accuracy
_**Personal data shall be accurate and, where necessary, kept up to date.** _
All data collected will be checked for consistency.
## Storage limitation
_**Personal data shall be kept in a form which permits identification of data
subjects for no longer than is necessary for the purposes for which the
personal data are processed.**_
All personal data that will no longer be used for research purposes will be
deleted as soon as possible. All personal data will be made anonymous as soon
as possible. At the end of the project, if the data has been anonymised, the
data set will be stored according to the partners' practices (more information
in Section 5). If data cannot be made anonymous, it will be pseudonymised as
much as possible and stored following local regulations.
## Integrity and confidentiality
_**Personal data shall be processed in a manner that ensures appropriate
security of the personal data, including protection against unauthorised or
unlawful processing and against accidental loss, destruction or damage, using
appropriate technical or organisational measures** _
All personal data will be handled with appropriate security measures. This
means:
* Data sets with personal data will be stored on servers that comply with all GDPR regulations and are ISO 27001 certified.
* Access to this server will be managed by the project management and will be given only to people who need to access the data. Access can be retracted if necessary.
* All people with access to the personal data files will need to sign a confidentiality agreement.
* These data files cannot be copied unless stored encrypted on a password-protected storage device; in case of theft or loss, the files remain protected by the encryption (a minimal sketch follows this list).
* These copies must be deleted as soon as possible and cannot be shared with anyone outside the consortium or within the consortium without the proper authorization.
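As an illustration of the encrypted-copy rule above, the following is a minimal sketch, assuming the Python `cryptography` package; the file names and key handling are placeholders, not the consortium's actual tooling.

```python
# Hypothetical sketch: encrypt a data file before it is copied to a
# password-protected portable device; only the ciphertext leaves the server.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # the key is stored apart from the device
fernet = Fernet(key)

with open("dataset_p01.csv", "rb") as src:
    ciphertext = fernet.encrypt(src.read())

with open("dataset_p01.csv.enc", "wb") as dst:
    dst.write(ciphertext)                   # this encrypted copy may be moved
```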
In exceptional cases where the dataset is too large, or it cannot be
transferred securely, each partner can share their own datasets through
channels that comply with the GDPR.
## Accountability
_**The controller shall be responsible for, and be able to demonstrate
compliance with the GDPR.** _
At project level, the project management is responsible for the correct data
management within the project. For each data set, a responsible person has
been appointed at partner level, who will be held accountable for this
specific data set. Each researcher will need to make a mention of a dataset
with personal information to their Data Protection Officer, in line with the
GDPR regulations.
# Project Policies on Data
## Overview of ethical handling of data in CO-ADAPT
As described in the Introduction, monitoring of participants will take place
in the field trials. Task 8.4 Ethical Issue Management will verify and provide
to the EC the ethical approvals obtained from the relevant local ethical
committees in Italy (UNITN, UNIPD) and Finland (FIOH, UH).
Transmission of personal data over open communication channels will be done in
encrypted form only. The people working with the data will have to have a
unique password to access the database for security purposes. In all phases of
CO-ADAPT, these crucial ethical and legal aspects will be taken into account.
As a further measure to ensure compliance with legal and ethical conduct with
private data, CO-ADAPT will provide a mandatory training session on data
privacy for all CO-ADAPT researchers (see dedicated subsection in this section
on Milestone 2) at the project kick-off and three further ones before the
start of the last user studies. The consortium is committed to maintain strict
rules of privacy and prevent all personal data from being abused or leaked.
Under no circumstances will the consortium provide, give or sell any
information on its users to any third party (data will not be used for
commercial purposes under any circumstances).
Relatedly, CO-ADAPT will be based on strong analyses of how persuasive
interaction paradigms can be designed so that the influencing strategies take
specific ethical constraints into account, by including relevant ethical
content and appropriate influencing strategies in the very design of the
CO-ADAPT influencing framework (developed in WP1) and thereby in the hardware
and software interfaces. The main guidelines are:
**Ethical Approval.** All studies involving data collection from participants
will obtain ethical approval from the relevant local committees, and such
approvals will be kept on file.
**Minimal risk.** CO-ADAPT will only use hardware that users interact with
(wearable sensors or devices for showing conversational agents) that does not
need additional safety certification (i.e., hardware that has already been EC
certified for the specific use conditions, or that, like a coffee mug, does
not need any certification).
**Informed Consent.** Written and verbal informed consent will be obtained
from all subjects participating in the lab and field trials. All consent forms
will be approved by the local ethical committees.
**Confidentiality.** The confidentiality of data obtained in the study will be
safeguarded by anonymization. Encryption and anonymization of data will
prevent identification of participants and viewing of the sensitive data. The
researchers involved commit not to misuse the data collected during and after
the research; in particular, they commit not to use the data against
participants, not to sell this information to third parties, and to use the
data only in anonymous format unless specifically agreed with the participant.
**Data security and restricted access.** The project partners commit to
employing state-of-the-art data security, with access restricted to
researchers who have signed a confidentiality agreement.
**Sharing the Results.** The results of the overall study will be shared with
the participants from whom the data came, once the data has been analyzed.
**Right to Refuse or Withdraw.** Participants have the right to withdraw
themselves and their data from the project at any time. If a participant
decides to withdraw from the experiment, all data collected up to that point
will be destroyed within the following 24 hours.
**Incidental findings.** These refer to medical problems discovered in the
course of research or a trial that are not related to the topic of the
research. As a first step, research subjects will be made aware of the
approach taken in the event of incidental findings, which includes the right
to decide whether or not to be informed of such findings, as well as the right
to request that data about such findings be deleted.
## Co-design and Participatory Design
The CO-ADAPT project will implement active and continuous user participation
from a co-design perspective. The involvement of older users in participatory
design activities such as focus groups, ethnographic observation and co-design
workshops is foreseen. CO-ADAPT will give specific attention to any ethical
issues that will arise and will address them in a professional way following
very closely established EU regulations and corresponding national laws about
user privacy, confidentiality and consent. The main ethical issues to address
center on involving older persons in the various methods of the development
process of the augmented objects and the virtual e-coaching agent. Following
guidelines from research ethics throughout these stages ensures that
potentially problematic issues will be identified and assessed. All the work
that is done with human participants will therefore be submitted to ethical
review boards for approval. This approval will only be given if the proposed
research follows ethical codes of conduct that apply to the research
population.
Most participants in the co-design and implementation stages of the project
will be older users (contact with user groups will be established through
several consortium partners; IDEGO, UNIPD, UH, UNITN, FIOH). Participants in
all stages of the research will give informed consent after being informed
about the research objectives. To this end, an informed consent form will be
used that explains what the research is about, what is expected from the
research participants, and whether and how they will be compensated for
participation.
The informed consent forms will be drafted in understandable terms to the
older participants. Additionally, there always needs to be the possibility for
participants to ask for clarifications regarding the content of the informed
consent form. Importantly, in line with codes of ethical conduct, participants
can always terminate their participation at any time with no negative
consequences whatsoever.
## Task 8.4 Ethical Issues Management (M1-M42)
In the work package Management CO-ADAPT includes a task on Ethical Issues
Management. The aim of this task is to monitor ethical issues, where users’
personal and potentially sensitive data are collected both explicitly and
implicitly, to ensure that the CO-ADAPT activities unfold in respect of the EU
Regulation 2016/679 (27 April 2016) and of the codes of conduct for
professionals doing research with technologies (e.g. IEEE and ACM) and human
beings (e.g. the American Psychological Association).
The deliverables D9.1-D9.5 define a set of requirements for the ethical conduct
that will be monitored by this task. In addition, a yearly presentation will
ensure training of project partners on these ethical requirements and common
ethical conduct guidelines (MS2).
The Advisory Board will be called on to comment on possible ethical issues and
to provide a set of guidelines at the beginning of the project. These
suggestions will inform the development of the adaptive systems in CO-ADAPT.
## MS2 Ethical practices and training
A milestone is foreseen to be delivered as a presentation for training all
project participants in ethical handling of data.
The training will include an overview of ethics in the CO-ADAPT DoA and of the
deliverables D9.1-D9.5 guidelines (informed consent, ethical approval, DMP,
etc.). It will also include an overview of established guidelines, for example
the APA codes of conduct for research with humans, with technologies and with
personal data.
## Local legislations
All studies will be conducted adhering to all regulatory and ethical national
and international requirements. More precisely:
**Finland** The data protection legislation of Finland and the EU, and
corresponding regulations and guidelines, are followed, as well as
instructions by the authorities responsible for each individual registry
database used in the FIOH registry study and for the study conducted by UH.
**Italy** We will comply with the GDPR and with Art. 22 of the previous
national norm (Decreto legislativo 30 giugno 2003, n. 196) regarding the
processing of health data. Indeed, Italy has not yet published its new data
protection law, although it appears to have been approved recently (“On the
8th of August 2018, the Italian Board of Ministries announced that they have
approved the Italian privacy law integrating the GDPR. The law has not yet
been published in the Official Gazette. According to the Government, the
decisions and the authorizations issued by the Italian DPA, the Garante per il
trattamento dei dati personali, under the regime prior to the GDPR, as well as
the existing Ethical Codes, will remain in place ‘to ensure continuity’ until
they are updated by the Italian DPA.” Source:
https://www.lexology.com/library/detail.aspx?g=8e76f584-b6a1-4762bb1c-86aeac143c4b).
# FAIR Principle
The CO-ADAPT project, representing a Research and Innovation Action within the
H2020 framework, has a clear focus on the development of a framework that
provides principles for a two-way adaptation in support of ageing citizens. As
such, the project’s primary objective has never been to generate datasets that
are re-usable for whichever purpose. The project’s current focus is on the
design and implementation of a working software prototype. The final stage of
the project includes an evaluation study that may result in a dataset that has
potential value outside the project. As the evaluation protocol for that study
becomes clear, we will re-visit this document to describe potential FAIR Data
Use principles.
## Making data findable, including provisions for metadata
CO-ADAPT will offer open access to results gathered throughout the project.
General awareness and wider access to the CO-ADAPT research data will be
ensured by including the repository in registries of scientific repositories.
DataCite offers access to data via Digital Object Identifier (DOI) and
metadata search, while re3data.org and Databib are the most popular registries
for digital repositories.
## Making data openly accessible
As the repositories cover the basic principles of CO-ADAPT for publishing
research data, the consortium will pursue membership to them, without
excluding new initiatives which may arise during the forthcoming years due to
the increased interest for open access to research results and the new
European policy framework for sharing and freely accessing data collected
during publicly funded research activities. As a result, the partners will
keep track of those initiatives and will try to deposit the project’s
generated data sets at repositories which ensure compliance with the relevant
proposed standards in order to be easily exchanged. Dryad and figshare can be
also used as alternative repositories. In any case, open access to data,
following appropriate licensing schemes will be ensured. CO-ADAPT will target
“gold” open access for scientific publications and has foreseen budget for
this activity. Wherever “gold” is not possible, “green” open access will be
pursued. The target is to maximize the impact on scientific excellence through
result publication in open access yet highly appreciated journals (see initial
list below). It is worth stressing that this list includes targets where CO-
ADAPT partners have already published previous results. Furthermore,
repositories for enabling “green” open access to all project publications will
be used, as well as the OpenAIRE, which provides means to promote and realise
the widespread adoption of the Open Access Policy, as set out by the ERC
Scientific Council Guidelines for Open Access and the EC Open Access pilot.
In addition, CO-ADAPT will also release a set of core libraries from CO-ADAPT
as open source, which will be part of their exploitation strategy towards wide
adoption (D3.4, D4.4, D5.5).
## Making data interoperable
Depending on the scientific field where the data set will originate from,
additional metadata standards might be used.
## Increase data re-use (through clarifying licenses)
CO-ADAPT will be implemented based on a variety of background components,
including proprietary ones. Based on these components and the effort allocated
in the project, CO-ADAPT will produce foreground, also by including
open-source (royalty-free) components.
# Subprojects specific plans
## Smart shift scheduling FIOH
#### General description of data
The data consists of quantitative registry and survey data associated to the
Finnish Public Sector (FPS) study. The registry data includes information on
the daily working hours of the employees (starting and ending times of the
work shifts), as well as information on sickness absence (without diagnosis)
as obtained from the use of shift scheduling software Titania® in the co-
operating organizations in the health and social care sector in Finland. The
survey data includes questionnaire information on areas like perceived work
ability, sleep, mental health and individual differences.
The obtained registry data of working hours consist of raw data, pre-processed
data, data analysis results, as well as managerial documents and project
deliverables. Raw data are in ASCII format (work hour register), CSV format
(health registers) and Excel format (surveys) and will be stored in SAS
format.
The data analysis results of the raw data include data averaged for each 3 and
12 months in relation to the four main dimensions of the working hours: length
(e.g. the percentage of long work shifts or work weeks), timing (e.g. the
number of night shifts), recovery and work-life interaction.
Data consistency and quality are ensured by centralized processing and storage
of the data enabling efficient curation, harmonization and integration of the
data, resulting in reliable high-quality research data. The data has been and
will be linked between registers using the Finnish personal identity codes unique
to each resident. The data will be version controlled and backed up, ensuring
its efficient storage and re-use.
#### Ethical and legal compliance
FPS data are owned by the Finnish Institute of Occupational Health (FIOH). FPS
consists of the 10-town study (PI Tuula Oksanen), hospital cohort (PI Mika
Kivimäki) and Working Hours in the Finnish Public Sector study (WHFPS, PI
Mikko Härmä). The FPS study has been approved by The Ethics Committee of the
Hospital District of Helsinki and Uusimaa (HUS 1210/2016). We will comply with
the protocol by removing personal information (personal identification code)
from the data before sharing it with researchers to ensure privacy protection.
FIOH has written contracts with all the FPS and other organizations to agree
on the use of obtained data, co-operation and feedback in this project.
Results of the COADAPT project will be presented in statistical form so that
no individual can be identified indirectly from published reports.
Ethical issues are considered throughout the research data life cycle. The
data includes personal and sensitive information, and therefore we will ensure
privacy protection and data pseudonymisation. Data quality control ensures
that no data are accidentally changed and that the accuracy of data is
maintained over their entire life cycle. We take into account the effects of
the new Finnish data protection act (based on the EU’s General Data Protection
Regulation) on data security, personal data processing and _anonymisation_ .
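For illustration only, pseudonymisation of identity codes before sharing could use a keyed hash, so that records remain linkable across registers without exposing the code itself; the secret key, the example code and the approach are assumptions, not FIOH's documented procedure.

```python
# Hypothetical sketch: replace a personal identity code with a keyed hash.
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-controller-only"     # never shared with researchers

def pseudonymise(identity_code: str) -> str:
    """Deterministic pseudonym: same code -> same hash, enabling register linkage."""
    return hmac.new(SECRET_KEY, identity_code.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("131052-308T"))                   # fictitious example identity code
```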
#### Documentation and metadata
The datasets in the Finnish Public Sector study (FPS) and Working hours in the
Finnish Public Sector (WHFPS) (for the register-based working hours data) are
documented as standardized metadata (person file description) on the project
websites.
## Proactive entity recommender
**Short description** : This activity is aimed at developing intelligent
recommendations of useful entities (people, documents, topics, etc.) utilising
easily accessible interfaces that minimise for example keyboard input (Vuong
et al 2017). A user's digital activities are continuously monitored by
capturing all content on a user's screen using optical character recognition.
This includes all applications and services being used and relies on each
individual user's computer usage, such as their Web browsing, emails, instant
messaging, and word processing. In addition, microphone and camera are used to
capture entities in the real world as well. Unsupervised machine learning and
topic modelling is then applied to detect the user's topical activity context
to retrieve information. Based on this autonomously learned user model, the
system proactively retrieves information entities directly related to what the
user is doing as observed on the screen.
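As a minimal sketch of this step, the following applies unsupervised topic modelling (LDA) to a few invented screen-text snippets using scikit-learn; the library choice and the snippets are assumptions, since the project does not name its tooling.

```python
# Hypothetical illustration of detecting topical activity context from OCR text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

screen_texts = [
    "meeting agenda budget review slides",
    "python code unit test refactor",
    "email travel booking itinerary",
]  # invented OCR snippets from the activity logs

X = CountVectorizer(stop_words="english").fit_transform(screen_texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topic_weights = lda.transform(X)     # per-snippet topical context used for retrieval
```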
_**Digital activity logs** _
The digital activity logs will be recorded in a similar way to the operating
system event logs that commonly exist in any operating system. Logs include
the following information.
* Text read by the user: digital activity monitoring software captures any information changes on a device’s screen (laptop or smartphone), or waits 2 seconds after any user keystrokes, touches, or mouse behavior (clicks/taps, scrolls, drags, gestures) and then takes a screenshot. Each screenshot is converted into text using Tesseract 4.0, an open-source Optical Character Recognition (OCR) engine. After text conversion, screenshots are deleted to conserve the device’s disk space (see the sketch after this list).
* Operating system logs: the time at which the text is read, the title of the active document, the directory/URL of the document, and the active application will be logged.
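A minimal sketch of the capture step described in the first bullet (screenshot, OCR via Tesseract, deletion of the image), assuming the `mss` and `pytesseract` Python packages and a local Tesseract installation; the project's actual monitoring software is not specified here.

```python
# Hypothetical illustration of the screenshot -> OCR -> delete pipeline.
# Assumes: pip install mss pytesseract pillow, plus a Tesseract install.
import os

import mss
import pytesseract
from PIL import Image

with mss.mss() as grabber:
    shot_path = grabber.shot(output="frame.png")   # capture the primary screen

text = pytesseract.image_to_string(Image.open(shot_path))
os.remove(shot_path)                               # screenshot is not retained
print(text[:200])                                  # extracted text feeds the activity log
```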
_**Voice activity logs** _
The voice activity logs will be recorded based on speech recognition
technology.
* Software captures information from the device’s microphone; audio streams will be converted to textual logs.
_**Detection of entities in the real world** _
Using computer vision technology, entities in the real world will be
recognised, for example through OCR.
#### Relevance Assessments
We collect relevance assessments on the entity information that is recommended
during the task. The participants rate the entity information (keywords,
documents, applications) on a scale from 0 to 3 (0: not relevant, 1: low
relevance, 2: medium relevance, 3: high relevance). Participants assess the
relevance of recommendations in an Excel file with 3 fields (word ID, plain
text words, relevance score). This file is automatically generated after the
participant finishes a task. The plain text words column will be manually
removed by participants before handing the Excel file over to the
experimenter.
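A minimal sketch of this handover step, assuming pandas (with openpyxl) and illustrative file and column names:

```python
# Hypothetical illustration: drop the plain-text column so that only the
# word ID and relevance score reach the experimenter.
import pandas as pd

sheet = pd.read_excel("assessments_p01.xlsx")   # columns: word ID, plain text words, relevance score
sheet.drop(columns=["plain text words"]).to_excel(
    "assessments_p01_handover.xlsx", index=False
)
```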
#### Data minimization, security and management
_Data minimization_ : We minimize the amount of data processed to what is
absolutely necessary for carrying out the purpose of the research. We avoid
storing and archiving personal data, such as plain texts of the digital and
voice activity logs.
_Data security_ : We provide a level of security that is appropriate to the
risks represented by the processing of personal data (both digital and voice
activity logs). Personal data collected are stored on a local hard drive
during the data collection phase. We use encryption to ensure that personal
data would be unintelligible even if data breaches occur. We also minimize the
risk of any data breaches on users’ personal computers by helping them fulfil
basic security measures and by using the secure infrastructure of the
University of Helsinki during the lab tests.
_Data management_ : All interaction logs during the lab tests and relevance
assessment sheets collected and archived for the purpose of evaluating the
system will be anonymized. Users are identified by 5-digit codes chosen by
themselves. Identifiable information about users in the logs and relevance
assessment sheets will be removed before they are handed over to the
researcher in charge. Signed informed consent sheets will never be digitised
and will be kept in a locked room; anonymized logs and relevance assessment
sheets are stored on a secured server located at the University of Helsinki.
We expect no risk beyond the risks users encounter in their normal life,
except the potential data-breach risks mentioned above, which can be minimized
by advising users to install reliable antivirus software and to avoid
installing new software during the study.
Additional information that cannot be determined at this point, such as server
setups, formats and security measures, will be provided in later versions of
this plan.
## Adaptive Assembly line with co-bots
**Short description:** the activity comprises the introduction of an adaptive
workstation paired with a collaborative robotic arm (i.e., a cobot) that will
support the employees in the unfolding of their regular working tasks. More
specifically, the adaptive assembly workstation will adjust its features to
the physical and perceptual characteristics of each specific user, e.g.,
height and level of brightness. Furthermore, the workstation will assist the
worker as s/he is performing her/his usual activities. Indeed, several
implicit metrics (e.g., pupil dilation, blink duration and rate) will be
continuously and unobtrusively acquired to monitor the user’s workload by
means of wearable devices (e.g., eye-tracking glasses, smart T-shirts/chest
bands, surface electromyography (EMG)). By doing so, the workstation will
detect transient changes in the employee’s status and will adjust its
operation accordingly and in real time, so as to support her/him. For
instance, if the system senses that the user’s cognitive workload or stress
level has overpassed a given threshold, it would activate a ‘light guidance’
indicating to the employee the next action to accomplish, or it would slow
down the workflow speed. In addition, the cobot should assist employees in
repetitive tasks, e.g., handing over the components to be assembled, thereby
relieving their workload. Taken together, such interventions are expected to
reduce the overall level of stress and to positively impact well-being and
satisfaction. Overall, the targeted working activities will be video-recorded
in order to allow a subsequent computer-supported video analysis to
investigate how and to what extent the employee’s working practices change as
a consequence of the cobot introduction. The working experience will also be
assessed through self-reported metrics, i.e., questionnaires and interviews.
**Data collected** : Overall, several metrics will be gathered in order to
accomplish the planned adaptations in the work system: physical
characteristics of the workers (e.g., height), measures of cognitive workload
(i.e., pupil dilation, blink duration and rate, saccades amplitude and
duration), indices of stress (i.e., heart rate and heart rate variability,
prolonged muscle contraction as well as reduction in the frequency of
decontraction). Part of the measures are collected in order to assess the
effect of the adaptations in terms of: efficiency (system log-files, time on
tasks, errors, decrease in accidents); perceived well-being, safety, security,
and satisfaction (self-reported measures). The actual working practices
observed before the introduction of the cobot will be investigated using
computer-supported video analysis, which will make it possible to understand
both quantitative aspects of the work (e.g., frequency of specific behaviors,
time required to accomplish specific tasks) and qualitative aspects of the
working activities (e.g., need to use special equipment). A subsequent
computer-supported observation, following the cobot introduction, will make it
possible to understand the changes in the working practices brought about by
the robot.
Pupil dilation, blink duration and rate, saccades amplitude and duration will
be collected utilizing eye-tracking glasses (i.e., 120 Hz Pupil Labs). The
Pupil Capture software will record the raw eye-tracking data, while Pupil
Player will allow exporting the above-mentioned eye-tracking metrics.
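A minimal sketch of summarising such an export follows; the file and column names ('diameter', 'confidence') follow the Pupil Labs export convention but are assumptions here, as is the confidence threshold.

```python
# Hypothetical illustration: average pupil diameter from a Pupil Player export.
import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")          # assumed export file
valid = pupil[pupil["confidence"] > 0.8]            # keep high-confidence samples only
print(f"mean pupil diameter: {valid['diameter'].mean():.2f}")
```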
A smart T-shirt/chest band (i.e., Smartex) will be utilized to record heart
rate and heart rate variability. Furthermore, surface electrodes (i.e.,
ProComp Infiniti 5) will be considered to monitor electromyographic activity.
Dedicated software will be utilized to record and export the data (e.g.,
Biograph Infiniti).
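For illustration, heart rate and heart rate variability can be derived from successive beat-to-beat (RR) intervals. The sketch below computes RMSSD, a common time-domain HRV index, over invented interval values; it does not reflect the Smartex or Biograph Infiniti export formats.

```python
# Hypothetical illustration: heart rate and RMSSD from RR intervals (ms).
import math

rr_ms = [812, 798, 820, 805, 790, 815]                   # invented beat-to-beat intervals
diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]        # successive differences
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
heart_rate = 60000 / (sum(rr_ms) / len(rr_ms))           # mean HR in beats per minute
print(f"HR {heart_rate:.1f} bpm, RMSSD {rmssd:.1f} ms")
```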
The software The Observer by Noldus will be utilized for the video analysis of
the operator-cobot interactions.
The measures collected are thus motivated by the multifold goal of the
activity, that is: evaluating the performance of the user’s interaction with
the adaptive assembly workstation; identifying the most suitable and
informative psychophysiological and cognitive indices upon which the adaptive
system should rely; and, finally, comprehensively investigating the workers’
perceptions regarding their own overall experience.
Additional information regarding the security measures that cannot be
determined at this point will be found in the following subsection.
#### Data security and management
Data security: the level of security will be appropriate to the sensitivity of
the collected data (i.e., implicit psychophysiological and cognitive metrics,
self-reported evaluations, interview recordings and transcriptions,
video-recordings) and the associated risks. All the data in their raw format,
either digital or not, and in their processed versions will be archived in a
dedicated location at the premises of the HIT Center, where only the
researchers directly involved in the project will have access. The data will
be anonymized, meaning that each user will be assigned a pseudonym (e.g., P01)
unrelated to his/her actual identity, to protect his/her privacy.
Data management: Before starting the activity, all participants will receive a
full and detailed explanation regarding the data that will be collected, the
modalities that will be employed and the possible risks. To maximize the
understandability of the information, care will be taken to avoid technical
jargon, and participants will be encouraged to ask the researchers any
questions. In addition, they will be provided with an informed consent form
describing all the details pertaining to the data collection, storage and
management. The aim of also collecting and exploiting implicit personal data
(e.g., psychophysiological metrics) by means of wearable devices will be
clarified in order to avoid any possibility of privacy and ethics violations,
insofar as participants have reduced control over this type of information.
The informed consents, containing the personal data of the participants, will
never be converted into digital format and will be kept in secure locations.
The data collected using paper and pencil surveys will be converted into
electronic spreadsheets, assigning an encrypted code to each participant, to
allow their processing. Similarly, qualitative data pertaining to the
interviews will be transcribed to allow for thematic analysis.
## CO-ADAPT conversational agent
**Short description**: The CO-ADAPT conversational agent supports the
communicative engagement between ageing workers and digital professionals
(e.g. counsellors, psychotherapists) through AI-based conversational
technology. The conversational agent will support ageing workers and digital
therapists in coping with and assessing states of stress or anxiety as they go
through major life changes at home and at work. The conversational
technologies will be able to learn from different streams of signals: implicit
physiological and explicit linguistic signals. Conversational agents will be
personalized to deliver therapies to ageing workers, monitor compliance and
support digital therapy.
The conversational agent will infer its actions and behaviour (linguistic or
multimodal) from the interaction signals with users and from the behavioural
knowledge base. The knowledge base will model and encode the possible
relationships between emotional patterns and factors of change (e.g. life
events) and resistance to change, and the role of persuasion in that process.
The framework will manage data in compliance with the processes and API that
will be established in the data collection and analytics work package (WP4).
**Data collected** :
The knowledge base to be used to feed the conversational agent includes
physiological signals - recorded by wearable sensors, and behavioural data.
According to GDPR definition, in CO-ADAPT we will deal with sensitive personal
data, including biometric data. Sensitive personal data will be held
separately from other personal data, and both categories of data will be
pseudonymised by replacing identifying information with artificial
identifiers. Pseudonymised data will be dealt with by the CO-ADAPT partners
IDEGO and UNITN. Pseudonymised individual data of the subjects participating
in the data collection will be kept in a separate file and a locked cabinet by
IDEGO. UNITN will receive data in pseudonymised format, and will store such
data using technical measures that prevent the re-identification of data
subjects. Security incidents, if any, will be immediately notified by UNITN
researchers to their DPO (see Section 6). Data in the pseudonymised format
will be kept and dealt with for the purposes of the CO-ADAPT project. After
the completion
of the project, data may be used by UNITN for further research activities, and
will not be transferred to third parties outside of an agreement that takes
into account the GDPR and the National regulations for the application of such
legislation. In particular, the data will not be transferred to research or
industrial organizations outside the European Union.
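A minimal sketch of this separation, with illustrative field names and identifier scheme (assumptions, not the project's implementation): identifying fields are replaced by an artificial identifier, the lookup table stays with one partner, and only the pseudonymised record is passed on.

```python
# Hypothetical illustration of keeping identifying data apart from the
# pseudonymised record that is shared between partners.
import uuid

record = {"name": "Jane Doe", "heart_rate": 72, "notes": "session 3"}

pid = uuid.uuid4().hex[:8]                      # artificial identifier
lookup = {pid: record["name"]}                  # kept separately, under lock
shared = {"pid": pid,                           # what the receiving partner gets
          "heart_rate": record["heart_rate"],
          "notes": record["notes"]}
```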
# Data Protection Officers
Five of our partners, who process and/or store large amounts of personal data,
have appointed DPOs. They will be in charge of monitoring performance and
providing advice on the impact of data protection efforts. In addition, they
will maintain comprehensive records of all data processing activities.
_Table 1. DPOs and contact details_
<table>
<tr>
<th>
**Partner**
</th>
<th>
**DPO**
</th>
<th>
**Details**
</th>
<th>
**E-mail**
</th> </tr>
<tr>
<td>
**FIOH**
</td>
<td>
Specialized researcher Simo
Virtanen
</td>
<td>
Topeliuksenkatu 41B,
00250 Helsinki,
Tel. +358 43 825 6330
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**UH**
</td>
<td>
Professor Giulio Jacucci
</td>
<td>
Department of
Computer Science
P.O. Box 68 (Gustaf
Hällströmin katu 2b)
FI-00014 University of
Helsinki,
Finland,
Tel. +358 29 415 1153
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**UNITN**
</td>
<td>
Anti-corruption and
Transparency
Officer Fiorenzo Tomaselli
</td>
<td>
Via Verdi, 8 - 38122
Trento, Tel. 0461 281114
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**UNIPD**
</td>
<td>
Postdoctoral
Researcher Valeria Orso
</td>
<td>
Human Inspired
Technology Research Centre
Via Luzzatti, 4 - 35121 Padova, Italy.
Tel. +39 049 827 5796
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**AALTO**
</td>
<td>
Research Assistant
Zeinab Rezaei Yousefi
</td>
<td>
Department of
Computer Science
Aalto University,
Konemiehentie 2,
02150 Espoo, (P.O.Box
15400, FI-00076 Aalto)
Finland ,
Tel. +358 46 951 8283
</td>
<td>
[email protected]
</td> </tr> </table>
1517_MOVINGRAIL_826347.md
# Introduction
## Project summary
MOVINGRAIL (‘MOving block and VIrtual coupling New Generations of RAIL
signalling’) is a
Shift2Rail project addressing the topic ‘Analysis for Moving Block and
implementation of Virtual Coupling concept’. The aims of MOVINGRAIL are
* To identify and assess the most suitable methodology in order to test and bring into service Moving or Fixed Virtual Block contributing to the definition of the Operational Procedures and highlighting the differences with the traditional signalling systems.
* To analyse the potential business and market response to the application of the Virtual Coupling concept, identifying pros and cons in terms of performance and cost; and to assess the needs and work done on Train-to-Train (T2T) communication in both IP1 and IP2 and propose convergence of technical communication solution(s).
## Purpose of this document
This document has been prepared to provide the Data Management Plan (DMP)
which addresses the way research data is managed in the MOVINGRAIL project
within the Open Research Data Pilot (ORD Pilot). The ORD pilot aims to improve
and maximise access and re-use of research data generated by Horizon 2020
projects, considering the need to balance openness and protection of sensitive
information, commercialisation and Intellectual Property Rights (IPR), privacy
concerns, as well as questions of data management and preservation.
DMPs are a key element for good data management, as they describe the
management of the data to be collected, processed and published during a
research project, creating awareness about research data management topics
such as data storage, backup, data access, data sharing, archiving and licensing.
MOVINGRAIL hereby states the adherence to the FAIR data principles, whereby
research data is made Findable, Accessible, Interoperable and Re-usable for
the community, responsibly considering possible data restrictions on public
sharing.
It is acknowledged that a DMP is a living document and, therefore, as the
implementation of the project progresses and significant changes occur, this
plan is updated accordingly on a finer level of granularity at the end of each
project period (M12 and M24).
## Context
The present document constitutes the Deliverable D5.1 “Data Management Plan”
in the framework of the TD2.3 of IP2 (Moving Block) task 2.3.1 (Moving Block
Operational and
Engineering Rules) and task 2.3.6 (Test Specifications), as well as the TD2.8
of IP2 (Virtual
Coupling) task 2.8.3 (Feasibility Analysis) and task 2.8.6 (Impact Analysis).
# Data Summary
MOVINGRAIL collects various kinds of data:
1. Semantic data
2. Stated preference data from surveys and workshops
3. Simulation data.
The responsibility to define and describe all non-generic data sets specific
to an individual work package is with the WP leaders. The WP leaders formally
review and update the data sets related to their WP. All
modifications/additions to the data sets are provided to the MOVINGRAIL
Coordinator (TUD) for inclusion in the DMP.
The table below shows the various data collected with the purpose of the data
collection and its relation to the objective of the project.
<table>
<tr>
<th>
**Work Package**
</th>
<th>
**Data**
</th> </tr>
<tr>
<td>
**WP 1 (TUBS)**
</td>
<td>
**Semantic data of railway signalling**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
The data supports the operations analysis of train centric signalling
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
Excel and PDF
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
Semantic data from X2RAIL-1
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
X2RAIL-1 and own work
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 100 MB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
Useful for anyone working on railway signalling engineering and operations
</td> </tr>
<tr>
<td>
**WP 1 (TUBS)**
</td>
<td>
**Glossary**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
The data supports the operations analysis and terminology for describing
various scenarios.
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
mysql, php, flatfile, pdf, epub, html
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
Various literature as specified in references
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
Literature and own work
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 100 MB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
Useful for anyone working on railway signalling engineering and operations,
accessible via _https://glossary.ivev.bau.tu-bs.de/tikiindex.php_ and
_www.movingrail.eu_ under a Creative Commons Attribution 4.0 International
License (CC BY 4.0).
</td> </tr>
<tr>
<td>
**WP 1 (TUBS)**
</td>
<td>
**Symbol library**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
This TikZ library is a toolbox of symbols geared primarily towards creating
track schematic for either research or educational purposes. It provides a
TikZ frontend to some of the symbols which may be needed to describe
situations and layouts in railway operation.
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
TeX, TikZ, pdf, png
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
own work
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 50 MB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
Useful for anyone working on railway signalling engineering and operations,
accessible via CTAN (Comprehensive TEX Archive Network) under an ISC license
at _https://ctan.org/pkg/tikz-trackschematic_
</td> </tr> </table>
<table>
<tr>
<th>
**WP 2 (UoB)**
</th>
<th>
**Stakeholders requirements data**
</th> </tr>
<tr>
<td>
Purpose
</td>
<td>
The data supports the identification of gaps in ETCS Level 3 testing, current
issues and requirements needed for an effective system testing, validation and
certification.
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
PDF questionnaires and PDF survey results
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
The data derives from questionnaires made originally on paper in a workshop
and then aggregated and anonymized.
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 100 MB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
The data can be used for developing operational concepts and testing
strategies for the verification and validation of moving block signalling
systems that draws on best practice and meets all stakeholder requirements. It
is available at
_https://beardatashare.bham.ac.uk/getlink/fiNYac39GLAxPfS7s5WWRvi_
_9/_
</td> </tr>
<tr>
<td>
**WP 3 (PARK)**
</td>
<td>
**Stakeholders requirements data**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
The data will establish and refine the communications requirements for Virtual
Coupling and the performance of communications architectures and equipment,
including developments relating to autonomously driven cars.
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
It is expected that the primary new data used in WP3 will be in the form of
textual requirements from stakeholders via questionnaires and workshops,
anonymized in accordance with GDPR.
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
It is expected that data will be received and shared from the complementary
projects (CONNECTA-2, X2RAIL-3) which will be in accordance with the
Collaboration Agreements. In addition, we will reuse data from the public
domain, and other data made available, from ASTRail, CONNECTA-1, ETALON,
IN2RAIL, MISTRAL, Roll2Rail, Safe4Rail1, Safe4Rail-2, X2RAIL-1-WP3, and
X2RAIL-2-WP3/4/5 projects are also expected to be of use to MOVINGRAIL WP3.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
Original research, industry, preceding and collaborating Shift2Rail projects.
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 1 GB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
The data will be useful to the signalling industry to identify virtual
coupling technical communication requirements and solutions; review previous
studies and projects into virtual coupling; analyse solutions against
requirements for virtual coupling; investigate the application, solutions and
dynamics of automated car driving; and evaluate the applicability of
autonomous vehicles to the railway field.
</td> </tr>
<tr>
<td>
**WP 3 (PARK)**
</td>
<td>
**Requirements data**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
The data will establish and refine the communications requirements for Virtual
Coupling and the performance of communications architectures and equipment,
including developments relating to autonomously driven cars.
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
Statistical performance data on communications systems.
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
It is expected that performance data will be subject to commercial
confidentiality.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
Original research, industry, preceding and collaborating Shift2Rail projects.
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 100 MB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
The data will be useful to the signalling industry to identify virtual
coupling technical communication requirements and solutions.
</td> </tr>
<tr>
<td>
**WP 4 (TUD)**
</td>
<td>
**Stated preference data from surveys and workshops**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
The data supports the assessment of market potentials and impact assessment of
Virtual Coupling for different railway segments
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
Surveys from railway experts to gather feedback and opinions about actual
technological and operational feasibility of Virtual Coupling
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
Part of the information about operational scenarios from X2RAIL-3 WP6 & 7 will
be reused to build surveys for railway experts.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
The data will derive from surveys made originally on paper and then
electronically transferred to an Access database
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 1 GB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
The data produced in WP4 will be useful to railway industry stakeholders,
academic researchers to assess feasibility and multidimensional impacts of
Virtual Coupling as well as to make predictions/plans about development and
implementation plans for such a technology. Furthermore it is useful to other
experts of the broader transport industry and statisticians to estimate
environmental repercussions that Virtual Coupling could have by potentially
attracting more passengers towards the railways.
</td> </tr>
<tr>
<td>
**WP 4 (TUD)**
</td>
<td>
**Simulation data**
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
Investigate applicability and impacts on safety, costs, and performance of
Virtual Coupling
</td> </tr>
<tr>
<td>
Types and format
</td>
<td>
Simulation data will have different formats, specifically .xlsm (Excel), .csv
files, RailML, InfraAtlas and plain text files
</td> </tr>
<tr>
<td>
Reuse of existing data
</td>
<td>
Input data from railway traffic simulation models already built during other
national and international projects (e.g. ON-TIME) are expected to be re-used.
</td> </tr>
<tr>
<td>
Origin of data
</td>
<td>
Input and output of simulation models and multi-criteria analyses.
</td> </tr>
<tr>
<td>
Expected size of data
</td>
<td>
Less than 10 GB
</td> </tr>
<tr>
<td>
Data utility
</td>
<td>
The data produced in WP4 will be useful to railway industry stakeholders,
academic researchers to assess feasibility and multidimensional impacts of
Virtual Coupling as well as to make predictions/plans about development and
implementation plans for such a technology. Furthermore it is useful to other
experts of the broader transport industry and statisticians to estimate
environmental repercussions that Virtual Coupling could have by potentially
attracting more passengers towards the railways.
</td> </tr> </table>
# FAIR data
## Making data findable, including provisions for metadata
The data will be securely stored at the 4TU.Centre for Research Data, which is
a Trusted Digital Repository for technical-scientific research data in the
Netherlands that complies fully with the H2020 requirements of making data
findable, accessible, interoperable and reusable (FAIR). See
_https://researchdata.4tu.nl/en/home/_
Data collections, processed data and data representations will be stored for
15 years after the end of the project. Research data that is not privacy
sensitive will be open access available through the data repository mentioned
above as far as this is compatible with and does not infringe IP requirements
of the partners. These data, including the metadata that ensures that others
can find and use the data, will be stored and made available in the TU Delft
data archive 4TU.ResearchData.
The Dublin Core ( _www.dublincore.org_ ) metadata standard will be adopted and
further information on the data will be delivered in file headers or in
separate documentation files. This also applies to documentation files (e.g.
reports) and other types of data including experimental data and input/output
tabular data for code training, testing and validation.
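For illustration, a Dublin Core record for an archived dataset could be generated with the Python standard library as below; all field values, including the DOI, are placeholders.

```python
# Hypothetical sketch of a minimal Dublin Core metadata record.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("metadata")
for field, value in [
    ("title", "MOVINGRAIL WP4 simulation outputs"),      # placeholder title
    ("creator", "Delft University of Technology"),
    ("subject", "railway signalling; virtual coupling"),
    ("identifier", "doi:10.4121/XXXXXXX"),               # placeholder DOI
]:
    ET.SubElement(record, f"{{{DC}}}{field}").text = value

print(ET.tostring(record, encoding="unicode"))
```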
## Making data openly accessible
Once scientific journal publications are published (in Open Access),
publishable data (according to the Consortium Agreement) will be publicly
archived for the long term via the 4TU.Centre for Research Data archive
(documentation, experimental data and tabular data), following their metadata
standards (Dublin Core). TUD researchers can upload up to 1 TB of data per
year free of charge. This should suffice for the data that will be archived
for the long term. The 4TU.Centre for Research Data Archive ensures data will
be well-preserved and findable in the long term (each uploaded dataset is
given a unique persistent digital identifier).
Open and standard formats will be preferred for archived data files (e.g.,
.csv, .txt). Proper documentation files will be delivered together with the
datasets in order to facilitate reuse of data.
## Making data interoperable
All publishable data will be delivered in open and standard data formats.
Discipline specific metadata is currently under discussion. If applicable,
metadata will be delivered in XML format together with the data (depending on
the chosen format). Proper documentation (README) files will be delivered
accordingly. Tabular data will be archived with informative and explanatory
headers to facilitate data re-use and interoperability.
## Increase data re-use (through clarifying licences)
All data that cannot be disclosed will be kept at the respective institutional
server for the long term (at least 4 years after the end of the project);
accessed only by team members within the institution, for auditing and
validation purposes. It is also acknowledged that, for some of the outcomes,
conditions for exploitation as stated in the Consortium Agreement may apply.
Since the results from this project will make a strong impact in the railway
sector, we find it extremely important to share the data responsibly. Hence,
datasets that will be open to the public will be released along with the
journal scientific publications after proper discussion with partners. The
datasets
will be published via repositories such as the 4TU.Centre for Research Data
Archive. In the same way, and in order to motivate re-use of data, the journal
articles associated to these datasets will be published in open access and/or
self-archived on the MOVINGRAIL website and subject repositories, following
the publisher’s self-archiving policies.
# Allocation of resources
TUD researchers can upload up to 1 TB of data to the 4TU.Centre for Research
Data Archive (per year) free of charge. Also, the storage capacity and
privately accessed drives managed by each partner are already available.
For internal document sharing between partners we make use of SURFdrive, a
password protected cloud storage service. Each TUD staff member may use
SURFdrive (100 GB storage, access via institutional account). The SURFdrive is
shared between all MOVINGRAIL partners.
The WP leaders are in charge of the management of the data from their work
package.
<table>
<tr>
<th>
**Work package**
</th>
<th>
**Responsible partner**
</th> </tr>
<tr>
<td>
**WP 1**
</td>
<td>
TUBS
</td> </tr>
<tr>
<td>
**WP 2**
</td>
<td>
UoB
</td> </tr>
<tr>
<td>
**WP 3**
</td>
<td>
PARK
</td> </tr>
<tr>
<td>
**WP 4**
</td>
<td>
TUD
</td> </tr> </table>
# Data security
Some data will be processed on the work laptops of research team members, only
when allowed. Master copies will be kept on the drives of each respective
institution. The IT departments of each institution will maintain the data
with regard to backups (redundancy) and secure storage (access restricted to
team members). Only team members within each institution will have access to
the data during the research project; such data access will be set up by the
respective IT departments. Data that will remain closed to the public will be
archived on each partner's servers for at least 4 years after the end of the
project.
SURFdrive will be used for temporary data storage and for data sharing among
the different partners, coordinated by TUD (coordinator).
# Ethical aspects
There are no ethical issues that have an impact on data sharing.
It is important to mention that, in case ethics-related questions or issues
arise during the project, these will be reported to the scientific coordinator
and discussed accordingly among team members. Additional advice can be sought
from the Human Research Ethics Committee of TUD (at [email protected]).
# Other
MOVINGRAIL will make use of the TUD Research Data Framework Policy, which can
be found via
_https://www.tudelft.nl/en/2018/library/researchdatamanagement/tu-delft-research-dataframework-policy-published/_
1519_INITIO_828779.md
2. Chiral nanostructures: detailed written and graphical descriptions of the synthetic protocols for nanostructure preparations. Typically, in Office and ChemDraw file (or equivalent) formats. CIF files for the description of crystal structure determinations
3. Thin film depositions: Office files for protocols; typically, in CSV, Origin or Excel files. Characterizations of the films involve photos taken by camera, optical microscope, scanning electron microscope, and transmission electron microscope. These involve image files (e.g. BMP, TIFF, JPG ...) and video files (e.g. MP4, AVI, MOV...)
4. Sensor array: CAD files, with defined schemas, shapes and dimensions of the prototypes’ parts (e.g. mechanical holders, transducers, microfluidic system, data storage, etc.) and their integration.
5. Measurements data: Frequency data and photoluminescence spectra. Typically, in CSV, Origin or Excel files. These can include image files (e.g. BMP, TIFF, JPG ...) and video files (e.g. MP4, AVI, MOV...). Data analyses are performed in MATLAB.
4. **Specify if existing data is being re-used (if any)**
No data, other than expertise from partners’ background (e.g. chiral
receptors, nanostructures and chemical sensors produced in the past), is being
re-used.
5. **Specify the origin of the data**
All the data generated will be the product of the research carried out by the
partners in the framework of the INITIO project.
6. **State the expected size of the data (if known)**

**Type I: Design and fabrication details**

* Synthetic protocols: 1 MB – 100 MB per information file
* Chiral nanostructures: 10 MB – 100 MB per information file
* Thin film depositions: 1 MB – 100 MB per process
* Sensor arrays: 10 MB – 1 GB per file
* Measurements data: 10 MB – 1 GB per file; 1 MB – 1 GB per image/video
* Data analysis: 1 MB – 1 GB per experiment
**1.7 Outline the data utility: to whom will it be useful**
The data will be used for internal validation of the processes, benchmarking
of the performances of the prototypes, and research on metrology and medical
applications.
It may also be useful for research institutions and companies working in the
fields of chemical sensors and environmental control, either for a better
understanding of the development and its performance or for benchmarking and
reproducing the results.
# FAIR data
**3.1 Making data findable, including provisions for metadata:**
**3.1.1 Outline the discoverability of data (metadata provision)**
Usually, the data will be self-documented. When uploaded to public
repositories (e.g. the European OpenAIRE repository), metadata may accompany
it, to be defined in further versions of the DMP.
**3.1.2 Outline the identifiability of data and refer to standard
identification mechanism. Do you make use of persistent and unique identifiers
such as Digital Object Identifiers?**
To be defined in further versions of the DMP, when the public repository
system is fully defined.
**3.1.3 Outline naming conventions used.**
To be defined in further versions of the DMP, when the public repository
system will be fully defined. As a general rule, it should include information
related to the project, partner generating the data, serial number or date and
description of the dataset.
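By way of illustration, a hypothetical helper applying such a rule could look as follows (the actual convention remains to be agreed by the consortium):

```python
from datetime import date

def dataset_name(project: str, partner: str, description: str,
                 serial: int, when: date) -> str:
    """Compose a dataset name carrying the project, the partner generating
    the data, a serial number and date, and a short description
    (illustrative only; not an agreed INITIO convention)."""
    return f"{project}_{partner}_{when:%Y%m%d}_{serial:03d}_{description}"

# e.g. INITIO_PartnerX_20200101_001_thin-film-deposition
print(dataset_name("INITIO", "PartnerX", "thin-film-deposition", 1, date(2020, 1, 1)))
```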
**3.1.4 Outline the approach towards search keyword.**
To be defined in further versions of the DMP, when the public repository
system is fully defined.
**3.1.5 Outline the approach for clear versioning.**
Version control mechanisms should be established and documented before any
data are made openly public. During generation and collection, each partner
will follow its own internal procedures.
**3.1.6 Specify standards for metadata creation (if any). If there are no
standards in your discipline describe what metadata will be created and how**
To be defined in further versions of the DMP, when the public repository
system is fully defined. Metadata will be created manually by depositors in
the deposit form at the repository.
**3.2 Making data openly accessible:**
**3.2.1 Specify which data will be made openly available? If some data is kept
closed provide rationale for doing so.**
**Type a), b), c), e) data will be made openly available.** In fulfillment of
project objectives, the consortium oversees any disclosure of scientific and
technical data made by the partners, in the form of summaries, conference
contributions, paper publications, online communications, etc. The content of
the approved communications is considered not confidential, and its
communication is deemed beneficial for the achievement of the project
objectives. Consistent with this communication protocol, the consortium will
make public all the original datasets of Type **a), b), c), e)** used to
prepare these public communications.
**Type II data will only be made partially openly available.** This is
necessary to protect the technological assets developed in the project. Any
public disclosure of the fabrication details of the sensor devices would
jeopardize the chances of exploiting the technology among the project
partners, in particular the SMEs participating in the project, or with third
parties.
**3.2.2 Specify how the data will be made available.**
Data will be made openly available in relation to an associated open access
publication. For each publication, the associated Type II data will be filed
together in a container format (e.g. zip or tar). Information relating each
data set to the corresponding figure, table or results presented in the
publication will be provided.
Data will be made openly available following the same time rules that apply to
the associated open access publication, e.g. in terms of timeliness and
embargo.
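A minimal sketch of such packaging (all file names and manifest entries below are hypothetical):

```python
import zipfile
from pathlib import Path

# Create placeholder data files so the sketch is self-contained.
Path("figure1.csv").write_text("frequency_hz,response\n100,0.95\n")
Path("table2.csv").write_text("sensor_id,calibration\nS1,1.02\n")

# Bundle the datasets underlying one publication into a single container,
# with a manifest relating each file to the figure or table it supports.
manifest = (
    "figure1.csv -> Figure 1 (frequency response)\n"
    "table2.csv  -> Table 2 (sensor calibration)\n"
)

with zipfile.ZipFile("publication_data.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("MANIFEST.txt", manifest)
    z.write("figure1.csv")
    z.write("table2.csv")
```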
**3.2.3 Specify what methods or software tools are needed to access the data?
Is documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
Data will be made available in standard file formats that can be accessed
with common software tools. This will include ASCII or Office files for
numeric datasets, and standard picture formats for images.
**3.2.4 Specify where the data and associated metadata, documentation and code
are deposited.**
Details about the public repository system to be used will be fully defined in
further versions of the DMP. In deciding where to store project data, the
following options will be considered, in order of priority:

* An institutional research data repository, if available
* An external data archive or repository already established in the project research domain (to preserve the data according to recognized standards)
* The European sponsored repository: Zenodo (http://zenodo.org)
* Other data repositories (searchable here: re3data http://www.re3data.org/), if the previous ones are ineligible
**3.2.5 Specify how access will be provided in case there are any
restrictions.**
Data availability is categorized at this stage in one of two ways:
* Openly Accessible Data [Type a), b), c) and e) associated to open access
publication]: open data that is shared for re-use that underpins a scientific
publication.
* Consortium Confidential data [Type d) data]: accessible to all partners within the conditions established in the Consortium Agreement.
**3.3 Making data interoperable:**
1. **Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
Does not apply for the moment.
2. **Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**
Does not apply for the moment.
**3.4 Increase data re-use (through clarifying licenses):**
1. **Specify how the data will be licensed to permit the widest reuse possible**
The Openly Accessible Datasets will be licensed, when deposited to the
repository, under an Attribution-NonCommercial license (by-nc).
2. **Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
The Openly Accessible Datasets can be re-used from the moment of the open
publication.
3. **Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.**
Each archived Openly Accessible Dataset will have its own permanent repository
ID, will be easily accessible, and can be used by any third party under the
by-nc license.
4. **Describe data quality assurance processes.**
The functioning of the repository platform guarantees the quality of the datasets.
5. **Specify the length of time for which the data will remain re-usable.**
Openly Accessible Datasets will remain re-usable after the end of the project
by anyone interested in them. Accessibility may depend on the functioning of
the repository platform, and the project partners do not assume any
responsibility after the end of the project.
# Allocation of resources
**4.1 Estimate the costs for making your data FAIR. Describe how you intend to
cover these costs.**
There are no costs associated with the described mechanisms to make the
datasets FAIR and preserved in the long term.
**4.2 Clearly identify responsibilities for data management in your project.**
The project coordinator has the ultimate responsibility for data management in
the Project. Each partner is requested to provide the necessary information to
compose the Openly Accessible Datasets in compliance with the terms defined in
the DMP agreed by the consortium.
**4.3 Describe costs and potential value of long term preservation.**
Does not apply for the moment.
# Data security
**5.1 Address data recovery as well as secure storage and transfer of
sensitive data.**
Data security will be provided in the standard terms and conditions available
in the selected repository platform.
# Ethical aspects
**6.1 To be covered in the context of the ethics review, ethics section of DoA
and ethics deliverables. Include references and related technical aspects if
not covered by the former.**
Does not apply for the moment.
# Other
**7.1 Refer to other national/funder/sectorial/departmental procedures for
data management that you are using (if any)**
The project data and documentation are also stored in the project intranet,
which is accessible to all project partners.
1521_SoFiA_828838.md
1\. Introduction
2\. DMP Strategy
3\. Internal Repository
4\. Scientific Publications
5\. Dissemination / Communication Material
6\. Research Data
7\. Computational Data
# INTRODUCTION
The overall objective of SoFiA is to develop a radically new technology to
overcome the scientific and engineering roadblocks that plague
state-of-the-art AP. The proposed radical technology involves using cells of
soap foam as miniature photocatalytic reactors. Implementation of such a
revolutionary idea in sustainable energy to a prototype stage and beyond
requires a combination of excellent research and innovation bridged
efficiently to strategic stakeholders. The present document reports the Data
Management Plan in detail, listing the foreseen activities mainly for the
first reporting period (M1-M12). The Project Coordinator is responsible for
ensuring that the different activities described herein are performed within
the consortium.
# DMP STRATEGY
The SoFiA Steering Committee will decide on the publication of documents and
data sets, and on IP protection. For all scientific publications, the Open
Access protocol will follow the Guidelines on Open Access to Scientific
Publications and Research Data in Horizon 2020.
General project data will be stored in a safe repository at Uppsala
Universitet.
The consortium will use ZENODO – a repository hosted at CERN and created
through the European Commission's OpenAIREplus project – as the central
scientific publication and data repository for the project outcomes. ZENODO
offers the following services:
* Sharing results in multiple formats including text, spreadsheets, audio, video, and images.
* Displaying and curating citable research results, and integrating them into existing reporting channels to funding agencies like the European Commission.
* Defining the different licenses and access levels.
* Assigning a Digital Object Identifier (DOI) to all publicly available uploads, making content easily and uniquely citable.
* Easy access to and reuse of shared research results.
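For illustration, a deposit through ZENODO's documented REST API (see developers.zenodo.org) could be sketched as follows; the access token, file name and metadata are placeholders, not project specifics:

```python
import requests

ACCESS_TOKEN = "..."  # placeholder; a real Zenodo token is required
params = {"access_token": ACCESS_TOKEN}

# 1. Create an empty deposition.
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params=params, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload a data file into the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("results.csv", "rb") as fp:  # placeholder file
    requests.put(f"{bucket}/results.csv", data=fp,
                 params=params).raise_for_status()

# 3. Attach minimal metadata; publishing the deposition
#    (POST to its .../actions/publish link) mints the DOI.
metadata = {"metadata": {
    "title": "SoFiA example dataset",
    "upload_type": "dataset",
    "description": "Placeholder description.",
    "creators": [{"name": "Doe, Jane"}],
}}
requests.put(deposition["links"]["self"], params=params,
             json=metadata).raise_for_status()
```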
Main modelling results will be disseminated through the European Materials
Modelling Council ensuring wide research visibility.
# INTERNAL REPOSITORY
General project data, including meeting minutes, presentation drafts, design
blueprints, part of the modelling and simulation data, videos and images, and
publication manuscripts will be stored in a safe repository at Uppsala
Universitet ( _https://myfiles.uu.se_ ). SoFiA PIs and research staff can
access this password-protected repository through the Intranet link at
_www.sofiaproject.eu_ . The consortium members will be notified by e-mail when
an important document is uploaded to the intranet.
The following schematic illustrates our internal repository organization scheme:
# SCIENTIFIC PUBLICATIONS
We will prioritize Gold or Green (with a 6-month embargo) open access
publication. At least 8 publications are estimated in journals with the
highest impact in multidisciplinary science, materials sciences,
nanotechnology, and chemistry. The Open Access publications will be
available for downloading from the SoFiA webpage ( _www.sofiaproject.eu_ ) and
from the ZENODO repository.
Archiving and preservation: Open Access, through the SoFiA public website,
will be maintained for at least 3 years after project completion. We expect
the project and the associated website and repository to go into their second
(prototype) and third (pilot) phases, envisioned to conclude by 2030.
A preliminary list of target journals for scientific publications is below:
Science, Nature, Nature Materials, Nature Photonics, Nature Nanotechnology,
Nature Energy, Advanced Materials, Journal of the American Chemical Society,
Angewandte Chemie, ACS Nano, Energy & Environmental Science, Advanced Energy
Materials.
# DISSEMINATION / COMMUNICATION MATERIAL
The dissemination and Communication material refers to the following items:
* Posters, presentations, and image and video footage, flyers, public presentations, newsletter, press releases, tutorials and researcher’s blog posts as dissemination materials at conferences, workshops, summer school and industrial fairs.
* Website, social media accounts, audiovisual material as communication outreach. The website will also promote important (public) results from related projects in AP and will administer an open researchers blog as a knowledge-sharing tool for partners and user communities. Facebook, Twitter and LinkedIn will be used to promote the website content. An impact assessment of the entire social media communications activities will be carried out by monitoring web hits, likes, followers, retweets (KPI). Videos & news bytes will also be promoted through Hassim Al-Ghaili’s science communication website which has >16M fans. The existing Wikipedia page on AP will be updated with critical results from SoFiA.
# RESEARCH DATA
The data, including metadata, needed to validate the results presented in
scientific publications (underlying data) will be made available in open
access mode after consensus at the steering committee. All data collected
and/or generated will be stored according to the following format:

## SoFiA_WPx_Tx.y/Title_Beneficiary_Date

In case the data cannot be associated with a Work Package and/or task, a
self-explanatory title will be used according to the following format:

SoFiA_Title_Beneficiary_Date
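A hypothetical helper applying this naming format (the values shown are placeholders) could look as follows:

```python
from datetime import date
from typing import Optional

def sofia_name(title: str, beneficiary: str, when: date,
               wp: Optional[int] = None, task: Optional[str] = None) -> str:
    """Compose a storage name following the format above; falls back to
    SoFiA_Title_Beneficiary_Date when no WP/task applies (illustrative)."""
    stamp = f"{title}_{beneficiary}_{when:%Y%m%d}"
    if wp is not None and task is not None:
        return f"SoFiA_WP{wp}_T{task}/{stamp}"
    return f"SoFiA_{stamp}"

print(sofia_name("FoamSpectra", "UU", date(2019, 6, 1), wp=2, task="2.1"))
# -> SoFiA_WP2_T2.1/FoamSpectra_UU_20190601
```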
# COMPUTATIONAL DATA
There are two sets of computational data. The first set will be atomistic and
mesoscopic calculations and interpretation of spectroscopic results. The data
generated in this process will be stored at our partner ICTP’s local
repository. This data will be linked/coupled with the continuum modelling
performed at POLITO. The continuum modelling results will be shared in
accordance with our Dissemination plan through ZENODO and through European
Materials Modelling Council.
1522_SoFiA_828838.md
1. Introduction
The overall objective of SoFiA is to develop a radically new technology to
overcome the scientific and engineering roadblocks that plague
state-of-the-art AP. The proposed radical technology involves using cells of
soap foam as miniature photocatalytic reactors. Implementation of such a
revolutionary idea in sustainable energy to a prototype stage and beyond
requires a combination of excellent research and innovation bridged
efficiently to strategic stakeholders. Our Dissemination and Exploitation Plan
(D&E Plan) has been designed to achieve a support ecosystem for the pilot and
tech transfer phases by the end of our 48-month project. The present document
reports the dissemination and exploitation plans in detail, listing the
foreseen activities mainly for the first reporting period (M1-M12). The
Project Coordinator is responsible for ensuring that the different activities
described herein are performed within the consortium.
2. GOALS AND OBJECTIVES
2.1 Dissemination Goals
SoFiA has three dissemination phases:
Dissemination in the first phase involves raising awareness of the project
objectives and expected results, addressed to EU-funded projects on Solar
Fuels and Artificial Photosynthesis, to peer groups at universities and
research institutes, and to relevant networks like IC5 of Mission Innovation.
The goal is to build up a project identity and establish working relations
with stakeholders and related initiatives.
The second phase is on capacity building targeting key actors who can benefit
from SoFiA deliverables.
This phase has two dissemination requirements:
1. to disseminate open accessed knowledge identified and/or developed within the project, and
2. to empower stakeholder groups to secure the critical mass for the establishment of a meaningful system of co-creation in the field of Artificial Photosynthesis.
The third phase involves exploitation actions of the project. In this phase
the key stakeholders need to be equipped with the right skills, knowledge and
understanding of SoFiA results in order to achieve targeted scientific,
societal, economic, and environmental impact.
2.2 Objectives
The table below indicates specific objectives in relation to the above-
mentioned goals.
<table>
<tr>
<th>
Goals
</th>
<th>
</th>
<th>
Objectives
</th> </tr>
<tr>
<td>
Awareness
</td>
<td>
* Consolidate inter-consortium communication and develop a robust management structure
* Develop a network of stakeholders within each country represented by Partner institutes.
* Participate in all major events related to Solar Fuels, AP, and Photo catalysis.
* Disseminate nationally and internationally the knowledge and approaches developed
* Networking with relevant projects, initiatives and networks encouraging cross fertilization of ideas.
</td> </tr>
<tr>
<td>
Capacity
Building &
Understanding
</td>
<td>
* Organize and attend workshops, summer schools on AP
* Engage with EAB member organizations and EC consultation services to get support
* Valorize the developed technology according to existing plan
</td> </tr>
<tr>
<td>
Exploitation
</td>
<td>
* Communicate the results of the capacity building process
* Generate and manage IPR
* Develop business plan, and start-up company for exploitation in phase II and III
</td> </tr> </table>
# DISSEMINATION ACTIONS
Our dissemination actions aim to establish critical mass and commitment from
strategic stakeholders through a lean and efficient plan. Due to the highly
interdisciplinary nature of the project, SoFiA deliverables will be
disseminated to diverse communities through strategic channels. Our External
Advisory Board (EAB) featuring stakeholders from industry, scientific
community, and policy experts, will be a key channel for providing guidance to
networking activities. Following are the planned actions in detail:
3.1 Dissemination to Scientific Community: Interdisciplinary results will be
communicated to diverse peer groups. Leadership of key partners in national
platforms is already established and will facilitate networking and community
building.
Our consortium features the following community leaders:

* SoFiA coordinator Leif Hammarström chairs the Swedish Consortium for Artificial Photosynthesis
* PI Erwin Reisner chairs the UK Solar Fuels Network
* PI Huib Bakker is a leading pioneer in spectroscopic techniques for probing water-based systems at the nanoscale and directs the NWO-I institute AMOLF
* TECLIS is a European pioneer in soap foam instrumentation
* MCS is a European leader in microfluidic technology
Scientific Publications: We will prioritize Gold or Green (with a 6-month
embargo) open access publication. At least 8 publications are estimated in
journals with the highest impact in multidisciplinary science, materials
sciences, nanotechnology, and chemistry. KPI: impact factor of accepted
journals, citations, author h-index.
A preliminary list of target journals for scientific publications:
Science, Nature, Nature Materials, Nature Photonics, Nature Nanotechnology,
Nature Energy, Advanced Materials, Journal of the American Chemical Society,
Angewandte Chemie, ACS Nano, Energy & Environmental Science, Advanced Energy
Materials.
Publications will be available for downloading from the project website (
_www.sofiaproject.eu_ ), and will be deposited in public repositories
including ZENODO, as described in the Data Management Plan (DMP)*.
Note: Open Access cost sharing plan will be delivered with the first updated
D&E plan report in M12
*DMP summary: SoFiA will provide open access to raw data corresponding to modelling & simulation, as well as data required to reproduce the results presented in scientific publications. These data will be stored in Zenodo (a research data repository created by CERN) ensuring their public availability and long-time preservation. Details will be provided in a data management plan (DMP), to be delivered by M6 and updated periodically (M12, M30, M42). Main modelling results will be disseminated through the European Materials Modelling Council ensuring wide research visibility.
Course material and Dissertations: Our IPR-protected findings, concept design
and selected experimental results will be included as graduate-level course
material at partner Universities. Course update plans will be included as
chapters in the final two project periodic reports in M30 & M48. Among our
researchers, we have one co-funded PhD candidate at POLITO (working on
theoretical modelling tasks and expected to graduate by 2022). His
dissertation defence will be attended by all members of the SoFiA consortium.
Conferences & Workshops: In June 2020, our partner ICTP (a UNESCO flagship
institute) will host the Conference on the Complex Interactions of Light and
Biological Matter: Experiments meet Theory. We will organize a special AP
session showcasing our project through posters and oral communications, and
will host a workshop and an information kiosk dedicated to SoFiA. A summer
school on AP for PhD students will be organized and hosted by UU (by M30).
Our start-up WI (non-beneficiary) is supported by Sofia Tech Park (STP), a
(Bulgarian flagship) EU project. With the support of WI and STP we will host a
workshop (by M46) on solar fuels with a focus on AP in Bulgaria (an
energetically poor/unsustainable region), tailored for an audience usually
remote from the EU policy dialogue. All partners will attend the most relevant
conferences (including MRS, ACS, ISF, etc.). SoFiA will also participate in
the annual EU Sustainable Energy Week (EUSEW) Policy Conferences. KPI:
attendance at the summer school and workshop, and attendee/student feedback.
Table 3.1 Targeted Conferences and Scheduled Meetings: We will implement the
highly interdisciplinary project through a set of scheduled two-day meetings
and 4-hour short meetings at the sidelines of conferences.
<table>
<tr>
<th>
#
</th>
<th>
Date
</th>
<th>
Type
</th>
<th>
Venue
</th>
<th>
Notes
</th> </tr>
<tr>
<td>
2019
</td>
<td>
8-9 Jan
</td>
<td>
Kickoff
</td>
<td>
Milan
</td>
<td>
Project tasks and deliverables reviewed, critical risks discussed, internal
communication protocols consolidated
</td> </tr>
<tr>
<td>
18-20 June
</td>
<td>
Conference
</td>
<td>
Brussels
</td>
<td>
EUSEW policy conference. PIs will attend the session conducted by the SUNRISE
CSA for the Flagship project on Solar Fuels. A steering committee review
meeting is scheduled on 20th June, after the conference.
</td> </tr>
<tr>
<td>
24–28 June
</td>
<td>
Conference
</td>
<td>
Sofia
</td>
<td>
Oral communication at the 8th Bubble and Drop conference. SoFiA PI Dr.
Alain Cagna from TECLIS leads the Conference Scientific Committee.
</td> </tr>
<tr>
<td>
23-25 Sept
</td>
<td>
First joint
SC + S&T meeting
</td>
<td>
Sofia
</td>
<td>
9 months of management and research activities will be reviewed, progress and
risks will be analyzed, and project updates will be consolidated. Hired
post-doctoral researchers will meet and consolidate internal communication
protocols and web-based science outreach. EAB members will be introduced
through Skype/Webex video conferencing.
</td> </tr>
<tr>
<td>
11-19 Nov
</td>
<td>
Conference
</td>
<td>
Salt Lake City, Utah
</td>
<td>
IMECE (International Mechanical Engineering Congress & Exposition)
</td> </tr>
<tr>
<td>
20-24 Nov
</td>
<td>
Conference
</td>
<td>
Hiroshima,
</td>
<td>
ISF-3 _http://www.photoenergy-conv.net/ICARP2019/transportation.html_
</td> </tr>
<tr>
<td>
2020
</td>
<td>
Feb
</td>
<td>
1st periodic review
</td>
<td>
Brussels
</td>
<td>
Progress over the first reporting period will be presented to the EC, with
focus on the 1st milestone MS1. An SC meeting will precede it.
</td> </tr>
<tr>
<td>
March
</td>
<td>
Conference
</td>
<td>
Noordwijk
</td>
<td>
N3C, The Netherlands' Catalysis & Chemistry conference ( _https://n3c.nl/_ ).
The core AP group of PIs will attend the conference and a short meeting will
be scheduled at the sidelines.
</td> </tr>
<tr>
<td>
2-6 March
</td>
<td>
Conference
</td>
<td>
Denver
</td>
<td>
The APS March Meeting
</td> </tr>
<tr>
<td>
3-8 May
</td>
<td>
Conference
</td>
<td>
Tuscany
</td>
<td>
Gordon Research Conference: Advancing Complexity, Selectivity and Efficiency
in Artificial Photosynthesis.
</td> </tr>
<tr>
<td>
June
</td>
<td>
Conference
</td>
<td>
Brussels
</td>
<td>
EUSEW 2020 is the largest sustainable energy policy conference in Europe
attended by >3000 energy stakeholders. We will target an Energy Day at Uppsala
University in accordance with EUSEW communication, and also target an Energy
Talk at the networking village. A short SC meeting will be held at the
sidelines of EUSEW
</td> </tr>
<tr>
<td>
July
</td>
<td>
Conference
</td>
<td>
Lausanne
</td>
<td>
23rd International Conference on Photochemical Conversion & Storage of Solar
Energy.
</td> </tr>
<tr>
<td>
August
</td>
<td>
Conference
</td>
<td>
USA
</td>
<td>
Gordon Research Conference on Donor-Acceptor-Interactions.
</td> </tr>
<tr>
<td>
Sept
</td>
<td>
2nd joint SC + S&T meeting
</td>
<td>
Sofia Bulgaria
</td>
<td>
A 2-day meeting will review management and research progress and take critical
decisions for milestones. The workshop facility at Sofia Tech Park will be
inspected for the workshop scheduled in 2021.
</td> </tr>
<tr>
<td>
2021
</td>
<td>
Jan
</td>
<td>
SC meeting
</td>
<td>
Video
</td>
<td>
Yearly management review by Webex/Skype
</td> </tr>
<tr>
<td>
May
</td>
<td>
Trade fair
</td>
<td>
</td>
<td>
Intersolar Europe. Project delegation will be led by our SME partners
</td> </tr>
<tr>
<td>
June
</td>
<td>
EUSEW
</td>
<td>
Brussels
</td>
<td>
Yearly policy conference
</td> </tr>
<tr>
<td>
Sept
</td>
<td>
2nd periodic review + SC, S&T, EAB
</td>
<td>
Brussels
</td>
<td>
Progress in the 2nd period will be reviewed, preceded by a 2-day joint SC,
S&T and 1st EAB meeting.
</td> </tr>
<tr>
<td>
Nov
</td>
<td>
Conference
</td>
<td>
Grenoble
</td>
<td>
ISF-4: International Solar Fuels Conference
</td> </tr>
<tr>
<td>
2022
</td>
<td>
Feb
</td>
<td>
Conference
</td>
<td>
Ventura
</td>
<td>
Gordon Research Conference on Renewable Energy: Solar Fuels.
</td> </tr>
<tr>
<td>
May
</td>
<td>
Trade Fair
</td>
<td>
Not decided
</td>
<td>
Intersolar Europe. Project delegation will be led by our SME partners
</td> </tr>
<tr>
<td>
June
</td>
<td>
Conference
</td>
<td>
Brussels
</td>
<td>
EUSEW
</td> </tr>
<tr>
<td>
July
</td>
<td>
Conference
</td>
<td>
Seoul
</td>
<td>
IPS-24 Korea
</td> </tr>
<tr>
<td>
Sept
</td>
<td>
Workshop
</td>
<td>
Sofia
</td>
<td>
Planned Workshop on AP to be hosted by UU at SoFiA Tech Park
</td> </tr>
<tr>
<td>
</td>
<td>
Jan 2023
</td>
<td>
Final Review
</td>
<td>
Brussels
</td>
<td>
Final project review meeting
</td> </tr> </table>
Related EU projects will be monitored and contacted. Key representatives will
be invited for lectures at the Bulgaria workshop. SoFiA will enhance
networking possibilities with the following programs: FET Flagship CSA –
Sunrise; FET projects – A-Leaf (Proactive) and Diacat (Open); and ERC Grantees
in AP (COFLeaf, ENLIGHT, HyMAP, HYMEM, photocatH2ode, TripleSolar, and
others). KPI: new collaborations for Phases II and III, and feedback from AP
experts. The FET flagship CSA website link _https://www.sunriseaction.com/_ is
available through our project website footer at _www.sofiaproject.eu_ .
3.2 Dissemination to Policy Makers and to the Industrial Sector - Climate &
policy experts on the EAB** will be consulted to indicate policy hooks for
market uptake. In June 2017, our associate WI participated in the Networking
Village of the European Sustainable Energy Week (EUSEW), a policy & networking
conference organized annually in Brussels by the EC with an attendance of
>3000 stakeholders. We have budgeted for annual attendance at EUSEW and we
target strategic communications at its Networking Village in the final 2
years. Our consultant associate Suzana Carp has prepared an op-ed to be
submitted to Euractiv featuring EU efforts in the context of Solar Fuels and
mentioning the SoFiA FET Open project among other EU support initiatives. KPI:
interest from investors, acceptance in the EUSEW networking village.
\- In 2021 and 2022, SoFiA consortium will participate in Intersolar Europe
which is the world’s leading exhibition for the solar industry and its
partners and takes place annually at the Messe München exhibition center in
Munich, Germany. For critical coverage of breakthrough results we have
identified policy journals: ENDS Europe, and the Brussels-based Politico and
Euractiv. To communicate with EU policymakers, after the first periodic review
meeting in M14 the coordinator will contact OBSERVE, a FET-CSA that supports
Europe in FET, and FET2RIN, a network connecting FET projects to potential
investors. Through EAB** meetings, IPR-protected research findings will be
communicated to Air Liquide and Unilever, who are interested in commercial
exploitation. EAB feedback will be critical in drafting our phase II proposal
and a business plan. TECLIS has communicated interest in receiving free
business coaching offered through EC instruments.
**SoFiA EAB will provide non-binding strategic & scientific advice to the
consortium to maximize impact and will offer guidance when the consortium
requires it. The EAB is composed of accomplished experts from industry, the
scientific community, and EU policy consultants. They will be in a privileged
position to receive (confidential) information on the project. SC-EAB
meetings will be held in which EAB members will not have authority to vote on
any consortium matters or bear judiciary responsibilities. The EAB members are
the following:
1. Julian Popov: Guidance in Environment Policy-
Julian Popov is the Chairman of the Building Performance Institute Europe,
Fellow of the European Climate
Foundation and Former Minister of Environment of Bulgaria. He is the founding
Vice Chancellor and current Board Member of the New Bulgarian University,
former Chairman of the Bulgarian School of Politics and cofounder of the
Tunisian School of Politics (established following the Arab Spring). Julian is
author of two books and writes regularly on energy policies and international
affairs. He was recently voted as one of the 40 most influential voices on
European energy policies and also as one of the 40 most influential voices in
the European energy efficiency policies by the Brussels agency EurActiv. He
lives in London with his family.
2. Prof. Dr. Simeon Stoyanov. Unilever: Advice in Surfactant science and in project Dissemination-
Prof. Dr. Simeon Stoyanov received his PhD from Essen University, Germany. In
the past he has worked in the Laboratory of Physical Chemistry at the
University of Sofia, Bulgaria, as a visiting scientist at the Ecole Normale
Supérieure, Paris, France, and the University of Erlangen, Germany, and as a
researcher at Henkel R&D in Düsseldorf, Germany. Currently Prof. Stoyanov is a
senior scientist in Colloids & Interfaces at Unilever R&D Vlaardingen, The
Netherlands, special chair professor at the University of Wageningen, The
Netherlands, and visiting professor at University College London, UK. His
research interests include applied and fundamental physical chemistry/soft
matter, including composite materials and product formulation, foams and
emulsions, the physical chemistry of digestion, encapsulation & targeted
delivery, nano-science/technology, biomass utilization and bio-surfactants. He
is co-author of more than 80 research publications, 75 patents, and books and
book chapters in various fields of physical chemistry and soft condensed
matter.
3. European Gas Research Group (GERG): Advice in project Dissemination and Exploitation-
Dr. Robert Judd, General Secretary of GERG. The European Gas Research Group is
an R&D organization that provides both support and stimulus for the
technological innovation necessary to ensure that the European gas industry
can rise to meet the technological challenges of the new century. It was
founded to strengthen the gas industry within the European Community, and it
achieves this by promoting research and technological innovation. Established
as a network to enable the exchange of information between a select group of
specialist R&D centres and to avoid duplication of effort, it has grown
steadily to around 30 members whilst retaining and expanding its original
aims. Its priorities are networking, technical information exchange, and the
promotion and facilitation of collaborative R&D.
4. Air Liquide: Advice in project implementation.
Pavol Pranda, Sr. Staff Engineer - CO2 Scientific Leader, m-Lab Air Liquide
5. Shell: Advice in project implementation
Note: Sébastien Vincent-Bonnieu, PhD, Reservoir Engineer at Shell Global
Solutions International BV, had given us a support letter with acceptance as a
potential EAB member, which we submitted with our proposal. Unfortunately, he
has recently resigned from Shell and is now employed by the European Space
Agency. We are currently looking for his replacement on the EAB.
3.3 White Paper: A white paper will be submitted at the end of the project,
providing a general overview of the expected impact of the SoFiA project in
the EU. This white paper, drafted with the support of EAB members, will be
sent and presented to relevant policy makers.
# COMMUNICATION TOOLS AND ACTIVITIES
4.1 National-level communications will be managed by the media relations units
at partner institutes. Since we have all, as children, been excited about soap
bubbles, we expect SoFiA outreach to be an enthusiastic exercise.
4.2 The project website & social media networking were set up in M2 and are
being updated on a monthly basis. We have a logo that conveys the core
scientific, technological and environmental message through imagery and a
strategic choice of colors. The website will include news & events, links to
partners' websites, media, public reports, publications, etc. The website will
also promote important (public) results from related projects in AP and will
administer an open forum as a knowledge-sharing tool for partners and user
communities. Twitter, Instagram, and YouTube will be used to promote the
website content. An impact assessment of the entire social media
communications activities will be carried out by monitoring web hits, likes,
followers, retweets (KPI). Videos & news bytes will also be promoted through
Hassim Al-Ghaili’s science communication website which has >16M fans. The
existing Wikipedia page on AP will be updated with critical results from
SoFiA.
4.3 Educational Communication: Press releases of selected publications will be
sent to general scientific magazines: Chemistry World, C&EN, Research*EU
Results, and Horizon: the EU RIA Magazine. In accordance with the EUSEW
directive we will organize annual “Energy Days” where our educational videos
will be shared with local school children (and teachers/parents) in local
languages, through a team of established entertainers working with
soap-bubble-based science demonstrations & magic shows. We will actively
involve a young artist (Nicky Assmann: crosscutting collaboration) who has
been working with ultra-large-area soap film art installations. She will bring
hands-on experience of soap film stability to our design team while engaging
the fine arts community. Her large-area portable soap film art installation
will be a crowd puller at our kiosk at conferences and at the EU Researchers'
Night. PIs will apply for TED talks and for Pint of Science (
_https://pintofscience.com/_ ).
4.4 Communication through Philanthropists: SoFiA will be registered at Prof.
Bertrand Piccard’s (EC supported) Solar Impulse-World Alliance for Efficient
Solutions in order to be presented at the United Nations Climate Change
Conferences (COP) and at other preeminent international platforms.
5. EXPLOITATION ACTIONS
1. IPR Management: SoFiA deliverables are expected to generate significant intellectual property to be exploited by our start-up and partner SMEs. The SoFiA steering committee will monitor and identify any sensitive data worth protecting, and prepare appropriate IP protection. IP management has been defined by the SoFiA Consortium Agreement, which is based on the standard DESCA model. The Bulgarian application BG112610, filed on 26/10/17 by our executive body WI and protecting our foundational concepts only in Bulgaria, has been strategically withdrawn from publication, as it would have been an impediment to filing an international application. Instead, a PCT application will be directly filed by July 2019. By M30 a basic patent landscape and IP plan will be prepared to guide subsequent IP protection. Based on bibliometric patent data, an overview of the trends in innovation activities in AP will be used to deliver the strategic IP plan.
2. Technology Valorization: The POLITO Business Management department, in association with our partner SMEs, will deliver a techno-economic report by M46. The report will target an article in the Financial Times or a similar publication, and results will be promoted to energy policy-makers in Brussels and to sustainable/solar energy investors (audience identification tools like an influence map, tailored invitations, and social media engagement will be used). The report will be the basis for an industry-driven technology maturity project proposal (SoFiA II) and a basic Business Plan for exploitation by our SME partners and start-up, with the support of our EAB members, in the subsequent project phases (II and III). This proposal and the basic Business Plan will be submitted for review to the final SC+EAB meeting in M48 and submitted to targeted RIA calls. TECLIS will receive free business consultation through EC instruments. Furthermore, the consortium is planning an amendment by M13 to potentially include our external associate and start-up WI as a non-funded beneficiary, allowing it to receive business coaching through EC instruments.
KPI: Proposal accepted, Investments.
3. Knowledge Management and Protection Strategy: The process of effectively using organizational knowledge will be defined according to the protocols imposed by the pilot on Open Research Data. The project’s password protected intranet linked with a repository maintained at UU server will be the main instrument for information sharing and knowledge management. The SoFiA intranet is password protected and only the partners participating in this project have access to it. The intranet will contain all the information and documents generated as a result of this action as illustrated below. The consortium members will be notified by e-mail when an important document is uploaded in the intranet.
6. INFORMATION ON EU FUNDING
Unless the Commission requests or agrees otherwise, or unless it is
impossible, any dissemination of results (in any form, including electronic)
must: (a) display the EU emblem and (b) include the following text: “This
project has received funding from the European Union’s Horizon 2020 research
and innovation programme under grant agreement No 828838 ”. When displayed
together with another logo, the EU emblem must have appropriate prominence.
Applications for protection of results (including patent applications) filed
by or on behalf of a beneficiary must — unless the Commission requests or
agrees otherwise or unless it is impossible — include the following: “The
project leading to this application has received funding from the European
Union’s Horizon 2020 research and innovation programme under grant agreement
No 828838”
1523_BlockStart_828853.md
# Introduction
The BlockStart project (hereinafter, “BlockStart” or the “Project”) and all
its consortium partners have implemented the necessary measures to comply with
the applicable national and European laws on personal data protection.
For the avoidance of doubt, BlockStart, as a Coordination and Support Action,
does not involve research activities and therefore will not need to generate,
collect and/or process research data, in particular research data involving
personal data.
This document will present the policies, principles and implemented measures
to ensure the said compliance hereunder with the applicable personal data
protection requirements, contextualized with an overview of relevant data
flows and datasets occurring throughout the Project, and constitutes the core
of the applicable detailed data protection policy for the Project.
# Purpose of the personal data processing under the Project
The processing of personal data that may occur under the Project will be
strictly related to the following purposes (without prejudice to the fact that
in many cases the relevant entities will be legal persons and, as such, the
corresponding data used for such purposes will not constitute personal data):
* sending communications by email to the professional contacts of the potential interested entities related to the relevant calls and events;
* involving the participating entities in the developed activities under this Project;
* executing the due fund transfers for the involved entities under the Project terms and conditions;
* complying with any reporting obligations in relation to the European Commission under the Project.
No personal sensitive data (defined by Article 9 of the GDPR as “data
revealing racial or ethnic origin, political opinions, religious or
philosophical beliefs, or trade union membership, (…) genetic data, biometric
data for the purpose of uniquely identifying a natural person, data concerning
health or data concerning a natural person’s sex life or sexual orientation”)
will be collected by the BlockStart project.
# Personal data categories
As stated, BlockStart, as a Coordination and Support Action, will not carry
out research activities; therefore, the activities carried out under the
Project will not generate, collect or involve the processing of research data,
in particular research data involving personal data.
Nevertheless, in order to manage, execute and implement the Project measures
proposed in the corresponding Work Packages, the consortium will create a
dataset containing personal data about the professional contacts of data
subjects working for and/or representing the entities that participate in
BlockStart's activities (e.g. DLT developers, SMEs, DLT experts,
intermediaries, policymakers), as points of contact of such entities, for the
purposes mentioned above as well as those defined and established in the Work
Packages.
Therefore, the only personal data processed by the BlockStart consortium
encompasses the following personal data categories:
* full name;
* professional email;
* professional telephone contact;
* country of establishment;
* short CV and/or LinkedIn profile;
* bank accounts of beneficiaries (DLT developers, SMEs, DLT experts), in order to enable the due fund transfers to the aforementioned sub-grantees;
* attendance sheets (with names and signatures of people present at events);
* photo and video recording of events (e.g.: ideation kick-offs, workshops, demo days, webinars). Such recordings (e.g.: general perspective of an auditorium, video of a beneficiary pitch, testimonials by the participants), to the extent applicable in accordance with the applicable personal data protection laws, will only capture people that had expressed consent for the use of their image (limited to the promotion of BlockStart’s open call and results).
# Who will have access to personal data
Only the BlockStart consortium parties will have access to the said personal
data and, within the consortium parties' organizations, strictly only those
representatives, directors, employees, advisors and/or subcontractors with a
need to know, ensuring that persons authorised to process the personal data
have committed themselves to confidentiality or are under an appropriate
statutory obligation of confidentiality. Technical and operational measures
will be implemented to ensure that users/relevant data subjects will be able
to access, rectify, cancel and oppose the processing and storage of their
personal data.
# How long personal data will be stored
The gathered personal data under the Project will be stored by the
Controller(s) until the end of the Project (February 2022). Data (including,
if strictly needed, personal data processed by the BlockStart consortium
parties in accordance with the current terms) needed to answer to potential
audits by the European Commission services (e.g.: data that enables the
assessment of BlockStart activities’ impact), may be kept for up to 5 years
after the end of the Project (prospectively until May 2027) and, to the extent
applicable, the potential processing of the relevant personal data for such
purpose would be supported on the necessity of the consortium parties to
comply with an applicable legal obligation to which such entities are subject
to hereunder. However, any personal data may always be deleted earlier, if a
data subject explicitly requests their records to be deleted and the
applicable requirements to the exercise and execution of the right to erasure
are fulfilled. In order to follow the principle of data minimization, personal
data will be deleted/destroyed as early as possible, in accordance with the
applicable criteria and requirements resulting from the applicable personal
data protection laws, namely the General Data Protection Regulation (“GDPR”).
# How personal data will be collected and processed
BlockStart consortium will put into place several measures to ensure full
compliance with the applicable personal data protection requirements, namely
through:
### Personal Data Processing Principles Compliance
Personal data processing will be carried out in compliance with the applicable
personal data processing principles, namely:
1. such processing activities must be lawful, fair and transparent;
2. it should involve only data that is necessary and proportionate to achieve the specific task or purpose for which they were collected (Article 5(1) GDPR);
3. personal data will be requested only when strictly needed, and only for the purposes stated when personal data is requested. When requesting personal data, disclaimers will be shown, with a clear statement of the purpose of collecting and keeping such information;
4. personal data shall be kept accurate and, where necessary, kept up to date; every reasonable step will be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay;
5. personal data storage will be limited for no longer than is necessary for the purposes for which the personal data are processed;
6. personal data will be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.
### Compliance with applicable national and EU legislation
All the personal data that will be processed under this Project will be
processed in compliance with the applicable international, EU and national law
(in particular, the GDPR, applicable national data protection laws and other
relevant applicable laws on this matter). All BlockStart consortium partners
comply with the applicable national and EU laws in force regarding personal
data protection. All BlockStart activities will be carried out in compliance
with the GDPR.
### Informed consent procedures
Individuals whose personal data is collected by BlockStart consortium partners
within the frame of BlockStart Project (namely, through registration on the
consortium partner F6S platform, through submission of contact/newsletter sign
up/other forms within BlockStart website, and other online tools managed by
the BlockStart consortium as, for example, a webinar platform) will be
informed, upon collection of such personal data, of the processing terms of
their personal data.
Such knowledge is confirmed by the individual by ticking a box expressly
confirming that the individual has read the applicable BlockStart Personal
Data Protection Policy (the terms of which are presented at
_www.blockstart.eu/data-protection/_ ), where the terms applicable to the
processing of personal data under the Project are detailed and explained to
the corresponding data subject (“BlockStart Personal Data Protection Policy”).
In particular, personal data collected through the consortium partner F6S
platform will be collected in accordance with the said applicable laws (on
that matter, and to the extent applicable, F6S data-related policies are
accessible through the following links on the mentioned platform:
_www.f6s.com/privacy-policy_ ; _www.f6s.com/terms_ ; _www.f6s.com/data-
security_ ; _www.f6s.com/cookie-policy_ ; _www.f6s.com/cookie-table_ ).
The legal agreements signed within the frame of the BlockStart Project (e.g.:
agreements with external evaluators, and sub-grantees - DLT developers and
SMEs) include articles concerning the compliance with ethical standards and
guidelines, as well as binding such entities to the applicable personal data
protection laws requirements.
The contract to be signed with sub-grantees will refer to the obligation to
conduct their activities following such principles, in a responsible manner
and complying with applicable legislation and H2020 rules and guidelines.
In the case of evaluators, the contract signed includes the signing of a Code
of Conduct which, among others, includes the principles of fairness,
independence, impartiality, and confidentiality.
The data collected through interviews and focus groups with SMEs, as well as
through wider surveys sent out to a panel of SMEs across European countries
(with data collection covering information related to regulatory and
supervisory bodies, industry associations, innovation hubs, major companies,
major SMEs and innovators, research organisations, legal service providers,
consulting service providers and major funding providers (VC, angel investors,
PE, etc.)) and under the open call evaluations, will not include personal
data.
### Right to information, access, rectification, erasure, restriction of
processing, data portability, object and not to be subject to a decision based
solely on automated processing, including profiling
Data subjects whose personal data is being processed under the Project can
contact the Project Coordinator (via the contact information in BlockStart
Personal Data Protection Policy) to exercise their rights in accordance with
the applicable personal data protection laws.
### Anonymization/pseudonymization of personal data
Anonymization of the processed personal data under BlockStart will be
performed through the use of aggregated and statistical data whenever
possible.
The objective of these techniques is to ensure that the data subject is not
directly or indirectly identifiable or re-identifiable. Recital 26 of the GDPR
establishes that “to determine whether a natural person is identifiable,
account should be taken of all the means reasonably likely to be used such as
singling out, either by the controller or by another person to identify the
natural person directly or indirectly”. Assessing what “reasonably likely”
means becomes a key point in the risk assessment of re-identification, in
which all objective factors and contextual elements will be taken into
account. The cost, amount of time required, and available state-of-the-art
technology are aspects to be considered when assessing what means are
“reasonably likely” to be used in attempting to re-identify data.
It should be recalled that, as the GDPR states, “the principles of data
protection should therefore not apply to anonymous information, namely
information which does not relate to an identified or identifiable natural
person or to personal data rendered anonymous in such a manner that the data
subject is not or no longer identifiable. This Regulation does not therefore
concern the processing of such anonymous information, including for
statistical or research purposes”.
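As a purely illustrative sketch of the aggregation approach mentioned above (column names, data and the suppression threshold are hypothetical), only grouped statistics would be shared:

```python
import pandas as pd

# Hypothetical survey responses; individual rows are never released.
responses = pd.DataFrame({
    "country":   ["PT", "PT", "ES", "ES", "ES"],
    "sector":    ["retail", "logistics", "retail", "retail", "logistics"],
    "dlt_score": [3, 4, 2, 5, 4],
})

# Aggregate before sharing, and suppress groups too small to release safely
# (the threshold of 2 is illustrative; a real one would be risk-assessed).
agg = (responses.groupby(["country", "sector"])["dlt_score"]
                .agg(n="count", mean_score="mean")
                .reset_index())
agg = agg[agg["n"] >= 2]
agg.to_csv("aggregated_results.csv", index=False)
```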
To avoid the risk of re-identification whenever possible, and to minimize it
when true anonymization is not possible, the BlockStart Consortium will set up
a range of measures and safeguards in relation to the datasets provided by
beneficiaries and other entities. The measures consist of limiting access to
the datasets (granted only when needed) and of legal agreements containing
clauses to this effect (compliance with data protection regulations).
Full datasets will be accessible only under certain conditions, when necessary
for the implementation of the Project. This will reduce the risk of
unauthorised identification of data subjects in the datasets, and will allow a
higher control of any misuses of data.
DLT developers will only have access to the SMEs datasets and the right to
carry out data processing over them during the Pilot. Unless an additional
agreement is reached between SMEs and the DLT developers, the developers have
the obligation to delete the data once the Pilot ends.
Selected sub-grantees (both DLT developers and SMEs) will sign a sub-grant
Agreement with Bright Pixel as BlockStart Coordinator. This agreement will
establish the rights and obligations of the parties. Among others, it will
include personal data protection clauses, including:
* the acknowledgement that the sub-grantees will be the data controllers of any new dataset or piece of personal information that they may produce in the course of the Pilot process;
* the obligation not to try to re-identify anonymised data;
* the obligation to delete, at the end of the Pilot process, the data to which the sub-grantees have been granted access during the Pilot, unless an agreement is entered into with the SME;
* Declaration of Honour: to be signed at the time of the Sub-grant Agreement. This declaration will, among other topics, include the commitment to comply with data protection regulations and the commitment not to use the data for purposes other than those within the BlockStart framework;
* confidentiality clause: the datasets made available will be classified as “confidential”. While the confidentiality clause is not in itself a personal data protection measure, its observance would minimise the harm caused in case of a breach of personal data protection.
# Description of data flows
BlockStart's main data flows result from the activities of four Work Packages
(WPs): WP2 (Engage), WP3 (Prototype), WP4 (Pilot) and WP5 (Impact).
Here is a summarized description of the most relevant data flows:
## Data flow 1: Engage
From the beginning of BlockStart Project, until the end of the third open call
(around July 2021), the consortium will develop activities under the Engage
phase (Work Package 2) that will demand the processing of data (including, in
some cases, personal data).
Firstly, in order to define the open call themes and sectors, a Sector
analysis activity will take place. Each sector will be mapped through desk
research of publicly available information covering regulatory and supervisory
bodies, industry associations, innovation hubs, big companies, SMEs,
developers, entrepreneurs and innovators, research organisations, legal
service providers, consulting service providers and major funding providers
(VC, angel investors, PE, etc.). Discussions with DLT experts, corporates and
SMEs will help validate the information gathered. The professional contacts
needed to get in touch with the aforementioned entities will be obtained from
public sources (company websites and LinkedIn accounts) and organisation
contacts within the consortium partners' network. This activity will result in
brief one-pagers describing the feasibility and potential of DLT for SMEs in
each key sector, with only aggregated information being presented.
Subsequently, a DLT Assessment Tool will be built, and made publicly
available, so that SMEs may check their potential for DLT implementation.
With the main sectors defined, BlockStart programme orientations and DLT
Assessment Tool publicly available, the open call will be ready to receive
applications from DLT developers, SMEs and external experts. At least two
webinars per call are planned to clarify the conditions of BlockStart open
call and programme. When registering for these webinars (via a dedicated page
for those particular events, on F6S platform), people wishing to attend will
need to provide their name and professional email.
Applications should be submitted through the F6S platform
( _www.f6s.com/blockstart_ ), following F6S data-related policies
( _www.f6s.com/privacy-policy_ ; _www.f6s.com/terms_ ;
_www.f6s.com/datasecurity_ ; _www.f6s.com/cookie-policy_ ;
_www.f6s.com/cookie-table_ ). Some personal data, mentioned in section “3.
Personal data categories” of the present document, will be requested in order
to assess the applicants and to contact the selected ones. Applicants will be
asked to sign a declaration of honour attesting that the information provided
is true.
Consortium members and advisory board will evaluate the applications of
external experts.
Consortium members, external experts and advisory board will evaluate the
applications of SMEs and DLT developers.
All evaluators, whether acting pro bono or paid under a subcontractor's
contract, have to sign a declaration certifying: (i) that they will perform a
confidential, fair and equitable evaluation; (ii) their independence from
affiliation; (iii) confidentiality and absence of conflict of interest
(disqualifying or potential); (iv) that they will not discuss the proposals
with others during the process; (v) that they will strictly refrain from
contacting applicants; (vi) compliance with EC rules.
## Data flow 2: Prototype
After the Engage phase, the Prototype stage will follow. It will start with an
Ideation Kick-off, an event organized to connect the 20 selected DLT
developers with 10 SMEs and mentors from the consortium, advisory board and
external experts. At the end of the event, 10 DLT developers will be selected.
Product and technical development of the DLT prototypes will take place in the
following 4 months, with the developments done by the 10 selected DLT
developers being continuously validated by users (selected SMEs, advisory
board and other potential customers), through meetings and mentorship
facilitated by the consortium partners.
DLT developers will be asked to sign a sub-grantee contract and to provide
legal documentation that verifies their existence as an entity, proof of bank
account and a summary of their activity (describing solution, process and
development log), in order to receive a grant for the Prototype development.
Professional contacts of all aforementioned participants will be vital to
allow the Ideation kick-offs and remaining Prototype activities.
## Data flow 3: Pilot
In the third and last stage of BlockStart's call process, the evaluators will
determine which 5 DLT solutions developed during the Prototype stage should
continue to the Pilot stage. A group of 20 SMEs will also be selected. DLT
developers will be evaluated on the quality of the solution, implementation
readiness and interest shown by SMEs, with SMEs being chosen mostly based on
the results obtained through the DLT Assessment Tool. Just before the
development of the Pilots, DLT developers will be encouraged to agree with
individual SMEs on a “collaboration strategy” (e.g.: exclusive use,
discounted use or free use of DLT solutions for an initial period).
Throughout the development and implementation of the Pilots in the SMEs,
consortium partners will work as facilitators of the relationship between all
parties involved. This follow-up will be operationalised namely through
meetings and bi-weekly updates with developers.
In the end, successful Pilots will be presented in a Demo Day event, targeting
investors, SMEs and other potential clients, industry associations and other
types of intermediaries.
A public DLT solutions portfolio (basic information about the projects) and a
public Beneficiaries dataset (including the list of entities who signed a
sub-grantee contract, project description and funding received) are due at the
end of the BlockStart programme.
## Data flow 4: Impact
Taking place in parallel to Engage, Prototype and Pilot phases of each call,
“Work Package 5 - Impact” will include a set of initiatives devised to
disseminate the lessons learned throughout BlockStart.
One of the cornerstones of the Impact Work Package will be the creation of
sector-specific DLT maturity assessments. These will mainly consist of public
reports about the potential for DLT implementation in specific sectors, with
aggregated and statistical data based on desk research, interviews with SMEs
and intermediaries, use of the DLT Assessment tool by SMEs, Prototype and
Pilot developments and feedback from participant DLT developers and SMEs.
These reports will be disseminated through the BlockStart website and directly
to potentially interested stakeholders, such as government organisations,
legal bodies, SMEs and industry associations.
Besides the document format itself, the reports will also support the creation
of training resources such as webinars and physical workshops for interested
intermediaries (governmental agencies, research institutions,
acceleration/incubation programmes and other agents detected throughout
BlockStart). These trainings will be disseminated through BlockStart online
platforms, consortium members' websites and social media channels, presence at
events devoted to DLT/Blockchain, and directly to intermediaries whose
contacts are publicly available. The ultimate goal will be to empower
intermediaries to help in the promotion of DLT adoption by SMEs.
Best practices and recommendations deriving from BlockStart will also be
shared with policymakers, aiming to clarify the Blockchain phenomenon, thus
facilitating the development of a compelling regulatory framework and
government support activities. This will be achieved not only through reports,
but also through the organisation of workshops and a conference, to which
entities such as national governments, relevant ministries (Economy, Finance,
Transportation, etc.), central banks, industry associations (Banking,
Financial Services, Insurance, Fintech), representatives of the European
Central Bank, the European Banking Authority, the European Securities and
Markets Authority, the European Insurance and Occupational Pensions Authority,
and other European and national authorities will be invited.
The BlockStart consortium will make these invitations through the public
contacts available on the aforementioned institutions' websites, and the
training initiatives will, by default, share aggregated information. If a
particular use case (referring to an organisation, never an individual person)
warrants an exception to this principle, the exception will always require the
explicit consent of the corresponding organisation.
# Datasets
The datasets resulting from the BlockStart’s data flows described in the
previous section (“7. Description of data flows”) are presented in the tables
below, each including the following fields:
* **Dataset reference:** a unique reference to each of the datasets
* **Relevant work package(s)**
* **Description of the dataset**
* **Data utility:** to whom could the data be useful and the reason why it is worth generating, keeping and/or sharing
* **Type:** collected, generated
* **Nature:** text, numbers, image, video, etc.
* **Scale:** the expected size of the dataset in MB/GB
* **Origin:** where the data in the dataset comes from, i.e. from which sources it has been collected
* **Archiving and storage:** where the data will be stored
* **Data sharing policy:** stakeholders with whom the data will be shared
* **Preservation time**
* **Additional preservation cost**
## Open call applicants
_Table 1 Dataset Open call applicants:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-1
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP2 Engage, WP3 Prototype, WP4 Pilot
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, professional contacts and other data (described in section “3. Personal
data categories” of this document) needed to contact and assess applicants -
DLT developers, SMEs and external experts
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Personal data needed to contact DLT developers, SMEs and external experts, in
order to better assess them (during open call selection stage), and to connect
with the selected ones (in the scope of the Prototype and Pilot stages).
Answers to the open call application form will be instrumental to support the
decision on which beneficiaries are the best fit to BlockStart program
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format, images/schemes in some cases
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Applications through F6S platform ( _www.f6s.com/blockstart_ ) , contacts
resulting from consortium members’ networks, desk research of contacts
publicly available online or interactions within the project’s activities
(e.g.: events)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
F6S platform ( _www.f6s.com/blockstart_ )
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Mixed:
* Public, shared in _www.blockstart.eu_ : DLT solutions portfolio (basic description about each project), Beneficiaries dataset (including list of entities who signed a sub-grantee contract and corresponding funding received)
* Internal, only shared within consortium members: applicants’/beneficiaries’ contacts
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Open call applicants’ ratings
_Table 2 Open call applicants’ ratings:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-2
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP2 Engage, WP3 Prototype, WP4 Pilot
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Numerical and categorical ratings of the open call applications, based on the
criteria presented in the open call documentation
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Support the selection of BlockStart’s beneficiaries (DLT developers, SMEs and
external experts) by the consortium, Advisory Board and external experts
(except, obviously, in the process of selecting external experts)
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Generated
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 100MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Applications through F6S platform ( _www.f6s.com/blockstart_ )
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
F6S platform ( _www.f6s.com/blockstart_ )
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## SMEs data for Pilot developments
_Table 3 Dataset SMEs data for Pilot developments:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-3
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP4 Pilot
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Data provided by the beneficiary SMEs
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Enabling the development of pilots by the DLT developers, in order to meet SME
needs
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Medium (approximately 500MB per SME)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Data provided by beneficiary SMEs
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated spreadsheet in project’s Google Drive (accessible only to consortium
members and to the DLT developer creating the pilot development related to the
SME)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Entities potentially interested in Ideation Kick-off
_Table 4 Dataset Entities potentially interested in Ideation Kick-off:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-4
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP3 Prototype
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, entity, professional contacts and justification of intention to
participate in BlockStart’s Ideation Kick-offs by DLT experts, investors,
SMEs, corporates, industry associations and other types of intermediaries
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Database of people potentially interested in participating in BlockStart’s
Ideation Kick-offs, enabling the consortium to define who will be able to
attend each event, and send the corresponding invitations
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Declarations of interest through _www.blockstart.eu_ , project’s social
media and profile at F6S platform
( _www.f6s.com/blockstart_ ) , contacts resulting from consortium members’
networks, desk research of contacts publicly available online or interactions
within the project’s activities (e.g.: events)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated spreadsheet in project’s Google Drive (accessible only to consortium
members)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Entities potentially interested in Demo Day
_Table 5 Dataset Entities potentially interested in Demo Day:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-5
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP4 Pilot
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, entity, professional contacts and justification of intention to
participate in BlockStart’s Demo Days by DLT experts, investors, SMEs,
corporates, industry associations and other types of intermediaries
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Database of people potentially interested in participating in BlockStart’s
Demo Days, enabling the consortium to define who will be able to attend each
event, and send the corresponding invitations
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Declarations of interest through _www.blockstart.eu_ , project’s social
media and profile at F6S platform
( _www.f6s.com/blockstart_ ) , contacts resulting from consortium members’
networks, desk research of contacts publicly available online or interactions
within the project’s activities (e.g.: events)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated spreadsheet in project’s Google Drive (accessible only to consortium
members)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Entities potentially interested in DLT Assessment tool
_Table 6 Dataset Entities potentially interested in DLT Assessment tool:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-6
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP2 Engage
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, entity and professional contacts of people potentially interested in
using and/or disseminating the DLT Assessment tool. Submissions through the
DLT Assessment tool (by SMEs)
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Enrich the database of SMEs eligible to participate in the Prototype and/or
Pilot stages of BlockStart. Involve intermediaries who, by disseminating the
DLT Assessment tool, will contribute to increase the knowledge on the uses of
DLT, while also helping BlockStart to reach out to further SMEs (who may turn
into potential participants in Prototype and/or Pilot stages of the program,
or even future clients of the portfolio of DLT developments)
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Submissions on the DLT Assessment tool (probably to be held on Typeform),
declarations of interest through _www.blockstart.eu_ , project’s social
media and profile at F6S platform ( _www.f6s.com/blockstart_ ) , contacts
resulting from consortium members’ networks, desk research of contacts
publicly available online or interactions within the project’s activities
(e.g.: events)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Typeform (probably), dedicated spreadsheet in project’s Google Drive
(accessible only to consortium members)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Open call webinars participants and recordings
_Table 7 Dataset Open call webinars participants and recordings:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-7
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP2 Engage
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name and professional email of people interested in watching the webinars, and
video recording of the webinars
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Professional contacts are needed to share the weblink allowing access to the
webinar. The video recording of the webinars will enable people/entities
potentially interested to discover more about the program at a more convenient
time (and not necessarily when it is streamed live)
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected and Generated
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Text and video format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small for text (approximately 10MB). Medium for video (approximately 50GB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Registrations in webinars through a dedicated page for the event at the F6S
platform ( _www.f6s.com/blockstart_ ). Recording of open call webinars
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated private area in BlockStart’s F6S account for name and email of
participants (accessible only to consortium members). Video recording
published in BlockStart’s public YouTube channel
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Mixed (name and email of participants only shared within consortium members,
video recordings publicly available)
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Intermediaries training participants and recordings
_Table 8 Dataset Intermediaries training participants and recordings:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-8
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP5 Impact
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name and professional email of people interested in watching the webinars, and
video recording of the webinars
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Professional contacts are needed to share the weblink allowing access to the
webinar. The video recording of the webinars will enable intermediaries
potentially interested to discover more about DLT at a more convenient time
(and not necessarily when it is streamed live)
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected and Generated
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Text and video format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small for text (approximately 10MB). Medium for video (approximately 50GB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Registrations in webinars through a dedicated page for the event at the F6S
platform ( _www.f6s.com/blockstart_ ). Recording of webinars
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated private area in BlockStart’s F6S account for name and email of
participants (accessible only to consortium members). Video recording
published in BlockStart’s public YouTube channel
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Mixed (name and email of participants only shared within consortium members,
video recordings publicly available)
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Policy workshops participants and recordings
_Table 9 Policy workshops participants and recordings:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-9
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP5 Impact
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name and professional email of people interested in watching the webinars, and
video recording of the webinars
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Professional contacts are needed to share the weblink allowing access to the
webinar. The video recording of the webinars will enable policymakers and
other relevant people/entities potentially interested to discover more about
DLT at a more convenient time (and not necessarily when it is streamed live)
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected and Generated
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Text and video format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small for text (approximately 10MB). Medium for video (approximately 50GB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Registrations in webinars through a dedicated page for the event at the F6S
platform ( _www.f6s.com/blockstart_ ). Recording of webinars
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated private area in BlockStart’s F6S account for name and email of
participants (accessible only to consortium members). Video recording
published in BlockStart’s public YouTube channel
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Mixed (name and email of participants only shared within consortium members,
video recordings publicly available)
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Advisory Board members
_Table 10 Dataset Advisory Board members:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-10
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP2 Engage, WP3 Prototype, WP4 Pilot, WP5 Impact
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, professional contacts and specialization of the Advisory Board members
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Database of experts who will help the consortium in the definition of the open
call sectors, in the selection and evaluation of applicants (DLT developers,
SMEs and external experts) for the Ideation Kick-off, Prototype and Pilot, and
in giving strategic counselling regarding other project activities (e.g.:
dissemination, trainings)
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
BlockStart’s Grant Agreement. New members may be added to the Advisory Board,
coming from the consortium members’ networks, or resulting from interactions
within the project’s activities (e.g.: events)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated spreadsheet in project’s Google Drive (accessible only to consortium
members)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Mixed (name, entity, role and LinkedIn profile of each Advisory Board member
shared in _www.blockstart.eu_ , contacts only shared within consortium
members)
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Sector analysis participants
_Table 11 Dataset Sector analysis participants:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-11
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP2 Engage
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, professional contacts and specialization of experts in DLT and/or
industry sectors
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Experts who help the consortium in the analysis supporting the definition of
open call sectors
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Advisory Board members (dataset BS-10) and other DLT/industry experts coming
from the consortium members’ networks or resulting from interactions within
the project’s activities (e.g.:
events)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated spreadsheet in project’s Google Drive (accessible only to consortium
members)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
## Sector-specific DLT maturity assessments participants
_Table 12 Dataset Sector-specific DLT maturity assessments participants:_
<table>
<tr>
<th>
**Dataset reference:**
</th>
<th>
BS-12
</th> </tr>
<tr>
<td>
**Relevant work package(s):**
</td>
<td>
WP5 Impact
</td> </tr>
<tr>
<td>
**Description of the dataset:**
</td>
<td>
Name, professional contacts and specialization of experts in DLT and/or
industry sectors
</td> </tr>
<tr>
<td>
**Data utility:**
</td>
<td>
Experts who will provide insights relevant to the assessment of Sector-
specific DLT maturity
</td> </tr>
<tr>
<td>
**Type:**
</td>
<td>
Collected
</td> </tr>
<tr>
<td>
**Nature:**
</td>
<td>
Mainly text format
</td> </tr>
<tr>
<td>
**Scale:**
</td>
<td>
Small (approximately 50MB)
</td> </tr>
<tr>
<td>
**Origin:**
</td>
<td>
Experts willing to participate, including Advisory Board members, DLT
developers, SMEs, investors and intermediaries (namely within datasets BS-1,
BS-4, BS-5, BS-6, BS-8, BS-10 and BS-11)
</td> </tr>
<tr>
<td>
**Archiving and storage:**
</td>
<td>
Dedicated spreadsheet in project’s Google Drive (accessible only to consortium
members)
</td> </tr>
<tr>
<td>
**Data sharing policy:**
</td>
<td>
Mixed (name, entity and role of participant may be included in the reports if
that reference is expressly authorized. Contacts only shared within consortium
members)
</td> </tr>
<tr>
<td>
**Preservation time:**
</td>
<td>
Until the end of the project (February 2022). Data needed to answer potential
audits by the European Commission services may be kept for up to 5 years after
the end of the Project (May 2027)
</td> </tr>
<tr>
<td>
**Additional preservation cost:**
</td>
<td>
None
</td> </tr> </table>
# Security
Taking into account the state of the art, the costs of implementation and the
nature, scope, context and purposes of processing, as well as the risk of
varying likelihood and severity for the rights and freedoms of natural
persons, the BlockStart Consortium partners will implement appropriate
technical and organisational measures ensuring a level of security appropriate
to the risk. Namely, personal data will be, to the extent possible and
necessary, anonymised in order to transform it into ordinary (non-personal)
data and, as such, be statistically compiled for the purposes of the project.
To prevent unauthorised access to personal data or to the equipment used for
processing personal data, the following security measures will be implemented:
* all personal data will be safely stored in the password-protected accounts of the F6S platform where the data is held (following F6S data-related policies: _www.f6s.com/privacypolicy_ ; _www.f6s.com/terms_ ; _www.f6s.com/data-security_ ; _www.f6s.com/cookie-policy_ ; _www.f6s.com/cookie-table_ ), and of other online platforms (e.g.: Typeform for the DLT Assessment Tool, Zoom for the webinars). In any circumstance, all personal contacts stored in whichever platform (always password-protected) will only be accessible by people from the consortium working in the scope of the BlockStart project;
* no personal data should be locally stored on any computer disk;
* all computers must be protected (only accessible through fingerprint or password).
Moreover, the BlockStart platforms/repository can only be accessed, upon
request, by members of the Project consortium. The email address of the person
requesting access must be listed in the project staff. If the status of a team
member changes to “not involved” or similar, their access will be revoked. By
revoking access, the system can only automatically ensure that the person no
longer has access to the contents of the platforms/repository; it cannot
ensure that documents or data that have already been downloaded are deleted.
In order to prevent data breaches in case of theft of any of the tools used to
access the BlockStart repository, all team members are requested to minimize
the number of documents downloaded. Additionally, all team members are invited
to adopt appropriate security measures to protect computers, laptops, mobile
phones and similar tools to prevent unauthorized access in case of leaving the
tool unattended or in case of loss or theft.
# Ethics
The personal data processing operations to be carried out under this project
are not subject to opinions and/or approvals by ethics committees and/or
public authorities, but merely subject to compliance with the applicable legal
framework related to personal data protection (namely, the GDPR).
# Further clarification
Information on BlockStart’s personal data protection policy may be found at
_www.blockstart.eu/dataprotection/_ .
# Conclusion
The current Data Management Plan aimed to present the principles and measures
the consortium will adopt to ensure the compliance of BlockStart’s prospective
data flows with legislation on personal data protection.
Starting with the purpose of personal data processing, it will be strictly
limited to the execution of the Project’s activities included in the
respective Grant Agreement’s Work Packages, and no personal sensitive data
will be collected.
To enable communication within BlockStart's scope, a dataset will be created
including the professional contacts of data subjects working for and/or
representing the participant entities. It will mostly consist of emails from
DLT developers, SMEs, DLT experts, intermediaries and policymakers. Only the
Consortium parties will have access to the said personal data and, within the
Consortium parties' organisations, only those with a strict need to know.
The gathered personal data will be deleted as early as possible, in accordance
with the BlockStart activities and the applicable criteria and requirements
resulting from the applicable personal data protection laws, namely the GDPR.
If absolutely needed, it may be kept until the end of the project (February
2022) or, if the data happens to be needed to answer potential audits, up to 5
years after the end of the project (prospectively May 2027).
The principles to be followed by the consortium in data processing include
data minimisation, compliance with national and European legislation, data
subject information procedures, the right to access and correct personal data
and, whenever possible, the anonymisation/pseudonymisation of personal data.
In order to prevent unauthorised access to personal data or the equipment used
for processing personal data, several security measures will be implemented
all throughout the Project by the consortium parties, namely the storage of
personal data in password-protected accounts of F6S platform.
BlockStart's data flows and the datasets resulting from the activities
developed within the Project's Work Packages (with special emphasis on Engage,
Prototype, Pilot and Impact) were also explained in detail in this
deliverable.
This document will be updated on two occasions, in order to incorporate the
lessons learned from the first open call (month 16 – December 2020) and from
the second and third open calls (month 28 – December 2021).
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1527_ACCENTO_831815.md
|
EXECUTIVE SUMMARY
This document, D1.3 Data Management Plan (DMP), is a deliverable of the
ACCENTO project launched under the ITD/IADP/TA/TE CFP08-01, funded by the
European Union's H2020 through the Clean Sky 2 Programme under Grant Agreement
#831815.
The objective of the ACCENTO project is to carry out advanced investigations
on different Low-Pressure Turbine Active Clearance Control (LPTACC) pipes and
target plates. The aim is to develop design/verification procedures and models
to confidently predict the aero-thermal behaviour of the impingement system.
This goal will be pursued by means of dedicated experimental tests and
numerical simulations. The great ambition of ACCENTO is to design a modular
test rig able to accommodate ACC pipes which will be operated at
engine-representative conditions in terms of the relevant non-dimensional
parameters. The rig will be able to provide reliable data for impingement
cooling heat transfer characterization in a wide range of operating points.
The effect of the radiative heat transfer between the target plate and the
pipes on the system performance will be evaluated by means of a dedicated rig.
The rigs will provide high-quality data in conjunction with controlled
operating conditions to validate CFD tools for the heat transfer and, more
generally, for the ACC system characterization. The expected outcomes of the
project can be summarized as follows:
* design, commissioning and testing of a modular rig for LPTACC impingement heat transfer coefficient measurement,
* validation of suitable CFD methodologies and development of design correlations for impingement holes discharge coefficient and jets Nusselt number.
The scope of this DMP is to outline how the research data collected or
generated within ACCENTO will be handled during and after the end of the
project. This report has to be considered as an open document which will
evolve during the project execution: major updates of the report will be
delivered at the end of each reporting period.
The expected types of research data that will be collected or generated during
the project lie in the following categories:
1. Time-averaged flow field which characterizes the impinging jets (CFD results)
2. Time-averaged temperature in the free-stream and on the target plate (CFD results and experiments)
3. Nusselt number distribution on the target plate (CFD results and experiments)
4. Pressure and temperature distribution within the manifolds (CFD results and experiments)
5. Mass flow rate through the impingement holes (CFD results and experiments)
1. DATA MANAGEMENT AND RESPONSIBILITY
1.1. DMP internal consortium policy
The ACCENTO project is engaged in the Open Research Data Pilot (OPT-IN) which
aims to improve and maximise access to and re-use of research data generated
by Horizon 2020 projects and takes into account the need to balance openness
and protection of scientific information, commercialisation and Intellectual
Property Rights (IPR), privacy concerns, security as well as data management
and preservation questions.
The management of the project data/results requires decisions about the
sharing of the data, the format/standard, the maintenance, the preservation,
etc. Thus the Data Management Plan (DMP) is the key element for such
management and is established to describe how the project will collect, share
and protect the data produced during its execution.
As a living document, the DMP will be updated over the project execution
whenever necessary. In particular, major updates of the document will be
released at the end of each reporting period.
The following general policy for data management and responsibility has been
agreed for the ACCENTO project:
* No data will be passed into the Open Access channel without an explicit signed agreement, on every single item, of the Management Team of the ACCENTO project, which is formed by the PC (Project Coordinator) and the TL (Topic Leader). Each ACCENTO consortium partner has to respect the policies set out in the DMP. Datasets have to be created, managed and properly stored.
* The ACCENTO consortium appoints a Data Management Project Responsible (DMPR) who will ensure the integrity of all the datasets, their compatibility, the criteria for data storage and preservation, the long-term access policy, the maintenance policy, quality control, etc. The DMPR will discuss and validate these points with the ACCENTO Management.
* For each single dataset that is agreed to be shared and created during the project execution, a DataSet Responsible (DSR) will be defined and enrolled. The DSR will be a representative of the ACCENTO partner that generated the data and will ensure the validation and registration of datasets and metadata, updates and management of the different versions, etc. The contact details of each DSR will be provided in each dataset document presented in Annex I of the DMP.
1.2. Data Management Responsible
The Data Management Project Responsible (DMPR) for published data will be
responsible, on behalf of ACCENTO, for the data uploaded in the public
repository. This role includes:
* checking the database file;
* ensuring the correct format of the file;
* ensuring easy accessibility of the shared file.
<table>
<tr>
<th>
Data management Project Responsible (DMPR)
</th>
<th>
RICCARDO DA SOGHE
</th> </tr>
<tr>
<td>
DMPR Affiliation
</td>
<td>
ERGON RESEARCH (ERG)
</td> </tr>
<tr>
<td>
DMPR mail
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
<td>
DMPR phone
</td>
+39-338-2536487
</td> </tr> </table>
3. DATA NATURE AND POTENTIAL USERS
The next table includes the nature of the data that will eventually be shared,
considering each activity characterizing the ACCENTO project.
<table>
<tr>
<th>
WP, ACTIVITY
</th>
<th>
DSR
</th>
<th>
NATURE OF DATA
</th>
<th>
OBJECTIVE
</th>
<th>
TYPE OF FILE
</th>
<th>
STANDARD
</th> </tr>
<tr>
<td>
WP2,
Experimental investigation
</td>
<td>
UNIFI (B.
Facchini)
</td>
<td>
Test article CAD geometry, Experimental results
</td>
<td>
proof of concept for different LPTACC impingement schemes considering both
flat and curved target surfaces
</td>
<td>
ASCII,
CAD formats
</td>
<td>
CGNS,
Parasolid
</td> </tr>
<tr>
<td>
WP3, Numerical investigation
</td>
<td>
ERG (R. Da Soghe)
</td>
<td>
CFD Results and User define code
</td>
<td>
Definition and validation of scale-resolving CFD methods for the prediction of
heat loads on the impingement target surface.
</td>
<td>
ASCII,
HDF5
</td>
<td>
CGNS, Plot3D, Fluent.
C source code
</td> </tr>
<tr>
<td>
WP3, Correlative approaches
</td>
<td>
ERG (R. Da Soghe)
</td>
<td>
Correlations structure and coefficients
</td>
<td>
Definition of correlations for the estimation of the impingement heat load and
for the mass flow rate split along the manifold.
</td>
<td>
Text
</td>
<td>
.pdf
</td> </tr> </table>
4. Data Summary
Types and formats of data are digital and their description is included in the
table in Section 3.
Potential users of the data generated by ACCENTO could be Universities,
Research Centers or SMEs and of course all the aeroengine manufacturers.
During scientific conferences and public events in which ACCENTO results will
be presented, information will be given about the availability of research
data and some details about the public repository.
For each data collection that will be open to public, a dedicated dataset
document will be completed in Annex I of the DMP once the data are generated.
Depending on the nature of the data, the expected size of the generated
datasets could range from a few MBytes (CAD files, spreadsheets, PDF files) up
to several TBytes in the case of time-dependent CFD or experimental results.
2. FAIR DATA
Following guidelines on data management in Horizon 2020, ACCENTO partners will
ensure that the research data from the project is Findable, Accessible,
Interoperable and Re-usable (FAIR).
1. Making data findable, including provisions for metadata
The databases generated in the project will be identified by means of a
Digital Object Identifier (DOI) and archived on the ZENODO data repository (
_http://zenodo.org_ ) together with pertinent keywords. The choice of adequate
keywords will be included to promote and ease the discoverability of data.
These keywords will include a number of common keywords in the aeroengine heat
transfer area but also generic keywords that can help to attract researchers
from other research areas to use and adapt ACCENTO results to their scientific
fields.
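As a concrete illustration of this archiving step, the sketch below uses Zenodo's public REST deposition API to create a record, upload a file, attach keywords and obtain a DOI. It is a minimal sketch of how a partner might script the upload, not part of the ACCENTO workflow; the token, file name and metadata values are placeholders.

```python
# Minimal sketch of depositing a dataset on Zenodo via its REST API.
import requests

TOKEN = "..."  # placeholder personal access token
BASE = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
r = requests.post(BASE, params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep_id = r.json()["id"]

# 2. Upload a data file to the deposition.
with open("ACCENTO-2020-DS example R1.0.zip", "rb") as fh:
    requests.post(f"{BASE}/{dep_id}/files",
                  params={"access_token": TOKEN},
                  data={"name": "ACCENTO-2020-DS example R1.0.zip"},
                  files={"file": fh}).raise_for_status()

# 3. Attach metadata, including the discovery keywords discussed above.
metadata = {"metadata": {
    "title": "ACCENTO impingement heat transfer dataset (example)",
    "upload_type": "dataset",
    "description": "Illustrative dataset record.",
    "creators": [{"name": "ACCENTO consortium"}],
    "keywords": ["impingement cooling", "heat transfer", "LPTACC", "CFD"],
}}
requests.put(f"{BASE}/{dep_id}", params={"access_token": TOKEN},
             json=metadata).raise_for_status()

# 4. Publish; Zenodo then mints and returns the DOI.
pub = requests.post(f"{BASE}/{dep_id}/actions/publish",
                    params={"access_token": TOKEN})
pub.raise_for_status()
print(pub.json()["doi"])
```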
To facilitate the search and use of specific result in the most complex data
set such as CFD or experimental results, a metadata readme .txt file may
accompany the specific data repository, including a list describing the
contents of each directory and standard file nomenclature. The metadata will
be clearly identified by the directory name and the label readme at the end,
to permit the user to identify this file as metadata.
Documents generated during the project are referenced following the
convention "ACCENTO-<year>-<Type> <Title> <Version>.<extension>" (a short
sketch of the convention follows the list), where:
* <year>: identifies the year of document release
* <Type>:
  * MoM: Minutes of Meeting
  * KOM: Kick-off Meeting
  * TN: Technical Note or Updates
  * DS: Data Set
  * DX.Y: Deliverable (with the associated deliverable number "X.Y", as an example)
  * MX-Meeting: presentation during a technical meeting, with the associated meeting month
  * CP: Conference Presentation (for green open access documents)
  * PU: Journal Publication (for green open access documents)
* <Title>: description of the document
* <Version>: defined as RX.Y, with Y the minor revision (modifications between members of the same affiliation) and X the major revision (members of different affiliations, official CS2JU revisions, etc.)
* <extension>: depends on the document type
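The convention can be applied mechanically. The sketch below composes a reference string under the rules above; the helper name and the simple type check are illustrative assumptions, not project tooling.

```python
# Minimal sketch of composing a document reference per the convention above.
FIXED_TYPES = {"MoM", "KOM", "TN", "DS", "CP", "PU"}

def accento_ref(year, doc_type, title, major, minor, extension):
    """Compose 'ACCENTO-<year>-<Type> <Title> <Version>.<extension>'."""
    # Deliverables (DX.Y) and meeting presentations (MX-Meeting) are
    # pattern-based, so only the fixed types are checked explicitly here.
    if doc_type not in FIXED_TYPES and not doc_type.startswith(("D", "M")):
        raise ValueError(f"unknown document type: {doc_type}")
    return f"ACCENTO-{year}-{doc_type} {title} R{major}.{minor}.{extension}"

# Example: the first release of a data set document.
print(accento_ref(2020, "DS", "Impingement heat transfer", 1, 0, "pdf"))
# -> ACCENTO-2020-DS Impingement heat transfer R1.0.pdf
```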
Modifications brought to the documents are identified in the "Document
history" section on the front page. The corresponding "Reason of change"
column details the origin of the modifications and summarizes the implemented
modifications.
<table>
<tr>
<th>
DOCUMENT HISTORY
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Version
</td>
<td>
Date
</td>
<td>
Changed by
</td>
<td>
Reason of change
</td> </tr>
<tr>
<td>
1.0
</td>
<td>
01.01.2019
</td>
<td>
A. Aaa
</td>
<td>
First version
</td> </tr> </table>
2. Making data openly accessible
By default, all scientific publications will be made publicly available with
due respect for the Green/Gold access regulations applied by each scientific
publisher: all the related scientific data will be made available on open
research data repositories. All the open research data publicly released by
ACCENTO will be stored in the ZENODO open repository, as strongly suggested by
the CS2JU, since it permits a direct connection with the EU OpenAIRE system.
As mentioned above, no data will be passed into the Open Access channel
without an explicit agreement, on every single item, of the Management Team of
the ACCENTO project, which is formed by the PC and by the TL. Each ACCENTO
consortium partner has to respect the policies set out in the DMP.
3. Making data interoperable
The interoperability of the ACCENTO published datasets will be enforced by the
adoption of:
* generally used extensions, adopting well-established formats (whenever possible);
* clear metadata and keywords to facilitate discovery and integration of ACCENTO data for other purposes;
* detailed documentation (such as a user guide, for instance).
User interfaces will be developed and documented where needed. A clear and
common vocabulary will be adopted for the definition of the datasets,
including variable names, spatial and temporal references and units (complying
with SI standards).
4. Increase of data re-use (through clarifying licenses)
ACCENTO is expected to produce a considerable volume of novel data and
knowledge through experimental investigation that will be presented to the
outside world through a carefully designed set of dissemination actions (See
DI .2 for more details).
The ACCENTO consortium will specify a license for all publicly available
files. A licence agreement is a legal arrangement between the
creator/depositor of the data set and the data repository, signifying what a
user is allowed to do with the data. An appropriate licence for the published
datasets will be selected by the ACCENTO consortium as a whole by using the
standards proposed by Creative Commons (2017) [1].
Open data will be made available and accessible at the earliest opportunity on
the "Zenodo" repository. This fast publication of data is expected to promote
the data re-use by other researchers and industrials active in the field of
aeroengine combustors and CFD modelling in general, thereby contributing to
the dissemination of ACCENTO concepts, developed models and state-of the art
experimental results. Possible users will have to adhere with the "Zenodo"
Terms of Use and to agree with the licensing content.
3. ALLOCATION OF RESOURCES
Costs related to the open access and data strategy:
* Data storage in partner data repositories and storage systems: Included in partners structural operating cost.
* Data archiving with ZENODO data repositories: Free of charge.
4. DATA SECURITY
The exchange of data among the ACCENTO partners during the project execution
will be carried out by using a web based service available at ERG. A dedicated
storage area has been created and access granted to all ACCENTO members.
5. ETHICAL ASPECTS
The ACCENTO consortium complies with the ethical principles as set out in
Article 34 of the Grant Agreement, which states that all activities must be
carried out in compliance with:
1. Ethical principles (including the highest standards of research integrity — as set out, for instance in the European Code of Conduct for Research Integrity — and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct)
2. Applicable international, EU and national law.
These ethical principles also cover the data management activities. The data
generated in the frame of the ACCENTO project are not subject to ethical
issues.
6. OTHER ISSUES
The ACCENTO project does not make use of other
national/funder/sectorial/departmental procedures for data management.
REFERENCES
[1] Creative Commons (2017). Creative Commons Attribution 4.0 International.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1546_UNITI_848261.md
|
**Introduction:** The EU Tinnitus Database will be used for the storage of
clinical data within the UNITI project. Historically, the EU Tinnitus Database
is based on the ESIT Database, which is used in the H2020 project ESIT and
hosted by the tinnitus group of the University of Regensburg.
**Types of data:** The following types of data are stored in the EU Tinnitus
Database: medical and tinnitus-related history; audiological examinations;
questionnaire data. It also allows uploading and storing data files for
individual patients.
**Sources of data:** The EU Tinnitus database is fed with data collected by
the individual clinical partners.
**Ownership of data:** The owner of the data is always the Centre where the
data was collected. This centre is represented by the principal investigator
who is responsible for it.
**Pseudonymisation of data:** The EU Tinnitus database does not store personal
information like names, addresses, phone numbers, e-mail addresses, or IP
addresses that can be used to identify a certain participant directly. A
system of two-level pseudo identifiers will be used to anonymise the data. The
first pseudo identifier (PSID1) will be generated in the system(s) by the
system that is the source of the data (i.e., it generates them or it is used
to record them in the first place). Data communicated to the unified UNITI
database will have PSID1 but before stored in it, PSID1 will be replaced by a
second pseudo identifier (PSID2) which will be generated by the UNITI
database. Subsequently, data will be recorded with PSID2. PSID1 will PSID1 and
PSID2 will be used to link and combine data from different sources. In cases
where the data are inserted directly into the UNITI database only PSID2 will
be generated from the real identifier.
To enable responses to requests made under GDPR, the association between the
real identifier of a patient and PSID1 (PSID2) will be maintained in encrypted
form at the source system that provided the data for the particular patient.
The association between PSID2 and PSID1 (when necessary) will also be
maintained separately and in encrypted form. These associations will be
accessible only to authorised and authenticated software components of the
UNITI platform and users with appropriate authorisation rights, who may need
to access the real identity of a patient.
Furthermore, when the data are exported for further analysis, PSID2 is removed
and replaced by a random unique identifier. All types of data analysis can
only be executed upon the anonymised data set.
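A minimal sketch of this two-level scheme follows, assuming keyed hashing as one possible way to derive the pseudo identifiers; key handling, identifier formats and function names are simplifications, not the UNITI implementation (which, as described above, additionally keeps encrypted associations to answer GDPR requests).

```python
# Minimal sketch of the two-level pseudo-identifier scheme described above.
import hashlib
import hmac
import uuid

SOURCE_KEY = b"source-system-secret"  # held only by the source system
UNITI_KEY = b"uniti-db-secret"        # held only by the UNITI database

def psid1(real_id: str) -> str:
    """Source system derives PSID1 from the real identifier."""
    return hmac.new(SOURCE_KEY, real_id.encode(), hashlib.sha256).hexdigest()

def psid2(p1: str) -> str:
    """UNITI database replaces PSID1 with PSID2 before storing a record."""
    return hmac.new(UNITI_KEY, p1.encode(), hashlib.sha256).hexdigest()

def export_id() -> str:
    """On export, PSID2 is dropped and replaced by a random identifier."""
    return uuid.uuid4().hex

record = {"tinnitus_score": 42}
p1 = psid1("patient-007")                 # leaves the source with PSID1
stored = {**record, "id": psid2(p1)}      # stored in UNITI with PSID2
exported = {**record, "id": export_id()}  # anonymised analysis export
```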
Personal information of individual patients is maintained only within the
system of the local centre, which has responsibility for the patient and,
therefore, needs to identify the patient. If a patient can be identified
individually, e.g. by the value of the external identifier attribute, and
requests their data to be anonymised or deleted according to the rules set by
the General Data Protection Regulation (GDPR), the database administrative
staff of the local systems of the centres anonymise or delete the
participant's data permanently, as requested.
**Data quality:** The first step taken to ensure high-quality data within this
framework was to implement the user interface using standardised input fields
that can be used to assist and restrict user input where reasonable. One
example is the use of standardised inputs for integral values that will report
ill-formed input to the user automatically. Whenever a participant decides to
save their current progress, the data are validated on the server side and the
results will be reported to the user in different ways. For every validation
error, the corresponding question is highlighted and the part that includes
the error is indicated. Furthermore, the overall state of data entry is
indicated with localised status messages and textual instructions.
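As an illustration of the validation behaviour described above, the sketch below shows a minimal server-side check that reports every validation error against the question that caused it. The question names and validation rules are hypothetical, not those of the actual database.

```python
# Minimal sketch of per-question server-side validation (hypothetical rules).
def validate(answers: dict) -> dict:
    errors = {}
    # Standardised integral input: age must be a whole number in a sane range.
    age = answers.get("age")
    if not isinstance(age, int) or not (0 <= age <= 120):
        errors["age"] = "Please enter a whole number between 0 and 120."
    # Required questionnaire item.
    if answers.get("tinnitus_duration") is None:
        errors["tinnitus_duration"] = "This question is required."
    return errors  # empty dict => the data set passes validation

print(validate({"age": "forty"}))  # both errors are reported to the user
```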
**Languages:** The database offers support for different languages. These
include Dutch, German, Greek, Italian, Polish, Spanish, and Swedish. The
available languages cover all languages of the UNITI clinical centres, plus
additional languages. The language settings of the database can be changed by
the different users.
**Web access:** The database can be reached via www.tinnitus-database.eu. Only
contributors and members of the EU Tinnitus Database have access to it.
**Database manager:** The database will be managed by Jorge Simoes from UHREG,
who oversees the database, ensuring its maintenance, the data handling plan,
data quality and further developments, if needed.
**Role based access control.** Different levels of access to the database are
provided subject to the roles of individual users. Users are members of staff
of the different clinical centres. In general, users are researchers and they
can only see the data of their own centre. At the moment, the roles supported
are (Clinical) Centre Editors, Experts and Administrators. Role based access
control is enforced by a role management system, allowing Centre Editors to
view and edit individual patient datasets of their respective centre. In
addition to the rights of Centre Editors, Centre Experts can export data of
all the patients of their centre for data analysis. Centre Administrators have
all rights of the Centre Editors and Centre Experts but they can also add new
users to the database or remove users from it. There is also a database
administrator user (Superadmin) based at the University of Regensburg, who has
access to the data from all centres. Our expectation is that the above role
types will remain in UNITI and may be expanded by other additional roles
(e.g., Centre admin users with enhanced rights).
**Local data analysis:** Local data analysis means a data analysis that is
based on the data set that has been collected at the Centre that is performing
the data analysis. The person who is responsible for the local data analysis
is identical to the owner of the data. Individual centres (i.e., clinical
partners in UNITI) can perform data analysis on their own data. This right
arises from their role.
**Multi-centre analysis:** The standardised data assessment and storage in the
EU Tinnitus database allows efficient multi-centre data analysis of the
participating partners. Multi-centre analysis means a data analysis that is
based on data that have been collected by two or more partner centres. If one
or more partner centres plan to perform a multi-centre data analysis, they
need to contact the owners of the datasets that will be included in the
envisaged analysis. An agreement on the aim of the multi-centre data analysis
and the authorship of any paper that might result from the analysis needs to be
settled and signed by all data owners. After sending this data analysis
agreement to the database administrator, the administrator provides access to
the respective data set as determined by the agreement. Multi-centre analysis
can only be applied to anonymised datasets.
**Data Handling Plan.** A data handling plan has been developed as a manual
for understanding and analysing the data stored in the EU Tinnitus Database
that will be included in the UNITI database. This plan gives an overview of
how to interpret the data, how questionnaire items are coded, how missing values
are coded, the rules for calculating sum scores of the clinical questionnaires
and all other information that is needed for a unified interpretation of the
data. The data handling plan is written in English and is accessible to all
contributors via the EU
Tinnitus Database in a dedicated download section. The data handling plan is
kept up to date by UHREG using a revision numbering system. The database
administrator oversees the updates of the data handling plan.
**Data protection:** Data protection is a core consideration, and relevant
ethical, legal and privacy concerns will be addressed accordingly. The Data Protection
Officer (DPO) of the University of Regensburg (Germany) is also the Data
Protection Officer of the EU Tinnitus Database. He is responsible for
overseeing the data protection strategy and implementation to ensure
compliance with the EU General Data Protection Regulation (GDPR, 2016/679).
Furthermore, to ensure that the project will be able to respond to data subject
access requests (DSA) under GDPR, associations between the pseudo identifiers
and real identifiers of subjects will be maintained separately from the
clinical data, in encrypted form, at the source systems.
**Data transfer:** The rules for secure data transfer within the UNITI
consortium are regulated by the UNITI Consortium Agreement under chapter 12.
**Data export and data analysis:** Within the internal section of the
database, the staff can monitor and review data entry and export the data when
needed. Authorised staff can configure custom selection criteria depending on
analysis or study requirements. For example, the data export can be configured
to exclude datasets that are not fully validated, are missing certain items,
or meet other criteria for exclusion. Each data export will be automatically
recorded in a log file that contains all configuration settings chosen by the
user and a time stamp. This allows reconstruction of the data export, if needed.
The data will be exported without any personal information of the participants
and the pseudonymization code will be removed automatically. Therefore, all
data analysis will be performed on so-called anonymized data sets. The data
export will be saved in a CSV file, with horizontal or vertical data format.
The CSV file is readable by all major statistical software packages. The
majority of data analyses will be executed using the open source statistical
software package R (www.r-project.org). However, each researcher is free to
use the statistical software of choice.
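A hedged sketch of this export-and-log workflow is given below: records are filtered by the configured criteria, written to CSV, and the configuration plus a timestamp are appended to a log file so the export can be reconstructed later. Field and file names are illustrative only, not those of the actual system.

```python
# Illustrative export with automatic logging (hypothetical field names).
import csv, json
from datetime import datetime, timezone

def export_records(records, config, csv_path="export.csv",
                   log_path="export_log.jsonl"):
    # Keep only records matching the configured selection criteria.
    kept = [r for r in records
            if all(r.get(k) == v for k, v in config["filters"].items())]
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=config["columns"])
        writer.writeheader()
        for r in kept:
            writer.writerow({k: r.get(k) for k in config["columns"]})
    # Record the full configuration and a timestamp for reconstruction.
    with open(log_path, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "config": config}) + "\n")

export_records([{"centre": "A", "validated": True, "thi": 34}],
               {"filters": {"validated": True}, "columns": ["centre", "thi"]})
```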
**Physical database:** The data is stored in a relational database format
using MariaDB 11 (www.mariadb.org), which runs in a Linux environment (Debian
Buster, www.debian.org/releases/buster/) and a LAMP technology stack
(www.whatis.techtarget.com/definition/LAMP-Linux-Apache-MySQL-PHP).
**Backup:** A backup of the database is performed every night for data
security reasons whereby the database is backed up to a server hosted by the
Strato AG (Berlin, Germany) and to a second server located at the DBIS
institute. All servers are located in Germany. A Secure Sockets Layer (SSL)
protocol is used for all data transfers.
## Unified database for smartphone data
**Introduction:** The unified database for smartphone data has the purpose to
store the data that is collected with the smartphone apps within UNITI.
**Types of data:** The following types of data are stored in the app database
and the local smartphone devices:
* questionnaire data collected using Ecological Momentary Assessment (EMA) methodology;
* time stamps of app usage;
* in case of auditory stimulation: subjective ratings of the tinnitus suppression;
* in case of psychoeducation app: results of the quiz;
* in case users allow it: GPS location and sound pressure of the surrounding environment while filling out the EMA questionnaires.
**Sources of data:** The smartphone device, its sensors and the UNITI app
running on it.
**Ownership of data:** The patient who uses the smartphone device and UNITI app.
**Physical database:** The data collected from the smartphone is stored in a
relational database using MariaDB 11, which runs in a Linux environment
(Debian Buster) and a LAMP technology stack. This database will be separate
from the UNITI database and may be referred to as the “smartphone database”.
The smartphone database will only be accessible to the patients through the
UNITI mobile app running on the smartphones of the patients. Access to it will
also require the authentication of the particular UNITI mobile app instance as
well as the authentication of the user of this app (the patient). Smart phone
data will also be stored separately in the UNITI database for analysis
purposes.
**Pseudonymization of data:** The smartphone database does not store personal
information like names, addresses, phone numbers, e-mail addresses, or IP
addresses that can be used to identify a certain participant directly. Data
from the smartphone will be stored in the smartphone database and the UNITI
database separately after pseudonymisation as described in Section 2.1.
Furthermore, on the smartphone database, data will be stored in encrypted
form.
**Data quality:** Checks similar to those described for the UNITI database
will also be implemented for the mobile app in order to ensure the quality of
the data collected by it.
**Languages:** The mobile app offers support for different languages, as
described for the UNITI database.
**Access of data:** The patients can see their own data on their smartphone
device. Such data will be maintained in encrypted form on the mobile device,
accessible only after proper authentication of the patient/user of the mobile
app. This authentication will be based on password or fingerprint authentication.
Mobile app data will also be transferred and backed up periodically on a
backend server of the UNITI platform (separate from the UNITI unified
database), to ensure availability in the event of loss of the mobile device
and to ensure that the storage space on the device will always be sufficient
for storing the latest data of the patient. Backed up data will be stored in
encrypted form and will be accessible only to the client app running on the
mobile device. Users of the UNITI unified database will not have access to
such data (although they will have access to their pseudo-anonymised
counterparts).
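The sketch below illustrates one common way to realise password-based encryption of locally stored app data, using the Python `cryptography` package. It is an assumption-laden illustration of the approach described above, not the UNITI app code.

```python
# Illustrative password-derived encryption of app data at rest.
# Requires the "cryptography" package (pip install cryptography).
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

salt = os.urandom(16)                      # stored alongside the ciphertext
f = Fernet(key_from_password("user-passphrase", salt))
token = f.encrypt(b'{"ema_rating": 4}')    # encrypted at rest on the device
assert f.decrypt(token) == b'{"ema_rating": 4}'
```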
## Matching data from the EU Tinnitus Database with the app data
A unique patient identifier will be used in the EU Tinnitus Database as well
as for the smartphone apps. This identifier will be used to match the data
from the two SQL databases.
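A minimal sketch of such identifier-based matching is shown below, using an in-memory SQLite database for brevity; table and column names are hypothetical, and the real databases are separate MariaDB instances.

```python
# Illustrative join of two databases on a shared patient identifier.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tinnitus_db (patient_id TEXT, thi_score INTEGER);
    CREATE TABLE smartphone_db (patient_id TEXT, ema_rating INTEGER);
    INSERT INTO tinnitus_db VALUES ('P2-ab12', 34);
    INSERT INTO smartphone_db VALUES ('P2-ab12', 4);
""")
rows = con.execute("""
    SELECT t.patient_id, t.thi_score, s.ema_rating
    FROM tinnitus_db t JOIN smartphone_db s USING (patient_id)
""").fetchall()
print(rows)  # [('P2-ab12', 34, 4)]
```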
# Data handling after the project and openly accessible data
## Publication data
We envisage that a repository for depositing publications and data related to
them will be needed. Zenodo (zenodo.org) is a good candidate for such
repository and it will be considered further as a candidate repository for
publication data. Furthermore, the DataCite Metadata Schema
(schema.datacite.org), will be adopted in this case.
Furthermore, UNITI advocates the “FAIR” principles 1 with regard to
supporting open access to publication data and anonymised patient data. The
FAIR principles require data to be findable (F), accessible (A), interoperable
(I) and re-usable (R). These principles precede implementation choices and do
not necessarily suggest any specific technology, standard, or implementation-
solution. The measures that we will take to support these principles are
summarised below.
## Findable data
**F1** : (meta)data are assigned a globally unique and persistent identifier
* A persistent and unique Digital Object Identifier (DOI) is issued to every published record. Moreover, DOI versioning is supported and enables users to update the record’s files after they have been made public and researchers to easily cite either specific versions of a record or to cite, via a top-level DOI, all the versions of a record.
**F2** : data are described with rich metadata (defined by R1 below)
* The repository’s metadata schema will be compliant with DataCite's Metadata Schema minimum and recommended terms, with a few additional enrichments. As there are no specific metadata schemas that can be used with the UNITI data, this more generic schema will be adopted.
**F3** : metadata clearly and explicitly include the identifier of the data it
describes
* The DOI is a top-level and mandatory field in the metadata of each record.
**F4** : (meta)data are registered or indexed in a searchable resource
* Metadata of each record will be indexed and searchable directly in the repository’s search engine immediately after publishing.
* Metadata of each record is sent to DataCite servers during DOI registration and indexed there.
## Accessible data
**A1** : (meta)data are retrievable by their identifier using a standardized
communications protocol
* Metadata for individual records as well as record collections are harvestable using the OAI-PMH protocol by the record identifier and the collection name.
* Metadata is also retrievable through the public REST API (see the sketch at the end of this subsection).
**A1.1** : the protocol is open, free, and universally implementable
* See point A1. OAI-PMH and REST are open, free and universal protocols for information retrieval on the web.
**A1.2** : the protocol allows for an authentication and authorisation
procedure, where necessary
* Metadata are publicly accessible and licensed under public domain. No authorization is ever necessary to retrieve it.
**A2** : metadata are accessible, even when the data are no longer available
* Data and metadata will be retained for the lifetime of the repository.
* Metadata are stored in high-availability database servers at ULM, which are separate to the data itself.
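As referenced under A1 above, the following sketch shows the two retrieval paths for a record's metadata: Zenodo's public REST API and its OAI-PMH endpoint. The record ID used here is hypothetical and would need to be replaced with a real one; the endpoints are those publicly documented by Zenodo.

```python
# Illustrative metadata retrieval by identifier (hypothetical record ID).
import urllib.request, json

record_id = 1234567  # hypothetical Zenodo record
rest_url = f"https://zenodo.org/api/records/{record_id}"
oai_url = ("https://zenodo.org/oai2d?verb=GetRecord&metadataPrefix=oai_dc"
           f"&identifier=oai:zenodo.org:{record_id}")

with urllib.request.urlopen(rest_url) as resp:   # JSON metadata via REST
    metadata = json.load(resp)
print(metadata.get("doi"))

with urllib.request.urlopen(oai_url) as resp:    # Dublin Core XML via OAI-PMH
    print(resp.read(200))
```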
## Interoperable data
**I1:** (meta)data use a formal, accessible, shared, and broadly applicable
language for knowledge representation.
* Zenodo uses JSON Schema as internal representation of metadata and offers export to other popular formats such as Dublin Core or MARCXML.
**I2:** (meta)data use vocabularies that follow FAIR principles
* For certain terms it refers to open, external vocabularies, e.g.: license (Open Definition), funders (FundRef) and grants (OpenAIRE).
**I3:** (meta)data include qualified references to other (meta)data
* Each referenced external piece of metadata is qualified by a resolvable URL.
## Re-usable data
**R1:** (meta)data are richly described with a plurality of accurate and
relevant attributes
* Each record contains a minimum of DataCite's mandatory terms, with optionally additional DataCite recommended terms and Zenodo's enrichments.
**R1.1:** (meta)data are released with a clear and accessible data usage
license
* License is one of the mandatory terms in Zenodo's metadata, referring to an Open Definition license; within UNITI, restricted access will be chosen for the patient data.
* Data downloaded by the users is subject to the license specified in the metadata by the uploader.
**R1.2:** (meta)data are associated with detailed provenance
* All data and metadata uploaded are traceable to a registered Zenodo user.
* Metadata can optionally describe the original authors of the published work.
## Patient data
The UNITI project aims towards open data. However, depending on the
regulations of the local ethical committees, it might be that not all data can
be made open. Our overall aim is to turn as much data as possible into open
data, which will be provided only in anonymized form. To ensure the correct
usage of the data that will be made open, we aim for an open access
publication that will describe the data set, the interpretation of the values,
the recruitment procedure and all other details that will be needed for
further research on the particular open dataset.
It is foreseen that, at the end of the project, the associations between
pseudo identifiers enabling the pseudonymization of patient data, described in
Section 2, will be completely deleted. This will lead to full anonymisation of
the acquired data set. Nevertheless, it is possible that different
regulations of the ethical committees of the different clinical partners or
legal interpretations of GDPR taken at national level might enforce an
earlier or later time point of deletion.
After full anonymisation is carried out, our aim is to maintain the fully
anonymised EU Tinnitus Database for at least 10 years after the lifetime of
the UNITI project in order to ensure that the data are findable, accessible,
interoperable and re-usable (FAIR).
Towards this direction, the UNITI consortium will also consider the
possibility of offering the data set that may become open source in formats
that would enable the viewing of the data through commonly used software such
as video viewers and/or text editing software. For processing, it will also
consider making the dataset available in formats that would enable their
processing in open source data analysis software like, for instance:
* GNU Octave (www.gnu.org/software/octave/)
* Scilab (www.scilab.org)
The possibility of providing open source data through the Open Research Data
(ORD) pilot initiative 2 of the European Commission will also be
investigated. ORD has been created primarily with the intention to enable the
provision of data needed to validate the results presented in scientific
publications. However, it also covers other types of data that are voluntarily
offered by beneficiaries of Horizon 2020 projects. A portal making available
such datasets has also been set up and maintained by the EU, i.e., the EU Open
Data Portal (ODP) 3 . The use of ORD and ODP will be considered as a means
of offering as open source data sets required for the validation of UNITI
publications and/or wider anonymised datasets, as discussed above.
All the above actions as well as any other action that may be necessary for
the purpose of providing anonymised datasets of UNITI as open source data will
be taken only after approval from the UNITI management board.
# Conclusions
This deliverable is the first version of the data management plan of UNITI and
has described how data collected for the purposes of the project will be
handled during and after the end of the project.
In summary, UNITI commits to specific security and privacy control measures
to ensure the preservation of the integrity, availability and confidentiality
of the data that will be collected in the project, as well as compliance with
the regulatory requirements of the General Data Protection Regulation
(GDPR). The project also commits to making the data about its publications and
the collected clinical data available as open source, if that is allowed
by the ethics and other requirements of the clinical partners who own the
datasets.
As a final concluding remark, it should be noted that although we have
conducted an initial analysis aimed at envisaging what would be necessary for
the above purposes, we may still encounter a need to revise it. If such a
need arises during the project, the data management plan will be amended
and communicated to all stakeholders, who will be notified about the
amendments.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1550_CATANA_864719.md
|
# CATANA H2020 CS2 PROJECT
## Deliverable 6.2
### 1\. Data management and responsibility
The CATANA project is engaged in the Open Research Data (ORD) pilot which aims
to improve and maximise access to and re-use of research data generated by
Horizon 2020 projects and takes into account the need to balance openness and
protection of scientific information, commercialisation and Intellectual
Property Rights (IPR), privacy concerns, security as well as data management
and preservation questions.
The management of the project data/results requires decisions about the
sharing of the data, the format/standard, the maintenance, the preservation,
etc.
Thus the Data Management Plan (DMP) is a key element of good data management
and is established to describe how the project will collect, share and protect
the data produced during the project. As a living document, the DMP will be
updated over the lifetime of the project whenever necessary.
In this frame the following policy for data management and responsibility has
been agreed for the CATANA project:
* **The CATANA Management Team (Coordinator ECL, Leader ECL-LMFA/ ECL-LTDS / VKI)** analyses the results of the CATANA project and will decide the criteria used to select the data for which to make the OPT-IN. For each dataset they designate a responsible person (the Data Management Project Responsible, DMPR) who will ensure dataset integrity and compatibility for its internal and external use during the programme lifetime. They also decide where to upload the data, when to upload it, how often to update it, etc.
* **The Data Management Project Responsible (DMPR)** is in charge of the integrity of all the datasets, their compatibility, the criteria for data storage and preservation, the long-term access policy, the maintenance policy, quality control, the DMP’s updates, etc. He will of course discuss and validate these points with the Project Management Team (ECL and VKI).
<table>
<tr>
<th>
**Data management Project Responsible (DMPR)**
</th>
<th>
**Christoph BRANDSTETTER**
</th> </tr>
<tr>
<td>
DMPR Affiliation
</td>
<td>
Ecole Centrale de Lyon
</td> </tr>
<tr>
<td>
DMPR mail
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
DMPR telephone number
</td>
<td>
+33 (0) 4.72.18.61.94
</td> </tr> </table>
* **The Data Set Responsibles (DSR)** are in charge of their single Dataset and should be the partner producing the data: validation and registration of datasets and metadata, updates and management of the different versions, etc. The contact details of each DSR will be provided in each data set document presented in the annex I of the DMP.
In the next section “2. Data summary”, the CATANA Project Management Team (ECL
and VKI) has listed the data/results that will be generated by the project and
has identified which data will be open. The anticipated database
resulting from the experimental campaigns and the development of the open-
test-case will be exhaustive. All validated data will be made accessible to
the open domain. The internal database only contains non-validated results and
data concerning facility infrastructure and manufacturing documents which are
not related to the open-test-case.
### 2\. Data summary
The next table presents the different dataset generated by the CATANA project.
For each dataset that will be open to public, a dedicated dataset document
will be completed in Annex I once the data are generated.
_Explanation of the columns:_
* **Nature of the data** : experimental data, numerical data, documentation, software code, hardware, etc.
* **WP generation** : work package in which the database is generated
* **WP using** : work package in which data are reused in the CATANA project
* **Data producer** : partner who generates the data
* **Format** : can be .pdf / .step / .txt / .bin, etc.
* **Volume** : expected size of the data
* **Purpose / objective** : purpose of the dataset and its relation to the objectives of the project.
* **Dissemination level** : internal (non-validated results and data concerning facility infrastructure and manufacturing documents) / public (validated experimental and design data, metadata and open-test-case)
<table>
<tr>
<th>
</th>
<th>
**Dataset**
</th>
<th>
**Nature of the data**
</th>
<th>
**WP**
**generation**
</th>
<th>
**WP**
**using**
</th>
<th>
**Data producer**
</th>
<th>
**Format**
</th>
<th>
**Volume**
</th>
<th>
**Purpose/objectives**
</th>
<th>
**Dissemination Level**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
**Test Bench / Geometry Data**
</td>
<td>
CAD/Plan
</td>
<td>
WP 1/2
</td>
<td>
WP
3,4,5
</td>
<td>
ECL-LMFA
</td>
<td>
.pdf,
.step
</td>
<td>
1 GB
</td>
<td>
* Contains plans and CAD of test vehicles.
* Provides necessary information for test bench implementation and numerical simulation.
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
**Rotor Blade**
**Manufacturing**
**Documents**
</td>
<td>
CAD/Plan
</td>
<td>
WP 1/2
</td>
<td>
WP
3,4,5,6
</td>
<td>
ECL-LMFA
</td>
<td>
.txt,
.step,
.pdf
</td>
<td>
1 GB
</td>
<td>
Allows Fabrication of Rotor blades by
Manufacturer
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
**Accessory Part**
**Manufacturing Documents**
</td>
<td>
CAD/Plan
</td>
<td>
WP 1/2
</td>
<td>
WP
3,4,5,6
</td>
<td>
ECL-LMFA
</td>
<td>
.txt,
.step,
.pdf
</td>
<td>
1 GB
</td>
<td>
\- Allows fabrication of Nose Cone, Liner,
Interblade Platform, OGV, OGV Platform, Liner Inserts
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
**components screening data**
</td>
<td>
Experimental measurements, tomography
</td>
<td>
WP 2
</td>
<td>
WP 4,5
</td>
<td>
ECL-LMFA
</td>
<td>
.txt,
.bin
</td>
<td>
1 TB
</td>
<td>
* Contains all measurements in measured primary units (generally volt). Including geometry, roughness, weight, porosity
* Provides measurement ready to be converted in the physical units.
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
**Rotor structural**
**Data (Ping**
**Test/Vacuum**
**Test PHARE-1)**
</td>
<td>
Metrology
</td>
<td>
WP 3
</td>
<td>
WP
4,5,6
</td>
<td>
ECL-LTDS ECL-LMFA
</td>
<td>
.txt,
.bin
</td>
<td>
10 GB
</td>
<td>
* Contains sensors calibration and position, test-bench qualification tests, tests log …
* Provides necessary information on the measurements and 3D test bench setup.
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
**Measurement Results PHARE-2**
</td>
<td>
Metrology
</td>
<td>
WP5
</td>
<td>
WP 6
</td>
<td>
ECL-LMFA
VKI
</td>
<td>
.txt,
.tdms,
.bin,
.b16,
.cgns
</td>
<td>
100 TB
</td>
<td>
\- Raw data and calibrated data of:
Performance Instr.; Strain Gauges, Wall
Pressure/Microphone, Tt/Pt Probes, Tip
Timing, Tip Clearance, PIV, LDA
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
**Validated experimental data**
</td>
<td>
Experimental Measurements
</td>
<td>
WP 5
</td>
<td>
WP 6
</td>
<td>
ECL-LMFA
VKI
</td>
<td>
.txt,
.tdms,
.bin,
.b16,
.cgns
</td>
<td>
1 TB
</td>
<td>
* Contains measurement descriptions and the operating conditions from the validated experimental database.
* Provides necessary information to perform analysis of the validated experimental database.
</td>
<td>
Public
</td> </tr>
<tr>
<th>
**9**
</th>
<th>
**Published experimental data**
</th>
<th>
Experimental Measurement
</th>
<th>
WP 5
</th>
<th>
WP 6
</th>
<th>
ECL-LMFA
VKI
</th>
<th>
.txt,
.tdms,
.bin,
.b16,
.cgns
</th>
<th>
50 GB
</th>
<th>
* Contains experimental data used for publication purposes.
* Provides an experimental open-access database for the research community.
</th>
<th>
Public
</th> </tr>
<tr>
<td>
**10**
</td>
<td>
**Published Open Test Case Data**
</td>
<td>
CAD/Plan
Experimental
Measurement
</td>
<td>
WP 1,2,3
</td>
<td>
WP 6
</td>
<td>
ECL-LMFA
ECL-LTDS
VKI
</td>
<td>
.txt,
.tdms,
.cgns,
.docx,
.pdf
</td>
<td>
50 GB
</td>
<td>
* Contains geometry data of Rotor, Annulus and OGV incl. roughness and measured tip clearance for each blade, Measured
structure-dynamic spectra of Rotor blades,
Eigenmodes, structural damping, mistuning patterns,
* Provides necessary information to perform analysis of the validated experimental database (metadata).
</td>
<td>
Public
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
**Experimental**
**Documentation DATA**
</td>
<td>
Documentation
</td>
<td>
WP 3,4,5
</td>
<td>
</td>
<td>
ECL-LMFA
ECL-LTDS
VKI
</td>
<td>
.docx+ .pdf
</td>
<td>
10 MB
</td>
<td>
* Contains the experimental strategy setup, all plan
* Provides the necessary setup to realize experiments.
</td>
<td>
Internal
</td> </tr> </table>
### 3\. FAIR Data
**3.1 Making data findable**
#### Public database (data sets 9 and 10)
The databases generated in the project will be identified by means of a
Digital Object Identifier linked to the published paper, and archived on the
ZENODO searchable data repository together with pertinent keywords. As part of
the attached documentation, the file naming convention will be specified on a
case-by-case basis. In case of successive versions of a given dataset, version
numbers will be used. Where relevant, the databases will be linked to
metadata.
Articles and the attached data will be findable via their DOI, a unique and
persistent identifier. A DOI is usually issued to every published record by
each publisher and by other repositories such as ZENODO, HAL and
ResearchGate. A homepage of the CATANA project will be created on ResearchGate
with a link to ZENODO to make the data findable.
#### Internal database
##### Database repository
Internal databases are composed of both the methods and the results. ECL and
VKI (as indicated in Table 1) are the owners of all results. Partners are
owners of methods used to generate results. Each owner is responsible for its
database repository.
##### Data identification
Each measurement's raw data are identified by a unique identifier. Each
measurement is recorded in the test log using this identifier together with the
measurement information. Validated measurement data use the same
identification as the corresponding raw data. The main information on each
measurement is reported in the experimental data guide.
###### 3.2 Making data openly accessible
By default, all scientific publications will be made publicly available with
due respect of the Green / Gold access regulations applied by each scientific
publisher. Whenever possible, the papers will be made freely accessible
through the project web site and the open access online repositories ArXiv and
HAL.
The public databases that will be selected to constitute the project
validation benchmarks will be archived on the ZENODO platform, and linked from
the CATANA project website. If the volume of the produced data exceeds
the limitations of the open repository, the data will be made accessible via the
CATANA website hosted by Ecole Centrale de Lyon with a link to the data
included in the ZENODO repository. The most relevant data will be stored
directly in ZENODO. Ascii-readable file formats will be preferred for small
datasets, and binary encoding will be implemented for large datasets, using
freely available standard formats (e.g. the CFD Generic Notation System) for
which the source and compiled import libraries are freely accessible. In the
latter case, the structure of the binary records (headers) will be documented
as part of the dataset. The CATANA Consortium as a whole will examine the
suitability of the datasets produced by the project for public dissemination,
as well as their proper archival and documentation. Each dataset will be
associated with a name of a partner responsible for its maintenance.
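As an illustration of documenting the structure of binary records, the sketch below reads a small self-describing binary file whose header layout is specified alongside the data. The header layout shown is hypothetical, not the actual CATANA record format (which would typically use standards such as CGNS).

```python
# Illustrative sketch: reading a binary record with a documented header.
import struct

HEADER_FMT = "<4sHHI"  # magic, version, n_channels, n_samples (hypothetical)

def read_record(path: str):
    with open(path, "rb") as f:
        magic, version, n_channels, n_samples = struct.unpack(
            HEADER_FMT, f.read(struct.calcsize(HEADER_FMT)))
        assert magic == b"CATA", "unexpected file type"
        payload = struct.unpack(f"<{n_channels * n_samples}f",
                                f.read(4 * n_channels * n_samples))
    return version, n_channels, n_samples, payload

# Write a matching record for demonstration, then read it back.
with open("demo.bin", "wb") as f:
    f.write(struct.pack(HEADER_FMT, b"CATA", 1, 2, 3))
    f.write(struct.pack("<6f", *range(6)))
print(read_record("demo.bin"))
```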
##### Access procedures
Databases declared public will be available to third parties on online
repositories (ZENODO). Each dataset contains the conditions of use of the
public data in its file header. These conditions include an obligation to cite
the original papers, the project name and a reference to the Clean Sky 2 Joint
Undertaking under the European Union’s Horizon 2020 programme.
##### Tools to read or reuse data
Public data are produced in common electronic document/data/image formats
(.docx, .pdf, .txt, .jpg, etc.) that do not require specific software.
#### Internal database
##### Access procedures
After agreement within the CATANA Consortium, all validated experimental data
and cleared geometry information can be shared and published according to the
consortium agreement. In the long term, the data generated by ECL and VKI can
be used for internal research.
###### 3.3 Making data interoperable
The interoperability of the published datasets will be enforced by the
adoption of freely available data standards and documentation. Ad-hoc
interfaces will be developed and documented where needed. A common vocabulary
will be adopted for the definition of the datasets, including variable names
and units.
Classical vocabulary in turbomachinery domain is used (based on the experience
of all partners in turbomachinery publications).
#### Public database (databases 9 and 10)
The validated experimental database which will be opened for publication will
contain both dimensional (SI units) and dimensionless variables (Mach number,
pressure ratio, efficiency, etc.) in order to maximize the impact of the
generated results, specifically to permit comparison with other cases and
allow the community to validate computational methods.
#### Internal database
Validated databases are used for analysis. These databases are directly
expressed in physical units (using the SI unit system). The necessary
information about the results is recorded in the different data guides.
**3.4 Increase data re-use**
_Data licence_
Data from public databases are open access and use a Creative Commons licence
(CC BY).
##### Data quality assurance processes
The project will be run within the quality plan developed at LMFA since 2005
in the context of the measurement campaigns carried out with the high-speed
compressors of LMFA. This quality plan is based on an ISO 9001:2000 approach
and on an intranet tool (a MySQL database coupled with a dynamic PHP web site)
used to store, classify and share data between the partners, such as
measurement data and documents, within a reference system.
_After the end of the project_
#### Public database (databases 9 and 10)
Driven by the CATANA project, the open-access databases can be used by other
laboratories and industrial partners to make comparisons with other machines.
The methods developed and the physical analyses will become references for
other test cases and improve the knowledge of the community.
#### Internal database
The experimental setup and the huge quantity of experimental and numerical
data cannot be completely exploited in the CATANA project. The project is the
starting point to a long collaboration. At the end of the project, the re-use
of data and test bench can be:
* Analysis of data generated in the CATANA project:
  * Subsequent projects for consortium members.
  * Additional academic partners working on not-yet-exploited data.
* Supplementary experimental measurements:
  * Using the already installed compressor module at new operating conditions.
  * Measurements of supplementary fields building on CATANA project results.
* Investigation of numerical prediction performance:
  * Calibration of aerodynamic, structure-dynamic, acoustic and coupled numerical methods.
### 4\. Allocation of resources
#### _Costs related to the open access and data strategy_
* Data storage in partner data repositories: Included in partners structural operating cost.
* Data archiving with ZENODO data repositories: Free of charge.
#### _Data manager responsible during the project_
The Project Coordinator (ECL) is responsible for the establishment, the
updates during the lifetime of the project and the respect of the Data
Management Plan. The relevant experimental data and the generated data from
numerical simulations during the CATANA project will be made available to the
Consortium members within the frame of the IPR protection principles and the
present Data Management Plan.
#### _Responsibilities of partners_
ECL (all measurement/geometric data) and VKI (probe measurements) are the
owners of all generated data. Methods and analyses remain the property of the
partner that generated them. Every partner is responsible for the data it
produces, and must contribute actively to the data management as set in the
DMP.
### 5\. Data security
#### Public database (databases 9 and 10)
_Long-term preservation_ : using the ZENODO repository.
_Data Transfer_ : using the ZENODO web platform.
_Intellectual property_ : all public datasets are released under a Creative
Commons licence.
#### Internal Data
_Long-term preservation:_ ensured by partner institutions’ data repositories.
_Data Transfer:_ depending on the data volume:
* Small and medium-sized files are transferred via the partners' secured data-exchange platforms (Renater FileSender, OpenTrust MFT ...)
* Very large files are transferred on an external hard disk during face-to-face meetings. This type of transfer is infrequent and only concerns the transfer of final databases from ECL and VKI.
_Intellectual property:_ Data are confidential; the definitions of data
producer, user and owner must be strictly respected.
### 6\. Ethical aspects
No ethical issue has been identified.
### 7\. Other
No other procedure for data management.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1551_AManECO_864733.md
|
# 1\. INTRODUCTION
In AMANECO several kinds of data will be generated, so Deliverable D6.2 aims at
providing information and guidance for the correct management of that data.
The following list provides an overview of the different type of data
generated:
* Specifications for each testing sample
* Experimental data of testing samples comprising manufacturing, post-treatment and characterization
* Numerical data from FEM and CFD simulations
* Correlation between data from experimental and modelling tasks
* CAD model for the design of heat exchanger
* Experimental data of heat exchanger including manufacturing, post-treatment and characterization
* LCI data during the heat exchanger manufacturing and post-treatment
* LCA data as a correlation between LCI database and process conditions
# 2\. DATA MANAGEMENT AND RESPONSIBILITY
## 2.1. DMP Internal Consortium Policy
According to ORD requirements, the AMANECO Data Management Plan will be ruled
by
FAIR (Findable, Accessible, Interoperable and Reusable) Data Management
Protocols. The ORD pilot applies primarily to the data needed to validate the
results presented in scientific publications. Open-access to other data is
encouraged on a voluntary basis if it is not sensitive or subject to
protection.
Publishable data will be made accessible within 6 months of publishing the
data in peer reviewed scientific articles or similar, unless beneficiaries
have outlined justifiable reasons for maintaining data confidentiality.
Each beneficiary is responsible for their records and documentation in
relation to data generated, which must be in line with the accepted standards
in the respective field (if they exist). To avoid losses, beneficiaries must
take measures to ensure that data is backed up.
The IPR Committee will meet at each face-to face meeting as well as every time
(via teleconference) any WP leader proposes open access of generated data.
## 2.2. Data Management Responsible
The Project Data Contact will be the Project Coordinator, who is the direct
contact with the European Commission and the Topic Manager. She will ensure
that the data Management Plan is respected with the support of the WP leaders.
She will be in charge of:
* Ensuring the data is correctly uploaded into repositories through periodical checks
* Completing the DMP with the links related to the data and its regular update
* Ensuring the data availability
* Ensuring that information related to accessible data is in accordance with the produced data
<table>
<tr>
<th>
**Project Data Contact (PDC)**
</th>
<th>
Emma Gil
</th> </tr>
<tr>
<td>
**PDC Affiliation**
</td>
<td>
LORTEK
</td> </tr>
<tr>
<td>
**PDC mail**
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
**PDC telephone number**
</td>
<td>
+34 943 882 303
</td> </tr> </table>
# 3\. FAIR Data
## 3.1. Making data findable, including provisions for metadata
AMANECO takes part in the ORD Pilot, so it is expected to deposit generated
and collected data in an open online research repository.
The primary repository selected in AMANECO is ZENODO, which was developed by
CERN as part of the OpenAIRE (Open Access Infrastructure for Research in
Europe) project. ZENODO allows researchers to deposit both publications and
data, providing tools to linking them to these through persistent identifiers
and data citations. It facilitates the finding, assessing, re-using and
interoperating of datasets which are the basic principles that ORD projects
must comply with.
The guidelines provided by ZENODO will be used by AMANECO to comply with FAIR
principles.
In order to store and make findable any AMANECO openly accessible data, the
chosen online repository (ZENODO or any other) needs to facilitate
identification of data and refer to standard identification mechanisms
(ideally persistent and unique identifiers such as Digital Object
Identifiers), which should be outlined.
The dataset naming should follow this scheme: [Name of the
project]-[Type of Data]-[Name of dataset]-[Date], where:
* Name of the project: “AMANECO”
* Type of data: “NUM”, “EXP” or “DES”
* Name of the dataset
* Date: YYYY/MM/DD
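A minimal helper illustrating this naming scheme might look as follows; this is a sketch only, since AMANECO does not prescribe any particular tooling.

```python
# Illustrative builder for the AMANECO dataset naming scheme above.
from datetime import date

def dataset_name(data_type: str, name: str, when: date) -> str:
    assert data_type in {"NUM", "EXP", "DES"}, "unknown type of data"
    return f"AMANECO-{data_type}-{name}-{when:%Y/%m/%d}"

print(dataset_name("EXP", "TensileTests", date(2021, 3, 15)))
# -> AMANECO-EXP-TensileTests-2021/03/15
```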
A file will be maintained in the Project Sharepoint by the Project
Coordinator.
The partner generating the data must ensure that research outputs and
datasets cross-reference each other (e.g. scientific publications and the
data behind them).
## 3.2. Making data openly accessible
In order to maximise the impact of AMANECO data, the project will facilitate
sharing of results and data within and outside the consortium. Selected data
and results will be shared with the scientific community and other
stakeholders through publications in scientific journals and presentations at
conferences, as well as through open access data repositories. There will be
an open access policy applied to these following the rules outlined in the
Grant Agreement.
The IPR Committee will review and approve all data that is identified as
appropriate for open access. This process will be carried out on an ongoing
basis to facilitate the publication of appropriate data as soon as possible.
The IPR Committee is responsible for the IPR issues within AMANECO and their
approval will avoid any possible conflicts between open access and IPR issues.
All data will be made available for verification and re-use, unless the WP
leader can justify why data cannot be made openly accessible. The IPR
Committee will assess such justifications and make the final decision, based
on examination of the following elements regarding confidentiality of
datasets:
* Commercial sensitivity of datasets
* Data confidentiality for security reasons
* Conflicts between open-access rules and national and European legislation (e.g.
data protection regulations).
* Sharing data could jeopardise the objectives of the project
* Other legitimate reasons, to be validated by the IPR Committee
Upon deciding that a database should be kept confidential, the reasons for
doing so will be included in an updated version of the DMP. The data will be
accessible through:
* Publications in scientific journals
* The Project website
* ZENODO repository (or any other repository complying with statements in section
3.1)
To encourage re-use and further application of project results, all AMANECO
data that underlies scientific publications will be made available via open-
access online platforms, unless subject to protection, OR unless release of
all or part of the data to open-access platforms could jeopardise the
project's main objectives.
## 3.3. Making data interoperable
Partners will observe OpenAIRE guidelines for online interoperability,
including the OpenAIRE Guidelines for Literature Repositories, the OpenAIRE
Guidelines for Data Archives, etc.
These guidelines can be found at: _https://guidelines.openaire.eu/en/latest/_
. Partners will also ensure that AMANECO data observes FAIR data principles
under H2020 open-access policy:
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-datamgt_en.pdf_
Information relating to the interoperability of AMANECO datasets has been
collected in Table 1: Data Summary
As the project progresses and data are identified and collected, further
information on making data interoperable will be outlined, if necessary, in
subsequent versions of the DMP. Specifically, this will include information on
data and metadata vocabularies, standards or methodologies followed to
facilitate interoperability, and on whether the project uses standard
vocabularies for all data types to allow interdisciplinary interoperability.
## 3.4. Increase data re-use (through clarifying licenses)
AMANECO is expected to produce a novel data and knowledge through experimental
approaches that will be presented to the scientific community and industry,
through a carefully designed portfolio of dissemination actions. Datasets
uploaded in the ZENODO repository will be freely accessible after an embargo
period determined per dataset if required.
As the project progresses and data is identified and collected, further
information on increasing data re-use will be outlined in subsequent versions
of the DMP. Specifically, this will include information on how data will be
licensed to permit the widest re-use possible, when the data will be made
available for re-use, whether the data produced and/or used in the project is
usable by third parties, and the length of time for which the data will remain
reusable.
# 4\. Allocation of resources
The Data Management will be carried out as part of WP5 and will be handled by
the WP leaders, under the supervision of the Project Coordinator.
Costs related to open-access to research data in Horizon 2020 are eligible for
reimbursement under the conditions defined in the H2020 Grant Agreement, in
particular Article 6 and Article 6.2.D.3, but also other articles relevant for
the cost category chosen. Costs cannot be claimed retrospectively. Project
beneficiaries will be responsible for applying for reimbursement for costs
related to making data accessible to others beyond the consortium.
# 5\. Data security
AMANECO will ensure the safe storage of data in the following ways:
* Use of ZENODO (or similar repository)
* Throughout the project, data are shared and stored in a secured SharePoint hosted by the Project Coordinator, in accordance with its security and confidentiality policy.
* Each beneficiary will keep a back-up of its own generated data
# 6\. Ethical aspects
N/A
# 7\. Other issues
N/A
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1552_RECLAIM_869884.md
|
# Introduction
## Summary
The vision of RECLAIM is to demonstrate technologies and strategies to support
a new paradigm for refurbishment and re-manufacturing of large industrial
equipment in factories, paving the way to a circular economy. Its ultimate
goal is to save valuable resources by reusing equipment instead of discarding
them. RECLAIM will support legacy industrial infrastructures with advanced
technological solutions with built-in capabilities for in-situ repair, self-
assessment and optimal re-use strategies. It will establish new concepts and
strategies for repair and equipment upgrade and factory layouts’ redesign in
order to gain economic benefits to the manufacturing sector.
The technological core of RECLAIM is a novel Decision Support Framework (DSF)
that guides the optimal refurbishment and re-manufacturing of
electromechanical machines and robotics systems.
Over the project period, RECLAIM will generate a large amount of R&D data.
These data come from pilot plants in various branches of the industry: from
direct and indirect sensor signals, theoretical and numerical analyses,
simulations, as well as prototype device testing and validation.
participating in the Open Research Data Pilot in Horizon 2020, RECLAIM will
make its research data **FAIR** , which means they are _F_ indable, _A_
ccessible, _I_ nteroperable and _R_ e-Usable.
## Scope and structure of the deliverable
The present report is the deliverable D1.3 of the project, RECLAIM’s Data
Management Plan (DMP). The DMP’s purpose is, therefore, to provide the main
elements of the data management policy to be used by the Consortium. It
describes:
* **types and formats of data to be generated, collected and processed,**
* **the standards to be applied,**
* **the data-reservation methods,**
* **the data-sharing policies for re-use.**
The present document is the first version of the RECLAIM DMP, containing a
summary of the data sets; i.e., types, formats and sources (WPs and partner
names) and specific conditions to be applied for sharing and reuse. As a
living document, the DMP will be modified and refined through updates as the
project implementation progresses and/or significant changes occur. At
minimum, one more iteration will be submitted, at M42, with the corresponding
updates in the context of the normal course of the project.
The document covers the following topics:
* **General principles for Data Management Plan**
* **Necessary Information for the description of RECLAIM Data sets**
* **Conclusions and remarks**
# General Principles
## Research data types and open access policy of RECLAIM
RECLAIM participates in the Pilot on Open Research Data (ORDP) launched by the
European Commission along with the Horizon 2020 programme. The members of the
consortium embrace the concepts and the principles of open science and
acknowledge the benefits of reusing and evaluating already produced data for
promoting and supporting research and innovation projects at European level.
The data generated during the project activities may be available in open
access for further analysis and exploitation.
The data generated over the project lifetime can be divided into three
categories:
* **Open Data** : Data that are publicly shared for re-use and exploitation
* **Private Data** : Data that are retained by individual partners for their own processes and tests
* **Confidential Data** : Data that are available only to the members of the consortium and the EU Commission services and subject to the project non-disclosure agreement
## IPR
As data is used as a basis for almost all activities within the RECLAIM
project, the handling of IPR (Intellectual Property Rights) related to data is
of high importance. IPR handling is explicitly addressed by Task T8.1
“Management of IPR”. Although this task has only started this month (M6),
first activities are already under way. For example, IPR issues and activities
were presented during the six-month virtual meeting on 24th March 2020.
Within the ongoing activities, IPR management will also take the handling of
RECLAIM data into account. Detailed measures and procedures will be reported
in the updated version of this Data Management Plan.
# Data sets
All Partners in RECLAIM have initially identified the data that will be
produced and/or used in the different WP’s and project activities. Changes
(addition/removal of data sets) and later updates resulting from the progress
of the project are marked accordingly in the next versions of the DMP. The
type of data set and corresponding details are given in the following
sections.
## Data sets overview
The following table provides an overview of the different data sets used and
produced during the RECLAIM project.
<table>
<tr>
<th>
**No. Data set name**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
DS.HWH.01.FRICTION_WELDING_MACHINE
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
DS.HWH.02.MAINTENANCE_DATA
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
DS.FEUP.01.PREDICTIVE_MAINTENANCE
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
DS.FEUP.02.DEGRADATION_DATA_SET
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
DS.FEUP.03.ANOMALY_DETECTION
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
DS.FEUP.04.QUALITY_PREDICTION
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
DS.ASTON.01.REMANUFACTURING_PROCESS
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
DS.ASTON.02.COST_BENCHMARKING_HISTORICAL
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
DS.ZORLUTEKS.01.BLEACHING_MACHINE
</td>
<td>
ZORLUTEKS
</td> </tr>
<tr>
<td>
**10**
</td>
<td>
DS.CERTH.01.DECISION_SUPPORT_FRAMEWORK_OUTPUT
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
DS.CERTH.02.IN_SITU_REPAIR_DATA_ANALYTICS_OUTPUT
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
**12**
</td>
<td>
DS.CERTH.03.AR_MECHANISMS_OUTPUT
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
**13**
</td>
<td>
DS.Gorenje.01.DW_Robot_Cells
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
DS.Gorenje.02.WHITE_ENAMELLING_LINE
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
DS.ADV-CTCR-TECNALIA.01.FORMING_MACHINE_FOR_REAR_PARTS
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
**16**
</td>
<td>
DS.ADV-CTCR-
TECNALIA.02.FORMING_MACHINE_FOR_REAR_PARTS_ROTOSTIR
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
**17**
</td>
<td>
DS.ADV-CTCR-TECNALIA.03.CUTTING_MACHINE
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
**18**
</td>
<td>
DS.FLUCHOS.01.FORMING_MACHINE_FOR_REAR_PARTS
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
**19**
</td>
<td>
DS.FLUCHOS.02.FORMING_MACHINE_FOR_REAR_PARTS_ROTOSTIR
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
**20**
</td>
<td>
DS.FLUCHOS.03.CUTTING_MACHINE
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
**21**
</td>
<td>
DS.SUPSI.01.FailuresHighLevelData_Gorenje
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**22**
</td>
<td>
DS.SUPSI.02.FailuresHighLevelData_FLUCHOS
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**23**
</td>
<td>
DS.SUPSI.03.FailuresHighLevelData_Podium
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**24**
</td>
<td>
DS.SUPSI.04.FailuresHighLevelData_Zorluteks
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**25**
</td>
<td>
DS.SUPSI.05.FailuresHighLevelData_HWH
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**26**
</td>
<td>
DS.SUPSI.06.LCAData_Gorenje
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**27**
</td>
<td>
DS.SUPSI.07.LCAData_FLUCHOS
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**28**
</td>
<td>
DS.SUPSI.08.LCAData_Podium
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**29**
</td>
<td>
DS.SUPSI.09.LCAData_Zorluteks
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
**30**
</td>
<td>
DS.SUPSI.10.LCAData_HWH
</td>
<td>
SUPSI
</td> </tr> </table>
A detailed description of each data set is given in the sections below.
## Harms & Wende
<table>
<tr>
<th>
**DS.HWH.01.FRICTION_WELDING_MACHINE**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during a friction welding process by a
friction welding machine
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Friction welding machine including the different sensors attached
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as serial
number and b) production related information such as production site or charge
ID. Data will properly be documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format, but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so,
</td> </tr>
<tr>
<td>
</td>
<td>
degradation models will be developed based on the data. In addition, the data
will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at HWH during the project duration.
</td> </tr> </table>
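For illustration, an indicative metadata record for this data set could be modelled as below; the field names follow the description in the table above, while the example values are hypothetical.

```python
# Hedged sketch of an indicative metadata record (hypothetical values).
from dataclasses import dataclass, asdict

@dataclass
class WeldingMetadata:
    serial_number: str      # machine-related information
    production_site: str    # production-related information
    charge_id: str

print(asdict(WeldingMetadata("FW-2041", "Hamburg", "CH-0093")))
```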
<table>
<tr>
<th>
**DS.HWH.02.MAINTENANCE_DATA**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is gathered and stored during the maintenance of a friction welding
machine
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The data is gathered by the HWH service & repair department.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include any data related to the repair of a machine.
This includes a) machine data such as machine type, serial number, etc. b)
customer data such as customer name, delivery time, application etc. and c)
data on repair such as repair time, components changed, etc.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in PDF documents, one document per machine/repair task.
Most of the data is text, but pictures might be included. Thus, the data
volume can amount to several megabytes per month.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for root cause analysis and for finding the maintenance
hotspots.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at HWH during the project duration.
</td> </tr> </table>
## University of Porto
<table>
<tr>
<th>
**DS.FEUP.01.PREDICTIVE_MAINTENANCE**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that will be used and produced by the Predictive Maintenance models
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Friction Welding machine and Predictive Maintenance Algorithm for failure
prediction and maintenance action recommendation.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FEUP, HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3, T3.3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Inputs: time-series sensor and process data from components and/or equipment;
maintenance actions (name, duration and components involved); errors;
malfunctions; production schedule.
Output: which component will fail, when it will fail, and a recommendation of
which maintenance actions to perform and when (component; duration;
maintenance action). A sketch of such an output record follows this table.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Since maintenance actions do not occur often, the space required for the
outputs is very low. The historical data (input) might take several GB of space.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The input data will be used for model training and testing of several PM
strategies, as the data output of the model will be used for decision making,
resulting in a database of recommendations and further refinement of the
model.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data will be given to the consortium only, and the inputs
and outputs will be anonymized.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
RECLAIM repository
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data can be stored on the FEUP server or in any available RECLAIM repository
</td> </tr> </table>
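To make the output described above concrete, a minimal sketch of a recommendation record is given below; the names and units are assumptions made here for illustration, not the actual interface of the FEUP models.

```python
# Minimal sketch of the model output described above (component, failure
# time, maintenance action, duration); names and units are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MaintenanceRecommendation:
    component: str               # which component is predicted to fail
    predicted_failure: datetime  # when it is predicted to fail
    action: str                  # recommended maintenance action
    duration_hours: float        # expected duration of the action

recommendation = MaintenanceRecommendation(
    component="spindle_bearing",
    predicted_failure=datetime(2021, 6, 1, 8, 0),
    action="replace bearing",
    duration_hours=2.5,
)
print(recommendation)
```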
<table>
<tr>
<th>
**DS.FEUP.02.DEGRADATION_DATA_SET**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that will be used and produced by the Degradation Model
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Friction Welding and Degradation Algorithm for failure degradation prediction
based on further machine use and current condition.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FEUP, HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP4, T4.2
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Inputs: time-series sensor and process data from components and/or equipment;
time from last repair; amount of time used; current machine conditions
(throughput, machine parameters).
Output: mean time to failure (or a similar KPI) according to the future
parameterization and use of the machine. An illustrative sketch of such an
estimate follows this table.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Since only critical and pertinent degradation predictions will be stored, the
space required is very low. The historical data (input) might take several MB
of space.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The input data will be used for model training and testing, as the data output
of the model will be used for decision making, resulting in a database of
important predictions and further refinement of the model.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data will be given to the consortium only, and the outputs
will be anonymized.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
RECLAIM repository
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data can be stored on the FEUP server or in any available RECLAIM repository
</td> </tr> </table>
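For illustration, a minimal sketch of a degradation estimate is given below: a linear trend is fitted to an assumed health indicator and extrapolated to the point where it crosses a failure threshold. This is a toy example under assumed data, not the T4.2 degradation model itself.

```python
# Minimal sketch of a degradation estimate: fit a linear trend to a health
# indicator and extrapolate when it crosses a failure threshold.
import numpy as np

hours = np.array([0.0, 100.0, 200.0, 300.0, 400.0])  # assumed operating hours
health = np.array([1.00, 0.95, 0.91, 0.86, 0.82])    # assumed health indicator
threshold = 0.5                                      # assumed failure level

slope, intercept = np.polyfit(hours, health, 1)      # health ~ slope*t + b
time_to_threshold = (threshold - intercept) / slope
print(f"Estimated time to failure: {time_to_threshold:.0f} operating hours")
```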
<table>
<tr>
<th>
**DS.FEUP.03.ANOMALY_DETECTION**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that will be used and produced by the Anomaly Detection algorithm
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Friction Welding and Anomaly Detection for observed misbehaviours that might
require attention, raise alarms / notifications, or machine shutdown.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FEUP, HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP3, T3.3, WP4 T4.2
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Inputs: time-series sensor and process data of normal (or abnormal) behaviour
from components and/or equipment.
Output: based on the identified patterns, data is classified as anomalous or
normal. A toy sketch of this classification step follows this table.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Since anomalies may not occur often, the space required will be low (MB).
The historical data (input) might take several MB of space.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The input data will be used for pattern recognition and testing, as the data
output of the model will be used for decision making, as input for more
complex models like Predictive Maintenance or Degradation, and further
refinement of the model.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data will be given to the consortium only, and the outputs
will be anonymized.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
RECLAIM repository
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data can be stored on the FEUP server or in any available RECLAIM repository
</td> </tr> </table>
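As a toy sketch of the classification step, the snippet below labels readings as anomalous when their z-score against a normal-behaviour baseline exceeds k sigma; it is an illustrative stand-in under assumed data, not the actual T3.3/T4.2 algorithm.

```python
# Minimal sketch: flag readings whose z-score against a normal-behaviour
# baseline exceeds k sigma. Illustrative stand-in, not the project algorithm.
import numpy as np

baseline = np.random.default_rng(0).normal(50.0, 2.0, 1000)  # normal behaviour
mean, std = baseline.mean(), baseline.std()

def classify(reading: float, k: float = 3.0) -> str:
    """Label a reading as 'anomaly' or 'normal' with a k-sigma rule."""
    return "anomaly" if abs(reading - mean) > k * std else "normal"

print(classify(51.2))  # expected: normal
print(classify(72.0))  # expected: anomaly
```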
<table>
<tr>
<th>
**DS.FEUP.04.QUALITY_PREDICTION**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that will be used and produced by the Process Quality model for further
process parameter estimation when a new product needs to be produced, or an
existing one needs to be calibrated.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Friction Welding and Process Quality prediction.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FEUP, HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
FEUP
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP4 T4.2
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Inputs: 1) machine parameters; 2) product/process quality; 3) product specs.
Output: quality prediction and recommended parameters based on a quality
target. An illustrative sketch follows this table.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Since calibrations may not occur often, the space required will be low (MB).
The historical data (input) might take several MB of space.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The input data will be used for model training and testing, as the data output
of the model will be used for decision making and further refinement of the
model.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data will be given to the consortium only, and the outputs
will be anonymized.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
RECLAIM repository
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data can be stored on the FEUP server or in any available RECLAIM repository
</td> </tr> </table>
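For illustration, a minimal sketch of the quality-prediction idea follows: a linear relation between one assumed machine parameter and measured quality is learned, then inverted to recommend a parameter for a quality target. The data and the linear form are assumptions; the T4.2 model itself is not specified here.

```python
# Minimal sketch: learn a linear parameter-to-quality relation and invert it
# to recommend a parameter for a quality target. Assumed data throughout.
import numpy as np

pressure = np.array([10.0, 12.0, 14.0, 16.0, 18.0])  # assumed parameter
quality = np.array([0.71, 0.78, 0.84, 0.90, 0.95])   # assumed measured quality

a, b = np.polyfit(pressure, quality, 1)              # quality ~ a*p + b
target = 0.88
print(f"Predicted quality at p=15: {a * 15 + b:.2f}")
print(f"Recommended parameter for target {target}: {(target - b) / a:.1f}")
```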
## Aston University
<table>
<tr>
<th>
**DS.ASTON.01.REMANUFACTURING_PROCESS**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data will be used for estimating the cost of remanufacturing/refurbishment
etc. Endof-Life (EoL) options.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Partners and Remanufacturers’ practice
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
HWH and other remanufacturing practitioners
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
HWH and other remanufacturing practitioners, ASTON
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH, ASTON
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP4, T4.3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The indicative metadata include: 1) remanufacturer name, 2) specific machine,
3) machine conditions, 4) remanufacturing activities, 5) resources required
for activities, 6) time required for activities.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Numerical and text data.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for estimating the cost of each End-of-Life strategy and
process.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data will be given to the consortium only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
RECLAIM repository
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data can be stored on Aston’s server or in the RECLAIM project repository
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ASTON.02.COST_BENCHMARKING_HISTORICAL**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data will be used for estimating the cost of End-of-Life (EoL) options such
as remanufacturing and refurbishment.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Partners and Remanufacturers’ practice, various public resources
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ASTON
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP4, T4.3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
The indicative metadata include: 1) data source, 2) scenario of the data, 3)
time period for which the data are valid.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Numerical and text data.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for estimating the cost of each End-of-Life strategy
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data will be given to the consortium only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
RECLAIM repository
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data can be stored on Aston’s server or in the RECLAIM project repository
</td> </tr> </table>
## Zorluteks
<table>
<tr>
<th>
**DS.ZORLUTEKS.01.BLEACHING_MACHINE**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during a bleaching process by a bleaching
machine
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
The bleaching machine includes different attached sensors, such as temperature
sensors in the washing baths and steamer, liquid level sensors in the washing
baths and the bleaching chemical trough, and a humidity sensor measuring the
humidity of the bleached fabric at the end of the machine. Furthermore, it is
possible to monitor the velocity of the machine and the recipes for different
qualities of feeding fabric through PLC monitoring on the bleaching machine.
The PLC monitoring system also helps determine daily electricity, steam and
water consumption. An online platform collects data from the PLCs; using it,
energy consumption, efficiency and reasons for stops are detailed and analysed
for each machine in the production plant.
</td> </tr> </table>
<table>
<tr>
<th>
Partners activities and responsibilities
</th> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
ZORLUTEKS
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ZORLUTEKS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
TEC, ADV and CTCR
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ZORLUTEKS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Metadata includes production-related and machine-related information.
Production-related information includes, for example, the type of fabric
together with its production amount and production time, obtained from the
SCADA system. Data can be handled transiently. Machine-related information
includes, for example, the serial number.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
Data stored in a proprietary format can be exported to Excel. The volume is
estimated at several megabytes per day, depending on the production volume.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data, the amount of energy
consumption and the efficiency of the machine in order to estimate the
machine’s current state and to predict the machine’s future behaviour. On this
basis, degradation models will be developed from the data.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at Zorluteks during the project duration. Water, steam
and energy consumption, efficiencies and production-related information, such
as the type of fabric with its production amount and production time over a
given period, can be stored in the online platform and the SCADA system.
However, data obtained from the sensors and the recipe information used for
different types of feeding fabric are not stored.
</td> </tr> </table>
## Center for Research and Technology Hellas
<table>
<tr>
<th>
**DS.CERTH.01.DECISION_SUPPORT_FRAMEWORK_OUTPUT**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Based on evaluation metrics to be defined, raw data from T3.1, the output of
data analysis components from T3.2-T3.4, T4.2 and T4.3, as well as lifetime
extension strategies from T4.1, the Decision Support Framework (DSF) will
infer 1) the most suitable remanufacturing/refurbishment strategy, 2) the
preferable timeframe for the implementation of the strategy, 3) the right
components to be
remanufactured/refurbished, and 4) the optimal design alternative. In contrast
with the Optimization Toolkit of T3.4, which performs operational optimization
only on single machines, T4.4 performs operational optimization globally, i.e.
on whole production lines or sets of machines of each pilot use case, also
considering business aspects (financial etc.) based on T4.1 and T4.3.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Decision Support Framework (T4.4)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
* **pilots (as end users)**
* **ICE (as responsible for data storage in RECLAIM Repository)**
* **partners from T3.1 (as responsible for communication)**
* **CERTH (as responsible for integration)**
* **partners from T5.1, T5.5, T7.4 (for meta-analysis)**
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
partners from T5.1, T5.5, T7.4
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ICE, pilots
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
* **data generation: T4.4**
* **data storage: T3.2, T6.3-T6.7**
* **data meta-analysis: T5.1, T5.5, T7.4**
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
* **JSON/CSV/XLSX/TXT format (an illustrative JSON sketch follows this table)**
* **The volume cannot be estimated yet. If it is too high, temporal aggregation may take place.**
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* **The data will be directly visualized by the pilots.**
* **T5.1 will have input from T4.4 and will export the inference of refurbishment and re-manufacturing plan.**
* **The real-time 3D annotation module of the AR Mechanisms (T5.5) will receive proposed actions & parts IDs.**
* **T7.4 will use WP4 outputs, among other data, to develop reliable and robust digital replicas of the physical machines.**
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
This depends on the pilots’ policy.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Same as embargo periods for the DSF input data
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored on the Cloud and/or the respective pilot plant. The
storage duration will depend on the policy of the storage manager.
</td> </tr> </table>
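For illustration, a minimal sketch of one possible DSF output record in the JSON format listed above is given below; all field names and values are assumptions, not a fixed output schema.

```python
# Minimal sketch of a possible DSF output record; field names and values are
# assumptions for illustration only.
import json

dsf_output = {
    "strategy": "refurbishment",            # 1) most suitable strategy
    "timeframe": "2021-Q2",                 # 2) preferable implementation window
    "components": ["gearbox", "conveyor"],  # 3) components to be treated
    "design_alternative": "A2",             # 4) optimal design alternative
}
print(json.dumps(dsf_output, indent=2))
```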
<table>
<tr>
<th>
**DS.CERTH.02.IN_SITU_REPAIR_DATA_ANALYTICS_OUTPUT**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
This data set is the output of the component corresponding to T5.2 and
building block 8. The exact role of it will depend on the pilot needs. In any
case, it will consist of algorithms and visual analytics. One possible option
is that a camera or laser sensor that will be taking 3D data from the product
is installed, and an image processing algorithm (supervised or unsupervised,
depending on the presence or absence of ground truth data respectively) will
be comparing it with the ideal form of the product and based on that will be
inferring (in the supervised case) what action should be taken on the
equipment producing it. If 3D data cannot be acquired, process data from
machinery data collectors may be used as input instead.
</td> </tr> </table>
<table>
<tr>
<th>
Source (e.g. which device?)
</th>
<th>
In-situ repair data analytics for situational awareness (T5.2)
</th> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
* **pilots (as end users)**
* **ICE (as responsible for data storage in RECLAIM Repository)**
* **partners from T3.1 (as responsible for communication)**
* **CERTH (as responsible for integration)**
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
None (no meta-analysis)
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
ICE, pilots
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
* **data generation: T5.2**
* **data storage: T3.2, T6.3-T6.7**
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
* **The format is still unknown.**
* **The volume cannot be estimated yet. If it is too high, temporal aggregation may take place.**
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be directly visualized by the pilots.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
This depends on the pilots’ policy.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Same as embargo periods for the In-Situ Repair Data Analytics Toolkit input
data
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored on the Cloud and/or the respective pilot plant. The
storage duration will depend on the policy of the storage manager.
</td> </tr> </table>
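As a toy sketch of the comparison step described above, the snippet below measures the deviation of simulated scanned 3D points from an ideal surface and flags the part if an assumed tolerance is exceeded; the T5.2 algorithms themselves are not yet specified.

```python
# Minimal sketch: deviation of scanned 3D points from the ideal surface,
# flagged against an assumed tolerance. Toy data only.
import numpy as np

ideal = np.zeros((100, 3))                        # assumed ideal surface points
scan = ideal + np.random.default_rng(1).normal(0.0, 0.05, ideal.shape)

deviation = np.linalg.norm(scan - ideal, axis=1)  # per-point distance
tolerance = 0.25                                  # assumed tolerance (mm)
print("action required" if deviation.max() > tolerance else "within tolerance")
```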
<table>
<tr>
<th>
**DS.CERTH.03.AR_MECHANISMS_OUTPUT**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
AR User Interface, contextual interaction.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
AR Mechanisms (T5.5)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
None (no meta-analysis)
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
CERTH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
T5.5
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
* **JSON format & others**
* **volume cannot be estimated**
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be directly visualized by the pilots.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
This depends on the pilots’ policy.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
AR devices (glasses, tablets etc.) will display the part of the machine that
needs repair and then a sequence of disassembly steps of the engine parts will
be displayed with images and videos.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Same as embargo periods for the AR input data
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored on the Cloud and/or the respective pilot plant. The
storage duration will depend on the policy of the storage manager.
</td> </tr> </table>
## Gorenje
<table>
<tr>
<th>
**DS.Gorenje.01.DW_Robot_Cells**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The DW Robot Cell set for making tubs consists of the A-cell, B-cell, C-cell,
D-cell, E-cell and outer bottom cell.
Data is generated and stored by the robot cell set equipment during different
processes such as spot welding, punching, double bending and seam welding, as
well as other support processes.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
6 robot cells: different robots, punching and welding machines and other
equipment with different sensors and control units.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Gorenje Velenje
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Gorenje Velenje
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
Roboteh, ADV
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Gorenje Velenje
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP2, WP3, WP4
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include:
* **machine-related information such as serial number.**
* **production-related information such as production site, tool/appliance or appliance components.**
Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format (SAP and the PIS (MES) system) but
can usually be exported to Excel. Depending on the machine and the production
volume, the data volume is estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
current state of the machine and product and to predict future behaviour. To
do so, degradation models will be developed based on the data. In addition,
the data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via
eReports or the web.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms (web applications).
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at Gorenje during the project duration or the lifespan
of the appliances.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.Gorenje.02.WHITE ENAMELLING LINE**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
White enamelling line with 3 main processes: spraying booth, furnace and
process parameter traceability. Data is stored by PLC monitoring units of
individual subsystems of the line.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Equipment for the identification of parts (e.g. a camera) is envisaged to
identify different semi-finished products at different locations of the
production process. On-line measurement of the air temperature and relative
humidity at different locations, of the applied thickness of the enamel powder
layer on semi-finished products, and of the speed of the enamel conveyor is
also envisaged.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Gorenje Mora
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Gorenje Mora
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ADV
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Gorenje Mora
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
WP2, WP3, WP4
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and
</td>
<td>
Indicative metadata will include:
</td> </tr>
<tr>
<td>
documentation?
</td>
<td>
* **machine-related information such as serial number.**
* **production-related information such as production quantities of parts and records of enamel thickness**
* **environmental data (temperature, humidity,…)**
Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format (SAP and the PIS (MES) system) but
can usually be exported to Excel. Depending on the production volume, the data
volume is estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data, keeping history.
Environmental data will be used for simulations or process parameters. In
addition, the data can be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
If possible, we prefer no access to our local data.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will not be shared.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
Data will be stored on MORA servers
</td> </tr> </table>
## Advanticsys-Tecnalia-CTCR
<table>
<tr>
<th>
**DS.ADV-CTCR-TECNALIA.01.FORMING_MACHINE_FOR_REAR_PARTS**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during the forming operation of the rear
parts of the shoes.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Rear parts forming machine including the different sensors attached.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ADV, CTCR, TECNALIA, FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several kilobytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so, degradation models will be developed based on the data. In addition, the
data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at FLUCHOS and CTCR during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ADV-CTCR-TECNALIA.02.FORMING_MACHINE_FOR_REAR_PARTS_ROTOSTIR**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during the forming operation of the rear
parts of the shoes in the machine called ROTOSTIR.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Rear parts forming machine including the different sensors attached.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ADV, CTCR, TECNALIA, FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as serial
number and b) production related information such as production site or charge
ID. Data will properly be documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several kilobytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so, degradation models will be developed based on the data. In addition, the
data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at FLUCHOS and CTCR during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.ADV-CTCR-TECNALIA.03.CUTTING_MACHINE**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during the cutting operation of the
components for the upper part of the footwear.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Cutting machine including the different sensors attached.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
ADV, CTCR, TECNALIA, FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
ADV, CTCR, TECNALIA
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several kilobytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so, degradation models will be developed based on the data. In addition, the
data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at FLUCHOS and CTCR during the project duration.
</td> </tr> </table>
## Fluchos
<table>
<tr>
<th>
**DS.FLUCHOS.01.FORMING_MACHINE_FOR_REAR_PARTS**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during the forming operation of the rear
parts of the shoes.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Rear parts forming machine including the different sensors attached.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CTCR, TECNALIA, ADV
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several kilobytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so, degradation models will be developed based on the data. In addition, the
data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at FLUCHOS and CTCR during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.FLUCHOS.02.FORMING_MACHINE_FOR_REAR_PARTS_ROTOSTIR**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during the forming operation of the rear
parts of the shoes in the machine called ROTOSTIR.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Rear parts forming machine including the different sensors attached.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CTCR, TEC, ADV
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several kilobytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so, degradation models will be developed based on the data. In addition, the
data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at FLUCHOS and CTCR during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.FLUCHOS.03.CUTTING_MACHINE**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data that is generated and stored during the cutting operation of the
components for the upper part of the footwear.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Cutting machine including the different sensors attached.
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
CTCR, TEC, ADV
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP3 and WP4.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several kilobytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used for analysing production data in order to estimate the
machine’s current state and to predict the machine’s future behaviour. To do
so, degradation models will be developed based on the data. In addition, the
data will partially be used for visualization.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Prognostic and Health Management (PHM) Toolkit.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored at FLUCHOS and CTCR during the project duration.
</td> </tr> </table>
## Scuola Universitaria Professionale della Svizzera Italiana
### Data sets related to Task 2.5
<table>
<tr>
<th>
**DS.SUPSI.01.FailuresHighLevelData_Gorenje**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Demonstration scenario data related to failure occurrences, labour hours
spent on maintenance, number of breakdowns, operational time, OEE, etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use-cases’ Equipment and machines
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP2 and the analysis will be
carried out within task 2.5.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. Data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
To design, develop and validate a methodology/tool to support companies in
structuring and performing a high-level analysis of the state and life
expectancy of the machines in the company, providing preliminary insight into
the most meaningful approaches to maintenance execution. An illustrative
sketch of the high-level reliability figures involved follows this table.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Reliability Analysis Tool.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
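The exploitation purpose above is a high-level reliability analysis of failure data exported to Excel. The following is a minimal sketch of what such an analysis could look like in Python, assuming a hypothetical export with one row per machine and day; the file name and all column names are illustrative assumptions, not the actual use-case schema.

```python
# Minimal sketch of a high-level failure analysis (Task 2.5), assuming a
# hypothetical Excel export with one row per machine and day. The file name
# and column names are illustrative, not the actual use-case schema.
import pandas as pd

df = pd.read_excel("failures_export.xlsx")

summary = df.groupby("machine_id").agg(
    operational_hours=("operational_hours", "sum"),
    planned_hours=("planned_hours", "sum"),
    breakdowns=("breakdowns", "sum"),
    maintenance_hours=("maintenance_labour_hours", "sum"),
)

# Mean time between failures and availability as first life-expectancy indicators
summary["mtbf_hours"] = summary["operational_hours"] / summary["breakdowns"].clip(lower=1)
summary["availability"] = summary["operational_hours"] / summary["planned_hours"]

# Machines with the lowest MTBF are the first candidates for closer inspection
print(summary.sort_values("mtbf_hours").head(10))
```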
<table>
<tr>
<th>
**DS.SUPSI.02.FailuresHighLevelData_FLUCHOS**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Demonstration scenario data related to failure occurrences, labour hours
spent on maintenance, number of breakdowns, operational time, OEE, etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP2 and the analysis will be
carried out within task 2.5.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. The data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
To design, develop, and validate a methodology/tool that supports companies in
structuring and performing a high-level analysis of the state and life
expectancy of their machines, providing preliminary insight into the most
meaningful approaches to maintenance execution.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Reliability Analysis Tool.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.SUPSI.03.FailuresHighLevelData_Podium**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Demonstration scenario data related to failure occurrences, labour hours
spent on maintenance, number of breakdowns, operational time, OEE, etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Podium
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Podium
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Podium
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP2 and the analysis will be
carried out within task 2.5.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. The data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
To design, develop, and validate a methodology/tool that supports companies in
structuring and performing a high-level analysis of the state and life
expectancy of their machines, providing preliminary insight into the most
meaningful approaches to maintenance execution.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Reliability Analysis Tool.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.SUPSI.04.FailuresHighLevelData_Zorluteks**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data related to failure occurrences, labour hours spent on maintenance, number
of breakdowns, operational time, OEE, etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Zorluteks
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Zorluteks
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Zorluteks
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP2 and the analysis will be
carried out within task 2.5.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. The data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
To design, develop, and validate a methodology/tool that supports companies in
structuring and performing a high-level analysis of the state and life
expectancy of their machines, providing preliminary insight into the most
meaningful approaches to maintenance execution.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Reliability Analysis Tool.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.SUPSI.05.FailuresHighLevelData_HWH**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Demonstration scenario data related to failure occurrences, labour hours
spent on maintenance, number of breakdowns, operational time, OEE, etc.
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP2 and the analysis will be
carried out within task 2.5.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include a) machine-related information such as the
serial number and b) production-related information such as the production
site or charge ID. The data will be properly documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is stored in a proprietary format but can usually be exported to
Excel. Depending on the machine and the production volume, the data volume is
estimated at several megabytes per day.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
To design, develop, and validate a methodology/tool that supports companies in
structuring and performing a high-level analysis of the state and life
expectancy of their machines, providing preliminary insight into the most
meaningful approaches to maintenance execution.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only. Anonymized and consolidated data can be provided to the public via the
RECLAIM Reliability Analysis Tool.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
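The access policy in the tables above foresees that only anonymized and consolidated data reach the public via the RECLAIM Reliability Analysis Tool. The following is a minimal sketch of such a preparation step, under the same assumed schema as in the earlier sketch; the identifier columns and the monthly aggregation level are assumptions, not the project's actual anonymization procedure.

```python
# Minimal sketch of anonymising and consolidating failure records before any
# public release. Column names and the monthly aggregation level are assumptions.
import pandas as pd

raw = pd.read_excel("failures_export.xlsx")

# Drop the direct identifiers named in the metadata rows above (serial number,
# production site, charge ID) ...
public = raw.drop(columns=["serial_number", "production_site", "charge_id"])

# ... and consolidate to monthly totals per machine type, so that individual
# machines or production runs can no longer be singled out.
public["month"] = pd.to_datetime(raw["date"]).dt.to_period("M").astype(str)
consolidated = public.groupby(["machine_type", "month"], as_index=False).agg(
    breakdowns=("breakdowns", "sum"),
    operational_hours=("operational_hours", "sum"),
)
consolidated.to_csv("public_failures_monthly.csv", index=False)
```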
### Data sets related to Task 7.3
<table>
<tr>
<th>
**DS.SUPSI.06.LCAData_Gorenje**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Raw materials, auxiliary materials, other natural resources, energy (in its
various forms), waste, products, co-products, emissions (air, water and soil)
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines, MRP, ERP, energy and waste bills; Ecoinvent
database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Gorenje
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP5 and 7 and the analysis
will be carried out within task 7.3.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include production dates. The data will be properly
documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is exported to Excel. The volume is estimated at several megabytes.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Real-time assessment of sustainability performance and generation of
sustainability-oriented use scenarios
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
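The inventory described above (raw and auxiliary materials, energy, waste, emissions) feeds the sustainability assessment in Task 7.3. The following is a minimal sketch of aggregating such an inventory into a single impact indicator; the amounts and characterisation factors are placeholders, whereas in the project the factors would come from the Ecoinvent database named in the source row.

```python
# Minimal sketch of aggregating a collected inventory into one impact
# indicator. Amounts and CO2-eq factors are hypothetical placeholders; in the
# project the characterisation factors would come from the Ecoinvent database.
inventory_per_year = {      # flow -> amount (assumed units)
    "electricity_kwh": 120_000,
    "steel_kg": 8_500,
    "waste_kg": 2_300,
}

co2e_factor = {             # flow -> kg CO2-eq per unit (hypothetical values)
    "electricity_kwh": 0.35,
    "steel_kg": 1.9,
    "waste_kg": 0.5,
}

total = sum(amount * co2e_factor[flow] for flow, amount in inventory_per_year.items())
print(f"Estimated footprint: {total:,.0f} kg CO2-eq per year")
```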
<table>
<tr>
<th>
**DS.SUPSI.07.LCAData_FLUCHOS**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Raw materials, auxiliary materials, other natural resources, energy (in its
various forms), waste, products, co-products, emissions (air, water and soil)
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines, MRP, ERP, energy and waste bills; Ecoinvent
database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
FLUCHOS
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP5 and 7 and the analysis
will be carried out within task 7.3.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include production dates. The data will be properly
documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is exported to Excel. The volume is estimated at several megabytes.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Real-time assessment of sustainability performance and generation of
sustainability-oriented use scenarios
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.SUPSI.08.LCAData_Podium**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Raw materials, auxiliary materials, other natural resources, energy (in its
various forms), waste, products, co-products, emissions (air, water and soil)
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines, MRP, ERP, energy and waste bills; Ecoinvent
database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Podium
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Podium
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Podium
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP5 and 7 and the analysis
will be carried out within task 7.3.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include production dates. The data will be properly
documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is exported to Excel. The volume is estimated at several megabytes.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Real-time assessment of sustainability performance and generation of
sustainability-oriented use scenarios
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.SUPSI.09.LCAData_Zorluteks**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Raw materials, auxiliary materials, other natural resources, energy (in its
various forms), waste, products, co-products, emissions (air, water and soil)
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines, MRP, ERP, energy and waste bills; Ecoinvent
database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
Zorluteks
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
Zorluteks
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
Zorluteks
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP5 and 7 and the analysis
will be carried out within task 7.3.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include production dates. The data will be properly
documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is exported to Excel. The volume is estimated at several megabytes.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Real-time assessment of sustainability performance and generation of
sustainability-oriented use scenarios
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
<table>
<tr>
<th>
**DS.SUPSI.10.LCAData_HWH**
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Raw materials, auxiliary materials, other natural resources, energy (in its
various forms), waste, products, co-products, emissions (air, water and soil)
</td> </tr>
<tr>
<td>
Source (e.g. which device?)
</td>
<td>
Use cases’ equipment and machines, MRP, ERP, energy and waste bills; Ecoinvent
database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the device
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data collection (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis (if different)
</td>
<td>
SUPSI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage (if different)
</td>
<td>
HWH
</td> </tr>
<tr>
<td>
WPs and tasks
</td>
<td>
The data will be collected within activities of WP5 and 7 and the analysis
will be carried out within task 7.3.
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (Production and storage dates, places) and documentation?
</td>
<td>
Indicative metadata will include production dates. The data will be properly
documented.
</td> </tr>
<tr>
<td>
Standards, Format, Estimated volume of data
</td>
<td>
The data is exported to Excel. The volume is estimated at several megabytes.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Real-time assessment of sustainability performance and generation of
sustainability-oriented use scenarios
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level (Confidential, only for members of
the Consortium and the Commission Services) / Public
</td>
<td>
Full access to the data sets will be given to the members of the consortium
only.
</td> </tr>
<tr>
<td>
Data sharing, re-use and distribution (How?)
</td>
<td>
The data will be shared via the RECLAIM repository and the respective data
communication mechanisms.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup):
where? For how long?
</td>
<td>
The data will be stored in the project repository during the project duration.
</td> </tr> </table>
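Across all data sets above, the metadata rows name the same indicative fields: serial number, production site, charge ID, production dates, storage location, and dissemination level. The following is a minimal sketch of one machine-readable metadata record covering these fields; the record layout and all values are hypothetical placeholders, not the project's actual metadata schema.

```python
# Minimal sketch of a machine-readable metadata record for one data set,
# covering the indicative fields named in the tables above. All values are
# hypothetical placeholders.
import json

record = {
    "dataset_id": "DS.SUPSI.01.FailuresHighLevelData_Gorenje",
    "machine": {"serial_number": "SN-0001"},                 # hypothetical
    "production": {"site": "Site A", "charge_id": "CH-42"},  # hypothetical
    "production_date": "2020-01-31",
    "storage": "RECLAIM project repository, for the project duration",
    "dissemination_level": "Consortium only",
}

with open("DS.SUPSI.01.metadata.json", "w") as fh:
    json.dump(record, fh, indent=2)
```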
# Conclusion
This report describes the first version of the RECLAIM Data Management Plan
and presents the data sets identified up to project month 6. To date, 30 data
sets reported by 9 project partners are available. IPR-related activities with
respect to data management have already started and will be continued within
Task T8.2 "Management of IPR".